Columns: code (string, lengths 75 to 104k), docstring (string, lengths 1 to 46.9k), text (string, lengths 164 to 112k)
def setspan(self, *args): """Sets the span of the span element anew, erases all data inside. Arguments: *args: Instances of :class:`Word`, :class:`Morpheme` or :class:`Phoneme` """ self.data = [] for child in args: self.append(child)
Sets the span of the span element anew, erases all data inside. Arguments: *args: Instances of :class:`Word`, :class:`Morpheme` or :class:`Phoneme`
Below is the instruction that describes the task: ### Input: Sets the span of the span element anew, erases all data inside. Arguments: *args: Instances of :class:`Word`, :class:`Morpheme` or :class:`Phoneme` ### Response: def setspan(self, *args): """Sets the span of the span element anew, erases all data inside. Arguments: *args: Instances of :class:`Word`, :class:`Morpheme` or :class:`Phoneme` """ self.data = [] for child in args: self.append(child)
def delete_all_rows(self): """ Deletes the contents of all rows in the DataFrame. This function is faster than delete_rows() to remove all information, and at the same time it keeps the container lists for the columns and index so if there is another object that references this DataFrame, like a ViewSeries, the reference remains intact. :return: nothing """ del self._index[:] for c in range(len(self._columns)): del self._data[c][:]
Deletes the contents of all rows in the DataFrame. This function is faster than delete_rows() to remove all information, and at the same time it keeps the container lists for the columns and index so if there is another object that references this DataFrame, like a ViewSeries, the reference remains intact. :return: nothing
Below is the instruction that describes the task: ### Input: Deletes the contents of all rows in the DataFrame. This function is faster than delete_rows() to remove all information, and at the same time it keeps the container lists for the columns and index so if there is another object that references this DataFrame, like a ViewSeries, the reference remains intact. :return: nothing ### Response: def delete_all_rows(self): """ Deletes the contents of all rows in the DataFrame. This function is faster than delete_rows() to remove all information, and at the same time it keeps the container lists for the columns and index so if there is another object that references this DataFrame, like a ViewSeries, the reference remains intact. :return: nothing """ del self._index[:] for c in range(len(self._columns)): del self._data[c][:]
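The in-place `del lst[:]` idiom that the row above relies on can be seen in a minimal standalone sketch, with plain lists standing in for the DataFrame's internal containers:

```python
# Clearing a list in place keeps the container object alive, so any external
# reference (like a ViewSeries holding the index list) still sees the same,
# now-empty list. Rebinding the name to a fresh list would break that link.
index = [1, 2, 3]
view = index        # a second reference to the same list
del index[:]        # clear in place, identity preserved
print(view == [] and view is index)  # True

index2 = [1, 2, 3]
view2 = index2
index2 = []         # rebinding: view2 still holds the old data
print(view2)        # [1, 2, 3]
```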
def replace_parameters(context, nb, parameters): # Uma: This is a copy-paste from papermill papermill/execute.py:104 (execute_parameters). # Typically, papermill injects the injected-parameters cell *below* the parameters cell # but we want to *replace* the parameters cell, which is what this function does. '''Assigns parameters into the appropriate place in the input notebook Args: nb (NotebookNode): Executable notebook object parameters (dict): Arbitrary keyword arguments to pass to the notebook parameters. ''' # Copy the nb object to avoid polluting the input nb = copy.deepcopy(nb) # Generate parameter content based on the kernel_name param_content = DagsterTranslator.codify(parameters) # papermill method chooses a translator based on kernel_name and language, # but we just call the DagsterTranslator # translate_parameters(kernel_name, language, parameters) newcell = nbformat.v4.new_code_cell(source=param_content) newcell.metadata['tags'] = ['injected-parameters'] param_cell_index = _find_first_tagged_cell_index(nb, 'parameters') injected_cell_index = _find_first_tagged_cell_index(nb, 'injected-parameters') if injected_cell_index >= 0: # Replace the injected cell with a new version before = nb.cells[:injected_cell_index] after = nb.cells[injected_cell_index + 1 :] check.int_value_param(param_cell_index, -1, 'param_cell_index') # We should have blown away the parameters cell if there is an injected-parameters cell elif param_cell_index >= 0: # Replace the parameter cell with the injected-parameters cell before = nb.cells[:param_cell_index] after = nb.cells[param_cell_index + 1 :] else: # Inject to the top of the notebook, presumably first cell includes dagstermill import context.log.debug( ( 'Warning notebook has no parameters cell, ' 'so first cell must import dagstermill and call dm.register_repo()' ) ) before = nb.cells[:1] after = nb.cells[1:] nb.cells = before + [newcell] + after nb.metadata.papermill['parameters'] = parameters return nb
Assigns parameters into the appropriate place in the input notebook Args: nb (NotebookNode): Executable notebook object parameters (dict): Arbitrary keyword arguments to pass to the notebook parameters.
Below is the instruction that describes the task: ### Input: Assigns parameters into the appropriate place in the input notebook Args: nb (NotebookNode): Executable notebook object parameters (dict): Arbitrary keyword arguments to pass to the notebook parameters. ### Response: def replace_parameters(context, nb, parameters): # Uma: This is a copy-paste from papermill papermill/execute.py:104 (execute_parameters). # Typically, papermill injects the injected-parameters cell *below* the parameters cell # but we want to *replace* the parameters cell, which is what this function does. '''Assigns parameters into the appropriate place in the input notebook Args: nb (NotebookNode): Executable notebook object parameters (dict): Arbitrary keyword arguments to pass to the notebook parameters. ''' # Copy the nb object to avoid polluting the input nb = copy.deepcopy(nb) # Generate parameter content based on the kernel_name param_content = DagsterTranslator.codify(parameters) # papermill method chooses a translator based on kernel_name and language, # but we just call the DagsterTranslator # translate_parameters(kernel_name, language, parameters) newcell = nbformat.v4.new_code_cell(source=param_content) newcell.metadata['tags'] = ['injected-parameters'] param_cell_index = _find_first_tagged_cell_index(nb, 'parameters') injected_cell_index = _find_first_tagged_cell_index(nb, 'injected-parameters') if injected_cell_index >= 0: # Replace the injected cell with a new version before = nb.cells[:injected_cell_index] after = nb.cells[injected_cell_index + 1 :] check.int_value_param(param_cell_index, -1, 'param_cell_index') # We should have blown away the parameters cell if there is an injected-parameters cell elif param_cell_index >= 0: # Replace the parameter cell with the injected-parameters cell before = nb.cells[:param_cell_index] after = nb.cells[param_cell_index + 1 :] else: # Inject to the top of the notebook, presumably first cell includes dagstermill import context.log.debug( ( 'Warning notebook has no parameters cell, ' 'so first cell must import dagstermill and call dm.register_repo()' ) ) before = nb.cells[:1] after = nb.cells[1:] nb.cells = before + [newcell] + after nb.metadata.papermill['parameters'] = parameters return nb
def is_possible_temp(temp: str) -> bool: """ Returns True if all characters are digits or 'M' (for minus) """ for char in temp: if not (char.isdigit() or char == 'M'): return False return True
Returns True if all characters are digits or 'M' (for minus)
Below is the instruction that describes the task: ### Input: Returns True if all characters are digits or 'M' (for minus) ### Response: def is_possible_temp(temp: str) -> bool: """ Returns True if all characters are digits or 'M' (for minus) """ for char in temp: if not (char.isdigit() or char == 'M'): return False return True
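A quick standalone check of the row above (the function is copied verbatim; note that an empty string also returns True, since the loop body never runs):

```python
def is_possible_temp(temp: str) -> bool:
    """ Returns True if all characters are digits or 'M' (for minus) """
    for char in temp:
        if not (char.isdigit() or char == 'M'):
            return False
    return True

print(is_possible_temp("M05"))   # True  ('M' prefix for minus)
print(is_possible_temp("12"))    # True
print(is_possible_temp("1.5"))   # False ('.' is neither a digit nor 'M')
print(is_possible_temp(""))      # True  (vacuously: no character fails)
```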
def send(self, topic, *args, **kwargs): """ Appends the prefix to the topic before sending """ prefix_topic = self.heroku_kafka.prefix_topic(topic) return super(HerokuKafkaProducer, self).send(prefix_topic, *args, **kwargs)
Appends the prefix to the topic before sending
Below is the instruction that describes the task: ### Input: Appends the prefix to the topic before sending ### Response: def send(self, topic, *args, **kwargs): """ Appends the prefix to the topic before sending """ prefix_topic = self.heroku_kafka.prefix_topic(topic) return super(HerokuKafkaProducer, self).send(prefix_topic, *args, **kwargs)
def comp_centroid(data, bounding_box, debug_plot=False, plot_reference=None, logger=None): """Detect objects in a region and return the centroid of the brightest one""" from matplotlib.patches import Ellipse if logger is None: logger = logging.getLogger(__name__) region = bounding_box.slice ref_x = region[1].start ref_y = region[0].start logger.debug('region offset is %s, %s', ref_x, ref_y) subimage = data[region].copy() bkg = sep.Background(subimage) data_sub = subimage - bkg objects = sep.extract(data_sub, 1.5, err=bkg.globalrms) # Select brightest object logger.debug('%d objects found', len(objects)) if len(objects) == 0: # print('No objects') return None iadx = objects['flux'].argmax() # plot background-subtracted image maxflux = objects[iadx] if debug_plot: fig, ax = plt.subplots() m, s = np.mean(data_sub), np.std(data_sub) ax.imshow(data_sub, interpolation='nearest', cmap='gray', vmin=m - s, vmax=m + s, origin='lower', extent=bounding_box.extent) if plot_reference: e = Ellipse(xy=(plot_reference[0], plot_reference[1]), width=6, height=6, angle=0) e.set_facecolor('none') e.set_edgecolor('green') ax.add_artist(e) # plot an ellipse for each object for idx, obj in enumerate(objects): e = Ellipse(xy=(obj['x'] + ref_x, obj['y'] + ref_y), width=6 * obj['a'], height=6 * obj['b'], angle=obj['theta'] * 180. / np.pi) e.set_facecolor('none') if idx == iadx: e.set_edgecolor('blue') else: e.set_edgecolor('red') ax.add_artist(e) return maxflux['x'], maxflux['y'], ax else: return maxflux['x'], maxflux['y']
Detect objects in a region and return the centroid of the brightest one
Below is the instruction that describes the task: ### Input: Detect objects in a region and return the centroid of the brightest one ### Response: def comp_centroid(data, bounding_box, debug_plot=False, plot_reference=None, logger=None): """Detect objects in a region and return the centroid of the brightest one""" from matplotlib.patches import Ellipse if logger is None: logger = logging.getLogger(__name__) region = bounding_box.slice ref_x = region[1].start ref_y = region[0].start logger.debug('region offset is %s, %s', ref_x, ref_y) subimage = data[region].copy() bkg = sep.Background(subimage) data_sub = subimage - bkg objects = sep.extract(data_sub, 1.5, err=bkg.globalrms) # Select brightest object logger.debug('%d objects found', len(objects)) if len(objects) == 0: # print('No objects') return None iadx = objects['flux'].argmax() # plot background-subtracted image maxflux = objects[iadx] if debug_plot: fig, ax = plt.subplots() m, s = np.mean(data_sub), np.std(data_sub) ax.imshow(data_sub, interpolation='nearest', cmap='gray', vmin=m - s, vmax=m + s, origin='lower', extent=bounding_box.extent) if plot_reference: e = Ellipse(xy=(plot_reference[0], plot_reference[1]), width=6, height=6, angle=0) e.set_facecolor('none') e.set_edgecolor('green') ax.add_artist(e) # plot an ellipse for each object for idx, obj in enumerate(objects): e = Ellipse(xy=(obj['x'] + ref_x, obj['y'] + ref_y), width=6 * obj['a'], height=6 * obj['b'], angle=obj['theta'] * 180. / np.pi) e.set_facecolor('none') if idx == iadx: e.set_edgecolor('blue') else: e.set_edgecolor('red') ax.add_artist(e) return maxflux['x'], maxflux['y'], ax else: return maxflux['x'], maxflux['y']
def dbmin_stddev(self, value=None): """ Corresponds to IDD Field `dbmin_stddev` Standard deviation of extreme annual minimum dry-bulb temperature Args: value (float): value for IDD Field `dbmin_stddev` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value """ if value is not None: try: value = float(value) except ValueError: raise ValueError('value {} needs to be of type float ' 'for field `dbmin_stddev`'.format(value)) self._dbmin_stddev = value
Corresponds to IDD Field `dbmin_stddev` Standard deviation of extreme annual minimum dry-bulb temperature Args: value (float): value for IDD Field `dbmin_stddev` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
Below is the instruction that describes the task: ### Input: Corresponds to IDD Field `dbmin_stddev` Standard deviation of extreme annual minimum dry-bulb temperature Args: value (float): value for IDD Field `dbmin_stddev` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value ### Response: def dbmin_stddev(self, value=None): """ Corresponds to IDD Field `dbmin_stddev` Standard deviation of extreme annual minimum dry-bulb temperature Args: value (float): value for IDD Field `dbmin_stddev` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value """ if value is not None: try: value = float(value) except ValueError: raise ValueError('value {} needs to be of type float ' 'for field `dbmin_stddev`'.format(value)) self._dbmin_stddev = value
def trace_method(method): """ Decorator to catch and print the exceptions that happen within async tasks. Note: this should be applied to methods of VSphereCheck only! """ def wrapper(*args, **kwargs): try: method(*args, **kwargs) except Exception: args[0].print_exception("A worker thread crashed:\n" + traceback.format_exc()) return wrapper
Decorator to catch and print the exceptions that happen within async tasks. Note: this should be applied to methods of VSphereCheck only!
Below is the instruction that describes the task: ### Input: Decorator to catch and print the exceptions that happen within async tasks. Note: this should be applied to methods of VSphereCheck only! ### Response: def trace_method(method): """ Decorator to catch and print the exceptions that happen within async tasks. Note: this should be applied to methods of VSphereCheck only! """ def wrapper(*args, **kwargs): try: method(*args, **kwargs) except Exception: args[0].print_exception("A worker thread crashed:\n" + traceback.format_exc()) return wrapper
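The decorator in the row above can be exercised standalone; in this sketch a plain `print` stands in for `print_exception`, which is specific to VSphereCheck:

```python
import traceback

def trace_method(method):
    """Catch and print exceptions from the wrapped function instead of raising."""
    def wrapper(*args, **kwargs):
        try:
            method(*args, **kwargs)
        except Exception:
            # Same pattern as the original: report the traceback, swallow the error
            print("A worker thread crashed:\n" + traceback.format_exc())
    return wrapper

@trace_method
def worker():
    raise RuntimeError("boom")

worker()  # prints the traceback; the exception does not propagate
```

Note that, like the original, the wrapper discards the wrapped function's return value, which is acceptable for fire-and-forget worker tasks.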
def run(self, maxiter = 4, verbose = False): """ Full artillery :-) - Find saturated stars - Run maxiter L.A.Cosmic iterations (stops if no more cosmics are found) Stops if no cosmics are found or if maxiter is reached. """ if self.satlevel > 0 and self.satstars is None: self.findsatstars(verbose=True) print("Starting %i L.A.Cosmic iterations ..." % maxiter) for i in range(1, maxiter+1): print("Iteration %i" % i) iterres = self.lacosmiciteration(verbose=verbose) print("%i cosmic pixels (%i new)" % (iterres["niter"], iterres["nnew"])) #self.clean(mask = iterres["mask"]) # No, we want clean to operate on really clean pixels only ! # Thus we always apply it on the full mask, as lacosmic does : self.clean(verbose=verbose) # But note that for huge cosmics, one might want to revise this. # That's why I added a feature to skip saturated stars ! if iterres["niter"] == 0: break
Full artillery :-) - Find saturated stars - Run maxiter L.A.Cosmic iterations (stops if no more cosmics are found) Stops if no cosmics are found or if maxiter is reached.
Below is the instruction that describes the task: ### Input: Full artillery :-) - Find saturated stars - Run maxiter L.A.Cosmic iterations (stops if no more cosmics are found) Stops if no cosmics are found or if maxiter is reached. ### Response: def run(self, maxiter = 4, verbose = False): """ Full artillery :-) - Find saturated stars - Run maxiter L.A.Cosmic iterations (stops if no more cosmics are found) Stops if no cosmics are found or if maxiter is reached. """ if self.satlevel > 0 and self.satstars is None: self.findsatstars(verbose=True) print("Starting %i L.A.Cosmic iterations ..." % maxiter) for i in range(1, maxiter+1): print("Iteration %i" % i) iterres = self.lacosmiciteration(verbose=verbose) print("%i cosmic pixels (%i new)" % (iterres["niter"], iterres["nnew"])) #self.clean(mask = iterres["mask"]) # No, we want clean to operate on really clean pixels only ! # Thus we always apply it on the full mask, as lacosmic does : self.clean(verbose=verbose) # But note that for huge cosmics, one might want to revise this. # That's why I added a feature to skip saturated stars ! if iterres["niter"] == 0: break
def _update_config_location(self,directory,files=None): """Loads location and applies to all files in given files list (or if none, all files in flickr DB) Google reverse geocoding is used to figure out location lat/long. The file can contain things like: Australia Sydney, Australia Holcomb Campground, California. If image already has Lat/Lon in EXIF, its location will not be updated. FIXME: Could save location lookup results in .file such that other files don't need to do it again """ if not self._connectToFlickr(): print("%s - Couldn't connect to flickr"%(directory)) return False # --- Read location out of file fullfile=os.path.join(directory,LOCATION_FILE) try: location=open(fullfile).readline().strip() except IOError: logger.info('No location information found'); return False logger.debug('Setting location information : %s'%(location)) # ---- Now do reverse geocoding try: results = Geocoder.geocode(location) except Exception: logger.error("Couldn't find lat/lon for %s"%(location)) return False #logger.debug(results.raw) logger.debug('google says location is: %s'%(results[0])) _lat,_lon=results[0].coordinates placename=results[0] # --- Load DB of photos, and update them all with new location db = self._loadDB(directory) for fn in db: # --- If file list provided, skip files not in the list if files and fn not in files: continue logger.debug('Checking %s for location change'%(fn)) exif_lat,exif_lon=pusher_utils.getexif_location(directory,fn) if exif_lat and exif_lon: logger.info("%s [flickr] EXIF GPS found (%f,%f) - skipping" %(fn,exif_lat,exif_lon)) continue else: logger.info("%s - GPS: no position, using location file"%(fn)) logger.info("%s [flickr] Updating loc to %f,%f [%s]" %(fn,_lat,_lon,placename)) pid=db[fn]['photoid'] resp=self.flickr.photos_geo_setLocation(photo_id=pid,lat=_lat,lon=_lon) if resp.attrib['stat']!='ok': logger.error("%s - flickr: geo_setLocation failed with status: %s", fn, resp.attrib['stat']); return False return True
Loads location and applies to all files in given files list (or if none, all files in flickr DB) Google reverse geocoding is used to figure out location lat/long. The file can contain things like: Australia Sydney, Australia Holcomb Campground, California. If image already has Lat/Lon in EXIF, its location will not be updated. FIXME: Could save location lookup results in .file such that other files don't need to do it again
Below is the instruction that describes the task: ### Input: Loads location and applies to all files in given files list (or if none, all files in flickr DB) Google reverse geocoding is used to figure out location lat/long. The file can contain things like: Australia Sydney, Australia Holcomb Campground, California. If image already has Lat/Lon in EXIF, its location will not be updated. FIXME: Could save location lookup results in .file such that other files don't need to do it again ### Response: def _update_config_location(self,directory,files=None): """Loads location and applies to all files in given files list (or if none, all files in flickr DB) Google reverse geocoding is used to figure out location lat/long. The file can contain things like: Australia Sydney, Australia Holcomb Campground, California. If image already has Lat/Lon in EXIF, its location will not be updated. FIXME: Could save location lookup results in .file such that other files don't need to do it again """ if not self._connectToFlickr(): print("%s - Couldn't connect to flickr"%(directory)) return False # --- Read location out of file fullfile=os.path.join(directory,LOCATION_FILE) try: location=open(fullfile).readline().strip() except IOError: logger.info('No location information found'); return False logger.debug('Setting location information : %s'%(location)) # ---- Now do reverse geocoding try: results = Geocoder.geocode(location) except Exception: logger.error("Couldn't find lat/lon for %s"%(location)) return False #logger.debug(results.raw) logger.debug('google says location is: %s'%(results[0])) _lat,_lon=results[0].coordinates placename=results[0] # --- Load DB of photos, and update them all with new location db = self._loadDB(directory) for fn in db: # --- If file list provided, skip files not in the list if files and fn not in files: continue logger.debug('Checking %s for location change'%(fn)) exif_lat,exif_lon=pusher_utils.getexif_location(directory,fn) if exif_lat and exif_lon: logger.info("%s [flickr] EXIF GPS found (%f,%f) - skipping" %(fn,exif_lat,exif_lon)) continue else: logger.info("%s - GPS: no position, using location file"%(fn)) logger.info("%s [flickr] Updating loc to %f,%f [%s]" %(fn,_lat,_lon,placename)) pid=db[fn]['photoid'] resp=self.flickr.photos_geo_setLocation(photo_id=pid,lat=_lat,lon=_lon) if resp.attrib['stat']!='ok': logger.error("%s - flickr: geo_setLocation failed with status: %s", fn, resp.attrib['stat']); return False return True
def reindex(args): """ %prog agpfile assume the component line order is correct, modify coordinates, this is necessary mostly due to manual edits (insert/delete) that disrupt the target coordinates. """ p = OptionParser(reindex.__doc__) p.add_option("--nogaps", default=False, action="store_true", help="Remove all gap lines [default: %default]") p.add_option("--inplace", default=False, action="store_true", help="Replace input file [default: %default]") opts, args = p.parse_args(args) if len(args) != 1: sys.exit(p.print_help()) agpfile, = args inplace = opts.inplace agp = AGP(agpfile, validate=False) pf = agpfile.rsplit(".", 1)[0] newagpfile = pf + ".reindexed.agp" fw = open(newagpfile, "w") agp.transfer_header(fw) for chr, chr_agp in groupby(agp, lambda x: x.object): chr_agp = list(chr_agp) object_beg = 1 for i, b in enumerate(chr_agp): b.object_beg = object_beg b.part_number = i + 1 if opts.nogaps and b.is_gap: continue if b.is_gap: b.object_end = object_beg + b.gap_length - 1 else: b.object_end = object_beg + b.component_span - 1 object_beg = b.object_end + 1 print(str(b), file=fw) # Last step: validate the new agpfile fw.close() agp = AGP(newagpfile, validate=True) if inplace: shutil.move(newagpfile, agpfile) logging.debug("Rename file `{0}` to `{1}`".format(newagpfile, agpfile)) newagpfile = agpfile return newagpfile
%prog agpfile assume the component line order is correct, modify coordinates, this is necessary mostly due to manual edits (insert/delete) that disrupt the target coordinates.
Below is the instruction that describes the task: ### Input: %prog agpfile assume the component line order is correct, modify coordinates, this is necessary mostly due to manual edits (insert/delete) that disrupt the target coordinates. ### Response: def reindex(args): """ %prog agpfile assume the component line order is correct, modify coordinates, this is necessary mostly due to manual edits (insert/delete) that disrupt the target coordinates. """ p = OptionParser(reindex.__doc__) p.add_option("--nogaps", default=False, action="store_true", help="Remove all gap lines [default: %default]") p.add_option("--inplace", default=False, action="store_true", help="Replace input file [default: %default]") opts, args = p.parse_args(args) if len(args) != 1: sys.exit(p.print_help()) agpfile, = args inplace = opts.inplace agp = AGP(agpfile, validate=False) pf = agpfile.rsplit(".", 1)[0] newagpfile = pf + ".reindexed.agp" fw = open(newagpfile, "w") agp.transfer_header(fw) for chr, chr_agp in groupby(agp, lambda x: x.object): chr_agp = list(chr_agp) object_beg = 1 for i, b in enumerate(chr_agp): b.object_beg = object_beg b.part_number = i + 1 if opts.nogaps and b.is_gap: continue if b.is_gap: b.object_end = object_beg + b.gap_length - 1 else: b.object_end = object_beg + b.component_span - 1 object_beg = b.object_end + 1 print(str(b), file=fw) # Last step: validate the new agpfile fw.close() agp = AGP(newagpfile, validate=True) if inplace: shutil.move(newagpfile, agpfile) logging.debug("Rename file `{0}` to `{1}`".format(newagpfile, agpfile)) newagpfile = agpfile return newagpfile
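The coordinate bookkeeping inside the loop above reduces to a simple running sum; a minimal sketch with made-up spans (variable names here mirror the AGP fields but are plain locals, not the AGP class's attributes):

```python
# Each component occupies [object_beg, object_end] where
# object_end = object_beg + span - 1, and the next component starts
# immediately after. Gap lines use gap_length in place of component_span.
spans = [100, 50, 200]
object_beg = 1
coords = []
for span in spans:
    object_end = object_beg + span - 1
    coords.append((object_beg, object_end))
    object_beg = object_end + 1
print(coords)  # [(1, 100), (101, 150), (151, 350)]
```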
def is_json(self): """ Returns: bool: True if `content_type` is `application/json` """ return (self.content_type.startswith('application/json') or re.match(r'application/vnd.go.cd.v(\d+)\+json', self.content_type))
Returns: bool: True if `content_type` is `application/json`
Below is the instruction that describes the task: ### Input: Returns: bool: True if `content_type` is `application/json` ### Response: def is_json(self): """ Returns: bool: True if `content_type` is `application/json` """ return (self.content_type.startswith('application/json') or re.match(r'application/vnd.go.cd.v(\d+)\+json', self.content_type))
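The content-type test in the row above can be reproduced standalone (the regex pattern is copied as-is; its unescaped dots match any character, which is loose but harmless here):

```python
import re

def is_json(content_type: str) -> bool:
    # Matches plain JSON as well as versioned GoCD media types
    # like application/vnd.go.cd.v4+json
    return bool(
        content_type.startswith('application/json')
        or re.match(r'application/vnd.go.cd.v(\d+)\+json', content_type)
    )

print(is_json('application/json; charset=utf-8'))  # True
print(is_json('application/vnd.go.cd.v4+json'))    # True
print(is_json('text/html'))                        # False
```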
def publish_post(self, request, pk): """ Admin view to publish a single post :param request: request :param pk: primary key of the post to publish :return: Redirect to the post itself (if found) or fallback urls """ language = get_language_from_request(request, check_path=True) try: post = Post.objects.get(pk=int(pk)) post.publish = True post.save() return HttpResponseRedirect(post.get_absolute_url(language)) except Exception: try: return HttpResponseRedirect(request.META['HTTP_REFERER']) except KeyError: return HttpResponseRedirect(reverse('djangocms_blog:posts-latest'))
Admin view to publish a single post :param request: request :param pk: primary key of the post to publish :return: Redirect to the post itself (if found) or fallback urls
Below is the instruction that describes the task: ### Input: Admin view to publish a single post :param request: request :param pk: primary key of the post to publish :return: Redirect to the post itself (if found) or fallback urls ### Response: def publish_post(self, request, pk): """ Admin view to publish a single post :param request: request :param pk: primary key of the post to publish :return: Redirect to the post itself (if found) or fallback urls """ language = get_language_from_request(request, check_path=True) try: post = Post.objects.get(pk=int(pk)) post.publish = True post.save() return HttpResponseRedirect(post.get_absolute_url(language)) except Exception: try: return HttpResponseRedirect(request.META['HTTP_REFERER']) except KeyError: return HttpResponseRedirect(reverse('djangocms_blog:posts-latest'))
def visit_Name(self, node): """All assignments to names go through this function.""" if node.ctx == 'store': self.identifiers.declared_locally.add(node.name) elif node.ctx == 'param': self.identifiers.declared_parameter.add(node.name) elif node.ctx == 'load' and not \ self.identifiers.is_declared(node.name): self.identifiers.undeclared.add(node.name)
All assignments to names go through this function.
Below is the instruction that describes the task: ### Input: All assignments to names go through this function. ### Response: def visit_Name(self, node): """All assignments to names go through this function.""" if node.ctx == 'store': self.identifiers.declared_locally.add(node.name) elif node.ctx == 'param': self.identifiers.declared_parameter.add(node.name) elif node.ctx == 'load' and not \ self.identifiers.is_declared(node.name): self.identifiers.undeclared.add(node.name)
def _updateKW(image, filename, exten, skyKW, Value): """update the header with the kw,value""" # Update the value in memory image.header[skyKW] = Value # Now update the value on disk if isinstance(exten,tuple): strexten = '[%s,%s]'%(exten[0],str(exten[1])) else: strexten = '[%s]'%(exten) log.info('Updating keyword %s in %s' % (skyKW, filename + strexten)) fobj = fileutil.openImage(filename, mode='update', memmap=False) fobj[exten].header[skyKW] = (Value, 'Sky value computed by AstroDrizzle') fobj.close()
update the header with the kw,value
Below is the instruction that describes the task: ### Input: update the header with the kw,value ### Response: def _updateKW(image, filename, exten, skyKW, Value): """update the header with the kw,value""" # Update the value in memory image.header[skyKW] = Value # Now update the value on disk if isinstance(exten,tuple): strexten = '[%s,%s]'%(exten[0],str(exten[1])) else: strexten = '[%s]'%(exten) log.info('Updating keyword %s in %s' % (skyKW, filename + strexten)) fobj = fileutil.openImage(filename, mode='update', memmap=False) fobj[exten].header[skyKW] = (Value, 'Sky value computed by AstroDrizzle') fobj.close()
def hook_wrapper(prompt): u'''Wrap a Python readline so it behaves like GNU readline.''' try: # call the Python hook res = ensure_str(readline_hook(prompt)) # make sure it returned the right sort of thing if res and not isinstance(res, str): raise TypeError(u'readline must return a string.') except KeyboardInterrupt: # GNU readline returns 0 on keyboard interrupt return 0 except EOFError: # It returns an empty string on EOF res = u'' except: print(u'Readline internal error', file=sys.stderr) traceback.print_exc() res = u'\n' # we have to make a copy because the caller expects to free the result p = _strdup(res) return p
Wrap a Python readline so it behaves like GNU readline.
Below is the instruction that describes the task: ### Input: Wrap a Python readline so it behaves like GNU readline. ### Response: def hook_wrapper(prompt): u'''Wrap a Python readline so it behaves like GNU readline.''' try: # call the Python hook res = ensure_str(readline_hook(prompt)) # make sure it returned the right sort of thing if res and not isinstance(res, str): raise TypeError(u'readline must return a string.') except KeyboardInterrupt: # GNU readline returns 0 on keyboard interrupt return 0 except EOFError: # It returns an empty string on EOF res = u'' except: print(u'Readline internal error', file=sys.stderr) traceback.print_exc() res = u'\n' # we have to make a copy because the caller expects to free the result p = _strdup(res) return p
def read_struct_file(struct_data): """Interpret a struct file defining the location of variables in memory. Parameters ---------- struct_data : :py:class:`bytes` String of :py:class:`bytes` containing data to interpret as the struct definition. Returns ------- {struct_name: :py:class:`~.Struct`} A dictionary mapping the struct name to a :py:class:`~.Struct` instance. **Note:** the struct name will be a string of bytes, e.g., `b"vcpu"`. """ # Holders for all structs structs = dict() # Holders for the current struct name = None # Iterate over every line in the file for i, l in enumerate(struct_data.splitlines()): # Empty the line of comments, if the line is empty then skip to the # next line. Split on whitespace to get the tokens. tokens = re_comment.sub(b"", l).strip().split() if len(tokens) == 0: continue elif len(tokens) == 3: # 3 tokens implies header data (key, _, value) = tokens if key == b"name": if name is not None: if structs[name].size is None: raise ValueError( "size value missing for struct '{}'".format(name)) if structs[name].base is None: raise ValueError( "base value missing for struct '{}'".format(name)) name = value structs[name] = Struct(name) elif key == b"size": structs[name].size = num(value) elif key == b"base": structs[name].base = num(value) else: raise ValueError(key) elif len(tokens) == 5: # 5 tokens implies entry in struct. (field, pack, offset, printf, default) = tokens # Convert the packing character from Perl to Python standard num_pack = re_numbered_pack.match(pack) if num_pack is not None: pack = (num_pack.group("num") + perl_to_python_packs[num_pack.group("char")]) else: pack = perl_to_python_packs[pack] # If the field is an array then extract the length length = 1 field_exp = re_array_field.match(field) if field_exp is not None: field = field_exp.group("field") length = num(field_exp.group("length")) structs[name][field] = StructField(pack, num(offset), printf, num(default), length) else: raise ValueError( "line {}: Invalid syntax in struct file".format(i)) # Final check for setting size and base if structs[name].size is None: raise ValueError( "size value missing for struct '{}'".format(name)) if structs[name].base is None: raise ValueError( "base value missing for struct '{}'".format(name)) return structs
def lstsq(a, b, rcond=None, weighted=False, extrainfo=False):
    """ Least-squares solution ``x`` to ``a @ x = b`` for |GVar|\s.

    Here ``x`` is defined to be the solution that minimizes
    ``||b - a @ x||``. If ``b`` has a covariance matrix, another option
    is to weight the norm with the inverse covariance matrix: i.e.,
    minimize ``|| isig @ b - isig @ a @ x||`` where ``isig`` is the
    square root of the inverse of ``b``'s covariance matrix. Set parameter
    ``weighted=True`` to obtain the weighted-least-squares solution.

    Args:
        a : Matrix/array of shape ``(M,N)`` containing numbers and/or
            |GVar|\s.
        b : Vector/array of shape ``(M,)`` containing numbers and/or
            |GVar|\s.
        rcond (float): Cutoff for singular values of ``a``. Singular
            values smaller than ``rcond`` times the maximum eigenvalue
            are ignored. Default (``rcond=None``) is ``max(M,N)`` times
            machine precision.
        weighted (bool): If ``True``, use weighted least squares;
            otherwise use unweighted least squares.
        extrainfo (bool): If ``False`` (default) only ``x`` is returned;
            otherwise ``(x, residual, rank, s)`` is returned.

    Returns:
        Array ``x`` of shape ``(N,)`` that minimizes ``|| b - a @ x||``
        if ``extrainfo==False`` (default); otherwise returns a tuple
        ``(x, residual, rank, s)`` where ``residual`` is the sum of the
        squares of ``b - a @ x``, ``rank`` is the rank of matrix ``a``,
        and ``s`` is an array containing the singular values.
    """
    a = numpy.asarray(a)
    b = numpy.asarray(b)
    if a.ndim != 2:
        raise ValueError(
            'a must have dimension 2: actual shape = ' + str(a.shape)
            )
    if a.shape[0] != b.shape[0]:
        raise ValueError(
            'a and b shapes mismatched: {} vs {}'.format(a.shape, b.shape)
            )
    if rcond is None:
        rcond = numpy.finfo(float).eps * max(a.shape)
    if weighted:
        try:
            cov = gvar.evalcov(b)
        except ValueError:
            raise ValueError('b does not have a covariance matrix')
        try:
            icov = numpy.linalg.inv(cov)
        except numpy.linalg.LinAlgError:
            raise ValueError("b's covariance matrix cannot be inverted")
        ata = a.T.dot(icov.dot(a))
        atb = a.T.dot(icov.dot(b))
    else:
        ata = a.T.dot(a)
        atb = a.T.dot(b)
    val, vec = gvar.linalg.eigh(ata)
    maxval = numpy.max(gvar.mean(val))   # N.B. val > 0 required
    ans = 0
    for i in range(len(val)):
        if gvar.mean(val[i]) < rcond * maxval:
            continue
        ans += vec[:, i] * vec[:, i].dot(atb) / val[i]
    if not extrainfo:
        return ans
    val = val[val >= rcond * maxval] ** 0.5
    d = a.dot(ans) - b
    residual = d.dot(icov.dot(d)) if weighted else d.dot(d)
    k = len(val)
    return ans, residual, k, val
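The unweighted branch above builds the normal equations `(a.T @ a) x = a.T @ b` and solves them through an eigendecomposition so that |GVar| uncertainties propagate. A plain-float sketch of the same normal-equations idea, with gvar/numpy and the `rcond` cutoff deliberately left out (so it assumes `a.T @ a` is invertible), and Gaussian elimination swapped in for the eigendecomposition:

```python
def lstsq_normal(a, b):
    """Unweighted least squares via the normal equations (A^T A) x = A^T b.

    Plain-float sketch of the unweighted branch of lstsq above; no
    rcond cutoff, so A^T A must be invertible.
    """
    m, n = len(a), len(a[0])
    # Build A^T A and A^T b explicitly.
    ata = [[sum(a[k][i] * a[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    atb = [sum(a[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Gauss-Jordan elimination with partial pivoting on the augmented system.
    aug = [row + [rhs] for row, rhs in zip(ata, atb)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col] / aug[col][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]


# Fit y = c0 + c1*x through (0,1), (1,3), (2,5); the exact answer is
# c0 = 1, c1 = 2 because the points lie on a line.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
y = [1.0, 3.0, 5.0]
c0, c1 = lstsq_normal(A, y)
```

The real function prefers the eigendecomposition because it lets small eigenvalues be filtered by `rcond`, which the normal-equations shortcut here cannot do.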
def _getModelCheckpointFilePath(checkpointDir):
    """ Return the absolute path of the model's checkpoint file.

    :param checkpointDir: (string)
           Directory of where the experiment is to be or was saved
    :returns: (string) An absolute path.
    """
    path = os.path.join(checkpointDir, "model.data")
    path = os.path.abspath(path)
    return path
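The construction is just a join plus `abspath`. A minimal standalone version (the `model.data` file name is copied from the function above; the directory name is only illustrative):

```python
import os.path


def model_checkpoint_path(checkpoint_dir):
    # Same two-step construction as _getModelCheckpointFilePath:
    # join the directory with the fixed file name, then absolutize.
    return os.path.abspath(os.path.join(checkpoint_dir, "model.data"))


p = model_checkpoint_path("experiments/run1")
```

Because `abspath` resolves against the current working directory, the returned path differs by machine, but it always ends in `experiments/run1/model.data`.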
def unload(self, refobj):
    """Unload the given refobject

    Unload in this case means that the reference stays in the scene
    but it is not in a loaded state. So there is a reference, but data
    is not read from it.

    This will call :meth:`ReftypeInterface.unload`.

    :param refobj: the refobject
    :type refobj: refobj
    :returns: None
    :rtype: None
    :raises: None
    """
    inter = self.get_typ_interface(self.get_typ(refobj))
    ref = self.get_reference(refobj)
    inter.unload(refobj, ref)
def duration(self):
    """
    This read-only attribute specifies the server-side duration of a
    query in milliseconds.
    """
    if self._closed or \
            not self._result or \
            "duration" not in self._result:
        return -1
    return self._result.get("duration", 0)
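The guard pattern above (closed cursor, missing result, or missing key all map to `-1`) can be exercised with a minimal self-contained class; `QueryResult` here is a hypothetical stand-in for the real cursor object:

```python
class QueryResult:
    """Minimal sketch of the duration property contract above: return -1
    when the cursor is closed or no result/duration is available,
    otherwise the server-side duration in milliseconds."""

    def __init__(self, result=None, closed=False):
        self._result = result
        self._closed = closed

    @property
    def duration(self):
        # Same three-way guard as the original property.
        if self._closed or not self._result or "duration" not in self._result:
            return -1
        return self._result.get("duration", 0)
```

Note that an empty result dict is falsy, so `QueryResult({})` also reports `-1`, matching the `not self._result` check.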
def add_instance(self, name, properties):
    # type: (str, dict) -> None
    """
    Stores the description of a component instance. The given properties
    are stored as is.

    :param name: Instance name
    :param properties: Instance properties
    :raise NameError: Already known instance name
    """
    if name in self.__instances:
        raise NameError(name)

    # Store properties "as-is"
    self.__instances[name] = properties
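A minimal sketch of the same contract (store properties as-is, reject duplicate names with `NameError`); `InstanceRegistry` and its `get` accessor are hypothetical stand-ins for the surrounding class, which is not shown in this excerpt:

```python
class InstanceRegistry:
    """Sketch of the add_instance contract above."""

    def __init__(self):
        self._instances = {}

    def add_instance(self, name, properties):
        # Duplicate names are rejected before anything is stored.
        if name in self._instances:
            raise NameError(name)
        # Stored "as-is": no copy, no validation.
        self._instances[name] = properties

    def get(self, name):
        return self._instances[name]


reg = InstanceRegistry()
reg.add_instance("comp-A", {"rank": 1})
```

Because properties are stored without copying, later mutation of the caller's dict is visible through the registry, which is the "as-is" behavior the docstring promises.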
def reminders_add(self, *, text: str, time: str, **kwargs) -> SlackResponse:
    """Creates a reminder.

    Args:
        text (str): The content of the reminder. e.g. 'eat a banana'
        time (str): When this reminder should happen:
            the Unix timestamp (up to five years from now e.g. '1602288000'),
            the number of seconds until the reminder (if within 24 hours),
            or a natural language description
            (Ex. 'in 15 minutes' or 'every Thursday')
    """
    self._validate_xoxp_token()
    kwargs.update({"text": text, "time": time})
    return self.api_call("reminders.add", json=kwargs)
def get_max_item(self):
    """
    Get the current maximum item number

    :return: The current maximum item number.
    """
    suburl = "v0/maxitem.json"
    try:
        max_item = self._make_request(suburl)
    except requests.HTTPError as e:
        hn_logger.exception(
            'Faulted on get max item, with status {}'.format(e.errno))
        raise e
    return max_item
def get_hardware_info(self):
    """
    Returns the extended hardware information of a device. With
    multi-channel USB-CANmoduls the information for both CAN channels
    are returned separately.

    :return:
        Tuple with extended hardware information structure (see structure
        :class:`HardwareInfoEx`) and structures with information of CAN
        channel 0 and 1 (see structure :class:`ChannelInfo`).
    :rtype: tuple(HardwareInfoEx, ChannelInfo, ChannelInfo)
    """
    hw_info_ex = HardwareInfoEx()
    can_info_ch0, can_info_ch1 = ChannelInfo(), ChannelInfo()
    UcanGetHardwareInfoEx2(self._handle, byref(hw_info_ex),
                           byref(can_info_ch0), byref(can_info_ch1))
    return hw_info_ex, can_info_ch0, can_info_ch1
def dissolve(inlist):
    """
    list and tuple flattening

    Parameters
    ----------
    inlist: list
        the list with sub-lists or tuples to be flattened

    Returns
    -------
    list
        the flattened result

    Examples
    --------
    >>> dissolve([[1, 2], [3, 4]])
    [1, 2, 3, 4]
    >>> dissolve([(1, 2, (3, 4)), [5, (6, 7)]])
    [1, 2, 3, 4, 5, 6, 7]
    """
    out = []
    for i in inlist:
        i = list(i) if isinstance(i, tuple) else i
        out.extend(dissolve(i)) if isinstance(i, list) else out.append(i)
    return out
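Because `dissolve` recurses once per nesting level, very deeply nested input can hit Python's recursion limit. An iterative variant with an explicit stack of iterators produces the same output order; this is an alternative sketch, not the library's code:

```python
def dissolve_iter(inlist):
    # Iterative flattening: keep a stack of iterators instead of
    # recursing, so nesting depth is bounded only by memory.
    out, stack = [], [iter(inlist)]
    while stack:
        try:
            i = next(stack[-1])
        except StopIteration:
            stack.pop()
            continue
        if isinstance(i, (list, tuple)):
            # Descend into the nested container.
            stack.append(iter(i))
        else:
            out.append(i)
    return out
```

On the docstring's own examples this reproduces the recursive results, e.g. `dissolve_iter([(1, 2, (3, 4)), [5, (6, 7)]])` gives `[1, 2, 3, 4, 5, 6, 7]`.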
def agi_code_check(code=None, response=None, line=None):
    """
    Check the AGI code and return a dict to help on error handling.
    """
    code = int(code)
    response = response or ""
    result = {'status_code': code, 'result': ('', ''), 'msg': ''}
    if code == 100:
        result['msg'] = line
    elif code == 200:
        for key, value, data in re_kv.findall(response):
            result[key] = (value, data)
            # If user hangs up... we get 'hangup' in the data
            if data == 'hangup':
                return {
                    'error': 'AGIResultHangup',
                    'msg': 'User hungup during execution'}
            elif key == 'result' and value == '-1':
                return {
                    'error': 'AGIAppError',
                    'msg': 'Error executing application, or hangup'}
    elif code == 510:
        result['error'] = 'AGIInvalidCommand'
    elif code == 520:
        # AGI Usage error
        result['error'] = 'AGIUsageError'
        result['msg'] = line
    else:
        # Unhandled code or undefined response
        result['error'] = 'AGIUnknownError'
        result['msg'] = line
    return result
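The status-code dispatch can be shown without the 200-path key/value parsing, which depends on the module's `re_kv` regex and is not shown in this excerpt. A hedged sketch of just the classification:

```python
def classify_agi_status(code):
    # Mirrors the branch structure of agi_code_check above: 100 is an
    # in-progress reply, 200 a success (whose body still needs parsing),
    # 510/520 are protocol errors, anything else is unknown.
    if code == 100:
        return "trying"
    if code == 200:
        return "success"
    if code == 510:
        return "AGIInvalidCommand"
    if code == 520:
        return "AGIUsageError"
    return "AGIUnknownError"
```

The labels for 510/520 are taken from the error names the function stores; "trying" and "success" are descriptive labels added here, not names from the source.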
def protorpc_to_endpoints_error(self, status, body):
    """Convert a ProtoRPC error to the format expected by Google Endpoints.

    If the body does not contain a ProtoRPC message in state
    APPLICATION_ERROR the status and body will be returned unchanged.

    Args:
      status: HTTP status of the response from the backend
      body: JSON-encoded error in format expected by Endpoints frontend.

    Returns:
      Tuple of (http status, body)
    """
    try:
        rpc_error = self.__PROTOJSON.decode_message(remote.RpcStatus, body)
    except (ValueError, messages.ValidationError):
        rpc_error = remote.RpcStatus()

    if rpc_error.state == remote.RpcStatus.State.APPLICATION_ERROR:
        # Try to map to HTTP error code.
        error_class = _ERROR_NAME_MAP.get(rpc_error.error_name)
        if error_class:
            status, body = self.__write_error(error_class.http_status,
                                              rpc_error.error_message)
    return status, body
def on_before_publish_insert_request_id_header(headers, **kwargs):
    """
    This function is meant to be used as signal processor for
    "before_task_publish".

    :param Dict headers: The headers of the message
    :param kwargs: Any extra keyword arguments
    """
    if _CELERY_X_HEADER not in headers:
        request_id = current_request_id()
        headers[_CELERY_X_HEADER] = request_id
        logger.debug(
            "Forwarding request_id '{}' to the task consumer.".format(
                request_id))
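The handler's only job is a conditional header write. A standalone sketch follows; the header key `x_request_id` and the injectable `current_request_id` callable are assumptions made here for illustration, since the real `_CELERY_X_HEADER` constant and `current_request_id` function live elsewhere in the module:

```python
import uuid

# Hypothetical header key; the real constant is defined elsewhere.
_CELERY_X_HEADER = "x_request_id"


def insert_request_id_header(headers,
                             current_request_id=lambda: uuid.uuid4().hex,
                             **kwargs):
    # Mirror the signal handler: write the header only when the
    # publisher has not already forwarded a request id.
    if _CELERY_X_HEADER not in headers:
        headers[_CELERY_X_HEADER] = current_request_id()
    return headers


h = insert_request_id_header({}, current_request_id=lambda: "req-123")
```

The `not in` guard matters: a request id forwarded by an upstream publisher is never overwritten, so the id stays stable across a chain of tasks.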
def make_eventlogitem_log(log, condition='is', negate=False,
                          preserve_case=False):
    """
    Create a node for EventLogItem/log

    :return: A IndicatorItem represented as an Element node
    """
    document = 'EventLogItem'
    search = 'EventLogItem/log'
    content_type = 'string'
    content = log

    ii_node = ioc_api.make_indicatoritem_node(condition, document, search,
                                              content_type, content,
                                              negate=negate,
                                              preserve_case=preserve_case)
    return ii_node
def assemble_common_meta(common_meta_dfs, fields_to_remove, sources,
                         remove_all_metadata_fields, error_report_file):
    """ Assemble the common metadata dfs together. Both indices are sorted.
    Fields that are not in all the dfs are dropped.

    Args:
        common_meta_dfs (list of pandas dfs)
        fields_to_remove (list of strings): fields to be removed from the
            common metadata because they don't agree across files

    Returns:
        all_meta_df_sorted (pandas df)
    """
    all_meta_df, all_meta_df_with_dups = build_common_all_meta_df(
        common_meta_dfs, fields_to_remove, remove_all_metadata_fields)

    if not all_meta_df.index.is_unique:
        all_report_df = build_mismatched_common_meta_report(
            [x.shape for x in common_meta_dfs],
            sources, all_meta_df, all_meta_df_with_dups)
        unique_duplicate_ids = all_report_df.index.unique()

        if error_report_file is not None:
            all_report_df.to_csv(error_report_file, sep="\t")

        msg = """There are inconsistencies in common_metadata_df between
        different files. Try excluding metadata fields using the
        fields_to_remove argument.

        unique_duplicate_ids: {}
        all_report_df: {}""".format(unique_duplicate_ids, all_report_df)
        raise MismatchCommonMetadataConcatException(msg)

    # Finally, sort the index
    all_meta_df_sorted = all_meta_df.sort_index(axis=0)

    return all_meta_df_sorted
def get_taglist(node):
    """
    Return a list of tags (with NRML namespace removed) representing the
    order of the nodes within a node
    """
    return [re.sub(r'\{[^}]*\}', "", copy(subnode.tag))
            for subnode in node.nodes]
Return a list of tags (with NRML namespace removed) representing the order of the nodes within a node
Below is the instruction that describes the task:
### Input:
Return a list of tags (with NRML namespace removed) representing the order of the nodes within a node
### Response:
def get_taglist(node):
    """
    Return a list of tags (with NRML namespace removed) representing the
    order of the nodes within a node
    """
    return [re.sub(r'\{[^}]*\}', "", copy(subnode.tag))
            for subnode in node.nodes]
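The namespace-stripping regex used by get_taglist can be exercised on its own. A minimal sketch follows; the NRML namespace URI shown is illustrative, not taken from the entry above.

```python
import re
from copy import copy

def strip_namespace(tag):
    # Remove a leading '{namespace}' qualifier, as get_taglist does per subnode tag
    return re.sub(r'\{[^}]*\}', "", copy(tag))

print(strip_namespace('{http://openquake.org/xmlns/nrml/0.4}sourceModel'))
# sourceModel
print(strip_namespace('plainTag'))
# plainTag
```

A tag without a `{…}` qualifier passes through unchanged, so the helper is safe to apply uniformly.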
def run(users, hosts, func, **kwargs):
    """
    Convenience function that creates an Exscript.Queue instance, adds
    the given accounts, and calls Queue.run() with the given hosts and
    function as an argument.

    If you also want to pass arguments to the given function, you may use
    util.decorator.bind() like this::

        def my_callback(job, host, conn, my_arg, **kwargs):
            print(my_arg, kwargs.get('foo'))

        run(account, host, bind(my_callback, 'hello', foo = 'world'), max_threads = 10)

    :type users: Account|list[Account]
    :param users: The account(s) to use for logging in.
    :type hosts: Host|list[Host]
    :param hosts: A list of Host objects.
    :type func: function
    :param func: The callback function.
    :type kwargs: dict
    :param kwargs: Passed to the Exscript.Queue constructor.
    """
    attempts = kwargs.get("attempts", 1)
    if "attempts" in kwargs:
        del kwargs["attempts"]
    queue = Queue(**kwargs)
    queue.add_account(users)
    queue.run(hosts, func, attempts)
    queue.destroy()
Convenience function that creates an Exscript.Queue instance, adds the given accounts, and calls Queue.run() with the given hosts and function as an argument. If you also want to pass arguments to the given function, you may use util.decorator.bind() like this:: def my_callback(job, host, conn, my_arg, **kwargs): print(my_arg, kwargs.get('foo')) run(account, host, bind(my_callback, 'hello', foo = 'world'), max_threads = 10) :type users: Account|list[Account] :param users: The account(s) to use for logging in. :type hosts: Host|list[Host] :param hosts: A list of Host objects. :type func: function :param func: The callback function. :type kwargs: dict :param kwargs: Passed to the Exscript.Queue constructor.
Below is the instruction that describes the task:
### Input:
Convenience function that creates an Exscript.Queue instance, adds the given accounts, and calls Queue.run() with the given hosts and function as an argument. If you also want to pass arguments to the given function, you may use util.decorator.bind() like this:: def my_callback(job, host, conn, my_arg, **kwargs): print(my_arg, kwargs.get('foo')) run(account, host, bind(my_callback, 'hello', foo = 'world'), max_threads = 10) :type users: Account|list[Account] :param users: The account(s) to use for logging in. :type hosts: Host|list[Host] :param hosts: A list of Host objects. :type func: function :param func: The callback function. :type kwargs: dict :param kwargs: Passed to the Exscript.Queue constructor.
### Response:
def run(users, hosts, func, **kwargs):
    """
    Convenience function that creates an Exscript.Queue instance, adds
    the given accounts, and calls Queue.run() with the given hosts and
    function as an argument.

    If you also want to pass arguments to the given function, you may use
    util.decorator.bind() like this::

        def my_callback(job, host, conn, my_arg, **kwargs):
            print(my_arg, kwargs.get('foo'))

        run(account, host, bind(my_callback, 'hello', foo = 'world'), max_threads = 10)

    :type users: Account|list[Account]
    :param users: The account(s) to use for logging in.
    :type hosts: Host|list[Host]
    :param hosts: A list of Host objects.
    :type func: function
    :param func: The callback function.
    :type kwargs: dict
    :param kwargs: Passed to the Exscript.Queue constructor.
    """
    attempts = kwargs.get("attempts", 1)
    if "attempts" in kwargs:
        del kwargs["attempts"]
    queue = Queue(**kwargs)
    queue.add_account(users)
    queue.run(hosts, func, attempts)
    queue.destroy()
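The util.decorator.bind() pattern referenced in the docstring above pre-binds extra arguments onto a callback before the queue invokes it. The standard-library `functools.partial` gives the same late-binding effect; this is a generic sketch, not Exscript's actual bind implementation.

```python
from functools import partial

def my_callback(job, host, conn, my_arg, **kwargs):
    # The queue supplies job/host/conn; my_arg and foo were bound up front
    return (my_arg, kwargs.get('foo'))

# Equivalent in spirit to bind(my_callback, 'hello', foo='world')
bound = partial(my_callback, my_arg='hello', foo='world')
print(bound('job', 'host', 'conn'))
# ('hello', 'world')
```

Binding keeps the callback signature the queue expects (job, host, conn) while still letting callers thread their own data through.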
def add_transcription(self, gene: Gene, rna: Union[Rna, MicroRna]) -> str:
    """Add a transcription relation from a gene to an RNA or miRNA node.

    :param gene: A gene node
    :param rna: An RNA or microRNA node
    """
    return self.add_unqualified_edge(gene, rna, TRANSCRIBED_TO)
Add a transcription relation from a gene to an RNA or miRNA node. :param gene: A gene node :param rna: An RNA or microRNA node
Below is the instruction that describes the task:
### Input:
Add a transcription relation from a gene to an RNA or miRNA node. :param gene: A gene node :param rna: An RNA or microRNA node
### Response:
def add_transcription(self, gene: Gene, rna: Union[Rna, MicroRna]) -> str:
    """Add a transcription relation from a gene to an RNA or miRNA node.

    :param gene: A gene node
    :param rna: An RNA or microRNA node
    """
    return self.add_unqualified_edge(gene, rna, TRANSCRIBED_TO)
def data(self):
    """
    Returns raw data representation of the document or document segment.
    Mappings are rendered as ordered dicts, sequences as lists and scalar
    values as whatever the validator returns (int, string, etc.). If no
    validators are used, scalar values are always returned as strings.
    """
    if isinstance(self._value, CommentedMap):
        mapping = OrderedDict()
        for key, value in self._value.items():
            mapping[key.data] = value.data
        return mapping
    elif isinstance(self._value, CommentedSeq):
        return [item.data for item in self._value]
    else:
        return self._value
Returns raw data representation of the document or document segment. Mappings are rendered as ordered dicts, sequences as lists and scalar values as whatever the validator returns (int, string, etc.). If no validators are used, scalar values are always returned as strings.
Below is the instruction that describes the task:
### Input:
Returns raw data representation of the document or document segment. Mappings are rendered as ordered dicts, sequences as lists and scalar values as whatever the validator returns (int, string, etc.). If no validators are used, scalar values are always returned as strings.
### Response:
def data(self):
    """
    Returns raw data representation of the document or document segment.
    Mappings are rendered as ordered dicts, sequences as lists and scalar
    values as whatever the validator returns (int, string, etc.). If no
    validators are used, scalar values are always returned as strings.
    """
    if isinstance(self._value, CommentedMap):
        mapping = OrderedDict()
        for key, value in self._value.items():
            mapping[key.data] = value.data
        return mapping
    elif isinstance(self._value, CommentedSeq):
        return [item.data for item in self._value]
    else:
        return self._value
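The recursive `.data` pattern above converts a parse tree into plain containers by letting every node delegate to its children. The sketch below reproduces that shape with plain `dict`/`list` stand-ins for ruamel's CommentedMap/CommentedSeq; the `Node` class is hypothetical, not the library's own type.

```python
from collections import OrderedDict

class Node:
    """Minimal stand-in for a YAML chunk with a recursive .data property."""
    def __init__(self, value):
        self._value = value

    @property
    def data(self):
        if isinstance(self._value, dict):       # analogous to CommentedMap
            mapping = OrderedDict()
            for key, value in self._value.items():
                mapping[key.data] = value.data  # keys and values are Nodes too
            return mapping
        elif isinstance(self._value, list):     # analogous to CommentedSeq
            return [item.data for item in self._value]
        else:                                   # scalar: returned as-is
            return self._value

doc = Node({Node('a'): Node([Node('1'), Node('2')])})
print(doc.data)
# OrderedDict([('a', ['1', '2'])])
```

Because each level only touches its immediate children, arbitrarily nested mappings and sequences unwind with no special casing.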
def climatology(self, startclim, endclim, **kwargs):
    r"""
    Returns a climatology of observations at a user specified location
    for a specified time. Users must specify at least one geographic
    search parameter ('stid', 'state', 'country', 'county', 'radius',
    'bbox', 'cwa', 'nwsfirezone', 'gacc', or 'subgacc') to obtain
    observation data. Other parameters may also be included. See below
    mandatory and optional parameters. Also see the metadata() function
    for station IDs.

    Arguments:
    ----------
    startclim: string, mandatory
        Start date in form of MMDDhhmm. MUST BE USED WITH THE ENDCLIM
        PARAMETER. Default time is UTC e.g. startclim='06011800'
        Do not specify a year
    endclim: string, mandatory
        End date in form of MMDDhhmm. MUST BE USED WITH THE STARTCLIM
        PARAMETER. Default time is UTC e.g. endclim='06011800'
        Do not specify a year
    obtimezone: string, optional
        Set to either UTC or local. Sets timezone of obs. Default is UTC.
        e.g. obtimezone='local'
    showemptystations: string, optional
        Set to '1' to show stations even if no obs exist that match the
        time period. Stations without obs are omitted by default.
    stid: string, optional
        Single or comma separated list of MesoWest station IDs.
        e.g. stid='kden,kslc,wbb'
    county: string, optional
        County/parish/borough (US/Canada only), full name
        e.g. county='Larimer'
    state: string, optional
        US state, 2-letter ID e.g. state='CO'
    country: string, optional
        Single or comma separated list of abbreviated 2 or 3 character
        countries e.g. country='us,ca,mx'
    radius: string, optional
        Distance from a lat/lon pt or stid as [lat,lon,radius (mi)] or
        [stid, radius (mi)]. e.g. radius="-120,40,20"
    bbox: string, optional
        Stations within a [lon/lat] box in the order
        [lonmin,latmin,lonmax,latmax] e.g. bbox="-120,40,-119,41"
    cwa: string, optional
        NWS county warning area. See
        http://www.nws.noaa.gov/organization.php for CWA list.
        e.g. cwa='LOX'
    nwsfirezone: string, optional
        NWS fire zones. See
        http://www.nws.noaa.gov/geodata/catalog/wsom/html/firezone.htm
        for a shapefile containing the full list of zones.
        e.g. nwsfirezone='LOX241'
    gacc: string, optional
        Name of Geographic Area Coordination Center e.g. gacc='EBCC'
        See http://gacc.nifc.gov/ for a list of GACCs.
    subgacc: string, optional
        Name of Sub GACC e.g. subgacc='EB07'
    vars: string, optional
        Single or comma separated list of sensor variables. Will return
        all stations that match one of provided variables. Useful for
        filtering all stations that sense only certain vars. Do not
        request vars twice in the query. e.g. vars='wind_speed,pressure'
        Use the variables function to see a list of sensor vars.
    status: string, optional
        A value of either active or inactive returns stations currently
        set as active or inactive in the archive. Omitting this param
        returns all stations. e.g. status='active'
    units: string, optional
        String or set of strings and pipes separated by commas. Default
        is metric units. Set units='ENGLISH' for FREEDOM UNITS ;)
        Valid other combinations are as follows: temp|C, temp|F, temp|K;
        speed|mps, speed|mph, speed|kph, speed|kts; pres|pa, pres|mb;
        height|m, height|ft; precip|mm, precip|cm, precip|in;
        alti|pa, alti|inhg. e.g. units='temp|F,speed|kph,metric'
    groupby: string, optional
        Results can be grouped by key words: state, county, country,
        cwa, nwszone, mwsfirezone, gacc, subgacc e.g. groupby='state'
    timeformat: string, optional
        A python format string for returning customized date-time groups
        for observation times. Can include characters.
        e.g. timeformat='%m/%d/%Y at %H:%M'

    Returns:
    --------
    Dictionary of climatology observations through the get_response()
    function.

    Raises:
    -------
    None.

    """
    self._check_geo_param(kwargs)
    kwargs['startclim'] = startclim
    kwargs['endclim'] = endclim
    kwargs['token'] = self.token

    return self._get_response('stations/climatology', kwargs)
r""" Returns a climatology of observations at a user specified location for a specified time. Users must specify at least one geographic search parameter ('stid', 'state', 'country', 'county', 'radius', 'bbox', 'cwa', 'nwsfirezone', 'gacc', or 'subgacc') to obtain observation data. Other parameters may also be included. See below mandatory and optional parameters. Also see the metadata() function for station IDs.

Arguments:
----------
startclim: string, mandatory
    Start date in form of MMDDhhmm. MUST BE USED WITH THE ENDCLIM PARAMETER. Default time is UTC e.g. startclim='06011800' Do not specify a year
endclim: string, mandatory
    End date in form of MMDDhhmm. MUST BE USED WITH THE STARTCLIM PARAMETER. Default time is UTC e.g. endclim='06011800' Do not specify a year
obtimezone: string, optional
    Set to either UTC or local. Sets timezone of obs. Default is UTC. e.g. obtimezone='local'
showemptystations: string, optional
    Set to '1' to show stations even if no obs exist that match the time period. Stations without obs are omitted by default.
stid: string, optional
    Single or comma separated list of MesoWest station IDs. e.g. stid='kden,kslc,wbb'
county: string, optional
    County/parish/borough (US/Canada only), full name e.g. county='Larimer'
state: string, optional
    US state, 2-letter ID e.g. state='CO'
country: string, optional
    Single or comma separated list of abbreviated 2 or 3 character countries e.g. country='us,ca,mx'
radius: string, optional
    Distance from a lat/lon pt or stid as [lat,lon,radius (mi)] or [stid, radius (mi)]. e.g. radius="-120,40,20"
bbox: string, optional
    Stations within a [lon/lat] box in the order [lonmin,latmin,lonmax,latmax] e.g. bbox="-120,40,-119,41"
cwa: string, optional
    NWS county warning area. See http://www.nws.noaa.gov/organization.php for CWA list. e.g. cwa='LOX'
nwsfirezone: string, optional
    NWS fire zones. See http://www.nws.noaa.gov/geodata/catalog/wsom/html/firezone.htm for a shapefile containing the full list of zones. e.g. nwsfirezone='LOX241'
gacc: string, optional
    Name of Geographic Area Coordination Center e.g. gacc='EBCC' See http://gacc.nifc.gov/ for a list of GACCs.
subgacc: string, optional
    Name of Sub GACC e.g. subgacc='EB07'
vars: string, optional
    Single or comma separated list of sensor variables. Will return all stations that match one of provided variables. Useful for filtering all stations that sense only certain vars. Do not request vars twice in the query. e.g. vars='wind_speed,pressure' Use the variables function to see a list of sensor vars.
status: string, optional
    A value of either active or inactive returns stations currently set as active or inactive in the archive. Omitting this param returns all stations. e.g. status='active'
units: string, optional
    String or set of strings and pipes separated by commas. Default is metric units. Set units='ENGLISH' for FREEDOM UNITS ;) Valid other combinations are as follows: temp|C, temp|F, temp|K; speed|mps, speed|mph, speed|kph, speed|kts; pres|pa, pres|mb; height|m, height|ft; precip|mm, precip|cm, precip|in; alti|pa, alti|inhg. e.g. units='temp|F,speed|kph,metric'
groupby: string, optional
    Results can be grouped by key words: state, county, country, cwa, nwszone, mwsfirezone, gacc, subgacc e.g. groupby='state'
timeformat: string, optional
    A python format string for returning customized date-time groups for observation times. Can include characters. e.g. timeformat='%m/%d/%Y at %H:%M'

Returns:
--------
Dictionary of climatology observations through the get_response() function.

Raises:
-------
None.
Below is the instruction that describes the task:
### Input:
r""" Returns a climatology of observations at a user specified location for a specified time. Users must specify at least one geographic search parameter ('stid', 'state', 'country', 'county', 'radius', 'bbox', 'cwa', 'nwsfirezone', 'gacc', or 'subgacc') to obtain observation data. Other parameters may also be included. See below mandatory and optional parameters. Also see the metadata() function for station IDs. Arguments: ---------- startclim: string, mandatory Start date in form of MMDDhhmm. MUST BE USED WITH THE ENDCLIM PARAMETER. Default time is UTC e.g. startclim='06011800' Do not specify a year endclim: string, mandatory End date in form of MMDDhhmm. MUST BE USED WITH THE STARTCLIM PARAMETER. Default time is UTC e.g. endclim='06011800' Do not specify a year obtimezone: string, optional Set to either UTC or local. Sets timezone of obs. Default is UTC. e.g. obtimezone='local' showemptystations: string, optional Set to '1' to show stations even if no obs exist that match the time period. Stations without obs are omitted by default. stid: string, optional Single or comma separated list of MesoWest station IDs. e.g. stid='kden,kslc,wbb' county: string, optional County/parish/borough (US/Canada only), full name e.g. county='Larimer' state: string, optional US state, 2-letter ID e.g. state='CO' country: string, optional Single or comma separated list of abbreviated 2 or 3 character countries e.g. country='us,ca,mx' radius: string, optional Distance from a lat/lon pt or stid as [lat,lon,radius (mi)] or [stid, radius (mi)]. e.g. radius="-120,40,20" bbox: string, optional Stations within a [lon/lat] box in the order [lonmin,latmin,lonmax,latmax] e.g. bbox="-120,40,-119,41" cwa: string, optional NWS county warning area. See http://www.nws.noaa.gov/organization.php for CWA list. e.g. cwa='LOX' nwsfirezone: string, optional NWS fire zones. See http://www.nws.noaa.gov/geodata/catalog/wsom/html/firezone.htm for a shapefile containing the full list of zones. e.g. nwsfirezone='LOX241' gacc: string, optional Name of Geographic Area Coordination Center e.g. gacc='EBCC' See http://gacc.nifc.gov/ for a list of GACCs. subgacc: string, optional Name of Sub GACC e.g. subgacc='EB07' vars: string, optional Single or comma separated list of sensor variables. Will return all stations that match one of provided variables. Useful for filtering all stations that sense only certain vars. Do not request vars twice in the query. e.g. vars='wind_speed,pressure' Use the variables function to see a list of sensor vars. status: string, optional A value of either active or inactive returns stations currently set as active or inactive in the archive. Omitting this param returns all stations. e.g. status='active' units: string, optional String or set of strings and pipes separated by commas. Default is metric units. Set units='ENGLISH' for FREEDOM UNITS ;) Valid other combinations are as follows: temp|C, temp|F, temp|K; speed|mps, speed|mph, speed|kph, speed|kts; pres|pa, pres|mb; height|m, height|ft; precip|mm, precip|cm, precip|in; alti|pa, alti|inhg. e.g. units='temp|F,speed|kph,metric' groupby: string, optional Results can be grouped by key words: state, county, country, cwa, nwszone, mwsfirezone, gacc, subgacc e.g. groupby='state' timeformat: string, optional A python format string for returning customized date-time groups for observation times. Can include characters. e.g. timeformat='%m/%d/%Y at %H:%M' Returns: -------- Dictionary of climatology observations through the get_response() function. Raises: ------- None.
### Response:
def climatology(self, startclim, endclim, **kwargs):
    r"""
    Returns a climatology of observations at a user specified location
    for a specified time. Users must specify at least one geographic
    search parameter ('stid', 'state', 'country', 'county', 'radius',
    'bbox', 'cwa', 'nwsfirezone', 'gacc', or 'subgacc') to obtain
    observation data. Other parameters may also be included. See below
    mandatory and optional parameters. Also see the metadata() function
    for station IDs.

    Arguments:
    ----------
    startclim: string, mandatory
        Start date in form of MMDDhhmm. MUST BE USED WITH THE ENDCLIM
        PARAMETER. Default time is UTC e.g. startclim='06011800'
        Do not specify a year
    endclim: string, mandatory
        End date in form of MMDDhhmm. MUST BE USED WITH THE STARTCLIM
        PARAMETER. Default time is UTC e.g. endclim='06011800'
        Do not specify a year
    obtimezone: string, optional
        Set to either UTC or local. Sets timezone of obs. Default is UTC.
        e.g. obtimezone='local'
    showemptystations: string, optional
        Set to '1' to show stations even if no obs exist that match the
        time period. Stations without obs are omitted by default.
    stid: string, optional
        Single or comma separated list of MesoWest station IDs.
        e.g. stid='kden,kslc,wbb'
    county: string, optional
        County/parish/borough (US/Canada only), full name
        e.g. county='Larimer'
    state: string, optional
        US state, 2-letter ID e.g. state='CO'
    country: string, optional
        Single or comma separated list of abbreviated 2 or 3 character
        countries e.g. country='us,ca,mx'
    radius: string, optional
        Distance from a lat/lon pt or stid as [lat,lon,radius (mi)] or
        [stid, radius (mi)]. e.g. radius="-120,40,20"
    bbox: string, optional
        Stations within a [lon/lat] box in the order
        [lonmin,latmin,lonmax,latmax] e.g. bbox="-120,40,-119,41"
    cwa: string, optional
        NWS county warning area. See
        http://www.nws.noaa.gov/organization.php for CWA list.
        e.g. cwa='LOX'
    nwsfirezone: string, optional
        NWS fire zones. See
        http://www.nws.noaa.gov/geodata/catalog/wsom/html/firezone.htm
        for a shapefile containing the full list of zones.
        e.g. nwsfirezone='LOX241'
    gacc: string, optional
        Name of Geographic Area Coordination Center e.g. gacc='EBCC'
        See http://gacc.nifc.gov/ for a list of GACCs.
    subgacc: string, optional
        Name of Sub GACC e.g. subgacc='EB07'
    vars: string, optional
        Single or comma separated list of sensor variables. Will return
        all stations that match one of provided variables. Useful for
        filtering all stations that sense only certain vars. Do not
        request vars twice in the query. e.g. vars='wind_speed,pressure'
        Use the variables function to see a list of sensor vars.
    status: string, optional
        A value of either active or inactive returns stations currently
        set as active or inactive in the archive. Omitting this param
        returns all stations. e.g. status='active'
    units: string, optional
        String or set of strings and pipes separated by commas. Default
        is metric units. Set units='ENGLISH' for FREEDOM UNITS ;)
        Valid other combinations are as follows: temp|C, temp|F, temp|K;
        speed|mps, speed|mph, speed|kph, speed|kts; pres|pa, pres|mb;
        height|m, height|ft; precip|mm, precip|cm, precip|in;
        alti|pa, alti|inhg. e.g. units='temp|F,speed|kph,metric'
    groupby: string, optional
        Results can be grouped by key words: state, county, country,
        cwa, nwszone, mwsfirezone, gacc, subgacc e.g. groupby='state'
    timeformat: string, optional
        A python format string for returning customized date-time groups
        for observation times. Can include characters.
        e.g. timeformat='%m/%d/%Y at %H:%M'

    Returns:
    --------
    Dictionary of climatology observations through the get_response()
    function.

    Raises:
    -------
    None.

    """
    self._check_geo_param(kwargs)
    kwargs['startclim'] = startclim
    kwargs['endclim'] = endclim
    kwargs['token'] = self.token

    return self._get_response('stations/climatology', kwargs)
def run(self, args, pipeline_command):
    """
    Invokes the pipeline with the defined command. Command line arguments,
    and the command need to be set with arg_builder, and command_builder
    respectively before this method can be invoked.
    """
    # output that must be moved but not renamed
    consistentNaming = ['alignments/normal_dna_fix_pg_sorted.bam',
                        'alignments/normal_dna_fix_pg_sorted.bam.bai',
                        'alignments/rna_genome_sorted.bam',
                        'alignments/rna_genome_sorted.bam.bai',
                        'alignments/rna_transcriptome.bam',
                        'alignments/tumor_dna_fix_pg_sorted.bam',
                        'alignments/tumor_dna_fix_pg_sorted.bam.bai',
                        'mutations/merged/all_merged.vcf',
                        'rankboost/mhcii_rankboost_concise_results.tsv',
                        'rankboost/mhci_rankboost_concise_results.tsv',
                        ]

    # output that must be renamed as well as moved
    # map of the original name to the final name
    renamingNeeded = {'binding_predictions': 'binding_predictions.tar',
                      'expression': 'expression.tar',
                      'haplotyping': 'haplotyping.tar',
                      'peptides': 'peptides.tar',
                      'rankboost': 'rankboost.tar',
                      'reports': 'reports.tar',
                      'mutations/snpeffed/mutations.vcf': 'all_snpeffed.vcf',
                      'mutations/transgened/mutations.vcf': 'all_transgened.vcf',
                      'mutations/merged': 'merged_perchrom.tar',
                      'mutations/muse': 'muse_perchrom.tar',
                      'mutations/mutect': 'mutect_perchrom.tar',
                      'mutations/radia': 'radia_perchrom.tar',
                      'mutations/somaticsniper': 'somaticsniper_perchrom.tar',
                      'mutations/strelka/snv': 'strelka_snv_perchrom.tar',
                      'mutations/strelka/indel': 'strelka_indel_perchrom.tar'}

    def make_output(output_dir, source_dir):
        """
        :param output_dir: dir to write the output to
        :param source_dir: dir containing the directory structure to be parsed
        :return:
        """
        def make_tar(dir, tar):
            with tarfile.open(tar, "w:gz") as tar:
                tar.add(dir)

        # the output dir is where the real output directories are written
        protect_outputs = os.listdir(source_dir)
        for protectOut in protect_outputs:
            def getName(fileName):
                return os.path.join(os.path.join(source_dir, protectOut), fileName)

            # move individual files out
            for fileName in consistentNaming:
                shutil.copyfile(getName(fileName),
                                os.path.join(output_dir, os.path.basename(fileName)))
            for src, dst in renamingNeeded.iteritems():
                if dst.endswith('.tar'):
                    make_tar(getName(src), os.path.join(output_dir, dst))
                else:
                    shutil.copyfile(getName(src), os.path.join(output_dir, dst))
        shutil.rmtree(source_dir)

    # prepare workdir
    mount = self._prepare_mount(args)
    self._workdir = os.path.join(mount, 'Toil-' + self._name)

    # insure the pairs are in the same directory, as protect expects
    # This is made more complicated by the fact CWLTool mounts inputs into random, read-only dirs
    # to get around this we copy all inputs into their own directories that we own
    tumor_dna_dir = os.path.expanduser('~/tumorDNA')
    tumor_rna_dir = os.path.expanduser('~/tumorRNA')
    normal_dna_dir = os.path.expanduser('~/normalDNA')
    os.mkdir(tumor_dna_dir)
    os.mkdir(tumor_rna_dir)
    os.mkdir(normal_dna_dir)
    shutil.copy(args.tumor_dna, tumor_dna_dir)
    shutil.copy(args.tumor_rna, tumor_rna_dir)
    shutil.copy(args.normal_dna, normal_dna_dir)
    shutil.copy(args.tumor_dna2, tumor_dna_dir)
    shutil.copy(args.tumor_rna2, tumor_rna_dir)
    shutil.copy(args.normal_dna2, normal_dna_dir)
    args.tumor_dna = os.path.join(tumor_dna_dir, os.path.basename(args.tumor_dna))
    args.tumor_dna2 = os.path.join(tumor_dna_dir, os.path.basename(args.tumor_dna2))
    args.tumor_rna = os.path.join(tumor_rna_dir, os.path.basename(args.tumor_rna))
    args.tumor_rna2 = os.path.join(tumor_rna_dir, os.path.basename(args.tumor_rna2))
    args.normal_dna = os.path.join(normal_dna_dir, os.path.basename(args.normal_dna))
    args.normal_dna2 = os.path.join(normal_dna_dir, os.path.basename(args.normal_dna2))

    # prepare config
    args_dict = vars(args)
    args_dict['output_dir'] = mount
    self._config = textwrap.dedent(self._config.format(**args_dict))
    self._sample_name = args_dict["sample_name"]
    config_path = os.path.join(self._workdir, 'config')
    command = self._make_prefix(os.path.join(self._workdir, 'jobStore'),
                                config_path, self._workdir) + pipeline_command
    if self._resume and args.resume:
        command.append('--restart')
    self._create_workdir(args)
    with open(config_path, 'w') as f:
        f.write(self._config)
    try:
        subprocess.check_call(command)
    except subprocess.CalledProcessError as e:
        print(e, file=sys.stderr)
    finally:
        log.info('Pipeline terminated, changing ownership of output files from root to user.')
        stat = os.stat(self._mount)
        subprocess.check_call(['chown', '-R', '{}:{}'.format(stat.st_uid, stat.st_gid), self._mount])
        make_output(self._mount, os.path.join(self._mount, 'output'))
        if self._no_clean and args.no_clean:
            log.info('Flag "--no-clean" was used, therefore %s was not deleted.', self._workdir)
        else:
            log.info('Cleaning up temporary directory: %s', self._workdir)
            shutil.rmtree(self._workdir)
Invokes the pipeline with the defined command. Command line arguments, and the command need to be set with arg_builder, and command_builder respectively before this method can be invoked.
Below is the instruction that describes the task:
### Input:
Invokes the pipeline with the defined command. Command line arguments, and the command need to be set with arg_builder, and command_builder respectively before this method can be invoked.
### Response:
def run(self, args, pipeline_command):
    """
    Invokes the pipeline with the defined command. Command line arguments,
    and the command need to be set with arg_builder, and command_builder
    respectively before this method can be invoked.
    """
    # output that must be moved but not renamed
    consistentNaming = ['alignments/normal_dna_fix_pg_sorted.bam',
                        'alignments/normal_dna_fix_pg_sorted.bam.bai',
                        'alignments/rna_genome_sorted.bam',
                        'alignments/rna_genome_sorted.bam.bai',
                        'alignments/rna_transcriptome.bam',
                        'alignments/tumor_dna_fix_pg_sorted.bam',
                        'alignments/tumor_dna_fix_pg_sorted.bam.bai',
                        'mutations/merged/all_merged.vcf',
                        'rankboost/mhcii_rankboost_concise_results.tsv',
                        'rankboost/mhci_rankboost_concise_results.tsv',
                        ]

    # output that must be renamed as well as moved
    # map of the original name to the final name
    renamingNeeded = {'binding_predictions': 'binding_predictions.tar',
                      'expression': 'expression.tar',
                      'haplotyping': 'haplotyping.tar',
                      'peptides': 'peptides.tar',
                      'rankboost': 'rankboost.tar',
                      'reports': 'reports.tar',
                      'mutations/snpeffed/mutations.vcf': 'all_snpeffed.vcf',
                      'mutations/transgened/mutations.vcf': 'all_transgened.vcf',
                      'mutations/merged': 'merged_perchrom.tar',
                      'mutations/muse': 'muse_perchrom.tar',
                      'mutations/mutect': 'mutect_perchrom.tar',
                      'mutations/radia': 'radia_perchrom.tar',
                      'mutations/somaticsniper': 'somaticsniper_perchrom.tar',
                      'mutations/strelka/snv': 'strelka_snv_perchrom.tar',
                      'mutations/strelka/indel': 'strelka_indel_perchrom.tar'}

    def make_output(output_dir, source_dir):
        """
        :param output_dir: dir to write the output to
        :param source_dir: dir containing the directory structure to be parsed
        :return:
        """
        def make_tar(dir, tar):
            with tarfile.open(tar, "w:gz") as tar:
                tar.add(dir)

        # the output dir is where the real output directories are written
        protect_outputs = os.listdir(source_dir)
        for protectOut in protect_outputs:
            def getName(fileName):
                return os.path.join(os.path.join(source_dir, protectOut), fileName)

            # move individual files out
            for fileName in consistentNaming:
                shutil.copyfile(getName(fileName),
                                os.path.join(output_dir, os.path.basename(fileName)))
            for src, dst in renamingNeeded.iteritems():
                if dst.endswith('.tar'):
                    make_tar(getName(src), os.path.join(output_dir, dst))
                else:
                    shutil.copyfile(getName(src), os.path.join(output_dir, dst))
        shutil.rmtree(source_dir)

    # prepare workdir
    mount = self._prepare_mount(args)
    self._workdir = os.path.join(mount, 'Toil-' + self._name)

    # insure the pairs are in the same directory, as protect expects
    # This is made more complicated by the fact CWLTool mounts inputs into random, read-only dirs
    # to get around this we copy all inputs into their own directories that we own
    tumor_dna_dir = os.path.expanduser('~/tumorDNA')
    tumor_rna_dir = os.path.expanduser('~/tumorRNA')
    normal_dna_dir = os.path.expanduser('~/normalDNA')
    os.mkdir(tumor_dna_dir)
    os.mkdir(tumor_rna_dir)
    os.mkdir(normal_dna_dir)
    shutil.copy(args.tumor_dna, tumor_dna_dir)
    shutil.copy(args.tumor_rna, tumor_rna_dir)
    shutil.copy(args.normal_dna, normal_dna_dir)
    shutil.copy(args.tumor_dna2, tumor_dna_dir)
    shutil.copy(args.tumor_rna2, tumor_rna_dir)
    shutil.copy(args.normal_dna2, normal_dna_dir)
    args.tumor_dna = os.path.join(tumor_dna_dir, os.path.basename(args.tumor_dna))
    args.tumor_dna2 = os.path.join(tumor_dna_dir, os.path.basename(args.tumor_dna2))
    args.tumor_rna = os.path.join(tumor_rna_dir, os.path.basename(args.tumor_rna))
    args.tumor_rna2 = os.path.join(tumor_rna_dir, os.path.basename(args.tumor_rna2))
    args.normal_dna = os.path.join(normal_dna_dir, os.path.basename(args.normal_dna))
    args.normal_dna2 = os.path.join(normal_dna_dir, os.path.basename(args.normal_dna2))

    # prepare config
    args_dict = vars(args)
    args_dict['output_dir'] = mount
    self._config = textwrap.dedent(self._config.format(**args_dict))
    self._sample_name = args_dict["sample_name"]
    config_path = os.path.join(self._workdir, 'config')
    command = self._make_prefix(os.path.join(self._workdir, 'jobStore'),
                                config_path, self._workdir) + pipeline_command
    if self._resume and args.resume:
        command.append('--restart')
    self._create_workdir(args)
    with open(config_path, 'w') as f:
        f.write(self._config)
    try:
        subprocess.check_call(command)
    except subprocess.CalledProcessError as e:
        print(e, file=sys.stderr)
    finally:
        log.info('Pipeline terminated, changing ownership of output files from root to user.')
        stat = os.stat(self._mount)
        subprocess.check_call(['chown', '-R', '{}:{}'.format(stat.st_uid, stat.st_gid), self._mount])
        make_output(self._mount, os.path.join(self._mount, 'output'))
        if self._no_clean and args.no_clean:
            log.info('Flag "--no-clean" was used, therefore %s was not deleted.', self._workdir)
        else:
            log.info('Cleaning up temporary directory: %s', self._workdir)
            shutil.rmtree(self._workdir)
def paga_compare(
        adata, basis=None, edges=False, color=None, alpha=None, groups=None,
        components=None, projection='2d', legend_loc='on data',
        legend_fontsize=None, legend_fontweight='bold', color_map=None,
        palette=None, frameon=False, size=None, title=None,
        right_margin=None, left_margin=0.05, show=None, save=None,
        title_graph=None, groups_graph=None, **paga_graph_params):
    """Scatter and PAGA graph side-by-side.

    Consists in a scatter plot and the abstracted graph. See
    :func:`~scanpy.api.pl.paga` for all related parameters. See
    :func:`~scanpy.api.pl.paga_path` for visualizing gene changes along
    paths through the abstracted graph. Additional parameters are as
    follows.

    Parameters
    ----------
    adata : :class:`~anndata.AnnData`
        Annotated data matrix.
    kwds_scatter : `dict`
        Keywords for :func:`~scanpy.api.pl.scatter`.
    kwds_paga : `dict`
        Keywords for :func:`~scanpy.api.pl.paga`.

    Returns
    -------
    A list of `matplotlib.axes.Axes` if `show` is `False`.
    """
    axs, _, _, _ = utils.setup_axes(panels=[0, 1], right_margin=right_margin)
    if color is None:
        color = adata.uns['paga']['groups']
    suptitle = None  # common title for entire figure
    if title_graph is None:
        suptitle = color if title is None else title
        title, title_graph = '', ''
    if basis is None:
        if 'X_draw_graph_fa' in adata.obsm.keys():
            basis = 'draw_graph_fa'
        elif 'X_umap' in adata.obsm.keys():
            basis = 'umap'
        elif 'X_tsne' in adata.obsm.keys():
            basis = 'tsne'
        elif 'X_draw_graph_fr' in adata.obsm.keys():
            basis = 'draw_graph_fr'
        else:
            basis = 'umap'
    from .scatterplots import plot_scatter
    plot_scatter(
        adata, ax=axs[0], basis=basis, color=color, edges=edges,
        alpha=alpha, groups=groups, components=components,
        legend_loc=legend_loc, legend_fontsize=legend_fontsize,
        legend_fontweight=legend_fontweight, color_map=color_map,
        palette=palette, frameon=frameon, size=size, title=title,
        show=False, save=False)
    if 'pos' not in paga_graph_params:
        if color == adata.uns['paga']['groups']:
            paga_graph_params['pos'] = utils._tmp_cluster_pos
        else:
            paga_graph_params['pos'] = adata.uns['paga']['pos']
    xlim, ylim = axs[0].get_xlim(), axs[0].get_ylim()
    axs[1].set_xlim(xlim)
    axs[1].set_ylim(ylim)
    if 'labels' in paga_graph_params:
        labels = paga_graph_params.pop('labels')
    else:
        labels = groups_graph
    paga(
        adata, ax=axs[1], show=False, save=False, title=title_graph,
        labels=labels, colors=color, frameon=frameon, **paga_graph_params)
    if suptitle is not None:
        pl.suptitle(suptitle)
    utils.savefig_or_show('paga_compare', show=show, save=save)
    if show == False:
        return axs
Scatter and PAGA graph side-by-side. Consists in a scatter plot and the abstracted graph. See :func:`~scanpy.api.pl.paga` for all related parameters. See :func:`~scanpy.api.pl.paga_path` for visualizing gene changes along paths through the abstracted graph. Additional parameters are as follows. Parameters ---------- adata : :class:`~anndata.AnnData` Annotated data matrix. kwds_scatter : `dict` Keywords for :func:`~scanpy.api.pl.scatter`. kwds_paga : `dict` Keywords for :func:`~scanpy.api.pl.paga`. Returns ------- A list of `matplotlib.axes.Axes` if `show` is `False`.
Below is the instruction that describes the task: ### Input: Scatter and PAGA graph side-by-side. Consists in a scatter plot and the abstracted graph. See :func:`~scanpy.api.pl.paga` for all related parameters. See :func:`~scanpy.api.pl.paga_path` for visualizing gene changes along paths through the abstracted graph. Additional parameters are as follows. Parameters ---------- adata : :class:`~anndata.AnnData` Annotated data matrix. kwds_scatter : `dict` Keywords for :func:`~scanpy.api.pl.scatter`. kwds_paga : `dict` Keywords for :func:`~scanpy.api.pl.paga`. Returns ------- A list of `matplotlib.axes.Axes` if `show` is `False`. ### Response: def paga_compare( adata, basis=None, edges=False, color=None, alpha=None, groups=None, components=None, projection='2d', legend_loc='on data', legend_fontsize=None, legend_fontweight='bold', color_map=None, palette=None, frameon=False, size=None, title=None, right_margin=None, left_margin=0.05, show=None, save=None, title_graph=None, groups_graph=None, **paga_graph_params): """Scatter and PAGA graph side-by-side. Consists in a scatter plot and the abstracted graph. See :func:`~scanpy.api.pl.paga` for all related parameters. See :func:`~scanpy.api.pl.paga_path` for visualizing gene changes along paths through the abstracted graph. Additional parameters are as follows. Parameters ---------- adata : :class:`~anndata.AnnData` Annotated data matrix. kwds_scatter : `dict` Keywords for :func:`~scanpy.api.pl.scatter`. kwds_paga : `dict` Keywords for :func:`~scanpy.api.pl.paga`. Returns ------- A list of `matplotlib.axes.Axes` if `show` is `False`.
""" axs, _, _, _ = utils.setup_axes(panels=[0, 1], right_margin=right_margin) if color is None: color = adata.uns['paga']['groups'] suptitle = None # common title for entire figure if title_graph is None: suptitle = color if title is None else title title, title_graph = '', '' if basis is None: if 'X_draw_graph_fa' in adata.obsm.keys(): basis = 'draw_graph_fa' elif 'X_umap' in adata.obsm.keys(): basis = 'umap' elif 'X_tsne' in adata.obsm.keys(): basis = 'tsne' elif 'X_draw_graph_fr' in adata.obsm.keys(): basis = 'draw_graph_fr' else: basis = 'umap' from .scatterplots import plot_scatter plot_scatter( adata, ax=axs[0], basis=basis, color=color, edges=edges, alpha=alpha, groups=groups, components=components, legend_loc=legend_loc, legend_fontsize=legend_fontsize, legend_fontweight=legend_fontweight, color_map=color_map, palette=palette, frameon=frameon, size=size, title=title, show=False, save=False) if 'pos' not in paga_graph_params: if color == adata.uns['paga']['groups']: paga_graph_params['pos'] = utils._tmp_cluster_pos else: paga_graph_params['pos'] = adata.uns['paga']['pos'] xlim, ylim = axs[0].get_xlim(), axs[0].get_ylim() axs[1].set_xlim(xlim) axs[1].set_ylim(ylim) if 'labels' in paga_graph_params: labels = paga_graph_params.pop('labels') else: labels = groups_graph paga( adata, ax=axs[1], show=False, save=False, title=title_graph, labels=labels, colors=color, frameon=frameon, **paga_graph_params) if suptitle is not None: pl.suptitle(suptitle) utils.savefig_or_show('paga_compare', show=show, save=save) if show == False: return axs
def authenticate(self, retries=3): """Set API token by authenticating via username/password. :param retries: Number of authentication attempts to make before giving up as an int. :return: Learned API token :rtype: string """ username = self.username or self._getuser(self.base) password = self.password while retries > 0: if password is None: password = self._getpass(self.base, username) try: resp = self.post(self.base+'/authenticate', params={'username': username, 'password': password}) schema = UserSchema() u = self.decode(schema, resp) if u.token is not None: self.lock.acquire() self.token = u.token self.lock.release() break except CDRouterError as cde: password = None retries -= 1 if retries == 0: raise cde return self.token
Set API token by authenticating via username/password. :param retries: Number of authentication attempts to make before giving up as an int. :return: Learned API token :rtype: string
Below is the instruction that describes the task: ### Input: Set API token by authenticating via username/password. :param retries: Number of authentication attempts to make before giving up as an int. :return: Learned API token :rtype: string ### Response: def authenticate(self, retries=3): """Set API token by authenticating via username/password. :param retries: Number of authentication attempts to make before giving up as an int. :return: Learned API token :rtype: string """ username = self.username or self._getuser(self.base) password = self.password while retries > 0: if password is None: password = self._getpass(self.base, username) try: resp = self.post(self.base+'/authenticate', params={'username': username, 'password': password}) schema = UserSchema() u = self.decode(schema, resp) if u.token is not None: self.lock.acquire() self.token = u.token self.lock.release() break except CDRouterError as cde: password = None retries -= 1 if retries == 0: raise cde return self.token
def run_selected_clicked(self): """Run the selected scenario.""" # get all selected rows rows = sorted(set(index.row() for index in self.table.selectedIndexes())) self.enable_busy_cursor() # iterate over selected rows for row in rows: current_row = row item = self.table.item(current_row, 0) status_item = self.table.item(current_row, 1) self.run_task(item, status_item) self.disable_busy_cursor()
Run the selected scenario.
Below is the instruction that describes the task: ### Input: Run the selected scenario. ### Response: def run_selected_clicked(self): """Run the selected scenario.""" # get all selected rows rows = sorted(set(index.row() for index in self.table.selectedIndexes())) self.enable_busy_cursor() # iterate over selected rows for row in rows: current_row = row item = self.table.item(current_row, 0) status_item = self.table.item(current_row, 1) self.run_task(item, status_item) self.disable_busy_cursor()
def force_iterable(f): """Will make any functions return an iterable objects by wrapping its result in a list.""" def wrapper(*args, **kwargs): r = f(*args, **kwargs) if hasattr(r, '__iter__'): return r else: return [r] return wrapper
Will make any functions return an iterable objects by wrapping its result in a list.
Below is the instruction that describes the task: ### Input: Will make any functions return an iterable objects by wrapping its result in a list. ### Response: def force_iterable(f): """Will make any functions return an iterable objects by wrapping its result in a list.""" def wrapper(*args, **kwargs): r = f(*args, **kwargs) if hasattr(r, '__iter__'): return r else: return [r] return wrapper
def _getitem_normalized(self, index): """Builds the more compact fmtstrs by using fromstr( of the control sequences)""" index = normalize_slice(len(self), index) counter = 0 output = '' for fs in self.chunks: if index.start < counter + len(fs) and index.stop > counter: s_part = fs.s[max(0, index.start - counter):index.stop - counter] piece = Chunk(s_part, fs.atts).color_str output += piece counter += len(fs) if index.stop < counter: break return fmtstr(output)
Builds the more compact fmtstrs by using fromstr( of the control sequences)
Below is the instruction that describes the task: ### Input: Builds the more compact fmtstrs by using fromstr( of the control sequences) ### Response: def _getitem_normalized(self, index): """Builds the more compact fmtstrs by using fromstr( of the control sequences)""" index = normalize_slice(len(self), index) counter = 0 output = '' for fs in self.chunks: if index.start < counter + len(fs) and index.stop > counter: s_part = fs.s[max(0, index.start - counter):index.stop - counter] piece = Chunk(s_part, fs.atts).color_str output += piece counter += len(fs) if index.stop < counter: break return fmtstr(output)
def _next_condition_lexems(self, source_code, source_code_size): """Return condition lexem readed in source_code""" # find three lexems lexems = tuple(( self._next_lexem(LEXEM_TYPE_COMPARISON, source_code, source_code_size), self._next_lexem(LEXEM_TYPE_OPERATOR , source_code, source_code_size), self._next_lexem(LEXEM_TYPE_COMPARISON, source_code, source_code_size) )) # verify integrity if None in lexems: # one of the condition lexem was not found in source code return None else: # all lexems are valid return ' '.join(lexems)
Return condition lexem readed in source_code
Below is the instruction that describes the task: ### Input: Return condition lexem readed in source_code ### Response: def _next_condition_lexems(self, source_code, source_code_size): """Return condition lexem readed in source_code""" # find three lexems lexems = tuple(( self._next_lexem(LEXEM_TYPE_COMPARISON, source_code, source_code_size), self._next_lexem(LEXEM_TYPE_OPERATOR , source_code, source_code_size), self._next_lexem(LEXEM_TYPE_COMPARISON, source_code, source_code_size) )) # verify integrity if None in lexems: # one of the condition lexem was not found in source code return None else: # all lexems are valid return ' '.join(lexems)
def run(self, args): """**down** [*count*] Move the current frame down in the stack trace (to a newer frame). 0 is the most recent frame. If no count is given, move down 1. See also: --------- `up` and `frame`.""" Mframe.adjust_relative(self.proc, self.name, args, self.signum) return False
**down** [*count*] Move the current frame down in the stack trace (to a newer frame). 0 is the most recent frame. If no count is given, move down 1. See also: --------- `up` and `frame`.
Below is the instruction that describes the task: ### Input: **down** [*count*] Move the current frame down in the stack trace (to a newer frame). 0 is the most recent frame. If no count is given, move down 1. See also: --------- `up` and `frame`. ### Response: def run(self, args): """**down** [*count*] Move the current frame down in the stack trace (to a newer frame). 0 is the most recent frame. If no count is given, move down 1. See also: --------- `up` and `frame`.""" Mframe.adjust_relative(self.proc, self.name, args, self.signum) return False
def parse_operand(string, location, tokens): """Parse an x86 instruction operand. """ mod = " ".join(tokens.get("modifier", "")) if "immediate" in tokens: imm = parse_immediate("".join(tokens["immediate"])) size = modifier_size.get(mod, None) oprnd = X86ImmediateOperand(imm, size) if "register" in tokens: name = tokens["register"] size = arch_info.registers_size[tokens["register"]] oprnd = X86RegisterOperand(name, size) if "memory" in tokens: seg_reg = tokens.get("segment", None) base_reg = tokens.get("base", None) index_reg = tokens.get("index", None) scale_imm = int(tokens.get("scale", "0x1"), 16) displ_imm = int("".join(tokens.get("displacement", "0x0")), 16) oprnd = X86MemoryOperand(seg_reg, base_reg, index_reg, scale_imm, displ_imm) oprnd.modifier = mod if not oprnd.size and oprnd.modifier: oprnd.size = modifier_size[oprnd.modifier] return oprnd
Parse an x86 instruction operand.
Below is the instruction that describes the task: ### Input: Parse an x86 instruction operand. ### Response: def parse_operand(string, location, tokens): """Parse an x86 instruction operand. """ mod = " ".join(tokens.get("modifier", "")) if "immediate" in tokens: imm = parse_immediate("".join(tokens["immediate"])) size = modifier_size.get(mod, None) oprnd = X86ImmediateOperand(imm, size) if "register" in tokens: name = tokens["register"] size = arch_info.registers_size[tokens["register"]] oprnd = X86RegisterOperand(name, size) if "memory" in tokens: seg_reg = tokens.get("segment", None) base_reg = tokens.get("base", None) index_reg = tokens.get("index", None) scale_imm = int(tokens.get("scale", "0x1"), 16) displ_imm = int("".join(tokens.get("displacement", "0x0")), 16) oprnd = X86MemoryOperand(seg_reg, base_reg, index_reg, scale_imm, displ_imm) oprnd.modifier = mod if not oprnd.size and oprnd.modifier: oprnd.size = modifier_size[oprnd.modifier] return oprnd
def get_child(self, usage_id): """Return the child identified by ``usage_id``.""" if usage_id in self._child_cache: return self._child_cache[usage_id] child_block = self.runtime.get_block(usage_id, for_parent=self) self._child_cache[usage_id] = child_block return child_block
Return the child identified by ``usage_id``.
Below is the instruction that describes the task: ### Input: Return the child identified by ``usage_id``. ### Response: def get_child(self, usage_id): """Return the child identified by ``usage_id``.""" if usage_id in self._child_cache: return self._child_cache[usage_id] child_block = self.runtime.get_block(usage_id, for_parent=self) self._child_cache[usage_id] = child_block return child_block
def dark(app): """ Apply Dark Theme to the Qt application instance. Args: app (QApplication): QApplication instance. """ _apply_base_theme(app) darkPalette = QPalette() # base darkPalette.setColor(QPalette.WindowText, QColor(180, 180, 180)) darkPalette.setColor(QPalette.Button, QColor(53, 53, 53)) darkPalette.setColor(QPalette.Light, QColor(180, 180, 180)) darkPalette.setColor(QPalette.Midlight, QColor(90, 90, 90)) darkPalette.setColor(QPalette.Dark, QColor(35, 35, 35)) darkPalette.setColor(QPalette.Text, QColor(180, 180, 180)) darkPalette.setColor(QPalette.BrightText, QColor(180, 180, 180)) darkPalette.setColor(QPalette.ButtonText, QColor(180, 180, 180)) darkPalette.setColor(QPalette.Base, QColor(42, 42, 42)) darkPalette.setColor(QPalette.Window, QColor(53, 53, 53)) darkPalette.setColor(QPalette.Shadow, QColor(20, 20, 20)) darkPalette.setColor(QPalette.Highlight, QColor(42, 130, 218)) darkPalette.setColor(QPalette.HighlightedText, QColor(180, 180, 180)) darkPalette.setColor(QPalette.Link, QColor(56, 252, 196)) darkPalette.setColor(QPalette.AlternateBase, QColor(66, 66, 66)) darkPalette.setColor(QPalette.ToolTipBase, QColor(53, 53, 53)) darkPalette.setColor(QPalette.ToolTipText, QColor(180, 180, 180)) # disabled darkPalette.setColor(QPalette.Disabled, QPalette.WindowText, QColor(127, 127, 127)) darkPalette.setColor(QPalette.Disabled, QPalette.Text, QColor(127, 127, 127)) darkPalette.setColor(QPalette.Disabled, QPalette.ButtonText, QColor(127, 127, 127)) darkPalette.setColor(QPalette.Disabled, QPalette.Highlight, QColor(80, 80, 80)) darkPalette.setColor(QPalette.Disabled, QPalette.HighlightedText, QColor(127, 127, 127)) app.setPalette(darkPalette)
Apply Dark Theme to the Qt application instance. Args: app (QApplication): QApplication instance.
Below is the instruction that describes the task: ### Input: Apply Dark Theme to the Qt application instance. Args: app (QApplication): QApplication instance. ### Response: def dark(app): """ Apply Dark Theme to the Qt application instance. Args: app (QApplication): QApplication instance. """ _apply_base_theme(app) darkPalette = QPalette() # base darkPalette.setColor(QPalette.WindowText, QColor(180, 180, 180)) darkPalette.setColor(QPalette.Button, QColor(53, 53, 53)) darkPalette.setColor(QPalette.Light, QColor(180, 180, 180)) darkPalette.setColor(QPalette.Midlight, QColor(90, 90, 90)) darkPalette.setColor(QPalette.Dark, QColor(35, 35, 35)) darkPalette.setColor(QPalette.Text, QColor(180, 180, 180)) darkPalette.setColor(QPalette.BrightText, QColor(180, 180, 180)) darkPalette.setColor(QPalette.ButtonText, QColor(180, 180, 180)) darkPalette.setColor(QPalette.Base, QColor(42, 42, 42)) darkPalette.setColor(QPalette.Window, QColor(53, 53, 53)) darkPalette.setColor(QPalette.Shadow, QColor(20, 20, 20)) darkPalette.setColor(QPalette.Highlight, QColor(42, 130, 218)) darkPalette.setColor(QPalette.HighlightedText, QColor(180, 180, 180)) darkPalette.setColor(QPalette.Link, QColor(56, 252, 196)) darkPalette.setColor(QPalette.AlternateBase, QColor(66, 66, 66)) darkPalette.setColor(QPalette.ToolTipBase, QColor(53, 53, 53)) darkPalette.setColor(QPalette.ToolTipText, QColor(180, 180, 180)) # disabled darkPalette.setColor(QPalette.Disabled, QPalette.WindowText, QColor(127, 127, 127)) darkPalette.setColor(QPalette.Disabled, QPalette.Text, QColor(127, 127, 127)) darkPalette.setColor(QPalette.Disabled, QPalette.ButtonText, QColor(127, 127, 127)) darkPalette.setColor(QPalette.Disabled, QPalette.Highlight, QColor(80, 80, 80)) darkPalette.setColor(QPalette.Disabled, QPalette.HighlightedText, QColor(127, 127, 127)) app.setPalette(darkPalette)
def lookup_task(self, task): """ Looks up a task by name or by callable """ if isinstance(task, str): try: return self[task] except KeyError: pass raise TaskError('Unknown task %s' % task)
Looks up a task by name or by callable
Below is the instruction that describes the task: ### Input: Looks up a task by name or by callable ### Response: def lookup_task(self, task): """ Looks up a task by name or by callable """ if isinstance(task, str): try: return self[task] except KeyError: pass raise TaskError('Unknown task %s' % task)
def validate_env(self, envname): """ Check the name of the environment against the black list and the whitelist. If a whitelist is specified only it is checked. """ if self.whitelist_envs and envname in self.whitelist_envs: return True elif self.whitelist_envs: return False if self.blacklist_envs and envname not in self.blacklist_envs: return True elif self.blacklist_envs: # If there is just a True, all envs are blacklisted return False else: return True
Check the name of the environment against the black list and the whitelist. If a whitelist is specified only it is checked.
Below is the instruction that describes the task: ### Input: Check the name of the environment against the black list and the whitelist. If a whitelist is specified only it is checked. ### Response: def validate_env(self, envname): """ Check the name of the environment against the black list and the whitelist. If a whitelist is specified only it is checked. """ if self.whitelist_envs and envname in self.whitelist_envs: return True elif self.whitelist_envs: return False if self.blacklist_envs and envname not in self.blacklist_envs: return True elif self.blacklist_envs: # If there is just a True, all envs are blacklisted return False else: return True
async def clean(self): """ Close all the running tasks watching for a container timeout. All references to containers are removed: any attempt to was_killed after a call to clean() will return None. """ for x in self._running_asyncio_tasks: x.cancel() self._container_had_error = set() self._watching = set() self._running_asyncio_tasks = set()
Close all the running tasks watching for a container timeout. All references to containers are removed: any attempt to was_killed after a call to clean() will return None.
Below is the instruction that describes the task: ### Input: Close all the running tasks watching for a container timeout. All references to containers are removed: any attempt to was_killed after a call to clean() will return None. ### Response: async def clean(self): """ Close all the running tasks watching for a container timeout. All references to containers are removed: any attempt to was_killed after a call to clean() will return None. """ for x in self._running_asyncio_tasks: x.cancel() self._container_had_error = set() self._watching = set() self._running_asyncio_tasks = set()
def read_local_conf(local_conf): """Search for conf.py in any rel_source directory in CWD and if found read it and return. :param str local_conf: Path to conf.py to read. :return: Loaded conf.py. :rtype: dict """ log = logging.getLogger(__name__) # Attempt to read. log.info('Reading config from %s...', local_conf) try: config = read_config(os.path.dirname(local_conf), '<local>') except HandledError: log.warning('Unable to read file, continuing with only CLI args.') return dict() # Filter and return. return {k[4:]: v for k, v in config.items() if k.startswith('scv_') and not k[4:].startswith('_')}
Search for conf.py in any rel_source directory in CWD and if found read it and return. :param str local_conf: Path to conf.py to read. :return: Loaded conf.py. :rtype: dict
Below is the instruction that describes the task: ### Input: Search for conf.py in any rel_source directory in CWD and if found read it and return. :param str local_conf: Path to conf.py to read. :return: Loaded conf.py. :rtype: dict ### Response: def read_local_conf(local_conf): """Search for conf.py in any rel_source directory in CWD and if found read it and return. :param str local_conf: Path to conf.py to read. :return: Loaded conf.py. :rtype: dict """ log = logging.getLogger(__name__) # Attempt to read. log.info('Reading config from %s...', local_conf) try: config = read_config(os.path.dirname(local_conf), '<local>') except HandledError: log.warning('Unable to read file, continuing with only CLI args.') return dict() # Filter and return. return {k[4:]: v for k, v in config.items() if k.startswith('scv_') and not k[4:].startswith('_')}
def get_normalized_term(term_id: str, equivalents: list, namespace_targets: dict) -> str: """Get normalized term""" if equivalents and len(equivalents) > 0: for start_ns in namespace_targets: if re.match(start_ns, term_id): for target_ns in namespace_targets[start_ns]: for e in equivalents: if e and target_ns in e["namespace"] and e["primary"]: normalized_term = e["term_id"] return normalized_term return term_id
Get normalized term
Below is the instruction that describes the task: ### Input: Get normalized term ### Response: def get_normalized_term(term_id: str, equivalents: list, namespace_targets: dict) -> str: """Get normalized term""" if equivalents and len(equivalents) > 0: for start_ns in namespace_targets: if re.match(start_ns, term_id): for target_ns in namespace_targets[start_ns]: for e in equivalents: if e and target_ns in e["namespace"] and e["primary"]: normalized_term = e["term_id"] return normalized_term return term_id
def get_all_supported_exts_for_type(self, type_to_match: Type[Any], strict: bool) -> Set[str]: """ Utility method to return the set of all supported file extensions that may be converted to objects of the given type. type=JOKER is a joker that means all types :param type_to_match: :param strict: :return: """ matching = self.find_all_matching_parsers(desired_type=type_to_match, strict=strict)[0] return {ext for exts in [p.supported_exts for p in (matching[0] + matching[1] + matching[2])] for ext in exts}
Utility method to return the set of all supported file extensions that may be converted to objects of the given type. type=JOKER is a joker that means all types :param type_to_match: :param strict: :return:
Below is the instruction that describes the task: ### Input: Utility method to return the set of all supported file extensions that may be converted to objects of the given type. type=JOKER is a joker that means all types :param type_to_match: :param strict: :return: ### Response: def get_all_supported_exts_for_type(self, type_to_match: Type[Any], strict: bool) -> Set[str]: """ Utility method to return the set of all supported file extensions that may be converted to objects of the given type. type=JOKER is a joker that means all types :param type_to_match: :param strict: :return: """ matching = self.find_all_matching_parsers(desired_type=type_to_match, strict=strict)[0] return {ext for exts in [p.supported_exts for p in (matching[0] + matching[1] + matching[2])] for ext in exts}
def from_str(string, max_number=9, separator="."): """Parses string :param string: Version :param max_number: Max number reachable by sub :param separator: Version numbers are separated with this split :return: Parses string and returns object """ tokens = string.split(separator) tokens = list(reversed(tokens)) # reverse order of importance most_important = tokens[-1] # cannot be parsed like the others levels = [ Level(max_number, int(token)) for token in tokens[:-1] ] levels.append( Level(float("inf"), int(most_important)) ) return Subsystem(levels, separator)
Parses string :param string: Version :param max_number: Max number reachable by sub :param separator: Version numbers are separated with this split :return: Parses string and returns object
Below is the instruction that describes the task: ### Input: Parses string :param string: Version :param max_number: Max number reachable by sub :param separator: Version numbers are separated with this split :return: Parses string and returns object ### Response: def from_str(string, max_number=9, separator="."): """Parses string :param string: Version :param max_number: Max number reachable by sub :param separator: Version numbers are separated with this split :return: Parses string and returns object """ tokens = string.split(separator) tokens = list(reversed(tokens)) # reverse order of importance most_important = tokens[-1] # cannot be parsed like the others levels = [ Level(max_number, int(token)) for token in tokens[:-1] ] levels.append( Level(float("inf"), int(most_important)) ) return Subsystem(levels, separator)
def king(surface_tilt, dhi, ghi, solar_zenith): ''' Determine diffuse irradiance from the sky on a tilted surface using the King model. King's model determines the diffuse irradiance from the sky (ground reflected irradiance is not included in this algorithm) on a tilted surface using the surface tilt angle, diffuse horizontal irradiance, global horizontal irradiance, and sun zenith angle. Note that this model is not well documented and has not been published in any fashion (as of January 2012). Parameters ---------- surface_tilt : numeric Surface tilt angles in decimal degrees. The tilt angle is defined as degrees from horizontal (e.g. surface facing up = 0, surface facing horizon = 90) dhi : numeric Diffuse horizontal irradiance in W/m^2. ghi : numeric Global horizontal irradiance in W/m^2. solar_zenith : numeric Apparent (refraction-corrected) zenith angles in decimal degrees. Returns -------- poa_sky_diffuse : numeric The diffuse component of the solar radiation. ''' sky_diffuse = (dhi * ((1 + tools.cosd(surface_tilt))) / 2 + ghi * ((0.012 * solar_zenith - 0.04)) * ((1 - tools.cosd(surface_tilt))) / 2) sky_diffuse = np.maximum(sky_diffuse, 0) return sky_diffuse
Determine diffuse irradiance from the sky on a tilted surface using the King model. King's model determines the diffuse irradiance from the sky (ground reflected irradiance is not included in this algorithm) on a tilted surface using the surface tilt angle, diffuse horizontal irradiance, global horizontal irradiance, and sun zenith angle. Note that this model is not well documented and has not been published in any fashion (as of January 2012). Parameters ---------- surface_tilt : numeric Surface tilt angles in decimal degrees. The tilt angle is defined as degrees from horizontal (e.g. surface facing up = 0, surface facing horizon = 90) dhi : numeric Diffuse horizontal irradiance in W/m^2. ghi : numeric Global horizontal irradiance in W/m^2. solar_zenith : numeric Apparent (refraction-corrected) zenith angles in decimal degrees. Returns -------- poa_sky_diffuse : numeric The diffuse component of the solar radiation.
Below is the instruction that describes the task: ### Input: Determine diffuse irradiance from the sky on a tilted surface using the King model. King's model determines the diffuse irradiance from the sky (ground reflected irradiance is not included in this algorithm) on a tilted surface using the surface tilt angle, diffuse horizontal irradiance, global horizontal irradiance, and sun zenith angle. Note that this model is not well documented and has not been published in any fashion (as of January 2012). Parameters ---------- surface_tilt : numeric Surface tilt angles in decimal degrees. The tilt angle is defined as degrees from horizontal (e.g. surface facing up = 0, surface facing horizon = 90) dhi : numeric Diffuse horizontal irradiance in W/m^2. ghi : numeric Global horizontal irradiance in W/m^2. solar_zenith : numeric Apparent (refraction-corrected) zenith angles in decimal degrees. Returns -------- poa_sky_diffuse : numeric The diffuse component of the solar radiation. ### Response: def king(surface_tilt, dhi, ghi, solar_zenith): ''' Determine diffuse irradiance from the sky on a tilted surface using the King model. King's model determines the diffuse irradiance from the sky (ground reflected irradiance is not included in this algorithm) on a tilted surface using the surface tilt angle, diffuse horizontal irradiance, global horizontal irradiance, and sun zenith angle. Note that this model is not well documented and has not been published in any fashion (as of January 2012). Parameters ---------- surface_tilt : numeric Surface tilt angles in decimal degrees. The tilt angle is defined as degrees from horizontal (e.g. surface facing up = 0, surface facing horizon = 90) dhi : numeric Diffuse horizontal irradiance in W/m^2. ghi : numeric Global horizontal irradiance in W/m^2. solar_zenith : numeric Apparent (refraction-corrected) zenith angles in decimal degrees. Returns -------- poa_sky_diffuse : numeric The diffuse component of the solar radiation.
''' sky_diffuse = (dhi * ((1 + tools.cosd(surface_tilt))) / 2 + ghi * ((0.012 * solar_zenith - 0.04)) * ((1 - tools.cosd(surface_tilt))) / 2) sky_diffuse = np.maximum(sky_diffuse, 0) return sky_diffuse
def _split_into_chunks(self): """Split the code object into a list of `Chunk` objects. Each chunk is only entered at its first instruction, though there can be many exits from a chunk. Returns a list of `Chunk` objects. """ # The list of chunks so far, and the one we're working on. chunks = [] chunk = None # A dict mapping byte offsets of line starts to the line numbers. bytes_lines_map = dict(self._bytes_lines()) # The block stack: loops and try blocks get pushed here for the # implicit jumps that can occur. # Each entry is a tuple: (block type, destination) block_stack = [] # Some op codes are followed by branches that should be ignored. This # is a count of how many ignores are left. ignore_branch = 0 # We have to handle the last two bytecodes specially. ult = penult = None # Get a set of all of the jump-to points. jump_to = set() bytecodes = list(ByteCodes(self.code.co_code)) for bc in bytecodes: if bc.jump_to >= 0: jump_to.add(bc.jump_to) chunk_lineno = 0 # Walk the byte codes building chunks. for bc in bytecodes: # Maybe have to start a new chunk start_new_chunk = False first_chunk = False if bc.offset in bytes_lines_map: # Start a new chunk for each source line number. start_new_chunk = True chunk_lineno = bytes_lines_map[bc.offset] first_chunk = True elif bc.offset in jump_to: # To make chunks have a single entrance, we have to make a new # chunk when we get to a place some bytecode jumps to. start_new_chunk = True elif bc.op in OPS_CHUNK_BEGIN: # Jumps deserve their own unnumbered chunk. This fixes # problems with jumps to jumps getting confused. start_new_chunk = True if not chunk or start_new_chunk: if chunk: chunk.exits.add(bc.offset) chunk = Chunk(bc.offset, chunk_lineno, first_chunk) chunks.append(chunk) # Look at the opcode if bc.jump_to >= 0 and bc.op not in OPS_NO_JUMP: if ignore_branch: # Someone earlier wanted us to ignore this branch. ignore_branch -= 1 else: # The opcode has a jump, it's an exit for this chunk. 
chunk.exits.add(bc.jump_to) if bc.op in OPS_CODE_END: # The opcode can exit the code object. chunk.exits.add(-self.code.co_firstlineno) if bc.op in OPS_PUSH_BLOCK: # The opcode adds a block to the block_stack. block_stack.append((bc.op, bc.jump_to)) if bc.op in OPS_POP_BLOCK: # The opcode pops a block from the block stack. block_stack.pop() if bc.op in OPS_CHUNK_END: # This opcode forces the end of the chunk. if bc.op == OP_BREAK_LOOP: # A break is implicit: jump where the top of the # block_stack points. chunk.exits.add(block_stack[-1][1]) chunk = None if bc.op == OP_END_FINALLY: # For the finally clause we need to find the closest exception # block, and use its jump target as an exit. for block in reversed(block_stack): if block[0] in OPS_EXCEPT_BLOCKS: chunk.exits.add(block[1]) break if bc.op == OP_COMPARE_OP and bc.arg == COMPARE_EXCEPTION: # This is an except clause. We want to overlook the next # branch, so that except's don't count as branches. ignore_branch += 1 penult = ult ult = bc if chunks: # The last two bytecodes could be a dummy "return None" that # shouldn't be counted as real code. Every Python code object seems # to end with a return, and a "return None" is inserted if there # isn't an explicit return in the source. if ult and penult: if penult.op == OP_LOAD_CONST and ult.op == OP_RETURN_VALUE: if self.code.co_consts[penult.arg] is None: # This is "return None", but is it dummy? A real line # would be a last chunk all by itself. if chunks[-1].byte != penult.offset: ex = -self.code.co_firstlineno # Split the last chunk last_chunk = chunks[-1] last_chunk.exits.remove(ex) last_chunk.exits.add(penult.offset) chunk = Chunk( penult.offset, last_chunk.line, False ) chunk.exits.add(ex) chunks.append(chunk) # Give all the chunks a length. chunks[-1].length = bc.next_offset - chunks[-1].byte # pylint: disable=W0631,C0301 for i in range(len(chunks)-1): chunks[i].length = chunks[i+1].byte - chunks[i].byte #self.validate_chunks(chunks) return chunks
Split the code object into a list of `Chunk` objects. Each chunk is only entered at its first instruction, though there can be many exits from a chunk. Returns a list of `Chunk` objects.
Below is the the instruction that describes the task: ### Input: Split the code object into a list of `Chunk` objects. Each chunk is only entered at its first instruction, though there can be many exits from a chunk. Returns a list of `Chunk` objects. ### Response: def _split_into_chunks(self): """Split the code object into a list of `Chunk` objects. Each chunk is only entered at its first instruction, though there can be many exits from a chunk. Returns a list of `Chunk` objects. """ # The list of chunks so far, and the one we're working on. chunks = [] chunk = None # A dict mapping byte offsets of line starts to the line numbers. bytes_lines_map = dict(self._bytes_lines()) # The block stack: loops and try blocks get pushed here for the # implicit jumps that can occur. # Each entry is a tuple: (block type, destination) block_stack = [] # Some op codes are followed by branches that should be ignored. This # is a count of how many ignores are left. ignore_branch = 0 # We have to handle the last two bytecodes specially. ult = penult = None # Get a set of all of the jump-to points. jump_to = set() bytecodes = list(ByteCodes(self.code.co_code)) for bc in bytecodes: if bc.jump_to >= 0: jump_to.add(bc.jump_to) chunk_lineno = 0 # Walk the byte codes building chunks. for bc in bytecodes: # Maybe have to start a new chunk start_new_chunk = False first_chunk = False if bc.offset in bytes_lines_map: # Start a new chunk for each source line number. start_new_chunk = True chunk_lineno = bytes_lines_map[bc.offset] first_chunk = True elif bc.offset in jump_to: # To make chunks have a single entrance, we have to make a new # chunk when we get to a place some bytecode jumps to. start_new_chunk = True elif bc.op in OPS_CHUNK_BEGIN: # Jumps deserve their own unnumbered chunk. This fixes # problems with jumps to jumps getting confused. 
start_new_chunk = True if not chunk or start_new_chunk: if chunk: chunk.exits.add(bc.offset) chunk = Chunk(bc.offset, chunk_lineno, first_chunk) chunks.append(chunk) # Look at the opcode if bc.jump_to >= 0 and bc.op not in OPS_NO_JUMP: if ignore_branch: # Someone earlier wanted us to ignore this branch. ignore_branch -= 1 else: # The opcode has a jump, it's an exit for this chunk. chunk.exits.add(bc.jump_to) if bc.op in OPS_CODE_END: # The opcode can exit the code object. chunk.exits.add(-self.code.co_firstlineno) if bc.op in OPS_PUSH_BLOCK: # The opcode adds a block to the block_stack. block_stack.append((bc.op, bc.jump_to)) if bc.op in OPS_POP_BLOCK: # The opcode pops a block from the block stack. block_stack.pop() if bc.op in OPS_CHUNK_END: # This opcode forces the end of the chunk. if bc.op == OP_BREAK_LOOP: # A break is implicit: jump where the top of the # block_stack points. chunk.exits.add(block_stack[-1][1]) chunk = None if bc.op == OP_END_FINALLY: # For the finally clause we need to find the closest exception # block, and use its jump target as an exit. for block in reversed(block_stack): if block[0] in OPS_EXCEPT_BLOCKS: chunk.exits.add(block[1]) break if bc.op == OP_COMPARE_OP and bc.arg == COMPARE_EXCEPTION: # This is an except clause. We want to overlook the next # branch, so that except's don't count as branches. ignore_branch += 1 penult = ult ult = bc if chunks: # The last two bytecodes could be a dummy "return None" that # shouldn't be counted as real code. Every Python code object seems # to end with a return, and a "return None" is inserted if there # isn't an explicit return in the source. if ult and penult: if penult.op == OP_LOAD_CONST and ult.op == OP_RETURN_VALUE: if self.code.co_consts[penult.arg] is None: # This is "return None", but is it dummy? A real line # would be a last chunk all by itself. 
if chunks[-1].byte != penult.offset: ex = -self.code.co_firstlineno # Split the last chunk last_chunk = chunks[-1] last_chunk.exits.remove(ex) last_chunk.exits.add(penult.offset) chunk = Chunk( penult.offset, last_chunk.line, False ) chunk.exits.add(ex) chunks.append(chunk) # Give all the chunks a length. chunks[-1].length = bc.next_offset - chunks[-1].byte # pylint: disable=W0631,C0301 for i in range(len(chunks)-1): chunks[i].length = chunks[i+1].byte - chunks[i].byte #self.validate_chunks(chunks) return chunks
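The chunk-splitting pass above starts a new chunk at every source-line start and every jump target. That boundary computation can be sketched with the stdlib `dis` module; this is an illustrative sketch, not coverage.py's actual `Chunk` machinery:

```python
import dis


def chunk_starts(func):
    """Return sorted byte offsets where a new chunk must begin:
    every source-line start plus every jump target."""
    code = func.__code__
    # Offsets where a new source line starts (cf. bytes_lines_map above).
    starts = {offset for offset, _line in dis.findlinestarts(code)}
    # Offsets some other bytecode jumps to (cf. the jump_to set above).
    starts.update(i.offset for i in dis.get_instructions(code)
                  if i.is_jump_target)
    return sorted(starts)


def example(x):
    if x > 0:
        return x
    return -x


print(chunk_starts(example))
```

Each boundary is the `byte` of one chunk; the gap to the next boundary is its `length`, mirroring the final loop in `_split_into_chunks`.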
def parse_template(template):
    """returns a 2-tuple of (template_name, number_of_priors)"""
    m = TEMPLATE_OVERRIDE_RE.match(template)
    if not m:
        return template, 0
    return m.group('template'), int(m.group('depth'))
returns a 2-tuple of (template_name, number_of_priors)
Below is the instruction that describes the task:
### Input:
returns a 2-tuple of (template_name, number_of_priors)
### Response:
def parse_template(template):
    """returns a 2-tuple of (template_name, number_of_priors)"""
    m = TEMPLATE_OVERRIDE_RE.match(template)
    if not m:
        return template, 0
    return m.group('template'), int(m.group('depth'))
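`TEMPLATE_OVERRIDE_RE` is not shown in the record above, so the pattern below is an assumption chosen to make the function runnable (a name, a colon, then a depth number); only the parsing logic is taken from the source:

```python
import re

# Hypothetical pattern -- the real TEMPLATE_OVERRIDE_RE is not shown above.
TEMPLATE_OVERRIDE_RE = re.compile(r'^(?P<template>.+?):(?P<depth>\d+)$')


def parse_template(template):
    """returns a 2-tuple of (template_name, number_of_priors)"""
    m = TEMPLATE_OVERRIDE_RE.match(template)
    if not m:
        return template, 0
    return m.group('template'), int(m.group('depth'))


print(parse_template('base.html:2'))  # -> ('base.html', 2)
print(parse_template('base.html'))    # -> ('base.html', 0)
```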
def population(self):
    """Return population, N.

    Returns
    -------
    int
        The population (N) of the confusion table

    Example
    -------
    >>> ct = ConfusionTable(120, 60, 20, 30)
    >>> ct.population()
    230

    """
    return self._tp + self._tn + self._fp + self._fn
Return population, N.

Returns
-------
int
    The population (N) of the confusion table

Example
-------
>>> ct = ConfusionTable(120, 60, 20, 30)
>>> ct.population()
230
Below is the instruction that describes the task:
### Input:
Return population, N.

Returns
-------
int
    The population (N) of the confusion table

Example
-------
>>> ct = ConfusionTable(120, 60, 20, 30)
>>> ct.population()
230
### Response:
def population(self):
    """Return population, N.

    Returns
    -------
    int
        The population (N) of the confusion table

    Example
    -------
    >>> ct = ConfusionTable(120, 60, 20, 30)
    >>> ct.population()
    230

    """
    return self._tp + self._tn + self._fp + self._fn
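The doctest in the record can be reproduced with a minimal stand-in class holding the four cells of a 2x2 confusion table (the real `ConfusionTable` has more machinery than this sketch):

```python
class ConfusionTable:
    """Minimal stand-in holding the four cells of a confusion table."""

    def __init__(self, tp, tn, fp, fn):
        self._tp, self._tn, self._fp, self._fn = tp, tn, fp, fn

    def population(self):
        """Return population, N = TP + TN + FP + FN."""
        return self._tp + self._tn + self._fp + self._fn


ct = ConfusionTable(120, 60, 20, 30)
print(ct.population())  # -> 230
```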
def preformat_cache(cache, start=None, end=None):
    """Preprocess a `list` of file paths for reading.

    - read the cache from the file (if necessary)
    - sieve the cache to only include data we need

    Parameters
    ----------
    cache : `list`, `str`
        List of file paths, or path to a LAL-format cache file on disk.

    start : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional
        GPS start time of required data, defaults to start of data found;
        any input parseable by `~gwpy.time.to_gps` is fine.

    end : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional
        GPS end time of required data, defaults to end of data found;
        any input parseable by `~gwpy.time.to_gps` is fine.

    Returns
    -------
    modcache : `list`
        A parsed, sieved list of paths based on the input arguments.
    """
    # open cache file
    if isinstance(cache, FILE_LIKE + string_types):
        return read_cache(cache, sort=file_segment,
                          segment=Segment(start, end))

    # format existing cache file
    cache = type(cache)(cache)  # copy cache

    # sort cache
    try:
        cache.sort(key=file_segment)  # sort
    except ValueError:
        # if this failed, then the sieving will also fail, but let's proceed
        # anyway, since the user didn't actually ask us to do this (but
        # it's a very good idea)
        return cache

    # sieve cache
    if start is None:  # start time of earliest file
        start = file_segment(cache[0])[0]
    if end is None:  # end time of latest file
        end = file_segment(cache[-1])[-1]
    return sieve(cache, segment=Segment(start, end))
Preprocess a `list` of file paths for reading.

- read the cache from the file (if necessary)
- sieve the cache to only include data we need

Parameters
----------
cache : `list`, `str`
    List of file paths, or path to a LAL-format cache file on disk.

start : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional
    GPS start time of required data, defaults to start of data found;
    any input parseable by `~gwpy.time.to_gps` is fine.

end : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional
    GPS end time of required data, defaults to end of data found;
    any input parseable by `~gwpy.time.to_gps` is fine.

Returns
-------
modcache : `list`
    A parsed, sieved list of paths based on the input arguments.
Below is the instruction that describes the task:
### Input:
Preprocess a `list` of file paths for reading.

- read the cache from the file (if necessary)
- sieve the cache to only include data we need

Parameters
----------
cache : `list`, `str`
    List of file paths, or path to a LAL-format cache file on disk.

start : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional
    GPS start time of required data, defaults to start of data found;
    any input parseable by `~gwpy.time.to_gps` is fine.

end : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional
    GPS end time of required data, defaults to end of data found;
    any input parseable by `~gwpy.time.to_gps` is fine.

Returns
-------
modcache : `list`
    A parsed, sieved list of paths based on the input arguments.
### Response:
def preformat_cache(cache, start=None, end=None):
    """Preprocess a `list` of file paths for reading.

    - read the cache from the file (if necessary)
    - sieve the cache to only include data we need

    Parameters
    ----------
    cache : `list`, `str`
        List of file paths, or path to a LAL-format cache file on disk.

    start : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional
        GPS start time of required data, defaults to start of data found;
        any input parseable by `~gwpy.time.to_gps` is fine.

    end : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional
        GPS end time of required data, defaults to end of data found;
        any input parseable by `~gwpy.time.to_gps` is fine.

    Returns
    -------
    modcache : `list`
        A parsed, sieved list of paths based on the input arguments.
    """
    # open cache file
    if isinstance(cache, FILE_LIKE + string_types):
        return read_cache(cache, sort=file_segment,
                          segment=Segment(start, end))

    # format existing cache file
    cache = type(cache)(cache)  # copy cache

    # sort cache
    try:
        cache.sort(key=file_segment)  # sort
    except ValueError:
        # if this failed, then the sieving will also fail, but let's proceed
        # anyway, since the user didn't actually ask us to do this (but
        # it's a very good idea)
        return cache

    # sieve cache
    if start is None:  # start time of earliest file
        start = file_segment(cache[0])[0]
    if end is None:  # end time of latest file
        end = file_segment(cache[-1])[-1]
    return sieve(cache, segment=Segment(start, end))
def refresh(self, wait_for_active=False, retry_seconds=5):
    """
    Refresh all of the fields of the Table object by calling
    the underlying DescribeTable request.

    :type wait_for_active: bool
    :param wait_for_active: If True, this command will not return
        until the table status, as returned from Amazon DynamoDB, is
        'ACTIVE'.

    :type retry_seconds: int
    :param retry_seconds: If wait_for_active is True, this
        parameter controls the number of seconds of delay between
        calls to update_table in Amazon DynamoDB.  Default is 5 seconds.
    """
    done = False
    while not done:
        response = self.layer2.describe_table(self.name)
        self.update_from_response(response)
        if wait_for_active:
            if self.status == 'ACTIVE':
                done = True
            else:
                time.sleep(retry_seconds)
        else:
            done = True
Refresh all of the fields of the Table object by calling
the underlying DescribeTable request.

:type wait_for_active: bool
:param wait_for_active: If True, this command will not return
    until the table status, as returned from Amazon DynamoDB, is
    'ACTIVE'.

:type retry_seconds: int
:param retry_seconds: If wait_for_active is True, this
    parameter controls the number of seconds of delay between
    calls to update_table in Amazon DynamoDB.  Default is 5 seconds.
Below is the instruction that describes the task:
### Input:
Refresh all of the fields of the Table object by calling
the underlying DescribeTable request.

:type wait_for_active: bool
:param wait_for_active: If True, this command will not return
    until the table status, as returned from Amazon DynamoDB, is
    'ACTIVE'.

:type retry_seconds: int
:param retry_seconds: If wait_for_active is True, this
    parameter controls the number of seconds of delay between
    calls to update_table in Amazon DynamoDB.  Default is 5 seconds.
### Response:
def refresh(self, wait_for_active=False, retry_seconds=5):
    """
    Refresh all of the fields of the Table object by calling
    the underlying DescribeTable request.

    :type wait_for_active: bool
    :param wait_for_active: If True, this command will not return
        until the table status, as returned from Amazon DynamoDB, is
        'ACTIVE'.

    :type retry_seconds: int
    :param retry_seconds: If wait_for_active is True, this
        parameter controls the number of seconds of delay between
        calls to update_table in Amazon DynamoDB.  Default is 5 seconds.
    """
    done = False
    while not done:
        response = self.layer2.describe_table(self.name)
        self.update_from_response(response)
        if wait_for_active:
            if self.status == 'ACTIVE':
                done = True
            else:
                time.sleep(retry_seconds)
        else:
            done = True
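The poll-until-ACTIVE loop above is a general pattern. `poll_until_active` below is a hypothetical distillation of it: the `describe` callable stands in for `layer2.describe_table`, and the injectable `sleep` keeps the sketch testable without real delays:

```python
import itertools


def poll_until_active(describe, wait_for_active=False, retry_seconds=0,
                      sleep=lambda seconds: None):
    """Call `describe` until it reports 'ACTIVE' (or once, if
    wait_for_active is False). Returns (status, number_of_calls)."""
    for attempt in itertools.count():
        status = describe()
        if not wait_for_active or status == 'ACTIVE':
            return status, attempt + 1
        sleep(retry_seconds)  # back off between DescribeTable calls


# A fake DescribeTable that becomes ACTIVE on the third call:
responses = iter(['CREATING', 'CREATING', 'ACTIVE'])
print(poll_until_active(lambda: next(responses), wait_for_active=True))
# -> ('ACTIVE', 3)
```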
def set_config(self, config):
    """Update the component's configuration.

    Use the :py:meth:`get_config` method to get a copy of the
    component's configuration, update that copy then call
    :py:meth:`set_config` to update the component. This enables the
    configuration to be changed in a threadsafe manner while the
    component is running, and allows several values to be changed at
    once.

    :param ConfigParent config: New configuration.
    """
    # put copy of config on queue for running component
    self._configmixin_queue.append(copy.deepcopy(config))
    # notify component, using thread safe method
    self.new_config()
Update the component's configuration.

Use the :py:meth:`get_config` method to get a copy of the
component's configuration, update that copy then call
:py:meth:`set_config` to update the component. This enables the
configuration to be changed in a threadsafe manner while the
component is running, and allows several values to be changed at
once.

:param ConfigParent config: New configuration.
Below is the instruction that describes the task:
### Input:
Update the component's configuration.

Use the :py:meth:`get_config` method to get a copy of the
component's configuration, update that copy then call
:py:meth:`set_config` to update the component. This enables the
configuration to be changed in a threadsafe manner while the
component is running, and allows several values to be changed at
once.

:param ConfigParent config: New configuration.
### Response:
def set_config(self, config):
    """Update the component's configuration.

    Use the :py:meth:`get_config` method to get a copy of the
    component's configuration, update that copy then call
    :py:meth:`set_config` to update the component. This enables the
    configuration to be changed in a threadsafe manner while the
    component is running, and allows several values to be changed at
    once.

    :param ConfigParent config: New configuration.
    """
    # put copy of config on queue for running component
    self._configmixin_queue.append(copy.deepcopy(config))
    # notify component, using thread safe method
    self.new_config()
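The deep-copy-onto-a-queue handoff is what makes this threadsafe: the component thread only ever sees an isolated snapshot. A minimal sketch (using a plain dict where the real code uses a `ConfigParent`, and omitting the `new_config` notification):

```python
import copy
from collections import deque


class ConfigMixin:
    """Sketch of the threadsafe handoff: deep-copy the new config onto
    a queue; the component thread pops it when convenient."""

    def __init__(self):
        self._configmixin_queue = deque()

    def set_config(self, config):
        # put copy of config on queue for running component
        self._configmixin_queue.append(copy.deepcopy(config))


component = ConfigMixin()
cfg = {'rate': 25, 'size': [640, 480]}
component.set_config(cfg)
cfg['rate'] = 50  # later mutation by the caller...
print(component._configmixin_queue[0]['rate'])  # -> 25: queued copy is isolated
```

Without the `deepcopy`, the caller's later mutation would race with the component reading the same object.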
def batch_size(self, batch_size):
    """Limits the number of documents returned in one batch. Each batch
    requires a round trip to the server. It can be adjusted to optimize
    performance and limit data transfer.

    .. note:: batch_size can not override MongoDB's internal limits on
       the amount of data it will return to the client in a single batch
       (i.e if you set batch size to 1,000,000,000, MongoDB will
       currently only return 4-16MB of results per batch).

    Raises :exc:`TypeError` if `batch_size` is not an integer.
    Raises :exc:`ValueError` if `batch_size` is less than ``0``.

    :Parameters:
      - `batch_size`: The size of each batch of results requested.
    """
    if not isinstance(batch_size, integer_types):
        raise TypeError("batch_size must be an integer")
    if batch_size < 0:
        raise ValueError("batch_size must be >= 0")

    self.__batch_size = batch_size == 1 and 2 or batch_size
    return self
Limits the number of documents returned in one batch. Each batch
requires a round trip to the server. It can be adjusted to optimize
performance and limit data transfer.

.. note:: batch_size can not override MongoDB's internal limits on
   the amount of data it will return to the client in a single batch
   (i.e if you set batch size to 1,000,000,000, MongoDB will
   currently only return 4-16MB of results per batch).

Raises :exc:`TypeError` if `batch_size` is not an integer.
Raises :exc:`ValueError` if `batch_size` is less than ``0``.

:Parameters:
  - `batch_size`: The size of each batch of results requested.
Below is the instruction that describes the task:
### Input:
Limits the number of documents returned in one batch. Each batch
requires a round trip to the server. It can be adjusted to optimize
performance and limit data transfer.

.. note:: batch_size can not override MongoDB's internal limits on
   the amount of data it will return to the client in a single batch
   (i.e if you set batch size to 1,000,000,000, MongoDB will
   currently only return 4-16MB of results per batch).

Raises :exc:`TypeError` if `batch_size` is not an integer.
Raises :exc:`ValueError` if `batch_size` is less than ``0``.

:Parameters:
  - `batch_size`: The size of each batch of results requested.
### Response:
def batch_size(self, batch_size):
    """Limits the number of documents returned in one batch. Each batch
    requires a round trip to the server. It can be adjusted to optimize
    performance and limit data transfer.

    .. note:: batch_size can not override MongoDB's internal limits on
       the amount of data it will return to the client in a single batch
       (i.e if you set batch size to 1,000,000,000, MongoDB will
       currently only return 4-16MB of results per batch).

    Raises :exc:`TypeError` if `batch_size` is not an integer.
    Raises :exc:`ValueError` if `batch_size` is less than ``0``.

    :Parameters:
      - `batch_size`: The size of each batch of results requested.
    """
    if not isinstance(batch_size, integer_types):
        raise TypeError("batch_size must be an integer")
    if batch_size < 0:
        raise ValueError("batch_size must be >= 0")

    self.__batch_size = batch_size == 1 and 2 or batch_size
    return self
def get_transcript_preferences(course_id):
    """
    Retrieves course wide transcript preferences

    Arguments:
        course_id (str): course id
    """
    try:
        transcript_preference = TranscriptPreference.objects.get(course_id=course_id)
    except TranscriptPreference.DoesNotExist:
        return

    return TranscriptPreferenceSerializer(transcript_preference).data
Retrieves course wide transcript preferences

Arguments:
    course_id (str): course id
Below is the the instruction that describes the task: ### Input: Retrieves course wide transcript preferences Arguments: course_id (str): course id ### Response: def get_transcript_preferences(course_id): """ Retrieves course wide transcript preferences Arguments: course_id (str): course id """ try: transcript_preference = TranscriptPreference.objects.get(course_id=course_id) except TranscriptPreference.DoesNotExist: return return TranscriptPreferenceSerializer(transcript_preference).data
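The shape here is "look up, swallow not-found, return None". Without Django, the same contract can be shown with a dict lookup standing in for the ORM query (`prefs`, the course id, and the `'provider'`/`'languages'` fields are all made-up illustration data):

```python
# Made-up store standing in for TranscriptPreference.objects.
prefs = {'course-v1:edX+Demo': {'provider': '3play', 'languages': ['en']}}


def get_transcript_preferences(course_id):
    """Same contract as above: return serialized data, or None when
    nothing matches (KeyError stands in for DoesNotExist)."""
    try:
        return dict(prefs[course_id])  # serializer stand-in: a plain copy
    except KeyError:
        return None


print(get_transcript_preferences('course-v1:edX+Demo'))
print(get_transcript_preferences('missing'))  # -> None
```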
def _coerce_state_from_string(value: Union[int, str]) -> str:
    """Return a proper state from a string input."""
    try:
        return STATE_MAP[int(value)]
    except KeyError:
        _LOGGER.error('Unknown state: %s', value)
        return STATE_UNKNOWN
Return a proper state from a string input.
Below is the instruction that describes the task:
### Input:
Return a proper state from a string input.
### Response:
def _coerce_state_from_string(value: Union[int, str]) -> str:
    """Return a proper state from a string input."""
    try:
        return STATE_MAP[int(value)]
    except KeyError:
        _LOGGER.error('Unknown state: %s', value)
        return STATE_UNKNOWN
def decode(code, *, max_age):
    """decode(code, *, max_age)

    Decodes the code from the registration link and returns a tuple
    consisting of the verified email address and the payload which was
    passed through to ``get_confirmation_code``.

    The maximum age in seconds of the link has to be specified as
    ``max_age``.

    This method raises ``ValidationError`` exceptions when anything goes
    wrong when verifying the signature or the expiry timeout.
    """
    try:
        data = get_signer().unsign(code, max_age=max_age)
    except signing.SignatureExpired:
        raise ValidationError(
            _("The link is expired. Please request another registration link."),
            code="email_registration_expired",
        )
    except signing.BadSignature:
        raise ValidationError(
            _(
                "Unable to verify the signature. Please request a new"
                " registration link."
            ),
            code="email_registration_signature",
        )
    return data.split(":", 1)
decode(code, *, max_age)

Decodes the code from the registration link and returns a tuple
consisting of the verified email address and the payload which was
passed through to ``get_confirmation_code``.

The maximum age in seconds of the link has to be specified as
``max_age``.

This method raises ``ValidationError`` exceptions when anything goes
wrong when verifying the signature or the expiry timeout.
Below is the instruction that describes the task:
### Input:
decode(code, *, max_age)

Decodes the code from the registration link and returns a tuple
consisting of the verified email address and the payload which was
passed through to ``get_confirmation_code``.

The maximum age in seconds of the link has to be specified as
``max_age``.

This method raises ``ValidationError`` exceptions when anything goes
wrong when verifying the signature or the expiry timeout.
### Response:
def decode(code, *, max_age):
    """decode(code, *, max_age)

    Decodes the code from the registration link and returns a tuple
    consisting of the verified email address and the payload which was
    passed through to ``get_confirmation_code``.

    The maximum age in seconds of the link has to be specified as
    ``max_age``.

    This method raises ``ValidationError`` exceptions when anything goes
    wrong when verifying the signature or the expiry timeout.
    """
    try:
        data = get_signer().unsign(code, max_age=max_age)
    except signing.SignatureExpired:
        raise ValidationError(
            _("The link is expired. Please request another registration link."),
            code="email_registration_expired",
        )
    except signing.BadSignature:
        raise ValidationError(
            _(
                "Unable to verify the signature. Please request a new"
                " registration link."
            ),
            code="email_registration_signature",
        )
    return data.split(":", 1)
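The real code delegates to Django's `signing` module; to show the sign/unsign-with-`max_age` contract self-contained, here is a stdlib `hmac` stand-in. The secret, payload layout, and error messages are illustrative only, not Django's actual format:

```python
import base64
import hashlib
import hmac
import time

SECRET = b'not-the-real-django-secret-key'  # illustrative only


def sign(value: str) -> str:
    """Append a timestamp, then an HMAC over value+timestamp."""
    payload = '%s:%d' % (value, int(time.time()))
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()
    return payload + ':' + base64.urlsafe_b64encode(mac).decode()


def unsign(code: str, max_age: float) -> str:
    """Verify the HMAC, then the age; raise ValueError on either."""
    payload, sig = code.rsplit(':', 1)
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64encode(mac).decode(), sig):
        raise ValueError('bad signature')
    value, ts = payload.rsplit(':', 1)
    if time.time() - int(ts) > max_age:
        raise ValueError('expired')
    return value


code = sign('user@example.com:payload')
print(unsign(code, max_age=3600).split(':', 1))
# -> ['user@example.com', 'payload']
```

The final `split(':', 1)` mirrors `decode`'s return value: email first, pass-through payload second.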
def simDeath(self):
    '''
    Randomly determine which consumers die, and distribute their wealth among the survivors.
    This method only works if there is only one period in the cycle.

    Parameters
    ----------
    None

    Returns
    -------
    who_dies : np.array(bool)
        Boolean array of size AgentCount indicating which agents die.
    '''
    # Divide agents into wealth groups, kill one random agent per wealth group
    # order = np.argsort(self.aLvlNow)
    # how_many_die = int(self.AgentCount*(1.0-self.LivPrb[0]))
    # group_size = self.AgentCount/how_many_die # This should be an integer
    # base_idx = self.RNG.randint(0,group_size,size=how_many_die)
    # kill_by_rank = np.arange(how_many_die,dtype=int)*group_size + base_idx
    # who_dies = np.zeros(self.AgentCount,dtype=bool)
    # who_dies[order[kill_by_rank]] = True

    # Just select a random set of agents to die
    how_many_die = int(round(self.AgentCount*(1.0-self.LivPrb[0])))
    base_bool = np.zeros(self.AgentCount,dtype=bool)
    base_bool[0:how_many_die] = True
    who_dies = self.RNG.permutation(base_bool)
    if self.T_age is not None:
        who_dies[self.t_age >= self.T_age] = True

    # Divide up the wealth of those who die, giving it to those who survive
    who_lives = np.logical_not(who_dies)
    wealth_living = np.sum(self.aLvlNow[who_lives])
    wealth_dead = np.sum(self.aLvlNow[who_dies])
    Ractuarial = 1.0 + wealth_dead/wealth_living
    self.aNrmNow[who_lives] = self.aNrmNow[who_lives]*Ractuarial
    self.aLvlNow[who_lives] = self.aLvlNow[who_lives]*Ractuarial
    return who_dies
Randomly determine which consumers die, and distribute their wealth among the survivors.
This method only works if there is only one period in the cycle.

Parameters
----------
None

Returns
-------
who_dies : np.array(bool)
    Boolean array of size AgentCount indicating which agents die.
Below is the instruction that describes the task:
### Input:
Randomly determine which consumers die, and distribute their wealth among the survivors.
This method only works if there is only one period in the cycle.

Parameters
----------
None

Returns
-------
who_dies : np.array(bool)
    Boolean array of size AgentCount indicating which agents die.
### Response:
def simDeath(self):
    '''
    Randomly determine which consumers die, and distribute their wealth among the survivors.
    This method only works if there is only one period in the cycle.

    Parameters
    ----------
    None

    Returns
    -------
    who_dies : np.array(bool)
        Boolean array of size AgentCount indicating which agents die.
    '''
    # Divide agents into wealth groups, kill one random agent per wealth group
    # order = np.argsort(self.aLvlNow)
    # how_many_die = int(self.AgentCount*(1.0-self.LivPrb[0]))
    # group_size = self.AgentCount/how_many_die # This should be an integer
    # base_idx = self.RNG.randint(0,group_size,size=how_many_die)
    # kill_by_rank = np.arange(how_many_die,dtype=int)*group_size + base_idx
    # who_dies = np.zeros(self.AgentCount,dtype=bool)
    # who_dies[order[kill_by_rank]] = True

    # Just select a random set of agents to die
    how_many_die = int(round(self.AgentCount*(1.0-self.LivPrb[0])))
    base_bool = np.zeros(self.AgentCount,dtype=bool)
    base_bool[0:how_many_die] = True
    who_dies = self.RNG.permutation(base_bool)
    if self.T_age is not None:
        who_dies[self.t_age >= self.T_age] = True

    # Divide up the wealth of those who die, giving it to those who survive
    who_lives = np.logical_not(who_dies)
    wealth_living = np.sum(self.aLvlNow[who_lives])
    wealth_dead = np.sum(self.aLvlNow[who_dies])
    Ractuarial = 1.0 + wealth_dead/wealth_living
    self.aNrmNow[who_lives] = self.aNrmNow[who_lives]*Ractuarial
    self.aLvlNow[who_lives] = self.aLvlNow[who_lives]*Ractuarial
    return who_dies
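The key invariant in `simDeath` is that aggregate wealth is conserved: survivors' wealth is scaled by `Ractuarial = 1 + wealth_dead/wealth_living`, which redistributes exactly the decedents' total. A free-standing sketch (function and argument names are simplified stand-ins for the method's attributes) that makes the invariant checkable:

```python
import numpy as np


def sim_death(a_lvl, liv_prb, rng):
    """Kill a random (1 - liv_prb) share of agents and scale survivors'
    wealth by the actuarial factor so aggregate wealth is conserved."""
    n = len(a_lvl)
    how_many_die = int(round(n * (1.0 - liv_prb)))
    who_dies = np.zeros(n, dtype=bool)
    who_dies[:how_many_die] = True
    who_dies = rng.permutation(who_dies)   # random draw of who dies
    who_lives = ~who_dies

    # Redistribute the decedents' wealth to survivors.
    ractuarial = 1.0 + a_lvl[who_dies].sum() / a_lvl[who_lives].sum()
    a_lvl = a_lvl.copy()
    a_lvl[who_lives] *= ractuarial
    a_lvl[who_dies] = 0.0
    return a_lvl, who_dies


rng = np.random.default_rng(0)
wealth = rng.uniform(1.0, 5.0, size=100)
new_wealth, who_dies = sim_death(wealth, liv_prb=0.98, rng=rng)
print(np.isclose(new_wealth.sum(), wealth.sum()))  # True: wealth conserved
```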
def unpack_rsp(cls, rsp_pb): """Convert from PLS response to user response""" ret_type = rsp_pb.retType ret_msg = rsp_pb.retMsg if ret_type != RET_OK: return RET_ERROR, ret_msg, None raw_snapshot_list = rsp_pb.s2c.snapshotList snapshot_list = [] for record in raw_snapshot_list: snapshot_tmp = {} snapshot_tmp['code'] = merge_qot_mkt_stock_str( int(record.basic.security.market), record.basic.security.code) snapshot_tmp['update_time'] = record.basic.updateTime snapshot_tmp['last_price'] = record.basic.curPrice snapshot_tmp['open_price'] = record.basic.openPrice snapshot_tmp['high_price'] = record.basic.highPrice snapshot_tmp['low_price'] = record.basic.lowPrice snapshot_tmp['prev_close_price'] = record.basic.lastClosePrice snapshot_tmp['volume'] = record.basic.volume snapshot_tmp['turnover'] = record.basic.turnover snapshot_tmp['turnover_rate'] = record.basic.turnoverRate snapshot_tmp['suspension'] = record.basic.isSuspend snapshot_tmp['listing_date'] = "N/A" if record.HasField('optionExData') else record.basic.listTime snapshot_tmp['price_spread'] = record.basic.priceSpread snapshot_tmp['lot_size'] = record.basic.lotSize snapshot_tmp['equity_valid'] = False # equityExData if record.HasField('equityExData'): snapshot_tmp['equity_valid'] = True snapshot_tmp[ 'issued_shares'] = record.equityExData.issuedShares snapshot_tmp[ 'total_market_val'] = record.equityExData.issuedMarketVal snapshot_tmp['net_asset'] = record.equityExData.netAsset snapshot_tmp['net_profit'] = record.equityExData.netProfit snapshot_tmp[ 'earning_per_share'] = record.equityExData.earningsPershare snapshot_tmp[ 'outstanding_shares'] = record.equityExData.outstandingShares snapshot_tmp[ 'circular_market_val'] = record.equityExData.outstandingMarketVal snapshot_tmp[ 'net_asset_per_share'] = record.equityExData.netAssetPershare snapshot_tmp['ey_ratio'] = record.equityExData.eyRate snapshot_tmp['pe_ratio'] = record.equityExData.peRate snapshot_tmp['pb_ratio'] = record.equityExData.pbRate 
snapshot_tmp['pe_ttm_ratio'] = record.equityExData.peTTMRate snapshot_tmp['wrt_valid'] = False if record.basic.type == SEC_TYPE_MAP[SecurityType.WARRANT]: snapshot_tmp['wrt_valid'] = True snapshot_tmp[ 'wrt_conversion_ratio'] = record.warrantExData.conversionRate snapshot_tmp['wrt_type'] = QUOTE.REV_WRT_TYPE_MAP[ record.warrantExData.warrantType] snapshot_tmp[ 'wrt_strike_price'] = record.warrantExData.strikePrice snapshot_tmp[ 'wrt_maturity_date'] = record.warrantExData.maturityTime snapshot_tmp[ 'wrt_end_trade'] = record.warrantExData.endTradeTime snapshot_tmp['stock_owner'] = merge_qot_mkt_stock_str( record.warrantExData.owner.market, record.warrantExData.owner.code) snapshot_tmp[ 'wrt_recovery_price'] = record.warrantExData.recoveryPrice snapshot_tmp[ 'wrt_street_vol'] = record.warrantExData.streetVolumn snapshot_tmp[ 'wrt_issue_vol'] = record.warrantExData.issueVolumn snapshot_tmp[ 'wrt_street_ratio'] = record.warrantExData.streetRate snapshot_tmp['wrt_delta'] = record.warrantExData.delta snapshot_tmp[ 'wrt_implied_volatility'] = record.warrantExData.impliedVolatility snapshot_tmp['wrt_premium'] = record.warrantExData.premium snapshot_tmp['option_valid'] = False if record.basic.type == SEC_TYPE_MAP[SecurityType.DRVT]: snapshot_tmp['option_valid'] = True snapshot_tmp[ 'option_type'] = QUOTE.REV_OPTION_TYPE_CLASS_MAP[record.optionExData.type] snapshot_tmp['stock_owner'] = merge_qot_mkt_stock_str( record.optionExData.owner.market, record.optionExData.owner.code) snapshot_tmp[ 'strike_time'] = record.optionExData.strikeTime snapshot_tmp[ 'option_strike_price'] = record.optionExData.strikePrice snapshot_tmp[ 'option_contract_size'] = record.optionExData.contractSize snapshot_tmp[ 'option_open_interest'] = record.optionExData.openInterest snapshot_tmp['option_implied_volatility'] = record.optionExData.impliedVolatility snapshot_tmp[ 'option_premium'] = record.optionExData.premium snapshot_tmp[ 'option_delta'] = record.optionExData.delta snapshot_tmp[ 'option_gamma'] 
= record.optionExData.gamma snapshot_tmp[ 'option_vega'] = record.optionExData.vega snapshot_tmp['option_theta'] = record.optionExData.theta snapshot_tmp['option_rho'] = record.optionExData.rho else: pass snapshot_list.append(snapshot_tmp) return RET_OK, "", snapshot_list
Convert from PLS response to user response
Below is the instruction that describes the task:
### Input:
Convert from PLS response to user response
### Response:
def unpack_rsp(cls, rsp_pb):
    """Convert from PLS response to user response"""
    ret_type = rsp_pb.retType
    ret_msg = rsp_pb.retMsg

    if ret_type != RET_OK:
        return RET_ERROR, ret_msg, None

    raw_snapshot_list = rsp_pb.s2c.snapshotList

    snapshot_list = []
    for record in raw_snapshot_list:
        snapshot_tmp = {}
        snapshot_tmp['code'] = merge_qot_mkt_stock_str(
            int(record.basic.security.market), record.basic.security.code)
        snapshot_tmp['update_time'] = record.basic.updateTime
        snapshot_tmp['last_price'] = record.basic.curPrice
        snapshot_tmp['open_price'] = record.basic.openPrice
        snapshot_tmp['high_price'] = record.basic.highPrice
        snapshot_tmp['low_price'] = record.basic.lowPrice
        snapshot_tmp['prev_close_price'] = record.basic.lastClosePrice
        snapshot_tmp['volume'] = record.basic.volume
        snapshot_tmp['turnover'] = record.basic.turnover
        snapshot_tmp['turnover_rate'] = record.basic.turnoverRate
        snapshot_tmp['suspension'] = record.basic.isSuspend
        snapshot_tmp['listing_date'] = "N/A" if record.HasField('optionExData') else record.basic.listTime
        snapshot_tmp['price_spread'] = record.basic.priceSpread
        snapshot_tmp['lot_size'] = record.basic.lotSize

        snapshot_tmp['equity_valid'] = False
        # equityExData
        if record.HasField('equityExData'):
            snapshot_tmp['equity_valid'] = True
            snapshot_tmp['issued_shares'] = record.equityExData.issuedShares
            snapshot_tmp['total_market_val'] = record.equityExData.issuedMarketVal
            snapshot_tmp['net_asset'] = record.equityExData.netAsset
            snapshot_tmp['net_profit'] = record.equityExData.netProfit
            snapshot_tmp['earning_per_share'] = record.equityExData.earningsPershare
            snapshot_tmp['outstanding_shares'] = record.equityExData.outstandingShares
            snapshot_tmp['circular_market_val'] = record.equityExData.outstandingMarketVal
            snapshot_tmp['net_asset_per_share'] = record.equityExData.netAssetPershare
            snapshot_tmp['ey_ratio'] = record.equityExData.eyRate
            snapshot_tmp['pe_ratio'] = record.equityExData.peRate
            snapshot_tmp['pb_ratio'] = record.equityExData.pbRate
            snapshot_tmp['pe_ttm_ratio'] = record.equityExData.peTTMRate

        snapshot_tmp['wrt_valid'] = False
        if record.basic.type == SEC_TYPE_MAP[SecurityType.WARRANT]:
            snapshot_tmp['wrt_valid'] = True
            snapshot_tmp['wrt_conversion_ratio'] = record.warrantExData.conversionRate
            snapshot_tmp['wrt_type'] = QUOTE.REV_WRT_TYPE_MAP[record.warrantExData.warrantType]
            snapshot_tmp['wrt_strike_price'] = record.warrantExData.strikePrice
            snapshot_tmp['wrt_maturity_date'] = record.warrantExData.maturityTime
            snapshot_tmp['wrt_end_trade'] = record.warrantExData.endTradeTime
            snapshot_tmp['stock_owner'] = merge_qot_mkt_stock_str(
                record.warrantExData.owner.market, record.warrantExData.owner.code)
            snapshot_tmp['wrt_recovery_price'] = record.warrantExData.recoveryPrice
            snapshot_tmp['wrt_street_vol'] = record.warrantExData.streetVolumn
            snapshot_tmp['wrt_issue_vol'] = record.warrantExData.issueVolumn
            snapshot_tmp['wrt_street_ratio'] = record.warrantExData.streetRate
            snapshot_tmp['wrt_delta'] = record.warrantExData.delta
            snapshot_tmp['wrt_implied_volatility'] = record.warrantExData.impliedVolatility
            snapshot_tmp['wrt_premium'] = record.warrantExData.premium

        snapshot_tmp['option_valid'] = False
        if record.basic.type == SEC_TYPE_MAP[SecurityType.DRVT]:
            snapshot_tmp['option_valid'] = True
            snapshot_tmp['option_type'] = QUOTE.REV_OPTION_TYPE_CLASS_MAP[record.optionExData.type]
            snapshot_tmp['stock_owner'] = merge_qot_mkt_stock_str(
                record.optionExData.owner.market, record.optionExData.owner.code)
            snapshot_tmp['strike_time'] = record.optionExData.strikeTime
            snapshot_tmp['option_strike_price'] = record.optionExData.strikePrice
            snapshot_tmp['option_contract_size'] = record.optionExData.contractSize
            snapshot_tmp['option_open_interest'] = record.optionExData.openInterest
            snapshot_tmp['option_implied_volatility'] = record.optionExData.impliedVolatility
            snapshot_tmp['option_premium'] = record.optionExData.premium
            snapshot_tmp['option_delta'] = record.optionExData.delta
            snapshot_tmp['option_gamma'] = record.optionExData.gamma
            snapshot_tmp['option_vega'] = record.optionExData.vega
            snapshot_tmp['option_theta'] = record.optionExData.theta
            snapshot_tmp['option_rho'] = record.optionExData.rho
        else:
            pass

        snapshot_list.append(snapshot_tmp)

    return RET_OK, "", snapshot_list
def get_frameshift_lengths(num_bins):
    """Simple function that returns the lengths for each frameshift category
    if `num_bins` number of frameshift categories are requested.
    """
    fs_len = []
    i = 1
    tmp_bins = 0
    while tmp_bins < num_bins:
        if i % 3:
            fs_len.append(i)
            tmp_bins += 1
        i += 1
    return fs_len
Simple function that returns the lengths for each frameshift category if `num_bins` number of frameshift categories are requested.
Below is the instruction that describes the task:
### Input:
Simple function that returns the lengths for each frameshift category if `num_bins` number of frameshift categories are requested.
### Response:
def get_frameshift_lengths(num_bins):
    """Simple function that returns the lengths for each frameshift category
    if `num_bins` number of frameshift categories are requested.
    """
    fs_len = []
    i = 1
    tmp_bins = 0
    while tmp_bins < num_bins:
        if i % 3:
            fs_len.append(i)
            tmp_bins += 1
        i += 1
    return fs_len
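The entry above is self-contained, so its behavior can be sanity-checked directly. A minimal sketch reproducing the function and confirming that indel lengths divisible by 3 (in-frame, not frameshifts) are skipped:

```python
def get_frameshift_lengths(num_bins):
    """Return indel lengths for `num_bins` frameshift categories."""
    fs_len = []
    i = 1
    tmp_bins = 0
    while tmp_bins < num_bins:
        if i % 3:  # lengths divisible by 3 are in-frame, so they are excluded
            fs_len.append(i)
            tmp_bins += 1
        i += 1
    return fs_len

print(get_frameshift_lengths(4))  # → [1, 2, 4, 5] (3 is skipped)
```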
def do_verify(marfile, keyfiles=None):
    """Verify the MAR file."""
    try:
        with open(marfile, 'rb') as f:
            with MarReader(f) as m:
                # Check various parts of the mar file
                # e.g. signature algorithms and additional block sections
                errors = m.get_errors()
                if errors:
                    print("File is not well formed: {}".format(errors))
                    sys.exit(1)

                if keyfiles:
                    try:
                        keys = get_keys(keyfiles, m.signature_type)
                    except ValueError as e:
                        print(e)
                        sys.exit(1)

                    if any(m.verify(key) for key in keys):
                        print("Verification OK")
                        return True
                    else:
                        print("Verification failed")
                        sys.exit(1)
                else:
                    print("Verification OK")
                    return True
    except Exception as e:
        print("Error opening or parsing file: {}".format(e))
        sys.exit(1)
Verify the MAR file.
Below is the instruction that describes the task:
### Input:
Verify the MAR file.
### Response:
def do_verify(marfile, keyfiles=None):
    """Verify the MAR file."""
    try:
        with open(marfile, 'rb') as f:
            with MarReader(f) as m:
                # Check various parts of the mar file
                # e.g. signature algorithms and additional block sections
                errors = m.get_errors()
                if errors:
                    print("File is not well formed: {}".format(errors))
                    sys.exit(1)

                if keyfiles:
                    try:
                        keys = get_keys(keyfiles, m.signature_type)
                    except ValueError as e:
                        print(e)
                        sys.exit(1)

                    if any(m.verify(key) for key in keys):
                        print("Verification OK")
                        return True
                    else:
                        print("Verification failed")
                        sys.exit(1)
                else:
                    print("Verification OK")
                    return True
    except Exception as e:
        print("Error opening or parsing file: {}".format(e))
        sys.exit(1)
def create_node(self, *args, **kwargs):
    """
    Creates a new VPCS VM.

    :returns: VPCSVM instance
    """
    node = yield from super().create_node(*args, **kwargs)
    self._free_mac_ids.setdefault(node.project.id, list(range(0, 255)))
    try:
        self._used_mac_ids[node.id] = self._free_mac_ids[node.project.id].pop(0)
    except IndexError:
        raise VPCSError("Cannot create a new VPCS VM (limit of 255 VMs reached on this host)")
    return node
Creates a new VPCS VM.

:returns: VPCSVM instance
Below is the instruction that describes the task:
### Input:
Creates a new VPCS VM.

:returns: VPCSVM instance
### Response:
def create_node(self, *args, **kwargs):
    """
    Creates a new VPCS VM.

    :returns: VPCSVM instance
    """
    node = yield from super().create_node(*args, **kwargs)
    self._free_mac_ids.setdefault(node.project.id, list(range(0, 255)))
    try:
        self._used_mac_ids[node.id] = self._free_mac_ids[node.project.id].pop(0)
    except IndexError:
        raise VPCSError("Cannot create a new VPCS VM (limit of 255 VMs reached on this host)")
    return node
def _waitForIP(cls, instance):
    """
    Wait until the instance has a public IP address assigned to it.

    :type instance: boto.ec2.instance.Instance
    """
    logger.debug('Waiting for ip...')
    while True:
        time.sleep(a_short_time)
        instance.update()
        if instance.ip_address or instance.public_dns_name or instance.private_ip_address:
            logger.debug('...got ip')
            break
Wait until the instance has a public IP address assigned to it.

:type instance: boto.ec2.instance.Instance
Below is the instruction that describes the task:
### Input:
Wait until the instance has a public IP address assigned to it.

:type instance: boto.ec2.instance.Instance
### Response:
def _waitForIP(cls, instance):
    """
    Wait until the instance has a public IP address assigned to it.

    :type instance: boto.ec2.instance.Instance
    """
    logger.debug('Waiting for ip...')
    while True:
        time.sleep(a_short_time)
        instance.update()
        if instance.ip_address or instance.public_dns_name or instance.private_ip_address:
            logger.debug('...got ip')
            break
def add_response_headers(self, headers, **overrides):
    """Adds the specified response headers while keeping existing ones intact"""
    response_headers = self.route.get('response_headers', {}).copy()
    response_headers.update(headers)
    return self.where(response_headers=response_headers, **overrides)
Adds the specified response headers while keeping existing ones intact
Below is the instruction that describes the task:
### Input:
Adds the specified response headers while keeping existing ones intact
### Response:
def add_response_headers(self, headers, **overrides):
    """Adds the specified response headers while keeping existing ones intact"""
    response_headers = self.route.get('response_headers', {}).copy()
    response_headers.update(headers)
    return self.where(response_headers=response_headers, **overrides)
def _generate_footer(notebook_object, notebook_type):
    """
    Internal function that is used for generation of the notebooks footer.

    ----------
    Parameters
    ----------
    notebook_object : notebook object
        Object of "notebook" class where the header will be created.

    notebook_type : str
        Notebook type:
        - "Main_Files_Signal_Samples"
        - "Main_Files_By_Category"
        - "Main_Files_By_Difficulty"
        - "Main_Files_By_Tag"
        - "Load"
        - "Record"
        - "Visualise"
        - "Pre-Process"
        - "Detect"
        - "Extract"
        - "Train_and_Classify"
        - "Understand"
        - "Evaluate"
    """
    footer_aux = FOOTER
    if "Main_Files" in notebook_type:
        footer_aux = footer_aux.replace("../MainFiles/", "")

    # ================ Insertion of the div reserved to the Notebook Description ===================
    notebook_object["cells"].append(nb.v4.new_markdown_cell(footer_aux, **{"metadata": {"tags": ["footer"]}}))

    # ========== Code segment for application of the biosignalsnotebooks CSS style ===========
    notebook_object["cells"].append(nb.v4.new_markdown_cell(AUX_CODE_MESSAGE, **{"metadata": {"tags": ["hide_mark"]}}))
    notebook_object["cells"].append(nb.v4.new_code_cell(CSS_STYLE_CODE, **{"metadata": {"tags": ["hide_both"]}}))
Internal function that is used for generation of the notebooks footer.

----------
Parameters
----------
notebook_object : notebook object
    Object of "notebook" class where the header will be created.

notebook_type : str
    Notebook type:
    - "Main_Files_Signal_Samples"
    - "Main_Files_By_Category"
    - "Main_Files_By_Difficulty"
    - "Main_Files_By_Tag"
    - "Load"
    - "Record"
    - "Visualise"
    - "Pre-Process"
    - "Detect"
    - "Extract"
    - "Train_and_Classify"
    - "Understand"
    - "Evaluate"
Below is the instruction that describes the task:
### Input:
Internal function that is used for generation of the notebooks footer.

----------
Parameters
----------
notebook_object : notebook object
    Object of "notebook" class where the header will be created.

notebook_type : str
    Notebook type:
    - "Main_Files_Signal_Samples"
    - "Main_Files_By_Category"
    - "Main_Files_By_Difficulty"
    - "Main_Files_By_Tag"
    - "Load"
    - "Record"
    - "Visualise"
    - "Pre-Process"
    - "Detect"
    - "Extract"
    - "Train_and_Classify"
    - "Understand"
    - "Evaluate"
### Response:
def _generate_footer(notebook_object, notebook_type):
    """
    Internal function that is used for generation of the notebooks footer.

    ----------
    Parameters
    ----------
    notebook_object : notebook object
        Object of "notebook" class where the header will be created.

    notebook_type : str
        Notebook type:
        - "Main_Files_Signal_Samples"
        - "Main_Files_By_Category"
        - "Main_Files_By_Difficulty"
        - "Main_Files_By_Tag"
        - "Load"
        - "Record"
        - "Visualise"
        - "Pre-Process"
        - "Detect"
        - "Extract"
        - "Train_and_Classify"
        - "Understand"
        - "Evaluate"
    """
    footer_aux = FOOTER
    if "Main_Files" in notebook_type:
        footer_aux = footer_aux.replace("../MainFiles/", "")

    # ================ Insertion of the div reserved to the Notebook Description ===================
    notebook_object["cells"].append(nb.v4.new_markdown_cell(footer_aux, **{"metadata": {"tags": ["footer"]}}))

    # ========== Code segment for application of the biosignalsnotebooks CSS style ===========
    notebook_object["cells"].append(nb.v4.new_markdown_cell(AUX_CODE_MESSAGE, **{"metadata": {"tags": ["hide_mark"]}}))
    notebook_object["cells"].append(nb.v4.new_code_cell(CSS_STYLE_CODE, **{"metadata": {"tags": ["hide_both"]}}))
def __get_search_results(self, url, limit, order_by, sort_order, filter):
    """
    Helper function for getting search results up to a specified limit on the
    number of results. The Fred HTTP API truncates to 1000 results per request,
    so this may issue multiple HTTP requests to obtain more available data.
    """
    order_by_options = ['search_rank', 'series_id', 'title', 'units', 'frequency',
                        'seasonal_adjustment', 'realtime_start', 'realtime_end',
                        'last_updated', 'observation_start', 'observation_end',
                        'popularity']
    if order_by is not None:
        if order_by in order_by_options:
            url = url + '&order_by=' + order_by
        else:
            raise ValueError('%s is not in the valid list of order_by options: %s'
                             % (order_by, str(order_by_options)))

    if filter is not None:
        if len(filter) == 2:
            url = url + '&filter_variable=%s&filter_value=%s' % (filter[0], filter[1])
        else:
            raise ValueError('Filter should be a 2 item tuple like (filter_variable, filter_value)')

    sort_order_options = ['asc', 'desc']
    if sort_order is not None:
        if sort_order in sort_order_options:
            url = url + '&sort_order=' + sort_order
        else:
            raise ValueError('%s is not in the valid list of sort_order options: %s'
                             % (sort_order, str(sort_order_options)))

    data, num_results_total = self.__do_series_search(url)
    if data is None:
        return data

    if limit == 0:
        max_results_needed = num_results_total
    else:
        max_results_needed = limit

    if max_results_needed > self.max_results_per_request:
        for i in range(1, max_results_needed // self.max_results_per_request + 1):
            offset = i * self.max_results_per_request
            next_data, _ = self.__do_series_search(url + '&offset=' + str(offset))
            data = data.append(next_data)
    return data.head(max_results_needed)
Helper function for getting search results up to a specified limit on the number of results. The Fred HTTP API truncates to 1000 results per request, so this may issue multiple HTTP requests to obtain more available data.
Below is the instruction that describes the task:
### Input:
Helper function for getting search results up to a specified limit on the number of results. The Fred HTTP API truncates to 1000 results per request, so this may issue multiple HTTP requests to obtain more available data.
### Response:
def __get_search_results(self, url, limit, order_by, sort_order, filter):
    """
    Helper function for getting search results up to a specified limit on the
    number of results. The Fred HTTP API truncates to 1000 results per request,
    so this may issue multiple HTTP requests to obtain more available data.
    """
    order_by_options = ['search_rank', 'series_id', 'title', 'units', 'frequency',
                        'seasonal_adjustment', 'realtime_start', 'realtime_end',
                        'last_updated', 'observation_start', 'observation_end',
                        'popularity']
    if order_by is not None:
        if order_by in order_by_options:
            url = url + '&order_by=' + order_by
        else:
            raise ValueError('%s is not in the valid list of order_by options: %s'
                             % (order_by, str(order_by_options)))

    if filter is not None:
        if len(filter) == 2:
            url = url + '&filter_variable=%s&filter_value=%s' % (filter[0], filter[1])
        else:
            raise ValueError('Filter should be a 2 item tuple like (filter_variable, filter_value)')

    sort_order_options = ['asc', 'desc']
    if sort_order is not None:
        if sort_order in sort_order_options:
            url = url + '&sort_order=' + sort_order
        else:
            raise ValueError('%s is not in the valid list of sort_order options: %s'
                             % (sort_order, str(sort_order_options)))

    data, num_results_total = self.__do_series_search(url)
    if data is None:
        return data

    if limit == 0:
        max_results_needed = num_results_total
    else:
        max_results_needed = limit

    if max_results_needed > self.max_results_per_request:
        for i in range(1, max_results_needed // self.max_results_per_request + 1):
            offset = i * self.max_results_per_request
            next_data, _ = self.__do_series_search(url + '&offset=' + str(offset))
            data = data.append(next_data)
    return data.head(max_results_needed)
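The pagination arithmetic in the entry above is easy to get off by one, so it is worth checking in isolation. A standalone sketch of the offset sequence the follow-up requests use (1000 stands in for `self.max_results_per_request`; the initial request is implicitly offset 0):

```python
max_results_per_request = 1000  # page-size cap assumed by the entry above

def page_offsets(max_results_needed, per_request=max_results_per_request):
    """Offsets for the follow-up requests after the initial (offset 0) call."""
    if max_results_needed <= per_request:
        return []  # first page already covers everything needed
    return [i * per_request
            for i in range(1, max_results_needed // per_request + 1)]

print(page_offsets(2500))  # → [1000, 2000]
```

Note that, as in the original, the loop can over-fetch one page (e.g. for exactly 2000 results); `data.head(max_results_needed)` trims the surplus afterwards.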
def get_family_search_session(self, proxy=None):
    """Gets the ``OsidSession`` associated with the family search service.

    arg:    proxy (osid.proxy.Proxy): a proxy
    return: (osid.relationship.FamilySearchSession) - a
            ``FamilySearchSession``
    raise:  NullArgument - ``proxy`` is ``null``
    raise:  OperationFailed - unable to complete request
    raise:  Unimplemented - ``supports_family_search()`` is ``false``
    *compliance: optional -- This method must be implemented if
    ``supports_family_search()`` is ``true``.*

    """
    if not self.supports_family_search():
        raise Unimplemented()
    try:
        from . import sessions
    except ImportError:
        raise OperationFailed()
    proxy = self._convert_proxy(proxy)
    try:
        session = sessions.FamilySearchSession(proxy=proxy, runtime=self._runtime)
    except AttributeError:
        raise OperationFailed()
    return session
Gets the ``OsidSession`` associated with the family search service.

arg:    proxy (osid.proxy.Proxy): a proxy
return: (osid.relationship.FamilySearchSession) - a ``FamilySearchSession``
raise:  NullArgument - ``proxy`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  Unimplemented - ``supports_family_search()`` is ``false``
*compliance: optional -- This method must be implemented if ``supports_family_search()`` is ``true``.*
Below is the instruction that describes the task:
### Input:
Gets the ``OsidSession`` associated with the family search service.

arg:    proxy (osid.proxy.Proxy): a proxy
return: (osid.relationship.FamilySearchSession) - a ``FamilySearchSession``
raise:  NullArgument - ``proxy`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  Unimplemented - ``supports_family_search()`` is ``false``
*compliance: optional -- This method must be implemented if ``supports_family_search()`` is ``true``.*
### Response:
def get_family_search_session(self, proxy=None):
    """Gets the ``OsidSession`` associated with the family search service.

    arg:    proxy (osid.proxy.Proxy): a proxy
    return: (osid.relationship.FamilySearchSession) - a
            ``FamilySearchSession``
    raise:  NullArgument - ``proxy`` is ``null``
    raise:  OperationFailed - unable to complete request
    raise:  Unimplemented - ``supports_family_search()`` is ``false``
    *compliance: optional -- This method must be implemented if
    ``supports_family_search()`` is ``true``.*

    """
    if not self.supports_family_search():
        raise Unimplemented()
    try:
        from . import sessions
    except ImportError:
        raise OperationFailed()
    proxy = self._convert_proxy(proxy)
    try:
        session = sessions.FamilySearchSession(proxy=proxy, runtime=self._runtime)
    except AttributeError:
        raise OperationFailed()
    return session
def create_new_process(self, create_request):
    """CreateNewProcess.

    [Preview API] Creates a process.
    :param :class:`<CreateProcessModel> <azure.devops.v5_0.work_item_tracking_process.models.CreateProcessModel>` create_request: CreateProcessModel.
    :rtype: :class:`<ProcessInfo> <azure.devops.v5_0.work_item_tracking_process.models.ProcessInfo>`
    """
    content = self._serialize.body(create_request, 'CreateProcessModel')
    response = self._send(http_method='POST',
                          location_id='02cc6a73-5cfb-427d-8c8e-b49fb086e8af',
                          version='5.0-preview.2',
                          content=content)
    return self._deserialize('ProcessInfo', response)
CreateNewProcess.

[Preview API] Creates a process.
:param :class:`<CreateProcessModel> <azure.devops.v5_0.work_item_tracking_process.models.CreateProcessModel>` create_request: CreateProcessModel.
:rtype: :class:`<ProcessInfo> <azure.devops.v5_0.work_item_tracking_process.models.ProcessInfo>`
Below is the instruction that describes the task:
### Input:
CreateNewProcess.

[Preview API] Creates a process.
:param :class:`<CreateProcessModel> <azure.devops.v5_0.work_item_tracking_process.models.CreateProcessModel>` create_request: CreateProcessModel.
:rtype: :class:`<ProcessInfo> <azure.devops.v5_0.work_item_tracking_process.models.ProcessInfo>`
### Response:
def create_new_process(self, create_request):
    """CreateNewProcess.

    [Preview API] Creates a process.
    :param :class:`<CreateProcessModel> <azure.devops.v5_0.work_item_tracking_process.models.CreateProcessModel>` create_request: CreateProcessModel.
    :rtype: :class:`<ProcessInfo> <azure.devops.v5_0.work_item_tracking_process.models.ProcessInfo>`
    """
    content = self._serialize.body(create_request, 'CreateProcessModel')
    response = self._send(http_method='POST',
                          location_id='02cc6a73-5cfb-427d-8c8e-b49fb086e8af',
                          version='5.0-preview.2',
                          content=content)
    return self._deserialize('ProcessInfo', response)
def quast_predicted_genes_barplot(self):
    """ Make a bar plot showing the number and length of predicted genes
    for each assembly """

    # Prep the data
    # extract the ranges given to quast with "--gene-thresholds"
    prefix = '# predicted genes (>= '
    suffix = ' bp)'
    all_thresholds = sorted(list(set([
        int(key[len(prefix):-len(suffix)])
        for _, d in self.quast_data.items()
        for key in d.keys()
        if key.startswith(prefix)
    ])))

    data = {}
    ourpat = '>= {}{} bp'
    theirpat = prefix + "{}" + suffix
    for s_name, d in self.quast_data.items():
        thresholds = sorted(list(set([
            int(key[len(prefix):-len(suffix)])
            for _, x in self.quast_data.items()
            for key in x.keys()
            if key.startswith(prefix)
        ])))
        if len(thresholds) < 2:
            continue

        p = dict()
        try:
            p = {ourpat.format(thresholds[-1], ""): d[theirpat.format(thresholds[-1])]}
            for low, high in zip(thresholds[:-1], thresholds[1:]):
                p[ourpat.format(low, -high)] = d[theirpat.format(low)] - d[theirpat.format(high)]
            assert sum(p.values()) == d[theirpat.format(0)]
        except AssertionError:
            log.warning("Predicted gene counts didn't add up properly for \"{}\"".format(s_name))
        except KeyError:
            log.warning("Not all predicted gene thresholds available for \"{}\"".format(s_name))

        data[s_name] = p

    cats = [ourpat.format(low, -high if high else "")
            for low, high in zip(all_thresholds, all_thresholds[1:] + [None])]

    if len(cats) > 0:
        return bargraph.plot(data, cats, {'id': 'quast_predicted_genes',
                                          'title': 'QUAST: Number of predicted genes',
                                          'ylab': 'Number of predicted genes'})
    else:
        return None
Make a bar plot showing the number and length of predicted genes for each assembly
Below is the instruction that describes the task:
### Input:
Make a bar plot showing the number and length of predicted genes for each assembly
### Response:
def quast_predicted_genes_barplot(self):
    """ Make a bar plot showing the number and length of predicted genes
    for each assembly """

    # Prep the data
    # extract the ranges given to quast with "--gene-thresholds"
    prefix = '# predicted genes (>= '
    suffix = ' bp)'
    all_thresholds = sorted(list(set([
        int(key[len(prefix):-len(suffix)])
        for _, d in self.quast_data.items()
        for key in d.keys()
        if key.startswith(prefix)
    ])))

    data = {}
    ourpat = '>= {}{} bp'
    theirpat = prefix + "{}" + suffix
    for s_name, d in self.quast_data.items():
        thresholds = sorted(list(set([
            int(key[len(prefix):-len(suffix)])
            for _, x in self.quast_data.items()
            for key in x.keys()
            if key.startswith(prefix)
        ])))
        if len(thresholds) < 2:
            continue

        p = dict()
        try:
            p = {ourpat.format(thresholds[-1], ""): d[theirpat.format(thresholds[-1])]}
            for low, high in zip(thresholds[:-1], thresholds[1:]):
                p[ourpat.format(low, -high)] = d[theirpat.format(low)] - d[theirpat.format(high)]
            assert sum(p.values()) == d[theirpat.format(0)]
        except AssertionError:
            log.warning("Predicted gene counts didn't add up properly for \"{}\"".format(s_name))
        except KeyError:
            log.warning("Not all predicted gene thresholds available for \"{}\"".format(s_name))

        data[s_name] = p

    cats = [ourpat.format(low, -high if high else "")
            for low, high in zip(all_thresholds, all_thresholds[1:] + [None])]

    if len(cats) > 0:
        return bargraph.plot(data, cats, {'id': 'quast_predicted_genes',
                                          'title': 'QUAST: Number of predicted genes',
                                          'ylab': 'Number of predicted genes'})
    else:
        return None
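The core bookkeeping in the entry above — turning QUAST's cumulative ">= N bp" gene counts into per-interval counts by differencing adjacent thresholds — can be checked in isolation. The counts below are made-up illustration values, not QUAST output:

```python
# Hypothetical cumulative counts: number of genes with length >= threshold
cumulative = {0: 10, 300: 7, 1000: 3}
thresholds = sorted(cumulative)  # [0, 300, 1000]

binned = {}
# the open-ended top bin keeps its cumulative count as-is
binned['>= {} bp'.format(thresholds[-1])] = cumulative[thresholds[-1]]
# each closed bin is the difference between adjacent cumulative counts
for low, high in zip(thresholds[:-1], thresholds[1:]):
    binned['>= {}-{} bp'.format(low, high)] = cumulative[low] - cumulative[high]

# the per-bin counts must add back up to the total (>= 0 bp) count,
# mirroring the assert in the original function
assert sum(binned.values()) == cumulative[0]
print(binned)  # → {'>= 1000 bp': 3, '>= 0-300 bp': 3, '>= 300-1000 bp': 4}
```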
def geopotential_to_height(geopot):
    r"""Compute height from a given geopotential.

    Parameters
    ----------
    geopotential : `pint.Quantity`
        Geopotential (array_like)

    Returns
    -------
    `pint.Quantity`
        The corresponding height value(s)

    Examples
    --------
    >>> from metpy.constants import g, G, me, Re
    >>> import metpy.calc
    >>> from metpy.units import units
    >>> height = np.linspace(0, 10000, num=11) * units.m
    >>> geopot = metpy.calc.height_to_geopotential(height)
    >>> geopot
    <Quantity([    0.          9817.46806283 19631.85526579 29443.16305888
     39251.39289118 49056.54621087 58858.62446525 68657.62910064
     78453.56156253 88246.42329545 98036.21574306], 'meter ** 2 / second ** 2')>
    >>> height = metpy.calc.geopotential_to_height(geopot)
    >>> height
    <Quantity([    0.  1000.  2000.  3000.  4000.  5000.  6000.  7000.  8000.
      9000. 10000.], 'meter')>

    Notes
    -----
    Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8.

    """
    # Calculate height from the inverted geopotential relation
    height = (((1 / mpconsts.Re) - (geopot / (mpconsts.G * mpconsts.me))) ** -1) - mpconsts.Re
    return height
Compute height from a given geopotential.

Parameters
----------
geopotential : `pint.Quantity`
    Geopotential (array_like)

Returns
-------
`pint.Quantity`
    The corresponding height value(s)

Notes
-----
Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8.
Below is the instruction that describes the task:
### Input:
Compute height from a given geopotential.

Parameters
----------
geopotential : `pint.Quantity`
    Geopotential (array_like)

Returns
-------
`pint.Quantity`
    The corresponding height value(s)

Notes
-----
Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8.
### Response:
def geopotential_to_height(geopot):
    r"""Compute height from a given geopotential.

    Parameters
    ----------
    geopotential : `pint.Quantity`
        Geopotential (array_like)

    Returns
    -------
    `pint.Quantity`
        The corresponding height value(s)

    Examples
    --------
    >>> from metpy.constants import g, G, me, Re
    >>> import metpy.calc
    >>> from metpy.units import units
    >>> height = np.linspace(0, 10000, num=11) * units.m
    >>> geopot = metpy.calc.height_to_geopotential(height)
    >>> geopot
    <Quantity([    0.          9817.46806283 19631.85526579 29443.16305888
     39251.39289118 49056.54621087 58858.62446525 68657.62910064
     78453.56156253 88246.42329545 98036.21574306], 'meter ** 2 / second ** 2')>
    >>> height = metpy.calc.geopotential_to_height(geopot)
    >>> height
    <Quantity([    0.  1000.  2000.  3000.  4000.  5000.  6000.  7000.  8000.
      9000. 10000.], 'meter')>

    Notes
    -----
    Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8.

    """
    # Calculate height from the inverted geopotential relation
    height = (((1 / mpconsts.Re) - (geopot / (mpconsts.G * mpconsts.me))) ** -1) - mpconsts.Re
    return height
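The formula in the entry above inverts the geopotential definition exactly, so the round trip height → geopotential → height should reproduce the input. A minimal sketch using plain floats and approximate constants (standing in for `metpy.constants` / pint quantities — the values below are assumptions for illustration):

```python
import math

# Approximate physical constants (plain floats instead of pint quantities)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
me = 5.9722e24       # mass of the Earth, kg
Re = 6371008.7714    # average Earth radius, m

def height_to_geopotential(h):
    # Hobbs (2006) Eq. 1.8: geopotential relative to the surface
    return G * me * (1.0 / Re - 1.0 / (Re + h))

def geopotential_to_height(geopot):
    # algebraic inverse of the expression above, mirroring the entry
    return (1.0 / Re - geopot / (G * me)) ** -1 - Re

h = 5000.0
assert math.isclose(geopotential_to_height(height_to_geopotential(h)), h, rel_tol=1e-9)
```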
def _download(self, force_overwrite=False, verbose=False):
    """
    Download the file if it's not already there.

    We shouldn't *need* to overwrite; the xml is not supposed to update.
    """
    if not force_overwrite:
        # If the file is already there, we're done
        if os.path.isfile(self.filepath):
            if verbose:
                print(
                    "File already available at %s -- skipping"
                    % (self.filepath)
                )
            return False
    stream_download(self.URL, self.filepath, verbose=verbose)
    return True
Download the file if it's not already there.

We shouldn't *need* to overwrite; the xml is not supposed to update.
Below is the instruction that describes the task:
### Input:
Download the file if it's not already there.

We shouldn't *need* to overwrite; the xml is not supposed to update.
### Response:
def _download(self, force_overwrite=False, verbose=False):
    """
    Download the file if it's not already there.

    We shouldn't *need* to overwrite; the xml is not supposed to update.
    """
    if not force_overwrite:
        # If the file is already there, we're done
        if os.path.isfile(self.filepath):
            if verbose:
                print(
                    "File already available at %s -- skipping"
                    % (self.filepath)
                )
            return False
    stream_download(self.URL, self.filepath, verbose=verbose)
    return True
def on_train_end(self, **kwargs):
    "Load the best model."
    if self.save_model:
        # Adapted from fast.ai "SaveModelCallback"
        if self.model_path.is_file():
            with self.model_path.open('rb') as model_file:
                self.learn.load(model_file, purge=False)
                print(f'Loaded best saved model from {self.model_path}')
Load the best model.
Below is the instruction that describes the task:
### Input:
Load the best model.
### Response:
def on_train_end(self, **kwargs):
    "Load the best model."
    if self.save_model:
        # Adapted from fast.ai "SaveModelCallback"
        if self.model_path.is_file():
            with self.model_path.open('rb') as model_file:
                self.learn.load(model_file, purge=False)
                print(f'Loaded best saved model from {self.model_path}')
def print_new_versions(strict=False):
    """Prints new requirement versions."""
    new_updates = []
    same_updates = []
    for req in everything_in(all_reqs):
        new_versions = []
        same_versions = []
        for ver_str in all_versions(req):
            if newer(ver_str_to_tuple(ver_str), min_versions[req], strict=True):
                new_versions.append(ver_str)
            elif not strict and newer(ver_str_to_tuple(ver_str), min_versions[req]):
                same_versions.append(ver_str)
        update_str = req + ": " + ver_tuple_to_str(min_versions[req]) + " -> " + ", ".join(
            new_versions + ["(" + v + ")" for v in same_versions],
        )
        if new_versions:
            new_updates.append(update_str)
        elif same_versions:
            same_updates.append(update_str)
    print("\n".join(new_updates + same_updates))
Prints new requirement versions.
Below is the instruction that describes the task:
### Input:
Prints new requirement versions.
### Response:
def print_new_versions(strict=False):
    """Prints new requirement versions."""
    new_updates = []
    same_updates = []
    for req in everything_in(all_reqs):
        new_versions = []
        same_versions = []
        for ver_str in all_versions(req):
            if newer(ver_str_to_tuple(ver_str), min_versions[req], strict=True):
                new_versions.append(ver_str)
            elif not strict and newer(ver_str_to_tuple(ver_str), min_versions[req]):
                same_versions.append(ver_str)
        update_str = req + ": " + ver_tuple_to_str(min_versions[req]) + " -> " + ", ".join(
            new_versions + ["(" + v + ")" for v in same_versions],
        )
        if new_versions:
            new_updates.append(update_str)
        elif same_versions:
            same_updates.append(update_str)
    print("\n".join(new_updates + same_updates))
def get_repos(self, type=github.GithubObject.NotSet, sort=github.GithubObject.NotSet, direction=github.GithubObject.NotSet):
    """
    :calls: `GET /users/:user/repos <http://developer.github.com/v3/repos>`_
    :param type: string
    :param sort: string
    :param direction: string
    :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`
    """
    assert type is github.GithubObject.NotSet or isinstance(type, (str, unicode)), type
    assert sort is github.GithubObject.NotSet or isinstance(sort, (str, unicode)), sort
    assert direction is github.GithubObject.NotSet or isinstance(direction, (str, unicode)), direction
    url_parameters = dict()
    if type is not github.GithubObject.NotSet:
        url_parameters["type"] = type
    if sort is not github.GithubObject.NotSet:
        url_parameters["sort"] = sort
    if direction is not github.GithubObject.NotSet:
        url_parameters["direction"] = direction
    return github.PaginatedList.PaginatedList(
        github.Repository.Repository,
        self._requester,
        self.url + "/repos",
        url_parameters
    )
:calls: `GET /users/:user/repos <http://developer.github.com/v3/repos>`_
:param type: string
:param sort: string
:param direction: string
:rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`
Below is the instruction that describes the task:
### Input:
:calls: `GET /users/:user/repos <http://developer.github.com/v3/repos>`_
:param type: string
:param sort: string
:param direction: string
:rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`
### Response:
def get_repos(self, type=github.GithubObject.NotSet, sort=github.GithubObject.NotSet, direction=github.GithubObject.NotSet):
    """
    :calls: `GET /users/:user/repos <http://developer.github.com/v3/repos>`_
    :param type: string
    :param sort: string
    :param direction: string
    :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`
    """
    assert type is github.GithubObject.NotSet or isinstance(type, (str, unicode)), type
    assert sort is github.GithubObject.NotSet or isinstance(sort, (str, unicode)), sort
    assert direction is github.GithubObject.NotSet or isinstance(direction, (str, unicode)), direction
    url_parameters = dict()
    if type is not github.GithubObject.NotSet:
        url_parameters["type"] = type
    if sort is not github.GithubObject.NotSet:
        url_parameters["sort"] = sort
    if direction is not github.GithubObject.NotSet:
        url_parameters["direction"] = direction
    return github.PaginatedList.PaginatedList(
        github.Repository.Repository,
        self._requester,
        self.url + "/repos",
        url_parameters
    )
def last_word(text, include='alphanum_underscore'): """ Find the last word in a sentence. >>> last_word('abc') 'abc' >>> last_word(' abc') 'abc' >>> last_word('') '' >>> last_word(' ') '' >>> last_word('abc ') '' >>> last_word('abc def') 'def' >>> last_word('abc def ') '' >>> last_word('abc def;') '' >>> last_word('bac $def') 'def' >>> last_word('bac $def', include='most_punctuations') '$def' >>> last_word('bac \def', include='most_punctuations') '\\\\def' >>> last_word('bac \def;', include='most_punctuations') '\\\\def;' >>> last_word('bac::def', include='most_punctuations') 'def' """ if not text: # Empty string return '' if text[-1].isspace(): return '' else: regex = cleanup_regex[include] matches = regex.search(text) if matches: return matches.group(0) else: return ''
Find the last word in a sentence. >>> last_word('abc') 'abc' >>> last_word(' abc') 'abc' >>> last_word('') '' >>> last_word(' ') '' >>> last_word('abc ') '' >>> last_word('abc def') 'def' >>> last_word('abc def ') '' >>> last_word('abc def;') '' >>> last_word('bac $def') 'def' >>> last_word('bac $def', include='most_punctuations') '$def' >>> last_word('bac \def', include='most_punctuations') '\\\\def' >>> last_word('bac \def;', include='most_punctuations') '\\\\def;' >>> last_word('bac::def', include='most_punctuations') 'def'
Below is the instruction that describes the task:
### Input:
Find the last word in a sentence.
>>> last_word('abc')
'abc'
>>> last_word(' abc')
'abc'
>>> last_word('')
''
>>> last_word(' ')
''
>>> last_word('abc ')
''
>>> last_word('abc def')
'def'
>>> last_word('abc def ')
''
>>> last_word('abc def;')
''
>>> last_word('bac $def')
'def'
>>> last_word('bac $def', include='most_punctuations')
'$def'
>>> last_word('bac \def', include='most_punctuations')
'\\\\def'
>>> last_word('bac \def;', include='most_punctuations')
'\\\\def;'
>>> last_word('bac::def', include='most_punctuations')
'def'
### Response:
def last_word(text, include='alphanum_underscore'):
    """
    Find the last word in a sentence.

    >>> last_word('abc')
    'abc'
    >>> last_word(' abc')
    'abc'
    >>> last_word('')
    ''
    >>> last_word(' ')
    ''
    >>> last_word('abc ')
    ''
    >>> last_word('abc def')
    'def'
    >>> last_word('abc def ')
    ''
    >>> last_word('abc def;')
    ''
    >>> last_word('bac $def')
    'def'
    >>> last_word('bac $def', include='most_punctuations')
    '$def'
    >>> last_word('bac \def', include='most_punctuations')
    '\\\\def'
    >>> last_word('bac \def;', include='most_punctuations')
    '\\\\def;'
    >>> last_word('bac::def', include='most_punctuations')
    'def'
    """
    if not text:   # Empty string
        return ''

    if text[-1].isspace():
        return ''
    else:
        regex = cleanup_regex[include]
        matches = regex.search(text)
        if matches:
            return matches.group(0)
        else:
            return ''
def loop_getter( self ) -> typing.Optional[typing.Union[typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop]]: """Loop getter. :rtype: typing.Union[None, typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop] """ return self.__loop_getter
Loop getter. :rtype: typing.Union[None, typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop]
Below is the instruction that describes the task:
### Input:
Loop getter.

:rtype: typing.Union[None, typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop]
### Response:
def loop_getter(
    self
) -> typing.Optional[typing.Union[typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop]]:
    """Loop getter.

    :rtype: typing.Union[None, typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop]
    """
    return self.__loop_getter
def calc_atlases(worklog, atlas_subject_id='fsaverage'):
    '''
    calc_atlases finds all available atlases in the possible subject directories of the given atlas
    subject. In order to be a template, it must either be a collection of files (either mgh/mgz or
    FreeSurfer curv/morph-data files) named as '<hemi>.<template>_<quantity><ending>' such as the
    files 'lh.wang2015_mplbl.mgz' and 'rh.wang2015_mplbl.mgz'. They may additionally have a version
    prior to the ending, as in 'lh.benson14_angle.v2_5.mgz'. Files without versions are considered
    to be of a higher version than all versioned files. All files must be found in the atlas
    subject's surf/ directory; however, all subjects in all FreeSurfer subjects paths with the same
    subject id are searched if the atlas is not found in the atlas subject's directory.

    Afferent parameters:
      @ atlas_subject_id
        The FreeSurfer subject name or subject path of the subject that is to be used as the atlas
        subject from which the atlas is interpolated. HCP subjects are not currently supported.

    Efferent values:
      @ atlas_map
        A persistent map whose keys are atlas names, the values of which are themselves persistent
        maps whose keys are the versions of the given atlas (None potentially being included). The
        values of these maps are again maps of hemisphere names then finally of the quantity names
        (such as 'eccen' or 'maxprob') to the property vectors imported from the appropriate files.
''' try: sub = freesurfer_subject(atlas_subject_id) except Exception: sub = None if sub is None: try: sub = hcp_subject(atlas_subject_id) except Exception: sub = None if sub is None: raise ValueError('Could not load atlas subject %s' % atlas_subject_id) worklog('Using Atlas subject: %s' % sub.path) # Now find the requested atlases atlases = AutoDict() atlas_patt = r'^([lr]h)\.([^_]+)_([^.]+)(\.(v(\d+(_\d+)*)))?((\.mg[hz])|\.nii(\.gz)?)?$' atlas_hemi_ii = 1 atlas_atls_ii = 2 atlas_meas_ii = 3 atlas_vrsn_ii = 6 libdir = os.path.join(library_path(), 'data') for pth in [libdir] + config['freesurfer_subject_paths'] + [sub.path]: # see if appropriate files are in this directory pth = os.path.join(pth, sub.name, 'surf') if not os.path.isdir(pth): continue for fl in os.listdir(pth): m = re.match(atlas_patt, fl) if m is None: continue fl = os.path.join(pth, fl) (h, atls, meas, vrsn) = [ m.group(ii) for ii in (atlas_hemi_ii, atlas_atls_ii, atlas_meas_ii, atlas_vrsn_ii)] if vrsn is not None: vrsn = tuple([int(s) for s in vrsn.split('_')]) atlases[atls][vrsn][h][meas] = curry(nyio.load, fl) # convert the possible atlas maps into persistent/lazy maps atlas_map = pyr.pmap({a:pyr.pmap({v:pyr.pmap({h:pimms.lazy_map(hv) for (h,hv) in six.iteritems(vv)}) for (v,vv) in six.iteritems(av)}) for (a,av) in six.iteritems(atlases)}) return {'atlas_map':atlas_map, 'atlas_subject':sub}
calc_atlases finds all available atlases in the possible subject directories of the given atlas subject. In order to be a template, it must either be a collection of files (either mgh/mgz or FreeSurfer curv/morph-data files) named as '<hemi>.<template>_<quantity><ending>' such as the files 'lh.wang2015_mplbl.mgz' and 'rh.wang2015_mplbl.mgz'. They may additionally have a version prior to the ending, as in 'lh.benson14_angle.v2_5.mgz'. Files without versions are considered to be of a higher version than all versioned files. All files must be found in the atlas subject's surf/ directory; however, all subjects in all FreeSurfer subjects paths with the same subject id are searched if the atlas is not found in the atlas subject's directory.

Afferent parameters:
@ atlas_subject_id
The FreeSurfer subject name or subject path of the subject that is to be used as the atlas subject from which the atlas is interpolated. HCP subjects are not currently supported.

Efferent values:
@ atlas_map
A persistent map whose keys are atlas names, the values of which are themselves persistent maps whose keys are the versions of the given atlas (None potentially being included). The values of these maps are again maps of hemisphere names then finally of the quantity names (such as 'eccen' or 'maxprob') to the property vectors imported from the appropriate files.
Below is the instruction that describes the task:
### Input:
calc_atlases finds all available atlases in the possible subject directories of the given atlas subject. In order to be a template, it must either be a collection of files (either mgh/mgz or FreeSurfer curv/morph-data files) named as '<hemi>.<template>_<quantity><ending>' such as the files 'lh.wang2015_mplbl.mgz' and 'rh.wang2015_mplbl.mgz'. They may additionally have a version prior to the ending, as in 'lh.benson14_angle.v2_5.mgz'. Files without versions are considered to be of a higher version than all versioned files. All files must be found in the atlas subject's surf/ directory; however, all subjects in all FreeSurfer subjects paths with the same subject id are searched if the atlas is not found in the atlas subject's directory.

Afferent parameters:
@ atlas_subject_id
The FreeSurfer subject name or subject path of the subject that is to be used as the atlas subject from which the atlas is interpolated. HCP subjects are not currently supported.

Efferent values:
@ atlas_map
A persistent map whose keys are atlas names, the values of which are themselves persistent maps whose keys are the versions of the given atlas (None potentially being included). The values of these maps are again maps of hemisphere names then finally of the quantity names (such as 'eccen' or 'maxprob') to the property vectors imported from the appropriate files.
### Response:
def calc_atlases(worklog, atlas_subject_id='fsaverage'):
    '''
    calc_atlases finds all available atlases in the possible subject directories of the given atlas subject. In order to be a template, it must either be a collection of files (either mgh/mgz or FreeSurfer curv/morph-data files) named as '<hemi>.<template>_<quantity><ending>' such as the files 'lh.wang2015_mplbl.mgz' and 'rh.wang2015_mplbl.mgz'. They may additionally have a version prior to the ending, as in 'lh.benson14_angle.v2_5.mgz'.
    Files without versions are considered to be of a higher version than all versioned files. All files must be found in the atlas subject's surf/ directory; however, all subjects in all FreeSurfer subjects paths with the same subject id are searched if the atlas is not found in the atlas subject's directory.

    Afferent parameters:
      @ atlas_subject_id
        The FreeSurfer subject name or subject path of the subject that is to be used as the atlas subject from which the atlas is interpolated. HCP subjects are not currently supported.

    Efferent values:
      @ atlas_map
        A persistent map whose keys are atlas names, the values of which are themselves persistent maps whose keys are the versions of the given atlas (None potentially being included). The values of these maps are again maps of hemisphere names then finally of the quantity names (such as 'eccen' or 'maxprob') to the property vectors imported from the appropriate files.
    '''
    try: sub = freesurfer_subject(atlas_subject_id)
    except Exception: sub = None
    if sub is None:
        try: sub = hcp_subject(atlas_subject_id)
        except Exception: sub = None
    if sub is None: raise ValueError('Could not load atlas subject %s' % atlas_subject_id)
    worklog('Using Atlas subject: %s' % sub.path)
    # Now find the requested atlases
    atlases = AutoDict()
    atlas_patt = r'^([lr]h)\.([^_]+)_([^.]+)(\.(v(\d+(_\d+)*)))?((\.mg[hz])|\.nii(\.gz)?)?$'
    atlas_hemi_ii = 1
    atlas_atls_ii = 2
    atlas_meas_ii = 3
    atlas_vrsn_ii = 6
    libdir = os.path.join(library_path(), 'data')
    for pth in [libdir] + config['freesurfer_subject_paths'] + [sub.path]:
        # see if appropriate files are in this directory
        pth = os.path.join(pth, sub.name, 'surf')
        if not os.path.isdir(pth): continue
        for fl in os.listdir(pth):
            m = re.match(atlas_patt, fl)
            if m is None: continue
            fl = os.path.join(pth, fl)
            (h, atls, meas, vrsn) = [
                m.group(ii)
                for ii in (atlas_hemi_ii, atlas_atls_ii, atlas_meas_ii, atlas_vrsn_ii)]
            if vrsn is not None: vrsn = tuple([int(s) for s in vrsn.split('_')])
            atlases[atls][vrsn][h][meas] =
curry(nyio.load, fl) # convert the possible atlas maps into persistent/lazy maps atlas_map = pyr.pmap({a:pyr.pmap({v:pyr.pmap({h:pimms.lazy_map(hv) for (h,hv) in six.iteritems(vv)}) for (v,vv) in six.iteritems(av)}) for (a,av) in six.iteritems(atlases)}) return {'atlas_map':atlas_map, 'atlas_subject':sub}
def set_runtime_value_bool(self, ihcid: int, value: bool) -> bool: """ Set bool runtime value with re-authenticate if needed""" if self.client.set_runtime_value_bool(ihcid, value): return True self.re_authenticate() return self.client.set_runtime_value_bool(ihcid, value)
Set bool runtime value with re-authenticate if needed
Below is the instruction that describes the task:
### Input:
Set bool runtime value with re-authenticate if needed
### Response:
def set_runtime_value_bool(self, ihcid: int, value: bool) -> bool:
    """ Set bool runtime value with re-authenticate if needed"""
    if self.client.set_runtime_value_bool(ihcid, value):
        return True
    self.re_authenticate()
    return self.client.set_runtime_value_bool(ihcid, value)
def save_function_tuple(self, func): """ Pickles an actual func object. A func comprises: code, globals, defaults, closure, and dict. We extract and save these, injecting reducing functions at certain points to recreate the func object. Keep in mind that some of these pieces can contain a ref to the func itself. Thus, a naive save on these pieces could trigger an infinite loop of save's. To get around that, we first create a skeleton func object using just the code (this is safe, since this won't contain a ref to the func), and memoize it as soon as it's created. The other stuff can then be filled in later. """ if is_tornado_coroutine(func): self.save_reduce(_rebuild_tornado_coroutine, (func.__wrapped__,), obj=func) return save = self.save write = self.write code, f_globals, defaults, closure_values, dct, base_globals = self.extract_func_data(func) save(_fill_function) # skeleton function updater write(pickle.MARK) # beginning of tuple that _fill_function expects self._save_subimports( code, itertools.chain(f_globals.values(), closure_values or ()), ) # create a skeleton function object and memoize it save(_make_skel_func) save(( code, len(closure_values) if closure_values is not None else -1, base_globals, )) write(pickle.REDUCE) self.memoize(func) # save the rest of the func data needed by _fill_function state = { 'globals': f_globals, 'defaults': defaults, 'dict': dct, 'closure_values': closure_values, 'module': func.__module__, 'name': func.__name__, 'doc': func.__doc__, } if hasattr(func, '__annotations__') and sys.version_info >= (3, 7): state['annotations'] = func.__annotations__ if hasattr(func, '__qualname__'): state['qualname'] = func.__qualname__ save(state) write(pickle.TUPLE) write(pickle.REDUCE)
Pickles an actual func object. A func comprises: code, globals, defaults, closure, and dict. We extract and save these, injecting reducing functions at certain points to recreate the func object. Keep in mind that some of these pieces can contain a ref to the func itself. Thus, a naive save on these pieces could trigger an infinite loop of save's. To get around that, we first create a skeleton func object using just the code (this is safe, since this won't contain a ref to the func), and memoize it as soon as it's created. The other stuff can then be filled in later.
Below is the instruction that describes the task:
### Input:
Pickles an actual func object.

A func comprises: code, globals, defaults, closure, and dict. We
extract and save these, injecting reducing functions at certain points
to recreate the func object. Keep in mind that some of these pieces
can contain a ref to the func itself. Thus, a naive save on these
pieces could trigger an infinite loop of save's. To get around that,
we first create a skeleton func object using just the code (this is
safe, since this won't contain a ref to the func), and memoize it as
soon as it's created. The other stuff can then be filled in later.
### Response:
def save_function_tuple(self, func):
    """ Pickles an actual func object.

    A func comprises: code, globals, defaults, closure, and dict. We
    extract and save these, injecting reducing functions at certain points
    to recreate the func object. Keep in mind that some of these pieces
    can contain a ref to the func itself. Thus, a naive save on these
    pieces could trigger an infinite loop of save's. To get around that,
    we first create a skeleton func object using just the code (this is
    safe, since this won't contain a ref to the func), and memoize it as
    soon as it's created. The other stuff can then be filled in later.
""" if is_tornado_coroutine(func): self.save_reduce(_rebuild_tornado_coroutine, (func.__wrapped__,), obj=func) return save = self.save write = self.write code, f_globals, defaults, closure_values, dct, base_globals = self.extract_func_data(func) save(_fill_function) # skeleton function updater write(pickle.MARK) # beginning of tuple that _fill_function expects self._save_subimports( code, itertools.chain(f_globals.values(), closure_values or ()), ) # create a skeleton function object and memoize it save(_make_skel_func) save(( code, len(closure_values) if closure_values is not None else -1, base_globals, )) write(pickle.REDUCE) self.memoize(func) # save the rest of the func data needed by _fill_function state = { 'globals': f_globals, 'defaults': defaults, 'dict': dct, 'closure_values': closure_values, 'module': func.__module__, 'name': func.__name__, 'doc': func.__doc__, } if hasattr(func, '__annotations__') and sys.version_info >= (3, 7): state['annotations'] = func.__annotations__ if hasattr(func, '__qualname__'): state['qualname'] = func.__qualname__ save(state) write(pickle.TUPLE) write(pickle.REDUCE)
def _get_summary(self, text, **kwargs): """ Render out just the summary """ card = cards.extract_card(text, kwargs, self.search_path) return flask.Markup((card.description or '').strip())
Render out just the summary
Below is the instruction that describes the task:
### Input:
Render out just the summary
### Response:
def _get_summary(self, text, **kwargs):
    """ Render out just the summary """
    card = cards.extract_card(text, kwargs, self.search_path)
    return flask.Markup((card.description or '').strip())
def bbox(self): """ The minimal `~photutils.aperture.BoundingBox` for the cutout region with respect to the original (large) image. """ return BoundingBox(self.slices[1].start, self.slices[1].stop, self.slices[0].start, self.slices[0].stop)
The minimal `~photutils.aperture.BoundingBox` for the cutout region with respect to the original (large) image.
Below is the instruction that describes the task:
### Input:
The minimal `~photutils.aperture.BoundingBox` for the cutout region with respect to the original (large) image.
### Response:
def bbox(self):
    """
    The minimal `~photutils.aperture.BoundingBox` for the cutout
    region with respect to the original (large) image.
    """
    return BoundingBox(self.slices[1].start, self.slices[1].stop,
                       self.slices[0].start, self.slices[0].stop)
def add_blacklisted_plugins(self, plugins): """ add blacklisted plugins. `plugins` may be a single object or iterable. """ plugins = util.return_list(plugins) self.blacklisted_plugins.extend(plugins)
add blacklisted plugins. `plugins` may be a single object or iterable.
Below is the instruction that describes the task:
### Input:
add blacklisted plugins. `plugins` may be a single object or iterable.
### Response:
def add_blacklisted_plugins(self, plugins):
    """
    add blacklisted plugins. `plugins` may be a single object or iterable.
    """
    plugins = util.return_list(plugins)
    self.blacklisted_plugins.extend(plugins)
def add_step(self, step, step_id): """ Add a step to the list. The first step added becomes the initial step. """ assert step_id not in self._steps assert step_id not in self._order assert isinstance(step, Step) self._steps[step_id] = step self._order.append(step_id)
Add a step to the list. The first step added becomes the initial step.
Below is the instruction that describes the task:
### Input:
Add a step to the list.

The first step added becomes the initial step.
### Response:
def add_step(self, step, step_id):
    """
    Add a step to the list.

    The first step added becomes the initial step.
    """
    assert step_id not in self._steps
    assert step_id not in self._order
    assert isinstance(step, Step)
    self._steps[step_id] = step
    self._order.append(step_id)
def estimate_assignment_probs(bitstring_prep_histograms): """ Compute the estimated assignment probability matrix for a sequence of single shot histograms obtained by running the programs generated by `basis_state_preps()`. bitstring_prep_histograms[i,j] = #number of measured outcomes j when running program i The assignment probability is obtained by transposing and afterwards normalizing the columns. p[j, i] = Probability to measure outcome j when preparing the state with program i. :param list|numpy.ndarray bitstring_prep_histograms: A nested list or 2d array with shape (d, d), where ``d = 2**nqubits`` is the dimension of the Hilbert space. The first axis varies over the state preparation program index, the second axis corresponds to the measured bitstring. :return: The assignment probability matrix. :rtype: numpy.ndarray """ p = np.array(bitstring_prep_histograms, dtype=float).T p /= p.sum(axis=0)[np.newaxis, :] return p
Compute the estimated assignment probability matrix for a sequence of single shot histograms obtained by running the programs generated by `basis_state_preps()`. bitstring_prep_histograms[i,j] = #number of measured outcomes j when running program i The assignment probability is obtained by transposing and afterwards normalizing the columns. p[j, i] = Probability to measure outcome j when preparing the state with program i. :param list|numpy.ndarray bitstring_prep_histograms: A nested list or 2d array with shape (d, d), where ``d = 2**nqubits`` is the dimension of the Hilbert space. The first axis varies over the state preparation program index, the second axis corresponds to the measured bitstring. :return: The assignment probability matrix. :rtype: numpy.ndarray
Below is the instruction that describes the task:
### Input:
Compute the estimated assignment probability matrix for a sequence of single shot histograms obtained by running the programs generated by `basis_state_preps()`.

bitstring_prep_histograms[i,j] = #number of measured outcomes j when running program i

The assignment probability is obtained by transposing and afterwards normalizing the columns.

p[j, i] = Probability to measure outcome j when preparing the state with program i.

:param list|numpy.ndarray bitstring_prep_histograms: A nested list or 2d array with shape (d, d), where ``d = 2**nqubits`` is the dimension of the Hilbert space. The first axis varies over the state preparation program index, the second axis corresponds to the measured bitstring.
:return: The assignment probability matrix.
:rtype: numpy.ndarray
### Response:
def estimate_assignment_probs(bitstring_prep_histograms):
    """
    Compute the estimated assignment probability matrix for a sequence of single shot histograms
    obtained by running the programs generated by `basis_state_preps()`.

    bitstring_prep_histograms[i,j] = #number of measured outcomes j when running program i

    The assignment probability is obtained by transposing and afterwards normalizing the columns.

    p[j, i] = Probability to measure outcome j when preparing the state with program i.

    :param list|numpy.ndarray bitstring_prep_histograms: A nested list or 2d array with shape
        (d, d), where ``d = 2**nqubits`` is the dimension of the Hilbert space. The first axis
        varies over the state preparation program index, the second axis corresponds to the
        measured bitstring.
    :return: The assignment probability matrix.
    :rtype: numpy.ndarray
    """
    p = np.array(bitstring_prep_histograms, dtype=float).T
    p /= p.sum(axis=0)[np.newaxis, :]
    return p
def ForceRemoveFileObject(self, path_spec): """Forces the removal of a file-like object based on a path specification. Args: path_spec (PathSpec): path specification. Returns: bool: True if the file-like object was cached. """ cache_value = self._file_object_cache.GetCacheValue(path_spec.comparable) if not cache_value: return False while not cache_value.IsDereferenced(): cache_value.vfs_object.close() return True
Forces the removal of a file-like object based on a path specification. Args: path_spec (PathSpec): path specification. Returns: bool: True if the file-like object was cached.
Below is the instruction that describes the task:
### Input:
Forces the removal of a file-like object based on a path specification.

Args:
  path_spec (PathSpec): path specification.

Returns:
  bool: True if the file-like object was cached.
### Response:
def ForceRemoveFileObject(self, path_spec):
    """Forces the removal of a file-like object based on a path specification.

    Args:
      path_spec (PathSpec): path specification.

    Returns:
      bool: True if the file-like object was cached.
    """
    cache_value = self._file_object_cache.GetCacheValue(path_spec.comparable)
    if not cache_value:
        return False

    while not cache_value.IsDereferenced():
        cache_value.vfs_object.close()

    return True
def _handle_response(self, zbx_answer):
    """
    Analyze Zabbix Server response
    Returns a tuple with the number of:
    * processed items
    * failed items
    * total items
    * time spent

    :zbx_answer: Zabbix server response as string
    """
    zbx_answer = json.loads(zbx_answer)
    if self._logger:  # pragma: no cover
        self._logger.info(
            "Analyzing Zabbix Server's answer"
        )
        if zbx_answer:
            self._logger.debug("Zabbix Server response is: [%s]" % zbx_answer)
    # Default items number is the length of the storage list
    nb_item = len(self._items_list)
    if self._config.debug_level >= 4:
        # If debug enabled, force it to 1
        nb_item = 1
    # If dryrun is disabled, we can process answer
    response = zbx_answer.get('response')
    result = re.findall(ZBX_RESP_REGEX, zbx_answer.get('info'))
    processed, failed, total, time = result[0]
    return response, int(processed), int(failed), int(total), float(time)
Analyze Zabbix Server response
Returns a tuple with the number of:
* processed items
* failed items
* total items
* time spent

:zbx_answer: Zabbix server response as string
Below is the instruction that describes the task:
### Input:
Analyze Zabbix Server response
Returns a tuple with the number of:
* processed items
* failed items
* total items
* time spent

:zbx_answer: Zabbix server response as string
### Response:
def _handle_response(self, zbx_answer):
    """
    Analyze Zabbix Server response
    Returns a tuple with the number of:
    * processed items
    * failed items
    * total items
    * time spent

    :zbx_answer: Zabbix server response as string
    """
    zbx_answer = json.loads(zbx_answer)
    if self._logger:  # pragma: no cover
        self._logger.info(
            "Analyzing Zabbix Server's answer"
        )
        if zbx_answer:
            self._logger.debug("Zabbix Server response is: [%s]" % zbx_answer)
    # Default items number is the length of the storage list
    nb_item = len(self._items_list)
    if self._config.debug_level >= 4:
        # If debug enabled, force it to 1
        nb_item = 1
    # If dryrun is disabled, we can process answer
    response = zbx_answer.get('response')
    result = re.findall(ZBX_RESP_REGEX, zbx_answer.get('info'))
    processed, failed, total, time = result[0]
    return response, int(processed), int(failed), int(total), float(time)
def _set_ethernet(self, v, load=False): """ Setter method for ethernet, mapped from YANG variable /interface/ethernet (list) If this variable is read-only (config: false) in the source YANG file, then _set_ethernet is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ethernet() directly. YANG Description: The list of Ethernet interfaces in the managed device. Each row represents a Ethernet interface. The list provides a way to discover all the physical interfaces in a managed device. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("name",ethernet.ethernet, yang_name="ethernet", rest_name="Ethernet", parent=self, is_container='list', user_ordered=True, path_helper=self._path_helper, yang_keys='name', extensions={u'tailf-common': {u'info': u'The list of Ethernet interfaces.', u'cli-no-key-completion': None, u'alt-name': u'Ethernet', u'sort-priority': u'RUNNCFG_LEVEL_INTERFACE_TYPE_PHYSICAL', u'cli-suppress-no': None, u'cli-suppress-show-path': None, u'cli-custom-range-actionpoint': u'NsmRangeCliActionpoint', u'cli-custom-range-enumerator': u'NsmRangeCliActionpoint', u'cli-no-match-completion': None, u'callpoint': u'interface_phyintf', u'cli-mode-name': u'conf-if-eth-$(name)'}}), is_container='list', yang_name="ethernet", rest_name="Ethernet", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'The list of Ethernet interfaces.', u'cli-no-key-completion': None, u'alt-name': u'Ethernet', u'sort-priority': u'RUNNCFG_LEVEL_INTERFACE_TYPE_PHYSICAL', u'cli-suppress-no': None, u'cli-suppress-show-path': None, u'cli-custom-range-actionpoint': u'NsmRangeCliActionpoint', u'cli-custom-range-enumerator': u'NsmRangeCliActionpoint', u'cli-no-match-completion': None, u'callpoint': u'interface_phyintf', u'cli-mode-name': u'conf-if-eth-$(name)'}}, namespace='urn:brocade.com:mgmt:brocade-interface', 
defining_module='brocade-interface', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """ethernet must be of a type compatible with list""", 'defined-type': "list", 'generated-type': """YANGDynClass(base=YANGListType("name",ethernet.ethernet, yang_name="ethernet", rest_name="Ethernet", parent=self, is_container='list', user_ordered=True, path_helper=self._path_helper, yang_keys='name', extensions={u'tailf-common': {u'info': u'The list of Ethernet interfaces.', u'cli-no-key-completion': None, u'alt-name': u'Ethernet', u'sort-priority': u'RUNNCFG_LEVEL_INTERFACE_TYPE_PHYSICAL', u'cli-suppress-no': None, u'cli-suppress-show-path': None, u'cli-custom-range-actionpoint': u'NsmRangeCliActionpoint', u'cli-custom-range-enumerator': u'NsmRangeCliActionpoint', u'cli-no-match-completion': None, u'callpoint': u'interface_phyintf', u'cli-mode-name': u'conf-if-eth-$(name)'}}), is_container='list', yang_name="ethernet", rest_name="Ethernet", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'The list of Ethernet interfaces.', u'cli-no-key-completion': None, u'alt-name': u'Ethernet', u'sort-priority': u'RUNNCFG_LEVEL_INTERFACE_TYPE_PHYSICAL', u'cli-suppress-no': None, u'cli-suppress-show-path': None, u'cli-custom-range-actionpoint': u'NsmRangeCliActionpoint', u'cli-custom-range-enumerator': u'NsmRangeCliActionpoint', u'cli-no-match-completion': None, u'callpoint': u'interface_phyintf', u'cli-mode-name': u'conf-if-eth-$(name)'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='list', is_config=True)""", }) self.__ethernet = t if hasattr(self, '_set'): self._set()
Setter method for ethernet, mapped from YANG variable /interface/ethernet (list) If this variable is read-only (config: false) in the source YANG file, then _set_ethernet is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ethernet() directly. YANG Description: The list of Ethernet interfaces in the managed device. Each row represents a Ethernet interface. The list provides a way to discover all the physical interfaces in a managed device.
Below is the instruction that describes the task:
### Input:
Setter method for ethernet, mapped from YANG variable /interface/ethernet (list)
If this variable is read-only (config: false) in the source YANG file, then _set_ethernet is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ethernet() directly.

YANG Description: The list of Ethernet interfaces in the managed device. Each row represents a Ethernet interface. The list provides a way to discover all the physical interfaces in a managed device.
### Response:
def _set_ethernet(self, v, load=False):
    """
    Setter method for ethernet, mapped from YANG variable /interface/ethernet (list)
    If this variable is read-only (config: false) in the source YANG file, then _set_ethernet is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ethernet() directly.

    YANG Description: The list of Ethernet interfaces in the managed device. Each row represents a Ethernet interface. The list provides a way to discover all the physical interfaces in a managed device.
""" if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("name",ethernet.ethernet, yang_name="ethernet", rest_name="Ethernet", parent=self, is_container='list', user_ordered=True, path_helper=self._path_helper, yang_keys='name', extensions={u'tailf-common': {u'info': u'The list of Ethernet interfaces.', u'cli-no-key-completion': None, u'alt-name': u'Ethernet', u'sort-priority': u'RUNNCFG_LEVEL_INTERFACE_TYPE_PHYSICAL', u'cli-suppress-no': None, u'cli-suppress-show-path': None, u'cli-custom-range-actionpoint': u'NsmRangeCliActionpoint', u'cli-custom-range-enumerator': u'NsmRangeCliActionpoint', u'cli-no-match-completion': None, u'callpoint': u'interface_phyintf', u'cli-mode-name': u'conf-if-eth-$(name)'}}), is_container='list', yang_name="ethernet", rest_name="Ethernet", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'The list of Ethernet interfaces.', u'cli-no-key-completion': None, u'alt-name': u'Ethernet', u'sort-priority': u'RUNNCFG_LEVEL_INTERFACE_TYPE_PHYSICAL', u'cli-suppress-no': None, u'cli-suppress-show-path': None, u'cli-custom-range-actionpoint': u'NsmRangeCliActionpoint', u'cli-custom-range-enumerator': u'NsmRangeCliActionpoint', u'cli-no-match-completion': None, u'callpoint': u'interface_phyintf', u'cli-mode-name': u'conf-if-eth-$(name)'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """ethernet must be of a type compatible with list""", 'defined-type': "list", 'generated-type': """YANGDynClass(base=YANGListType("name",ethernet.ethernet, yang_name="ethernet", rest_name="Ethernet", parent=self, is_container='list', user_ordered=True, path_helper=self._path_helper, yang_keys='name', extensions={u'tailf-common': {u'info': u'The list of Ethernet interfaces.', u'cli-no-key-completion': None, 
u'alt-name': u'Ethernet', u'sort-priority': u'RUNNCFG_LEVEL_INTERFACE_TYPE_PHYSICAL', u'cli-suppress-no': None, u'cli-suppress-show-path': None, u'cli-custom-range-actionpoint': u'NsmRangeCliActionpoint', u'cli-custom-range-enumerator': u'NsmRangeCliActionpoint', u'cli-no-match-completion': None, u'callpoint': u'interface_phyintf', u'cli-mode-name': u'conf-if-eth-$(name)'}}), is_container='list', yang_name="ethernet", rest_name="Ethernet", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'The list of Ethernet interfaces.', u'cli-no-key-completion': None, u'alt-name': u'Ethernet', u'sort-priority': u'RUNNCFG_LEVEL_INTERFACE_TYPE_PHYSICAL', u'cli-suppress-no': None, u'cli-suppress-show-path': None, u'cli-custom-range-actionpoint': u'NsmRangeCliActionpoint', u'cli-custom-range-enumerator': u'NsmRangeCliActionpoint', u'cli-no-match-completion': None, u'callpoint': u'interface_phyintf', u'cli-mode-name': u'conf-if-eth-$(name)'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='list', is_config=True)""", }) self.__ethernet = t if hasattr(self, '_set'): self._set()
def copy_directory(src, dest, force=False): ''' Copy an entire directory recursively ''' if os.path.exists(dest) and force is True: shutil.rmtree(dest) try: shutil.copytree(src, dest) except OSError as e: # If the error was caused because the source wasn't a directory if e.errno == errno.ENOTDIR: shutil.copy(src, dest) else: bot.error('Directory not copied. Error: %s' % e) sys.exit(1)
Copy an entire directory recursively
Below is the instruction that describes the task: ### Input: Copy an entire directory recursively ### Response: def copy_directory(src, dest, force=False): ''' Copy an entire directory recursively ''' if os.path.exists(dest) and force is True: shutil.rmtree(dest) try: shutil.copytree(src, dest) except OSError as e: # If the error was caused because the source wasn't a directory if e.errno == errno.ENOTDIR: shutil.copy(src, dest) else: bot.error('Directory not copied. Error: %s' % e) sys.exit(1)
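The copy_directory fallback hinges on errno.ENOTDIR: copytree fails that way when src is a single file, and shutil.copy takes over. A self-contained variant (the bot logger and sys.exit are dropped so errors simply propagate) can be exercised against a throwaway tree:

```python
import errno
import os
import shutil
import tempfile

def copy_tree(src, dest, force=False):
    """Simplified copy_directory: recursive copy with optional overwrite;
    errors are re-raised instead of logged."""
    if os.path.exists(dest) and force:
        shutil.rmtree(dest)
    try:
        shutil.copytree(src, dest)
    except OSError as e:
        if e.errno == errno.ENOTDIR:  # src was a single file, not a dir
            shutil.copy(src, dest)
        else:
            raise

base = tempfile.mkdtemp()
src = os.path.join(base, 'src')
os.makedirs(os.path.join(src, 'sub'))
with open(os.path.join(src, 'sub', 'a.txt'), 'w') as f:
    f.write('hello')
dest = os.path.join(base, 'dest')
copy_tree(src, dest)               # first copy succeeds
copy_tree(src, dest, force=True)   # overwriting only works with force=True
print(open(os.path.join(dest, 'sub', 'a.txt')).read())  # hello
```

Without `force=True`, the second call raises FileExistsError from copytree, mirroring the original's behaviour of only clearing dest when forced.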
def identify_triggers( cfg, sources, sinks, lattice, nosec_lines ): """Identify sources, sinks and sanitisers in a CFG. Args: cfg(CFG): CFG to find sources, sinks and sanitisers in. sources(tuple): list of sources, a source is a (source, sanitiser) tuple. sinks(tuple): list of sinks, a sink is a (sink, sanitiser) tuple. nosec_lines(set): lines with # nosec whitelisting Returns: Triggers tuple with sink and source nodes and a sanitiser node dict. """ assignment_nodes = filter_cfg_nodes(cfg, AssignmentNode) tainted_nodes = filter_cfg_nodes(cfg, TaintedNode) tainted_trigger_nodes = [ TriggerNode( Source('Framework function URL parameter'), cfg_node=node ) for node in tainted_nodes ] sources_in_file = find_triggers(assignment_nodes, sources, nosec_lines) sources_in_file.extend(tainted_trigger_nodes) find_secondary_sources(assignment_nodes, sources_in_file, lattice) sinks_in_file = find_triggers(cfg.nodes, sinks, nosec_lines) sanitiser_node_dict = build_sanitiser_node_dict(cfg, sinks_in_file) return Triggers(sources_in_file, sinks_in_file, sanitiser_node_dict)
Identify sources, sinks and sanitisers in a CFG. Args: cfg(CFG): CFG to find sources, sinks and sanitisers in. sources(tuple): list of sources, a source is a (source, sanitiser) tuple. sinks(tuple): list of sinks, a sink is a (sink, sanitiser) tuple. nosec_lines(set): lines with # nosec whitelisting Returns: Triggers tuple with sink and source nodes and a sanitiser node dict.
Below is the instruction that describes the task: ### Input: Identify sources, sinks and sanitisers in a CFG. Args: cfg(CFG): CFG to find sources, sinks and sanitisers in. sources(tuple): list of sources, a source is a (source, sanitiser) tuple. sinks(tuple): list of sinks, a sink is a (sink, sanitiser) tuple. nosec_lines(set): lines with # nosec whitelisting Returns: Triggers tuple with sink and source nodes and a sanitiser node dict. ### Response: def identify_triggers( cfg, sources, sinks, lattice, nosec_lines ): """Identify sources, sinks and sanitisers in a CFG. Args: cfg(CFG): CFG to find sources, sinks and sanitisers in. sources(tuple): list of sources, a source is a (source, sanitiser) tuple. sinks(tuple): list of sinks, a sink is a (sink, sanitiser) tuple. nosec_lines(set): lines with # nosec whitelisting Returns: Triggers tuple with sink and source nodes and a sanitiser node dict. """ assignment_nodes = filter_cfg_nodes(cfg, AssignmentNode) tainted_nodes = filter_cfg_nodes(cfg, TaintedNode) tainted_trigger_nodes = [ TriggerNode( Source('Framework function URL parameter'), cfg_node=node ) for node in tainted_nodes ] sources_in_file = find_triggers(assignment_nodes, sources, nosec_lines) sources_in_file.extend(tainted_trigger_nodes) find_secondary_sources(assignment_nodes, sources_in_file, lattice) sinks_in_file = find_triggers(cfg.nodes, sinks, nosec_lines) sanitiser_node_dict = build_sanitiser_node_dict(cfg, sinks_in_file) return Triggers(sources_in_file, sinks_in_file, sanitiser_node_dict)
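identify_triggers delegates the real matching to filter_cfg_nodes and find_triggers, which the pyt project defines elsewhere. A toy reconstruction with minimal stand-in classes (these are not pyt's real APIs) shows the shape of the data flow: filter assignment nodes, match trigger words against node labels, and bundle the results into a Triggers tuple:

```python
from collections import namedtuple

Triggers = namedtuple('Triggers', 'sources sinks sanitiser_dict')

class Node:
    def __init__(self, label):
        self.label = label

class AssignmentNode(Node):
    pass

def filter_nodes(nodes, cls):
    """Stand-in for filter_cfg_nodes: keep nodes of one type."""
    return [n for n in nodes if isinstance(n, cls)]

def find_triggers(nodes, trigger_words):
    """Stand-in for pyt's find_triggers: match words against labels
    (sanitiser pairing and # nosec handling omitted)."""
    return [n for n in nodes if any(w in n.label for w in trigger_words)]

nodes = [AssignmentNode('x = request.args'), Node('execute(x)'), Node('y = 1')]
sources = find_triggers(filter_nodes(nodes, AssignmentNode), ['request'])
sinks = find_triggers(nodes, ['execute'])
triggers = Triggers(sources, sinks, {})
print(len(triggers.sources), len(triggers.sinks))  # 1 1
```

As in the real function, only assignment nodes are eligible sources, while sinks are matched against every node in the graph.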
def __install_ssh_config(self, config): """ Install the ssh configuration """ if not config.is_affirmative('use_global_ssh', default="no"): ssh_config_injection = self._build_ssh_config(config) if not os.path.exists(ssh_config_path): if self.injections.in_noninjected_file(ssh_config_path, "Host %s" % config.get('host')): if config.is_affirmative('override'): self.injections.inject(ssh_config_path, ssh_config_injection) else: self.injections.inject(ssh_config_path, ssh_config_injection) else: self.injections.inject(ssh_config_path, ssh_config_injection) self.injections.commit()
Install the ssh configuration
Below is the instruction that describes the task: ### Input: Install the ssh configuration ### Response: def __install_ssh_config(self, config): """ Install the ssh configuration """ if not config.is_affirmative('use_global_ssh', default="no"): ssh_config_injection = self._build_ssh_config(config) if not os.path.exists(ssh_config_path): if self.injections.in_noninjected_file(ssh_config_path, "Host %s" % config.get('host')): if config.is_affirmative('override'): self.injections.inject(ssh_config_path, ssh_config_injection) else: self.injections.inject(ssh_config_path, ssh_config_injection) else: self.injections.inject(ssh_config_path, ssh_config_injection) self.injections.commit()
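The injections helper used above boils down to "append a managed block unless a matching Host line is already present". A minimal standalone version of that idempotent append (this is a sketch, not sprinter's actual injections API):

```python
import os
import tempfile

def inject_once(path, host, block):
    """Append an ssh-config block unless a matching Host line exists."""
    existing = ''
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read()
    if 'Host %s' % host in existing:
        return False  # already configured, leave the file untouched
    with open(path, 'a') as f:
        f.write(block)
    return True

path = os.path.join(tempfile.mkdtemp(), 'config')
block = 'Host example\n    HostName example.com\n'
print(inject_once(path, 'example', block))  # True: block written
print(inject_once(path, 'example', block))  # False: already present
```

Running it twice writes the block exactly once, which is the property the in_noninjected_file check is guarding in the original.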
def parse(self, s, term_join=None): """ Parses a search term into terms grouped by marker. Args: s (str): string with search term. term_join (callable): function to join 'OR' terms. Returns: dict: all of the terms grouped by marker. Key is a marker, value is a term. Example: >>> SearchTermParser().parse('table2 from 1978 to 1979 in california') {'to': 1979, 'about': 'table2', 'from': 1978, 'in': 'california'} """ if not term_join: term_join = lambda x: '(' + ' OR '.join(x) + ')' toks = self.scan(s) # Examples: starting with this query: # diabetes from 2014 to 2016 source healthindicators.gov # Assume the first term is ABOUT, if it is not marked with a marker. if toks and toks[0] and (toks[0][0] == self.TERM or toks[0][0] == self.QUOTEDTERM): toks = [(self.MARKER, 'about')] + toks # The example query produces this list of tokens: #[(3, 'about'), # (0, 'diabetes'), # (3, 'from'), # (4, 2014), # (3, 'to'), # (4, 2016), # (3, 'source'), # (0, 'healthindicators.gov')] # Group the terms by their marker. bymarker = [] for t in toks: if t[0] == self.MARKER: bymarker.append((t[1], [])) else: bymarker[-1][1].append(t) # After grouping tokens by their markers # [('about', [(0, 'diabetes')]), # ('from', [(4, 2014)]), # ('to', [(4, 2016)]), # ('source', [(0, 'healthindicators.gov')]) # ] # Convert some of the markers based on their contents. This just changes the marker type for keywords # we'll do more adjustments later. comps = [] for t in bymarker: t = list(t) if t[0] == 'in' and len(t[1]) == 1 and isinstance(t[1][0][1], string_types) and self.stem( t[1][0][1]) in self.geograins.keys(): t[0] = 'by' # If the from term isn't an integer, then it is really a source.
if t[0] == 'from' and len(t[1]) == 1 and t[1][0][0] != self.YEAR: t[0] = 'source' comps.append(t) # After conversions # [['about', [(0, 'diabetes')]], # ['from', [(4, 2014)]], # ['to', [(4, 2016)]], # ['source', [(0, 'healthindicators.gov')]]] # Join all of the terms into single marker groups groups = {marker: [] for marker, _ in comps} for marker, terms in comps: groups[marker] += [term for marker, term in terms] # At this point, the groups dict is formed, but it will have a list # for each marker that has multiple terms. # Only a few of the markers should have more than one term, so move # extras to the about group for marker, group in groups.items(): if marker == 'about': continue if len(group) > 1 and marker not in self.multiterms: groups[marker], extras = [group[0]], group[1:] if not 'about' in groups: groups['about'] = extras else: groups['about'] += extras if marker == 'by': groups['by'] = [ self.geograins.get(self.stem(e)) for e in group] for marker, terms in iteritems(groups): if len(terms) > 1: if marker in 'in': groups[marker] = ' '.join(terms) else: groups[marker] = term_join(terms) elif len(terms) == 1: groups[marker] = terms[0] else: pass # After grouping: # {'to': 2016, # 'about': 'diabetes', # 'from': 2014, # 'source': 'healthindicators.gov'} # If there were any markers with multiple terms, they would be cast in the or_join form. return groups
Parses a search term into terms grouped by marker. Args: s (str): string with search term. term_join (callable): function to join 'OR' terms. Returns: dict: all of the terms grouped by marker. Key is a marker, value is a term. Example: >>> SearchTermParser().parse('table2 from 1978 to 1979 in california') {'to': 1979, 'about': 'table2', 'from': 1978, 'in': 'california'}
Below is the instruction that describes the task: ### Input: Parses a search term into terms grouped by marker. Args: s (str): string with search term. term_join (callable): function to join 'OR' terms. Returns: dict: all of the terms grouped by marker. Key is a marker, value is a term. Example: >>> SearchTermParser().parse('table2 from 1978 to 1979 in california') {'to': 1979, 'about': 'table2', 'from': 1978, 'in': 'california'} ### Response: def parse(self, s, term_join=None): """ Parses a search term into terms grouped by marker. Args: s (str): string with search term. term_join (callable): function to join 'OR' terms. Returns: dict: all of the terms grouped by marker. Key is a marker, value is a term. Example: >>> SearchTermParser().parse('table2 from 1978 to 1979 in california') {'to': 1979, 'about': 'table2', 'from': 1978, 'in': 'california'} """ if not term_join: term_join = lambda x: '(' + ' OR '.join(x) + ')' toks = self.scan(s) # Examples: starting with this query: # diabetes from 2014 to 2016 source healthindicators.gov # Assume the first term is ABOUT, if it is not marked with a marker. if toks and toks[0] and (toks[0][0] == self.TERM or toks[0][0] == self.QUOTEDTERM): toks = [(self.MARKER, 'about')] + toks # The example query produces this list of tokens: #[(3, 'about'), # (0, 'diabetes'), # (3, 'from'), # (4, 2014), # (3, 'to'), # (4, 2016), # (3, 'source'), # (0, 'healthindicators.gov')] # Group the terms by their marker. bymarker = [] for t in toks: if t[0] == self.MARKER: bymarker.append((t[1], [])) else: bymarker[-1][1].append(t) # After grouping tokens by their markers # [('about', [(0, 'diabetes')]), # ('from', [(4, 2014)]), # ('to', [(4, 2016)]), # ('source', [(0, 'healthindicators.gov')]) # ] # Convert some of the markers based on their contents. This just changes the marker type for keywords
comps = [] for t in bymarker: t = list(t) if t[0] == 'in' and len(t[1]) == 1 and isinstance(t[1][0][1], string_types) and self.stem( t[1][0][1]) in self.geograins.keys(): t[0] = 'by' # If the from term isn't an integer, then it is really a source. if t[0] == 'from' and len(t[1]) == 1 and t[1][0][0] != self.YEAR: t[0] = 'source' comps.append(t) # After conversions # [['about', [(0, 'diabetes')]], # ['from', [(4, 2014)]], # ['to', [(4, 2016)]], # ['source', [(0, 'healthindicators.gov')]]] # Join all of the terms into single marker groups groups = {marker: [] for marker, _ in comps} for marker, terms in comps: groups[marker] += [term for marker, term in terms] # At this point, the groups dict is formed, but it will have a list # for each marker that has multiple terms. # Only a few of the markers should have more than one term, so move # extras to the about group for marker, group in groups.items(): if marker == 'about': continue if len(group) > 1 and marker not in self.multiterms: groups[marker], extras = [group[0]], group[1:] if not 'about' in groups: groups['about'] = extras else: groups['about'] += extras if marker == 'by': groups['by'] = [ self.geograins.get(self.stem(e)) for e in group] for marker, terms in iteritems(groups): if len(terms) > 1: if marker in 'in': groups[marker] = ' '.join(terms) else: groups[marker] = term_join(terms) elif len(terms) == 1: groups[marker] = terms[0] else: pass # After grouping: # {'to': 2016, # 'about': 'diabetes', # 'from': 2014, # 'source': 'healthindicators.gov'} # If there were any markers with multiple terms, they would be cast in the or_join form. return groups
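The token dumps in parse's comments make the first grouping pass easy to check on its own. Here it is isolated, using the same (type, value) tuples and the numeric codes visible in those comments (MARKER = 3, TERM = 0, YEAR = 4):

```python
MARKER, TERM, YEAR = 3, 0, 4  # numeric token codes from the comments

def group_by_marker(toks):
    """Group (type, value) tokens under the most recent MARKER token."""
    bymarker = []
    for t in toks:
        if t[0] == MARKER:
            bymarker.append((t[1], []))
        else:
            bymarker[-1][1].append(t)
    return bymarker

toks = [(MARKER, 'about'), (TERM, 'diabetes'),
        (MARKER, 'from'), (YEAR, 2014),
        (MARKER, 'to'), (YEAR, 2016)]
print(group_by_marker(toks))
# [('about', [(0, 'diabetes')]), ('from', [(4, 2014)]), ('to', [(4, 2016)])]
```

Note the precondition the original shares: the token stream must start with a marker (parse guarantees this by prepending an 'about' marker), otherwise `bymarker[-1]` fails on an empty list.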
def get_logs(self, stdout=True, stderr=True, timestamps=False, tail='all', since=None): """ Get container logs. This method does not support streaming, use :meth:`stream_logs` for that. """ return self.inner().logs( stdout=stdout, stderr=stderr, timestamps=timestamps, tail=tail, since=since)
Get container logs. This method does not support streaming, use :meth:`stream_logs` for that.
Below is the instruction that describes the task: ### Input: Get container logs. This method does not support streaming, use :meth:`stream_logs` for that. ### Response: def get_logs(self, stdout=True, stderr=True, timestamps=False, tail='all', since=None): """ Get container logs. This method does not support streaming, use :meth:`stream_logs` for that. """ return self.inner().logs( stdout=stdout, stderr=stderr, timestamps=timestamps, tail=tail, since=since)
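Since get_logs simply forwards keyword arguments to the docker-py container returned by inner(), the forwarding can be demonstrated without a Docker daemon by substituting a recording stub (FakeContainer below is a test double, not part of any real API):

```python
class FakeContainer:
    """Test double for a docker-py container; records the kwargs it gets."""
    def logs(self, **kwargs):
        self.called_with = kwargs
        return b'line1\nline2\n'

class Wrapper:
    def __init__(self, container):
        self._container = container

    def inner(self):
        return self._container

    def get_logs(self, stdout=True, stderr=True, timestamps=False,
                 tail='all', since=None):
        # Same forwarding shape as the method above.
        return self.inner().logs(stdout=stdout, stderr=stderr,
                                 timestamps=timestamps, tail=tail,
                                 since=since)

w = Wrapper(FakeContainer())
print(w.get_logs(tail=10))  # b'line1\nline2\n'
```

Checking `called_with` afterwards confirms that every keyword, including the defaults, reaches the underlying logs call unchanged.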