def setup_logging(handler, exclude=EXCLUDE_LOGGER_DEFAULTS):
    """
    Configures logging to pipe to Sentry.

    - ``exclude`` is a list of loggers that shouldn't go to Sentry.

    For a typical Python install:

    >>> from raven.handlers.logging import SentryHandler
    >>> client = Sentry(...)
    >>> setup_logging(SentryHandler(client))

    Within Django:

    >>> from raven.contrib.django.handlers import SentryHandler
    >>> setup_logging(SentryHandler())

    Returns a boolean based on if logging was configured or not.
    """
    logger = logging.getLogger()
    if handler.__class__ in map(type, logger.handlers):
        return False

    logger.addHandler(handler)

    # Add StreamHandler to sentry's default so you can catch missed exceptions
    for logger_name in exclude:
        logger = logging.getLogger(logger_name)
        logger.propagate = False
        logger.addHandler(logging.StreamHandler())

    return True
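The class-based duplicate guard above can be sketched in isolation. `add_once` below is a hypothetical stand-in that applies the same check to any logger; the logger name "example" is invented for the demonstration.

```python
import logging

# Minimal sketch of the duplicate guard setup_logging uses: refuse to add
# a handler whose class is already installed on the logger.
def add_once(logger, handler):
    if handler.__class__ in map(type, logger.handlers):
        return False
    logger.addHandler(handler)
    return True

logger = logging.getLogger("example")  # hypothetical logger name
print(add_once(logger, logging.StreamHandler()))  # True: first of its class
print(add_once(logger, logging.StreamHandler()))  # False: class already present
```

Note the guard compares handler *classes*, not instances, so two differently configured handlers of the same type still count as duplicates.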
def getThroughput(self, instId: int) -> float:
    """
    Return the throughput of the specified instance.

    :param instId: the id of the protocol instance
    """
    # We are using the instanceStarted time in the denominator instead of
    # a time interval. This is alright for now as all the instances on a
    # node are started at almost the same time.
    if instId not in self.instances.ids:
        return None
    perf_time = time.perf_counter()
    throughput = self.throughputs[instId].get_throughput(perf_time)
    return throughput
def str_presenter(dmpr, data):
    """Return correct str_presenter to write multiple lines to a yaml field.

    Source: http://stackoverflow.com/a/33300001
    """
    if is_multiline(data):
        return dmpr.represent_scalar('tag:yaml.org,2002:str', data, style='|')
    return dmpr.represent_scalar('tag:yaml.org,2002:str', data)
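The branch above hinges on `is_multiline`, a helper the snippet assumes exists elsewhere. A stdlib-only sketch of the decision it drives, assuming "multiline" simply means "contains a newline":

```python
# Hypothetical stand-in for the is_multiline helper the snippet assumes.
def is_multiline(data):
    return "\n" in data

# Mirror of the style choice: '|' selects YAML's literal block style,
# which preserves embedded newlines; None falls back to the default.
def choose_style(data):
    return "|" if is_multiline(data) else None

print(choose_style("single line"))    # None: default scalar style
print(choose_style("first\nsecond"))  # |: literal block style
```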
def to_internal_value(self, value):
    """Convert to integer id."""
    natural_key = value.split("_")
    content_type = ContentType.objects.get_by_natural_key(*natural_key)
    return content_type.id
def make_signer(self, salt=None):
    """A method that creates a new instance of the signer to be used.

    The default implementation uses the :class:`Signer` baseclass.
    """
    if salt is None:
        salt = self.salt
    return self.signer(self.secret_key, salt=salt, **self.signer_kwargs)
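`Signer` here is the itsdangerous-style class the surrounding code supplies. The role the salt plays can be sketched with the stdlib alone; the key-derivation scheme below is illustrative only, not itsdangerous' actual one:

```python
import hashlib
import hmac

# Illustrative salted signing: the salt namespaces signatures, so a token
# minted for one purpose does not verify for another. This derivation is a
# stdlib stand-in, not the real library's scheme.
def sign(secret_key, salt, value):
    key = hmac.new(salt.encode(), secret_key.encode(), hashlib.sha256).digest()
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

reset = sign("secret", "password-reset", "user:42")
confirm = sign("secret", "email-confirm", "user:42")
print(reset == confirm)  # False: different salts, different signatures
```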
def print_tools(self, pattern=None, buf=sys.stdout):
    """Print a list of visible tools.

    Args:
        pattern (str): Only list tools that match this glob pattern.
    """
    seen = set()
    rows = []

    context = self.context
    if context:
        data = context.get_tools()
        conflicts = set(context.get_conflicting_tools().keys())

        for _, (variant, tools) in sorted(data.items()):
            pkg_str = variant.qualified_package_name
            for tool in tools:
                if pattern and not fnmatch(tool, pattern):
                    continue
                if tool in conflicts:
                    label = "(in conflict)"
                    color = critical
                else:
                    label = ''
                    color = None
                rows.append([tool, '-', pkg_str, "active context", label, color])
                seen.add(tool)

    for suite in self.suites:
        # items() replaces the Python 2-only iteritems() in the original
        for tool, d in suite.get_tools().items():
            if tool in seen:
                continue
            if pattern and not fnmatch(tool, pattern):
                continue

            label = []
            color = None
            path = which(tool)
            if path:
                path_ = os.path.join(suite.tools_path, tool)
                if path != path_:
                    label.append("(hidden by unknown tool '%s')" % path)
                    color = warning

            variant = d["variant"]
            if isinstance(variant, set):
                pkg_str = ", ".join(variant)
                label.append("(in conflict)")
                color = critical
            else:
                pkg_str = variant.qualified_package_name

            orig_tool = d["tool_name"]
            if orig_tool == tool:
                orig_tool = '-'

            label = ' '.join(label)
            source = ("context '%s' in suite '%s'"
                      % (d["context_name"], suite.load_path))
            rows.append([tool, orig_tool, pkg_str, source, label, color])
            seen.add(tool)

    _pr = Printer(buf)
    if not rows:
        _pr("No matching tools.")
        return False

    headers = [["TOOL", "ALIASING", "PACKAGE", "SOURCE", "", None],
               ["----", "--------", "-------", "------", "", None]]
    rows = headers + sorted(rows, key=lambda x: x[0].lower())
    print_colored_columns(_pr, rows)
    return True
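The pattern filter relies on `fnmatch`, which matches shell-style globs rather than regular expressions. A quick illustration (tool names invented):

```python
from fnmatch import fnmatch

# Shell-style glob matching, as used by the pattern filter above:
# "*" matches any run of characters, "?" a single character.
tools = ["maya", "mayapy", "houdini"]
print([t for t in tools if fnmatch(t, "maya*")])  # ['maya', 'mayapy']
```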
def cleanup_temporary_directories(self):
    """Delete the build directories and any temporary directories created by pip."""
    while self.build_directories:
        shutil.rmtree(self.build_directories.pop())
    for requirement in self.reported_requirements:
        requirement.remove_temporary_source()
    while self.eggs_links:
        symbolic_link = self.eggs_links.pop()
        if os.path.islink(symbolic_link):
            os.unlink(symbolic_link)
def compute_error(self, **args):
    """
    Computes error for all non-output layers backwards through all
    projections.
    """
    for key in args:
        layer = self.getLayer(key)
        if layer.kind == 'Output':
            self.copyTargets(layer, args[key])
    self.verifyTargets()  # better have targets set
    error, correct, total = self.ce_init()
    pcorrect = {}
    # go backwards through each proj but don't redo output errors!
    if len(self.cacheConnections) != 0:
        changeConnections = self.cacheConnections
    else:
        changeConnections = self.connections
    # reversed() is the builtin; the original called a nonexistent reverse()
    for connect in reversed(changeConnections):
        if connect.active and connect.toLayer.active and connect.fromLayer.active:
            connect.toLayer.delta = (connect.toLayer.error *
                                     (self.ACTPRIME(connect.toLayer.activation)))
            connect.fromLayer.error = connect.fromLayer.error + \
                Numeric.matrixmultiply(connect.weight, connect.toLayer.delta)
    # now all errors are set on all layers!
    pcorrect = self.getLayerErrors()
    return (error, correct, total, pcorrect)
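The per-layer update inside the loop is the standard delta rule. A scalar sketch, with the logistic derivative a * (1 - a) standing in for the `ACTPRIME` helper the class supplies:

```python
# Scalar sketch of the delta computation: delta = error * f'(activation).
# The logistic derivative f'(a) = a * (1 - a) stands in for ACTPRIME here;
# the real code applies the same rule element-wise over whole layers.
def delta(error, activation):
    return error * activation * (1.0 - activation)

print(delta(0.5, 0.8))  # 0.5 * 0.8 * 0.2
```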
def IsHuntStarted(self):
    """Is this hunt considered started?

    This method is used to check if new clients should be processed by
    this hunt. Note that child flow responses are always processed but
    new clients are not allowed to be scheduled unless the hunt is
    started.

    Returns:
      If a new client is allowed to be scheduled on this hunt.
    """
    state = self.hunt_obj.Get(self.hunt_obj.Schema.STATE)
    if state != "STARTED":
        return False

    # Stop the hunt due to expiry.
    if self.CheckExpiry():
        return False

    return True
def create_transaction(self, *args: Any, **kwargs: Any) -> BaseTransaction:
    """
    Passthrough helper to the current VM class.
    """
    return self.get_vm().create_transaction(*args, **kwargs)
def RingToneStatus(self, Id=1, Set=None):
    """Enables/disables a ringtone.

    :Parameters:
      Id : int
        Ringtone Id
      Set : bool
        True/False if the ringtone should be enabled/disabled or None if
        the current status should be queried.

    :return: Current status if Set=None, None otherwise.
    :rtype: bool
    """
    if Set is None:
        return (self._Skype._Property('RINGTONE', Id, 'STATUS') == 'ON')
    self._Skype._Property('RINGTONE', Id, 'STATUS', cndexp(Set, 'ON', 'OFF'))
def poll(self):
    """
    Check if the pod is still running.

    Uses the same interface as subprocess.Popen.poll(): if the pod is
    still running, returns None. If the pod has exited, return the exit
    code if we can determine it, or 1 if it has exited but we don't know
    how. These are the return values JupyterHub expects.

    Note that a clean exit will have an exit code of zero, so it is
    necessary to check that the returned value is None, rather than just
    Falsy, to determine that the pod is still running.
    """
    # have to wait for first load of data before we have a valid answer
    if not self.pod_reflector.first_load_future.done():
        yield self.pod_reflector.first_load_future

    data = self.pod_reflector.pods.get(self.pod_name, None)
    if data is not None:
        if data.status.phase == 'Pending':
            return None
        ctr_stat = data.status.container_statuses
        if ctr_stat is None:  # No status, no container (we hope)
            # This seems to happen when a pod is idle-culled.
            return 1
        for c in ctr_stat:
            # return exit code if notebook container has terminated
            if c.name == 'notebook':
                if c.state.terminated:
                    # call self.stop to delete the pod
                    if self.delete_stopped_pods:
                        yield self.stop(now=True)
                    return c.state.terminated.exit_code
                break
        # None means pod is running or starting up
        return None
    # pod doesn't exist or has been deleted
    return 1
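The control flow above reduces to a small decision table. A pure-Python sketch with `(name, exit_code)` tuples standing in for Kubernetes container statuses (`exit_code` is None while the container runs; no Kubernetes client or reflector involved):

```python
# Sketch of poll()'s decision table: Pending pod -> None (still starting);
# no container statuses -> 1 (assume culled); terminated notebook
# container -> its exit code; otherwise -> None (running).
def poll_result(phase, statuses):
    if phase == "Pending":
        return None
    if statuses is None:
        return 1
    for name, exit_code in statuses:
        if name == "notebook":
            if exit_code is not None:
                return exit_code
            break
    return None

print(poll_result("Running", [("notebook", None)]))  # None: running
print(poll_result("Running", [("notebook", 0)]))     # 0: clean exit
print(poll_result("Running", None))                  # 1: unknown exit
```

This makes the docstring's warning concrete: a clean exit returns 0, which is falsy, so callers must test `is None` rather than truthiness.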
def chi2_adaptive_binning(features_0, features_1, number_of_splits_list,
                          systematics_fraction=0.0, title="title", name="name",
                          PLOT=True, DEBUG=False, transform='StandardScalar'):
    """This function takes in two 2D arrays with all features being columns"""
    max_number_of_splits = np.max(number_of_splits_list)
    # determine how many data points are in each sample
    no_0 = features_0.shape[0]
    no_1 = features_1.shape[0]
    print("features_0.shape : ", features_0.shape)
    no_dim = features_0.shape[1]
    # Give all samples in file 0 the label 0 and in file 1 the label 1
    label_0 = np.zeros((no_0, 1))
    label_1 = np.ones((no_1, 1))
    # Create an array containing samples and features.
    data_0 = np.c_[features_0, label_0]
    data_1 = np.c_[features_1, label_1]
    features = np.r_[features_0, features_1]
    labels = np.r_[label_0, label_1]
    data = np.r_[data_0, data_1]
    data_same = np.c_[features, labels]
    assert np.sum(data != data_same) == 0
    assert (no_dim == data.shape[1] - 1)

    if no_dim == 2:
        plt.scatter(features[:, 0], features[:, 1], 0.1)
        plt.savefig('test.png')
        plt.clf()

    if transform == 'StandardScalar':
        features = preprocessing.scale(features)
        data = np.c_[features, labels]
    if transform == 'uniform':
        data_new = norm_highD_searchsorted(data[:, 0])
        for D in range(1, no_dim):
            temp = norm_highD_searchsorted(data[:, D])
            data_new = np.c_[data_new, temp]
        data_new = np.c_[data_new, np.r_[label_0, label_1]]
        print("data : ", data)
        data = data_new
        print("data new : ", data)

    np.random.shuffle(data)
    assert (no_dim == data.shape[1] - 1)
    labels = data[:, -1]
    X_values = data[:, :-1]
    X_max = np.amax(data, axis=0)[:-1]
    X_min = np.amin(data, axis=0)[:-1]
    X_total_width = np.subtract(X_max, X_min)
    del data

    if transform == 'fill01':
        # Scaling
        X_values = X_values - X_min[None, :]
        X_values = X_values / X_total_width[None, :]
        if True:
            X_min = [0.] * no_dim
            X_total_width = [1.] * no_dim

    data = np.concatenate((X_values, labels[:, None]), axis=1)
    if no_dim == 2:
        plt.scatter(data[:, 0], data[:, 1], 0.1)
        plt.savefig('test_scaled.png')

    starting_boundary = []
    for i in range(no_dim):
        starting_boundary.append([0.0, 1.0])
    # Each key has the following structure: # of splits, and for each split
    # whether the bin was closer to (a) or further away from (b) the origin.
    # The original bin is "0". For example "2ab" means the bin that was
    # closer to the origin for the first split and further away for the
    # second one.
    bin_boundaries_dict = {'0': np.array(starting_boundary)}
    bin_points_dict = {'0': data}

    for split_number in range(1, 1 + max_number_of_splits):
        # iterate over a snapshot: the dict is mutated inside the loop,
        # which would raise RuntimeError on a live items() view
        for bin_key, bin_boundary in list(bin_boundaries_dict.items()):
            if str(split_number - 1) in bin_key:
                variances = np.var(bin_points_dict[bin_key][:, :-1], axis=0)
                dim_to_be_sliced = np.argmax(variances)
                median = np.median(bin_points_dict[bin_key][:, dim_to_be_sliced])
                a_bin_boundary, b_bin_boundary = bin_boundary.copy(), bin_boundary.copy()
                a_bin_boundary[dim_to_be_sliced, 1] = median
                b_bin_boundary[dim_to_be_sliced, 0] = median
                bin_boundaries_dict[str(split_number) + bin_key[1:] + 'a'] = a_bin_boundary
                bin_boundaries_dict[str(split_number) + bin_key[1:] + 'b'] = b_bin_boundary

                a_points, b_points = [], []
                for event_number in range(bin_points_dict[bin_key].shape[0]):
                    if bin_points_dict[bin_key][event_number, dim_to_be_sliced] < median:
                        a_points.append(bin_points_dict[bin_key][event_number, :].tolist())
                    else:
                        b_points.append(bin_points_dict[bin_key][event_number, :].tolist())

                bin_points_dict[str(split_number) + bin_key[1:] + 'a'] = np.array(a_points)
                bin_points_dict[str(split_number) + bin_key[1:] + 'b'] = np.array(b_points)

                # If a bin contains no particles it should be deleted
                if len(a_points) == 0:
                    del bin_points_dict[str(split_number) + bin_key[1:] + 'a']
                    del bin_boundaries_dict[str(split_number) + bin_key[1:] + 'a']
                if len(b_points) == 0:
                    del bin_points_dict[str(split_number) + bin_key[1:] + 'b']
                    del bin_boundaries_dict[str(split_number) + bin_key[1:] + 'b']

    if PLOT:
        pickle.dump(bin_boundaries_dict, open("bin_boundaries_dict.p", "wb"))

    bins_sample01_dict = {}
    signed_Scp2_dict = {}
    results_list = []
    for number_of_splits in number_of_splits_list:
        print("\nnumber_of_splits : ", number_of_splits,
              "\nsystematics_fraction : ", systematics_fraction)
        bins_sample0, bins_sample1 = [], []
        for bin_key, bin_points in bin_points_dict.items():
            if str(number_of_splits) in bin_key:
                labels_in_bin = bin_points[:, -1]
                bin_sample0 = np.count_nonzero(labels_in_bin == 0)
                bin_sample1 = np.count_nonzero(labels_in_bin == 1)
                # simulate uncertainties
                if systematics_fraction * float(bin_sample0) != 0.:
                    bin_sample0 += int(round(np.random.normal(
                        0., systematics_fraction * float(bin_sample0))))
                if systematics_fraction * float(bin_sample1) != 0.:
                    bin_sample1 += int(round(np.random.normal(
                        0., systematics_fraction * float(bin_sample1))))
                bins_sample01_dict[bin_key] = [bin_sample0, bin_sample1]
                # the original repeated the bin_sample1 systematics term
                # twice; the second term belongs to bin_sample0, matching
                # the vectorized Scp2 below
                signed_Scp2_dict[bin_key] = (
                    np.square(float(bin_sample1 - bin_sample0))
                    / (float(bin_sample1) + float(bin_sample0)
                       + np.square(float(bin_sample1) * systematics_fraction)
                       + np.square(float(bin_sample0) * systematics_fraction))
                    * np.sign(bin_sample1 - bin_sample0))
                bins_sample0.append(bin_sample0)
                bins_sample1.append(bin_sample1)

        bins_sample0 = np.array(bins_sample0, dtype=float)
        bins_sample1 = np.array(bins_sample1, dtype=float)
        print("bins_sample0 : ", bins_sample0, "\n bins_sample1 : ", bins_sample1)
        # element-wise subtraction and division
        Scp2 = ((bins_sample1 - bins_sample0) ** 2 /
                (bins_sample1 + bins_sample0
                 + (systematics_fraction * bins_sample1) ** 2
                 + (systematics_fraction * bins_sample0) ** 2))
        if DEBUG:
            print(Scp2)
        # nansum ignores all the contributions that are Not A Number (NAN)
        Chi2 = np.nansum(Scp2)
        if DEBUG:
            print("Chi2")
            print(Chi2)
        dof = bins_sample0.shape[0] - 1
        pvalue = 1 - stats.chi2.cdf(Chi2, dof)
        print("\nThe p value for Scp2 = ", Scp2, " and Chi2 = ", Chi2,
              " is ", pvalue, "\n\n")
        if DEBUG:
            print(bins_sample0)
            print(bins_sample1)
        print("Chi2/dof : {0}".format(str(Chi2 / dof)))
        print("pvalue : {0}".format(str(pvalue)))
        results_list.append(pvalue)

        if PLOT:
            if no_dim == 1:
                chi2_plots.adaptive_binning_1Dplot(
                    bin_boundaries_dict, data, number_of_splits,
                    title + " " + str(no_dim) + "D " + str(number_of_splits) + " splits ",
                    name + "_" + str(no_dim) + "D_chi2_" + str(number_of_splits) + "_splits")
            if no_dim == 2:
                chi2_plots.adaptive_binning_2Dplot(
                    bin_boundaries_dict, signed_Scp2_dict, number_of_splits, X_values,
                    title + " " + str(no_dim) + "D" + str(number_of_splits) + " splits ",
                    name + "_" + str(no_dim) + "D_chi2_" + str(number_of_splits) + "_splits",
                    X_min=X_min, X_total_width=X_total_width)
            if no_dim > 1:
                chi2_plots.adaptive_binning_2D1Dplot(
                    bin_boundaries_dict, bins_sample01_dict, number_of_splits, X_values,
                    title + " " + str(no_dim) + "D" + str(number_of_splits) + " splits ",
                    name + "_" + str(no_dim) + "D_chi2_" + str(number_of_splits) + "_splits",
                    no_dim)

    return results_list
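The per-bin statistic summed into Chi2 can be worked through with the stdlib alone. The function below mirrors the Scp2 formula from the code; the one-degree-of-freedom p-value identity `1 - chi2.cdf(x, 1) == erfc(sqrt(x / 2))` is used so no SciPy is needed for the example.

```python
import math

# Worked example of the per-bin statistic:
#   Scp2 = (n1 - n0)^2 / (n1 + n0 + (f*n1)^2 + (f*n0)^2)
# where f is the systematics fraction; f = 0 gives the plain chi-square term.
def scp2(n0, n1, f=0.0):
    return (n1 - n0) ** 2 / (n1 + n0 + (f * n1) ** 2 + (f * n0) ** 2)

chi2 = scp2(100, 120)
print(chi2)  # (120-100)^2 / 220 = 400/220
# For a single degree of freedom the survival function reduces to erfc:
print(math.erfc(math.sqrt(chi2 / 2)))
```

Note how a nonzero `f` inflates the denominator, shrinking each bin's contribution and making the test more conservative under systematic uncertainty.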
def wc(name_mode="_", head=None, args=None, kwargs=None, *, conditions=None) \ -> Pattern: """Constructor for a wildcard-:class:`Pattern` Helper function to create a Pattern object with an emphasis on wildcard patterns, if we don't care about the arguments of the matched expressions (otherwise, use :func:`pattern`) Args: name_mode (str): Combined `wc_name` and `mode` for :class:`Pattern` constructor argument. See below for syntax head (type, or None): See :class:`Pattern` args (list or None): See :class:`Pattern` kwargs (dict or None): See :class:`Pattern` conditions (list or None): See :class:`Pattern` The `name_mode` argument uses trailing underscores to indicate the `mode`: * ``A`` -> ``Pattern(wc_name="A", mode=Pattern.single, ...)`` * ``A_`` -> ``Pattern(wc_name="A", mode=Pattern.single, ...)`` * ``B__`` -> ``Pattern(wc_name="B", mode=Pattern.one_or_more, ...)`` * ``B___`` -> ``Pattern(wc_name="B", mode=Pattern.zero_or_more, ...)`` """ rx = re.compile(r"^([A-Za-z]?[A-Za-z0-9]*)(_{0,3})$") m = rx.match(name_mode) if not m: raise ValueError("Invalid name_mode: %s" % name_mode) wc_name, mode_underscores = m.groups() if wc_name == '': wc_name = None mode = len(mode_underscores) or Pattern.single return Pattern(head, args, kwargs, mode=mode, wc_name=wc_name, conditions=conditions)
Constructor for a wildcard-:class:`Pattern` Helper function to create a Pattern object with an emphasis on wildcard patterns, if we don't care about the arguments of the matched expressions (otherwise, use :func:`pattern`) Args: name_mode (str): Combined `wc_name` and `mode` for :class:`Pattern` constructor argument. See below for syntax head (type, or None): See :class:`Pattern` args (list or None): See :class:`Pattern` kwargs (dict or None): See :class:`Pattern` conditions (list or None): See :class:`Pattern` The `name_mode` argument uses trailing underscores to indicate the `mode`: * ``A`` -> ``Pattern(wc_name="A", mode=Pattern.single, ...)`` * ``A_`` -> ``Pattern(wc_name="A", mode=Pattern.single, ...)`` * ``B__`` -> ``Pattern(wc_name="B", mode=Pattern.one_or_more, ...)`` * ``B___`` -> ``Pattern(wc_name="B", mode=Pattern.zero_or_more, ...)``
Below is the instruction that describes the task: ### Input: Constructor for a wildcard-:class:`Pattern` Helper function to create a Pattern object with an emphasis on wildcard patterns, if we don't care about the arguments of the matched expressions (otherwise, use :func:`pattern`) Args: name_mode (str): Combined `wc_name` and `mode` for :class:`Pattern` constructor argument. See below for syntax head (type, or None): See :class:`Pattern` args (list or None): See :class:`Pattern` kwargs (dict or None): See :class:`Pattern` conditions (list or None): See :class:`Pattern` The `name_mode` argument uses trailing underscores to indicate the `mode`: * ``A`` -> ``Pattern(wc_name="A", mode=Pattern.single, ...)`` * ``A_`` -> ``Pattern(wc_name="A", mode=Pattern.single, ...)`` * ``B__`` -> ``Pattern(wc_name="B", mode=Pattern.one_or_more, ...)`` * ``B___`` -> ``Pattern(wc_name="B", mode=Pattern.zero_or_more, ...)`` ### Response: def wc(name_mode="_", head=None, args=None, kwargs=None, *, conditions=None) \ -> Pattern: """Constructor for a wildcard-:class:`Pattern` Helper function to create a Pattern object with an emphasis on wildcard patterns, if we don't care about the arguments of the matched expressions (otherwise, use :func:`pattern`) Args: name_mode (str): Combined `wc_name` and `mode` for :class:`Pattern` constructor argument.
See below for syntax head (type, or None): See :class:`Pattern` args (list or None): See :class:`Pattern` kwargs (dict or None): See :class:`Pattern` conditions (list or None): See :class:`Pattern` The `name_mode` argument uses trailing underscores to indicate the `mode`: * ``A`` -> ``Pattern(wc_name="A", mode=Pattern.single, ...)`` * ``A_`` -> ``Pattern(wc_name="A", mode=Pattern.single, ...)`` * ``B__`` -> ``Pattern(wc_name="B", mode=Pattern.one_or_more, ...)`` * ``B___`` -> ``Pattern(wc_name="B", mode=Pattern.zero_or_more, ...)`` """ rx = re.compile(r"^([A-Za-z]?[A-Za-z0-9]*)(_{0,3})$") m = rx.match(name_mode) if not m: raise ValueError("Invalid name_mode: %s" % name_mode) wc_name, mode_underscores = m.groups() if wc_name == '': wc_name = None mode = len(mode_underscores) or Pattern.single return Pattern(head, args, kwargs, mode=mode, wc_name=wc_name, conditions=conditions)
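As a sanity check on the `name_mode` grammar documented above, the underscore-suffix parsing can be reproduced in isolation. This is an illustrative sketch only: `parse_name_mode` and the integer mode constants are stand-ins for the real `Pattern` class and its mode attributes, which are not shown here.

```python
import re

# Stand-ins for the Pattern mode constants (assumed values, illustration only)
SINGLE, ONE_OR_MORE, ZERO_OR_MORE = 1, 2, 3

def parse_name_mode(name_mode):
    """Split a wildcard spec like 'B__' into (wc_name, mode)."""
    m = re.match(r"^([A-Za-z]?[A-Za-z0-9]*)(_{0,3})$", name_mode)
    if not m:
        raise ValueError("Invalid name_mode: %s" % name_mode)
    wc_name, underscores = m.groups()
    # Zero underscores falls through to single-expression matching,
    # mirroring `len(mode_underscores) or Pattern.single` above
    mode = len(underscores) or SINGLE
    return (wc_name or None, mode)
```

Note that `A` and `A_` deliberately parse the same way: an empty underscore suffix defaults to the single mode.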
def paginate(self, request, collection): """ Paginate collection. :return object: Collection or paginator """ p = Paginator(request, self, collection) return p.paginator and p or UpdatedList(collection)
Paginate collection. :return object: Collection or paginator
Below is the instruction that describes the task: ### Input: Paginate collection. :return object: Collection or paginator ### Response: def paginate(self, request, collection): """ Paginate collection. :return object: Collection or paginator """ p = Paginator(request, self, collection) return p.paginator and p or UpdatedList(collection)
def load_module(self, loader): ''' Load the module. Required for the Python meta-loading mechanism. ''' modfile, pathname, description = loader.info module = imp.load_module( loader.fullname, modfile, pathname, description ) sys.modules[loader.fullname] = module self.__loaded_modules.add(loader.fullname) autodecorator.decorate_module(module, decorator=self.__decorator) return module
Load the module. Required for the Python meta-loading mechanism.
Below is the instruction that describes the task: ### Input: Load the module. Required for the Python meta-loading mechanism. ### Response: def load_module(self, loader): ''' Load the module. Required for the Python meta-loading mechanism. ''' modfile, pathname, description = loader.info module = imp.load_module( loader.fullname, modfile, pathname, description ) sys.modules[loader.fullname] = module self.__loaded_modules.add(loader.fullname) autodecorator.decorate_module(module, decorator=self.__decorator) return module
def check_exports(mod, specs, renamings): ''' Does nothing but raise a PythranSyntaxError if specs references an undefined global ''' functions = {renamings.get(k, k): v for k, v in specs.functions.items()} mod_functions = {node.name: node for node in mod.body if isinstance(node, ast.FunctionDef)} for fname, signatures in functions.items(): try: fnode = mod_functions[fname] except KeyError: raise PythranSyntaxError( "Invalid spec: exporting undefined function `{}`" .format(fname)) for signature in signatures: args_count = len(fnode.args.args) if len(signature) > args_count: raise PythranSyntaxError( "Too many arguments when exporting `{}`" .format(fname)) elif len(signature) < args_count - len(fnode.args.defaults): raise PythranSyntaxError( "Not enough arguments when exporting `{}`" .format(fname))
Does nothing but raise a PythranSyntaxError if specs references an undefined global
Below is the instruction that describes the task: ### Input: Does nothing but raise a PythranSyntaxError if specs references an undefined global ### Response: def check_exports(mod, specs, renamings): ''' Does nothing but raise a PythranSyntaxError if specs references an undefined global ''' functions = {renamings.get(k, k): v for k, v in specs.functions.items()} mod_functions = {node.name: node for node in mod.body if isinstance(node, ast.FunctionDef)} for fname, signatures in functions.items(): try: fnode = mod_functions[fname] except KeyError: raise PythranSyntaxError( "Invalid spec: exporting undefined function `{}`" .format(fname)) for signature in signatures: args_count = len(fnode.args.args) if len(signature) > args_count: raise PythranSyntaxError( "Too many arguments when exporting `{}`" .format(fname)) elif len(signature) < args_count - len(fnode.args.defaults): raise PythranSyntaxError( "Not enough arguments when exporting `{}`" .format(fname))
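The arity checks in `check_exports` can be exercised standalone against source text with the standard `ast` module. A minimal sketch, with the caveat that the helper name and the returned error strings are illustrative; the original raises `PythranSyntaxError` instead of returning messages.

```python
import ast

def check_signature_arity(source, fname, signature):
    """Return None if `signature` fits function `fname` in `source`,
    else a short error string (a simplified version of the checks above)."""
    tree = ast.parse(source)
    funcs = {node.name: node for node in tree.body
             if isinstance(node, ast.FunctionDef)}
    if fname not in funcs:
        return "exporting undefined function `%s`" % fname
    args = funcs[fname].args
    # Arguments with defaults are optional, so the minimum arity is
    # total positional args minus the number of defaults
    required = len(args.args) - len(args.defaults)
    if len(signature) > len(args.args):
        return "too many arguments"
    if len(signature) < required:
        return "not enough arguments"
    return None
```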
def _apply_workspaces(self, combination, mode): """ Allows user to force move a comma separated list of workspaces to the given output when it's activated. Example: - DP1_workspaces = "1,2,3" """ if len(combination) > 1 and mode == "extend": sleep(3) for output in combination: workspaces = getattr(self, "{}_workspaces".format(output), "").split( "," ) for workspace in workspaces: if not workspace: continue # switch to workspace cmd = '{} workspace "{}"'.format(self.py3.get_wm_msg(), workspace) self.py3.command_run(cmd) # move it to output cmd = '{} move workspace to output "{}"'.format( self.py3.get_wm_msg(), output ) self.py3.command_run(cmd) # log this self.py3.log( "moved workspace {} to output {}".format(workspace, output) )
Allows user to force move a comma separated list of workspaces to the given output when it's activated. Example: - DP1_workspaces = "1,2,3"
Below is the instruction that describes the task: ### Input: Allows user to force move a comma separated list of workspaces to the given output when it's activated. Example: - DP1_workspaces = "1,2,3" ### Response: def _apply_workspaces(self, combination, mode): """ Allows user to force move a comma separated list of workspaces to the given output when it's activated. Example: - DP1_workspaces = "1,2,3" """ if len(combination) > 1 and mode == "extend": sleep(3) for output in combination: workspaces = getattr(self, "{}_workspaces".format(output), "").split( "," ) for workspace in workspaces: if not workspace: continue # switch to workspace cmd = '{} workspace "{}"'.format(self.py3.get_wm_msg(), workspace) self.py3.command_run(cmd) # move it to output cmd = '{} move workspace to output "{}"'.format( self.py3.get_wm_msg(), output ) self.py3.command_run(cmd) # log this self.py3.log( "moved workspace {} to output {}".format(workspace, output) )
def construct_survival_curves(hazard_rates, timelines): """ Given hazard rates, reconstruct the survival curves Parameters ---------- hazard_rates: (n,t) array timelines: (t,) the observational times Returns ------- t: survival curves, (n,t) array """ cumulative_hazards = cumulative_integral(hazard_rates.values, timelines) return pd.DataFrame(np.exp(-cumulative_hazards), index=timelines)
Given hazard rates, reconstruct the survival curves Parameters ---------- hazard_rates: (n,t) array timelines: (t,) the observational times Returns ------- t: survival curves, (n,t) array
Below is the instruction that describes the task: ### Input: Given hazard rates, reconstruct the survival curves Parameters ---------- hazard_rates: (n,t) array timelines: (t,) the observational times Returns ------- t: survival curves, (n,t) array ### Response: def construct_survival_curves(hazard_rates, timelines): """ Given hazard rates, reconstruct the survival curves Parameters ---------- hazard_rates: (n,t) array timelines: (t,) the observational times Returns ------- t: survival curves, (n,t) array """ cumulative_hazards = cumulative_integral(hazard_rates.values, timelines) return pd.DataFrame(np.exp(-cumulative_hazards), index=timelines)
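The reconstruction above relies on the identity S(t) = exp(-H(t)), where H is the cumulative hazard. The `cumulative_integral` helper is not shown, so the plain-Python sketch below approximates it with a cumulative trapezoid rule; that choice is an assumption about the helper, and the function name is illustrative.

```python
import math

def survival_from_hazard(hazard, timelines):
    """Reconstruct S(t) = exp(-H(t)) for one hazard-rate series,
    cumulating trapezoid areas between consecutive time points."""
    surv, cum_hazard = [1.0], 0.0  # S(t0) = 1 by definition
    for i in range(1, len(timelines)):
        dt = timelines[i] - timelines[i - 1]
        cum_hazard += 0.5 * (hazard[i] + hazard[i - 1]) * dt
        surv.append(math.exp(-cum_hazard))
    return surv
```

For a constant hazard rate lambda this recovers the exponential survival curve exp(-lambda * t) exactly, since the trapezoid rule is exact for constant integrands.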
def get_segmentation(X, rank, R, rank_labels, R_labels, niter=300, bound_idxs=None, in_labels=None): """ Gets the segmentation (boundaries and labels) from the factorization matrices. Parameters ---------- X: np.array() Features matrix (e.g. chromagram) rank: int Rank of decomposition R: int Size of the median filter for activation matrix niter: int Number of iterations for k-means bound_idxs : list Use previously found boundaries (None to detect them) in_labels : np.array() List of input labels (None to compute them) Returns ------- bounds_idx: np.array Bound indeces found labels: np.array Indeces of the labels representing the similarity between segments. """ #import pylab as plt #plt.imshow(X, interpolation="nearest", aspect="auto") #plt.show() # Find non filtered boundaries compute_bounds = True if bound_idxs is None else False while True: if bound_idxs is None: try: F, G = cnmf(X, rank, niter=niter, hull=False) except: return np.empty(0), [1] # Filter G G = filter_activation_matrix(G.T, R) if bound_idxs is None: bound_idxs = np.where(np.diff(G) != 0)[0] + 1 # Increase rank if we found too few boundaries if compute_bounds and len(np.unique(bound_idxs)) <= 2: rank += 1 bound_idxs = None else: break # Add first and last boundary bound_idxs = np.concatenate(([0], bound_idxs, [X.shape[1] - 1])) bound_idxs = np.asarray(bound_idxs, dtype=int) if in_labels is None: labels = compute_labels(X, rank_labels, R_labels, bound_idxs, niter=niter) else: labels = np.ones(len(bound_idxs) - 1) #plt.imshow(G[:, np.newaxis], interpolation="nearest", aspect="auto") #for b in bound_idxs: #plt.axvline(b, linewidth=2.0, color="k") #plt.show() return bound_idxs, labels
Gets the segmentation (boundaries and labels) from the factorization matrices. Parameters ---------- X: np.array() Features matrix (e.g. chromagram) rank: int Rank of decomposition R: int Size of the median filter for activation matrix niter: int Number of iterations for k-means bound_idxs : list Use previously found boundaries (None to detect them) in_labels : np.array() List of input labels (None to compute them) Returns ------- bounds_idx: np.array Bound indices found labels: np.array Indices of the labels representing the similarity between segments.
Below is the the instruction that describes the task: ### Input: Gets the segmentation (boundaries and labels) from the factorization matrices. Parameters ---------- X: np.array() Features matrix (e.g. chromagram) rank: int Rank of decomposition R: int Size of the median filter for activation matrix niter: int Number of iterations for k-means bound_idxs : list Use previously found boundaries (None to detect them) in_labels : np.array() List of input labels (None to compute them) Returns ------- bounds_idx: np.array Bound indeces found labels: np.array Indeces of the labels representing the similarity between segments. ### Response: def get_segmentation(X, rank, R, rank_labels, R_labels, niter=300, bound_idxs=None, in_labels=None): """ Gets the segmentation (boundaries and labels) from the factorization matrices. Parameters ---------- X: np.array() Features matrix (e.g. chromagram) rank: int Rank of decomposition R: int Size of the median filter for activation matrix niter: int Number of iterations for k-means bound_idxs : list Use previously found boundaries (None to detect them) in_labels : np.array() List of input labels (None to compute them) Returns ------- bounds_idx: np.array Bound indeces found labels: np.array Indeces of the labels representing the similarity between segments. 
""" #import pylab as plt #plt.imshow(X, interpolation="nearest", aspect="auto") #plt.show() # Find non filtered boundaries compute_bounds = True if bound_idxs is None else False while True: if bound_idxs is None: try: F, G = cnmf(X, rank, niter=niter, hull=False) except: return np.empty(0), [1] # Filter G G = filter_activation_matrix(G.T, R) if bound_idxs is None: bound_idxs = np.where(np.diff(G) != 0)[0] + 1 # Increase rank if we found too few boundaries if compute_bounds and len(np.unique(bound_idxs)) <= 2: rank += 1 bound_idxs = None else: break # Add first and last boundary bound_idxs = np.concatenate(([0], bound_idxs, [X.shape[1] - 1])) bound_idxs = np.asarray(bound_idxs, dtype=int) if in_labels is None: labels = compute_labels(X, rank_labels, R_labels, bound_idxs, niter=niter) else: labels = np.ones(len(bound_idxs) - 1) #plt.imshow(G[:, np.newaxis], interpolation="nearest", aspect="auto") #for b in bound_idxs: #plt.axvline(b, linewidth=2.0, color="k") #plt.show() return bound_idxs, labels
def navactive(request, urls): """ {% navactive request "view_name another_view_name" %} """ url_list = set(urls.split()) resolved = resolve(request.path) resolved_urls = set() if resolved.url_name: resolved_urls.add(resolved.url_name) if resolved.namespaces: resolved_urls = resolved_urls.union(["{}:{}".format(namespace, resolved.url_name) for namespace in resolved.namespaces]) resolved_urls = resolved_urls.union(["{}:".format(namespace) for namespace in resolved.namespaces]) if getattr(resolved, 'app_name', None): resolved_urls = resolved_urls.union(["{}:{}".format(resolved.app_name, resolved.url_name), "{}:".format(resolved.app_name)]) if getattr(resolved, 'app_names', []): resolved_urls = resolved_urls.union(["{}:{}".format(app_name, resolved.url_name) for app_name in resolved.app_names]) resolved_urls = resolved_urls.union(["{}:".format(app_name) for app_name in resolved.app_names]) if url_list and resolved_urls and bool(resolved_urls & url_list): return getattr(settings, "NAVHELPER_ACTIVE_CLASS", "active") return getattr(settings, "NAVHELPER_NOT_ACTIVE_CLASS", "")
{% navactive request "view_name another_view_name" %}
Below is the instruction that describes the task: ### Input: {% navactive request "view_name another_view_name" %} ### Response: def navactive(request, urls): """ {% navactive request "view_name another_view_name" %} """ url_list = set(urls.split()) resolved = resolve(request.path) resolved_urls = set() if resolved.url_name: resolved_urls.add(resolved.url_name) if resolved.namespaces: resolved_urls = resolved_urls.union(["{}:{}".format(namespace, resolved.url_name) for namespace in resolved.namespaces]) resolved_urls = resolved_urls.union(["{}:".format(namespace) for namespace in resolved.namespaces]) if getattr(resolved, 'app_name', None): resolved_urls = resolved_urls.union(["{}:{}".format(resolved.app_name, resolved.url_name), "{}:".format(resolved.app_name)]) if getattr(resolved, 'app_names', []): resolved_urls = resolved_urls.union(["{}:{}".format(app_name, resolved.url_name) for app_name in resolved.app_names]) resolved_urls = resolved_urls.union(["{}:".format(app_name) for app_name in resolved.app_names]) if url_list and resolved_urls and bool(resolved_urls & url_list): return getattr(settings, "NAVHELPER_ACTIVE_CLASS", "active") return getattr(settings, "NAVHELPER_NOT_ACTIVE_CLASS", "")
def unlink(self): """ Unregisters the Link """ links = self.registry.get(self.source) if self in links: links.pop(links.index(self))
Unregisters the Link
Below is the instruction that describes the task: ### Input: Unregisters the Link ### Response: def unlink(self): """ Unregisters the Link """ links = self.registry.get(self.source) if self in links: links.pop(links.index(self))
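The unlink idiom above (look up the registry bucket for `self.source`, pop `self` if present) can be shown end to end with a minimal stand-in class. The constructor-side registration is an assumption about the surrounding library; only the `unlink` body mirrors the original.

```python
class Link:
    """Minimal sketch of the register/unlink pattern; the registry maps
    a source key to the list of links attached to it."""
    registry = {}

    def __init__(self, source):
        self.source = source
        self.registry.setdefault(source, []).append(self)

    def unlink(self):
        # Pop only if present, so unlinking twice is a harmless no-op
        links = self.registry.get(self.source, [])
        if self in links:
            links.pop(links.index(self))
```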
def get(self): """ Get a JSON-ready representation of this Attachment. :returns: This Attachment, ready for use in a request body. :rtype: dict """ attachment = {} if self.file_content is not None: attachment["content"] = self.file_content.get() if self.file_type is not None: attachment["type"] = self.file_type.get() if self.file_name is not None: attachment["filename"] = self.file_name.get() if self.disposition is not None: attachment["disposition"] = self.disposition.get() if self.content_id is not None: attachment["content_id"] = self.content_id.get() return attachment
Get a JSON-ready representation of this Attachment. :returns: This Attachment, ready for use in a request body. :rtype: dict
Below is the instruction that describes the task: ### Input: Get a JSON-ready representation of this Attachment. :returns: This Attachment, ready for use in a request body. :rtype: dict ### Response: def get(self): """ Get a JSON-ready representation of this Attachment. :returns: This Attachment, ready for use in a request body. :rtype: dict """ attachment = {} if self.file_content is not None: attachment["content"] = self.file_content.get() if self.file_type is not None: attachment["type"] = self.file_type.get() if self.file_name is not None: attachment["filename"] = self.file_name.get() if self.disposition is not None: attachment["disposition"] = self.disposition.get() if self.content_id is not None: attachment["content_id"] = self.content_id.get() return attachment
def rotate_left(self): """ Left rotation """ new_root = self.node.right.node new_left_sub = new_root.left.node old_root = self.node self.node = new_root old_root.right.node = new_left_sub new_root.left.node = old_root
Left rotation
Below is the instruction that describes the task: ### Input: Left rotation ### Response: def rotate_left(self): """ Left rotation """ new_root = self.node.right.node new_left_sub = new_root.left.node old_root = self.node self.node = new_root old_root.right.node = new_left_sub new_root.left.node = old_root
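The same left rotation, written against a bare node class for illustration. The original wraps children in tree objects (hence the `.node` indirection); this sketch uses direct child references, which makes the three pointer moves easier to follow.

```python
class Node:
    """Bare binary-tree node (no tree-wrapper indirection)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(root):
    """Promote root.right to the new root; its old left subtree
    becomes the old root's right child. Returns the new root."""
    new_root = root.right
    root.right = new_root.left
    new_root.left = root
    return new_root
```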
def execute_command(self, command, **kwargs): """Execute a command on the node Args: command (str) Kwargs: username (str) """ self.info_log("executing command: %s" % command) try: ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) username = kwargs.get( 'username', self.browser_config.get('username') ) password = self.browser_config.get('password') ssh.connect(self.get_ip(), username=username, password=password) stdin, stdout, stderr = ssh.exec_command(command) ssh.close() return (stdout, stderr) except Exception as e: msg = "Execute_command exception: %s" % str(e) self.error_log(msg) raise Exception(msg)
Execute a command on the node Args: command (str) Kwargs: username (str)
Below is the instruction that describes the task: ### Input: Execute a command on the node Args: command (str) Kwargs: username (str) ### Response: def execute_command(self, command, **kwargs): """Execute a command on the node Args: command (str) Kwargs: username (str) """ self.info_log("executing command: %s" % command) try: ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) username = kwargs.get( 'username', self.browser_config.get('username') ) password = self.browser_config.get('password') ssh.connect(self.get_ip(), username=username, password=password) stdin, stdout, stderr = ssh.exec_command(command) ssh.close() return (stdout, stderr) except Exception as e: msg = "Execute_command exception: %s" % str(e) self.error_log(msg) raise Exception(msg)
def _get_filename(request, item): """ Creates a filename """ if request.keep_image_names: filename = OgcImageService.finalize_filename(item['niceName'].replace(' ', '_')) else: filename = OgcImageService.finalize_filename( '_'.join([str(GeopediaService._parse_layer(request.layer)), item['objectPath'].rsplit('/', 1)[-1]]), request.image_format ) LOGGER.debug("filename=%s", filename) return filename
Creates a filename
Below is the instruction that describes the task: ### Input: Creates a filename ### Response: def _get_filename(request, item): """ Creates a filename """ if request.keep_image_names: filename = OgcImageService.finalize_filename(item['niceName'].replace(' ', '_')) else: filename = OgcImageService.finalize_filename( '_'.join([str(GeopediaService._parse_layer(request.layer)), item['objectPath'].rsplit('/', 1)[-1]]), request.image_format ) LOGGER.debug("filename=%s", filename) return filename
def bind(self, **bindings): """Creates a new template with the given unbound variables bound. Args: **bindings: Arguments for every deferred parameter. Returns: A new template with the given bindings. Raises: ValueError: If any of the bindings do not correspond to unbound variables. """ new_context = dict(self._partial_context) unknown_keys = [] for k, v in six.iteritems(bindings): if k not in self._unbound_vars: unknown_keys.append(k) new_context[self._unbound_vars[k]] = v if unknown_keys: raise ValueError( 'The following keys are not associated with any unbound vars: %s, ' 'legal values are %s' % (unknown_keys, list(self._unbound_vars.keys()))) return _DeferredLayer(self.bookkeeper, None, (), {}, scope=self._scope, defaults=self._defaults, pass_through=self, partial_context=new_context)
Creates a new template with the given unbound variables bound. Args: **bindings: Arguments for every deferred parameter. Returns: A new template with the given bindings. Raises: ValueError: If any of the bindings do not correspond to unbound variables.
Below is the instruction that describes the task: ### Input: Creates a new template with the given unbound variables bound. Args: **bindings: Arguments for every deferred parameter. Returns: A new template with the given bindings. Raises: ValueError: If any of the bindings do not correspond to unbound variables. ### Response: def bind(self, **bindings): """Creates a new template with the given unbound variables bound. Args: **bindings: Arguments for every deferred parameter. Returns: A new template with the given bindings. Raises: ValueError: If any of the bindings do not correspond to unbound variables. """ new_context = dict(self._partial_context) unknown_keys = [] for k, v in six.iteritems(bindings): if k not in self._unbound_vars: unknown_keys.append(k) new_context[self._unbound_vars[k]] = v if unknown_keys: raise ValueError( 'The following keys are not associated with any unbound vars: %s, ' 'legal values are %s' % (unknown_keys, list(self._unbound_vars.keys()))) return _DeferredLayer(self.bookkeeper, None, (), {}, scope=self._scope, defaults=self._defaults, pass_through=self, partial_context=new_context)
def submission_status(self, link): """ Given the unique link of a submission, returns its current status. Keyword Arguments ----------------- * link: the unique id string of a submission Returns ------- A dictionary of the error, the result code and the status code. Notes ----- Status specifies the stage of execution. * status < 0 means the program awaits compilation * status == 0 means the program is done * status == 1 means the program is being compiled * status == 3 means the program is running Result specifies how the program finished. * result == 0 means not running, the program was submitted with run=False * result == 11 means compilation error * result == 12 means runtime error * result == 13 means timelimit exceeded * result == 15 means success * result == 17 means memory limit exceeded * result == 19 means illegal system call * result == 20 means Ideone internal error, submit a bug report Examples -------- >>> ideone_object = Ideone('username', 'password') >>> ideone_object.submission_status('LsSbo') {'error': 'OK', 'result': 15, 'status': 0} """ result = self.client.service.getSubmissionStatus(self.user, self.password, link) result_dict = Ideone._transform_to_dict(result) Ideone._handle_error(result_dict) return result_dict
Given the unique link of a submission, returns its current status. Keyword Arguments ----------------- * link: the unique id string of a submission Returns ------- A dictionary of the error, the result code and the status code. Notes ----- Status specifies the stage of execution. * status < 0 means the program awaits compilation * status == 0 means the program is done * status == 1 means the program is being compiled * status == 3 means the program is running Result specifies how the program finished. * result == 0 means not running, the program was submitted with run=False * result == 11 means compilation error * result == 12 means runtime error * result == 13 means timelimit exceeded * result == 15 means success * result == 17 means memory limit exceeded * result == 19 means illegal system call * result == 20 means Ideone internal error, submit a bug report Examples -------- >>> ideone_object = Ideone('username', 'password') >>> ideone_object.submission_status('LsSbo') {'error': 'OK', 'result': 15, 'status': 0}
Below is the instruction that describes the task: ### Input: Given the unique link of a submission, returns its current status. Keyword Arguments ----------------- * link: the unique id string of a submission Returns ------- A dictionary of the error, the result code and the status code. Notes ----- Status specifies the stage of execution. * status < 0 means the program awaits compilation * status == 0 means the program is done * status == 1 means the program is being compiled * status == 3 means the program is running Result specifies how the program finished. * result == 0 means not running, the program was submitted with run=False * result == 11 means compilation error * result == 12 means runtime error * result == 13 means timelimit exceeded * result == 15 means success * result == 17 means memory limit exceeded * result == 19 means illegal system call * result == 20 means Ideone internal error, submit a bug report Examples -------- >>> ideone_object = Ideone('username', 'password') >>> ideone_object.submission_status('LsSbo') {'error': 'OK', 'result': 15, 'status': 0} ### Response: def submission_status(self, link): """ Given the unique link of a submission, returns its current status. Keyword Arguments ----------------- * link: the unique id string of a submission Returns ------- A dictionary of the error, the result code and the status code. Notes ----- Status specifies the stage of execution. * status < 0 means the program awaits compilation * status == 0 means the program is done * status == 1 means the program is being compiled * status == 3 means the program is running Result specifies how the program finished.
* result == 0 means not running, the program was submitted with run=False * result == 11 means compilation error * result == 12 means runtime error * result == 13 means timelimit exceeded * result == 15 means success * result == 17 means memory limit exceeded * result == 19 means illegal system call * result == 20 means Ideone internal error, submit a bug report Examples -------- >>> ideone_object = Ideone('username', 'password') >>> ideone_object.submission_status('LsSbo') {'error': 'OK', 'result': 15, 'status': 0} """ result = self.client.service.getSubmissionStatus(self.user, self.password, link) result_dict = Ideone._transform_to_dict(result) Ideone._handle_error(result_dict) return result_dict
def setFont(self, font): """ Sets the font that will be returned when data() is called with the Qt.FontRole. Can be a QFont or None if no font is set. """ check_class(font, QtGui.QFont, allow_none=True) self._font = font
Sets the font that will be returned when data() is called with the Qt.FontRole. Can be a QFont or None if no font is set.
def get_ports(device_owners=None, vnic_type=None, port_id=None, active=True): """Returns list of all ports in the neutron db""" session = db.get_reader_session() with session.begin(): port_model = models_v2.Port ports = (session .query(port_model) .filter_unnecessary_ports(device_owners, vnic_type, active)) if port_id: ports = ports.filter(port_model.id == port_id) return ports.all()
Returns list of all ports in the neutron db
def capture_update_records(records): """Writes all updated configuration info to DynamoDB""" for rec in records: data = cloudwatch.get_historical_base_info(rec) group = describe_group(rec, cloudwatch.get_region(rec)) if len(group) > 1: raise Exception(f'[X] Multiple groups found. Record: {rec}') if not group: LOG.warning(f'[?] No group information found. Record: {rec}') continue group = group[0] # Determine event data for group - and pop off items that are going to the top-level: LOG.debug(f'Processing group. Group: {group}') data.update({ 'GroupId': group['GroupId'], 'GroupName': group.pop('GroupName'), 'VpcId': group.pop('VpcId', None), 'arn': get_arn(group.pop('GroupId'), cloudwatch.get_region(rec), group.pop('OwnerId')), 'Region': cloudwatch.get_region(rec) }) data['Tags'] = pull_tag_dict(group) # Set the remaining items to the configuration: data['configuration'] = group # Set the version: data['version'] = VERSION LOG.debug(f'[+] Writing Dynamodb Record. Records: {data}') current_revision = CurrentSecurityGroupModel(**data) current_revision.save()
Writes all updated configuration info to DynamoDB
def alias_symbol(self, alias_symbol=None, is_previous_symbol=None, hgnc_symbol=None, hgnc_identifier=None, limit=None, as_df=False): """Method to query :class:`.models.AliasSymbol` objects in database :param alias_symbol: alias symbol(s) :type alias_symbol: str or tuple(str) or None :param is_previous_symbol: flag for 'is previous' :type is_previous_symbol: bool or tuple(bool) or None :param hgnc_symbol: HGNC symbol(s) :type hgnc_symbol: str or tuple(str) or None :param hgnc_identifier: identifiers(s) in :class:`.models.HGNC` :type hgnc_identifier: int or tuple(int) or None :param limit: - if `isinstance(limit,int)==True` -> limit - if `isinstance(limit,tuple)==True` -> format:= tuple(page_number, results_per_page) - if limit == None -> all results :type limit: int or tuple(int) or None :param bool as_df: if `True` results are returned as :class:`pandas.DataFrame` :return: - if `as_df == False` -> list(:class:`.models.AliasSymbol`) - if `as_df == True` -> :class:`pandas.DataFrame` :rtype: list(:class:`.models.AliasSymbol`) or :class:`pandas.DataFrame` """ q = self.session.query(models.AliasSymbol) model_queries_config = ( (alias_symbol, models.AliasSymbol.alias_symbol), (is_previous_symbol, models.AliasSymbol.is_previous_symbol), ) q = self.get_model_queries(q, model_queries_config) one_to_many_queries_config = ( (hgnc_symbol, models.HGNC.symbol), (hgnc_identifier, models.HGNC.identifier) ) q = self.get_one_to_many_queries(q, one_to_many_queries_config) return self._limit_and_df(q, limit, as_df)
Method to query :class:`.models.AliasSymbol` objects in database :param alias_symbol: alias symbol(s) :type alias_symbol: str or tuple(str) or None :param is_previous_symbol: flag for 'is previous' :type is_previous_symbol: bool or tuple(bool) or None :param hgnc_symbol: HGNC symbol(s) :type hgnc_symbol: str or tuple(str) or None :param hgnc_identifier: identifiers(s) in :class:`.models.HGNC` :type hgnc_identifier: int or tuple(int) or None :param limit: - if `isinstance(limit,int)==True` -> limit - if `isinstance(limit,tuple)==True` -> format:= tuple(page_number, results_per_page) - if limit == None -> all results :type limit: int or tuple(int) or None :param bool as_df: if `True` results are returned as :class:`pandas.DataFrame` :return: - if `as_df == False` -> list(:class:`.models.AliasSymbol`) - if `as_df == True` -> :class:`pandas.DataFrame` :rtype: list(:class:`.models.AliasSymbol`) or :class:`pandas.DataFrame`
def add_monitor(self, pattern, callback, limit=80): """ Calls the given function whenever the given pattern matches the incoming data. .. HINT:: If you want to catch all incoming data regardless of a pattern, use the Protocol.data_received_event event instead. Arguments passed to the callback are the protocol instance, the index of the match, and the match object of the regular expression. :type pattern: str|re.RegexObject|list(str|re.RegexObject) :param pattern: One or more regular expressions. :type callback: callable :param callback: The function that is called. :type limit: int :param limit: The maximum size of the tail of the buffer that is searched, in number of bytes. """ self.buffer.add_monitor(pattern, partial(callback, self), limit)
Calls the given function whenever the given pattern matches the incoming data. .. HINT:: If you want to catch all incoming data regardless of a pattern, use the Protocol.data_received_event event instead. Arguments passed to the callback are the protocol instance, the index of the match, and the match object of the regular expression. :type pattern: str|re.RegexObject|list(str|re.RegexObject) :param pattern: One or more regular expressions. :type callback: callable :param callback: The function that is called. :type limit: int :param limit: The maximum size of the tail of the buffer that is searched, in number of bytes.
def backwards(self, orm): "Write your backwards methods here." orm['avocado.DataConcept'].objects.filter(name='Sample')\ .update(queryable=True)
Write your backwards methods here.
def _set_routing_system(self, v, load=False): """ Setter method for routing_system, mapped from YANG variable /routing_system (container) If this variable is read-only (config: false) in the source YANG file, then _set_routing_system is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_routing_system() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=routing_system.routing_system, is_container='container', presence=False, yang_name="routing-system", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'sort-priority': u'RUNNCFG_LEVEL_RBRIDGE'}}, namespace='urn:brocade.com:mgmt:brocade-common-def', defining_module='brocade-common-def', yang_type='container', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """routing_system must be of a type compatible with container""", 'defined-type': "container", 'generated-type': """YANGDynClass(base=routing_system.routing_system, is_container='container', presence=False, yang_name="routing-system", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'sort-priority': u'RUNNCFG_LEVEL_RBRIDGE'}}, namespace='urn:brocade.com:mgmt:brocade-common-def', defining_module='brocade-common-def', yang_type='container', is_config=True)""", }) self.__routing_system = t if hasattr(self, '_set'): self._set()
Setter method for routing_system, mapped from YANG variable /routing_system (container) If this variable is read-only (config: false) in the source YANG file, then _set_routing_system is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_routing_system() directly.
def _initialize(self): """Read the SharQ configuration and set appropriate variables. Open a redis connection pool and load all the Lua scripts. """ self._key_prefix = self._config.get('redis', 'key_prefix') self._job_expire_interval = int( self._config.get('sharq', 'job_expire_interval') ) self._default_job_requeue_limit = int( self._config.get('sharq', 'default_job_requeue_limit') ) # initialize redis redis_connection_type = self._config.get('redis', 'conn_type') db = self._config.get('redis', 'db') if redis_connection_type == 'unix_sock': self._r = redis.StrictRedis( db=db, unix_socket_path=self._config.get('redis', 'unix_socket_path') ) elif redis_connection_type == 'tcp_sock': self._r = redis.StrictRedis( db=db, host=self._config.get('redis', 'host'), port=self._config.get('redis', 'port') ) self._load_lua_scripts()
Read the SharQ configuration and set appropriate variables. Open a redis connection pool and load all the Lua scripts.
def unpack(self, unpacker): """ Unpacks the constant pool from an unpacker stream """ (count, ) = unpacker.unpack_struct(_H) # first item is never present in the actual data buffer, but # the count number acts like it would be. items = [(None, None), ] count -= 1 # Long and Double const types will "consume" an item count, # but not data hackpass = False for _i in range(0, count): if hackpass: # previous item was a long or double hackpass = False items.append((None, None)) else: item = _unpack_const_item(unpacker) items.append(item) # if this item was a long or double, skip the next # counter. if item[0] in (CONST_Long, CONST_Double): hackpass = True self.consts = items
Unpacks the constant pool from an unpacker stream
def comments(case_id): """Upload a new comment.""" text = request.form['text'] variant_id = request.form.get('variant_id') username = request.form.get('username') case_obj = app.db.case(case_id) app.db.add_comment(case_obj, text, variant_id=variant_id, username=username) return redirect(request.referrer)
Upload a new comment.
def build(self): ''' Returns ------- Corpus ''' constructor_kwargs = self._get_build_kwargs() if type(self.raw_texts) == list: constructor_kwargs['raw_texts'] = np.array(self.raw_texts) else: constructor_kwargs['raw_texts'] = self.raw_texts return Corpus(**constructor_kwargs)
Returns ------- Corpus
def order_by(self, *field_names): """ Mark the filter as being ordered if search has occurred. """ if not self._search_ordered: self._search_ordered = len(self._search_terms) > 0 return super(SearchableQuerySet, self).order_by(*field_names)
Mark the filter as being ordered if search has occurred.
def extract_source(bundle_path, source_path): """ Extract the source bundle :param bundle_path: path to the source bundle *.tar.gz :param source_path: path to location where to extractall """ with tarfile.open(bundle_path, 'r:gz') as tf: tf.extractall(path=source_path) logger.debug("Archive Files: %s" % os.listdir(os.path.dirname(bundle_path)))
Extract the source bundle :param bundle_path: path to the source bundle *.tar.gz :param source_path: path to location where to extractall
def delete(self): """ Destructor. """ if self.glucose: pysolvers.glucose41_del(self.glucose) self.glucose = None if self.prfile: self.prfile.close()
Destructor.
def metabolite_summary(met, solution=None, threshold=0.01, fva=False, names=False, floatfmt='.3g'): """ Print a summary of the production and consumption fluxes. This method requires the model for which this metabolite is a part to be solved. Parameters ---------- solution : cobra.Solution, optional A previously solved model solution to use for generating the summary. If none provided (default), the summary method will resolve the model. Note that the solution object must match the model, i.e., changes to the model such as changed bounds, added or removed reactions are not taken into account by this method. threshold : float, optional Threshold below which fluxes are not reported. fva : pandas.DataFrame, float or None, optional Whether or not to include flux variability analysis in the output. If given, fva should either be a previous FVA solution matching the model or a float between 0 and 1 representing the fraction of the optimum objective to be searched. names : bool, optional Emit reaction and metabolite names rather than identifiers (default False). floatfmt : string, optional Format string for floats (default '.3g'). 
""" if names: emit = attrgetter('name') else: emit = attrgetter('id') if solution is None: met.model.slim_optimize(error_value=None) solution = get_solution(met.model, reactions=met.reactions) rxns = sorted(met.reactions, key=attrgetter("id")) rxn_id = list() rxn_name = list() flux = list() reaction = list() for rxn in rxns: rxn_id.append(rxn.id) rxn_name.append(format_long_string(emit(rxn), 10)) flux.append(solution[rxn.id] * rxn.metabolites[met]) txt = rxn.build_reaction_string(use_metabolite_names=names) reaction.append(format_long_string(txt, 40 if fva is not None else 50)) flux_summary = pd.DataFrame({ "id": rxn_name, "flux": flux, "reaction": reaction }, index=rxn_id) if fva is not None: if hasattr(fva, 'columns'): fva_results = fva else: fva_results = flux_variability_analysis( met.model, list(met.reactions), fraction_of_optimum=fva) flux_summary["maximum"] = zeros(len(rxn_id), dtype=float) flux_summary["minimum"] = zeros(len(rxn_id), dtype=float) for rxn in rxns: fmax = rxn.metabolites[met] * fva_results.at[rxn.id, "maximum"] fmin = rxn.metabolites[met] * fva_results.at[rxn.id, "minimum"] if abs(fmin) <= abs(fmax): flux_summary.at[rxn.id, "fmax"] = fmax flux_summary.at[rxn.id, "fmin"] = fmin else: # Reverse fluxes. 
flux_summary.at[rxn.id, "fmax"] = fmin flux_summary.at[rxn.id, "fmin"] = fmax assert flux_summary["flux"].sum() < 1E-6, "Error in flux balance" flux_summary = _process_flux_dataframe(flux_summary, fva, threshold, floatfmt) flux_summary['percent'] = 0 total_flux = flux_summary.loc[flux_summary.is_input, "flux"].sum() flux_summary.loc[flux_summary.is_input, 'percent'] = \ flux_summary.loc[flux_summary.is_input, 'flux'] / total_flux flux_summary.loc[~flux_summary.is_input, 'percent'] = \ flux_summary.loc[~flux_summary.is_input, 'flux'] / total_flux flux_summary['percent'] = flux_summary.percent.apply( lambda x: '{:.0%}'.format(x)) if fva is not None: flux_table = tabulate( flux_summary.loc[:, ['percent', 'flux', 'fva_fmt', 'id', 'reaction']].values, floatfmt=floatfmt, headers=['%', 'FLUX', 'RANGE', 'RXN ID', 'REACTION']).split('\n') else: flux_table = tabulate( flux_summary.loc[:, ['percent', 'flux', 'id', 'reaction']].values, floatfmt=floatfmt, headers=['%', 'FLUX', 'RXN ID', 'REACTION'] ).split('\n') flux_table_head = flux_table[:2] met_tag = "{0} ({1})".format(format_long_string(met.name, 45), format_long_string(met.id, 10)) head = "PRODUCING REACTIONS -- " + met_tag print_(head) print_("-" * len(head)) print_('\n'.join(flux_table_head)) print_('\n'.join( pd.np.array(flux_table[2:])[flux_summary.is_input.values])) print_() print_("CONSUMING REACTIONS -- " + met_tag) print_("-" * len(head)) print_('\n'.join(flux_table_head)) print_('\n'.join( pd.np.array(flux_table[2:])[~flux_summary.is_input.values]))
Print a summary of the production and consumption fluxes. This method requires the model for which this metabolite is a part to be solved. Parameters ---------- solution : cobra.Solution, optional A previously solved model solution to use for generating the summary. If none provided (default), the summary method will resolve the model. Note that the solution object must match the model, i.e., changes to the model such as changed bounds, added or removed reactions are not taken into account by this method. threshold : float, optional Threshold below which fluxes are not reported. fva : pandas.DataFrame, float or None, optional Whether or not to include flux variability analysis in the output. If given, fva should either be a previous FVA solution matching the model or a float between 0 and 1 representing the fraction of the optimum objective to be searched. names : bool, optional Emit reaction and metabolite names rather than identifiers (default False). floatfmt : string, optional Format string for floats (default '.3g').
Below is the the instruction that describes the task: ### Input: Print a summary of the production and consumption fluxes. This method requires the model for which this metabolite is a part to be solved. Parameters ---------- solution : cobra.Solution, optional A previously solved model solution to use for generating the summary. If none provided (default), the summary method will resolve the model. Note that the solution object must match the model, i.e., changes to the model such as changed bounds, added or removed reactions are not taken into account by this method. threshold : float, optional Threshold below which fluxes are not reported. fva : pandas.DataFrame, float or None, optional Whether or not to include flux variability analysis in the output. If given, fva should either be a previous FVA solution matching the model or a float between 0 and 1 representing the fraction of the optimum objective to be searched. names : bool, optional Emit reaction and metabolite names rather than identifiers (default False). floatfmt : string, optional Format string for floats (default '.3g'). ### Response: def metabolite_summary(met, solution=None, threshold=0.01, fva=False, names=False, floatfmt='.3g'): """ Print a summary of the production and consumption fluxes. This method requires the model for which this metabolite is a part to be solved. Parameters ---------- solution : cobra.Solution, optional A previously solved model solution to use for generating the summary. If none provided (default), the summary method will resolve the model. Note that the solution object must match the model, i.e., changes to the model such as changed bounds, added or removed reactions are not taken into account by this method. threshold : float, optional Threshold below which fluxes are not reported. fva : pandas.DataFrame, float or None, optional Whether or not to include flux variability analysis in the output. 
If given, fva should either be a previous FVA solution matching the model or a float between 0 and 1 representing the fraction of the optimum objective to be searched. names : bool, optional Emit reaction and metabolite names rather than identifiers (default False). floatfmt : string, optional Format string for floats (default '.3g'). """ if names: emit = attrgetter('name') else: emit = attrgetter('id') if solution is None: met.model.slim_optimize(error_value=None) solution = get_solution(met.model, reactions=met.reactions) rxns = sorted(met.reactions, key=attrgetter("id")) rxn_id = list() rxn_name = list() flux = list() reaction = list() for rxn in rxns: rxn_id.append(rxn.id) rxn_name.append(format_long_string(emit(rxn), 10)) flux.append(solution[rxn.id] * rxn.metabolites[met]) txt = rxn.build_reaction_string(use_metabolite_names=names) reaction.append(format_long_string(txt, 40 if fva is not None else 50)) flux_summary = pd.DataFrame({ "id": rxn_name, "flux": flux, "reaction": reaction }, index=rxn_id) if fva is not None: if hasattr(fva, 'columns'): fva_results = fva else: fva_results = flux_variability_analysis( met.model, list(met.reactions), fraction_of_optimum=fva) flux_summary["maximum"] = zeros(len(rxn_id), dtype=float) flux_summary["minimum"] = zeros(len(rxn_id), dtype=float) for rxn in rxns: fmax = rxn.metabolites[met] * fva_results.at[rxn.id, "maximum"] fmin = rxn.metabolites[met] * fva_results.at[rxn.id, "minimum"] if abs(fmin) <= abs(fmax): flux_summary.at[rxn.id, "fmax"] = fmax flux_summary.at[rxn.id, "fmin"] = fmin else: # Reverse fluxes. 
flux_summary.at[rxn.id, "fmax"] = fmin flux_summary.at[rxn.id, "fmin"] = fmax assert flux_summary["flux"].sum() < 1E-6, "Error in flux balance" flux_summary = _process_flux_dataframe(flux_summary, fva, threshold, floatfmt) flux_summary['percent'] = 0 total_flux = flux_summary.loc[flux_summary.is_input, "flux"].sum() flux_summary.loc[flux_summary.is_input, 'percent'] = \ flux_summary.loc[flux_summary.is_input, 'flux'] / total_flux flux_summary.loc[~flux_summary.is_input, 'percent'] = \ flux_summary.loc[~flux_summary.is_input, 'flux'] / total_flux flux_summary['percent'] = flux_summary.percent.apply( lambda x: '{:.0%}'.format(x)) if fva is not None: flux_table = tabulate( flux_summary.loc[:, ['percent', 'flux', 'fva_fmt', 'id', 'reaction']].values, floatfmt=floatfmt, headers=['%', 'FLUX', 'RANGE', 'RXN ID', 'REACTION']).split('\n') else: flux_table = tabulate( flux_summary.loc[:, ['percent', 'flux', 'id', 'reaction']].values, floatfmt=floatfmt, headers=['%', 'FLUX', 'RXN ID', 'REACTION'] ).split('\n') flux_table_head = flux_table[:2] met_tag = "{0} ({1})".format(format_long_string(met.name, 45), format_long_string(met.id, 10)) head = "PRODUCING REACTIONS -- " + met_tag print_(head) print_("-" * len(head)) print_('\n'.join(flux_table_head)) print_('\n'.join( pd.np.array(flux_table[2:])[flux_summary.is_input.values])) print_() print_("CONSUMING REACTIONS -- " + met_tag) print_("-" * len(head)) print_('\n'.join(flux_table_head)) print_('\n'.join( pd.np.array(flux_table[2:])[~flux_summary.is_input.values]))
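The percent column in the summary above reduces to signing each flux by its stoichiometric coefficient and normalizing by the producing total. A minimal pure-Python sketch of that bookkeeping (the helper name `flux_percentages` and the input mapping are illustrative, not part of cobrapy; it assumes a steady state, where producing and consuming totals are equal, just as the summary code uses one `total_flux` for both tables):

```python
def flux_percentages(signed_fluxes):
    # signed_fluxes maps reaction id -> coefficient * flux for one metabolite:
    # positive entries produce the metabolite, negative entries consume it
    producing = {r: f for r, f in signed_fluxes.items() if f > 0}
    consuming = {r: -f for r, f in signed_fluxes.items() if f < 0}
    # at steady state the producing and consuming totals agree, so one
    # denominator serves both tables (mirroring total_flux in the summary)
    total = sum(producing.values())
    as_pct = lambda flux: '{:.0%}'.format(flux / total)
    return ({r: as_pct(f) for r, f in producing.items()},
            {r: as_pct(f) for r, f in consuming.items()})
```

The formatted strings match the `'{:.0%}'` formatting used by the summary itself.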
def getAllText(cls, where=None, SEPERATOR=' ', orderBy=None): """Retrieve a list of all possible instances of this class. The list is composed of tuples in the format (id, description) - where description is a string composed from the fields in cls._shortView, joined with SEPERATOR. """ (sql, fields) = cls._prepareSQL("SELECTALL", where, orderBy=orderBy) curs = cls.cursor() curs.execute(sql) # We might start eating memory at this point rows = curs.fetchall() curs.close() result = [] idPositions = [fields.index(key) for key in cls._sqlPrimary] shortPos = [fields.index(short) for short in cls._shortView] for row in rows: ids = [row[pos] for pos in idPositions] if len(idPositions) > 1: ids = tuple(ids) else: ids = ids[0] text = SEPERATOR.join([str(row[pos]) for pos in shortPos]) result.append((ids, text)) return result
Retrieve a list of all possible instances of this class. The list is composed of tuples in the format (id, description) - where description is a string composed from the fields in cls._shortView, joined with SEPERATOR.
Below is the instruction that describes the task: ### Input: Retrieve a list of all possible instances of this class. The list is composed of tuples in the format (id, description) - where description is a string composed from the fields in cls._shortView, joined with SEPERATOR. ### Response: def getAllText(cls, where=None, SEPERATOR=' ', orderBy=None): """Retrieve a list of all possible instances of this class. The list is composed of tuples in the format (id, description) - where description is a string composed from the fields in cls._shortView, joined with SEPERATOR. """ (sql, fields) = cls._prepareSQL("SELECTALL", where, orderBy=orderBy) curs = cls.cursor() curs.execute(sql) # We might start eating memory at this point rows = curs.fetchall() curs.close() result = [] idPositions = [fields.index(key) for key in cls._sqlPrimary] shortPos = [fields.index(short) for short in cls._shortView] for row in rows: ids = [row[pos] for pos in idPositions] if len(idPositions) > 1: ids = tuple(ids) else: ids = ids[0] text = SEPERATOR.join([str(row[pos]) for pos in shortPos]) result.append((ids, text)) return result
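The positional-index bookkeeping in `getAllText` works on any sequence of row tuples; a standalone sketch (hypothetical helper name, no database needed) mirrors the single-key unwrapping and label join:

```python
def rows_to_choices(rows, fields, primary, short_view, sep=' '):
    # map column names to positions once, then index each row by position
    id_pos = [fields.index(k) for k in primary]
    short_pos = [fields.index(s) for s in short_view]
    result = []
    for row in rows:
        ids = tuple(row[p] for p in id_pos)
        # a single-column primary key is unwrapped to a scalar,
        # matching the len(idPositions) > 1 branch above
        if len(ids) == 1:
            ids = ids[0]
        result.append((ids, sep.join(str(row[p]) for p in short_pos)))
    return result
```

For a composite primary key the id stays a tuple; otherwise callers get the bare value.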
def get_body(self, component): """ TODO: add documentation """ if component in self._bodies.keys(): return self._bodies[component] else: # then hopefully we're a child star of a contact_binary envelope parent_component = self._parent_envelope_of[component] return self._bodies[parent_component].get_half(component)
TODO: add documentation
Below is the instruction that describes the task: ### Input: TODO: add documentation ### Response: def get_body(self, component): """ TODO: add documentation """ if component in self._bodies.keys(): return self._bodies[component] else: # then hopefully we're a child star of a contact_binary envelope parent_component = self._parent_envelope_of[component] return self._bodies[parent_component].get_half(component)
def nested_genobject(self, metadata, attr, datastore): """ Allow for the printing of nested GenObjects :param metadata: Nested dictionary containing the metadata. Will be further populated by this method :param attr: Current attribute being evaluated. Must be a GenObject e.g. sample.general :param datastore: The dictionary of the current attribute. Will be converted to nested dictionaries :return: Updated nested metadata dictionary with all GenObjects safely converted to dictionaries """ # Iterate through all the key: value pairs of the current datastore[attr] datastore # e.g. reverse_reads <accessoryFunctions.accessoryFunctions.GenObject object at 0x7fe153b725f8> for key, value in sorted(datastore[attr].datastore.items()): # If the type(value) is a GenObject, then JSON serialization will not work if 'GenObject' in str(type(value)): # Initialise the nested attribute: key nested dictionary within the metadata dictionary # e.g. attr: 100_100, key: reverse_reads metadata[attr][key] = dict() # Iterate through the nested keys and nested values within the value datastore # e.g. nested_key: length, nested_value: 100 for nested_key, nested_datastore in sorted(value.datastore.items()): # Create an additional dictionary layer within the metadata dictionary metadata[attr][key][nested_key] = dict() # If the type(nested_datastore) is a GenObject, recursively run this method to update the # metadata dictionary, supply the newly created nested dictionary: metadata[attr][key] as # the input metadata dictionary, the nested key as the input attribute, and the datastore of # value as the input datastore # e.g. 
key: 100_100, # datastore: <accessoryFunctions.accessoryFunctions.GenObject object at 0x7fc526001e80> if 'GenObject' in str(type(nested_datastore)): metadata[attr][key].update( self.nested_genobject(metadata[attr][key], nested_key, value.datastore)) # If the nested datastore is not a GenObject, populate the nested metadata dictionary with # the attribute, key, nested key, and nested datastore # e.g. attr: 100_100, key: reverse_reads, nested_key: length, nested_datastore: 100 else: metadata[attr][key][nested_key] = nested_datastore # Non-GenObjects can (usually) be added to the metadata dictionary without issues else: try: if key not in self.unwanted_keys: metadata[attr][key] = value except AttributeError: print('dumperror', attr) # Return the metadata return metadata
Allow for the printing of nested GenObjects :param metadata: Nested dictionary containing the metadata. Will be further populated by this method :param attr: Current attribute being evaluated. Must be a GenObject e.g. sample.general :param datastore: The dictionary of the current attribute. Will be converted to nested dictionaries :return: Updated nested metadata dictionary with all GenObjects safely converted to dictionaries
Below is the instruction that describes the task: ### Input: Allow for the printing of nested GenObjects :param metadata: Nested dictionary containing the metadata. Will be further populated by this method :param attr: Current attribute being evaluated. Must be a GenObject e.g. sample.general :param datastore: The dictionary of the current attribute. Will be converted to nested dictionaries :return: Updated nested metadata dictionary with all GenObjects safely converted to dictionaries ### Response: def nested_genobject(self, metadata, attr, datastore): """ Allow for the printing of nested GenObjects :param metadata: Nested dictionary containing the metadata. Will be further populated by this method :param attr: Current attribute being evaluated. Must be a GenObject e.g. sample.general :param datastore: The dictionary of the current attribute. Will be converted to nested dictionaries :return: Updated nested metadata dictionary with all GenObjects safely converted to dictionaries """ # Iterate through all the key: value pairs of the current datastore[attr] datastore # e.g. reverse_reads <accessoryFunctions.accessoryFunctions.GenObject object at 0x7fe153b725f8> for key, value in sorted(datastore[attr].datastore.items()): # If the type(value) is a GenObject, then JSON serialization will not work if 'GenObject' in str(type(value)): # Initialise the nested attribute: key nested dictionary within the metadata dictionary # e.g. attr: 100_100, key: reverse_reads metadata[attr][key] = dict() # Iterate through the nested keys and nested values within the value datastore # e.g.
nested_key: length, nested_value: 100 for nested_key, nested_datastore in sorted(value.datastore.items()): # Create an additional dictionary layer within the metadata dictionary metadata[attr][key][nested_key] = dict() # If the type(nested_datastore) is a GenObject, recursively run this method to update the # metadata dictionary, supply the newly created nested dictionary: metadata[attr][key] as # the input metadata dictionary, the nested key as the input attribute, and the datastore of # value as the input datastore # e.g. key: 100_100, # datastore: <accessoryFunctions.accessoryFunctions.GenObject object at 0x7fc526001e80> if 'GenObject' in str(type(nested_datastore)): metadata[attr][key].update( self.nested_genobject(metadata[attr][key], nested_key, value.datastore)) # If the nested datastore is not a GenObject, populate the nested metadata dictionary with # the attribute, key, nested key, and nested datastore # e.g. attr: 100_100, key: reverse_reads, nested_key: length, nested_datastore: 100 else: metadata[attr][key][nested_key] = nested_datastore # Non-GenObjects can (usually) be added to the metadata dictionary without issues else: try: if key not in self.unwanted_keys: metadata[attr][key] = value except AttributeError: print('dumperror', attr) # Return the metadata return metadata
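The recursion above checks `'GenObject' in str(type(value))` to decide when to descend; a duck-typed sketch of the same conversion recurses on anything exposing a `datastore` dict instead (the class and helper names here are illustrative stand-ins, not the real GenObject):

```python
class FakeGenObject:
    # minimal stand-in: any object carrying a `datastore` dict
    def __init__(self, **kwargs):
        self.datastore = kwargs

def to_plain_dict(obj, unwanted_keys=frozenset()):
    """Recursively flatten nested datastore objects into plain dicts."""
    out = {}
    for key, value in sorted(obj.datastore.items()):
        if key in unwanted_keys:
            continue
        # recurse whenever the value itself carries a datastore,
        # rather than string-matching on the type name
        if hasattr(value, 'datastore'):
            out[key] = to_plain_dict(value, unwanted_keys)
        else:
            out[key] = value
    return out
```

Returning a fresh dict at each level avoids threading a shared `metadata` accumulator through the recursion, which is where the original's nested-update bookkeeping comes from.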
def addRecordsFromThread(self, records): """ Adds the given records to the system. :param records | [<orb.Table>, ..] """ label_mapper = self.labelMapper() icon_mapper = self.iconMapper() tree = None if self.showTreePopup(): tree = self.treePopupWidget() # add the items to the list start = self.count() # update the item information blocked = self.signalsBlocked() self.blockSignals(True) for i, record in enumerate(records): index = start + i self.addItem(label_mapper(record)) self.setItemData(index, wrapVariant(record), Qt.UserRole) if icon_mapper: self.setItemIcon(index, icon_mapper(record)) if record == self._currentRecord: self.setCurrentIndex(self.count() - 1) if tree: XOrbRecordItem(tree, record) self.blockSignals(blocked)
Adds the given records to the system. :param records | [<orb.Table>, ..]
Below is the instruction that describes the task: ### Input: Adds the given records to the system. :param records | [<orb.Table>, ..] ### Response: def addRecordsFromThread(self, records): """ Adds the given records to the system. :param records | [<orb.Table>, ..] """ label_mapper = self.labelMapper() icon_mapper = self.iconMapper() tree = None if self.showTreePopup(): tree = self.treePopupWidget() # add the items to the list start = self.count() # update the item information blocked = self.signalsBlocked() self.blockSignals(True) for i, record in enumerate(records): index = start + i self.addItem(label_mapper(record)) self.setItemData(index, wrapVariant(record), Qt.UserRole) if icon_mapper: self.setItemIcon(index, icon_mapper(record)) if record == self._currentRecord: self.setCurrentIndex(self.count() - 1) if tree: XOrbRecordItem(tree, record) self.blockSignals(blocked)
def _add_info(self, msg, **kwargs): """ Add information to a SAML message. If the attribute is not part of what's defined in the SAML standard add it as an extension. :param msg: :param kwargs: :return: """ args, extensions = self._filter_args(msg, **kwargs) for key, val in args.items(): setattr(msg, key, val) if extensions: if msg.extension_elements: msg.extension_elements.extend(extensions) else: msg.extension_elements = extensions
Add information to a SAML message. If the attribute is not part of what's defined in the SAML standard add it as an extension. :param msg: :param kwargs: :return:
Below is the instruction that describes the task: ### Input: Add information to a SAML message. If the attribute is not part of what's defined in the SAML standard add it as an extension. :param msg: :param kwargs: :return: ### Response: def _add_info(self, msg, **kwargs): """ Add information to a SAML message. If the attribute is not part of what's defined in the SAML standard add it as an extension. :param msg: :param kwargs: :return: """ args, extensions = self._filter_args(msg, **kwargs) for key, val in args.items(): setattr(msg, key, val) if extensions: if msg.extension_elements: msg.extension_elements.extend(extensions) else: msg.extension_elements = extensions
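Stripped of pysaml2's `_filter_args` and extension-element classes, the split between known attributes and overflow extensions looks like this (a dict-based sketch; the helper name and the explicit `known_fields` argument are illustrative simplifications):

```python
def add_info(msg, known_fields, **kwargs):
    """Set known fields as attributes; collect the rest as extensions."""
    extensions = {}
    for key, val in kwargs.items():
        if key in known_fields:
            setattr(msg, key, val)
        else:
            extensions[key] = val
    if extensions:
        # merge into any extensions already present, as the original
        # extends msg.extension_elements instead of replacing it
        existing = getattr(msg, 'extension_elements', None)
        msg.extension_elements = {**(existing or {}), **extensions}
    return msg
```

The real code derives the known/unknown split from the message schema rather than an explicit set, but the control flow is the same.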
def _set_ingress(self, v, load=False): """ Setter method for ingress, mapped from YANG variable /interface/fortygigabitethernet/storm_control/ingress (list) If this variable is read-only (config: false) in the source YANG file, then _set_ingress is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ingress() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("protocol_type",ingress.ingress, yang_name="ingress", rest_name="ingress", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='protocol-type', extensions={u'tailf-common': {u'info': u'Ingress Direction', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-sequence-commands': None, u'cli-incomplete-command': None, u'cli-full-no': None}}), is_container='list', yang_name="ingress", rest_name="ingress", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Ingress Direction', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-sequence-commands': None, u'cli-incomplete-command': None, u'cli-full-no': None}}, namespace='urn:brocade.com:mgmt:brocade-bum-storm-control', defining_module='brocade-bum-storm-control', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """ingress must be of a type compatible with list""", 'defined-type': "list", 'generated-type': """YANGDynClass(base=YANGListType("protocol_type",ingress.ingress, yang_name="ingress", rest_name="ingress", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='protocol-type', extensions={u'tailf-common': {u'info': u'Ingress Direction', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-sequence-commands': None, 
u'cli-incomplete-command': None, u'cli-full-no': None}}), is_container='list', yang_name="ingress", rest_name="ingress", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Ingress Direction', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-sequence-commands': None, u'cli-incomplete-command': None, u'cli-full-no': None}}, namespace='urn:brocade.com:mgmt:brocade-bum-storm-control', defining_module='brocade-bum-storm-control', yang_type='list', is_config=True)""", }) self.__ingress = t if hasattr(self, '_set'): self._set()
Setter method for ingress, mapped from YANG variable /interface/fortygigabitethernet/storm_control/ingress (list) If this variable is read-only (config: false) in the source YANG file, then _set_ingress is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ingress() directly.
Below is the the instruction that describes the task: ### Input: Setter method for ingress, mapped from YANG variable /interface/fortygigabitethernet/storm_control/ingress (list) If this variable is read-only (config: false) in the source YANG file, then _set_ingress is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ingress() directly. ### Response: def _set_ingress(self, v, load=False): """ Setter method for ingress, mapped from YANG variable /interface/fortygigabitethernet/storm_control/ingress (list) If this variable is read-only (config: false) in the source YANG file, then _set_ingress is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ingress() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("protocol_type",ingress.ingress, yang_name="ingress", rest_name="ingress", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='protocol-type', extensions={u'tailf-common': {u'info': u'Ingress Direction', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-sequence-commands': None, u'cli-incomplete-command': None, u'cli-full-no': None}}), is_container='list', yang_name="ingress", rest_name="ingress", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Ingress Direction', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-sequence-commands': None, u'cli-incomplete-command': None, u'cli-full-no': None}}, namespace='urn:brocade.com:mgmt:brocade-bum-storm-control', defining_module='brocade-bum-storm-control', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """ingress must be of a type compatible with list""", 'defined-type': "list", 
'generated-type': """YANGDynClass(base=YANGListType("protocol_type",ingress.ingress, yang_name="ingress", rest_name="ingress", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='protocol-type', extensions={u'tailf-common': {u'info': u'Ingress Direction', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-sequence-commands': None, u'cli-incomplete-command': None, u'cli-full-no': None}}), is_container='list', yang_name="ingress", rest_name="ingress", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Ingress Direction', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-sequence-commands': None, u'cli-incomplete-command': None, u'cli-full-no': None}}, namespace='urn:brocade.com:mgmt:brocade-bum-storm-control', defining_module='brocade-bum-storm-control', yang_type='list', is_config=True)""", }) self.__ingress = t if hasattr(self, '_set'): self._set()
def window_taylor(N, nbar=4, sll=-30): """Taylor tapering window Taylor windows allow you to make tradeoffs between the mainlobe width and sidelobe level (sll). Implemented as described by Carrara, Goodman, and Majewski in 'Spotlight Synthetic Aperture Radar: Signal Processing Algorithms' Pages 512-513 :param N: window length :param float nbar: :param float sll: The default values give equal-height sidelobes (nbar) and maximum sidelobe level (sll). .. warning:: not implemented .. seealso:: :func:`create_window`, :class:`Window` """ B = 10**(-sll/20) A = log(B + sqrt(B**2 - 1))/pi s2 = nbar**2 / (A**2 + (nbar - 0.5)**2) ma = arange(1,nbar) def calc_Fm(m): numer = (-1)**(m+1) * prod(1-m**2/s2/(A**2 + (ma - 0.5)**2)) denom = 2* prod([ 1-m**2/j**2 for j in ma if j != m]) return numer/denom Fm = array([calc_Fm(m) for m in ma]) def W(n): return 2 * np.sum(Fm * cos(2*pi*ma*(n-N/2 + 1/2)/N)) + 1 w = array([W(n) for n in range(N)]) # normalize (Note that this is not described in the original text) scale = W((N-1)/2) w /= scale return w
Taylor tapering window Taylor windows allow you to make tradeoffs between the mainlobe width and sidelobe level (sll). Implemented as described by Carrara, Goodman, and Majewski in 'Spotlight Synthetic Aperture Radar: Signal Processing Algorithms' Pages 512-513 :param N: window length :param float nbar: :param float sll: The default values give equal-height sidelobes (nbar) and maximum sidelobe level (sll). .. warning:: not implemented .. seealso:: :func:`create_window`, :class:`Window`
Below is the instruction that describes the task: ### Input: Taylor tapering window Taylor windows allow you to make tradeoffs between the mainlobe width and sidelobe level (sll). Implemented as described by Carrara, Goodman, and Majewski in 'Spotlight Synthetic Aperture Radar: Signal Processing Algorithms' Pages 512-513 :param N: window length :param float nbar: :param float sll: The default values give equal-height sidelobes (nbar) and maximum sidelobe level (sll). .. warning:: not implemented .. seealso:: :func:`create_window`, :class:`Window` ### Response: def window_taylor(N, nbar=4, sll=-30): """Taylor tapering window Taylor windows allow you to make tradeoffs between the mainlobe width and sidelobe level (sll). Implemented as described by Carrara, Goodman, and Majewski in 'Spotlight Synthetic Aperture Radar: Signal Processing Algorithms' Pages 512-513 :param N: window length :param float nbar: :param float sll: The default values give equal-height sidelobes (nbar) and maximum sidelobe level (sll). .. warning:: not implemented .. seealso:: :func:`create_window`, :class:`Window` """ B = 10**(-sll/20) A = log(B + sqrt(B**2 - 1))/pi s2 = nbar**2 / (A**2 + (nbar - 0.5)**2) ma = arange(1,nbar) def calc_Fm(m): numer = (-1)**(m+1) * prod(1-m**2/s2/(A**2 + (ma - 0.5)**2)) denom = 2* prod([ 1-m**2/j**2 for j in ma if j != m]) return numer/denom Fm = array([calc_Fm(m) for m in ma]) def W(n): return 2 * np.sum(Fm * cos(2*pi*ma*(n-N/2 + 1/2)/N)) + 1 w = array([W(n) for n in range(N)]) # normalize (Note that this is not described in the original text) scale = W((N-1)/2) w /= scale return w
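The same construction can be written against the standard library only, which makes the window's symmetry and center normalization easy to sanity-check without numpy (a sketch following the Carrara formulation above, not a drop-in replacement):

```python
import math

def taylor_window(N, nbar=4, sll=-30):
    # sll is the sidelobe level in dB (negative), as in the original
    B = 10 ** (-sll / 20)
    A = math.log(B + math.sqrt(B**2 - 1)) / math.pi
    s2 = nbar**2 / (A**2 + (nbar - 0.5) ** 2)
    ma = list(range(1, nbar))

    def calc_Fm(m):
        numer = (-1) ** (m + 1)
        for j in ma:
            numer *= 1 - m**2 / s2 / (A**2 + (j - 0.5) ** 2)
        denom = 2.0
        for j in ma:
            if j != m:
                denom *= 1 - m**2 / j**2
        return numer / denom

    F = [calc_Fm(m) for m in ma]

    def W(n):
        return 2 * sum(f * math.cos(2 * math.pi * m * (n - N / 2 + 0.5) / N)
                       for f, m in zip(F, ma)) + 1

    # normalize so the center sample is 1, matching the original's scale
    scale = W((N - 1) / 2)
    return [W(n) / scale for n in range(N)]
```

Because `cos` is even in `n - N/2 + 1/2`, the window is symmetric, and for odd `N` the center sample equals exactly 1 after normalization.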
def _publish(tgt, fun, arg=None, tgt_type='glob', returner='', timeout=None, form='clean', roster=None): ''' Publish a command "from the minion out to other minions". In reality, the minion does not execute this function, it is executed by the master. Thus, no access control is enabled, as minions cannot initiate publishes themselves. Salt-ssh publishes will default to whichever roster was used for the initiating salt-ssh call, and can be overridden using the ``roster`` argument Returners are not currently supported The arguments sent to the minion publish function are separated with commas. This means that for a minion executing a command with multiple args it will look like this:: salt-ssh system.example.com publish.publish '*' user.add 'foo,1020,1020' CLI Example: .. code-block:: bash salt-ssh system.example.com publish.publish '*' cmd.run 'ls -la /tmp' ''' if fun.startswith('publish.'): log.info('Cannot publish publish calls. Returning {}') return {} # TODO: implement returners? Do they make sense for salt-ssh calls? 
if returner: log.warning('Returners currently not supported in salt-ssh publish') # Make sure args have been processed if arg is None: arg = [] elif not isinstance(arg, list): arg = [salt.utils.args.yamlify_arg(arg)] else: arg = [salt.utils.args.yamlify_arg(x) for x in arg] if len(arg) == 1 and arg[0] is None: arg = [] # Set up opts for the SSH object opts = copy.deepcopy(__context__['master_opts']) minopts = copy.deepcopy(__opts__) opts.update(minopts) if roster: opts['roster'] = roster if timeout: opts['timeout'] = timeout opts['argv'] = [fun] + arg opts['selected_target_option'] = tgt_type opts['tgt'] = tgt opts['arg'] = arg # Create the SSH object to handle the actual call ssh = salt.client.ssh.SSH(opts) # Run salt-ssh to get the minion returns rets = {} for ret in ssh.run_iter(): rets.update(ret) if form == 'clean': cret = {} for host in rets: if 'return' in rets[host]: cret[host] = rets[host]['return'] else: cret[host] = rets[host] return cret else: return rets
Publish a command "from the minion out to other minions". In reality, the minion does not execute this function, it is executed by the master. Thus, no access control is enabled, as minions cannot initiate publishes themselves. Salt-ssh publishes will default to whichever roster was used for the initiating salt-ssh call, and can be overridden using the ``roster`` argument Returners are not currently supported The arguments sent to the minion publish function are separated with commas. This means that for a minion executing a command with multiple args it will look like this:: salt-ssh system.example.com publish.publish '*' user.add 'foo,1020,1020' CLI Example: .. code-block:: bash salt-ssh system.example.com publish.publish '*' cmd.run 'ls -la /tmp'
Below is the instruction that describes the task: ### Input: Publish a command "from the minion out to other minions". In reality, the minion does not execute this function, it is executed by the master. Thus, no access control is enabled, as minions cannot initiate publishes themselves. Salt-ssh publishes will default to whichever roster was used for the initiating salt-ssh call, and can be overridden using the ``roster`` argument Returners are not currently supported The arguments sent to the minion publish function are separated with commas. This means that for a minion executing a command with multiple args it will look like this:: salt-ssh system.example.com publish.publish '*' user.add 'foo,1020,1020' CLI Example: .. code-block:: bash salt-ssh system.example.com publish.publish '*' cmd.run 'ls -la /tmp' ### Response: def _publish(tgt, fun, arg=None, tgt_type='glob', returner='', timeout=None, form='clean', roster=None): ''' Publish a command "from the minion out to other minions". In reality, the minion does not execute this function, it is executed by the master. Thus, no access control is enabled, as minions cannot initiate publishes themselves. Salt-ssh publishes will default to whichever roster was used for the initiating salt-ssh call, and can be overridden using the ``roster`` argument Returners are not currently supported The arguments sent to the minion publish function are separated with commas. This means that for a minion executing a command with multiple args it will look like this:: salt-ssh system.example.com publish.publish '*' user.add 'foo,1020,1020' CLI Example: .. code-block:: bash salt-ssh system.example.com publish.publish '*' cmd.run 'ls -la /tmp' ''' if fun.startswith('publish.'): log.info('Cannot publish publish calls. Returning {}') return {} # TODO: implement returners? Do they make sense for salt-ssh calls?
if returner: log.warning('Returners currently not supported in salt-ssh publish') # Make sure args have been processed if arg is None: arg = [] elif not isinstance(arg, list): arg = [salt.utils.args.yamlify_arg(arg)] else: arg = [salt.utils.args.yamlify_arg(x) for x in arg] if len(arg) == 1 and arg[0] is None: arg = [] # Set up opts for the SSH object opts = copy.deepcopy(__context__['master_opts']) minopts = copy.deepcopy(__opts__) opts.update(minopts) if roster: opts['roster'] = roster if timeout: opts['timeout'] = timeout opts['argv'] = [fun] + arg opts['selected_target_option'] = tgt_type opts['tgt'] = tgt opts['arg'] = arg # Create the SSH object to handle the actual call ssh = salt.client.ssh.SSH(opts) # Run salt-ssh to get the minion returns rets = {} for ret in ssh.run_iter(): rets.update(ret) if form == 'clean': cret = {} for host in rets: if 'return' in rets[host]: cret[host] = rets[host]['return'] else: cret[host] = rets[host] return cret else: return rets
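The argument massaging at the top of `_publish` is a reusable pattern in its own right; this sketch isolates it, dropping the `salt.utils.args.yamlify_arg` conversion that the real code applies to each element:

```python
def normalize_args(arg):
    """Coerce a publish-style `arg` into a list, treating [None] as empty."""
    if arg is None:
        return []
    if not isinstance(arg, list):
        arg = [arg]
    # a single-element list holding None means "no arguments"
    if len(arg) == 1 and arg[0] is None:
        return []
    return arg
```

This mirrors the three branches in the original: `None` in, scalar in, list in.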
def properties(self): """Return various properties of the binary tree. :return: Binary tree properties. :rtype: dict **Example**: .. doctest:: >>> from binarytree import Node >>> >>> root = Node(1) >>> root.left = Node(2) >>> root.right = Node(3) >>> root.left.left = Node(4) >>> root.left.right = Node(5) >>> props = root.properties >>> >>> props['height'] # equivalent to root.height 2 >>> props['size'] # equivalent to root.size 5 >>> props['max_leaf_depth'] # equivalent to root.max_leaf_depth 2 >>> props['min_leaf_depth'] # equivalent to root.min_leaf_depth 1 >>> props['max_node_value'] # equivalent to root.max_node_value 5 >>> props['min_node_value'] # equivalent to root.min_node_value 1 >>> props['leaf_count'] # equivalent to root.leaf_count 3 >>> props['is_balanced'] # equivalent to root.is_balanced True >>> props['is_bst'] # equivalent to root.is_bst False >>> props['is_complete'] # equivalent to root.is_complete True >>> props['is_max_heap'] # equivalent to root.is_max_heap False >>> props['is_min_heap'] # equivalent to root.is_min_heap True >>> props['is_perfect'] # equivalent to root.is_perfect False >>> props['is_strict'] # equivalent to root.is_strict True """ properties = _get_tree_properties(self) properties.update({ 'is_bst': _is_bst(self), 'is_balanced': _is_balanced(self) >= 0 }) return properties
Return various properties of the binary tree. :return: Binary tree properties. :rtype: dict **Example**: .. doctest:: >>> from binarytree import Node >>> >>> root = Node(1) >>> root.left = Node(2) >>> root.right = Node(3) >>> root.left.left = Node(4) >>> root.left.right = Node(5) >>> props = root.properties >>> >>> props['height'] # equivalent to root.height 2 >>> props['size'] # equivalent to root.size 5 >>> props['max_leaf_depth'] # equivalent to root.max_leaf_depth 2 >>> props['min_leaf_depth'] # equivalent to root.min_leaf_depth 1 >>> props['max_node_value'] # equivalent to root.max_node_value 5 >>> props['min_node_value'] # equivalent to root.min_node_value 1 >>> props['leaf_count'] # equivalent to root.leaf_count 3 >>> props['is_balanced'] # equivalent to root.is_balanced True >>> props['is_bst'] # equivalent to root.is_bst False >>> props['is_complete'] # equivalent to root.is_complete True >>> props['is_max_heap'] # equivalent to root.is_max_heap False >>> props['is_min_heap'] # equivalent to root.is_min_heap True >>> props['is_perfect'] # equivalent to root.is_perfect False >>> props['is_strict'] # equivalent to root.is_strict True
Below is the the instruction that describes the task: ### Input: Return various properties of the binary tree. :return: Binary tree properties. :rtype: dict **Example**: .. doctest:: >>> from binarytree import Node >>> >>> root = Node(1) >>> root.left = Node(2) >>> root.right = Node(3) >>> root.left.left = Node(4) >>> root.left.right = Node(5) >>> props = root.properties >>> >>> props['height'] # equivalent to root.height 2 >>> props['size'] # equivalent to root.size 5 >>> props['max_leaf_depth'] # equivalent to root.max_leaf_depth 2 >>> props['min_leaf_depth'] # equivalent to root.min_leaf_depth 1 >>> props['max_node_value'] # equivalent to root.max_node_value 5 >>> props['min_node_value'] # equivalent to root.min_node_value 1 >>> props['leaf_count'] # equivalent to root.leaf_count 3 >>> props['is_balanced'] # equivalent to root.is_balanced True >>> props['is_bst'] # equivalent to root.is_bst False >>> props['is_complete'] # equivalent to root.is_complete True >>> props['is_max_heap'] # equivalent to root.is_max_heap False >>> props['is_min_heap'] # equivalent to root.is_min_heap True >>> props['is_perfect'] # equivalent to root.is_perfect False >>> props['is_strict'] # equivalent to root.is_strict True ### Response: def properties(self): """Return various properties of the binary tree. :return: Binary tree properties. :rtype: dict **Example**: .. 
doctest:: >>> from binarytree import Node >>> >>> root = Node(1) >>> root.left = Node(2) >>> root.right = Node(3) >>> root.left.left = Node(4) >>> root.left.right = Node(5) >>> props = root.properties >>> >>> props['height'] # equivalent to root.height 2 >>> props['size'] # equivalent to root.size 5 >>> props['max_leaf_depth'] # equivalent to root.max_leaf_depth 2 >>> props['min_leaf_depth'] # equivalent to root.min_leaf_depth 1 >>> props['max_node_value'] # equivalent to root.max_node_value 5 >>> props['min_node_value'] # equivalent to root.min_node_value 1 >>> props['leaf_count'] # equivalent to root.leaf_count 3 >>> props['is_balanced'] # equivalent to root.is_balanced True >>> props['is_bst'] # equivalent to root.is_bst False >>> props['is_complete'] # equivalent to root.is_complete True >>> props['is_max_heap'] # equivalent to root.is_max_heap False >>> props['is_min_heap'] # equivalent to root.is_min_heap True >>> props['is_perfect'] # equivalent to root.is_perfect False >>> props['is_strict'] # equivalent to root.is_strict True """ properties = _get_tree_properties(self) properties.update({ 'is_bst': _is_bst(self), 'is_balanced': _is_balanced(self) >= 0 }) return properties
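The height, size, and leaf-count figures in the doctest above can be reproduced with a few-line recursive sketch (the `Node` class here is a minimal stand-in, not the binarytree package's):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def height(node):
    # empty tree has height -1, so a single node has height 0
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def size(node):
    if node is None:
        return 0
    return 1 + size(node.left) + size(node.right)

def leaf_count(node):
    if node is None:
        return 0
    if node.left is None and node.right is None:
        return 1
    return leaf_count(node.left) + leaf_count(node.right)
```

Built with the same shape as the doctest's tree (1 at the root, 2 and 3 below it, 4 and 5 under 2), these give height 2, size 5, and 3 leaves.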
def parse_header(file_obj): """ Read the ASCII header of a PLY file, and leave the file object at the position of the start of data but past the header. Parameters ----------- file_obj : open file object Positioned at the start of the file Returns ----------- elements : collections.OrderedDict Fields and data types populated is_ascii : bool Whether the data is ASCII or binary image_name : None or str File name of TextureFile """ if 'ply' not in str(file_obj.readline()): raise ValueError('not a ply file!') # collect the encoding: binary or ASCII encoding = file_obj.readline().decode('utf-8').strip().lower() is_ascii = 'ascii' in encoding # big or little endian endian = ['<', '>'][int('big' in encoding)] elements = collections.OrderedDict() # store file name of TextureFiles in the header image_name = None while True: line = file_obj.readline() if line is None: raise ValueError("Header not terminated properly!") line = line.decode('utf-8').strip().split() # we're done if 'end_header' in line: break # elements are groups of properties if 'element' in line[0]: # we got a new element so add it name, length = line[1:] elements[name] = { 'length': int(length), 'properties': collections.OrderedDict()} # a property is a member of an element elif 'property' in line[0]: # is the property a simple single value, like: # `propert float x` if len(line) == 3: dtype, field = line[1:] elements[name]['properties'][ str(field)] = endian + dtypes[dtype] # is the property a painful list, like: # `property list uchar int vertex_indices` elif 'list' in line[1]: dtype_count, dtype, field = line[2:] elements[name]['properties'][ str(field)] = ( endian + dtypes[dtype_count] + ', ($LIST,)' + endian + dtypes[dtype]) # referenced as a file name elif 'TextureFile' in line: # textures come listed like: # `comment TextureFile fuze_uv.jpg` index = line.index('TextureFile') + 1 if index < len(line): image_name = line[index] return elements, is_ascii, image_name
Read the ASCII header of a PLY file, and leave the file object at the position of the start of data but past the header. Parameters ----------- file_obj : open file object Positioned at the start of the file Returns ----------- elements : collections.OrderedDict Fields and data types populated is_ascii : bool Whether the data is ASCII or binary image_name : None or str File name of TextureFile
Below is the the instruction that describes the task: ### Input: Read the ASCII header of a PLY file, and leave the file object at the position of the start of data but past the header. Parameters ----------- file_obj : open file object Positioned at the start of the file Returns ----------- elements : collections.OrderedDict Fields and data types populated is_ascii : bool Whether the data is ASCII or binary image_name : None or str File name of TextureFile ### Response: def parse_header(file_obj): """ Read the ASCII header of a PLY file, and leave the file object at the position of the start of data but past the header. Parameters ----------- file_obj : open file object Positioned at the start of the file Returns ----------- elements : collections.OrderedDict Fields and data types populated is_ascii : bool Whether the data is ASCII or binary image_name : None or str File name of TextureFile """ if 'ply' not in str(file_obj.readline()): raise ValueError('not a ply file!') # collect the encoding: binary or ASCII encoding = file_obj.readline().decode('utf-8').strip().lower() is_ascii = 'ascii' in encoding # big or little endian endian = ['<', '>'][int('big' in encoding)] elements = collections.OrderedDict() # store file name of TextureFiles in the header image_name = None while True: line = file_obj.readline() if line is None: raise ValueError("Header not terminated properly!") line = line.decode('utf-8').strip().split() # we're done if 'end_header' in line: break # elements are groups of properties if 'element' in line[0]: # we got a new element so add it name, length = line[1:] elements[name] = { 'length': int(length), 'properties': collections.OrderedDict()} # a property is a member of an element elif 'property' in line[0]: # is the property a simple single value, like: # `propert float x` if len(line) == 3: dtype, field = line[1:] elements[name]['properties'][ str(field)] = endian + dtypes[dtype] # is the property a painful list, like: # `property list uchar int 
vertex_indices` elif 'list' in line[1]: dtype_count, dtype, field = line[2:] elements[name]['properties'][ str(field)] = ( endian + dtypes[dtype_count] + ', ($LIST,)' + endian + dtypes[dtype]) # referenced as a file name elif 'TextureFile' in line: # textures come listed like: # `comment TextureFile fuze_uv.jpg` index = line.index('TextureFile') + 1 if index < len(line): image_name = line[index] return elements, is_ascii, image_name
def last_written_resolver(riak_object): """ A conflict-resolution function that resolves by selecting the most recently-modified sibling by timestamp. :param riak_object: an object-in-conflict that will be resolved :type riak_object: :class:`RiakObject <riak.riak_object.RiakObject>` """ riak_object.siblings = [max(riak_object.siblings, key=lambda x: x.last_modified), ]
A conflict-resolution function that resolves by selecting the most recently-modified sibling by timestamp. :param riak_object: an object-in-conflict that will be resolved :type riak_object: :class:`RiakObject <riak.riak_object.RiakObject>`
Below is the instruction that describes the task: ### Input: A conflict-resolution function that resolves by selecting the most recently-modified sibling by timestamp. :param riak_object: an object-in-conflict that will be resolved :type riak_object: :class:`RiakObject <riak.riak_object.RiakObject>` ### Response: def last_written_resolver(riak_object): """ A conflict-resolution function that resolves by selecting the most recently-modified sibling by timestamp. :param riak_object: an object-in-conflict that will be resolved :type riak_object: :class:`RiakObject <riak.riak_object.RiakObject>` """ riak_object.siblings = [max(riak_object.siblings, key=lambda x: x.last_modified), ]
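Because the resolver above only touches `siblings` and each sibling's `last_modified`, it can be exercised without a Riak connection. A minimal sketch, using `types.SimpleNamespace` as a stand-in for `RiakObject` and its siblings (the `data` field is illustrative, not part of the real API):

```python
from types import SimpleNamespace

def last_written_resolver(riak_object):
    # keep only the sibling with the greatest last_modified timestamp
    riak_object.siblings = [max(riak_object.siblings,
                                key=lambda x: x.last_modified)]

obj = SimpleNamespace(siblings=[
    SimpleNamespace(data="stale write", last_modified=1000.0),
    SimpleNamespace(data="latest write", last_modified=2000.0),
])
last_written_resolver(obj)
assert [s.data for s in obj.siblings] == ["latest write"]
```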
def first_n_three_layer_P(reference_patterns, estimated_patterns, n=5): """First n three-layer precision. This metric is basically the same as the three-layer FPR but it is only applied to the first n estimated patterns, and it only returns the precision. In MIREX and typically, n = 5. Examples -------- >>> ref_patterns = mir_eval.io.load_patterns("ref_pattern.txt") >>> est_patterns = mir_eval.io.load_patterns("est_pattern.txt") >>> P = mir_eval.pattern.first_n_three_layer_P(ref_patterns, ... est_patterns, n=5) Parameters ---------- reference_patterns : list The reference patterns in the format returned by :func:`mir_eval.io.load_patterns()` estimated_patterns : list The estimated patterns in the same format n : int Number of patterns to consider from the estimated results, in the order they appear in the matrix (Default value = 5) Returns ------- precision : float The first n three-layer Precision """ validate(reference_patterns, estimated_patterns) # If no patterns were provided, metric is zero if _n_onset_midi(reference_patterns) == 0 or \ _n_onset_midi(estimated_patterns) == 0: return 0., 0., 0. # Get only the first n patterns from the estimated results fn_est_patterns = estimated_patterns[:min(len(estimated_patterns), n)] # Compute the three-layer scores for the first n estimated patterns F, P, R = three_layer_FPR(reference_patterns, fn_est_patterns) return P
First n three-layer precision. This metric is basically the same as the three-layer FPR but it is only applied to the first n estimated patterns, and it only returns the precision. In MIREX and typically, n = 5. Examples -------- >>> ref_patterns = mir_eval.io.load_patterns("ref_pattern.txt") >>> est_patterns = mir_eval.io.load_patterns("est_pattern.txt") >>> P = mir_eval.pattern.first_n_three_layer_P(ref_patterns, ... est_patterns, n=5) Parameters ---------- reference_patterns : list The reference patterns in the format returned by :func:`mir_eval.io.load_patterns()` estimated_patterns : list The estimated patterns in the same format n : int Number of patterns to consider from the estimated results, in the order they appear in the matrix (Default value = 5) Returns ------- precision : float The first n three-layer Precision
Below is the the instruction that describes the task: ### Input: First n three-layer precision. This metric is basically the same as the three-layer FPR but it is only applied to the first n estimated patterns, and it only returns the precision. In MIREX and typically, n = 5. Examples -------- >>> ref_patterns = mir_eval.io.load_patterns("ref_pattern.txt") >>> est_patterns = mir_eval.io.load_patterns("est_pattern.txt") >>> P = mir_eval.pattern.first_n_three_layer_P(ref_patterns, ... est_patterns, n=5) Parameters ---------- reference_patterns : list The reference patterns in the format returned by :func:`mir_eval.io.load_patterns()` estimated_patterns : list The estimated patterns in the same format n : int Number of patterns to consider from the estimated results, in the order they appear in the matrix (Default value = 5) Returns ------- precision : float The first n three-layer Precision ### Response: def first_n_three_layer_P(reference_patterns, estimated_patterns, n=5): """First n three-layer precision. This metric is basically the same as the three-layer FPR but it is only applied to the first n estimated patterns, and it only returns the precision. In MIREX and typically, n = 5. Examples -------- >>> ref_patterns = mir_eval.io.load_patterns("ref_pattern.txt") >>> est_patterns = mir_eval.io.load_patterns("est_pattern.txt") >>> P = mir_eval.pattern.first_n_three_layer_P(ref_patterns, ... 
est_patterns, n=5) Parameters ---------- reference_patterns : list The reference patterns in the format returned by :func:`mir_eval.io.load_patterns()` estimated_patterns : list The estimated patterns in the same format n : int Number of patterns to consider from the estimated results, in the order they appear in the matrix (Default value = 5) Returns ------- precision : float The first n three-layer Precision """ validate(reference_patterns, estimated_patterns) # If no patterns were provided, metric is zero if _n_onset_midi(reference_patterns) == 0 or \ _n_onset_midi(estimated_patterns) == 0: return 0., 0., 0. # Get only the first n patterns from the estimated results fn_est_patterns = estimated_patterns[:min(len(estimated_patterns), n)] # Compute the three-layer scores for the first n estimated patterns F, P, R = three_layer_FPR(reference_patterns, fn_est_patterns) return P
def get_files(*bases): """ List all files in a data directory. """ for base in bases: basedir, _ = base.split(".", 1) base = os.path.join(os.path.dirname(__file__), *base.split(".")) rem = len(os.path.dirname(base)) + len(basedir) + 2 for root, dirs, files in os.walk(base): for name in files: yield os.path.join(basedir, root, name)[rem:]
List all files in a data directory.
Below is the instruction that describes the task: ### Input: List all files in a data directory. ### Response: def get_files(*bases): """ List all files in a data directory. """ for base in bases: basedir, _ = base.split(".", 1) base = os.path.join(os.path.dirname(__file__), *base.split(".")) rem = len(os.path.dirname(base)) + len(basedir) + 2 for root, dirs, files in os.walk(base): for name in files: yield os.path.join(basedir, root, name)[rem:]
def rebalance(self, weight, child, base=np.nan, update=True): """ Rebalance a child to a given weight. This is a helper method to simplify code logic. This method is used when we want to se the weight of a particular child to a set amount. It is similar to allocate, but it calculates the appropriate allocation based on the current weight. Args: * weight (float): The target weight. Usually between -1.0 and 1.0. * child (str): child to allocate to - specified by name. * base (float): If specified, this is the base amount all weight delta calculations will be based off of. This is useful when we determine a set of weights and want to rebalance each child given these new weights. However, as we iterate through each child and call this method, the base (which is by default the current value) will change. Therefore, we can set this base to the original value before the iteration to ensure the proper allocations are made. * update (bool): Force update? """ # if weight is 0 - we want to close child if weight == 0: if child in self.children: return self.close(child) else: return # if no base specified use self's value if np.isnan(base): base = self.value # else make sure we have child if child not in self.children: c = SecurityBase(child) c.setup(self._universe) # update child to bring up to speed c.update(self.now) self._add_child(c) # allocate to child # figure out weight delta c = self.children[child] delta = weight - c.weight c.allocate(delta * base)
Rebalance a child to a given weight. This is a helper method to simplify code logic. This method is used when we want to set the weight of a particular child to a set amount. It is similar to allocate, but it calculates the appropriate allocation based on the current weight. Args: * weight (float): The target weight. Usually between -1.0 and 1.0. * child (str): child to allocate to - specified by name. * base (float): If specified, this is the base amount all weight delta calculations will be based off of. This is useful when we determine a set of weights and want to rebalance each child given these new weights. However, as we iterate through each child and call this method, the base (which is by default the current value) will change. Therefore, we can set this base to the original value before the iteration to ensure the proper allocations are made. * update (bool): Force update?
Below is the the instruction that describes the task: ### Input: Rebalance a child to a given weight. This is a helper method to simplify code logic. This method is used when we want to se the weight of a particular child to a set amount. It is similar to allocate, but it calculates the appropriate allocation based on the current weight. Args: * weight (float): The target weight. Usually between -1.0 and 1.0. * child (str): child to allocate to - specified by name. * base (float): If specified, this is the base amount all weight delta calculations will be based off of. This is useful when we determine a set of weights and want to rebalance each child given these new weights. However, as we iterate through each child and call this method, the base (which is by default the current value) will change. Therefore, we can set this base to the original value before the iteration to ensure the proper allocations are made. * update (bool): Force update? ### Response: def rebalance(self, weight, child, base=np.nan, update=True): """ Rebalance a child to a given weight. This is a helper method to simplify code logic. This method is used when we want to se the weight of a particular child to a set amount. It is similar to allocate, but it calculates the appropriate allocation based on the current weight. Args: * weight (float): The target weight. Usually between -1.0 and 1.0. * child (str): child to allocate to - specified by name. * base (float): If specified, this is the base amount all weight delta calculations will be based off of. This is useful when we determine a set of weights and want to rebalance each child given these new weights. However, as we iterate through each child and call this method, the base (which is by default the current value) will change. Therefore, we can set this base to the original value before the iteration to ensure the proper allocations are made. * update (bool): Force update? 
""" # if weight is 0 - we want to close child if weight == 0: if child in self.children: return self.close(child) else: return # if no base specified use self's value if np.isnan(base): base = self.value # else make sure we have child if child not in self.children: c = SecurityBase(child) c.setup(self._universe) # update child to bring up to speed c.update(self.now) self._add_child(c) # allocate to child # figure out weight delta c = self.children[child] delta = weight - c.weight c.allocate(delta * base)
def get_sub_doc(self, subpage): """Returns PyQuery object for a given subpage URL. :subpage: The subpage of the season, e.g. 'per_game'. :returns: PyQuery object. """ html = sportsref.utils.get_html(self._subpage_url(subpage)) return pq(html)
Returns PyQuery object for a given subpage URL. :subpage: The subpage of the season, e.g. 'per_game'. :returns: PyQuery object.
Below is the instruction that describes the task: ### Input: Returns PyQuery object for a given subpage URL. :subpage: The subpage of the season, e.g. 'per_game'. :returns: PyQuery object. ### Response: def get_sub_doc(self, subpage): """Returns PyQuery object for a given subpage URL. :subpage: The subpage of the season, e.g. 'per_game'. :returns: PyQuery object. """ html = sportsref.utils.get_html(self._subpage_url(subpage)) return pq(html)
def filter_by_transcript_expression( self, transcript_expression_dict, min_expression_value=0.0): """ Filters variants down to those which overlap a transcript whose expression value in the transcript_expression_dict argument is greater than min_expression_value. Parameters ---------- transcript_expression_dict : dict Dictionary mapping Ensembl transcript IDs to expression estimates (either FPKM or TPM) min_expression_value : float Threshold above which we'll keep an effect in the result collection """ return self.filter_any_above_threshold( multi_key_fn=lambda variant: variant.transcript_ids, value_dict=transcript_expression_dict, threshold=min_expression_value)
Filters variants down to those which overlap a transcript whose expression value in the transcript_expression_dict argument is greater than min_expression_value. Parameters ---------- transcript_expression_dict : dict Dictionary mapping Ensembl transcript IDs to expression estimates (either FPKM or TPM) min_expression_value : float Threshold above which we'll keep an effect in the result collection
Below is the instruction that describes the task: ### Input: Filters variants down to those which overlap a transcript whose expression value in the transcript_expression_dict argument is greater than min_expression_value. Parameters ---------- transcript_expression_dict : dict Dictionary mapping Ensembl transcript IDs to expression estimates (either FPKM or TPM) min_expression_value : float Threshold above which we'll keep an effect in the result collection ### Response: def filter_by_transcript_expression( self, transcript_expression_dict, min_expression_value=0.0): """ Filters variants down to those which overlap a transcript whose expression value in the transcript_expression_dict argument is greater than min_expression_value. Parameters ---------- transcript_expression_dict : dict Dictionary mapping Ensembl transcript IDs to expression estimates (either FPKM or TPM) min_expression_value : float Threshold above which we'll keep an effect in the result collection """ return self.filter_any_above_threshold( multi_key_fn=lambda variant: variant.transcript_ids, value_dict=transcript_expression_dict, threshold=min_expression_value)
def populate_requirement_set(requirement_set, # type: RequirementSet args, # type: List[str] options, # type: Values finder, # type: PackageFinder session, # type: PipSession name, # type: str wheel_cache # type: Optional[WheelCache] ): # type: (...) -> None """ Marshal cmd line args into a requirement set. """ # NOTE: As a side-effect, options.require_hashes and # requirement_set.require_hashes may be updated for filename in options.constraints: for req_to_add in parse_requirements( filename, constraint=True, finder=finder, options=options, session=session, wheel_cache=wheel_cache): req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for req in args: req_to_add = install_req_from_line( req, None, isolated=options.isolated_mode, use_pep517=options.use_pep517, wheel_cache=wheel_cache ) req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for req in options.editables: req_to_add = install_req_from_editable( req, isolated=options.isolated_mode, use_pep517=options.use_pep517, wheel_cache=wheel_cache ) req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for filename in options.requirements: for req_to_add in parse_requirements( filename, finder=finder, options=options, session=session, wheel_cache=wheel_cache, use_pep517=options.use_pep517): req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) # If --require-hashes was a line in a requirements file, tell # RequirementSet about it: requirement_set.require_hashes = options.require_hashes if not (args or options.editables or options.requirements): opts = {'name': name} if options.find_links: raise CommandError( 'You must give at least one requirement to %(name)s ' '(maybe you meant "pip %(name)s %(links)s"?)' % dict(opts, links=' '.join(options.find_links))) else: raise CommandError( 'You must give at least one requirement to %(name)s ' '(see "pip help %(name)s")' % opts)
Marshal cmd line args into a requirement set.
Below is the the instruction that describes the task: ### Input: Marshal cmd line args into a requirement set. ### Response: def populate_requirement_set(requirement_set, # type: RequirementSet args, # type: List[str] options, # type: Values finder, # type: PackageFinder session, # type: PipSession name, # type: str wheel_cache # type: Optional[WheelCache] ): # type: (...) -> None """ Marshal cmd line args into a requirement set. """ # NOTE: As a side-effect, options.require_hashes and # requirement_set.require_hashes may be updated for filename in options.constraints: for req_to_add in parse_requirements( filename, constraint=True, finder=finder, options=options, session=session, wheel_cache=wheel_cache): req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for req in args: req_to_add = install_req_from_line( req, None, isolated=options.isolated_mode, use_pep517=options.use_pep517, wheel_cache=wheel_cache ) req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for req in options.editables: req_to_add = install_req_from_editable( req, isolated=options.isolated_mode, use_pep517=options.use_pep517, wheel_cache=wheel_cache ) req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) for filename in options.requirements: for req_to_add in parse_requirements( filename, finder=finder, options=options, session=session, wheel_cache=wheel_cache, use_pep517=options.use_pep517): req_to_add.is_direct = True requirement_set.add_requirement(req_to_add) # If --require-hashes was a line in a requirements file, tell # RequirementSet about it: requirement_set.require_hashes = options.require_hashes if not (args or options.editables or options.requirements): opts = {'name': name} if options.find_links: raise CommandError( 'You must give at least one requirement to %(name)s ' '(maybe you meant "pip %(name)s %(links)s"?)' % dict(opts, links=' '.join(options.find_links))) else: raise CommandError( 'You must give at least one requirement 
to %(name)s ' '(see "pip help %(name)s")' % opts)
def _start_queue_management_thread(self): """ TODO: docstring """ if self._queue_management_thread is None: logger.debug("Starting queue management thread") self._queue_management_thread = threading.Thread( target=self._queue_management_worker) self._queue_management_thread.daemon = True self._queue_management_thread.start() logger.debug("Started queue management thread") else: logger.debug("Management thread already exists, returning")
TODO: docstring
Below is the instruction that describes the task: ### Input: TODO: docstring ### Response: def _start_queue_management_thread(self): """ TODO: docstring """ if self._queue_management_thread is None: logger.debug("Starting queue management thread") self._queue_management_thread = threading.Thread( target=self._queue_management_worker) self._queue_management_thread.daemon = True self._queue_management_thread.start() logger.debug("Started queue management thread") else: logger.debug("Management thread already exists, returning")
def _in_range(self, start_time, end_time, time): """Indicate if the given time falls inside of the given range. Parameters ---------- start_time : int The unix time for the start of the range end_time : int The unix time for the end of the range time : int The unix time to check Returns ------- bool True if the time falls within the range, False otherwise. """ ONE_MONTH = 2764800 # 32 days return start_time <= time <= end_time or \ time <= start_time <= time + ONE_MONTH or \ time <= end_time <= time + ONE_MONTH
Indicate if the given time falls inside of the given range. Parameters ---------- start_time : int The unix time for the start of the range end_time : int The unix time for the end of the range time : int The unix time to check Returns ------- bool True if the time falls within the range, False otherwise.
Below is the instruction that describes the task: ### Input: Indicate if the given time falls inside of the given range. Parameters ---------- start_time : int The unix time for the start of the range end_time : int The unix time for the end of the range time : int The unix time to check Returns ------- bool True if the time falls within the range, False otherwise. ### Response: def _in_range(self, start_time, end_time, time): """Indicate if the given time falls inside of the given range. Parameters ---------- start_time : int The unix time for the start of the range end_time : int The unix time for the end of the range time : int The unix time to check Returns ------- bool True if the time falls within the range, False otherwise. """ ONE_MONTH = 2764800 # 32 days return start_time <= time <= end_time or \ time <= start_time <= time + ONE_MONTH or \ time <= end_time <= time + ONE_MONTH
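The `_in_range` method above is a pure function of its three timestamps, so its wrap-around behavior can be exercised directly. A standalone restatement of the same check (the function name and sample timestamps are illustrative):

```python
ONE_MONTH = 2764800  # 32 days, matching the constant in the method above

def in_range(start_time, end_time, time):
    """True when `time` lies in [start_time, end_time], or when either
    endpoint sits no more than one month after `time` (wrap-around)."""
    return (start_time <= time <= end_time
            or time <= start_time <= time + ONE_MONTH
            or time <= end_time <= time + ONE_MONTH)

# a plain in-range hit
assert in_range(100, 200, 150)
# the range starts within one month after `time`, so it still matches
assert in_range(150, 200, 100)
# everything more than a month away misses
assert not in_range(100, 200, 100 + ONE_MONTH + 1)
```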
def toggleAttributesDOM(isEnabled): ''' toggleAttributesDOM - Toggle if the old DOM tag.attributes NamedNodeMap model should be used for the .attributes method, versus a more sane direct dict implementation. The DOM version is always accessible as AdvancedTag.attributesDOM The dict version is always accessible as AdvancedTag.attributesDict Default for AdvancedTag.attributes is to be attributesDict implementation. @param isEnabled <bool> - If True, .attributes will be changed to use the DOM-provider. Otherwise, it will use the dict provider. ''' if isEnabled: AdvancedTag.attributes = AdvancedTag.attributesDOM else: AdvancedTag.attributes = AdvancedTag.attributesDict
toggleAttributesDOM - Toggle if the old DOM tag.attributes NamedNodeMap model should be used for the .attributes method, versus a more sane direct dict implementation. The DOM version is always accessible as AdvancedTag.attributesDOM The dict version is always accessible as AdvancedTag.attributesDict Default for AdvancedTag.attributes is to be attributesDict implementation. @param isEnabled <bool> - If True, .attributes will be changed to use the DOM-provider. Otherwise, it will use the dict provider.
Below is the instruction that describes the task: ### Input: toggleAttributesDOM - Toggle if the old DOM tag.attributes NamedNodeMap model should be used for the .attributes method, versus a more sane direct dict implementation. The DOM version is always accessible as AdvancedTag.attributesDOM The dict version is always accessible as AdvancedTag.attributesDict Default for AdvancedTag.attributes is to be attributesDict implementation. @param isEnabled <bool> - If True, .attributes will be changed to use the DOM-provider. Otherwise, it will use the dict provider. ### Response: def toggleAttributesDOM(isEnabled): ''' toggleAttributesDOM - Toggle if the old DOM tag.attributes NamedNodeMap model should be used for the .attributes method, versus a more sane direct dict implementation. The DOM version is always accessible as AdvancedTag.attributesDOM The dict version is always accessible as AdvancedTag.attributesDict Default for AdvancedTag.attributes is to be attributesDict implementation. @param isEnabled <bool> - If True, .attributes will be changed to use the DOM-provider. Otherwise, it will use the dict provider. ''' if isEnabled: AdvancedTag.attributes = AdvancedTag.attributesDOM else: AdvancedTag.attributes = AdvancedTag.attributesDict
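The toggle above works by rebinding a class attribute, so every existing and future instance picks up the new provider at once. A toy sketch of the same pattern (the `Tag` class and its two providers are made up for illustration and are not AdvancedHTMLParser's real API):

```python
class Tag:
    # two alternative providers for the same interface
    def attributesDict(self):
        return "dict-provider"

    def attributesDOM(self):
        return "dom-provider"

    # default binding, mirroring AdvancedTag defaulting to the dict provider
    attributes = attributesDict

def toggleAttributesDOM(isEnabled):
    # rebinding on the class affects existing and future instances alike
    Tag.attributes = Tag.attributesDOM if isEnabled else Tag.attributesDict

t = Tag()
assert t.attributes() == "dict-provider"
toggleAttributesDOM(True)
assert t.attributes() == "dom-provider"   # same instance, new provider
toggleAttributesDOM(False)
assert t.attributes() == "dict-provider"
```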
def add_child(self, node): """Add a child node to the current node instance. :param node: the child node instance. :type node: Node :returns: The new child node instance. :rtype: Node """ if not issubclass(node.__class__, Node): raise TypeError("{}.add_child: arg «node»=«{}», type {} not valid.".format(self.__class__.__name__, node, type(node))) self.childs.append(node) node.parent = self return node
Add a child node to the current node instance. :param node: the child node instance. :type node: Node :returns: The new child node instance. :rtype: Node
Below is the instruction that describes the task: ### Input: Add a child node to the current node instance. :param node: the child node instance. :type node: Node :returns: The new child node instance. :rtype: Node ### Response: def add_child(self, node): """Add a child node to the current node instance. :param node: the child node instance. :type node: Node :returns: The new child node instance. :rtype: Node """ if not issubclass(node.__class__, Node): raise TypeError("{}.add_child: arg «node»=«{}», type {} not valid.".format(self.__class__.__name__, node, type(node))) self.childs.append(node) node.parent = self return node
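The `add_child` method above only needs `childs` and `parent` attributes on its host class, so it can be shown end to end with a minimal stand-in (the `name` field is illustrative; the real `Node` class is not shown in the record):

```python
class Node:
    """Minimal stand-in: only the attributes add_child touches are modeled."""
    def __init__(self, name):
        self.name = name
        self.childs = []
        self.parent = None

    def add_child(self, node):
        if not issubclass(node.__class__, Node):
            raise TypeError("{}.add_child: arg node={!r}, type {} not valid."
                            .format(self.__class__.__name__, node, type(node)))
        self.childs.append(node)
        node.parent = self
        return node

root = Node("root")
leaf = root.add_child(Node("leaf"))   # returns the child, so calls can chain
assert leaf.parent is root and root.childs == [leaf]

# non-Node children are rejected up front
try:
    root.add_child("not a node")
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for a non-Node child")
```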
def get_values(self, dtype=None): """ return object dtype as boxed values, such as Timestamps/Timedelta """ if is_object_dtype(dtype): values = self.values.ravel() result = self._holder(values).astype(object) return result.reshape(self.values.shape) return self.values
return object dtype as boxed values, such as Timestamps/Timedelta
Below is the instruction that describes the task: ### Input: return object dtype as boxed values, such as Timestamps/Timedelta ### Response: def get_values(self, dtype=None): """ return object dtype as boxed values, such as Timestamps/Timedelta """ if is_object_dtype(dtype): values = self.values.ravel() result = self._holder(values).astype(object) return result.reshape(self.values.shape) return self.values
def set_rule(self, name, properties): """ Set a rule as object attribute. Arguments: name (string): Rule name to set as attribute name. properties (dict): Dictionary of properties. """ self._rule_attrs.append(name) setattr(self, name, properties)
Set a rule as object attribute. Arguments: name (string): Rule name to set as attribute name. properties (dict): Dictionary of properties.
Below is the instruction that describes the task: ### Input: Set a rule as object attribute. Arguments: name (string): Rule name to set as attribute name. properties (dict): Dictionary of properties. ### Response: def set_rule(self, name, properties): """ Set a rule as object attribute. Arguments: name (string): Rule name to set as attribute name. properties (dict): Dictionary of properties. """ self._rule_attrs.append(name) setattr(self, name, properties)
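Since `set_rule` above only appends to `_rule_attrs` and calls `setattr`, its behavior is easy to demonstrate with a minimal host class (a sketch; the real class presumably initializes `_rule_attrs` elsewhere, and the rule names here are invented):

```python
class RuleHolder:
    def __init__(self):
        self._rule_attrs = []

    def set_rule(self, name, properties):
        self._rule_attrs.append(name)
        setattr(self, name, properties)

holder = RuleHolder()
holder.set_rule("spacing", {"indent": 4})
holder.set_rule("quotes", {"style": "double"})

# each rule becomes a plain attribute, and its name is tracked in order
assert holder.spacing == {"indent": 4}
assert holder._rule_attrs == ["spacing", "quotes"]
```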
def evaluate(self): """Evaluate functional value of previous iteration.""" if self.opt['AccurateDFid']: DX = self.reconstruct() S = self.xstep.S dfd = (np.linalg.norm(self.xstep.W * (DX - S))**2) / 2.0 if self.xmethod == 'fista': X = self.xstep.getcoef() else: X = self.xstep.var_y1() rl1 = np.sum(np.abs(X)) return dict(DFid=dfd, RegL1=rl1, ObjFun=dfd + self.xstep.lmbda * rl1) else: return None
Evaluate functional value of previous iteration.
Below is the instruction that describes the task: ### Input: Evaluate functional value of previous iteration. ### Response: def evaluate(self): """Evaluate functional value of previous iteration.""" if self.opt['AccurateDFid']: DX = self.reconstruct() S = self.xstep.S dfd = (np.linalg.norm(self.xstep.W * (DX - S))**2) / 2.0 if self.xmethod == 'fista': X = self.xstep.getcoef() else: X = self.xstep.var_y1() rl1 = np.sum(np.abs(X)) return dict(DFid=dfd, RegL1=rl1, ObjFun=dfd + self.xstep.lmbda * rl1) else: return None
def rename_object(self, old, new): """Replace the name of an object by a new one.""" self._objects.replace(old, new) pairs = self._pairs pairs |= {(new, p) for p in self._properties if (old, p) in pairs and not pairs.remove((old, p))}
Replace the name of an object by a new one.
Below is the instruction that describes the task: ### Input: Replace the name of an object by a new one. ### Response: def rename_object(self, old, new): """Replace the name of an object by a new one.""" self._objects.replace(old, new) pairs = self._pairs pairs |= {(new, p) for p in self._properties if (old, p) in pairs and not pairs.remove((old, p))}
def _complete_path(self, cmd_param_text, full_cmd, *_): """ completes paths """ if full_cmd.endswith(" "): cmd_param, path = " ", " " else: pieces = shlex.split(full_cmd) if len(pieces) > 1: cmd_param = pieces[-1] else: cmd_param = cmd_param_text path = cmd_param.rstrip("/") if cmd_param != "/" else "/" if re.match(r"^\s*$", path): return self._zk.get_children(self.curdir) rpath = self.resolve_path(path) if self._zk.exists(rpath): opts = [os.path.join(path, znode) for znode in self._zk.get_children(rpath)] else: parent, child = os.path.dirname(rpath), os.path.basename(rpath) relpath = os.path.dirname(path) to_rel = lambda n: os.path.join(relpath, n) if relpath != "" else n opts = [to_rel(n) for n in self._zk.get_children(parent) if n.startswith(child)] offs = len(cmd_param) - len(cmd_param_text) return [opt[offs:] for opt in opts]
completes paths
Below is the instruction that describes the task: ### Input: completes paths ### Response: def _complete_path(self, cmd_param_text, full_cmd, *_): """ completes paths """ if full_cmd.endswith(" "): cmd_param, path = " ", " " else: pieces = shlex.split(full_cmd) if len(pieces) > 1: cmd_param = pieces[-1] else: cmd_param = cmd_param_text path = cmd_param.rstrip("/") if cmd_param != "/" else "/" if re.match(r"^\s*$", path): return self._zk.get_children(self.curdir) rpath = self.resolve_path(path) if self._zk.exists(rpath): opts = [os.path.join(path, znode) for znode in self._zk.get_children(rpath)] else: parent, child = os.path.dirname(rpath), os.path.basename(rpath) relpath = os.path.dirname(path) to_rel = lambda n: os.path.join(relpath, n) if relpath != "" else n opts = [to_rel(n) for n in self._zk.get_children(parent) if n.startswith(child)] offs = len(cmd_param) - len(cmd_param_text) return [opt[offs:] for opt in opts]
def deduplicate(s, ch): """ From http://stackoverflow.com/q/42216559/610569 s = 'this   is  an   irritating string with  random spacing .' deduplicate(s) 'this is an irritating string with random spacing .' """ return ch.join([substring for substring in s.strip().split(ch) if substring])
From http://stackoverflow.com/q/42216559/610569 s = 'this   is  an   irritating string with  random spacing .' deduplicate(s) 'this is an irritating string with random spacing .'
Below is the instruction that describes the task: ### Input: From http://stackoverflow.com/q/42216559/610569 s = 'this   is  an   irritating string with  random spacing .' deduplicate(s) 'this is an irritating string with random spacing .' ### Response: def deduplicate(s, ch): """ From http://stackoverflow.com/q/42216559/610569 s = 'this   is  an   irritating string with  random spacing .' deduplicate(s) 'this is an irritating string with random spacing .' """ return ch.join([substring for substring in s.strip().split(ch) if substring])
def maybe_run_for_org(org, task_func, task_key, lock_timeout): """ Runs the given task function for the specified org provided it's not already running :param org: the org :param task_func: the task function :param task_key: the task key :param lock_timeout: the lock timeout in seconds """ r = get_redis_connection() key = TaskState.get_lock_key(org, task_key) if r.get(key): logger.warning("Skipping task %s for org #%d as it is still running" % (task_key, org.id)) else: with r.lock(key, timeout=lock_timeout): state = org.get_task_state(task_key) if state.is_disabled: logger.info("Skipping task %s for org #%d as is marked disabled" % (task_key, org.id)) return logger.info("Started task %s for org #%d..." % (task_key, org.id)) prev_started_on = state.last_successfully_started_on this_started_on = timezone.now() state.started_on = this_started_on state.ended_on = None state.save(update_fields=("started_on", "ended_on")) num_task_args = len(inspect.getargspec(task_func).args) try: if num_task_args == 3: results = task_func(org, prev_started_on, this_started_on) elif num_task_args == 1: results = task_func(org) else: raise ValueError("Task signature must be foo(org) or foo(org, since, until)") # pragma: no cover state.ended_on = timezone.now() state.last_successfully_started_on = this_started_on state.last_results = json.dumps(results) state.is_failing = False state.save(update_fields=("ended_on", "last_successfully_started_on", "last_results", "is_failing")) logger.info("Finished task %s for org #%d with result: %s" % (task_key, org.id, json.dumps(results))) except Exception as e: state.ended_on = timezone.now() state.last_results = None state.is_failing = True state.save(update_fields=("ended_on", "last_results", "is_failing")) logger.exception("Task %s for org #%d failed" % (task_key, org.id)) raise e
Runs the given task function for the specified org provided it's not already running :param org: the org :param task_func: the task function :param task_key: the task key :param lock_timeout: the lock timeout in seconds
Below is the instruction that describes the task: ### Input: Runs the given task function for the specified org provided it's not already running :param org: the org :param task_func: the task function :param task_key: the task key :param lock_timeout: the lock timeout in seconds ### Response: def maybe_run_for_org(org, task_func, task_key, lock_timeout): """ Runs the given task function for the specified org provided it's not already running :param org: the org :param task_func: the task function :param task_key: the task key :param lock_timeout: the lock timeout in seconds """ r = get_redis_connection() key = TaskState.get_lock_key(org, task_key) if r.get(key): logger.warning("Skipping task %s for org #%d as it is still running" % (task_key, org.id)) else: with r.lock(key, timeout=lock_timeout): state = org.get_task_state(task_key) if state.is_disabled: logger.info("Skipping task %s for org #%d as is marked disabled" % (task_key, org.id)) return logger.info("Started task %s for org #%d..."
% (task_key, org.id)) prev_started_on = state.last_successfully_started_on this_started_on = timezone.now() state.started_on = this_started_on state.ended_on = None state.save(update_fields=("started_on", "ended_on")) num_task_args = len(inspect.getargspec(task_func).args) try: if num_task_args == 3: results = task_func(org, prev_started_on, this_started_on) elif num_task_args == 1: results = task_func(org) else: raise ValueError("Task signature must be foo(org) or foo(org, since, until)") # pragma: no cover state.ended_on = timezone.now() state.last_successfully_started_on = this_started_on state.last_results = json.dumps(results) state.is_failing = False state.save(update_fields=("ended_on", "last_successfully_started_on", "last_results", "is_failing")) logger.info("Finished task %s for org #%d with result: %s" % (task_key, org.id, json.dumps(results))) except Exception as e: state.ended_on = timezone.now() state.last_results = None state.is_failing = True state.save(update_fields=("ended_on", "last_results", "is_failing")) logger.exception("Task %s for org #%d failed" % (task_key, org.id)) raise e
def parse_time(time): """ Parse a date/time string and return a corresponding datetime object. Args: time (str): A ``string`` of one of the following formats: ``%H:%M``, ``%Y-%m-%d`` or ``%Y-%m-%d %H:%M``. Returns: datetime.datetime: Depending on input string either returns ``datetime.date``, ``datetime.time`` or ``datetime.datetime``. Raises: ValueError: If ``time`` can not be matched against any of the accepted formats. Note: This parses just a singular date, time or datetime representation. """ length = len(time.strip().split()) if length == 1: try: result = datetime.datetime.strptime(time, '%H:%M:%S').time() except ValueError: try: result = datetime.datetime.strptime(time, '%H:%M').time() except ValueError: result = datetime.datetime.strptime(time, '%Y-%m-%d').date() elif length == 2: try: result = datetime.datetime.strptime(time, '%Y-%m-%d %H:%M:%S') except ValueError: result = datetime.datetime.strptime(time, '%Y-%m-%d %H:%M') else: raise ValueError(_( "String does not seem to be in one of our supported time formats." )) return result
Parse a date/time string and return a corresponding datetime object. Args: time (str): A ``string`` of one of the following formats: ``%H:%M``, ``%Y-%m-%d`` or ``%Y-%m-%d %H:%M``. Returns: datetime.datetime: Depending on input string either returns ``datetime.date``, ``datetime.time`` or ``datetime.datetime``. Raises: ValueError: If ``time`` can not be matched against any of the accepted formats. Note: This parses just a singular date, time or datetime representation.
Below is the instruction that describes the task: ### Input: Parse a date/time string and return a corresponding datetime object. Args: time (str): A ``string`` of one of the following formats: ``%H:%M``, ``%Y-%m-%d`` or ``%Y-%m-%d %H:%M``. Returns: datetime.datetime: Depending on input string either returns ``datetime.date``, ``datetime.time`` or ``datetime.datetime``. Raises: ValueError: If ``time`` can not be matched against any of the accepted formats. Note: This parses just a singular date, time or datetime representation. ### Response: def parse_time(time): """ Parse a date/time string and return a corresponding datetime object. Args: time (str): A ``string`` of one of the following formats: ``%H:%M``, ``%Y-%m-%d`` or ``%Y-%m-%d %H:%M``. Returns: datetime.datetime: Depending on input string either returns ``datetime.date``, ``datetime.time`` or ``datetime.datetime``. Raises: ValueError: If ``time`` can not be matched against any of the accepted formats. Note: This parses just a singular date, time or datetime representation. """ length = len(time.strip().split()) if length == 1: try: result = datetime.datetime.strptime(time, '%H:%M:%S').time() except ValueError: try: result = datetime.datetime.strptime(time, '%H:%M').time() except ValueError: result = datetime.datetime.strptime(time, '%Y-%m-%d').date() elif length == 2: try: result = datetime.datetime.strptime(time, '%Y-%m-%d %H:%M:%S') except ValueError: result = datetime.datetime.strptime(time, '%Y-%m-%d %H:%M') else: raise ValueError(_( "String does not seem to be in one of our supported time formats." )) return result
def add_data_item(self, data_item: DataItem) -> None: """Add a data item to the group. :param data_item: The :py:class:`nion.swift.Facade.DataItem` object to add. .. versionadded:: 1.0 Scriptable: Yes """ display_item = data_item._data_item.container.get_display_item_for_data_item(data_item._data_item) if data_item._data_item.container else None if display_item: self.__data_group.append_display_item(display_item)
Add a data item to the group. :param data_item: The :py:class:`nion.swift.Facade.DataItem` object to add. .. versionadded:: 1.0 Scriptable: Yes
Below is the instruction that describes the task: ### Input: Add a data item to the group. :param data_item: The :py:class:`nion.swift.Facade.DataItem` object to add. .. versionadded:: 1.0 Scriptable: Yes ### Response: def add_data_item(self, data_item: DataItem) -> None: """Add a data item to the group. :param data_item: The :py:class:`nion.swift.Facade.DataItem` object to add. .. versionadded:: 1.0 Scriptable: Yes """ display_item = data_item._data_item.container.get_display_item_for_data_item(data_item._data_item) if data_item._data_item.container else None if display_item: self.__data_group.append_display_item(display_item)
def batch_step(self, batch_idx=None): """Updates the learning rate for the batch index: ``batch_idx``. If ``batch_idx`` is None, ``CyclicLR`` will use an internal batch index to keep track of the index. """ if batch_idx is None: batch_idx = self.last_batch_idx + 1 self.last_batch_idx = batch_idx for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()): param_group['lr'] = lr
Updates the learning rate for the batch index: ``batch_idx``. If ``batch_idx`` is None, ``CyclicLR`` will use an internal batch index to keep track of the index.
Below is the instruction that describes the task: ### Input: Updates the learning rate for the batch index: ``batch_idx``. If ``batch_idx`` is None, ``CyclicLR`` will use an internal batch index to keep track of the index. ### Response: def batch_step(self, batch_idx=None): """Updates the learning rate for the batch index: ``batch_idx``. If ``batch_idx`` is None, ``CyclicLR`` will use an internal batch index to keep track of the index. """ if batch_idx is None: batch_idx = self.last_batch_idx + 1 self.last_batch_idx = batch_idx for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()): param_group['lr'] = lr
def send_mail(subject, body_text, addr_from, recipient_list, fail_silently=False, auth_user=None, auth_password=None, attachments=None, body_html=None, html_message=None, connection=None, headers=None): """ Sends a multipart email containing text and html versions which are encrypted for each recipient that has a valid gpg key installed. """ # Make sure only one HTML option is specified if body_html is not None and html_message is not None: # pragma: no cover raise ValueError("You cannot specify body_html and html_message at " "the same time. Please only use html_message.") # Push users to update their code if body_html is not None: # pragma: no cover warn("Using body_html is deprecated; use the html_message argument " "instead. Please update your code.", DeprecationWarning) html_message = body_html # Allow for a single address to be passed in. if isinstance(recipient_list, six.string_types): recipient_list = [recipient_list] connection = connection or get_connection( username=auth_user, password=auth_password, fail_silently=fail_silently) # Obtain a list of the recipients that have gpg keys installed. key_addresses = {} if USE_GNUPG: from email_extras.models import Address key_addresses = dict(Address.objects.filter(address__in=recipient_list) .values_list('address', 'use_asc')) # Create the gpg object. if key_addresses: gpg = GPG(gnupghome=GNUPG_HOME) if GNUPG_ENCODING is not None: gpg.encoding = GNUPG_ENCODING # Check if recipient has a gpg key installed def has_pgp_key(addr): return addr in key_addresses # Encrypts body if recipient has a gpg key installed. def encrypt_if_key(body, addr_list): if has_pgp_key(addr_list[0]): encrypted = gpg.encrypt(body, addr_list[0], always_trust=ALWAYS_TRUST) if encrypted == "" and body != "": # encryption failed raise EncryptionFailedError("Encrypting mail to %s failed.", addr_list[0]) return smart_text(encrypted) return body # Load attachments and create name/data tuples. 
attachments_parts = [] if attachments is not None: for attachment in attachments: # Attachments can be pairs of name/data, or filesystem paths. if not hasattr(attachment, "__iter__"): with open(attachment, "rb") as f: attachments_parts.append((basename(attachment), f.read())) else: attachments_parts.append(attachment) # Send emails - encrypted emails needs to be sent individually, while # non-encrypted emails can be sent in one send. So the final list of # lists of addresses to send to looks like: # [[unencrypted1, unencrypted2, unencrypted3], [encrypted1], [encrypted2]] unencrypted = [addr for addr in recipient_list if addr not in key_addresses] unencrypted = [unencrypted] if unencrypted else unencrypted encrypted = [[addr] for addr in key_addresses] for addr_list in unencrypted + encrypted: msg = EmailMultiAlternatives(subject, encrypt_if_key(body_text, addr_list), addr_from, addr_list, connection=connection, headers=headers) if html_message is not None: if has_pgp_key(addr_list[0]): mimetype = "application/gpg-encrypted" else: mimetype = "text/html" msg.attach_alternative(encrypt_if_key(html_message, addr_list), mimetype) for parts in attachments_parts: name = parts[0] if key_addresses.get(addr_list[0]): name += ".asc" msg.attach(name, encrypt_if_key(parts[1], addr_list)) msg.send(fail_silently=fail_silently)
Sends a multipart email containing text and html versions which are encrypted for each recipient that has a valid gpg key installed.
Below is the instruction that describes the task: ### Input: Sends a multipart email containing text and html versions which are encrypted for each recipient that has a valid gpg key installed. ### Response: def send_mail(subject, body_text, addr_from, recipient_list, fail_silently=False, auth_user=None, auth_password=None, attachments=None, body_html=None, html_message=None, connection=None, headers=None): """ Sends a multipart email containing text and html versions which are encrypted for each recipient that has a valid gpg key installed. """ # Make sure only one HTML option is specified if body_html is not None and html_message is not None: # pragma: no cover raise ValueError("You cannot specify body_html and html_message at " "the same time. Please only use html_message.") # Push users to update their code if body_html is not None: # pragma: no cover warn("Using body_html is deprecated; use the html_message argument " "instead. Please update your code.", DeprecationWarning) html_message = body_html # Allow for a single address to be passed in. if isinstance(recipient_list, six.string_types): recipient_list = [recipient_list] connection = connection or get_connection( username=auth_user, password=auth_password, fail_silently=fail_silently) # Obtain a list of the recipients that have gpg keys installed. key_addresses = {} if USE_GNUPG: from email_extras.models import Address key_addresses = dict(Address.objects.filter(address__in=recipient_list) .values_list('address', 'use_asc')) # Create the gpg object. if key_addresses: gpg = GPG(gnupghome=GNUPG_HOME) if GNUPG_ENCODING is not None: gpg.encoding = GNUPG_ENCODING # Check if recipient has a gpg key installed def has_pgp_key(addr): return addr in key_addresses # Encrypts body if recipient has a gpg key installed.
def encrypt_if_key(body, addr_list): if has_pgp_key(addr_list[0]): encrypted = gpg.encrypt(body, addr_list[0], always_trust=ALWAYS_TRUST) if encrypted == "" and body != "": # encryption failed raise EncryptionFailedError("Encrypting mail to %s failed.", addr_list[0]) return smart_text(encrypted) return body # Load attachments and create name/data tuples. attachments_parts = [] if attachments is not None: for attachment in attachments: # Attachments can be pairs of name/data, or filesystem paths. if not hasattr(attachment, "__iter__"): with open(attachment, "rb") as f: attachments_parts.append((basename(attachment), f.read())) else: attachments_parts.append(attachment) # Send emails - encrypted emails needs to be sent individually, while # non-encrypted emails can be sent in one send. So the final list of # lists of addresses to send to looks like: # [[unencrypted1, unencrypted2, unencrypted3], [encrypted1], [encrypted2]] unencrypted = [addr for addr in recipient_list if addr not in key_addresses] unencrypted = [unencrypted] if unencrypted else unencrypted encrypted = [[addr] for addr in key_addresses] for addr_list in unencrypted + encrypted: msg = EmailMultiAlternatives(subject, encrypt_if_key(body_text, addr_list), addr_from, addr_list, connection=connection, headers=headers) if html_message is not None: if has_pgp_key(addr_list[0]): mimetype = "application/gpg-encrypted" else: mimetype = "text/html" msg.attach_alternative(encrypt_if_key(html_message, addr_list), mimetype) for parts in attachments_parts: name = parts[0] if key_addresses.get(addr_list[0]): name += ".asc" msg.attach(name, encrypt_if_key(parts[1], addr_list)) msg.send(fail_silently=fail_silently)
def _target_chroms_and_header(bam_file, data): """Get a list of chromosomes to target and new updated ref_file header. Could potentially handle remapping from chr1 -> 1 but currently disabled due to speed issues. """ special_remaps = {"chrM": "MT", "MT": "chrM"} target_chroms = dict([(x.name, i) for i, x in enumerate(ref.file_contigs(dd.get_ref_file(data))) if chromhacks.is_autosomal_or_sex(x.name)]) out_chroms = [] with pysam.Samfile(bam_file, "rb") as bamfile: for bami, bam_contig in enumerate([c["SN"] for c in bamfile.header["SQ"]]): if bam_contig in target_chroms: target_chrom = bam_contig elif bam_contig in special_remaps and special_remaps[bam_contig] in target_chroms: target_chrom = special_remaps[bam_contig] elif bam_contig.startswith("chr") and bam_contig.replace("chr", "") in target_chroms: target_chrom = bam_contig.replace("chr", "") elif "chr%s" % bam_contig in target_chroms: target_chrom = "chr%s" % bam_contig else: target_chrom = None # target_chrom == bam_contig ensures we don't try chr1 -> 1 style remapping if target_chrom and target_chrom == bam_contig: # Order not required if dealing with SAM file header fixing #assert bami == target_chroms[target_chrom], \ # ("remove_extracontigs: Non-matching order of standard contig: %s %s (%s vs %s)" % # (bam_file, target_chrom, bami, target_chroms[target_chrom])) out_chroms.append(target_chrom) assert out_chroms, ("remove_extracontigs: Did not find any chromosomes in reference file: %s %s" % (bam_file, target_chroms)) return out_chroms
Get a list of chromosomes to target and new updated ref_file header. Could potentially handle remapping from chr1 -> 1 but currently disabled due to speed issues.
Below is the instruction that describes the task: ### Input: Get a list of chromosomes to target and new updated ref_file header. Could potentially handle remapping from chr1 -> 1 but currently disabled due to speed issues. ### Response: def _target_chroms_and_header(bam_file, data): """Get a list of chromosomes to target and new updated ref_file header. Could potentially handle remapping from chr1 -> 1 but currently disabled due to speed issues. """ special_remaps = {"chrM": "MT", "MT": "chrM"} target_chroms = dict([(x.name, i) for i, x in enumerate(ref.file_contigs(dd.get_ref_file(data))) if chromhacks.is_autosomal_or_sex(x.name)]) out_chroms = [] with pysam.Samfile(bam_file, "rb") as bamfile: for bami, bam_contig in enumerate([c["SN"] for c in bamfile.header["SQ"]]): if bam_contig in target_chroms: target_chrom = bam_contig elif bam_contig in special_remaps and special_remaps[bam_contig] in target_chroms: target_chrom = special_remaps[bam_contig] elif bam_contig.startswith("chr") and bam_contig.replace("chr", "") in target_chroms: target_chrom = bam_contig.replace("chr", "") elif "chr%s" % bam_contig in target_chroms: target_chrom = "chr%s" % bam_contig else: target_chrom = None # target_chrom == bam_contig ensures we don't try chr1 -> 1 style remapping if target_chrom and target_chrom == bam_contig: # Order not required if dealing with SAM file header fixing #assert bami == target_chroms[target_chrom], \ # ("remove_extracontigs: Non-matching order of standard contig: %s %s (%s vs %s)" % # (bam_file, target_chrom, bami, target_chroms[target_chrom])) out_chroms.append(target_chrom) assert out_chroms, ("remove_extracontigs: Did not find any chromosomes in reference file: %s %s" % (bam_file, target_chroms)) return out_chroms
def _get_namespace2go2term(go2terms): """Group GO IDs by namespace.""" namespace2go2term = cx.defaultdict(dict) for goid, goterm in go2terms.items(): namespace2go2term[goterm.namespace][goid] = goterm return namespace2go2term
Group GO IDs by namespace.
Below is the instruction that describes the task: ### Input: Group GO IDs by namespace. ### Response: def _get_namespace2go2term(go2terms): """Group GO IDs by namespace.""" namespace2go2term = cx.defaultdict(dict) for goid, goterm in go2terms.items(): namespace2go2term[goterm.namespace][goid] = goterm return namespace2go2term
def getSortedTrackedDeviceIndicesOfClass(self, eTrackedDeviceClass, unTrackedDeviceIndexArrayCount, unRelativeToTrackedDeviceIndex): """ Get a sorted array of device indices of a given class of tracked devices (e.g. controllers). Devices are sorted right to left relative to the specified tracked device (default: hmd -- pass in -1 for absolute tracking space). Returns the number of devices in the list, or the size of the array needed if not large enough. """ fn = self.function_table.getSortedTrackedDeviceIndicesOfClass punTrackedDeviceIndexArray = TrackedDeviceIndex_t() result = fn(eTrackedDeviceClass, byref(punTrackedDeviceIndexArray), unTrackedDeviceIndexArrayCount, unRelativeToTrackedDeviceIndex) return result, punTrackedDeviceIndexArray
Get a sorted array of device indices of a given class of tracked devices (e.g. controllers). Devices are sorted right to left relative to the specified tracked device (default: hmd -- pass in -1 for absolute tracking space). Returns the number of devices in the list, or the size of the array needed if not large enough.
Below is the instruction that describes the task: ### Input: Get a sorted array of device indices of a given class of tracked devices (e.g. controllers). Devices are sorted right to left relative to the specified tracked device (default: hmd -- pass in -1 for absolute tracking space). Returns the number of devices in the list, or the size of the array needed if not large enough. ### Response: def getSortedTrackedDeviceIndicesOfClass(self, eTrackedDeviceClass, unTrackedDeviceIndexArrayCount, unRelativeToTrackedDeviceIndex): """ Get a sorted array of device indices of a given class of tracked devices (e.g. controllers). Devices are sorted right to left relative to the specified tracked device (default: hmd -- pass in -1 for absolute tracking space). Returns the number of devices in the list, or the size of the array needed if not large enough. """ fn = self.function_table.getSortedTrackedDeviceIndicesOfClass punTrackedDeviceIndexArray = TrackedDeviceIndex_t() result = fn(eTrackedDeviceClass, byref(punTrackedDeviceIndexArray), unTrackedDeviceIndexArrayCount, unRelativeToTrackedDeviceIndex) return result, punTrackedDeviceIndexArray
def main(): """ Install a package from pypi or gemfury :return: """ pypitools.common.setup_main() config = pypitools.common.ConfigData() module_name = os.path.basename(os.getcwd()) args = [] if config.use_sudo: args.extend([ 'sudo', '-H', ]) args.extend([ '{}'.format(config.pip), 'install', '--upgrade', '{module_name}'.format(module_name=module_name), ]) if config.pip_quiet: args.extend([ '--quiet', ]) if config.install_in_user_folder: args.extend([ '--user', ]) pypitools.common.check_call_no_output(args) output = subprocess.check_output([ '{}'.format(config.pip), 'show', '{module_name}'.format(module_name=module_name), ]).decode() for line in output.split("\n"): if line.startswith("Version"): print(line)
Install a package from pypi or gemfury :return:
Below is the instruction that describes the task: ### Input: Install a package from pypi or gemfury :return: ### Response: def main(): """ Install a package from pypi or gemfury :return: """ pypitools.common.setup_main() config = pypitools.common.ConfigData() module_name = os.path.basename(os.getcwd()) args = [] if config.use_sudo: args.extend([ 'sudo', '-H', ]) args.extend([ '{}'.format(config.pip), 'install', '--upgrade', '{module_name}'.format(module_name=module_name), ]) if config.pip_quiet: args.extend([ '--quiet', ]) if config.install_in_user_folder: args.extend([ '--user', ]) pypitools.common.check_call_no_output(args) output = subprocess.check_output([ '{}'.format(config.pip), 'show', '{module_name}'.format(module_name=module_name), ]).decode() for line in output.split("\n"): if line.startswith("Version"): print(line)
def build_base_parameters(request): """Build the list of parameters to forward from the POST and GET parameters""" getParameters = {} postParameters = {} files = {} # Copy GET parameters, excluding ebuio_* for v in request.GET: if v[:6] != 'ebuio_': val = request.GET.getlist(v) if len(val) == 1: getParameters[v] = val[0] else: getParameters[v] = val # If using post, copy post parameters and files. Excluding ebuio_* if request.method == 'POST': for v in request.POST: if v[:6] != 'ebuio_': val = request.POST.getlist(v) if len(val) == 1: postParameters[v] = val[0] else: postParameters[v] = val for v in request.FILES: if v[:6] != 'ebuio_': files[v] = request.FILES[v] # .chunks() return (getParameters, postParameters, files)
Build the list of parameters to forward from the POST and GET parameters
Below is the instruction that describes the task: ### Input: Build the list of parameters to forward from the POST and GET parameters ### Response: def build_base_parameters(request): """Build the list of parameters to forward from the POST and GET parameters""" getParameters = {} postParameters = {} files = {} # Copy GET parameters, excluding ebuio_* for v in request.GET: if v[:6] != 'ebuio_': val = request.GET.getlist(v) if len(val) == 1: getParameters[v] = val[0] else: getParameters[v] = val # If using post, copy post parameters and files. Excluding ebuio_* if request.method == 'POST': for v in request.POST: if v[:6] != 'ebuio_': val = request.POST.getlist(v) if len(val) == 1: postParameters[v] = val[0] else: postParameters[v] = val for v in request.FILES: if v[:6] != 'ebuio_': files[v] = request.FILES[v] # .chunks() return (getParameters, postParameters, files)
def int_to_base(n, base): """ :type n: int :type base: int :rtype: str """ is_negative = False if n == 0: return '0' elif n < 0: is_negative = True n *= -1 digit = string.digits + string.ascii_uppercase res = '' while n > 0: res += digit[n % base] n //= base if is_negative: return '-' + res[::-1] else: return res[::-1]
:type n: int :type base: int :rtype: str
Below is the instruction that describes the task: ### Input: :type n: int :type base: int :rtype: str ### Response: def int_to_base(n, base): """ :type n: int :type base: int :rtype: str """ is_negative = False if n == 0: return '0' elif n < 0: is_negative = True n *= -1 digit = string.digits + string.ascii_uppercase res = '' while n > 0: res += digit[n % base] n //= base if is_negative: return '-' + res[::-1] else: return res[::-1]
def alphanum(columns, name=None, extended=False, isLast=False): """ Creates the grammar for an Alphanumeric (A) field, accepting only the specified number of characters. By default Alphanumeric fields accept only ASCII characters, excluding lowercases. If the extended flag is set to True, then non-ASCII characters are allowed, but the no ASCII lowercase constraint is kept. This can be a compulsory field, in which case the empty string is disallowed. The text will be stripped of leading and trailing whitespaces. :param columns: number of columns for this field :param name: name for the field :param extended: indicates if this is the exceptional case where non-ASCII are allowed :return: grammar for this Alphanumeric field """ if name is None: name = 'Alphanumeric Field' if columns < 0: # Can't be empty or have negative size raise BaseException() if isLast: columns = str('1,' + str(columns)) # Checks if non-ASCII characters are allowed if not extended: # The regular expression just forbids lowercase characters field = pp.Regex('([\x00-\x60]|[\x7B-\x7F]){' + str(columns) + '}') else: # The regular expression forbids lowercase characters but allows # non-ASCII characters field = pp.Regex('([\x00-\x09]|[\x0E-\x60]|[\x7B-\x7F]|[^\x00-\x7F]){' + str(columns) + '}') # Parse action field.setParseAction(lambda s: s[0].strip()) # Compulsory field validation action if columns: field.addParseAction(lambda s: _check_not_empty(s[0])) # White spaces are not removed field.leaveWhitespace() # Name field.setName(name) return field
Creates the grammar for an Alphanumeric (A) field, accepting only the specified number of characters. By default Alphanumeric fields accept only ASCII characters, excluding lowercases. If the extended flag is set to True, then non-ASCII characters are allowed, but the no ASCII lowercase constraint is kept. This can be a compulsory field, in which case the empty string is disallowed. The text will be stripped of leading and trailing whitespaces. :param columns: number of columns for this field :param name: name for the field :param extended: indicates if this is the exceptional case where non-ASCII are allowed :return: grammar for this Alphanumeric field
Below is the instruction that describes the task: ### Input: Creates the grammar for an Alphanumeric (A) field, accepting only the specified number of characters. By default Alphanumeric fields accept only ASCII characters, excluding lowercases. If the extended flag is set to True, then non-ASCII characters are allowed, but the no ASCII lowercase constraint is kept. This can be a compulsory field, in which case the empty string is disallowed. The text will be stripped of heading and trailing whitespaces. :param columns: number of columns for this field :param name: name for the field :param extended: indicates if this is the exceptional case where non-ASCII are allowed :return: grammar for this Alphanumeric field ### Response: def alphanum(columns, name=None, extended=False, isLast=False): """ Creates the grammar for an Alphanumeric (A) field, accepting only the specified number of characters. By default Alphanumeric fields accept only ASCII characters, excluding lowercases. If the extended flag is set to True, then non-ASCII characters are allowed, but the no ASCII lowercase constraint is kept. This can be a compulsory field, in which case the empty string is disallowed. The text will be stripped of heading and trailing whitespaces.
:param columns: number of columns for this field :param name: name for the field :param extended: indicates if this is the exceptional case where non-ASCII are allowed :return: grammar for this Alphanumeric field """ if name is None: name = 'Alphanumeric Field' if columns < 0: # Can't be empty or have negative size raise BaseException() if isLast: columns = str('1,' + str(columns)) # Checks if non-ASCII characters are allowed if not extended: # The regular expression just forbids lowercase characters field = pp.Regex('([\x00-\x60]|[\x7B-\x7F]){' + str(columns) + '}') else: # The regular expression forbids lowercase characters but allows # non-ASCII characters field = pp.Regex('([\x00-\x09]|[\x0E-\x60]|[\x7B-\x7F]|[^\x00-\x7F]){' + str(columns) + '}') # Parse action field.setParseAction(lambda s: s[0].strip()) # Compulsory field validation action if columns: field.addParseAction(lambda s: _check_not_empty(s[0])) # White spaces are not removed field.leaveWhitespace() # Name field.setName(name) return field
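As a side note on the character ranges used above: the same uppercase-only constraint can be exercised with the stdlib `re` module. This is only a sketch of the idea (the field width `COLUMNS` is made up, and the real grammar uses pyparsing, not `re`):

```python
import re

COLUMNS = 5  # hypothetical field width for this sketch
# Same class as the non-extended branch: everything except ASCII lowercase,
# which occupies \x61-\x7A, leaving \x00-\x60 and \x7B-\x7F allowed.
field_re = re.compile('([\x00-\x60]|[\x7B-\x7F]){%d}$' % COLUMNS)

print(bool(field_re.match('ABC12')))  # True: uppercase and digits pass
print(bool(field_re.match('abc12')))  # False: lowercase is rejected
```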
def _add_value_enum(self, var, tag): """ supports adding variables to the xml Parameters --------------- var: The SubElement variable tag: The SubElement tag to which enum value is to be added Return --------------- None """ if var['ValueEnum'][0] == 's0': numvalues_tag = etree.SubElement(tag, 'NumValues') numvalues_tag.text = str(int(var['ValueEnum'][-1][-1]) + 1) else: valueenum_tag = etree.SubElement(tag, 'ValueEnum') valueenum_tag.text = '' for value in var['ValueEnum']: valueenum_tag.text += value + ' ' valueenum_tag.text = valueenum_tag.text[:-1]
supports adding variables to the xml Parameters --------------- var: The SubElement variable tag: The SubElement tag to which enum value is to be added Return --------------- None
Below is the instruction that describes the task: ### Input: supports adding variables to the xml Parameters --------------- var: The SubElement variable tag: The SubElement tag to which enum value is to be added Return --------------- None ### Response: def _add_value_enum(self, var, tag): """ supports adding variables to the xml Parameters --------------- var: The SubElement variable tag: The SubElement tag to which enum value is to be added Return --------------- None """ if var['ValueEnum'][0] == 's0': numvalues_tag = etree.SubElement(tag, 'NumValues') numvalues_tag.text = str(int(var['ValueEnum'][-1][-1]) + 1) else: valueenum_tag = etree.SubElement(tag, 'ValueEnum') valueenum_tag.text = '' for value in var['ValueEnum']: valueenum_tag.text += value + ' ' valueenum_tag.text = valueenum_tag.text[:-1]
def apply_exclude(self, high): ''' Read in the __exclude__ list and remove all excluded objects from the high data ''' if '__exclude__' not in high: return high ex_sls = set() ex_id = set() exclude = high.pop('__exclude__') for exc in exclude: if isinstance(exc, six.string_types): # The exclude statement is a string, assume it is an sls ex_sls.add(exc) if isinstance(exc, dict): # Explicitly declared exclude if len(exc) != 1: continue key = next(six.iterkeys(exc)) if key == 'sls': ex_sls.add(exc['sls']) elif key == 'id': ex_id.add(exc['id']) # Now the excludes have been simplified, use them if ex_sls: # There are sls excludes, find the associated ids for name, body in six.iteritems(high): if name.startswith('__'): continue if body.get('__sls__', '') in ex_sls: ex_id.add(name) for id_ in ex_id: if id_ in high: high.pop(id_) return high
Read in the __exclude__ list and remove all excluded objects from the high data
Below is the instruction that describes the task: ### Input: Read in the __exclude__ list and remove all excluded objects from the high data ### Response: def apply_exclude(self, high): ''' Read in the __exclude__ list and remove all excluded objects from the high data ''' if '__exclude__' not in high: return high ex_sls = set() ex_id = set() exclude = high.pop('__exclude__') for exc in exclude: if isinstance(exc, six.string_types): # The exclude statement is a string, assume it is an sls ex_sls.add(exc) if isinstance(exc, dict): # Explicitly declared exclude if len(exc) != 1: continue key = next(six.iterkeys(exc)) if key == 'sls': ex_sls.add(exc['sls']) elif key == 'id': ex_id.add(exc['id']) # Now the excludes have been simplified, use them if ex_sls: # There are sls excludes, find the associated ids for name, body in six.iteritems(high): if name.startswith('__'): continue if body.get('__sls__', '') in ex_sls: ex_id.add(name) for id_ in ex_id: if id_ in high: high.pop(id_) return high
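The exclude resolution above can be sketched without the `six` compatibility layer. This is a simplified Python 3 rendering with made-up state IDs and a hypothetical function name, not Salt's actual implementation:

```python
def resolve_excludes(high):
    """Drop state IDs named in __exclude__, by id or by owning sls file."""
    if '__exclude__' not in high:
        return high
    ex_sls, ex_id = set(), set()
    for exc in high.pop('__exclude__'):
        if isinstance(exc, str):
            ex_sls.add(exc)  # bare string: treat it as an sls name
        elif isinstance(exc, dict) and len(exc) == 1:
            key = next(iter(exc))
            if key == 'sls':
                ex_sls.add(exc['sls'])
            elif key == 'id':
                ex_id.add(exc['id'])
    # Map excluded sls files to the state ids they declared
    for name, body in high.items():
        if not name.startswith('__') and body.get('__sls__', '') in ex_sls:
            ex_id.add(name)
    for id_ in ex_id:
        high.pop(id_, None)
    return high

high = {
    '__exclude__': [{'sls': 'webserver'}, {'id': 'old_pkg'}],
    'nginx': {'__sls__': 'webserver'},
    'old_pkg': {'__sls__': 'base'},
    'sshd': {'__sls__': 'base'},
}
print(sorted(resolve_excludes(high)))  # ['sshd']
```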
def validate(self, document): """ Check input for Python syntax errors. """ # When the input starts with Ctrl-Z, always accept. This means EOF in a # Python REPL. if document.text.startswith('\x1a'): return try: if self.get_compiler_flags: flags = self.get_compiler_flags() else: flags = 0 compile(document.text, '<input>', 'exec', flags=flags, dont_inherit=True) except SyntaxError as e: # Note, the 'or 1' for offset is required because Python 2.7 # gives `None` as offset in case of '4=4' as input. (Looks like # fixed in Python 3.) index = document.translate_row_col_to_index(e.lineno - 1, (e.offset or 1) - 1) raise ValidationError(index, 'Syntax Error') except TypeError as e: # e.g. "compile() expected string without null bytes" raise ValidationError(0, str(e)) except ValueError as e: # In Python 2, compiling "\x9" (an invalid escape sequence) raises # ValueError instead of SyntaxError. raise ValidationError(0, 'Syntax Error: %s' % e)
Check input for Python syntax errors.
Below is the instruction that describes the task: ### Input: Check input for Python syntax errors. ### Response: def validate(self, document): """ Check input for Python syntax errors. """ # When the input starts with Ctrl-Z, always accept. This means EOF in a # Python REPL. if document.text.startswith('\x1a'): return try: if self.get_compiler_flags: flags = self.get_compiler_flags() else: flags = 0 compile(document.text, '<input>', 'exec', flags=flags, dont_inherit=True) except SyntaxError as e: # Note, the 'or 1' for offset is required because Python 2.7 # gives `None` as offset in case of '4=4' as input. (Looks like # fixed in Python 3.) index = document.translate_row_col_to_index(e.lineno - 1, (e.offset or 1) - 1) raise ValidationError(index, 'Syntax Error') except TypeError as e: # e.g. "compile() expected string without null bytes" raise ValidationError(0, str(e)) except ValueError as e: # In Python 2, compiling "\x9" (an invalid escape sequence) raises # ValueError instead of SyntaxError. raise ValidationError(0, 'Syntax Error: %s' % e)
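The `compile()`-based check above can be exercised on its own. This stripped-down sketch drops the prompt_toolkit `Document` wrapper and works on a raw string, to show how the `lineno`/`offset` attributes of a `SyntaxError` are read:

```python
def first_syntax_error(source):
    """Return a zero-based (row, col) pair for the first syntax error, or None."""
    try:
        compile(source, '<input>', 'exec')
    except SyntaxError as e:
        # 'or 1' mirrors the validator above: offset may be None on Python 2.7
        return (e.lineno - 1, (e.offset or 1) - 1)
    return None

print(first_syntax_error('x = 1\ny ='))  # error reported on row 1 (the second line)
print(first_syntax_error('x = 1'))       # None
```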
def edit( self, text: str, parse_mode: str = "", disable_web_page_preview: bool = None, reply_markup: Union[ "pyrogram.InlineKeyboardMarkup", "pyrogram.ReplyKeyboardMarkup", "pyrogram.ReplyKeyboardRemove", "pyrogram.ForceReply" ] = None ) -> "Message": """Bound method *edit* of :obj:`Message <pyrogram.Message>` Use as a shortcut for: .. code-block:: python client.edit_message_text( chat_id=message.chat.id, message_id=message.message_id, text="hello" ) Example: .. code-block:: python message.edit("hello") Args: text (``str``): New text of the message. parse_mode (``str``, *optional*): Use :obj:`MARKDOWN <pyrogram.ParseMode.MARKDOWN>` or :obj:`HTML <pyrogram.ParseMode.HTML>` if you want Telegram apps to show bold, italic, fixed-width text or inline URLs in your message. Defaults to Markdown. disable_web_page_preview (``bool``, *optional*): Disables link previews for links in this message. reply_markup (:obj:`InlineKeyboardMarkup`, *optional*): An InlineKeyboardMarkup object. Returns: On success, the edited :obj:`Message <pyrogram.Message>` is returned. Raises: :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error. """ return self._client.edit_message_text( chat_id=self.chat.id, message_id=self.message_id, text=text, parse_mode=parse_mode, disable_web_page_preview=disable_web_page_preview, reply_markup=reply_markup )
Bound method *edit* of :obj:`Message <pyrogram.Message>` Use as a shortcut for: .. code-block:: python client.edit_message_text( chat_id=message.chat.id, message_id=message.message_id, text="hello" ) Example: .. code-block:: python message.edit("hello") Args: text (``str``): New text of the message. parse_mode (``str``, *optional*): Use :obj:`MARKDOWN <pyrogram.ParseMode.MARKDOWN>` or :obj:`HTML <pyrogram.ParseMode.HTML>` if you want Telegram apps to show bold, italic, fixed-width text or inline URLs in your message. Defaults to Markdown. disable_web_page_preview (``bool``, *optional*): Disables link previews for links in this message. reply_markup (:obj:`InlineKeyboardMarkup`, *optional*): An InlineKeyboardMarkup object. Returns: On success, the edited :obj:`Message <pyrogram.Message>` is returned. Raises: :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error.
Below is the instruction that describes the task: ### Input: Bound method *edit* of :obj:`Message <pyrogram.Message>` Use as a shortcut for: .. code-block:: python client.edit_message_text( chat_id=message.chat.id, message_id=message.message_id, text="hello" ) Example: .. code-block:: python message.edit("hello") Args: text (``str``): New text of the message. parse_mode (``str``, *optional*): Use :obj:`MARKDOWN <pyrogram.ParseMode.MARKDOWN>` or :obj:`HTML <pyrogram.ParseMode.HTML>` if you want Telegram apps to show bold, italic, fixed-width text or inline URLs in your message. Defaults to Markdown. disable_web_page_preview (``bool``, *optional*): Disables link previews for links in this message. reply_markup (:obj:`InlineKeyboardMarkup`, *optional*): An InlineKeyboardMarkup object. Returns: On success, the edited :obj:`Message <pyrogram.Message>` is returned. Raises: :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error. ### Response: def edit( self, text: str, parse_mode: str = "", disable_web_page_preview: bool = None, reply_markup: Union[ "pyrogram.InlineKeyboardMarkup", "pyrogram.ReplyKeyboardMarkup", "pyrogram.ReplyKeyboardRemove", "pyrogram.ForceReply" ] = None ) -> "Message": """Bound method *edit* of :obj:`Message <pyrogram.Message>` Use as a shortcut for: .. code-block:: python client.edit_message_text( chat_id=message.chat.id, message_id=message.message_id, text="hello" ) Example: .. code-block:: python message.edit("hello") Args: text (``str``): New text of the message. parse_mode (``str``, *optional*): Use :obj:`MARKDOWN <pyrogram.ParseMode.MARKDOWN>` or :obj:`HTML <pyrogram.ParseMode.HTML>` if you want Telegram apps to show bold, italic, fixed-width text or inline URLs in your message. Defaults to Markdown. disable_web_page_preview (``bool``, *optional*): Disables link previews for links in this message. reply_markup (:obj:`InlineKeyboardMarkup`, *optional*): An InlineKeyboardMarkup object.
Returns: On success, the edited :obj:`Message <pyrogram.Message>` is returned. Raises: :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error. """ return self._client.edit_message_text( chat_id=self.chat.id, message_id=self.message_id, text=text, parse_mode=parse_mode, disable_web_page_preview=disable_web_page_preview, reply_markup=reply_markup )
async def parse_vn_results(soup): """ Parse Visual Novel search pages. :param soup: The BS4 class object :return: A list of dictionaries containing a name and id. """ soup = soup.find_all('td', class_='tc1') vns = [] for item in soup[1:]: vns.append({'name': item.string, 'id': item.a.get('href')[1:]}) return vns
Parse Visual Novel search pages. :param soup: The BS4 class object :return: A list of dictionaries containing a name and id.
Below is the instruction that describes the task: ### Input: Parse Visual Novel search pages. :param soup: The BS4 class object :return: A list of dictionaries containing a name and id. ### Response: async def parse_vn_results(soup): """ Parse Visual Novel search pages. :param soup: The BS4 class object :return: A list of dictionaries containing a name and id. """ soup = soup.find_all('td', class_='tc1') vns = [] for item in soup[1:]: vns.append({'name': item.string, 'id': item.a.get('href')[1:]}) return vns
def main(): # pylint: disable=too-many-statements """Main entry point""" parser = argparse.ArgumentParser(prog='mediafire-cli', description=__doc__) parser.add_argument('--debug', dest='debug', action='store_true', default=False, help='Enable debug output') parser.add_argument('--email', dest='email', required=False, default=os.environ.get('MEDIAFIRE_EMAIL', None)) parser.add_argument('--password', dest='password', required=False, default=os.environ.get('MEDIAFIRE_PASSWORD', None)) actions = parser.add_subparsers(title='Actions', dest='action') # http://bugs.python.org/issue9253#msg186387 actions.required = True # ls subparser = actions.add_parser('ls', help=do_ls.__doc__) subparser.add_argument('uri', nargs='?', help='MediaFire URI', default='mf:///') # file-upload subparser = actions.add_parser('file-upload', help=do_file_upload.__doc__) subparser.add_argument('paths', nargs='+', help='Path[s] to upload') subparser.add_argument('dest_uri', help='Destination MediaFire URI') # file-download subparser = actions.add_parser('file-download', help=do_file_download.__doc__) subparser.add_argument('uris', nargs='+', help='MediaFire File URI[s] to download') subparser.add_argument('dest_path', help='Destination path') # file-show subparser = actions.add_parser('file-show', help=do_file_show.__doc__) subparser.add_argument('uris', nargs='+', help='MediaFire File URI[s] to print out') # folder-create subparser = actions.add_parser('folder-create', help=do_folder_create.__doc__) subparser.add_argument('uris', nargs='+', help='MediaFire folder path URI[s]') # resource-delete subparser = actions.add_parser('resource-delete', help=do_resource_delete.__doc__) subparser.add_argument('uris', nargs='+', help='MediaFire resource URI[s]') subparser.add_argument('--purge', help="Purge, don't send to trash", dest="purge", action="store_true", default=False) # file-update-metadata subparser = actions.add_parser('file-update-metadata', help=do_file_update_metadata.__doc__) 
subparser.add_argument('uri', help='MediaFire file URI') subparser.add_argument('--filename', help='Set file name', default=None, dest='filename') subparser.add_argument('--privacy', help='Set file privacy', choices=['public', 'private'], default=None, dest='privacy') subparser.add_argument('--description', help='Set file description', dest='description', default=None) subparser.add_argument('--mtime', help="Set file modification time", dest='mtime', default=None) # folder-update-metadata subparser = actions.add_parser('folder-update-metadata', help=do_folder_update_metadata.__doc__) subparser.add_argument('uri', help='MediaFire folder URI') subparser.add_argument('--foldername', help='Set folder name', default=None, dest='foldername') subparser.add_argument('--privacy', help='Set folder privacy', choices=['public', 'private'], default=None, dest='privacy') subparser.add_argument('--recursive', help='Set privacy recursively', action='store_true', default=None, dest='recursive') subparser.add_argument('--description', help='Set folder description', dest='description', default=None) subparser.add_argument('--mtime', help='Set folder mtime', default=None, dest='mtime') # debug-get-resource subparser = actions.add_parser('debug-get-resource', help=do_debug_get_resource.__doc__) subparser.add_argument('uri', help='MediaFire resource URI', default='mediafire:/', nargs='?') args = parser.parse_args() if args.debug: logger = logging.getLogger() logger.setLevel(logging.DEBUG) logging.getLogger("mediafire.client").setLevel(logging.DEBUG) client = MediaFireClient() if args.email and args.password: client.login(args.email, args.password, app_id=APP_ID) router = { "file-upload": do_file_upload, "file-download": do_file_download, "file-show": do_file_show, "ls": do_ls, "folder-create": do_folder_create, "resource-delete": do_resource_delete, "file-update-metadata": do_file_update_metadata, "folder-update-metadata": do_folder_update_metadata, "debug-get-resource": do_debug_get_resource }
if args.action in router: result = router[args.action](client, args) if not result: sys.exit(1) else: print('Unsupported action: {}'.format(args.action)) sys.exit(1)
Main entry point
Below is the instruction that describes the task: ### Input: Main entry point ### Response: def main(): # pylint: disable=too-many-statements """Main entry point""" parser = argparse.ArgumentParser(prog='mediafire-cli', description=__doc__) parser.add_argument('--debug', dest='debug', action='store_true', default=False, help='Enable debug output') parser.add_argument('--email', dest='email', required=False, default=os.environ.get('MEDIAFIRE_EMAIL', None)) parser.add_argument('--password', dest='password', required=False, default=os.environ.get('MEDIAFIRE_PASSWORD', None)) actions = parser.add_subparsers(title='Actions', dest='action') # http://bugs.python.org/issue9253#msg186387 actions.required = True # ls subparser = actions.add_parser('ls', help=do_ls.__doc__) subparser.add_argument('uri', nargs='?', help='MediaFire URI', default='mf:///') # file-upload subparser = actions.add_parser('file-upload', help=do_file_upload.__doc__) subparser.add_argument('paths', nargs='+', help='Path[s] to upload') subparser.add_argument('dest_uri', help='Destination MediaFire URI') # file-download subparser = actions.add_parser('file-download', help=do_file_download.__doc__) subparser.add_argument('uris', nargs='+', help='MediaFire File URI[s] to download') subparser.add_argument('dest_path', help='Destination path') # file-show subparser = actions.add_parser('file-show', help=do_file_show.__doc__) subparser.add_argument('uris', nargs='+', help='MediaFire File URI[s] to print out') # folder-create subparser = actions.add_parser('folder-create', help=do_folder_create.__doc__) subparser.add_argument('uris', nargs='+', help='MediaFire folder path URI[s]') # resource-delete subparser = actions.add_parser('resource-delete', help=do_resource_delete.__doc__) subparser.add_argument('uris', nargs='+', help='MediaFire resource URI[s]') subparser.add_argument('--purge', help="Purge, don't send to trash", dest="purge", action="store_true", default=False) # file-update-metadata subparser = actions.add_parser('file-update-metadata', help=do_file_update_metadata.__doc__)
subparser.add_argument('uri', help='MediaFire file URI') subparser.add_argument('--filename', help='Set file name', default=None, dest='filename') subparser.add_argument('--privacy', help='Set file privacy', choices=['public', 'private'], default=None, dest='privacy') subparser.add_argument('--description', help='Set file description', dest='description', default=None) subparser.add_argument('--mtime', help="Set file modification time", dest='mtime', default=None) # folder-update-metadata subparser = actions.add_parser('folder-update-metadata', help=do_folder_update_metadata.__doc__) subparser.add_argument('uri', help='MediaFire folder URI') subparser.add_argument('--foldername', help='Set folder name', default=None, dest='foldername') subparser.add_argument('--privacy', help='Set folder privacy', choices=['public', 'private'], default=None, dest='privacy') subparser.add_argument('--recursive', help='Set privacy recursively', action='store_true', default=None, dest='recursive') subparser.add_argument('--description', help='Set folder description', dest='description', default=None) subparser.add_argument('--mtime', help='Set folder mtime', default=None, dest='mtime') # debug-get-resource subparser = actions.add_parser('debug-get-resource', help=do_debug_get_resource.__doc__) subparser.add_argument('uri', help='MediaFire resource URI', default='mediafire:/', nargs='?') args = parser.parse_args() if args.debug: logger = logging.getLogger() logger.setLevel(logging.DEBUG) logging.getLogger("mediafire.client").setLevel(logging.DEBUG) client = MediaFireClient() if args.email and args.password: client.login(args.email, args.password, app_id=APP_ID) router = { "file-upload": do_file_upload, "file-download": do_file_download, "file-show": do_file_show, "ls": do_ls, "folder-create": do_folder_create, "resource-delete": do_resource_delete, "file-update-metadata": do_file_update_metadata,
"folder-update-metadata": do_folder_update_metadata, "debug-get-resource": do_debug_get_resource } if args.action in router: result = router[args.action](client, args) if not result: sys.exit(1) else: print('Unsupported action: {}'.format(args.action)) sys.exit(1)
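The subcommand wiring in `main()` above boils down to argparse subparsers plus a name-to-handler dict. A minimal standalone sketch of that pattern (the handler and URI below are invented for illustration, not part of mediafire-cli):

```python
import argparse

parser = argparse.ArgumentParser(prog='demo-cli')
actions = parser.add_subparsers(title='Actions', dest='action')
actions.required = True  # http://bugs.python.org/issue9253#msg186387

subparser = actions.add_parser('ls')
subparser.add_argument('uri', nargs='?', default='mf:///')

def do_ls(args):
    # Stand-in for a real handler; just echoes the parsed URI
    return 'listing ' + args.uri

router = {'ls': do_ls}
args = parser.parse_args(['ls', 'mf:///docs'])
print(router[args.action](args))  # listing mf:///docs
```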
def check(schema, rev_id, page_id=None, radius=defaults.RADIUS, before=None, window=None): """ Checks the revert status of a revision. With this method, you can determine whether an edit is a 'reverting' edit, was 'reverted' by another edit and/or was 'reverted_to' by another edit. :Parameters: session : :class:`mwapi.Session` An API session to make use of rev_id : int the ID of the revision to check page_id : int the ID of the page the revision occupies (slower if not provided) radius : int a positive integer indicating the maximum number of revisions that can be reverted before : :class:`mwtypes.Timestamp` if set, limits the search for *reverting* revisions to those which were saved before this timestamp window : int if set, limits the search for *reverting* revisions to those which were saved within `window` seconds after the reverted edit rvprop : set( str ) a set of properties to include in revisions :Returns: A triple :class:`mwreverts.Revert` | `None` * reverting -- If this edit reverted other edit(s) * reverted -- If this edit was reverted by another edit * reverted_to -- If this edit was reverted to by another edit :Example: >>> import mwdb >>> import mwreverts.api >>> >>> schema = mwdb.Schema("mysql+pymysql://enwiki.labsdb/enwiki_p" + "?read_default_file=~/replica.my.cnf") >>> >>> def print_revert(revert): ... if revert is None: ... print(None) ... else: ... print(revert.reverting['rev_id'], ... [r['rev_id'] for r in revert.reverteds], ... revert.reverted_to['rev_id']) ... >>> reverting, reverted, reverted_to = \\ ... mwreverts.db.check(schema, 679778587) >>> print_revert(reverting) None >>> print_revert(reverted) 679778743 [679778587] 679742862 >>> print_revert(reverted_to) None """ rev_id = int(rev_id) radius = int(radius) if radius < 1: raise TypeError("invalid radius. Expected a positive integer.")
page_id = int(page_id) if page_id is not None else None before = Timestamp(before) if before is not None else None # If we don't have the page_id, we're going to need to look them up if page_id is None: page_id = get_page_id(schema, rev_id) # Load history and current rev current_and_past_revs = list(n_edits_before( schema, rev_id + 1, page_id, n=radius + 1)) if len(current_and_past_revs) < 1: raise KeyError("Revision {0} not found in page {1}." .format(rev_id, page_id)) current_rev, past_revs = ( current_and_past_revs[-1], # Current rev is the last one returned current_and_past_revs[:-1] # The rest are past revs ) if current_rev.rev_id != rev_id: raise KeyError("Revision {0} not found in page {1}." .format(rev_id, page_id)) if window is not None and before is None: before = Timestamp(current_rev.rev_timestamp) + window # Load future revisions future_revs = list(n_edits_after( schema, rev_id, page_id, n=radius, before=before)) return build_revert_tuple( rev_id, past_revs, current_rev, future_revs, radius)
Checks the revert status of a revision. With this method, you can determine whether an edit is a 'reverting' edit, was 'reverted' by another edit and/or was 'reverted_to' by another edit. :Parameters: session : :class:`mwapi.Session` An API session to make use of rev_id : int the ID of the revision to check page_id : int the ID of the page the revision occupies (slower if not provided) radius : int a positive integer indicating the maximum number of revisions that can be reverted before : :class:`mwtypes.Timestamp` if set, limits the search for *reverting* revisions to those which were saved before this timestamp window : int if set, limits the search for *reverting* revisions to those which were saved within `window` seconds after the reverted edit rvprop : set( str ) a set of properties to include in revisions :Returns: A triple :class:`mwreverts.Revert` | `None` * reverting -- If this edit reverted other edit(s) * reverted -- If this edit was reverted by another edit * reverted_to -- If this edit was reverted to by another edit :Example: >>> import mwdb >>> import mwreverts.api >>> >>> schema = mwdb.Schema("mysql+pymysql://enwiki.labsdb/enwiki_p" + "?read_default_file=~/replica.my.cnf") >>> >>> def print_revert(revert): ... if revert is None: ... print(None) ... else: ... print(revert.reverting['rev_id'], ... [r['rev_id'] for r in revert.reverteds], ... revert.reverted_to['rev_id']) ... >>> reverting, reverted, reverted_to = \\ ... mwreverts.db.check(schema, 679778587) >>> print_revert(reverting) None >>> print_revert(reverted) 679778743 [679778587] 679742862 >>> print_revert(reverted_to) None
Below is the instruction that describes the task: ### Input: Checks the revert status of a revision. With this method, you can determine whether an edit is a 'reverting' edit, was 'reverted' by another edit and/or was 'reverted_to' by another edit. :Parameters: session : :class:`mwapi.Session` An API session to make use of rev_id : int the ID of the revision to check page_id : int the ID of the page the revision occupies (slower if not provided) radius : int a positive integer indicating the maximum number of revisions that can be reverted before : :class:`mwtypes.Timestamp` if set, limits the search for *reverting* revisions to those which were saved before this timestamp window : int if set, limits the search for *reverting* revisions to those which were saved within `window` seconds after the reverted edit rvprop : set( str ) a set of properties to include in revisions :Returns: A triple :class:`mwreverts.Revert` | `None` * reverting -- If this edit reverted other edit(s) * reverted -- If this edit was reverted by another edit * reverted_to -- If this edit was reverted to by another edit :Example: >>> import mwdb >>> import mwreverts.api >>> >>> schema = mwdb.Schema("mysql+pymysql://enwiki.labsdb/enwiki_p" + "?read_default_file=~/replica.my.cnf") >>> >>> def print_revert(revert): ... if revert is None: ... print(None) ... else: ... print(revert.reverting['rev_id'], ... [r['rev_id'] for r in revert.reverteds], ... revert.reverted_to['rev_id']) ... >>> reverting, reverted, reverted_to = \\ ... mwreverts.db.check(schema, 679778587) >>> print_revert(reverting) None >>> print_revert(reverted) 679778743 [679778587] 679742862 >>> print_revert(reverted_to) None ### Response: def check(schema, rev_id, page_id=None, radius=defaults.RADIUS, before=None, window=None): """ Checks the revert status of a revision. With this method, you can determine whether an edit is a 'reverting' edit, was 'reverted' by another edit and/or was 'reverted_to' by another edit.
:Parameters: session : :class:`mwapi.Session` An API session to make use of rev_id : int the ID of the revision to check page_id : int the ID of the page the revision occupies (slower if not provided) radius : int a positive integer indicating the maximum number of revisions that can be reverted before : :class:`mwtypes.Timestamp` if set, limits the search for *reverting* revisions to those which were saved before this timestamp window : int if set, limits the search for *reverting* revisions to those which were saved within `window` seconds after the reverted edit rvprop : set( str ) a set of properties to include in revisions :Returns: A triple :class:`mwreverts.Revert` | `None` * reverting -- If this edit reverted other edit(s) * reverted -- If this edit was reverted by another edit * reverted_to -- If this edit was reverted to by another edit :Example: >>> import mwdb >>> import mwreverts.api >>> >>> schema = mwdb.Schema("mysql+pymysql://enwiki.labsdb/enwiki_p" + "?read_default_file=~/replica.my.cnf") >>> >>> def print_revert(revert): ... if revert is None: ... print(None) ... else: ... print(revert.reverting['rev_id'], ... [r['rev_id'] for r in revert.reverteds], ... revert.reverted_to['rev_id']) ... >>> reverting, reverted, reverted_to = \\ ... mwreverts.db.check(schema, 679778587) >>> print_revert(reverting) None >>> print_revert(reverted) 679778743 [679778587] 679742862 >>> print_revert(reverted_to) None """ rev_id = int(rev_id) radius = int(radius) if radius < 1: raise TypeError("invalid radius. Expected a positive integer.")
page_id = int(page_id) if page_id is not None else None before = Timestamp(before) if before is not None else None # If we don't have the page_id, we're going to need to look them up if page_id is None: page_id = get_page_id(schema, rev_id) # Load history and current rev current_and_past_revs = list(n_edits_before( schema, rev_id + 1, page_id, n=radius + 1)) if len(current_and_past_revs) < 1: raise KeyError("Revision {0} not found in page {1}." .format(rev_id, page_id)) current_rev, past_revs = ( current_and_past_revs[-1], # Current rev is the last one returned current_and_past_revs[:-1] # The rest are past revs ) if current_rev.rev_id != rev_id: raise KeyError("Revision {0} not found in page {1}." .format(rev_id, page_id)) if window is not None and before is None: before = Timestamp(current_rev.rev_timestamp) + window # Load future revisions future_revs = list(n_edits_after( schema, rev_id, page_id, n=radius, before=before)) return build_revert_tuple( rev_id, past_revs, current_rev, future_revs, radius)
def list_courses(args): """ List enrolled courses. @param args: Command-line arguments. @type args: namedtuple """ session = get_session() login(session, args.username, args.password) extractor = CourseraExtractor(session) courses = extractor.list_courses() logging.info('Found %d courses', len(courses)) for course in courses: logging.info(course)
List enrolled courses. @param args: Command-line arguments. @type args: namedtuple
Below is the instruction that describes the task: ### Input: List enrolled courses. @param args: Command-line arguments. @type args: namedtuple ### Response: def list_courses(args): """ List enrolled courses. @param args: Command-line arguments. @type args: namedtuple """ session = get_session() login(session, args.username, args.password) extractor = CourseraExtractor(session) courses = extractor.list_courses() logging.info('Found %d courses', len(courses)) for course in courses: logging.info(course)
def settle_deferred_messages(self, settlement, messages, **kwargs): """Settle messages that have been previously deferred. :param settlement: How the messages are to be settled. This must be a string of one of the following values: 'completed', 'suspended', 'abandoned'. :type settlement: str :param messages: A list of deferred messages to be settled. :type messages: list[~azure.servicebus.common.message.DeferredMessage] Example: .. literalinclude:: ../examples/test_examples.py :start-after: [START settle_deferred_messages_service_bus] :end-before: [END settle_deferred_messages_service_bus] :language: python :dedent: 8 :caption: Settle deferred messages. """ if (self.entity and self.requires_session) or kwargs.get('session'): raise ValueError("Sessionful deferred messages can only be settled within a locked receive session.") if settlement.lower() not in ['completed', 'suspended', 'abandoned']: raise ValueError("Settlement must be one of: 'completed', 'suspended', 'abandoned'") if not messages: raise ValueError("At least one message must be specified.") message = { 'disposition-status': settlement.lower(), 'lock-tokens': types.AMQPArray([m.lock_token for m in messages])} with BaseHandler(self.entity_uri, self.auth_config, debug=self.debug, **kwargs) as handler: return handler._mgmt_request_response( # pylint: disable=protected-access REQUEST_RESPONSE_UPDATE_DISPOSTION_OPERATION, message, mgmt_handlers.default)
Settle messages that have been previously deferred.

:param settlement: How the messages are to be settled. This must be a string
 of one of the following values: 'completed', 'suspended', 'abandoned'.
:type settlement: str
:param messages: A list of deferred messages to be settled.
:type messages: list[~azure.servicebus.common.message.DeferredMessage]

Example:
    .. literalinclude:: ../examples/test_examples.py
        :start-after: [START settle_deferred_messages_service_bus]
        :end-before: [END settle_deferred_messages_service_bus]
        :language: python
        :dedent: 8
        :caption: Settle deferred messages.
Below is the instruction that describes the task:
### Input:
Settle messages that have been previously deferred.

:param settlement: How the messages are to be settled. This must be a string
 of one of the following values: 'completed', 'suspended', 'abandoned'.
:type settlement: str
:param messages: A list of deferred messages to be settled.
:type messages: list[~azure.servicebus.common.message.DeferredMessage]

Example:
    .. literalinclude:: ../examples/test_examples.py
        :start-after: [START settle_deferred_messages_service_bus]
        :end-before: [END settle_deferred_messages_service_bus]
        :language: python
        :dedent: 8
        :caption: Settle deferred messages.
### Response:
def settle_deferred_messages(self, settlement, messages, **kwargs):
    """Settle messages that have been previously deferred.

    :param settlement: How the messages are to be settled. This must be a string
     of one of the following values: 'completed', 'suspended', 'abandoned'.
    :type settlement: str
    :param messages: A list of deferred messages to be settled.
    :type messages: list[~azure.servicebus.common.message.DeferredMessage]

    Example:
        .. literalinclude:: ../examples/test_examples.py
            :start-after: [START settle_deferred_messages_service_bus]
            :end-before: [END settle_deferred_messages_service_bus]
            :language: python
            :dedent: 8
            :caption: Settle deferred messages.

    """
    if (self.entity and self.requires_session) or kwargs.get('session'):
        raise ValueError("Sessionful deferred messages can only be settled within a locked receive session.")
    if settlement.lower() not in ['completed', 'suspended', 'abandoned']:
        raise ValueError("Settlement must be one of: 'completed', 'suspended', 'abandoned'")
    if not messages:
        raise ValueError("At least one message must be specified.")
    message = {
        'disposition-status': settlement.lower(),
        'lock-tokens': types.AMQPArray([m.lock_token for m in messages])}
    with BaseHandler(self.entity_uri, self.auth_config, debug=self.debug, **kwargs) as handler:
        return handler._mgmt_request_response(  # pylint: disable=protected-access
            REQUEST_RESPONSE_UPDATE_DISPOSTION_OPERATION,
            message,
            mgmt_handlers.default)
def check_edge(self, name1, name2):
    '''
    API: check_edge(self, name1, name2)
    Description:
        Return True if edge exists, False otherwise.
    Input:
        name1: name of the source node.
        name2: name of the sink node.
    Return:
        Returns True if edge exists, False otherwise.
    '''
    if self.graph_type is DIRECTED_GRAPH:
        return (name1, name2) in self.edge_attr
    else:
        return ((name1, name2) in self.edge_attr or
                (name2, name1) in self.edge_attr)
API: check_edge(self, name1, name2)
Description:
    Return True if edge exists, False otherwise.
Input:
    name1: name of the source node.
    name2: name of the sink node.
Return:
    Returns True if edge exists, False otherwise.
Below is the instruction that describes the task:
### Input:
API: check_edge(self, name1, name2)
Description:
    Return True if edge exists, False otherwise.
Input:
    name1: name of the source node.
    name2: name of the sink node.
Return:
    Returns True if edge exists, False otherwise.
### Response:
def check_edge(self, name1, name2):
    '''
    API: check_edge(self, name1, name2)
    Description:
        Return True if edge exists, False otherwise.
    Input:
        name1: name of the source node.
        name2: name of the sink node.
    Return:
        Returns True if edge exists, False otherwise.
    '''
    if self.graph_type is DIRECTED_GRAPH:
        return (name1, name2) in self.edge_attr
    else:
        return ((name1, name2) in self.edge_attr or
                (name2, name1) in self.edge_attr)
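The undirected branch of check_edge looks the edge up under both orientations, since an undirected edge may be stored under either key order. A minimal self-contained sketch of that behavior follows; the Graph class, the add_edge helper, and the constant values here are simplified stand-ins for illustration, not the original library's classes:

```python
DIRECTED_GRAPH = 'digraph'
UNDIRECTED_GRAPH = 'graph'

class Graph:
    """Minimal stand-in: edges are stored as keys of an attribute dict."""
    def __init__(self, graph_type=UNDIRECTED_GRAPH):
        self.graph_type = graph_type
        self.edge_attr = {}

    def add_edge(self, name1, name2):
        # Store the edge under the orientation it was given in.
        self.edge_attr[(name1, name2)] = {}

    def check_edge(self, name1, name2):
        if self.graph_type is DIRECTED_GRAPH:
            return (name1, name2) in self.edge_attr
        # Undirected: the edge may have been stored as (a, b) or (b, a).
        return ((name1, name2) in self.edge_attr or
                (name2, name1) in self.edge_attr)
```

With this sketch, an undirected graph answers True for both ('a', 'b') and ('b', 'a') after a single add_edge('a', 'b'), while a directed graph answers True only for the stored orientation.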
def generate_table(self, rows):
    """
    Generates from a list of rows a PrettyTable object.
    """
    table = PrettyTable(**self.kwargs)
    for row in self.rows:
        if len(row[0]) < self.max_row_width:
            appends = self.max_row_width - len(row[0])
            for i in range(1, appends):
                row[0].append("-")
        if row[1] is True:
            self.make_fields_unique(row[0])
            table.field_names = row[0]
        else:
            table.add_row(row[0])
    return table
Generates from a list of rows a PrettyTable object.
Below is the instruction that describes the task:
### Input:
Generates from a list of rows a PrettyTable object.
### Response:
def generate_table(self, rows):
    """
    Generates from a list of rows a PrettyTable object.
    """
    table = PrettyTable(**self.kwargs)
    for row in self.rows:
        if len(row[0]) < self.max_row_width:
            appends = self.max_row_width - len(row[0])
            for i in range(1, appends):
                row[0].append("-")
        if row[1] is True:
            self.make_fields_unique(row[0])
            table.field_names = row[0]
        else:
            table.add_row(row[0])
    return table
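The padding step in generate_table fills short rows with "-" so every row has the same number of cells before it is handed to PrettyTable. That step can be isolated in a standalone sketch; pad_row and its parameters are illustrative names, not part of the original class. Note that the original's range(1, appends) loop stops one cell short of the target width, whereas this sketch pads all the way:

```python
def pad_row(row, max_row_width, filler="-"):
    """Return a copy of row extended with filler cells up to max_row_width."""
    padded = list(row)
    while len(padded) < max_row_width:
        padded.append(filler)
    return padded
```

Rows that already meet or exceed the target width are returned unchanged (as a copy), so the caller's list is never mutated.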
def create_payload(self):
    """Remove ``smart_class_parameter_id`` or ``smart_variable_id``"""
    payload = super(OverrideValue, self).create_payload()
    if hasattr(self, 'smart_class_parameter'):
        del payload['smart_class_parameter_id']
    if hasattr(self, 'smart_variable'):
        del payload['smart_variable_id']
    return payload
Remove ``smart_class_parameter_id`` or ``smart_variable_id``
Below is the instruction that describes the task:
### Input:
Remove ``smart_class_parameter_id`` or ``smart_variable_id``
### Response:
def create_payload(self):
    """Remove ``smart_class_parameter_id`` or ``smart_variable_id``"""
    payload = super(OverrideValue, self).create_payload()
    if hasattr(self, 'smart_class_parameter'):
        del payload['smart_class_parameter_id']
    if hasattr(self, 'smart_variable'):
        del payload['smart_variable_id']
    return payload
def parse_tagInfo_data(self):
    """parses and plots taginfo files"""
    # Find and parse homer taginfo reports
    for f in self.find_log_files('homer/tagInfo', filehandles=True):
        s_name = os.path.basename(f['root'])
        s_name = self.clean_s_name(s_name, f['root'])
        parsed_data = self.parse_tag_info_chrs(f)
        if parsed_data is not None:
            if s_name in self.tagdir_data['taginfo_total']:
                log.debug("Duplicate tag info sample log found! Overwriting: {}".format(s_name))
            self.add_data_source(f, s_name, section='taginfo')
            self.tagdir_data['taginfo_total'][s_name] = parsed_data[0]
            self.tagdir_data['taginfo_total_norm'][s_name] = self.normalize(parsed_data[0])
            self.tagdir_data['taginfo_uniq'][s_name] = parsed_data[1]
            self.tagdir_data['taginfo_uniq_norm'][s_name] = self.normalize(parsed_data[1])

    for f in self.find_log_files('homer/tagInfo', filehandles=True):
        s_name = os.path.basename(f['root'])
        s_name = self.clean_s_name(s_name, f['root'])
        ## collected tag_info data for general stats table and store under 'header'
        parsed_data = self.parse_tag_info(f)
        if parsed_data is not None:
            self.tagdir_data['header'][s_name] = parsed_data

    self.tagdir_data['taginfo_total'] = self.ignore_samples(self.tagdir_data['taginfo_total'])
    self.tagdir_data['taginfo_total_norm'] = self.ignore_samples(self.tagdir_data['taginfo_total_norm'])
    self.tagdir_data['taginfo_uniq'] = self.ignore_samples(self.tagdir_data['taginfo_uniq'])
    self.tagdir_data['taginfo_uniq_norm'] = self.ignore_samples(self.tagdir_data['taginfo_uniq_norm'])

    if len(self.tagdir_data['taginfo_total']) > 0:
        self.add_section(
            name = 'Chromosomal Coverage',
            anchor = 'homer-tagInfo',
            description = 'This plot shows the distribution of tags along chromosomes.',
            helptext = '''This is a good quality control for tag distribution and
                        could be a good indication of large duplications or deletions.''',
            plot = self.tag_info_chart()
        )
parses and plots taginfo files
Below is the instruction that describes the task:
### Input:
parses and plots taginfo files
### Response:
def parse_tagInfo_data(self):
    """parses and plots taginfo files"""
    # Find and parse homer taginfo reports
    for f in self.find_log_files('homer/tagInfo', filehandles=True):
        s_name = os.path.basename(f['root'])
        s_name = self.clean_s_name(s_name, f['root'])
        parsed_data = self.parse_tag_info_chrs(f)
        if parsed_data is not None:
            if s_name in self.tagdir_data['taginfo_total']:
                log.debug("Duplicate tag info sample log found! Overwriting: {}".format(s_name))
            self.add_data_source(f, s_name, section='taginfo')
            self.tagdir_data['taginfo_total'][s_name] = parsed_data[0]
            self.tagdir_data['taginfo_total_norm'][s_name] = self.normalize(parsed_data[0])
            self.tagdir_data['taginfo_uniq'][s_name] = parsed_data[1]
            self.tagdir_data['taginfo_uniq_norm'][s_name] = self.normalize(parsed_data[1])

    for f in self.find_log_files('homer/tagInfo', filehandles=True):
        s_name = os.path.basename(f['root'])
        s_name = self.clean_s_name(s_name, f['root'])
        ## collected tag_info data for general stats table and store under 'header'
        parsed_data = self.parse_tag_info(f)
        if parsed_data is not None:
            self.tagdir_data['header'][s_name] = parsed_data

    self.tagdir_data['taginfo_total'] = self.ignore_samples(self.tagdir_data['taginfo_total'])
    self.tagdir_data['taginfo_total_norm'] = self.ignore_samples(self.tagdir_data['taginfo_total_norm'])
    self.tagdir_data['taginfo_uniq'] = self.ignore_samples(self.tagdir_data['taginfo_uniq'])
    self.tagdir_data['taginfo_uniq_norm'] = self.ignore_samples(self.tagdir_data['taginfo_uniq_norm'])

    if len(self.tagdir_data['taginfo_total']) > 0:
        self.add_section(
            name = 'Chromosomal Coverage',
            anchor = 'homer-tagInfo',
            description = 'This plot shows the distribution of tags along chromosomes.',
            helptext = '''This is a good quality control for tag distribution and
                        could be a good indication of large duplications or deletions.''',
            plot = self.tag_info_chart()
        )
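The parser stores both raw per-chromosome counts and a normalized copy via self.normalize, whose body is not shown in the snippet. A plausible sketch of such a helper, scaling counts to percentages, follows; the function name, signature, and percentage convention are all assumptions for illustration, not the module's actual implementation:

```python
def normalize(counts, total=None):
    """Scale a dict of counts so its values sum to 100 (percentages).

    Hypothetical stand-in for the module's normalize() helper, which is
    not visible in the snippet above.
    """
    if total is None:
        total = sum(counts.values())
    if total == 0:
        # Avoid division by zero for empty samples.
        return {k: 0 for k in counts}
    return {k: 100.0 * v / total for k, v in counts.items()}
```

Keeping both raw and normalized dicts lets the report plot absolute tag counts and per-sample proportions side by side.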
def add(self, *args):
    """
    This function adds strings to the keyboard, while not exceeding row_width.
    E.g. ReplyKeyboardMarkup#add("A", "B", "C") yields the json result {keyboard: [["A"], ["B"], ["C"]]}
    when row_width is set to 1.
    When row_width is set to 2, the following is the result of this function: {keyboard: [["A", "B"], ["C"]]}
    See https://core.telegram.org/bots/api#replykeyboardmarkup
    :param args: KeyboardButton to append to the keyboard
    """
    i = 1
    row = []
    for button in args:
        row.append(button.to_dic())
        if i % self.row_width == 0:
            self.keyboard.append(row)
            row = []
        i += 1
    if len(row) > 0:
        self.keyboard.append(row)
This function adds strings to the keyboard, while not exceeding row_width.
E.g. ReplyKeyboardMarkup#add("A", "B", "C") yields the json result {keyboard: [["A"], ["B"], ["C"]]}
when row_width is set to 1.
When row_width is set to 2, the following is the result of this function: {keyboard: [["A", "B"], ["C"]]}
See https://core.telegram.org/bots/api#replykeyboardmarkup
:param args: KeyboardButton to append to the keyboard
Below is the instruction that describes the task:
### Input:
This function adds strings to the keyboard, while not exceeding row_width.
E.g. ReplyKeyboardMarkup#add("A", "B", "C") yields the json result {keyboard: [["A"], ["B"], ["C"]]}
when row_width is set to 1.
When row_width is set to 2, the following is the result of this function: {keyboard: [["A", "B"], ["C"]]}
See https://core.telegram.org/bots/api#replykeyboardmarkup
:param args: KeyboardButton to append to the keyboard
### Response:
def add(self, *args):
    """
    This function adds strings to the keyboard, while not exceeding row_width.
    E.g. ReplyKeyboardMarkup#add("A", "B", "C") yields the json result {keyboard: [["A"], ["B"], ["C"]]}
    when row_width is set to 1.
    When row_width is set to 2, the following is the result of this function: {keyboard: [["A", "B"], ["C"]]}
    See https://core.telegram.org/bots/api#replykeyboardmarkup
    :param args: KeyboardButton to append to the keyboard
    """
    i = 1
    row = []
    for button in args:
        row.append(button.to_dic())
        if i % self.row_width == 0:
            self.keyboard.append(row)
            row = []
        i += 1
    if len(row) > 0:
        self.keyboard.append(row)
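Stripped of the KeyboardButton/to_dic details, the row-splitting in add is a plain chunking operation: fill a row until it reaches row_width, flush it, and append any leftover partial row at the end. The sketch below reproduces that logic for bare strings; chunk_buttons is an illustrative name, not part of the library:

```python
def chunk_buttons(buttons, row_width):
    """Split a flat sequence of buttons into rows of at most row_width."""
    keyboard = []
    row = []
    for i, button in enumerate(buttons, start=1):
        row.append(button)
        # Flush a full row every row_width buttons.
        if i % row_width == 0:
            keyboard.append(row)
            row = []
    # Append any trailing partial row.
    if row:
        keyboard.append(row)
    return keyboard
```

This matches the docstring's examples: with row_width 1 each button gets its own row, and with row_width 2 the buttons "A", "B", "C" become [["A", "B"], ["C"]].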
def delete_account_invitation(self, account_id, invitation_id, **kwargs):  # noqa: E501
    """Delete a user invitation. # noqa: E501

    An endpoint for deleting an active user invitation which has been sent for a new or an existing user to join the account.
    **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{account-id}/user-invitations/{invitation-id} -H 'Authorization: Bearer API_KEY'`  # noqa: E501

    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass asynchronous=True

    >>> thread = api.delete_account_invitation(account_id, invitation_id, asynchronous=True)
    >>> result = thread.get()

    :param asynchronous bool
    :param str account_id: Account ID. (required)
    :param str invitation_id: The ID of the invitation to be deleted. (required)
    :return: None
             If the method is called asynchronously,
             returns the request thread.
    """
    kwargs['_return_http_data_only'] = True
    if kwargs.get('asynchronous'):
        return self.delete_account_invitation_with_http_info(account_id, invitation_id, **kwargs)  # noqa: E501
    else:
        (data) = self.delete_account_invitation_with_http_info(account_id, invitation_id, **kwargs)  # noqa: E501
        return data
Delete a user invitation. # noqa: E501

An endpoint for deleting an active user invitation which has been sent for a new or an existing user to join the account.
**Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{account-id}/user-invitations/{invitation-id} -H 'Authorization: Bearer API_KEY'`  # noqa: E501

This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass asynchronous=True

>>> thread = api.delete_account_invitation(account_id, invitation_id, asynchronous=True)
>>> result = thread.get()

:param asynchronous bool
:param str account_id: Account ID. (required)
:param str invitation_id: The ID of the invitation to be deleted. (required)
:return: None
         If the method is called asynchronously,
         returns the request thread.
Below is the instruction that describes the task:
### Input:
Delete a user invitation. # noqa: E501

An endpoint for deleting an active user invitation which has been sent for a new or an existing user to join the account.
**Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{account-id}/user-invitations/{invitation-id} -H 'Authorization: Bearer API_KEY'`  # noqa: E501

This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass asynchronous=True

>>> thread = api.delete_account_invitation(account_id, invitation_id, asynchronous=True)
>>> result = thread.get()

:param asynchronous bool
:param str account_id: Account ID. (required)
:param str invitation_id: The ID of the invitation to be deleted. (required)
:return: None
         If the method is called asynchronously,
         returns the request thread.
### Response:
def delete_account_invitation(self, account_id, invitation_id, **kwargs):  # noqa: E501
    """Delete a user invitation. # noqa: E501

    An endpoint for deleting an active user invitation which has been sent for a new or an existing user to join the account.
    **Example usage:** `curl -X DELETE https://api.us-east-1.mbedcloud.com/v3/accounts/{account-id}/user-invitations/{invitation-id} -H 'Authorization: Bearer API_KEY'`  # noqa: E501

    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass asynchronous=True

    >>> thread = api.delete_account_invitation(account_id, invitation_id, asynchronous=True)
    >>> result = thread.get()

    :param asynchronous bool
    :param str account_id: Account ID. (required)
    :param str invitation_id: The ID of the invitation to be deleted. (required)
    :return: None
             If the method is called asynchronously,
             returns the request thread.
    """
    kwargs['_return_http_data_only'] = True
    if kwargs.get('asynchronous'):
        return self.delete_account_invitation_with_http_info(account_id, invitation_id, **kwargs)  # noqa: E501
    else:
        (data) = self.delete_account_invitation_with_http_info(account_id, invitation_id, **kwargs)  # noqa: E501
        return data
def process_request(self, request):
    """Check if user is logged in"""
    assert hasattr(request, 'user')
    if not request.user.is_authenticated():
        path = request.path_info.lstrip('/')
        if not any(m.match(path) for m in EXEMPT_URLS):
            return HttpResponseRedirect(reverse(settings.LOGIN_URL))
Check if user is logged in
Below is the instruction that describes the task:
### Input:
Check if user is logged in
### Response:
def process_request(self, request):
    """Check if user is logged in"""
    assert hasattr(request, 'user')
    if not request.user.is_authenticated():
        path = request.path_info.lstrip('/')
        if not any(m.match(path) for m in EXEMPT_URLS):
            return HttpResponseRedirect(reverse(settings.LOGIN_URL))
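The exempt-URL check in process_request matches the request path, with its leading slash stripped, against a list of precompiled regexes. A self-contained sketch of just that check follows; the patterns and the is_exempt helper are illustrative, since in the middleware EXEMPT_URLS would be built from Django settings:

```python
import re

# Illustrative patterns; the real list is derived from Django settings.
EXEMPT_URLS = [re.compile(p) for p in (r'^login/', r'^static/')]

def is_exempt(path_info):
    """Return True if the request path matches any exempt pattern."""
    # Patterns are anchored at the start, so strip the leading slash first.
    path = path_info.lstrip('/')
    return any(m.match(path) for m in EXEMPT_URLS)
```

Because re.match anchors at the beginning of the string, a pattern such as ^static/ exempts everything under that prefix.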
def rebuild( self ):
    """
    Rebuilds the current item in the scene.
    """
    self.markForRebuild(False)
    self._textData = []

    if ( self.rebuildBlocked() ):
        return

    scene = self.scene()
    if ( not scene ):
        return

    # rebuild a month look
    if ( scene.currentMode() == scene.Mode.Month ):
        self.rebuildMonth()
    elif ( scene.currentMode() in (scene.Mode.Day, scene.Mode.Week) ):
        self.rebuildDay()
Rebuilds the current item in the scene.
Below is the instruction that describes the task:
### Input:
Rebuilds the current item in the scene.
### Response:
def rebuild( self ):
    """
    Rebuilds the current item in the scene.
    """
    self.markForRebuild(False)
    self._textData = []

    if ( self.rebuildBlocked() ):
        return

    scene = self.scene()
    if ( not scene ):
        return

    # rebuild a month look
    if ( scene.currentMode() == scene.Mode.Month ):
        self.rebuildMonth()
    elif ( scene.currentMode() in (scene.Mode.Day, scene.Mode.Week) ):
        self.rebuildDay()
def apply_op(input_layer, operation, *op_args, **op_kwargs):
    """Applies the given operation to this before without adding any summaries.

    Args:
      input_layer: The input layer for this op.
      operation: An operation that takes a tensor and the supplied args.
      *op_args: Extra arguments for operation.
      **op_kwargs: Keyword arguments for the operation.
    Returns:
      A new layer with operation applied.
    """
    return input_layer.with_tensor(
        operation(input_layer.tensor, *op_args, **op_kwargs))
Applies the given operation to this before without adding any summaries.

Args:
  input_layer: The input layer for this op.
  operation: An operation that takes a tensor and the supplied args.
  *op_args: Extra arguments for operation.
  **op_kwargs: Keyword arguments for the operation.
Returns:
  A new layer with operation applied.
Below is the instruction that describes the task:
### Input:
Applies the given operation to this before without adding any summaries.

Args:
  input_layer: The input layer for this op.
  operation: An operation that takes a tensor and the supplied args.
  *op_args: Extra arguments for operation.
  **op_kwargs: Keyword arguments for the operation.
Returns:
  A new layer with operation applied.
### Response:
def apply_op(input_layer, operation, *op_args, **op_kwargs):
    """Applies the given operation to this before without adding any summaries.

    Args:
      input_layer: The input layer for this op.
      operation: An operation that takes a tensor and the supplied args.
      *op_args: Extra arguments for operation.
      **op_kwargs: Keyword arguments for the operation.
    Returns:
      A new layer with operation applied.
    """
    return input_layer.with_tensor(
        operation(input_layer.tensor, *op_args, **op_kwargs))
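apply_op follows an unwrap-apply-rewrap pattern: pull the raw tensor out of the layer, run the operation on it, and wrap the result in a new layer via with_tensor. The pattern can be sketched without the underlying library; this Layer class is a minimal stand-in for the real layer type, not its actual implementation:

```python
class Layer:
    """Minimal stand-in: wraps a value the way a layer wraps a tensor."""
    def __init__(self, tensor):
        self.tensor = tensor

    def with_tensor(self, tensor):
        # Return a new wrapper rather than mutating this one.
        return Layer(tensor)

def apply_op(input_layer, operation, *op_args, **op_kwargs):
    """Unwrap the tensor, apply the operation, and rewrap the result."""
    return input_layer.with_tensor(
        operation(input_layer.tensor, *op_args, **op_kwargs))
```

For example, apply_op(Layer(3), lambda t, k: t * k, 4) yields a new Layer whose tensor is 12, leaving the original layer untouched.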