def remaining_quota(self, remaining_quota): if remaining_quota is None: raise ValueError("Invalid value for `remaining_quota`, must not be `None`") if remaining_quota is not None and remaining_quota < 0: raise ValueError("Invalid value for `remaining_quota`, must be a value great...
Sets the remaining_quota of this ServicePackageMetadata. Current available service package quota. :param remaining_quota: The remaining_quota of this ServicePackageMetadata. :type: int
def get_resource_solvers(self, resource): solvers_classes = [s for s in self.resource_solver_classes if s.can_solve(resource)] if solvers_classes: solvers = [] for solver_class in solvers_classes: if solver_class not in self._resource_solvers_cache: ...
Returns the resource solvers that can solve the given resource. Arguments --------- resource : dataql.resources.Resource An instance of a subclass of ``Resource`` for which we want to get the solver classes that can solve it. Returns ------- list...
def calc_Cmin(mh, mc, Cph, Cpc): Ch = mh*Cph Cc = mc*Cpc return min(Ch, Cc)
r'''Returns the heat capacity rate for the minimum stream having flows `mh` and `mc`, with averaged heat capacities `Cph` and `Cpc`. .. math:: C_c = m_cC_{p,c} C_h = m_h C_{p,h} C_{min} = \min(C_c, C_h) Parameters ---------- mh : float Mass flow rate of hot stream...
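The equations above amount to two multiplications and a `min`; a minimal runnable sketch mirroring the function shown, with made-up stream values for illustration:

```python
def calc_Cmin(mh, mc, Cph, Cpc):
    # Heat capacity rates: C = mass flow rate * averaged heat capacity.
    Ch = mh * Cph
    Cc = mc * Cpc
    # C_min is the smaller of the two stream capacity rates.
    return min(Ch, Cc)

# Hot stream: 1.0 kg/s at 4.0 kJ/(kg*K); cold stream: 2.0 kg/s at 1.0 kJ/(kg*K).
print(calc_Cmin(1.0, 2.0, 4.0, 1.0))  # -> 2.0
```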
def bucket_ops(bid, api=""): try: yield 42 except ClientError as e: code = e.response['Error']['Code'] log.info( "bucket error bucket:%s error:%s", bid, e.response['Error']['Code']) if code == "NoSuchBucket": pass elif code ...
Context manager for dealing with S3 errors in one place. bid: bucket_id in the form account_name:bucket_name
def renew_secret(client, creds, opt): if opt.reuse_token: return seconds = grok_seconds(opt.lease) if not seconds: raise aomi.exceptions.AomiCommand("invalid lease %s" % opt.lease) renew = None if client.version: v_bits = client.version.split('.') if int(v_bits[0]) ==...
Renews a secret. This will occur unless the user has specified on the command line that it is not necessary
def colors_to_materials(colors, count=None): rgba = to_rgba(colors) if util.is_shape(rgba, (4,)) and count is not None: diffuse = rgba.reshape((-1, 4)) index = np.zeros(count, dtype=int) elif util.is_shape(rgba, (-1, 4)): unique, index = grouping.unique_rows(rgba) diffuse ...
Convert a list of colors into a list of unique materials and material indexes. Parameters ----------- colors : (n, 3) or (n, 4) float RGB or RGBA colors count : int Number of entities to apply color to Returns ----------- diffuse : (m, 4) int Colors index : (count...
def label(self, name, value, cluster_ids=None): if cluster_ids is None: cluster_ids = self.cluster_view.selected if not hasattr(cluster_ids, '__len__'): cluster_ids = [cluster_ids] if len(cluster_ids) == 0: return self.cluster_meta.set(name, cluster_id...
Assign a label to clusters. Example: `quality 3`
def create_virtualenv(venv_dir, use_venv_module=True): if not use_venv_module: try: check_call(['virtualenv', venv_dir, '--no-site-packages']) except OSError: raise Exception('You probably don't have virtualenv installed: sudo apt-get install python-virtual...
Creates a new virtualenv in venv_dir. By default, the built-in venv module is used. On older versions of Python, you may set use_venv_module to False to use virtualenv
def to_parfiles(self,prefix): if self.isnull().values.any(): warnings.warn("NaN in par ensemble",PyemuWarning) if self.istransformed: self._back_transform(inplace=True) par_df = self.pst.parameter_data.loc[:, ["parnme","parval1","scale","offset"]].copy() ...
write the parameter ensemble to PEST-style parameter files Parameters ---------- prefix: str file prefix for par files Note ---- this function back-transforms inplace with respect to log10 before writing
def print_long(filename, stat, print_func): size = stat_size(stat) mtime = stat_mtime(stat) file_mtime = time.localtime(mtime) curr_time = time.time() if mtime > (curr_time + SIX_MONTHS) or mtime < (curr_time - SIX_MONTHS): print_func('%6d %s %2d %04d %s' % (size, MONTH[file_mtime[1]], ...
Prints detailed information about the file passed in.
def and_raises(self, *errors): "Expects an error or more to be raised from the given expectation." for error in errors: self.__expect(Expectation.raises, error)
Expects an error or more to be raised from the given expectation.
def set_join_rule(self, room_id, join_rule): content = { "join_rule": join_rule } return self.send_state_event(room_id, "m.room.join_rules", content)
Set the rule for users wishing to join the room. Args: room_id(str): The room to set the rules for. join_rule(str): The chosen rule. One of: ["public", "knock", "invite", "private"]
def _correctIsotopeImpurities(matrix, intensities): correctedIntensities, _ = scipy.optimize.nnls(matrix, intensities) return correctedIntensities
Corrects observed reporter ion intensities for isotope impurities. :param matrix: a matrix (2d nested list) containing numbers, each isobaric channel must be present as a COLUMN. Use maspy.isobar._transposeMatrix() if channels are written in rows. :param intensities: numpy array of observed re...
def _get_stddevs(self, C, stddev_types, mag, num_sites): stddevs = [] for _ in stddev_types: if mag < 7.16: sigma = C['c11'] + C['c12'] * mag elif mag >= 7.16: sigma = C['c13'] stddevs.append(np.zeros(num_sites) + sigma) return ...
Return total standard deviation as for equation 35, page 1021.
def add_device_callback(self, devices, callback): if not devices: return False if not isinstance(devices, (tuple, list)): devices = [devices] for device in devices: device_id = device if isinstance(device, AbodeDevice): device_id = ...
Register a device callback.
def putnotify(self, name, *args): self.queues[name][0].put(*args) self.queues[name][1].set()
Puts data into queue and alerts listeners
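The queue-plus-event pattern behind `putnotify` can be sketched with the standard library; the wrapper class name `NotifyingQueues` and the `register` helper are made up for illustration:

```python
import queue
import threading

class NotifyingQueues:
    """Minimal sketch: each named channel pairs a Queue with an Event."""
    def __init__(self):
        self.queues = {}

    def register(self, name):
        self.queues[name] = (queue.Queue(), threading.Event())

    def putnotify(self, name, item):
        # Put data into the queue, then alert any listener waiting on the event.
        self.queues[name][0].put(item)
        self.queues[name][1].set()

nq = NotifyingQueues()
nq.register('jobs')
nq.putnotify('jobs', 42)
```

A consumer thread would `wait()` on the event, drain the queue, then `clear()` the event.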
def get_user(self, name): users = self.data['users'] for user in users: if user['name'] == name: return user raise KubeConfError("user name not found.")
Get user from kubeconfig.
def alterar(self, id_script_type, type, description): if not is_valid_int_param(id_script_type): raise InvalidParameterError( u'The identifier of Script Type is invalid or was not informed.') script_type_map = dict() script_type_map['type'] = type script_type_...
Change Script Type by the identifier. :param id_script_type: Identifier of the Script Type. Integer value and greater than zero. :param type: Script Type type. String with a minimum 3 and maximum of 40 characters :param description: Script Type description. String with a minimum 3 and maxi...
def save_performance(db, job_id, records): rows = [(job_id, rec['operation'], rec['time_sec'], rec['memory_mb'], int(rec['counts'])) for rec in records] db.insert('performance', 'job_id operation time_sec memory_mb counts'.split(), rows)
Save in the database the performance information about the given job. :param db: a :class:`openquake.server.dbapi.Db` instance :param job_id: a job ID :param records: a list of performance records
def map(self, map_fn, desc=None): if desc is None: desc = getattr(map_fn, '__name__', '') desc = u'map({})'.format(desc) return self.transform(lambda xs: (map_fn(x) for x in xs), desc=desc)
Return a copy of this query, with the values mapped through `map_fn`. Args: map_fn (callable): A callable that takes a single argument and returns a new value. Keyword Args: desc (str): A description of the mapping transform, for use in log message. Defaults to ...
def pyxwriter(self): model = self.Model() if hasattr(self, 'Parameters'): model.parameters = self.Parameters(vars(self)) else: model.parameters = parametertools.Parameters(vars(self)) if hasattr(self, 'Sequences'): model.sequences = self.Sequences(mode...
Update the pyx file.
def exponential_backoff(self): last_service_switch = self._service_starttime if not last_service_switch: return time_since_last_switch = time.time() - last_service_switch if not self._service_rapidstarts: self._service_rapidstarts = 0 minimum_wait = 0.1 * ...
A function that keeps waiting longer and longer the more rapidly it is called. It can be used to increasingly slow down service starts when they keep failing.
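The idea of waiting longer on each rapid restart can be illustrated with a simple capped exponential sequence. This is a sketch of the general technique, not the original implementation; `base`, `factor`, and `cap` are assumed parameters:

```python
import itertools

def backoff_delays(base=0.1, factor=2.0, cap=30.0):
    # Yield successively longer waits: each rapid restart doubles the
    # delay until it hits the cap.
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

delays = list(itertools.islice(backoff_delays(), 5))
print(delays)  # -> [0.1, 0.2, 0.4, 0.8, 1.6]
```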
def create(cls, data, id_=None, **kwargs): from .models import RecordMetadata with db.session.begin_nested(): record = cls(data) before_record_insert.send( current_app._get_current_object(), record=record ) record....
r"""Create a new record instance and store it in the database. #. Send a signal :data:`invenio_records.signals.before_record_insert` with the new record as parameter. #. Validate the new record data. #. Add the new record in the database. #. Send a signal :data:`invenio_re...
def authenticate(self, transport, account_name, password): if not isinstance(transport, ZimbraClientTransport): raise ZimbraClientException('Invalid transport') if util.empty(account_name): raise AuthException('Empty account name')
Authenticates account, if no password given tries to pre-authenticate. @param transport: transport to use for method calls @param account_name: account name @param password: account password @return: AuthToken if authentication succeeded @raise AuthException: if authentication fa...
def generate_page(self, path, template, **kwargs): directory = None if kwargs.get('page'): directory = kwargs['page'].dir path = self._get_dist_path(path, directory=directory) if not path.endswith('.html'): path = path + '.html' if not os.path.isdir(os.path.dirname(path)): os.makedirs(os.path.dirname...
Generate the HTML for a single page. You usually don't need to call this method manually, it is used by a lot of other, more end-user friendly methods. Args: path (str): Where to place the page relative to the root URL. Usually something like "index", "about-me", "projects/example", etc. template (...
def hist(self, by=None, bins=10, **kwds): return self(kind='hist', by=by, bins=bins, **kwds)
Draw one histogram of the DataFrame's columns. A histogram is a representation of the distribution of data. This function groups the values of all given Series in the DataFrame into bins and draws all bins in one :class:`matplotlib.axes.Axes`. This is useful when the DataFrame's Series ...
def undisplayable_info(obj, html=False): "Generate helpful message regarding an undisplayable object" collate = '<tt>collate</tt>' if html else 'collate' info = "For more information, please consult the Composing Data tutorial (http://git.io/vtIQh)" if isinstance(obj, HoloMap): error = "HoloMap ...
Generate helpful message regarding an undisplayable object
def lookup_url(self, url): if type(url) is not str: url = url.encode('utf8') if not url.strip(): raise ValueError("Empty input string.") url_hashes = URL(url).hashes try: list_names = self._lookup_hashes(url_hashes) self.storage.commit() ...
Look up specified URL in Safe Browsing threat lists.
def _memory_profile(with_gc=False): import utool as ut if with_gc: garbage_collect() import guppy hp = guppy.hpy() print('[hpy] Waiting for heap output...') heap_output = hp.heap() print(heap_output) print('[hpy] total heap size: ' + ut.byte_str2(heap_output.size)) ut.util_re...
Helper for memory debugging. Mostly just a namespace where I experiment with guppy and heapy. References: http://stackoverflow.com/questions/2629680/deciding-between-subprocess-multiprocessing-and-thread-in-python Reset Numpy Memory:: %reset out %reset array
def list_bindings_for_vhost(self, vhost): return self._api_get('/api/bindings/{}'.format( urllib.parse.quote_plus(vhost) ))
A list of all bindings in a given virtual host. :param vhost: The vhost name :type vhost: str
def get_field_errors(self, field): identifier = format_html('{0}[\'{1}\']', self.form_name, field.name) errors = self.errors.get(field.html_name, []) return self.error_class([SafeTuple( (identifier, self.field_error_css_classes, '$pristine', '$pristine', 'invalid', e)) for e in error...
Return server side errors. Shall be overridden by derived forms to add their extra errors for AngularJS.
def forw_dfs(self, start, end=None): return list(self.iterdfs(start, end, forward=True))
Returns a list of nodes in some forward DFS order. Starting with the start node the depth first search proceeds along outgoing edges.
def add_device_not_active_callback(self, callback): _LOGGER.debug('Added new callback %s ', callback) self._cb_device_not_active.append(callback)
Register callback to be invoked when a device is not responding.
def children(self, node, relations=None): g = self.get_graph() if node in g: children = list(g.successors(node)) if relations is None: return children else: rset = set(relations) return [c for c in children if len(self.c...
Return all direct children of specified node. Wraps networkx by default. Arguments --------- node: string identifier for node in ontology relations: list of strings list of relation (object property) IDs used to filter
def run(self): states = open(self.states, 'r').read().splitlines() for state in states: url = self.build_url(state) log = "Downloading State < {0} > from < {1} >" logging.info(log.format(state, url)) tmp = self.download(self.output, url, self.overwrite) ...
For each state in the states file, build the URL and download the file
def _scalar_operations(self, axis, scalar, func): if isinstance(scalar, (list, np.ndarray, pandas.Series)): new_index = self.index if axis == 0 else self.columns def list_like_op(df): if axis == 0: df.index = new_index else: ...
Handler for mapping scalar operations across a Manager. Args: axis: The axis index object to execute the function on. scalar: The scalar value to map. func: The function to use on the Manager with the scalar. Returns: A new QueryCompiler with updated dat...
def repo_name(self): ds = [[x.repo_name] for x in self.repos] df = pd.DataFrame(ds, columns=['repository']) return df
Returns a DataFrame of the repo names present in this project directory :return: DataFrame
def _load_from_file(self, filename): if not tf.gfile.Exists(filename): raise ValueError("File %s not found" % filename) with tf.gfile.Open(filename) as f: self._load_from_file_object(f)
Load from a vocab file.
def _do_add_floating_ip_asr1k(self, floating_ip, fixed_ip, vrf, ex_gw_port): vlan = ex_gw_port['hosting_info']['segmentation_id'] hsrp_grp = ex_gw_port[ha.HA_INFO]['group'] LOG.debug("add floating_ip: %(fip)s, fixed_ip: %(fixed_ip)s, " "vrf: %(...
To implement a floating IP, a static IP NAT is configured in the underlying router. ex_gw_port contains data to derive the VLAN associated with the related subnet for the fixed IP. The VLAN in turn is applied to the redundancy parameter for setting the IP NAT.
def autodiscover(): import copy from django.conf import settings from django.utils.importlib import import_module for app in settings.INSTALLED_APPS: before_import_registry = copy.copy(gargoyle._registry) try: import_module('%s.gargoyle' % app) except: gar...
Auto-discover INSTALLED_APPS gargoyle.py modules and fail silently when not present. This forces an import on them to register any gargoyle bits they may want.
def length_curve(obj): if not isinstance(obj, abstract.Curve): raise GeomdlException("Input shape must be an instance of abstract.Curve class") length = 0.0 evalpts = obj.evalpts num_evalpts = len(obj.evalpts) for idx in range(num_evalpts - 1): length += linalg.point_distance(evalpts...
Computes the approximate length of the parametric curve. Uses the following equation to compute the approximate length: .. math:: \\sum_{i=0}^{n-1} \\sqrt{P_{i + 1}^2-P_{i}^2} where :math:`n` is number of evaluated curve points and :math:`P` is the n-dimensional point. :param obj: input cur...
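The summation above is just the total distance between consecutive evaluated points. A self-contained sketch of that chord-length approximation (independent of the geomdl types):

```python
import math

def polyline_length(points):
    # Approximate curve length: sum of distances between consecutive
    # evaluated points, as in the summation formula above.
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

# Points sampled on the straight segment (0,0) -> (3,4): exact length 5.
pts = [(0.0, 0.0), (1.5, 2.0), (3.0, 4.0)]
print(polyline_length(pts))  # -> 5.0
```

For a curved shape the result underestimates the true arc length and improves as the sampling density increases.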
def umount(self, forced=True): for child in self._children: if hasattr(child, "umount"): child.umount(forced)
Umount all mountable distribution points. Defaults to using forced method.
def get_nmr_prize_pool(self, round_num=0, tournament=1): tournaments = self.get_competitions(tournament) tournaments.sort(key=lambda t: t['number']) if round_num == 0: t = tournaments[-1] else: tournaments = [t for t in tournaments if t['number'] == round_num] ...
Get NMR prize pool for the given round and tournament. Args: round_num (int, optional): The round you are interested in, defaults to current round. tournament (int, optional): ID of the tournament, defaults to 1 Returns: decimal.Decimal: prize pool i...
def remove_behaviour(self, behaviour): if not self.has_behaviour(behaviour): raise ValueError("This behaviour is not registered") index = self.behaviours.index(behaviour) self.behaviours[index].kill() self.behaviours.pop(index)
Removes a behaviour from the agent. The behaviour is first killed. Args: behaviour (spade.behaviour.CyclicBehaviour): the behaviour instance to be removed
def simple_state_machine(): from random import random from furious.async import Async number = random() logging.info('Generating a number... %s', number) if number > 0.25: logging.info('Continuing to do stuff.') return Async(target=simple_state_machine) return number
Pick a number; if it is more than some cutoff, continue the chain.
def listEverything(matching=False): pages=pageNames() if matching: pages=[x for x in pages if matching in x] for i,page in enumerate(pages): pages[i]="%s%s (%s)"%(pageFolder(page),page,getPageType(page)) print("\n".join(sorted(pages)))
Prints every page in the project to the console. Args: matching (str, optional): if given, only return names with this string in it
def _generate_examples(self, images_dir_path, csv_path=None, csv_usage=None): if csv_path: with tf.io.gfile.GFile(csv_path) as csv_f: reader = csv.DictReader(csv_f) data = [(row["image"], int(row["level"])) for row in reader if csv_usage is None or row["Usage"] ...
Yields Example instances from given CSV. Args: images_dir_path: path to dir in which images are stored. csv_path: optional, path to csv file with two columns: name of image and label. If not provided, just scan image directory, don't set labels. csv_usage: optional, subset of examples fro...
def _deregister_config_file(self, key): state = self.__load_state() if 'remove_configs' not in state: state['remove_configs'] = {} state['remove_configs'][key] = (state['config_files'].pop(key)) self.__dump_state(state)
Deregister a previously registered config file. The caller should ensure that it was previously registered.
def solve(self, solver=None, solverparameters=None): if self.F is None: raise Exception("Relaxation is not generated yet. Call " "'SdpRelaxation.get_relaxation' first") solve_sdp(self, solver, solverparameters)
Call a solver on the SDP relaxation. Upon successful solution, it returns the primal and dual objective values along with the solution matrices. It also sets these values in the `sdpRelaxation` object, along with some status information. :param sdpRelaxation: The SDP relaxation to be so...
def nan_empty(self, col: str): try: self.df[col] = self.df[col].replace('', nan) self.ok("Filled empty values with nan in column " + col) except Exception as e: self.err(e, "Can not fill empty values with nan")
Fill empty values with NaN values :param col: name of the colum :type col: str :example: ``ds.nan_empty("mycol")``
def set_size(self, w, h): self.attributes['width'] = str(w) self.attributes['height'] = str(h)
Sets the rectangle size. Args: w (int): width of the rectangle h (int): height of the rectangle
def get(self, request, *args, **kwargs): serializer_class = self.get_serializer_class() context = self.get_serializer_context() services = [] for service_type in SERVICES.keys(): services.append( serializer_class( object(), ...
Return a list of Open311 services.
def list_servers(backend, socket=DEFAULT_SOCKET_URL, objectify=False): ha_conn = _get_conn(socket) ha_cmd = haproxy.cmds.listServers(backend=backend) return ha_conn.sendCmd(ha_cmd, objectify=objectify)
List servers in haproxy backend. backend haproxy backend socket haproxy stats socket, default ``/var/run/haproxy.sock`` CLI Example: .. code-block:: bash salt '*' haproxy.list_servers mysql
def trim_seqs(input_seqs, trim_len, left_trim_len): logger = logging.getLogger(__name__) okseqs = 0 totseqs = 0 if trim_len < -1: raise ValueError("Invalid trim_len: %d" % trim_len) for label, seq in input_seqs: totseqs += 1 if trim_len == -1: okseqs += 1 ...
Trim FASTA sequences to specified length. Parameters ---------- input_seqs : iterable of (str, str) The list of input sequences in (label, sequence) format trim_len : int Sequence trimming length. Specify a value of -1 to disable trimming. left_trim_len : int Sequence trimmi...
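The trimming rules described above (`-1` disables trimming, too-short sequences are dropped, an optional left trim) can be sketched as a small generator. This is a minimal illustration of the documented behavior, not the library's implementation; the sequence-count bookkeeping is omitted:

```python
def trim_seqs(input_seqs, trim_len, left_trim_len=0):
    # Yield (label, sequence) pairs trimmed to trim_len characters.
    if trim_len < -1:
        raise ValueError("Invalid trim_len: %d" % trim_len)
    for label, seq in input_seqs:
        if trim_len == -1:
            # Trimming disabled; only the left trim applies.
            yield label, seq[left_trim_len:]
        elif len(seq) >= trim_len:
            yield label, seq[left_trim_len:trim_len]
        # Sequences shorter than trim_len are dropped.

seqs = [(">a", "ACGTACGT"), (">b", "ACG")]
print(list(trim_seqs(seqs, 5)))  # -> [('>a', 'ACGTA')]
```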
def getPagePixmap(doc, pno, matrix = None, colorspace = csRGB, clip = None, alpha = True): return doc[pno].getPixmap(matrix = matrix, colorspace = colorspace, clip = clip, alpha = alpha)
Create pixmap of document page by page number. Notes: Convenience function calling page.getPixmap. Args: pno: (int) page number matrix: Matrix for transformation (default: Identity). colorspace: (str/Colorspace) rgb, rgb, gray - case ignored, default csRGB. clip: (irect-...
def varexp(line): ip = get_ipython() funcname, name = line.split() try: import guiqwt.pyplot as pyplot except ImportError: import matplotlib.pyplot as pyplot __fig__ = pyplot.figure(); __items__ = getattr(pyplot, funcname[2:])(ip.user_ns[name]) pyplot.show() del __fig__, __items__
Spyder's variable explorer magic. Used to generate plots, histograms and images of the variables displayed on it.
def write_attribute_adj_list(self, path): att_mappings = self.get_attribute_mappings() with open(path, mode="w") as file: for k, v in att_mappings.items(): print("{} {}".format(k, " ".join(str(e) for e in v)), file=file)
Write the bipartite attribute graph to a file. :param str path: Path to the output file.
def tab_insert(self, e): cursor = min(self.l_buffer.point, len(self.l_buffer.line_buffer)) ws = ' ' * (self.tabstop - (cursor % self.tabstop)) self.insert_text(ws) self.finalize()
Insert a tab character.
def time_segments_aggregate(X, interval, time_column, method=['mean']): if isinstance(X, np.ndarray): X = pd.DataFrame(X) X = X.sort_values(time_column).set_index(time_column) if isinstance(method, str): method = [method] start_ts = X.index.values[0] max_ts = X.index.values[-1] v...
Aggregate values over fixed length time segments.
def fabrics(self): if not self.__fabrics: self.__fabrics = Fabrics(self.__connection) return self.__fabrics
Gets the Fabrics API client. Returns: Fabrics:
def real(prompt=None, empty=False): s = _prompt_input(prompt) if empty and not s: return None else: try: return float(s) except ValueError: return real(prompt=prompt, empty=empty)
Prompt a real number. Parameters ---------- prompt : str, optional Use an alternative prompt. empty : bool, optional Allow an empty response. Returns ------- float or None A float if the user entered a valid real number. None if the user pressed only Enter a...
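The retry-until-valid-float behavior can also be written iteratively, which avoids the unbounded recursion of the version above. This sketch injects the input source via a `read` parameter (an assumption made here so the loop is testable without a terminal):

```python
def real(read=input, empty=False):
    # Keep asking until the response parses as a float;
    # optionally accept an empty response as None.
    while True:
        s = read()
        if empty and not s:
            return None
        try:
            return float(s)
        except ValueError:
            continue  # invalid input: ask again

# Simulate a user who first types garbage, then a valid number.
responses = iter(['not a number', '3.5'])
value = real(read=lambda: next(responses))
print(value)  # -> 3.5
```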
def form_valid(self, form): instance = form.save(commit=False) if hasattr(self.request, 'user'): instance.user = self.request.user if settings.CONTACT_FORM_FILTER_MESSAGE: instance.message = bleach.clean( instance.message, tags=settings.CON...
Called when the form is valid.
def is_same_as(df, df_to_compare, **kwargs): try: tm.assert_frame_equal(df, df_to_compare, **kwargs) except AssertionError as exc: six.raise_from(AssertionError("DataFrames are not equal"), exc) return df
Assert that two pandas dataframes are equal Parameters ========== df : pandas DataFrame df_to_compare : pandas DataFrame **kwargs : dict keyword arguments passed through to pandas' ``assert_frame_equal`` Returns ======= df : DataFrame
def _get_default_retry_params(): default = getattr(_thread_local_settings, 'default_retry_params', None) if default is None or not default.belong_to_current_request(): return RetryParams() else: return copy.copy(default)
Get default RetryParams for current request and current thread. Returns: A new instance of the default RetryParams.
def normalize(text, lowercase=True, collapse=True, latinize=False, ascii=False, encoding_default='utf-8', encoding=None, replace_categories=UNICODE_CATEGORIES): text = stringify(text, encoding_default=encoding_default, encoding=encoding) if text is None: ...
The main normalization function for text. This will take a string and apply a set of transformations to it so that it can be processed more easily afterwards. Arguments: * ``lowercase``: not very mysterious. * ``collapse``: replace multiple whitespace-like characters with a single whitespace. Th...
def decode(self, key): key = BucketKey.decode(key) if key.uuid != self.uuid: raise ValueError("%s is not a bucket corresponding to this limit" % key) return key.params
Given a bucket key, compute the parameters used to compute that key. Note: Deprecated. Use BucketKey.decode() instead. :param key: The bucket key. Note that the UUID must match the UUID of this limit; a ValueError will be raised if this is not the case...
def preloop(self): if not self.parser: self.stdout.write("Welcome to imagemounter {version}".format(version=__version__)) self.stdout.write("\n") self.parser = ImageParser() for p in self.args.paths: self.onecmd('disk "{}"'.format(p))
If the parser is not already set, load it.
def install_requirements(self, path, index=None): cmd = 'install -r {0}'.format(path) if index: cmd = 'install --index-url {0} -r {1}'.format(index, path) self.pip(cmd)
Install packages from a requirements.txt file. Args: path (str): The path to the requirements file. index (str): The URL for a pypi index to use.
def get_entry_compact_text_repr(entry, entries): text = get_shortest_text_value(entry) if text is not None: return text else: sources = get_sourced_from(entry) if sources is not None: texts = [] for source in sources: source_entry = entries[sou...
If the entry has a text value, return that. If the entry has a sourced_from value, return the text value of the source. Otherwise, return None.
def write(self, data): _complain_ifclosed(self.closed) if self.__encoding: self.f.write(data.encode(self.__encoding, self.__errors)) return len(data) else: return self.f.write(data)
Write ``data`` to the file. :type data: bytes :param data: the data to be written to the file :rtype: int :return: the number of bytes written
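The encode-before-write wrapper above can be sketched against an in-memory binary stream. The class name `EncodingWriter` is made up; note the original's semantics of returning the character count (not the byte count) when an encoding is set, which the docstring glosses over:

```python
import io

class EncodingWriter:
    """Sketch: encode str payloads before writing to a binary file object."""
    def __init__(self, f, encoding=None, errors='strict'):
        self.f = f
        self.encoding = encoding
        self.errors = errors

    def write(self, data):
        if self.encoding:
            self.f.write(data.encode(self.encoding, self.errors))
            return len(data)  # character count, mirroring the original
        return self.f.write(data)

buf = io.BytesIO()
w = EncodingWriter(buf, encoding='utf-8')
n = w.write('héllo')  # 5 characters, 6 UTF-8 bytes
```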
def clean(cls, path): for pth in os.listdir(path): pth = os.path.abspath(os.path.join(path, pth)) if os.path.isdir(pth): logger.debug('Removing directory %s' % pth) shutil.rmtree(pth) else: logger.debug('Removing file %s' % pth)...
Clean up all the files in a provided path
def match(self, name): if (self.ns + self.name).startswith(name): return True for alias in self.aliases: if (self.ns + alias).startswith(name): return True return False
Compare an argument string to the task name.
def run(self): self._listening_sock, self._address = ( bind_domain_socket(self._address) if self._uds_path else bind_tcp_socket(self._address)) if self._ssl: certfile = os.path.join(os.path.dirname(__file__), 'server.pem') self._listening_sock ...
Begin serving. Returns the bound port, or 0 for domain socket.
def alterar(self, id_divisiondc, name): if not is_valid_int_param(id_divisiondc): raise InvalidParameterError( u'The identifier of Division Dc is invalid or was not informed.') url = 'divisiondc/' + str(id_divisiondc) + '/' division_dc_map = dict() division_dc...
Change Division Dc by the identifier. :param id_divisiondc: Identifier of the Division Dc. Integer value and greater than zero. :param name: Division Dc name. String with a minimum 2 and maximum of 80 characters :return: None :raise InvalidParameterError: The identifier of Divisi...
def _connect_lxd(spec): return { 'method': 'lxd', 'kwargs': { 'container': spec.remote_addr(), 'python_path': spec.python_path(), 'lxc_path': spec.mitogen_lxc_path(), 'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(), 'remote_...
Return ContextService arguments for an LXD container connection.
def parse_lint_result(lint_result_path, manifest_path): unused_string_pattern = re.compile('The resource `R\.string\.([^`]+)` appears to be unused') mainfest_string_refs = get_manifest_string_refs(manifest_path) root = etree.parse(lint_result_path).getroot() issues = [] for issue_xml in root.findall...
Parse lint-result.xml and create Issue for every problem found except unused strings referenced in AndroidManifest
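The unused-string pattern from the parser above can be exercised on its own; the resource name in the sample message is invented:

```python
import re

# Same regex as in parse_lint_result: captures the string resource name.
pattern = re.compile(r'The resource `R\.string\.([^`]+)` appears to be unused')

message = "The resource `R.string.greeting` appears to be unused"
match = pattern.search(message)
print(match.group(1))  # -> greeting
```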
def add_authorizers(self, authorizers): self.security_definitions = self.security_definitions or {} for authorizer_name, authorizer in authorizers.items(): self.security_definitions[authorizer_name] = authorizer.generate_swagger()
Add Authorizer definitions to the securityDefinitions part of Swagger. :param list authorizers: List of Authorizer configurations which get translated to securityDefinitions.
def delete_dcnm_out_nwk(self, tenant_id, fw_dict, is_fw_virt=False): tenant_name = fw_dict.get('tenant_name') ret = self._delete_service_nwk(tenant_id, tenant_name, 'out') if ret: res = fw_const.DCNM_OUT_NETWORK_DEL_SUCCESS LOG.info("out Service network deleted for tenant...
Delete the DCNM OUT network and update the result.
def reportDeprecatedWorkerNameUsage(message, stacklevel=None, filename=None, lineno=None): if filename is None: if stacklevel is None: stacklevel = 3 else: stacklevel += 2 warnings.warn(DeprecatedWorkerNameWarning(message), None, st...
Hook that is run when an old API name is used. :param stacklevel: stack level relative to the caller's frame. Defaults to caller of the caller of this function.
def copy_attr(f1, f2): copyit = lambda x: not hasattr(f2, x) and x[:10] == 'PACKAGING_' if f1._tags: pattrs = [tag for tag in f1._tags if copyit(tag)] for attr in pattrs: f2.Tag(attr, f1.GetTag(attr))
Copies the special packaging file attributes from f1 to f2.
def elliconstraint(self, x, cfac=1e8, tough=True, cond=1e6): N = len(x) f = sum(cond**(np.arange(N)[-1::-1] / (N - 1)) * x**2) cvals = (x[0] + 1, x[0] + 1 + 100 * x[1], x[0] + 1 - 100 * x[1]) if tough: f += cfac * sum(max(0, c) for c in cvals...
ellipsoid test objective function with "constraints"
def skull_strip(dset,suffix='_ns',prefix=None,unifize=True): if prefix is None: prefix = nl.suffix(dset,suffix) unifize_dset = nl.suffix(dset,'_u') cmd = bet2 if bet2 else 'bet2' if unifize: info = nl.dset_info(dset) if info is None: nl.notify('Error: could not read info 
Use bet to strip the skull from the given anatomy.
def new_reply(cls, thread, user, content): msg = cls.objects.create(thread=thread, sender=user, content=content) thread.userthread_set.exclude(user=user).update(deleted=False, unread=True) thread.userthread_set.filter(user=user).update(deleted=False, unread=False) message_sent.send(sende...
Create a new reply for an existing Thread. Mark thread as unread for all other participants, and mark thread as read by replier.
def _guess_cmd_name(self, cmd): if cmd[0] == 'zcat' and 'bowtie' in cmd: return 'bowtie' if cmd[0] == 'samtools': return ' '.join(cmd[0:2]) if cmd[0] == 'java': jars = [s for s in cmd if '.jar' in s] return os.path.basename(jars[0].replace('.jar', ...
Manually guess some known command names, where we can do a better job than the automatic parsing.
def oneImageNLF(img, img2=None, signal=None): x, y, weights, signal = calcNLF(img, img2, signal) _, fn, _ = _evaluate(x, y, weights) return fn, signal
Estimate the NLF from one or two images of the same kind
def _is_simple_type(value): return isinstance(value, (six.string_types, int, float, bool))
Returns True if the given parameter value is an instance of str, int, float, or bool.
def qnm_freq_decay(f_0, tau, decay): q_0 = pi * f_0 * tau alpha = 1. / decay alpha_sq = 1. / decay / decay q_sq = (alpha_sq + 4*q_0*q_0 + alpha*numpy.sqrt(alpha_sq + 16*q_0*q_0)) / 4. return numpy.sqrt(q_sq) / pi / tau
Return the frequency at which the amplitude of the ringdown falls to decay of the peak amplitude. Parameters ---------- f_0 : float The ringdown-frequency, which gives the peak amplitude. tau : float The damping time of the sinusoid. decay: float The fraction of the peak...
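The closed-form expression above can be checked numerically with a standalone copy of the function; the test values for `f_0`, `tau`, and `decay` are arbitrary:

```python
from math import pi, sqrt

def qnm_freq_decay(f_0, tau, decay):
    # Same closed-form expression as the function above.
    q_0 = pi * f_0 * tau
    alpha = 1.0 / decay
    alpha_sq = alpha * alpha
    q_sq = (alpha_sq + 4 * q_0 * q_0
            + alpha * sqrt(alpha_sq + 16 * q_0 * q_0)) / 4.0
    return sqrt(q_sq) / pi / tau

# Sanity checks: a very large decay value sends alpha -> 0, so the result
# tends to f_0 itself; smaller decay fractions push the frequency above f_0.
near_f0 = qnm_freq_decay(100.0, 0.1, 1e6)
above_f0 = qnm_freq_decay(100.0, 0.1, 0.5)
```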
def _maximization(self, X): for i in range(self.k): resp = np.expand_dims(self.responsibility[:, i], axis=1) mean = (resp * X).sum(axis=0) / resp.sum() covariance = (X - mean).T.dot((X - mean) * resp) / resp.sum() self.parameters[i]["mean"], self.parameters[i]["co...
Maximization (M) step: update the mean, covariance, and prior of each Gaussian component from the current responsibilities.
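A hedged standalone sketch of this GMM maximization step, assuming ``X`` is an ``(n_samples, n_features)`` array and ``responsibility`` is an ``(n_samples, k)`` array of posterior component weights produced by the E-step (the function and key names here are illustrative, not the original class API):

```python
import numpy as np

def maximization_step(X, responsibility):
    n_samples, k = responsibility.shape
    params, priors = [], np.empty(k)
    for i in range(k):
        resp = np.expand_dims(responsibility[:, i], axis=1)  # shape (n, 1)
        # Responsibility-weighted mean of the samples
        mean = (resp * X).sum(axis=0) / resp.sum()
        # Responsibility-weighted covariance around that mean
        covariance = (X - mean).T.dot((X - mean) * resp) / resp.sum()
        params.append({"mean": mean, "cov": covariance})
        # Prior = fraction of total responsibility mass in component i
        priors[i] = resp.sum() / n_samples
    return params, priors
```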
def get_info(self, symbol): sym = self._get_symbol_info(symbol) if not sym: raise NoDataFoundException("Symbol does not exist.") ret = {} ret['chunk_count'] = sym[CHUNK_COUNT] ret['len'] = sym[LEN] ret['appended_rows'] = sym[APPEND_COUNT] ret['metadata...
Returns information about the symbol, in a dictionary Parameters ---------- symbol: str the symbol for the given item in the DB Returns ------- dictionary
def _DownloadScript(self, url, dest_dir): if url.startswith(r'gs://'): url = re.sub('^gs://', 'https://storage.googleapis.com/', url) return self._DownloadAuthUrl(url, dest_dir) header = r'http[s]?://' domain = r'storage\.googleapis\.com' bucket = r'(?P<bucket>[a-z0-9][-_.a-z0-9]*[a-z0-9])' ...
Download the contents of the URL to the destination. Args: url: string, the URL to download. dest_dir: string, the path to a directory for storing metadata scripts. Returns: string, the path to the file storing the metadata script.
def GetSqlValuesTuple(self, trip_id): result = [] for fn in self._SQL_FIELD_NAMES: if fn == 'trip_id': result.append(trip_id) else: result.append(getattr(self, fn)) return tuple(result)
Return a tuple that outputs a row of _FIELD_NAMES to be written to a SQLite database. Arguments: trip_id: The trip_id of the trip to which this StopTime corresponds. It must be provided, as it is not stored in StopTime.
def flush(self, using=None, **kwargs): return self._get_connection(using).indices.flush(index=self._name, **kwargs)
Performs a flush operation on the index. Any additional keyword arguments will be passed to ``Elasticsearch.indices.flush`` unchanged.
def device_info(self): return { 'family': self.family, 'platform': self.platform, 'os_type': self.os_type, 'os_version': self.os_version, 'udi': self.udi, 'driver_name': self.driver.platform, 'mode': self.mode, 'is_c...
Return device info dict.
def save_snapshot(self, si, logger, vm_uuid, snapshot_name, save_memory): vm = self.pyvmomi_service.find_by_uuid(si, vm_uuid) snapshot_path_to_be_created = SaveSnapshotCommand._get_snapshot_name_to_be_created(snapshot_name, vm) save_vm_memory_to_snapshot = SaveSnapshotCommand._get_save_vm_memory...
Creates a snapshot of the current state of the virtual machine :param vim.ServiceInstance si: py_vmomi service instance :type si: vim.ServiceInstance :param logger: Logger :type logger: cloudshell.core.logger.qs_logger.get_qs_logger :param vm_uuid: UUID of the virtual machine ...
def render_pep440(vcs): if vcs is None: return None tags = vcs.split('-') if len(tags) == 1: return tags[0] else: return tags[0] + '+' + '.'.join(tags[1:])
Convert a git release tag into a form that is PEP 440 compliant.
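The conversion is small enough to show whole; the input is assumed to be ``git describe``-style output such as ``"1.2.3-4-gabc1234"``:

```python
def render_pep440(vcs):
    if vcs is None:
        return None
    tags = vcs.split('-')
    if len(tags) == 1:
        # Exact release tag, already PEP 440 compliant
        return tags[0]
    # Commits-since-tag count and hash become a local version segment
    return tags[0] + '+' + '.'.join(tags[1:])

# render_pep440("1.2.3")            -> "1.2.3"
# render_pep440("1.2.3-4-gabc1234") -> "1.2.3+4.gabc1234"
```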
def plan_scripts(self): if not self.__plan_scripts: self.__plan_scripts = PlanScripts(self.__connection) return self.__plan_scripts
Gets the Plan Scripts API client. Returns: PlanScripts:
def method(self, returns, **parameter_types): @wrapt.decorator def type_check_wrapper(method, instance, args, kwargs): if instance is not None: raise Exception("Instance shouldn't be set.") parameter_names = inspect.getargspec(method).args defaults = i...
Syntactic sugar for registering a method Example: >>> registry = Registry() >>> @registry.method(returns=int, x=int, y=int) ... def add(x, y): ... return x + y :param returns: The method's return type :type returns: type :param param...
def write_sources_file(): file_content = ( 'schemes: ' 'https://github.com/chriskempson/base16-schemes-source.git\n' 'templates: ' 'https://github.com/chriskempson/base16-templates-source.git' ) file_path = rel_to_cwd('sources.yaml') with open(file_path, 'w') as file_: ...
Write a sources.yaml file to current working dir.
def serialize(node, stream=None, Dumper=Dumper, **kwds): return serialize_all([node], stream, Dumper=Dumper, **kwds)
Serialize a representation tree into a YAML stream. If stream is None, return the produced string instead.
def resolve_variables(variables, context, provider): for variable in variables: variable.resolve(context, provider)
Given a list of variables, resolve all of them. Args: variables (list of :class:`stacker.variables.Variable`): list of variables context (:class:`stacker.context.Context`): stacker context provider (:class:`stacker.provider.base.BaseProvider`): subclass of the base p...