def cumulative_gaps_to(self, when: datetime.datetime) -> datetime.timedelta:
    gaps = self.gaps()
    return gaps.cumulative_time_to(when)
Return the cumulative time within our gaps, up to ``when``.
def parsetypes(dtype): return [dtype[i].name.strip('1234567890').rstrip('ing') for i in range(len(dtype))]
Parse the types from a structured numpy dtype object. Return list of string representations of types from a structured numpy dtype object, e.g. ['int', 'float', 'str']. Used by :func:`tabular.io.saveSV` to write out type information in the header. **Parameters** **dtype** : numpy dtype object Structured numpy dtype object to parse. **Returns** **out** : list of strings List of strings corresponding to numpy types:: [dtype[i].name.strip('1234567890').rstrip('ing') \ for i in range(len(dtype))]
def owner(self, pathobj):
    stat = self.stat(pathobj)
    if not stat.is_dir:
        return stat.modified_by
    else:
        return 'nobody'
Returns the file owner. This makes little sense for Artifactory, but to be consistent with pathlib we return ``modified_by`` instead, if available.
def get_html_column_count(html_string):
    try:
        from bs4 import BeautifulSoup
    except ImportError:
        print("ERROR: You must have BeautifulSoup to use html2data")
        return
    soup = BeautifulSoup(html_string, 'html.parser')
    table = soup.find('table')
    if not table:
        return 0
    column_counts = []
    trs = table.findAll('tr')
    if len(trs) == 0:
        return 0
    for tr in range(len(trs)):
        if tr == 0:
            tds = trs[tr].findAll('th')
            if len(tds) == 0:
                tds = trs[tr].findAll('td')
        else:
            tds = trs[tr].findAll('td')
        count = 0
        for td in tds:
            if td.has_attr('colspan'):
                count += int(td['colspan'])
            else:
                count += 1
        column_counts.append(count)
    return max(column_counts)
Gets the number of columns in an html table.

Parameters
----------
html_string : str

Returns
-------
int
    The number of columns in the table
def selection(self, index):
    self.update()
    self.isActiveWindow()
    self._parent.delete_btn.setEnabled(True)
Update selected row.
def register_variable_compilation(self, path, compilation_cbk, listclass):
    self.compilations_variable[path] = {
        'callback': compilation_cbk,
        'listclass': listclass
    }
Register given compilation method for variable on given path. :param str path: JPath for given variable. :param callable compilation_cbk: Compilation callback to be called. :param class listclass: List class to use for lists.
def is_correctness_available(self, question_id):
    response = self.get_response(question_id)
    if response.is_answered():
        item = self._get_item(response.get_item_id())
        return item.is_correctness_available_for_response(response)
    return False
Is a measure of correctness available for the question?
def handle_profile_form(form):
    form.process(formdata=request.form)
    if form.validate_on_submit():
        email_changed = False
        with db.session.begin_nested():
            current_userprofile.username = form.username.data
            current_userprofile.full_name = form.full_name.data
            db.session.add(current_userprofile)
            if current_app.config['USERPROFILES_EMAIL_ENABLED'] and \
                    form.email.data != current_user.email:
                current_user.email = form.email.data
                current_user.confirmed_at = None
                db.session.add(current_user)
                email_changed = True
        db.session.commit()
        if email_changed:
            send_confirmation_instructions(current_user)
            flash(_('Profile was updated. We have sent a verification '
                    'email to %(email)s. Please check it.',
                    email=current_user.email),
                  category='success')
        else:
            flash(_('Profile was updated.'), category='success')
Handle profile update form.
def addcols(self, desc, dminfo={}, addtoparent=True):
    tdesc = desc
    if 'name' in desc:
        import casacore.tables.tableutil as pt
        if len(desc) == 2 and 'desc' in desc:
            tdesc = pt.maketabdesc(desc)
        elif 'valueType' in desc:
            cd = pt.makecoldesc(desc['name'], desc)
            tdesc = pt.maketabdesc(cd)
    self._addcols(tdesc, dminfo, addtoparent)
    self._makerow()
Add one or more columns. Columns can always be added to a normal table. They can also be added to a reference table and optionally to its parent table. `desc` contains a description of the column(s) to be added. It can be given in three ways: - a dict created by :func:`maketabdesc`. In this way multiple columns can be added. - a dict created by :func:`makescacoldesc`, :func:`makearrcoldesc`, or :func:`makecoldesc`. In this way a single column can be added. - a dict created by :func:`getcoldesc`. The key 'name' containing the column name has to be defined in such a dict. `dminfo` can be used to provide detailed data manager info to tell how the column(s) have to be stored. The dminfo of an existing column can be obtained using method :func:`getdminfo`. `addtoparent` defines if the column should also be added to the parent table in case the current table is a reference table (result of selection). If True, it will be added to the parent if it does not exist yet. For example, add a column using the same data manager type as another column:: coldmi = t.getdminfo('colarrtsm') # get dminfo of existing column coldmi["NAME"] = 'tsm2' # give it a unique name t.addcols (maketabdesc(makearrcoldesc("colarrtsm2",0., ndim=2)), coldmi)
def incremental_value(self, slip_moment, mmax, mag_value, bbar, dbar):
    delta_m = mmax - mag_value
    a_3 = self._get_a3(bbar, dbar, slip_moment, mmax)
    return a_3 * bbar * (np.exp(bbar * delta_m) - 1.0) * (delta_m > 0.0)
Returns the incremental rate with Mmax = Mag_value
def apply_transformation(self, structure):
    sga = SpacegroupAnalyzer(structure, symprec=self.symprec,
                             angle_tolerance=self.angle_tolerance)
    return sga.get_conventional_standard_structure(
        international_monoclinic=self.international_monoclinic)
Returns the conventional standard structure for the given structure. Args: structure: A structure Returns: The same structure in a conventional standard setting
def accept_kwargs(func):
    def wrapped(val, **kwargs):
        try:
            return func(val, **kwargs)
        except TypeError:
            return func(val)
    return wrapped
Wrap a function that may not accept kwargs so they are accepted The output function will always have call signature of :code:`func(val, **kwargs)`, whereas the original function may have call signatures of :code:`func(val)` or :code:`func(val, **kwargs)`. In the case of the former, rather than erroring, kwargs are just ignored. This method is called on serializer/deserializer function; these functions always receive kwargs from serialize, but by using this, the original functions may simply take a single value.
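A minimal, self-contained sketch of the wrapper in action; the two lambdas are illustrative stand-ins for serializer/deserializer functions:

```python
def accept_kwargs(func):
    """Wrap func so stray keyword arguments are silently dropped."""
    def wrapped(val, **kwargs):
        try:
            return func(val, **kwargs)
        except TypeError:
            # func only takes val: retry without the keyword arguments.
            return func(val)
    return wrapped

# A function that ignores kwargs vs. one that accepts them.
plain = accept_kwargs(lambda v: v * 2)
fancy = accept_kwargs(lambda v, scale=1: v * scale)

print(plain(3, scale=10))  # kwargs ignored -> 6
print(fancy(3, scale=10))  # kwargs used    -> 30
```

Note the caveat: if `func` itself raises a `TypeError` internally, the wrapper will also silently retry without kwargs.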
def export_3_column(stimfunction, filename, temporal_resolution=100.0):
    stim_counter = 0
    event_counter = 0
    while stim_counter < stimfunction.shape[0]:
        if stimfunction[stim_counter, 0] != 0:
            event_onset = str(stim_counter / temporal_resolution)
            weight = str(stimfunction[stim_counter, 0])
            event_duration = 0
            # Use 'and' rather than bitwise '&' (which binds tighter than
            # '!='), and bounds-check before indexing to avoid running off
            # the end of the array.
            while (stim_counter < stimfunction.shape[0]
                   and stimfunction[stim_counter, 0] != 0):
                event_duration = event_duration + 1
                stim_counter = stim_counter + 1
            event_duration = str(event_duration / temporal_resolution)
            with open(filename, "a") as file:
                file.write(event_onset + '\t' + event_duration + '\t' +
                           weight + '\n')
            event_counter = event_counter + 1
        stim_counter = stim_counter + 1
Output a tab separated three column timing file This produces a three column tab separated text file, with the three columns representing onset time (s), event duration (s) and weight, respectively. Useful if you want to run the simulated data through FEAT analyses. In a way, this is the reverse of generate_stimfunction Parameters ---------- stimfunction : timepoint by 1 array The stimulus function describing the time course of events. For instance output from generate_stimfunction. filename : str The name of the three column text file to be output temporal_resolution : float How many elements per second are you modeling with the stimfunction?
def check_dimensions(self, dataset):
    required_ctx = TestCtx(
        BaseCheck.HIGH,
        'All geophysical variables are time-series incomplete feature types')
    message = ('{} must be a valid timeseries feature type. '
               'It must have dimensions of (timeSeries, time).')
    message += ' And all coordinates must have dimensions of (timeSeries)'
    for variable in util.get_geophysical_variables(dataset):
        is_valid = util.is_multi_timeseries_incomplete(dataset, variable)
        required_ctx.assert_true(
            is_valid,
            message.format(variable)
        )
    return required_ctx.to_result()
Checks that the feature types of this dataset are consistent with a time series incomplete dataset :param netCDF4.Dataset dataset: An open netCDF dataset
def __request(self, url, params):
    log.debug('request: %s %s' % (url, str(params)))
    try:
        response = urlopen(url, urlencode(params)).read()
        if params.get('action') != 'data':
            log.debug('response: %s' % response)
        if params.get('action', None) == 'data':
            return response
        else:
            return json.loads(response)
    except TypeError as e:
        log.exception('request error')
        raise ServerError(e)
    except IOError as e:
        log.error('request error: %s' % str(e))
        raise ServerError(e)
Make an HTTP POST request to the server and return JSON data. :param url: HTTP URL to object. :returns: Response as dict.
def share(
    self,
    share_id: str,
    token: dict = None,
    augment: bool = False,
    prot: str = "https",
) -> dict:
    share_url = "{}://v1.{}.isogeo.com/shares/{}".format(
        prot, self.api_url, share_id
    )
    share_req = self.get(
        share_url, headers=self.header, proxies=self.proxies, verify=self.ssl
    )
    checker.check_api_response(share_req)
    share = share_req.json()
    if augment:
        share = utils.share_extender(
            share, self.search(whole_share=1, share=share_id).get("results")
        )
    return share
Get information about a specific share and its applications. :param str token: API auth token :param str share_id: share UUID :param bool augment: option to improve API response by adding some tags on the fly. :param str prot: https [DEFAULT] or http (use it only for dev and tracking needs).
def dissolved(self, concs):
    new_concs = concs.copy()
    for r in self.rxns:
        if r.has_precipitates(self.substances):
            net_stoich = np.asarray(r.net_stoich(self.substances))
            s_net, s_stoich, s_idx = r.precipitate_stoich(self.substances)
            new_concs -= new_concs[s_idx] / s_stoich * net_stoich
    return new_concs
Return dissolved concentrations
def versions_from_archive(self):
    py_vers = versions_from_trove(self.classifiers)
    return [ver for ver in py_vers if ver != self.unsupported_version]
Return Python versions extracted from trove classifiers.
def _node(self, tax_id):
    s = select([self.nodes.c.parent_id, self.nodes.c.rank],
               self.nodes.c.tax_id == tax_id)
    res = s.execute()
    output = res.fetchone()
    if not output:
        msg = 'value "{}" not found in nodes.tax_id'.format(tax_id)
        raise ValueError(msg)
    else:
        return output
Returns parent_id, rank. FIXME: expand return rank to include custom 'below' ranks built when get_lineage is called
def load_data(self, *args, **kwargs):
    argpos = {
        v['extras']['argpos']: k
        for k, v in self.parameters.iteritems()
        if 'argpos' in v['extras']
    }
    data = dict(
        {argpos[n]: a for n, a in enumerate(args)},
        **kwargs
    )
    return self.apply_units_to_cache(data)
Collects positional and keyword arguments into `data` and applies units. :return: data
def dump(cls, obj, file, protocol=0):
    cls.save_option_state = True
    pickle.dump(obj, file, protocol=protocol)
    cls.save_option_state = False
Equivalent to pickle.dump except that the HoloViews option tree is saved appropriately.
def get_grid(start, end, nsteps=100):
    step = (end - start) / float(nsteps)
    return [start + i * step for i in xrange(nsteps + 1)]
Generates an equally spaced list of float values with nsteps+1 values, beginning with start and ending with end. :param start: the start value of the generated list. :type start: float :param end: the end value of the generated list. :type end: float :param nsteps: optional number of steps (default=100); the generated list contains nsteps+1 values. :type nsteps: int
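For illustration, a Python 3 equivalent (`range` instead of `xrange`) and a quick check of the endpoints and spacing:

```python
def get_grid(start, end, nsteps=100):
    """Return nsteps+1 equally spaced floats from start to end inclusive."""
    step = (end - start) / float(nsteps)
    return [start + i * step for i in range(nsteps + 1)]

print(get_grid(0.0, 1.0, nsteps=4))  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(len(get_grid(0.0, 10.0)))      # 101
```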
def SpawnProcess(popen_args, passwd=None):
    if passwd is not None:
        p = subprocess.Popen(popen_args, stdin=subprocess.PIPE)
        p.communicate(input=passwd)
    else:
        p = subprocess.Popen(popen_args)
    p.wait()
    if p.returncode != 0:
        raise ErrorDuringRepacking(" ".join(popen_args))
Spawns a process.
def register_signals(self):
    for index in self.indexes:
        if index.object_type:
            self._connect_signal(index)
Register signals for all indexes.
def provision_system_user(items, database_name, overwrite=False, clear=False,
                          skip_user_check=False):
    from hfos.provisions.base import provisionList
    from hfos.database import objectmodels

    if overwrite is True:
        hfoslog('Refusing to overwrite system user!', lvl=warn,
                emitter='PROVISIONS')
        overwrite = False

    system_user_count = objectmodels['user'].count({'name': 'System'})
    if system_user_count == 0 or clear is False:
        provisionList(Users, 'user', overwrite, clear, skip_user_check=True)
        hfoslog('Provisioning: Users: Done.', emitter="PROVISIONS")
    else:
        hfoslog('System user already present.', lvl=warn,
                emitter='PROVISIONS')
Provision a system user
def trigger(self, username, project, branch, **build_params):
    method = 'POST'
    url = ('/project/{username}/{project}/tree/{branch}?'
           'circle-token={token}'.format(
               username=username,
               project=project,
               branch=branch,
               token=self.client.api_token))
    # **build_params is always a dict (never None), so test for emptiness.
    if build_params:
        json_data = self.client.request(method, url,
                                        build_parameters=build_params)
    else:
        json_data = self.client.request(method, url)
    return json_data
Trigger new build and return a summary of the build.
def run(analysis, path=None, name=None, info=None, **kwargs):
    kwargs.update({
        'analysis': analysis,
        'path': path,
        'name': name,
        'info': info,
    })
    main(**kwargs)
Run a single analysis. :param Analysis analysis: Analysis class to run. :param str path: Path of analysis. Can be `__file__`. :param str name: Name of the analysis. :param dict info: Optional entries are ``version``, ``title``, ``readme``, ... :param dict static: Map[url regex, root-folder] to serve static content.
def compare(self, other, t_threshold=1e-3, r_threshold=1e-3):
    return (compute_rmsd(self.t, other.t) < t_threshold and
            compute_rmsd(self.r, other.r) < r_threshold)
Compare two transformations The RMSD values of the rotation matrices and the translation vectors are computed. The return value is True when the RMSD values are below the thresholds, i.e. when the two transformations are almost identical.
def point_on_line(point, line_start, line_end, accuracy=50.):
    length = dist(line_start, line_end)
    ds = length / float(accuracy)
    if -ds < (dist(line_start, point) + dist(point, line_end) - length) < ds:
        return True
    return False
Checks whether a point lies on a line The function checks whether the point "point" (P) lies on the line defined by its starting point line_start (A) and its end point line_end (B). This is done by comparing the distance of [AB] with the sum of the distances [AP] and [PB]. If the difference is smaller than [AB] / accuracy, the point P is assumed to be on the line. By increasing the value of accuracy (the default is 50), the tolerance is decreased. :param point: Point to be checked (tuple with x any y coordinate) :param line_start: Starting point of the line (tuple with x any y coordinate) :param line_end: End point of the line (tuple with x any y coordinate) :param accuracy: The higher this value, the less distance is tolerated :return: True if the point is one the line, False if not
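A runnable sketch of the distance-sum test; the `dist` helper is not shown in the source, so a Euclidean implementation is assumed here:

```python
import math

def dist(p, q):
    # Assumed helper: Euclidean distance between two (x, y) tuples.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def point_on_line(point, line_start, line_end, accuracy=50.):
    # P is on [AB] when |AP| + |PB| differs from |AB| by less than |AB|/accuracy.
    length = dist(line_start, line_end)
    ds = length / float(accuracy)
    return -ds < (dist(line_start, point) + dist(point, line_end) - length) < ds

print(point_on_line((1, 1), (0, 0), (2, 2)))  # True  (on the diagonal)
print(point_on_line((2, 0), (0, 0), (2, 2)))  # False (off the diagonal)
```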
def calc_secondary_parameters(self):
    self.a = self.x / (2. * self.d**.5)
    self.b = self.u / (2. * self.d**.5)
Determine the values of the secondary parameters `a` and `b`.
def catch_result(task_func):
    @functools.wraps(task_func, assigned=available_attrs(task_func))
    def dec(*args, **kwargs):
        orig_stdout = sys.stdout
        sys.stdout = content = StringIO()
        task_response = task_func(*args, **kwargs)
        sys.stdout = orig_stdout
        content.seek(0)
        task_response['stdout'] = content.read()
        return task_response
    return dec
Catch printed result from Celery Task and return it in task response
def _warnCount(self, warnings, warningCount=None):
    if not warningCount:
        warningCount = {}
    for warning in warnings:
        wID = warning["warning_id"]
        if not warningCount.get(wID):
            warningCount[wID] = {}
            warningCount[wID]["count"] = 1
            warningCount[wID]["message"] = warning.get("warning_message")
        else:
            warningCount[wID]["count"] += 1
    return warningCount
Calculate the count of each warning, being given a list of them. @param warnings: L{list} of L{dict}s that come from L{tools.parsePyLintWarnings}. @param warningCount: A L{dict} produced by this method previously, if you are adding to the warnings. @return: L{dict} of L{dict}s for the warnings.
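A standalone sketch of the counting logic (`self` removed; the input dicts mimic the shape produced by `tools.parsePyLintWarnings`):

```python
def warn_count(warnings, warning_count=None):
    """Tally warnings by warning_id, optionally extending a prior tally."""
    if not warning_count:
        warning_count = {}
    for warning in warnings:
        w_id = warning["warning_id"]
        if not warning_count.get(w_id):
            warning_count[w_id] = {
                "count": 1,
                "message": warning.get("warning_message"),
            }
        else:
            warning_count[w_id]["count"] += 1
    return warning_count

warnings = [
    {"warning_id": "W0611", "warning_message": "unused import"},
    {"warning_id": "W0611", "warning_message": "unused import"},
    {"warning_id": "C0103", "warning_message": "invalid name"},
]
counts = warn_count(warnings)
print(counts["W0611"]["count"])  # 2
print(counts["C0103"]["count"])  # 1
```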
def workspaces(self, index=None):
    c = self.centralWidget()
    if index is None:
        return (c.widget(n) for n in range(c.count()))
    else:
        return c.widget(index)
Return a generator over all workspace instances.
def delete_network_precommit(self, context):
    segments = context.network_segments
    for segment in segments:
        if not self.check_segment(segment):
            return
        vlan_id = segment.get(api.SEGMENTATION_ID)
        if not vlan_id:
            return
        self.ucsm_db.delete_vlan_entry(vlan_id)
        if any([True for ip, ucsm in CONF.ml2_cisco_ucsm.ucsms.items()
                if ucsm.sp_template_list]):
            self.ucsm_db.delete_sp_template_for_vlan(vlan_id)
        if any([True for ip, ucsm in CONF.ml2_cisco_ucsm.ucsms.items()
                if ucsm.vnic_template_list]):
            self.ucsm_db.delete_vnic_template_for_vlan(vlan_id)
Delete entry corresponding to Network's VLAN in the DB.
def backprop(self, input_data, targets, cache=None):
    if cache is not None:
        activations = cache
    else:
        activations = self.feed_forward(input_data, prediction=False)
    if activations.shape != targets.shape:
        raise ValueError(
            'Activations (shape = %s) and targets (shape = %s) '
            'are different sizes' % (activations.shape, targets.shape))
    delta = substract_matrix(activations, targets)
    nan_to_zeros(delta, delta)
    df_W = linalg.dot(input_data, delta, transa='T')
    df_b = matrix_sum_out_axis(delta, 0)
    df_input = linalg.dot(delta, self.W, transb='T')
    if self.l1_penalty_weight:
        df_W += self.l1_penalty_weight * sign(self.W)
    if self.l2_penalty_weight:
        df_W += self.l2_penalty_weight * self.W
    return (df_W, df_b), df_input
Backpropagate through the logistic layer. **Parameters:** input_data : ``GPUArray`` Inpute data to compute activations for. targets : ``GPUArray`` The target values of the units. cache : list of ``GPUArray`` Cache obtained from forward pass. If the cache is provided, then the activations are not recalculated. **Returns:** gradients : tuple of ``GPUArray`` Gradients with respect to the weights and biases in the form ``(df_weights, df_biases)``. df_input : ``GPUArray`` Gradients with respect to the input.
def _keyboard_access(self, element):
    if not element.has_attribute('tabindex'):
        tag = element.get_tag_name()
        if (tag == 'A') and (not element.has_attribute('href')):
            element.set_attribute('tabindex', '0')
        elif (
            (tag != 'A')
            and (tag != 'INPUT')
            and (tag != 'BUTTON')
            and (tag != 'SELECT')
            and (tag != 'TEXTAREA')
        ):
            element.set_attribute('tabindex', '0')
Provide keyboard access for the element, if it does not already have it. :param element: The element. :type element: hatemile.util.html.htmldomelement.HTMLDOMElement
def get_list(self, key, pipeline=False):
    if pipeline:
        return self._pipeline.lrange(key, 0, -1)
    return self._db.lrange(key, 0, -1)
Get all the value in the list stored at key. Args: key (str): Key where the list is stored. pipeline (bool): True, start a transaction block. Default false. Returns: list: values in the list ordered by list index
def add_stream(self, name=None, tpld_id=None, state=XenaStreamState.enabled):
    stream = XenaStream(parent=self,
                        index='{}/{}'.format(self.index, len(self.streams)),
                        name=name)
    stream._create()
    tpld_id = tpld_id if tpld_id else XenaStream.next_tpld_id
    stream.set_attributes(ps_comment='"{}"'.format(stream.name),
                          ps_tpldid=tpld_id)
    XenaStream.next_tpld_id = max(XenaStream.next_tpld_id + 1, tpld_id + 1)
    stream.set_state(state)
    return stream
Add stream. :param name: stream description. :param tpld_id: TPLD ID. If None, a unique value will be set. :param state: new stream state. :type state: xenamanager.xena_stream.XenaStreamState :return: newly created stream. :rtype: xenamanager.xena_stream.XenaStream
def trk50(msg):
    d = hex2bin(data(msg))
    if d[11] == '0':
        return None
    sign = int(d[12])
    value = bin2int(d[13:23])
    if sign:
        value = value - 1024
    trk = value * 90.0 / 512.0
    if trk < 0:
        trk = 360 + trk
    return round(trk, 3)
True track angle, BDS 5,0 message Args: msg (String): 28 bytes hexadecimal message (BDS50) string Returns: float: angle in degrees to true north (from 0 to 360)
def get_version(self, state=None, date=None):
    version_model = self._meta._version_model
    q = version_model.objects.filter(object_id=self.pk)
    if state:
        q = version_model.normal.filter(object_id=self.pk, state=state)
    if date:
        q = q.filter(date_published__lte=date)
    q = q.order_by('-date_published')
    results = q[:1]
    if results:
        return results[0]
    return None
Get a particular version of an item :param state: The state you want to get. :param date: Get a version that was published before or on this date.
def exception(self, url, exception): return (time.time() + self.ttl, self.factory(url))
What to return when there's an exception.
def remove_label(self, label):
    self._logger.info('Removing label "{}"'.format(label))
    count = self._matches[constants.LABEL_FIELDNAME].value_counts().get(
        label, 0)
    self._matches = self._matches[
        self._matches[constants.LABEL_FIELDNAME] != label]
    self._logger.info('Removed {} labelled results'.format(count))
Removes all results rows associated with `label`. :param label: label to filter results on :type label: `str`
def delete(self):
    if not self._sync:
        del self._buffer
    shutil.rmtree(self.cache_dir)
Delete the write buffer and cache directory.
def _GetDatabaseAccount(self):
    try:
        database_account = self._GetDatabaseAccountStub(self.DefaultEndpoint)
        return database_account
    except errors.HTTPFailure:
        for location_name in self.PreferredLocations:
            locational_endpoint = _GlobalEndpointManager.GetLocationalEndpoint(
                self.DefaultEndpoint, location_name)
            try:
                database_account = self._GetDatabaseAccountStub(
                    locational_endpoint)
                return database_account
            except errors.HTTPFailure:
                pass
    return None
Gets the database account, first by using the default endpoint; if that fails, uses the endpoints for the preferred locations, in the order in which they are specified, to get the database account.
def get_corpus(args):
    tokenizer = get_tokenizer(args)
    return tacl.Corpus(args.corpus, tokenizer)
Returns a `tacl.Corpus`.
def get_all_locators(self):
    locators = []
    self._lock.acquire()
    try:
        for reference in self._references:
            locators.append(reference.get_locator())
    finally:
        self._lock.release()
    return locators
Gets locators for all registered component references in this reference map. :return: a list with component locators.
def fromxlsx(filename, sheet=None, range_string=None, row_offset=0,
             column_offset=0, **kwargs):
    return XLSXView(filename, sheet=sheet, range_string=range_string,
                    row_offset=row_offset, column_offset=column_offset,
                    **kwargs)
Extract a table from a sheet in an Excel .xlsx file. N.B., the sheet name is case sensitive. The `sheet` argument can be omitted, in which case the first sheet in the workbook is used by default. The `range_string` argument can be used to provide a range string specifying a range of cells to extract. The `row_offset` and `column_offset` arguments can be used to specify offsets. Any other keyword arguments are passed through to :func:`openpyxl.load_workbook()`.
def format_raw_script(raw_script):
    if six.PY2:
        script = ' '.join(arg.decode('utf-8') for arg in raw_script)
    else:
        script = ' '.join(raw_script)
    return script.strip()
Creates single script from a list of script parts. :type raw_script: [basestring] :rtype: basestring
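On Python 3 the function reduces to a join-and-strip; a minimal sketch without the `six.PY2` branch (which only decodes bytes first):

```python
def format_raw_script(raw_script):
    """Join script parts into one command string, trimming outer whitespace."""
    return ' '.join(raw_script).strip()

print(format_raw_script(['git', 'brnch']))   # 'git brnch'
print(format_raw_script(['', 'ls', '-la']))  # 'ls -la' (leading blank stripped)
```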
def from_dict(self, mapdict):
    self.name_format = mapdict["identifier"]
    try:
        self._fro = dict(
            [(k.lower(), v) for k, v in mapdict["fro"].items()])
    except KeyError:
        pass
    try:
        self._to = dict([(k.lower(), v) for k, v in mapdict["to"].items()])
    except KeyError:
        pass
    if self._fro is None and self._to is None:
        raise ConverterError("Missing specifications")
    if self._fro is None or self._to is None:
        self.adjust()
Import the attribute map from a dictionary :param mapdict: The dictionary
def select_tmpltbank_class(curr_exe):
    exe_to_class_map = {
        'pycbc_geom_nonspinbank': PyCBCTmpltbankExecutable,
        'pycbc_aligned_stoch_bank': PyCBCTmpltbankExecutable
    }
    try:
        return exe_to_class_map[curr_exe]
    except KeyError:
        raise NotImplementedError(
            "No job class exists for executable %s, exiting" % curr_exe)
This function returns a class that is appropriate for setting up template bank jobs within workflow. Parameters ---------- curr_exe : string The name of the executable to be used for generating template banks. Returns -------- exe_class : Sub-class of pycbc.workflow.core.Executable that holds utility functions appropriate for the given executable. Instances of the class ('jobs') **must** have methods * job.create_node() and * job.get_valid_times(ifo, )
def save(self):
    resp = self.r_session.put(
        self.document_url,
        data=self.json(),
        headers={'Content-Type': 'application/json'}
    )
    resp.raise_for_status()
Saves changes made to the locally cached SecurityDocument object's data structures to the remote database.
def _generate_username(self):
    while True:
        username = str(uuid.uuid4())
        username = username.replace('-', '')
        username = username[:-2]
        try:
            User.objects.get(username=username)
        except User.DoesNotExist:
            return username
Generate a unique username
def dummy_batch(m: nn.Module, size: tuple = (64, 64)) -> Tensor:
    "Create a dummy batch to go through `m` with `size`."
    ch_in = in_channels(m)
    return one_param(m).new(1, ch_in, *size).requires_grad_(False).uniform_(-1., 1.)
Create a dummy batch to go through `m` with `size`.
def hcf(num1, num2):
    if num1 > num2:
        smaller = num2
    else:
        smaller = num1
    # Search downwards so the first common factor found is the highest one;
    # an ascending search would always return 1.
    for i in range(smaller, 0, -1):
        if (num1 % i == 0) and (num2 % i == 0):
            return i
Find the highest common factor of 2 numbers :type num1: number :param num1: The first number to find the hcf for :type num2: number :param num2: The second number to find the hcf for
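For illustration, a self-contained version using a descending search (so the first common factor found is the highest) together with a quick check:

```python
def hcf(num1, num2):
    """Highest common factor of two positive integers, by descending search."""
    smaller = min(num1, num2)
    # The first i that divides both numbers is the largest such divisor.
    for i in range(smaller, 0, -1):
        if num1 % i == 0 and num2 % i == 0:
            return i

print(hcf(12, 18))  # 6
print(hcf(7, 13))   # 1 (coprime)
```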
def emulate_seek(fd, offset, chunk=CHUNK):
    # Compare against the `chunk` argument, not the module-level CHUNK
    # constant, so a caller-supplied chunk size is honoured.
    while chunk and offset > chunk:
        fd.read(chunk)
        offset -= chunk
    fd.read(offset)
Emulates a seek on an object that does not support it The seek is emulated by reading and discarding bytes until specified offset is reached. The ``offset`` argument is in bytes from start of file. The ``chunk`` argument can be used to adjust the size of the chunks in which read operation is performed. Larger chunks will reach the offset in less reads and cost less CPU but use more memory. Conversely, smaller chunks will be more memory efficient, but cause more read operations and more CPU usage. If chunk is set to None, then the ``offset`` amount of bytes is read at once. This is fastest but depending on the offset size, may use a lot of memory. Default chunk size is controlled by the ``fsend.rangewrapper.CHUNK`` constant, which is 8KB by default. This function has no return value.
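A runnable sketch (the loop here compares against the `chunk` argument, which the docstring suggests is the intent; the original compared against the module-level `CHUNK` constant):

```python
import io

CHUNK = 8 * 1024  # module-level default chunk size, 8KB

def emulate_seek(fd, offset, chunk=CHUNK):
    # Read and discard chunk-sized blocks until less than one chunk remains,
    # then read the remainder. chunk=None reads all `offset` bytes at once.
    while chunk and offset > chunk:
        fd.read(chunk)
        offset -= chunk
    fd.read(offset)

fd = io.BytesIO(b'0123456789')
emulate_seek(fd, 4, chunk=3)
print(fd.read())  # b'456789'
```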
def config_to_options(config):
    class Options:
        host = config.get('smtp', 'host', raw=True)
        port = config.getint('smtp', 'port')
        to_addr = config.get('mail', 'to_addr', raw=True)
        from_addr = config.get('mail', 'from_addr', raw=True)
        subject = config.get('mail', 'subject', raw=True)
        encoding = config.get('mail', 'encoding', raw=True)
        username = config.get('auth', 'username')
    opts = Options()
    # Assign the interpolated strings; the original evaluated these
    # expressions but discarded the results.
    opts.from_addr = opts.from_addr % {'host': opts.host, 'prog': 'notify'}
    opts.to_addr = opts.to_addr % {'host': opts.host, 'prog': 'notify'}
    return opts
Convert ConfigParser instance to argparse.Namespace Parameters ---------- config : object A ConfigParser instance Returns ------- object An argparse.Namespace instance
def print_version():
    click.echo("Versions:")
    click.secho(
        "CLI Package Version: %(version)s"
        % {"version": click.style(get_cli_version(), bold=True)}
    )
    click.secho(
        "API Package Version: %(version)s"
        % {"version": click.style(get_api_version(), bold=True)}
    )
Print the environment versions.
def get_interface_mode(args):
    calculator_list = ['wien2k', 'abinit', 'qe', 'elk', 'siesta', 'cp2k',
                       'crystal', 'vasp', 'dftbp', 'turbomole']
    for calculator in calculator_list:
        mode = "%s_mode" % calculator
        if mode in args and args.__dict__[mode]:
            return calculator
    return None
Return calculator name The calculator name is obtained from command option arguments where argparse is used. The argument attribute name has to be "{calculator}_mode". Then this method returns {calculator}.
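A self-contained sketch using `argparse.Namespace` (which supports the `in` test used above):

```python
import argparse

def get_interface_mode(args):
    """Return the calculator whose '{calculator}_mode' attribute is truthy."""
    calculator_list = ['wien2k', 'abinit', 'qe', 'elk', 'siesta', 'cp2k',
                       'crystal', 'vasp', 'dftbp', 'turbomole']
    for calculator in calculator_list:
        mode = "%s_mode" % calculator
        if mode in args and args.__dict__[mode]:
            return calculator
    return None

args = argparse.Namespace(qe_mode=False, vasp_mode=True)
print(get_interface_mode(args))                  # 'vasp'
print(get_interface_mode(argparse.Namespace()))  # None
```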
def _get_redirect_url(self, request):
    if 'next' in request.session:
        next_url = request.session['next']
        del request.session['next']
    elif 'next' in request.GET:
        next_url = request.GET.get('next')
    elif 'next' in request.POST:
        next_url = request.POST.get('next')
    else:
        next_url = request.user.get_absolute_url()
    if not next_url:
        next_url = '/'
    return next_url
Next gathered from session, then GET, then POST, then users absolute url.
def from_dict(cls, d):
    if type(d) != dict:
        raise TypeError('Expecting a <dict>, got a {0}'.format(type(d)))
    obj = cls()
    obj._full_data = d
    obj._import_attributes(d)
    obj._a_tags = obj._parse_a_tags(d)
    return obj
Given a dict in python-zimbra format or XML, generate a Python object.
def network(self, borrow=False):
    if self._network is None and self.network_json is not None:
        self.load_weights()
        if borrow:
            return self.borrow_cached_network(
                self.network_json, self.network_weights)
        else:
            import keras.models
            self._network = keras.models.model_from_json(self.network_json)
            if self.network_weights is not None:
                self._network.set_weights(self.network_weights)
            self.network_json = None
            self.network_weights = None
    return self._network
Return the keras model associated with this predictor. Parameters ---------- borrow : bool Whether to return a cached model if possible. See borrow_cached_network for details Returns ------- keras.models.Model
def _system_parameters(**kwargs):
    # `value == {}` is already implied by `value is not None`, so a single
    # check suffices to drop only the Nones.
    return {key: value for key, value in kwargs.items() if value is not None}
Returns system keyword arguments removing Nones. Args: kwargs: system keyword arguments. Returns: dict: system keyword arguments.
def compareBIM(args):
    class Dummy(object):
        pass
    compareBIM_args = Dummy()
    compareBIM_args.before = args.bfile + ".bim"
    compareBIM_args.after = args.out + ".bim"
    compareBIM_args.out = args.out + ".removed_snps"
    try:
        CompareBIM.checkArgs(compareBIM_args)
        beforeBIM = CompareBIM.readBIM(compareBIM_args.before)
        afterBIM = CompareBIM.readBIM(compareBIM_args.after)
        CompareBIM.compareSNPs(beforeBIM, afterBIM, compareBIM_args.out)
    except CompareBIM.ProgramError as e:
        raise ProgramError("CompareBIM: " + e.message)
Compare two BIM file. :param args: the options. :type args: argparse.Namespace Creates a *Dummy* object to mimic an :py:class:`argparse.Namespace` class containing the options for the :py:mod:`pyGenClean.PlinkUtils.compare_bim` module.
def hook(self, debug, pid):
    label = "%s!%s" % (self.__modName, self.__procName)
    try:
        hook = self.__hook[pid]
    except KeyError:
        try:
            aProcess = debug.system.get_process(pid)
        except KeyError:
            aProcess = Process(pid)
        hook = Hook(self.__preCB, self.__postCB,
                    self.__paramCount, self.__signature,
                    aProcess.get_arch())
        self.__hook[pid] = hook
    hook.hook(debug, pid, label)
Installs the API hook on a given process and module. @warning: Do not call from an API hook callback. @type debug: L{Debug} @param debug: Debug object. @type pid: int @param pid: Process ID.
def repo_exists(self, auth, username, repo_name):
    path = "/repos/{u}/{r}".format(u=username, r=repo_name)
    return self._get(path, auth=auth).ok
Returns whether a repository with name ``repo_name`` owned by the user with username ``username`` exists. :param auth.Authentication auth: authentication object :param str username: username of owner of repository :param str repo_name: name of repository :return: whether the repository exists :rtype: bool :raises NetworkFailure: if there is an error communicating with the server :raises ApiFailure: if the request cannot be serviced
def gamma_reset(self):
    with open(self._fb_device) as f:
        fcntl.ioctl(f, self.SENSE_HAT_FB_FBIORESET_GAMMA,
                    self.SENSE_HAT_FB_GAMMA_DEFAULT)
Resets the LED matrix gamma correction to default
def altz_to_utctz_str(altz):
    utci = -1 * int((float(altz) / 3600) * 100)
    utcs = str(abs(utci))
    utcs = "0" * (4 - len(utcs)) + utcs
    prefix = (utci < 0 and '-') or '+'
    return prefix + utcs
As above, but inverts the operation, returning a string that can be used in commit objects
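A self-contained sketch of the conversion (the body is reproduced here, lightly restructured, so the example runs on its own; `altz` follows the `time.altzone` convention of seconds west of UTC, so -7200 means UTC+02:00):

```python
def altz_to_utctz_str(altz):
    # -7200 seconds west of UTC corresponds to the '+0200' offset string.
    utci = -1 * int((float(altz) / 3600) * 100)
    utcs = str(abs(utci)).rjust(4, "0")
    prefix = "-" if utci < 0 else "+"
    return prefix + utcs

print(altz_to_utctz_str(-7200))  # → +0200
print(altz_to_utctz_str(0))      # → +0000
```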
def create_job(name=None, config_xml=None, saltenv='base'): if not name: raise SaltInvocationError('Required parameter \'name\' is missing') if job_exists(name): raise CommandExecutionError('Job \'{0}\' already exists'.format(name)) if not config_xml: config_xml = jenkins.EMPTY_CONFIG_XML else: config_xml_file = _retrieve_config_xml(config_xml, saltenv) with salt.utils.files.fopen(config_xml_file) as _fp: config_xml = salt.utils.stringutils.to_unicode(_fp.read()) server = _connect() try: server.create_job(name, config_xml) except jenkins.JenkinsException as err: raise CommandExecutionError( 'Encountered error creating job \'{0}\': {1}'.format(name, err) ) return config_xml
Create a job and return the configuration file used. :param name: The name of the job to create. :param config_xml: The configuration file to use to create the job. :param saltenv: The environment to look for the file in. :return: The configuration file used for the job. CLI Example: .. code-block:: bash salt '*' jenkins.create_job jobname salt '*' jenkins.create_job jobname config_xml='salt://jenkins/config.xml'
def connect(self): if self.session and self.session.is_expired: self.disconnect(abandon_session=True) if not self.session: try: login_result = self.login(self.username, self.password) except AccountFault: log.error('Login failed, invalid username or password') raise else: self.session = login_result.session_id self.connected = time() return self.connected
Connects to the Responsys soap service Uses the credentials passed to the client init to login and setup the session id returned. Returns the connection timestamp on success.
def get_finished(self): indices = [] for idf, v in self.q.items(): if v.poll() is not None: indices.append(idf) for i in indices: self.q.pop(i) return indices
Clean up terminated processes and return the list of their ids
def create_dep(self, projects): dialog = DepCreatorDialog(projects=projects, parent=self) dialog.exec_() dep = dialog.dep return dep
Create and return a new dep :param projects: the projects for the dep :type projects: :class:`jukeboxcore.djadapter.models.Project` :returns: The created dep or None :rtype: None | :class:`jukeboxcore.djadapter.models.Dep` :raises: None
def fix(self, to_file=None): self.packer_cmd = self.packer.fix self._add_opt(self.packerfile) result = self.packer_cmd() if to_file: with open(to_file, 'w') as f: f.write(result.stdout.decode()) result.fixed = json.loads(result.stdout.decode()) return result
Implements the `packer fix` function :param string to_file: File to output fixed template to
def get_root_dir(self): if os.path.isdir(self.root_path): return self.root_path else: return os.path.dirname(self.root_path)
Retrieve the absolute path to the root directory of this data source. Returns: str: absolute path to the root directory of this data source.
def call_git_branch(): try: with open(devnull, "w") as fnull: arguments = [GIT_COMMAND, 'rev-parse', '--abbrev-ref', 'HEAD'] return check_output(arguments, cwd=CURRENT_DIRECTORY, stderr=fnull).decode("ascii").strip() except (OSError, CalledProcessError): return None
Return the name of the currently checked-out git branch, or None on error
def calc_secondary_parameters(self): self.c = 1./(self.k*special.gamma(self.n))
Determine the value of the secondary parameter `c`.
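The relation above, c = 1/(k·Γ(n)), can be checked with the standard library's gamma function (`math.gamma` is used here in place of `scipy.special.gamma`; they agree for real scalar arguments):

```python
import math

def secondary_c(k, n):
    # c = 1 / (k * Gamma(n)); e.g. Gamma(3) = 2! = 2, so k=2, n=3 gives 1/4.
    return 1.0 / (k * math.gamma(n))

print(secondary_c(2.0, 3.0))  # → 0.25
```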
def _change_source_state(name, state): choc_path = _find_chocolatey(__context__, __salt__) cmd = [choc_path, 'source', state, '--name', name] result = __salt__['cmd.run_all'](cmd, python_shell=False) if result['retcode'] != 0: raise CommandExecutionError( 'Running chocolatey failed: {0}'.format(result['stdout']) ) return result['stdout']
Instructs Chocolatey to change the state of a source. name Name of the repository to affect. state State in which you want the chocolatey repository.
def receive(self, sock): msg = None data = b'' recv_done = False recv_len = -1 while not recv_done: buf = sock.recv(BUFSIZE) if buf is None or len(buf) == 0: raise Exception("socket closed") if recv_len == -1: recv_len = struct.unpack('>I', buf[:4])[0] data += buf[4:] recv_len -= len(data) else: data += buf recv_len -= len(buf) recv_done = (recv_len == 0) msg = pickle.loads(data) return msg
Receive a message on ``sock``.
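The wire format implied by `receive` is a 4-byte big-endian length prefix followed by a pickled payload. A minimal sketch of framing and unframing against an in-memory buffer, with no real socket involved (the function names here are illustrative, not part of the original API):

```python
import pickle
import struct

def frame(obj):
    # Serialize and prepend a 4-byte big-endian length prefix.
    payload = pickle.dumps(obj)
    return struct.pack('>I', len(payload)) + payload

def unframe(data):
    # Read the length prefix, then deserialize exactly that many bytes.
    (length,) = struct.unpack('>I', data[:4])
    assert len(data) - 4 == length
    return pickle.loads(data[4:4 + length])

msg = {'op': 'ping', 'args': [1, 2, 3]}
assert unframe(frame(msg)) == msg
```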
def edit(self, tag_name=None, target_commitish=None, name=None, body=None, draft=None, prerelease=None): url = self._api data = { 'tag_name': tag_name, 'target_commitish': target_commitish, 'name': name, 'body': body, 'draft': draft, 'prerelease': prerelease, } self._remove_none(data) r = self._session.patch( url, data=json.dumps(data), headers=Release.CUSTOM_HEADERS ) successful = self._boolean(r, 200, 404) if successful: self.__init__(r.json(), self) return successful
Users with push access to the repository can edit a release. If the edit is successful, this object will update itself. :param str tag_name: (optional), Name of the tag to use :param str target_commitish: (optional), The "commitish" value that determines where the Git tag is created from. Defaults to the repository's default branch. :param str name: (optional), Name of the release :param str body: (optional), Description of the release :param boolean draft: (optional), True => Release is a draft :param boolean prerelease: (optional), True => Release is a prerelease :returns: True if successful; False if not successful
def purgeDeletedWidgets(): for fields in (AbstractEditorWidget.funit_fields, AbstractEditorWidget.tunit_fields): toremove = [field for field in fields if sip.isdeleted(field)] for field in toremove: fields.remove(field)
Finds old references to stashed fields and deletes them
def make_csr(A): if not (isspmatrix_csr(A) or isspmatrix_bsr(A)): try: A = csr_matrix(A) print('Implicit conversion of A to CSR in pyamg.blackbox.make_csr') except BaseException: raise TypeError('Argument A must have type csr_matrix or bsr_matrix, or be convertible to csr_matrix') if A.shape[0] != A.shape[1]: raise TypeError('Argument A must be square') A = A.asfptype() return A
Convert A to CSR, if A is not a CSR or BSR matrix already. Parameters ---------- A : array, matrix, sparse matrix (n x n) matrix to convert to CSR Returns ------- A : csr_matrix, bsr_matrix If A is csr_matrix or bsr_matrix, then do nothing and return A. Else, convert A to CSR if possible and return. Examples -------- >>> from pyamg.gallery import poisson >>> from pyamg.blackbox import make_csr >>> A = poisson((40,40),format='csc') >>> Acsr = make_csr(A) Implicit conversion of A to CSR in pyamg.blackbox.make_csr
def get_conservation(block): consensus = block['sequences'][0]['seq'] assert all(c.isupper() for c in consensus), \ "So-called consensus contains indels!" cleaned = [[c for c in s['seq'] if not c.islower()] for s in block['sequences'][1:]] height = float(len(cleaned)) for row in cleaned: if len(row) != len(consensus): raise ValueError("Aligned sequence length (%s) doesn't match " "consensus (%s)" % (len(row), len(consensus))) columns = list(zip(*cleaned)) return dict((idx + 1, columns[idx].count(cons_char) / height) for idx, cons_char in enumerate(consensus))
Calculate conservation levels at each consensus position. Return a dict of {position: float conservation}
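A self-contained sketch of the per-column calculation, as a simplified re-implementation for illustration (the block parsing and indel bookkeeping of the original are omitted):

```python
def conservation(consensus, rows):
    # Fraction of aligned rows matching the consensus at each 1-based position.
    height = float(len(rows))
    columns = list(zip(*rows))  # list() needed: zip returns an iterator on Python 3
    return {i + 1: columns[i].count(c) / height
            for i, c in enumerate(consensus)}

print(conservation("AC", ["AC", "AG"]))  # → {1: 1.0, 2: 0.5}
```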
def _parse_tokenize(self, rule): for token in self._TOKENIZE_RE.split(rule): if not token or token.isspace(): continue clean = token.lstrip('(') for i in range(len(token) - len(clean)): yield '(', '(' if not clean: continue else: token = clean clean = token.rstrip(')') trail = len(token) - len(clean) lowered = clean.lower() if lowered in ('and', 'or', 'not'): yield lowered, clean elif clean: if len(token) >= 2 and ((token[0], token[-1]) in [('"', '"'), ("'", "'")]): yield 'string', token[1:-1] else: yield 'check', self._parse_check(clean) for i in range(trail): yield ')', ')'
Tokenizer for the policy language.
def get_document(self, doc_url, force_download=False): doc_url = str(doc_url) if (self.use_cache and not force_download and self.cache.has_document(doc_url)): doc_data = self.cache.get_document(doc_url) else: doc_data = self.api_request(doc_url, raw=True) if self.update_cache: self.cache.add_document(doc_url, doc_data) return doc_data
Retrieve the data for the given document from the server :type doc_url: String or Document :param doc_url: the URL of the document, or a Document object :type force_download: Boolean :param force_download: True to download from the server regardless of the cache's contents :rtype: String :returns: the document data :raises: APIError if the API request is not successful
def density_matrix_of(self, qubits: List[ops.Qid] = None) -> np.ndarray: return density_matrix_from_state_vector( self.state_vector(), [self.qubit_map[q] for q in qubits] if qubits is not None else None )
Returns the density matrix of the state. Calculate the density matrix for the system on the list, qubits. Any qubits not in the list that are present in self.state_vector() will be traced out. If qubits is None the full density matrix for self.state_vector() is returned, given self.state_vector() follows standard Kronecker convention of numpy.kron. For example: self.state_vector() = np.array([1/np.sqrt(2), 1/np.sqrt(2)], dtype=np.complex64) qubits = None gives us \rho = \begin{bmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix} Args: qubits: list containing qubit IDs that you would like to include in the density matrix (i.e.) qubits that WON'T be traced out. Returns: A numpy array representing the density matrix. Raises: ValueError: if the size of the state represents more than 25 qubits. IndexError: if the indices are out of range for the number of qubits corresponding to the state.
def connect_to(self, service_name, **kwargs): service_class = self.get_connection(service_name) return service_class.connect_to(**kwargs)
Shortcut method to make instantiating the ``Connection`` classes easier. Forwards ``**kwargs`` like region, keys, etc. on to the constructor. :param service_name: A string that specifies the name of the desired service. Ex. ``sqs``, ``sns``, ``dynamodb``, etc. :type service_name: string :rtype: <kotocore.connection.Connection> instance
async def start_async(self): if self.run_task: raise Exception("A PartitionManager cannot be started multiple times.") partition_count = await self.initialize_stores_async() _logger.info("%r PartitionCount: %r", self.host.guid, partition_count) self.run_task = asyncio.ensure_future(self.run_async())
Initializes the partition checkpoint and lease store and then calls run_async.
def rtl_langs(self): def is_rtl(lang): base_rtl = ['ar', 'fa', 'he', 'ur'] return any([lang.startswith(base_code) for base_code in base_rtl]) return sorted(set([lang for lang in self.translated_locales if is_rtl(lang)]))
Returns the set of translated RTL language codes present in self.locales. Ignores source locale.
def parseBranches(self, descendants): parsed, parent, cond = [], False, lambda b: (b.string or '').strip() for branch in filter(cond, descendants): if self.getHeadingLevel(branch) == self.depth: parsed.append({'root':branch.string, 'source':branch}) parent = True elif not parent: parsed.append({'root':branch.string, 'source':branch}) else: parsed[-1].setdefault('descendants', []).append(branch) return [TOC(depth=self.depth+1, **kwargs) for kwargs in parsed]
Parse top level of markdown :param list descendants: list of source objects :return: list of filtered TreeOfContents objects
def _handleCallAnswered(self, regexMatch, callId=None): if regexMatch: groups = regexMatch.groups() if len(groups) > 1: callId = int(groups[0]) self.activeCalls[callId].answered = True else: for call in dictValuesIter(self.activeCalls): if not call.answered and type(call) == Call: call.answered = True return else: self.activeCalls[callId].answered = True
Handler for "outgoing call answered" event notification line
def as_native_str(encoding='utf-8'): if PY3: return lambda f: f else: def encoder(f): @functools.wraps(f) def wrapper(*args, **kwargs): return f(*args, **kwargs).encode(encoding=encoding) return wrapper return encoder
A decorator to turn a function or method call that returns text, i.e. unicode, into one that returns a native platform str. Use it as a decorator like this:: from __future__ import unicode_literals class MyClass(object): @as_native_str(encoding='ascii') def __repr__(self): return next(self._iter).upper()
def _get_LDAP_connection(): server = ldap3.Server('ldap://' + get_optional_env('EPFL_LDAP_SERVER_FOR_SEARCH')) connection = ldap3.Connection(server) connection.open() return connection, get_optional_env('EPFL_LDAP_BASE_DN_FOR_SEARCH')
Return an LDAP connection and the base DN used for searches
def get_one(cls, enforcement_id): qry = db.Enforcements.filter(enforcement_id == Enforcements.enforcement_id) return qry
Return the enforcement action matching the given ``enforcement_id``
def prime_gen() -> Iterator[int]: D = {} yield 2 for q in itertools.islice(itertools.count(3), 0, None, 2): p = D.pop(q, None) if p is None: D[q * q] = 2 * q yield q else: x = p + q while x in D: x += p D[x] = p
A generator for prime numbers starting from 2.
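The incremental sieve can be exercised like any generator; its body is reproduced here so the example runs on its own:

```python
import itertools

def prime_gen():
    # Incremental sieve: D maps each upcoming odd composite to a prime stride.
    D = {}
    yield 2
    for q in itertools.islice(itertools.count(3), 0, None, 2):
        p = D.pop(q, None)
        if p is None:
            # q is prime; its first odd multiple not yet seen is q*q.
            D[q * q] = 2 * q
            yield q
        else:
            # q is composite; reschedule its witness p at the next free slot.
            x = p + q
            while x in D:
                x += p
            D[x] = p

print(list(itertools.islice(prime_gen(), 8)))  # → [2, 3, 5, 7, 11, 13, 17, 19]
```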
def table(columns, names, page_size=None, format_strings=None): if page_size is None: page = 'disable' else: page = 'enable' div_id = uuid.uuid4() column_descriptions = [] for column, name in zip(columns, names): if column.dtype.kind == 'S': ctype = 'string' else: ctype = 'number' column_descriptions.append((ctype, name)) data = [] for item in zip(*columns): data.append(list(item)) return google_table_template.render(div_id=div_id, page_enable=page, column_descriptions = column_descriptions, page_size=page_size, data=data, format_strings=format_strings, )
Return an html table of this data Parameters ---------- columns : list of numpy arrays names : list of strings The list of columns names page_size : {int, None}, optional The number of items to show on each page of the table format_strings : {lists of strings, None}, optional The ICU format string for this column, None for no formatting. All columns must have a format string if provided. Returns ------- html_table : str A str containing the html code to display a table of this data
def check_boto_reqs(boto_ver=None, boto3_ver=None, botocore_ver=None, check_boto=True, check_boto3=True): if check_boto is True: try: import boto has_boto = True except ImportError: has_boto = False if boto_ver is None: boto_ver = '2.0.0' if not has_boto or version_cmp(boto.__version__, boto_ver) == -1: return False, 'A minimum version of boto {0} is required.'.format(boto_ver) if check_boto3 is True: try: import boto3 import botocore has_boto3 = True except ImportError: has_boto3 = False if boto3_ver is None: boto3_ver = '1.2.6' if botocore_ver is None: botocore_ver = '1.3.23' if not has_boto3 or version_cmp(boto3.__version__, boto3_ver) == -1: return False, 'A minimum version of boto3 {0} is required.'.format(boto3_ver) elif version_cmp(botocore.__version__, botocore_ver) == -1: return False, 'A minimum version of botocore {0} is required'.format(botocore_ver) return True
Checks for the version of various required boto libs in one central location. Most boto states and modules rely on a single version of the boto, boto3, or botocore libs. However, some require newer versions of any of these dependencies. This function allows the module to pass in a version to override the default minimum required version. This function is useful in centralizing checks for ``__virtual__()`` functions in the various, and many, boto modules and states. boto_ver The minimum required version of the boto library. Defaults to ``2.0.0``. boto3_ver The minimum required version of the boto3 library. Defaults to ``1.2.6``. botocore_ver The minimum required version of the botocore library. Defaults to ``1.3.23``. check_boto Boolean defining whether or not to check for boto deps. This defaults to ``True`` as most boto modules/states rely on boto, but some do not. check_boto3 Boolean defining whether or not to check for boto3 (and therefore botocore) deps. This defaults to ``True`` as most boto modules/states rely on boto3/botocore, but some do not.
def hashed(source_filename, prepared_options, thumbnail_extension, **kwargs): parts = ':'.join([source_filename] + prepared_options) short_sha = hashlib.sha1(parts.encode('utf-8')).digest() short_hash = base64.urlsafe_b64encode(short_sha[:9]).decode('utf-8') return '.'.join([short_hash, thumbnail_extension])
Generate a short hashed thumbnail filename. Creates a 12 character url-safe base64 sha1 filename (plus the extension), for example: ``6qW1buHgLaZ9.jpg``.
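A sketch of the hashing scheme described above: sha1 of the colon-joined parts, truncated to 9 bytes, url-safe base64 encoded. Nine bytes encode to exactly 12 base64 characters with no padding, which is where the 12-character filename comes from (`hashed_name` is an illustrative re-implementation, not the library's exported name):

```python
import base64
import hashlib

def hashed_name(source_filename, prepared_options, thumbnail_extension):
    # Join the source filename and options, hash, and keep the first 9 bytes.
    parts = ':'.join([source_filename] + prepared_options)
    digest = hashlib.sha1(parts.encode('utf-8')).digest()
    short_hash = base64.urlsafe_b64encode(digest[:9]).decode('utf-8')
    return '.'.join([short_hash, thumbnail_extension])

name = hashed_name('photo.jpg', ['crop', '100x100'], 'jpg')
print(len(name.split('.')[0]))  # → 12
```

The same inputs always produce the same filename, so the scheme is deterministic and cache-friendly.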
def filter_oids(self, oids): oids = set(oids) return self[self['_oid'].map(lambda x: x in oids)]
Leaves only objects with specified oids. :param oids: list of oids to include
def on_mouse_motion(self, x, y, dx, dy): self.example.mouse_position_event(x, self.buffer_height - y)
Pyglet-specific mouse motion callback. Forwards and translates the event to the example
def post(self, uri, body=None, logon_required=True, wait_for_completion=True, operation_timeout=None): try: return self._urihandler.post(self._hmc, uri, body, logon_required, wait_for_completion) except HTTPError as exc: raise zhmcclient.HTTPError(exc.response()) except ConnectionError as exc: raise zhmcclient.ConnectionError(exc.message, None)
Perform the HTTP POST method against the resource identified by a URI, using a provided request body, on the faked HMC. HMC operations using HTTP POST are either synchronous or asynchronous. Asynchronous operations return the URI of an asynchronously executing job that can be queried for status and result. Examples for synchronous operations: * With no response body: "Logon", "Update CPC Properties" * With a response body: "Create Partition" Examples for asynchronous operations: * With no ``job-results`` field in the completed job status response: "Start Partition" * With a ``job-results`` field in the completed job status response (under certain conditions): "Activate a Blade", or "Set CPC Power Save" The `wait_for_completion` parameter of this method can be used to deal with asynchronous HMC operations in a synchronous way. Parameters: uri (:term:`string`): Relative URI path of the resource, e.g. "/api/session". This URI is relative to the base URL of the session (see the :attr:`~zhmcclient.Session.base_url` property). Must not be `None`. body (:term:`json object`): JSON object to be used as the HTTP request body (payload). `None` means the same as an empty dictionary, namely that no HTTP body is included in the request. logon_required (bool): Boolean indicating whether the operation requires that the session is logged on to the HMC. For example, the "Logon" operation does not require that. Because this is a faked HMC, this does not perform a real logon, but it is still used to update the state in the faked HMC. wait_for_completion (bool): Boolean controlling whether this method should wait for completion of the requested HMC operation, as follows: * If `True`, this method will wait for completion of the requested operation, regardless of whether the operation is synchronous or asynchronous. This will cause an additional entry in the time statistics to be created for the asynchronous operation and waiting for its completion. 
This entry will have a URI that is the targeted URI, appended with "+completion". * If `False`, this method will immediately return the result of the HTTP POST method, regardless of whether the operation is synchronous or asynchronous. operation_timeout (:term:`number`): Timeout in seconds, when waiting for completion of an asynchronous operation. The special value 0 means that no timeout is set. `None` means that the default async operation timeout of the session is used. For `wait_for_completion=True`, a :exc:`~zhmcclient.OperationTimeout` is raised when the timeout expires. For `wait_for_completion=False`, this parameter has no effect. Returns: :term:`json object`: If `wait_for_completion` is `True`, returns a JSON object representing the response body of the synchronous operation, or the response body of the completed job that performed the asynchronous operation. If a synchronous operation has no response body, `None` is returned. If `wait_for_completion` is `False`, returns a JSON object representing the response body of the synchronous or asynchronous operation. In case of an asynchronous operation, the JSON object will have a member named ``job-uri``, whose value can be used with the :meth:`~zhmcclient.Session.query_job_status` method to determine the status of the job and the result of the original operation, once the job has completed. See the section in the :term:`HMC API` book about the specific HMC operation and about the 'Query Job Status' operation, for a description of the members of the returned JSON objects. Raises: :exc:`~zhmcclient.HTTPError` :exc:`~zhmcclient.ParseError` (not implemented) :exc:`~zhmcclient.AuthError` (not implemented) :exc:`~zhmcclient.ConnectionError`
def get(self, name): pm = self._libeng.engGetVariable(self._ep, name) out = mxarray_to_ndarray(self._libmx, pm) self._libmx.mxDestroyArray(pm) return out
Get variable `name` from MATLAB workspace. Parameters ---------- name : str Name of the variable in MATLAB workspace. Returns ------- array_like Value of the variable `name`.