def dict2orderedlist(dic, order_list, default='', **kwargs):
    result = []
    for key_order in order_list:
        value = get_element(dic, key_order, **kwargs)
        result.append(value if value is not None else default)
    return result
Return a list with dict values ordered by a list of keys passed in args.
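A minimal runnable sketch of this helper. The `get_element` function is not shown in the source, so a hypothetical stand-in is included here that resolves a possibly dotted key path in a nested dict:

```python
def get_element(dic, key, separator='.'):
    # Hypothetical stand-in for the real get_element helper (not shown in
    # the source): resolves a possibly nested key such as 'b.c' in a dict.
    value = dic
    for part in key.split(separator):
        if not isinstance(value, dict) or part not in value:
            return None
        value = value[part]
    return value

def dict2orderedlist(dic, order_list, default='', **kwargs):
    # Return dict values in the order given by order_list, substituting
    # `default` for any key that resolves to None / is missing.
    result = []
    for key_order in order_list:
        value = get_element(dic, key_order, **kwargs)
        result.append(value if value is not None else default)
    return result

row = dict2orderedlist({'name': 'Ada', 'score': 42},
                       ['name', 'missing', 'score'], default='-')
# row == ['Ada', '-', 42]
```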
def configure_box(self, boxsize, root_nx=1, root_ny=1, root_nz=1):
    clibrebound.reb_configure_box(byref(self), c_double(boxsize),
                                  c_int(root_nx), c_int(root_ny),
                                  c_int(root_nz))
    return
Initialize the simulation box. This function only needs to be called if boundary conditions other than "none" are used. In such a case the boxsize must be known and is set with this function.

Parameters
----------
boxsize : float, optional
    The size of one root box.
root_nx, root_ny, root_nz : int, optional
    The number of root boxes in each direction. The total size of the
    simulation box will be ``root_nx * boxsize``, ``root_ny * boxsize``
    and ``root_nz * boxsize``. By default there will be exactly one
    root box in each direction.
def timeseries_from_mat(filename, varname=None, fs=1.0):
    import scipy.io as sio
    if varname is None:
        mat_dict = sio.loadmat(filename)
        if len(mat_dict) > 1:
            raise ValueError('Must specify varname: file contains '
                             'more than one variable. ')
    else:
        mat_dict = sio.loadmat(filename, variable_names=(varname,))
    array = mat_dict.popitem()[1]
    return Timeseries(array, fs=fs)
Load a multi-channel Timeseries from a MATLAB .mat file.

Args:
    filename (str): .mat file to load
    varname (str): variable name. Only needed if there is more than one
        variable saved in the .mat file
    fs (scalar): sample rate of timeseries in Hz (constant timestep assumed)

Returns:
    Timeseries
def fields(self):
    if 'feature' in self._dict:
        self._attributes = self._dict['feature']['attributes']
    else:
        self._attributes = self._dict['attributes']
    return self._attributes.keys()
returns a list of feature fields
def _delLocalOwnerRole(self, username):
    parent = self.getParent()
    if parent.portal_type == "Client":
        parent.manage_delLocalRoles([username])
        self._recursive_reindex_object_security(parent)
Remove local owner role from parent object
def _nodemap_changed(self, data, stat):
    if not stat:
        raise EnvironmentNotFoundException(self.nodemap_path)
    try:
        conf_path = self._deserialize_nodemap(data)[self.hostname]
    except KeyError:
        conf_path = '/services/%s/conf' % self.service
    self.config_watcher = DataWatch(
        self.zk, conf_path, self._config_changed
    )
Called when the nodemap changes.
def plot(self, lax=None, proj='all', element='PIBsBvV', dP=None,
         dI=_def.TorId, dBs=_def.TorBsd, dBv=_def.TorBvd,
         dVect=_def.TorVind, dIHor=_def.TorITord, dBsHor=_def.TorBsTord,
         dBvHor=_def.TorBvTord, Lim=None, Nstep=_def.TorNTheta,
         dLeg=_def.TorLegd, indices=False, draw=True, fs=None,
         wintit=None, Test=True):
    kwdargs = locals()
    lout = ['self']
    for k in lout:
        del kwdargs[k]
    return _plot.Struct_plot(self, **kwdargs)
Plot the polygon defining the vessel, in the chosen projection.

Generic method for plotting the Ves object. The projections to be plotted and the elements to plot can be specified. Dictionaries of properties for each element can also be specified. If an ax is not provided, a default one is created.

Parameters
----------
lax : list or plt.Axes
    The axes to be used for plotting. Provide a list of 2 axes if proj='all'. If None a new figure with axes is created.
proj : str
    Flag specifying the kind of projection:
        - 'Cross' : cross-section projection
        - 'Hor' : horizontal projection
        - 'All' : both
        - '3d' : a 3d matplotlib plot
element : str
    Flag specifying which elements to plot. Each capital letter corresponds to an element:
        * 'P': polygon
        * 'I': point used as a reference for impact parameters
        * 'Bs': (surfacic) center of mass
        * 'Bv': (volumic) center of mass for Tor type
        * 'V': vector pointing inward perpendicular to each segment
dP : dict / None
    Dict of properties for plotting the polygon. Fed to plt.Axes.plot() or plt.plot_surface() if proj='3d'.
dI : dict / None
    Dict of properties for plotting point 'I' in cross-section projection.
dIHor : dict / None
    Dict of properties for plotting point 'I' in horizontal projection.
dBs : dict / None
    Dict of properties for plotting point 'Bs' in cross-section projection.
dBsHor : dict / None
    Dict of properties for plotting point 'Bs' in horizontal projection.
dBv : dict / None
    Dict of properties for plotting point 'Bv' in cross-section projection.
dBvHor : dict / None
    Dict of properties for plotting point 'Bv' in horizontal projection.
dVect : dict / None
    Dict of properties for plotting point 'V' in cross-section projection.
dLeg : dict / None
    Dict of properties for plotting the legend, fed to plt.legend(). The legend is not plotted if None.
Lim : list or tuple
    Array of a lower and upper limit of angle (rad.) or length for plotting the '3d' proj.
Nstep : int
    Number of points for sampling in the ignorable coordinate (toroidal angle or length).
draw : bool
    Flag indicating whether fig.canvas.draw() shall be called automatically.
a4 : bool
    Flag indicating whether the figure should be plotted in a4 dimensions for printing.
Test : bool
    Flag indicating whether the inputs should be tested for conformity.

Returns
-------
lax : list / plt.Axes
    Handles of the axes used for plotting (a list if several axes were used).
def resolve_remote(self, uri):
    if uri.startswith('file://'):
        try:
            path = uri[7:]
            with open(path, 'r') as schema_file:
                result = yaml.load(schema_file)
            if self.cache_remote:
                self.store[uri] = result
            return result
        except yaml.parser.ParserError as e:
            logging.debug('Error parsing {!r} as YAML: {}'.format(uri, e))
    return super(SchemaRefResolver, self).resolve_remote(uri)
Add support to load YAML files.

This will attempt to load a YAML file first, and then fall back to the default behavior.

:param str uri: the URI to resolve
:returns: the retrieved document
def _request_process_json_bulk(self, response_data):
    status = 'Failure'
    data = response_data.get(self.request_entity, [])
    if data:
        status = 'Success'
    return data, status
Handle bulk JSON response.

Return:
    (string): The response data
    (string): The response status
def close(self):
    self._closeIfNotUpdatedTimer.stop()
    self._qpart.removeEventFilter(self)
    self._qpart.cursorPositionChanged.disconnect(
        self._onCursorPositionChanged)
    QListView.close(self)
Explicitly called destructor. Removes widget from the qpart
def set_bucket_policy(self, bucket_name, policy):
    is_valid_policy_type(policy)
    is_valid_bucket_name(bucket_name)
    headers = {
        'Content-Length': str(len(policy)),
        'Content-Md5': get_md5_base64digest(policy)
    }
    content_sha256_hex = get_sha256_hexdigest(policy)
    self._url_open("PUT",
                   bucket_name=bucket_name,
                   query={"policy": ""},
                   headers=headers,
                   body=policy,
                   content_sha256=content_sha256_hex)
Set bucket policy of given bucket name.

:param bucket_name: Bucket name.
:param policy: Access policy/ies in string format.
def parse_xml_node(self, node):
    self.name = node.getAttributeNS(RTS_NS, 'name')
    self.comment = node.getAttributeNS(RTS_EXT_NS, 'comment')
    if node.hasAttributeNS(RTS_EXT_NS, 'visible'):
        visible = node.getAttributeNS(RTS_EXT_NS, 'visible')
        self.visible = visible.lower() == 'true' or visible == '1'
    for c in get_direct_child_elements_xml(node, prefix=RTS_EXT_NS,
                                           local_name='Properties'):
        name, value = parse_properties_xml(c)
        self._properties[name] = value
    return self
Parse an xml.dom Node object representing a data port into this object.
def fill(self, value=b'\xff'):
    previous_segment_maximum_address = None
    fill_segments = []
    for address, data in self._segments:
        maximum_address = address + len(data)
        if previous_segment_maximum_address is not None:
            fill_size = address - previous_segment_maximum_address
            fill_size_words = fill_size // self.word_size_bytes
            fill_segments.append(_Segment(
                previous_segment_maximum_address,
                previous_segment_maximum_address + fill_size,
                value * fill_size_words,
                self.word_size_bytes))
        previous_segment_maximum_address = maximum_address
    for segment in fill_segments:
        self._segments.add(segment)
Fill all empty space between segments with given value `value`.
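The gap-filling logic can be sketched with plain `(address, data)` tuples instead of the `_Segment` objects (which are not shown in the source), assuming byte addressing with a word size of 1:

```python
def fill_gaps(segments, value=b'\xff'):
    # Minimal sketch of the fill logic: for each hole between consecutive
    # segments, insert a padding segment made of `value` bytes.
    filled = []
    previous_end = None
    for address, data in sorted(segments):
        if previous_end is not None and address > previous_end:
            # Pad the gap between the previous segment's end and this start.
            filled.append((previous_end, value * (address - previous_end)))
        filled.append((address, data))
        previous_end = address + len(data)
    return filled

segments = [(0, b'ab'), (5, b'cd')]
# fill_gaps(segments) == [(0, b'ab'), (2, b'\xff\xff\xff'), (5, b'cd')]
```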
def diff(self, plot):
    if self.fig == 'auto':
        figure_format = self.params('fig').objects[0]
    else:
        figure_format = self.fig
    return self.html(plot, figure_format)
Returns the latest plot data to update an existing plot.
def load_umls():
    dataset_path = _load('umls')
    X = _load_csv(dataset_path, 'data')
    y = X.pop('label').values
    graph = nx.Graph(nx.read_gml(os.path.join(dataset_path, 'graph.gml')))
    return Dataset(load_umls.__doc__, X, y, accuracy_score,
                   stratify=True, graph=graph)
UMLs Dataset. The data consists of information about a 135-node graph and the relations between its nodes, given as a DataFrame with three columns, source, target and type, indicating which nodes are related and with which type of link. The target is a 1d numpy binary integer array indicating whether the indicated link exists or not.
def which(executable):
    locations = (
        '/usr/local/bin',
        '/bin',
        '/usr/bin',
        '/usr/local/sbin',
        '/usr/sbin',
        '/sbin',
    )
    for location in locations:
        executable_path = os.path.join(location, executable)
        if os.path.exists(executable_path) and os.path.isfile(executable_path):
            return executable_path
find the location of an executable
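The same lookup can be made testable by turning the hard-coded search path into a parameter (an assumption for illustration; the original always scans the fixed tuple). Note that `os.path.isfile` already implies existence, so the double check in the original collapses to one call:

```python
import os

def which(executable, locations=('/usr/local/bin', '/bin', '/usr/bin',
                                 '/usr/local/sbin', '/usr/sbin', '/sbin')):
    # Return the first path in `locations` containing a regular file named
    # `executable`, or None if no location has it.
    for location in locations:
        executable_path = os.path.join(location, executable)
        if os.path.isfile(executable_path):
            return executable_path
    return None
```

With the path parameterized, the behaviour is easy to exercise against a temporary directory.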
def get_templates(self, limit=100, offset=0):
    url = self.TEMPLATES_URL + "?limit=%s&offset=%s" % (limit, offset)
    connection = Connection(self.token)
    connection.set_url(self.production, url)
    return connection.get_request()
Get all account templates
def bundle_visualization_url(self, bundle_id, channel=None):
    url = '{}/{}/diagram.svg'.format(self.url, _get_path(bundle_id))
    return _add_channel(url, channel)
Generate the path to the visualization for bundles.

@param bundle_id The ID of the bundle.
@param channel Optional channel name.
@return The url to the visualization.
def compute_and_cache_missing_buckets(self, start_time, end_time,
                                      untrusted_time,
                                      force_recompute=False):
    if untrusted_time and not untrusted_time.tzinfo:
        untrusted_time = untrusted_time.replace(tzinfo=tzutc())
    events = self._compute_buckets(start_time, end_time,
                                   compute_missing=True, cache=True,
                                   untrusted_time=untrusted_time,
                                   force_recompute=force_recompute)
    for event in events:
        yield event
Return the results for `query_function` on every `bucket_width` time period between `start_time` and `end_time`. Look for previously cached results to avoid recomputation. For any buckets where all events would have occurred before `untrusted_time`, cache the results.

:param start_time: A datetime for the beginning of the range, aligned with `bucket_width`.
:param end_time: A datetime for the end of the range, aligned with `bucket_width`.
:param untrusted_time: A datetime after which to not trust that computed data is stable. Any buckets that overlap with or follow this untrusted_time will not be cached.
:param force_recompute: A boolean that, if True, will force recompute and recaching of even previously cached data.
def pack_msg(method, msg, pickle_protocol=PICKLE_PROTOCOL):
    dump = io.BytesIO()
    pickle.dump(msg, dump, pickle_protocol)
    size = dump.tell()
    return (struct.pack(METHOD_STRUCT_FORMAT, method) +
            struct.pack(SIZE_STRUCT_FORMAT, size) +
            dump.getvalue())
Packs a method and message.
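A self-contained sketch of this framing scheme. The real `METHOD_STRUCT_FORMAT`, `SIZE_STRUCT_FORMAT` and `PICKLE_PROTOCOL` constants are not shown in the source, so plausible values are assumed here (one unsigned byte for the method id, a big-endian unsigned 32-bit payload size):

```python
import io
import pickle
import struct

# Assumed wire formats -- the real constants are not in the source.
METHOD_STRUCT_FORMAT = '!B'   # method id: one unsigned byte
SIZE_STRUCT_FORMAT = '!I'     # payload size: big-endian uint32
PICKLE_PROTOCOL = 2

def pack_msg(method, msg, pickle_protocol=PICKLE_PROTOCOL):
    # Serialize msg with pickle, then prepend the method id and payload size.
    dump = io.BytesIO()
    pickle.dump(msg, dump, pickle_protocol)
    size = dump.tell()
    return (struct.pack(METHOD_STRUCT_FORMAT, method) +
            struct.pack(SIZE_STRUCT_FORMAT, size) +
            dump.getvalue())

packet = pack_msg(7, {'hello': 'world'})
method, size = struct.unpack('!BI', packet[:5])
payload = pickle.loads(packet[5:])
# method == 7, payload == {'hello': 'world'}
```

The fixed-size header lets a receiver read 5 bytes, learn the payload length, then read exactly that many more bytes.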
def to_masked_array(self, copy=True):
    isnull = pd.isnull(self.values)
    return np.ma.MaskedArray(data=self.values, mask=isnull, copy=copy)
Convert this array into a numpy.ma.MaskedArray.

Parameters
----------
copy : bool
    If True (default) make a copy of the array in the result. If False,
    a MaskedArray view of DataArray.values is returned.

Returns
-------
result : MaskedArray
    Masked where invalid values (nan or inf) occur.
def eval(e, amplitude, e_0, alpha, beta):
    ee = e / e_0
    eeponent = -alpha - beta * np.log(ee)
    return amplitude * ee ** eeponent
One dimensional log parabola model function.
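The model evaluates A * (E/E0)^(-alpha - beta*ln(E/E0)); at E = E0 the power term is 1, so the function returns the amplitude. A pure-Python version (using `math.log` in place of `np.log`, so it works on scalars without NumPy):

```python
import math

def log_parabola(e, amplitude, e_0, alpha, beta):
    # A * (E/E0) ** (-alpha - beta * ln(E/E0)); beta=0 reduces this to a
    # plain power law with index -alpha.
    ee = e / e_0
    exponent = -alpha - beta * math.log(ee)
    return amplitude * ee ** exponent

# log_parabola(10.0, 3.5, 10.0, 2.0, 0.1) == 3.5 (pivot energy)
```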
def get_sites(self, entry):
    try:
        index_url = reverse('zinnia:entry_archive_index')
    except NoReverseMatch:
        index_url = ''
    return format_html_join(
        ', ', '<a href="{}://{}{}" target="blank">{}</a>',
        [(settings.PROTOCOL, site.domain, index_url,
          conditional_escape(site.name))
         for site in entry.sites.all()])
Return the sites linked in HTML.
def process_response(self, response):
    if response.status_code != 200:
        raise TwilioException('Unable to fetch page', response)
    return json.loads(response.text)
Load a JSON response.

:param Response response: The HTTP response.
:return dict: The JSON-loaded content.
def design_create(self, name, ddoc, use_devmode=True, syncwait=0):
    name = self._cb._mk_devmode(name, use_devmode)
    fqname = "_design/{0}".format(name)
    if not isinstance(ddoc, dict):
        ddoc = json.loads(ddoc)
    ddoc = ddoc.copy()
    ddoc['_id'] = fqname
    ddoc = json.dumps(ddoc)
    existing = None
    if syncwait:
        try:
            existing = self.design_get(name, use_devmode=False)
        except CouchbaseError:
            pass
    ret = self._cb._http_request(
        type=_LCB.LCB_HTTP_TYPE_VIEW,
        path=fqname,
        method=_LCB.LCB_HTTP_METHOD_PUT,
        post_data=ddoc,
        content_type="application/json")
    self._design_poll(name, 'add', existing, syncwait,
                      use_devmode=use_devmode)
    return ret
Store a design document.

:param string name: The name of the design
:param ddoc: The actual contents of the design document
:type ddoc: string or dict
    If ``ddoc`` is a string, it is passed, as-is, to the server. Otherwise it is serialized as JSON, and its ``_id`` field is set to ``_design/{name}``.
:param bool use_devmode: Whether a *development* mode view should be used. Development-mode views are less resource demanding with the caveat that by default they only operate on a subset of the data. Normally a view will initially be created in 'development mode', and then published using :meth:`design_publish`.
:param float syncwait: How long to poll for the action to complete. Server side design operations are scheduled and thus this function may return before the operation is actually completed. Specifying the timeout here ensures the client polls during this interval to ensure the operation has completed.
:raise: :exc:`couchbase.exceptions.TimeoutError` if ``syncwait`` was specified and the operation could not be verified within the interval specified.
:return: An :class:`~couchbase.result.HttpResult` object.

.. seealso:: :meth:`design_get`, :meth:`design_delete`, :meth:`design_publish`
def get_real_field(model, field_name):
    parts = field_name.split('__')
    field = model._meta.get_field(parts[0])
    if len(parts) == 1:
        return model._meta.get_field(field_name)
    elif isinstance(field, models.ForeignKey):
        return get_real_field(field.rel.to, '__'.join(parts[1:]))
    else:
        raise Exception('Unhandled field: %s' % field_name)
Get the real field from a model given its name. Handle nested models recursively (aka. ``__`` lookups)
def get(self, channel_sid):
    return UserChannelContext(
        self._version,
        service_sid=self._solution['service_sid'],
        user_sid=self._solution['user_sid'],
        channel_sid=channel_sid,
    )
Constructs a UserChannelContext.

:param channel_sid: The SID of the Channel that has the User Channel to fetch
:returns: twilio.rest.chat.v2.service.user.user_channel.UserChannelContext
:rtype: twilio.rest.chat.v2.service.user.user_channel.UserChannelContext
def get_left_right(seq):
    cseq = seq.strip(GAPS)
    leftjust = seq.index(cseq[0])
    rightjust = seq.rindex(cseq[-1])
    return leftjust, rightjust
Find the positions of the first and last base.
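A runnable sketch of the function: the `GAPS` constant is not shown in the source, so a common choice of gap characters is assumed here. Since every character before the first non-gap base is a gap, `seq.index(cseq[0])` lands exactly on that first base (and symmetrically for `rindex`):

```python
GAPS = '-.'  # assumed gap characters; the real GAPS constant is not shown

def get_left_right(seq):
    # Positions of the first and last non-gap base in an aligned sequence.
    cseq = seq.strip(GAPS)
    leftjust = seq.index(cseq[0])
    rightjust = seq.rindex(cseq[-1])
    return leftjust, rightjust

# get_left_right('--ACGT--') == (2, 5)
```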
def get(self, timeout):
    if self._first:
        self._first = False
        return ("ping", PingStats.get(), {})
    try:
        (action, msg, kwargs) = yield from asyncio.wait_for(super().get(),
                                                            timeout)
    except asyncio.futures.TimeoutError:
        return ("ping", PingStats.get(), {})
    return (action, msg, kwargs)
When the timeout expires, send a ping notification with server information.
def _fast_read(self, infile):
    infile.seek(0)
    return int(infile.read().decode().strip())
Function for fast reading from sensor files.
def decode(self, dataset_split=None, decode_from_file=False,
           checkpoint_path=None):
    if decode_from_file:
        decoding.decode_from_file(self._estimator,
                                  self._decode_hparams.decode_from_file,
                                  self._hparams, self._decode_hparams,
                                  self._decode_hparams.decode_to_file)
    else:
        decoding.decode_from_dataset(
            self._estimator,
            self._hparams.problem.name,
            self._hparams,
            self._decode_hparams,
            dataset_split=dataset_split,
            checkpoint_path=checkpoint_path)
Decodes from dataset or file.
def install(pkg, target='LocalSystem', store=False, allow_untrusted=False):
    if '*.' not in pkg:
        pkg = _quote(pkg)
    target = _quote(target)
    cmd = 'installer -pkg {0} -target {1}'.format(pkg, target)
    if store:
        cmd += ' -store'
    if allow_untrusted:
        cmd += ' -allowUntrusted'
    python_shell = False
    if '*.' in cmd:
        python_shell = True
    return __salt__['cmd.run_all'](cmd, python_shell=python_shell)
Install a pkg file.

Args:
    pkg (str): The package to install
    target (str): The target in which to install the package to
    store (bool): Should the package be installed as if it was from the store?
    allow_untrusted (bool): Allow the installation of untrusted packages?

Returns:
    dict: A dictionary containing the results of the installation

CLI Example:

.. code-block:: bash

    salt '*' macpackage.install test.pkg
def attach_template(self, _template, _key, **unbound_var_values):
    if _key in unbound_var_values:
        raise ValueError('%s specified twice.' % _key)
    unbound_var_values[_key] = self
    return _template.as_layer().construct(**unbound_var_values)
Attaches the template to this such that _key=this layer.

Note: names were chosen to avoid conflicts with any likely unbound_var keys.

Args:
    _template: The template to construct.
    _key: The key that this layer should replace.
    **unbound_var_values: The values for the unbound_vars.

Returns:
    A new layer with operation applied.

Raises:
    ValueError: If _key is specified twice or there is a problem computing the template.
def ctypes2buffer(cptr, length):
    if not isinstance(cptr, ctypes.POINTER(ctypes.c_char)):
        raise RuntimeError('expected char pointer')
    res = bytearray(length)
    rptr = (ctypes.c_char * length).from_buffer(res)
    if not ctypes.memmove(rptr, cptr, length):
        raise RuntimeError('memmove failed')
    return res
Convert ctypes pointer to buffer type.
def get_location(conn, vm_):
    locations = conn.list_locations()
    loc = config.get_cloud_config_value('location', vm_, __opts__, default=2)
    for location in locations:
        if six.text_type(loc) in (six.text_type(location.id),
                                  six.text_type(location.name)):
            return location
Return the node location to use
def logout(request):
    request.response.headers.extend(forget(request))
    return {'redirect': request.POST.get('came_from', '/')}
View to forget the user
def parse_institution_address(address, city, state_province, country,
                              postal_code, country_code):
    address_list = force_list(address)
    state_province = match_us_state(state_province) or state_province
    postal_code = force_list(postal_code)
    country = force_list(country)
    country_code = match_country_code(country_code)
    if isinstance(postal_code, (tuple, list)):
        postal_code = ', '.join(postal_code)
    if isinstance(country, (tuple, list)):
        country = ', '.join(set(country))
    if not country_code and country:
        country_code = match_country_name_to_its_code(country)
    if (not country_code and state_province and
            state_province in us_state_to_iso_code.values()):
        country_code = 'US'
    return {
        'cities': force_list(city),
        'country_code': country_code,
        'postal_address': address_list,
        'postal_code': postal_code,
        'state': state_province,
    }
Parse an institution address.
def _on_mode_change(self, mode):
    if isinstance(mode, (tuple, list)):
        mode = mode[0]
    if mode is None:
        _LOGGER.warning("Mode change event with no mode.")
        return
    if not mode or mode.lower() not in CONST.ALL_MODES:
        _LOGGER.warning("Mode change event with unknown mode: %s", mode)
        return
    _LOGGER.debug("Alarm mode change event to: %s", mode)
    alarm_device = self._abode.get_alarm(refresh=True)
    alarm_device._json_state['mode']['area_1'] = mode
    for callback in self._device_callbacks.get(alarm_device.device_id, ()):
        _execute_callback(callback, alarm_device)
Mode change broadcast from Abode SocketIO server.
def remote_access(self, service=None, use_xarray=None):
    if service is None:
        service = 'CdmRemote' if 'CdmRemote' in self.access_urls else 'OPENDAP'
    if service not in (CaseInsensitiveStr('CdmRemote'),
                       CaseInsensitiveStr('OPENDAP')):
        raise ValueError(service + ' is not a valid service for remote_access')
    return self.access_with_service(service, use_xarray)
Access the remote dataset.

Open the remote dataset and get a netCDF4-compatible `Dataset` object providing index-based subsetting capabilities.

Parameters
----------
service : str, optional
    The name of the service to use for access to the dataset, either 'CdmRemote' or 'OPENDAP'. Defaults to 'CdmRemote'.

Returns
-------
Dataset
    Object for netCDF4-like access to the dataset
def request_name(self, name):
    while name in self._blacklist:
        name += "_"
    self._blacklist.add(name)
    return name
Request a name; might return the name, or a similar one if the name is already used or reserved.
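The scheme guarantees uniqueness by appending underscores until the name no longer collides, then reserving the result. A minimal self-contained sketch (the surrounding class is not shown in the source, so a small hypothetical holder for `_blacklist` is used):

```python
class NameRegistry:
    # Hypothetical holder for the _blacklist set used by request_name.
    def __init__(self, reserved=()):
        self._blacklist = set(reserved)

    def request_name(self, name):
        # Append underscores until the name is free, then reserve it.
        while name in self._blacklist:
            name += "_"
        self._blacklist.add(name)
        return name

registry = NameRegistry(reserved={'if'})
# registry.request_name('x')  -> 'x'
# registry.request_name('x')  -> 'x_'   (second request gets a variant)
# registry.request_name('if') -> 'if_'  (reserved word is avoided)
```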
def _load_meta(self, meta):
    meta = yaml.load(meta, Loader=Loader)
    if 'version' in meta:
        meta['version'] = str(meta['version'])
    return meta
Load data from meta.yaml to a dictionary
def _get_model(self, lookup_keys, session):
    try:
        return self.queryset(session).filter_by(**lookup_keys).one()
    except NoResultFound:
        raise NotFoundException('No model of type {0} was found using '
                                'lookup_keys {1}'.format(self.model.__name__,
                                                         lookup_keys))
Gets the sqlalchemy Model instance associated with the lookup keys.

:param dict lookup_keys: A dictionary of the keys and their associated values.
:param Session session: The sqlalchemy session
:return: The sqlalchemy orm model instance.
def successors(self):
    if not self.children:
        return
    for part in self.children:
        yield part
        for subpart in part.successors():
            yield subpart
Yield Compounds below self in the hierarchy.

Yields
------
mb.Compound
    The next Particle below self in the hierarchy
def get_contact_method(self, id, **kwargs):
    endpoint = '{0}/{1}/contact_methods/{2}'.format(
        self.endpoint,
        self['id'],
        id,
    )
    result = self.request('GET', endpoint=endpoint, query_params=kwargs)
    return result['contact_method']
Get a contact method for this user.
def _infer_binary_operation(left, right, binary_opnode, context, flow_factory):
    context, reverse_context = _get_binop_contexts(context, left, right)
    left_type = helpers.object_type(left)
    right_type = helpers.object_type(right)
    methods = flow_factory(
        left, left_type, binary_opnode, right, right_type,
        context, reverse_context
    )
    for method in methods:
        try:
            results = list(method())
        except AttributeError:
            continue
        except exceptions.AttributeInferenceError:
            continue
        except exceptions.InferenceError:
            yield util.Uninferable
            return
        else:
            if any(result is util.Uninferable for result in results):
                yield util.Uninferable
                return
            if all(map(_is_not_implemented, results)):
                continue
            not_implemented = sum(
                1 for result in results if _is_not_implemented(result)
            )
            if not_implemented and not_implemented != len(results):
                yield util.Uninferable
                return
            yield from results
            return
    yield util.BadBinaryOperationMessage(left_type, binary_opnode.op,
                                         right_type)
Infer a binary operation between a left operand and a right operand.

This is used by both normal binary operations and augmented binary operations; the only difference is the flow factory used.
def show(context, log, results_file, verbose, item):
    history_log = context.obj['history_log']
    no_color = context.obj['no_color']
    if not results_file:
        try:
            with open(history_log, 'r') as f:
                lines = f.readlines()
            history = lines[len(lines) - item]
        except IndexError:
            echo_style(
                'History result at index %s does not exist.' % item,
                no_color,
                fg='red'
            )
            sys.exit(1)
        except Exception:
            echo_style(
                'Unable to retrieve results history, '
                'provide results file or re-run test.',
                no_color,
                fg='red'
            )
            sys.exit(1)
        log_file = get_log_file_from_item(history)
        if log:
            echo_log(log_file, no_color)
        else:
            echo_results_file(
                log_file.rsplit('.', 1)[0] + '.results',
                no_color,
                verbose
            )
    elif log:
        echo_log(results_file, no_color)
    else:
        echo_results_file(results_file, no_color, verbose)
Print test results info from provided results json file. If no results file is supplied echo results from most recent test in history if it exists. If verbose option selected, echo all test cases. If log option selected echo test log.
def _process_infohash_list(infohash_list):
    if isinstance(infohash_list, list):
        data = {'hashes': '|'.join([h.lower() for h in infohash_list])}
    else:
        data = {'hashes': infohash_list.lower()}
    return data
Method to convert the infohash_list to qBittorrent API friendly values.

:param infohash_list: List of infohashes (or a single infohash string).
def around(A, decimals=0):
    if isinstance(A, Poly):
        B = A.A.copy()
        for key in A.keys:
            B[key] = around(B[key], decimals)
        return Poly(B, A.dim, A.shape, A.dtype)
    return numpy.around(A, decimals)
Evenly round to the given number of decimals.

Args:
    A (Poly, numpy.ndarray): Input data.
    decimals (int): Number of decimal places to round to (default: 0). If
        decimals is negative, it specifies the number of positions to the
        left of the decimal point.

Returns:
    (Poly, numpy.ndarray): Same type as A.

Examples:
    >>> P = chaospy.prange(3)*2**-numpy.arange(0, 6, 2, float)
    >>> print(P)
    [1.0, 0.25q0, 0.0625q0^2]
    >>> print(chaospy.around(P))
    [1.0, 0.0, 0.0]
    >>> print(chaospy.around(P, 2))
    [1.0, 0.25q0, 0.06q0^2]
def delete(self, cascade=False, delete_shares=False):
    if self.id:
        self.connection.post('delete_video', video_id=self.id,
                             cascade=cascade, delete_shares=delete_shares)
        self.id = None
Deletes the video.
def expand(fn, col, inputtype=pd.DataFrame):
    if inputtype == pd.DataFrame:
        if isinstance(col, int):
            def _wrapper(*args, **kwargs):
                return fn(args[0].iloc[:, col], *args[1:], **kwargs)
            return _wrapper
        def _wrapper(*args, **kwargs):
            return fn(args[0].loc[:, col], *args[1:], **kwargs)
        return _wrapper
    elif inputtype == np.ndarray:
        def _wrapper(*args, **kwargs):
            return fn(args[0][:, col], *args[1:], **kwargs)
        return _wrapper
    raise TypeError("invalid input type")
Wrap a function applying to a single column to make a function applying to a multi-dimensional dataframe or ndarray.

Parameters
----------
fn : function
    Function that applies to a series or vector.
col : str or int
    Index of column to which to apply `fn`.
inputtype : class or type
    Type of input to be expected by the wrapped function. Normally
    pd.DataFrame or np.ndarray. Defaults to pd.DataFrame.

Returns
-------
wrapped : function
    Function that takes an input of type `inputtype` and applies `fn` to
    the specified `col`.
def existing_config_files():
    global _ETC_PATHS
    global _MAIN_CONFIG_FILE
    global _CONFIG_VAR_INCLUDE
    global _CONFIG_FILTER
    config_files = []
    for possible in _ETC_PATHS:
        config_files = config_files + glob.glob(
            "%s%s" % (possible, _MAIN_CONFIG_FILE))
    if _CONFIG_VAR_INCLUDE != "":
        main_config = Configuration("general",
                                    {_CONFIG_VAR_INCLUDE: ""},
                                    _MAIN_CONFIG_FILE)
        if main_config.CONFIG_DIR != "":
            for possible in _ETC_PATHS:
                config_files = config_files + glob.glob(
                    "%s%s/%s" % (possible, main_config.CONFIG_DIR,
                                 _CONFIG_FILTER))
    return config_files
Method that calculates all the configuration files that are valid, according to 'set_paths' and the other methods in this module.
def get_attribute_id(self, attribute_key):
    attribute = self.attribute_key_map.get(attribute_key)
    has_reserved_prefix = attribute_key.startswith(RESERVED_ATTRIBUTE_PREFIX)
    if attribute:
        if has_reserved_prefix:
            self.logger.warning(('Attribute %s unexpectedly has reserved '
                                 'prefix %s; using attribute ID instead of '
                                 'reserved attribute name.'
                                 % (attribute_key,
                                    RESERVED_ATTRIBUTE_PREFIX)))
        return attribute.id
    if has_reserved_prefix:
        return attribute_key
    self.logger.error('Attribute "%s" is not in datafile.' % attribute_key)
    self.error_handler.handle_error(
        exceptions.InvalidAttributeException(
            enums.Errors.INVALID_ATTRIBUTE_ERROR))
    return None
Get attribute ID for the provided attribute key.

Args:
    attribute_key: Attribute key for which attribute is to be fetched.

Returns:
    Attribute ID corresponding to the provided attribute key.
def set_idlemax(self, idlemax):
    is_running = yield from self.is_running()
    if is_running:
        yield from self._hypervisor.send(
            'vm set_idle_max "{name}" 0 {idlemax}'.format(name=self._name,
                                                          idlemax=idlemax))
    log.info('Router "{name}" [{id}]: idlemax updated from {old_idlemax} '
             'to {new_idlemax}'.format(name=self._name,
                                       id=self._id,
                                       old_idlemax=self._idlemax,
                                       new_idlemax=idlemax))
    self._idlemax = idlemax
Sets CPU idle max value.

:param idlemax: idle max value (integer)
def _getaddrinfo(self, host: str, family: int = socket.AF_UNSPEC) \
        -> List[tuple]:
    event_loop = asyncio.get_event_loop()
    query = event_loop.getaddrinfo(host, 0, family=family,
                                   proto=socket.IPPROTO_TCP)
    if self._timeout:
        query = asyncio.wait_for(query, self._timeout)
    try:
        results = yield from query
    except socket.error as error:
        if error.errno in (socket.EAI_FAIL,
                           socket.EAI_NODATA,
                           socket.EAI_NONAME):
            raise DNSNotFound(
                'DNS resolution failed: {error}'.format(error=error)
            ) from error
        else:
            raise NetworkError(
                'DNS resolution error: {error}'.format(error=error)
            ) from error
    except asyncio.TimeoutError as error:
        raise NetworkError('DNS resolve timed out.') from error
    else:
        return results
Query DNS using system resolver. Coroutine.
def explicit_start_marker(self, source):
    if not self.use_cell_markers:
        return False
    if self.metadata:
        return True
    if self.cell_marker_start:
        start_code_re = re.compile('^' + self.comment + r'\s*' +
                                   self.cell_marker_start + r'\s*(.*)$')
        end_code_re = re.compile('^' + self.comment + r'\s*' +
                                 self.cell_marker_end + r'\s*$')
        if start_code_re.match(source[0]) or end_code_re.match(source[0]):
            return False
    if all([line.startswith(self.comment) for line in self.source]):
        return True
    if LightScriptCellReader(self.fmt).read(source)[1] < len(source):
        return True
    return False
Does the python representation of this cell require an explicit start-of-cell marker?
def lock(self, name, ttl=60):
    return locks.Lock(name, ttl=ttl, etcd_client=self)
Create a new lock.

:param name: name of the lock
:type name: string or bytes
:param ttl: length of time for the lock to live for in seconds. The lock
    will be released after this time elapses, unless refreshed
:type ttl: int
:returns: new lock
:rtype: :class:`.Lock`
def get_session_data(ctx, username, password, salt, server_public, private,
                     preset):
    session = SRPClientSession(
        SRPContext(username, password, prime=preset[0], generator=preset[1]),
        private=private)
    session.process(server_public, salt, base64=True)
    click.secho('Client session key: %s' % session.key_b64)
    click.secho('Client session key proof: %s' % session.key_proof_b64)
    click.secho('Client session key hash: %s' % session.key_proof_hash_b64)
Print out client session data.
def _parse_vrf_query(self, query_str):
    sp = smart_parsing.VrfSmartParser()
    query = sp.parse(query_str)
    return query
Parse a smart search query for VRFs.

This is a helper function to smart_search_vrf for easier unit testing of the parser.
def add_detector(self, detector_cls):
    if not issubclass(detector_cls, detectors.base.Detector):
        raise TypeError((
            '"%(detector_cls)s" is not a subclass of Detector'
        ) % locals())
    name = detector_cls.filth_cls.type
    if name in self._detectors:
        raise KeyError((
            'can not add Detector "%(name)s"---it already exists. '
            'Try removing it first.'
        ) % locals())
    self._detectors[name] = detector_cls()
Add a ``Detector`` to scrubadub
def sort(self):
    self.sorted_commits = []
    if not self.commits:
        return self.sorted_commits
    prev_commit = self.commits.pop(0)
    prev_line = prev_commit.line_number
    prev_uuid = prev_commit.uuid
    for commit in self.commits:
        if (commit.uuid != prev_uuid or
                commit.line_number != (prev_line + 1)):
            prev_commit.lines = self.line_range(prev_commit.line_number,
                                                prev_line)
            self.sorted_commits.append(prev_commit)
            prev_commit = commit
        prev_line = commit.line_number
        prev_uuid = commit.uuid
    prev_commit.lines = self.line_range(prev_commit.line_number, prev_line)
    self.sorted_commits.append(prev_commit)
    return self.sorted_commits
Consolidate adjacent lines when they share the same commit ID. Modifies the line number into a range when two or more consecutive lines have the same commit ID.
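The consolidation idea can be sketched with plain `(uuid, line_number)` tuples instead of the commit objects (which are not shown in the source): runs of consecutive lines with the same ID collapse into a single `(uuid, (first, last))` range:

```python
def consolidate(blame_entries):
    # Merge adjacent (uuid, line) entries into (uuid, (first_line,
    # last_line)) ranges. A new range starts whenever the commit ID
    # changes or the line numbers are not consecutive.
    if not blame_entries:
        return []
    merged = []
    start_uuid, start_line = blame_entries[0]
    prev_line = start_line
    for uuid, line in blame_entries[1:]:
        if uuid != start_uuid or line != prev_line + 1:
            merged.append((start_uuid, (start_line, prev_line)))
            start_uuid, start_line = uuid, line
        prev_line = line
    merged.append((start_uuid, (start_line, prev_line)))
    return merged

entries = [('a', 1), ('a', 2), ('b', 3), ('a', 7)]
# consolidate(entries) == [('a', (1, 2)), ('b', (3, 3)), ('a', (7, 7))]
```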
def clean(self):
    super().clean()
    if self.group:
        self.groupname = self.group.name
    elif not self.groupname:
        raise ValidationError({
            'groupname': _NOT_BLANK_MESSAGE,
            'group': _NOT_BLANK_MESSAGE
        })
Automatically sets groupname from the related group; raises ValidationError when neither group nor groupname is provided.
def old_values(self): def get_old_values_and_key(item): values = item.old_values values.update({self._key: item.past_dict[self._key]}) return values return [get_old_values_and_key(el) for el in self._get_recursive_difference('all') if el.diffs and el.past_dict]
Returns the old values from the diff
def send_key(self, key): _LOGGER.info('Queueing key %s', key) frame = self._get_key_event_frame(key) self._send_queue.put({'frame': frame})
Sends a key.
def get(self, sid): return FunctionVersionContext( self._version, service_sid=self._solution['service_sid'], function_sid=self._solution['function_sid'], sid=sid, )
Constructs a FunctionVersionContext :param sid: The sid :returns: twilio.rest.serverless.v1.service.function.function_version.FunctionVersionContext :rtype: twilio.rest.serverless.v1.service.function.function_version.FunctionVersionContext
def validate_sum(parameter_container, validation_message, **kwargs): parameters = parameter_container.get_parameters(False) values = [] for parameter in parameters: if parameter.selected_option_type() in [SINGLE_DYNAMIC, STATIC]: values.append(parameter.value) sum_threshold = kwargs.get('max', 1) if None in values: clean_value = [x for x in values if x is not None] values.remove(None) if sum(clean_value) > sum_threshold: return { 'valid': False, 'message': validation_message } else: if sum(values) > sum_threshold: return { 'valid': False, 'message': validation_message } return { 'valid': True, 'message': '' }
Validate the sum of parameter values. :param parameter_container: The container that uses this validator. :type parameter_container: ParameterContainer :param validation_message: The message if there is a validation error. :type validation_message: str :param kwargs: Keyword arguments. :type kwargs: dict :returns: Dictionary of valid and message. :rtype: dict Note: The code is not the best I wrote, since there are two alternatives. 1. If there is no None, the sum must be less than or equal to the threshold. 2. If there is a None, the sum of the remaining values must be less than or equal to the threshold.
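The core of the validation can be sketched without the ParameterContainer machinery. This hypothetical helper keeps only the sum check, skipping None entries (unanswered parameters) the way validate_sum does:

```python
def check_sum(values, max_total=1):
    """Validate that the sum of known values stays within max_total.

    None entries are skipped, as in validate_sum above.
    Simplified standalone sketch, not the library function.
    """
    known = [v for v in values if v is not None]
    if sum(known) > max_total:
        return {'valid': False, 'message': 'sum exceeds %s' % max_total}
    return {'valid': True, 'message': ''}
```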
def parse_osm_node(response): poi = None try: point = Point(response['lon'], response['lat']) poi = { 'osmid': response['id'], 'geometry': point } if 'tags' in response: for tag in response['tags']: poi[tag] = response['tags'][tag] except Exception: log('Point has invalid geometry: {}'.format(response['id'])) return poi
Parse points from OSM nodes. Parameters ---------- response : dict A node element from the OSM response. Returns ------- dict POI attributes for the node, including its osmid, geometry and any tags, or None if the geometry is invalid.
def start_processing(self, message, steps=0, warning=True): if self.__is_processing: warning and LOGGER.warning( "!> {0} | Engine is already processing, 'start_processing' request has been ignored!".format( self.__class__.__name__)) return False LOGGER.debug("> Starting processing operation!") self.__is_processing = True self.Application_Progress_Status_processing.Processing_progressBar.setRange(0, steps) self.Application_Progress_Status_processing.Processing_progressBar.setValue(0) self.Application_Progress_Status_processing.show() self.set_processing_message(message) return True
Registers the start of a processing operation. :param message: Operation description. :type message: unicode :param steps: Operation steps. :type steps: int :param warning: Emit warning message. :type warning: bool :return: Method success. :rtype: bool
def fire_event(self, event_name, wait=False, *args, **kwargs): tasks = [] event_method_name = "on_" + event_name for plugin in self._plugins: event_method = getattr(plugin.object, event_method_name, None) if event_method: try: task = self._schedule_coro(event_method(*args, **kwargs)) tasks.append(task) def clean_fired_events(future, task=task): try: self._fired_events.remove(task) except (KeyError, ValueError): pass task.add_done_callback(clean_fired_events) except AssertionError: self.logger.error("Method '%s' on plugin '%s' is not a coroutine" % (event_method_name, plugin.name)) self._fired_events.extend(tasks) if wait: if tasks: yield from asyncio.wait(tasks, loop=self._loop)
Fire an event to plugins. PluginManager schedules @asyncio.coroutine calls for each plugin on the method named "on_" + event_name. For example, on_connect will be called on event 'connect'. Method calls are scheduled in the async loop. The wait parameter must be set to True to wait until all methods are completed. :param event_name: :param args: :param kwargs: :param wait: indicates if fire_event should wait for plugin calls completion (True), or not :return:
def regex_lexer(regex_pat): if isinstance(regex_pat, str): regex_pat = re.compile(regex_pat) def f(inp_str, pos): m = regex_pat.match(inp_str, pos) return m.group() if m else None elif hasattr(regex_pat, 'match'): def f(inp_str, pos): m = regex_pat.match(inp_str, pos) return m.group() if m else None else: regex_pats = tuple(re.compile(e) for e in regex_pat) def f(inp_str, pos): for each_pat in regex_pats: m = each_pat.match(inp_str, pos) if m: return m.group() return f
Build a lexer from a regex pattern. Accepts a pattern string, a compiled pattern, or a sequence of patterns, and returns a function mapping (input string, position) to the matched text, or None when nothing matches.
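The string-pattern branch of regex_lexer can be demonstrated in isolation. A minimal sketch (the factory name and example pattern are illustrative, not from the original library):

```python
import re

def make_lexer(pattern):
    """Build a (string, pos) -> matched-text-or-None function from a
    regex, following the shape of regex_lexer above."""
    compiled = re.compile(pattern) if isinstance(pattern, str) else pattern
    def lex(inp_str, pos):
        # Pattern.match(s, pos) anchors the match at pos, not at 0.
        m = compiled.match(inp_str, pos)
        return m.group() if m else None
    return lex

# Example lexer recognizing integer literals.
number = make_lexer(r'\d+')
```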
async def status_by_zip(self, zip_code: str) -> dict: try: location = next(( d for d in await self.user_reports() if d['zip'] == zip_code)) except StopIteration: return {} return await self.status_by_coordinates( float(location['latitude']), float(location['longitude']))
Get symptom data for the provided ZIP code.
def delete(build_folder): if _meta_.del_build in ["on", "ON"] and os.path.exists(build_folder): shutil.rmtree(build_folder)
Delete build directory and all its contents.
def retract(args): if not args.msg: return "Syntax: !vote retract <pollnum>" if not args.msg.isdigit(): return "Not A Valid Positive Integer." response = get_response(args.session, args.msg, args.nick) if response is None: return "You haven't voted on that poll yet!" args.session.delete(response) return "Vote retracted"
Deletes a vote for a poll.
def lnprior(pars): logprob = ( naima.uniform_prior(pars[0], 0.0, np.inf) + naima.uniform_prior(pars[1], -1, 5) + naima.uniform_prior(pars[3], 0, np.inf) ) return logprob
Return the log-probability of parameter values according to prior knowledge. Parameter limits should be applied here through uniform prior distributions.
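A uniform log-prior like the one lnprior composes can be sketched without naima. This is an assumption about what naima.uniform_prior computes (the standard unnormalized log of a uniform prior), written as a plain function:

```python
def uniform_log_prior(value, lo, hi):
    """Return 0.0 inside [lo, hi] and -inf outside: the unnormalized
    log of a uniform prior. Sketch of what naima.uniform_prior is
    assumed to compute, not the library's implementation."""
    return 0.0 if lo <= value <= hi else float('-inf')
```

Summing several such terms, as lnprior does, gives -inf as soon as any parameter leaves its allowed range, which rejects the sample.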
def force_unicode(s, encoding='utf-8', strings_only=False, errors='strict'): if isinstance(s, unicode): return s if strings_only and is_protected_type(s): return s try: if not isinstance(s, basestring,): if hasattr(s, '__unicode__'): s = unicode(s) else: try: s = unicode(str(s), encoding, errors) except UnicodeEncodeError: if not isinstance(s, Exception): raise s = u' '.join([force_unicode(arg, encoding, strings_only, errors) for arg in s]) elif not isinstance(s, unicode): s = s.decode(encoding, errors) except UnicodeDecodeError, e: if not isinstance(s, Exception): raise DjangoUnicodeDecodeError(s, *e.args) else: s = u' '.join([force_unicode(arg, encoding, strings_only, errors) for arg in s]) return s
Similar to smart_unicode, except that lazy instances are resolved to strings, rather than kept as lazy objects. If strings_only is True, don't convert (some) non-string-like objects.
def describe(self, element): if (element == 'tasks'): return self.tasks_df.describe() elif (element == 'task_runs'): return self.task_runs_df.describe() else: return "ERROR: %s not found" % element
Return the Pandas describe() summary of tasks or task_runs.
def dependency_of_fetches(fetches, op): try: from tensorflow.python.client.session import _FetchHandler as FetchHandler handler = FetchHandler(op.graph, fetches, {}) targets = tuple(handler.fetches() + handler.targets()) except ImportError: if isinstance(fetches, list): targets = tuple(fetches) elif isinstance(fetches, dict): raise ValueError("Don't know how to parse dictionary to fetch list! " "This is a bug of tensorpack.") else: targets = (fetches, ) return dependency_of_targets(targets, op)
Check that op is in the subgraph induced by the dependencies of fetches. fetches may have more general structure. Args: fetches: An argument to `sess.run`. Nested structure will affect performance. op (tf.Operation or tf.Tensor): Returns: bool: True if any of `fetches` depend on `op`.
def timetopythonvalue(time_val): "Convert a time or time range from ArcGIS REST server format to Python" if isinstance(time_val, sequence): return map(timetopythonvalue, time_val) elif isinstance(time_val, numeric): return datetime.datetime(*(time.gmtime(time_val))[:6]) elif isinstance(time_val, str): values = [] try: values = map(long, time_val.split(",")) except ValueError: pass if values: return map(timetopythonvalue, values) raise ValueError(repr(time_val))
Convert a time or time range from ArcGIS REST server format to Python
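The numeric branch above (epoch seconds to a naive UTC datetime) can be shown on its own. A minimal standalone sketch:

```python
import datetime
import time

def epoch_to_datetime(seconds):
    """Convert epoch seconds to a naive UTC datetime, as the numeric
    branch of timetopythonvalue does: take the first six fields of
    time.gmtime (year..second) and build a datetime from them."""
    return datetime.datetime(*time.gmtime(seconds)[:6])
```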
def get_unique_name(self, prefix): ident = sum(t.startswith(prefix) for t, _ in self.layers.items()) + 1 return '%s_%d' % (prefix, ident)
Returns an index-suffixed unique name for the given prefix. This is used for auto-generating layer names based on the type-prefix.
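The counting scheme is easy to reproduce outside the class. A hypothetical free-function version operating on a plain list of existing names:

```python
def unique_name(existing, prefix):
    """Return prefix suffixed with 1 + the count of existing names
    starting with prefix, mirroring get_unique_name above."""
    ident = sum(name.startswith(prefix) for name in existing) + 1
    return '%s_%d' % (prefix, ident)
```

Note the prefix test is a plain startswith, so `conv` also counts names like `conv2d_1`; the original method shares this behavior.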
def load_data(self, data, **kwargs): self.__set_map__(**kwargs) start = datetime.datetime.now() log.debug("Dataload stated") if isinstance(data, list): data = self._convert_results(data, **kwargs) class_types = self.__group_data__(data, **kwargs) self._generate_classes(class_types, self.non_defined, **kwargs) for triple in data: self.add_triple(sub=triple, **kwargs) log.debug("Dataload completed in '%s'", (datetime.datetime.now() - start))
Bulk adds rdf data to the class args: data: the data to be loaded kwargs: strip_orphans: True or False - remove triples that have an orphan blanknode as the object obj_method: "list", or None: if "list", the object of a triple will be stored as a list.
def reset_globals(version=None, loop=None): global containers global instruments global labware global robot global reset global modules global hardware robot, reset, instruments, containers, labware, modules, hardware\ = build_globals(version, loop)
Reinitialize the global singletons with a given API version. :param version: 1 or 2. If `None`, pulled from the `useProtocolApiV2` advanced setting.
def isRectangular(self): upper = (self.ur - self.ul).unit if not bool(upper): return False right = (self.lr - self.ur).unit if not bool(right): return False left = (self.ll - self.ul).unit if not bool(left): return False lower = (self.lr - self.ll).unit if not bool(lower): return False eps = 1e-5 return abs(sum(map(lambda x,y: x*y, upper, right))) <= eps and \ abs(sum(map(lambda x,y: x*y, upper, left))) <= eps and \ abs(sum(map(lambda x,y: x*y, left, lower))) <= eps
Check if quad is rectangular.
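The rectangularity test reduces to checking that adjacent unit edges are perpendicular, i.e. their dot product is (near) zero. A standalone sketch of that single check:

```python
def is_perpendicular(u, v, eps=1e-5):
    """Check two same-length vectors are perpendicular via their dot
    product, the test isRectangular applies to the quad's edges."""
    return abs(sum(a * b for a, b in zip(u, v))) <= eps
```

isRectangular applies this to three pairs of unit edge vectors; three right angles in a (planar) quad force the fourth.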
def get_file_size(path): assert isinstance(path, (str, _oldstr)) if not os.path.isfile(path): raise IOError('File "%s" does not exist.' % path) return os.path.getsize(path)
Get the size of a file in bytes. Parameters ---------- path: str The path of the file. Returns ------- int The size of the file in bytes. Raises ------ IOError If the file does not exist. OSError If a file system error occurs.
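The same pattern without the `_oldstr` Python 2 shim (a stand-alone sketch of get_file_size above):

```python
import os

def file_size(path):
    """Return the size of a file in bytes, raising IOError when the
    path is not a regular file, as get_file_size above does."""
    if not os.path.isfile(path):
        raise IOError('File "%s" does not exist.' % path)
    return os.path.getsize(path)
```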
def _create_xml_node(cls): try: xml_map = cls._xml_map except AttributeError: raise ValueError("This model has no XML definition") return _create_xml_node( xml_map.get('name', cls.__name__), xml_map.get("prefix", None), xml_map.get("ns", None) )
Create XML node from "_xml_map".
def clear_recent_files(self): self.manager.clear() self.update_actions() self.clear_requested.emit()
Clear recent files and menu.
def reconfigArg(ArgConfig): _type = ArgConfig.get('type') if _type: if hasattr(_type, '__ec_config__'): _type.__ec_config__(ArgConfig) if not 'type_str' in ArgConfig: ArgConfig['type_str'] = (_type.__name__ if isinstance(_type, type) else 'unspecified type') if _type else 'str' if not 'desc' in ArgConfig: ArgConfig['desc'] = ArgConfig['name'] return ArgConfig
Reconfigures an argument based on its configuration.
def create(self, name): vg = self.attach(-1, 1) vg._name = name return vg
Create a new vgroup, and assign it a name. Args:: name name to assign to the new vgroup Returns:: VG instance for the new vgroup A create(name) call is equivalent to an attach(-1, 1) call, followed by a call to the setname(name) method of the instance. C library equivalent : no equivalent
def add_all_database_reactions(model, compartments): added = set() for rxnid in model.database.reactions: reaction = model.database.get_reaction(rxnid) if all(compound.compartment in compartments for compound, _ in reaction.compounds): if not model.has_reaction(rxnid): added.add(rxnid) model.add_reaction(rxnid) return added
Add all reactions from database that occur in given compartments. Args: model: :class:`psamm.metabolicmodel.MetabolicModel`. compartments: Iterable of compartment IDs in which all compounds of a reaction must occur.
async def on_isupport_maxlist(self, value): self._list_limits = {} for entry in value.split(','): modes, limit = entry.split(':') self._list_limits[frozenset(modes)] = int(limit) for mode in modes: self._list_limit_groups[mode] = frozenset(modes)
Limits on channel modes involving lists.
def _optional_envs(): envs = { key: os.environ.get(key) for key in OPTIONAL_ENV_VARS if key in os.environ } if 'JOB_NAME' in envs and 'BUILD_NUMBER' not in envs: raise BrowserConfigError("Missing BUILD_NUMBER environment var") if 'BUILD_NUMBER' in envs and 'JOB_NAME' not in envs: raise BrowserConfigError("Missing JOB_NAME environment var") return envs
Parse environment variables for optional values, raising a `BrowserConfigError` if they are insufficiently specified. Returns a `dict` of environment variables.
def get_pidpath(rundir, process_type, name=None): assert rundir, "rundir is not configured" path = os.path.join(rundir, '%s.pid' % process_type) if name: path = os.path.join(rundir, '%s.%s.pid' % (process_type, name)) log.log('common', 'get_pidpath for type %s, name %r: %s' % (process_type, name, path)) return path
Get the full path to the pid file for the given process type and name.
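The path-building part of get_pidpath, minus the rundir assertion and logging, as a free-standing sketch:

```python
import os

def pid_path(rundir, process_type, name=None):
    """Build '<rundir>/<type>[.<name>].pid', as get_pidpath above."""
    if name:
        return os.path.join(rundir, '%s.%s.pid' % (process_type, name))
    return os.path.join(rundir, '%s.pid' % process_type)
```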
def set_mouse_button_callback(window, cbfun): window_addr = ctypes.cast(ctypes.pointer(window), ctypes.POINTER(ctypes.c_long)).contents.value if window_addr in _mouse_button_callback_repository: previous_callback = _mouse_button_callback_repository[window_addr] else: previous_callback = None if cbfun is None: cbfun = 0 c_cbfun = _GLFWmousebuttonfun(cbfun) _mouse_button_callback_repository[window_addr] = (cbfun, c_cbfun) cbfun = c_cbfun _glfw.glfwSetMouseButtonCallback(window, cbfun) if previous_callback is not None and previous_callback[0] != 0: return previous_callback[0]
Sets the mouse button callback. Wrapper for: GLFWmousebuttonfun glfwSetMouseButtonCallback(GLFWwindow* window, GLFWmousebuttonfun cbfun);
def multiple_sources(stmt): sources = list(set([e.source_api for e in stmt.evidence])) if len(sources) > 1: return True return False
Return True if statement is supported by multiple sources. Note: this is currently not used and replaced by BeliefEngine score cutoff
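Stripped of the statement/evidence objects, the check is a one-line set-cardinality test. A hypothetical version over a plain list of source API names:

```python
def has_multiple_sources(source_apis):
    """True when more than one distinct source API appears, mirroring
    multiple_sources above but on a plain list of names."""
    return len(set(source_apis)) > 1
```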
def section_names(self, ordkey="wall_time"): for timer in self.timers(): return [s.name for s in timer.order_sections(ordkey)] return []
Return the names of sections ordered by ordkey. For the time being, the values are taken from the first timer.
def action_download(self, courseid, taskid, path): wanted_path = self.verify_path(courseid, taskid, path) if wanted_path is None: raise web.notfound() task_fs = self.task_factory.get_task_fs(courseid, taskid) (method, mimetype_or_none, file_or_url) = task_fs.distribute(wanted_path) if method == "local": web.header('Content-Type', mimetype_or_none) return file_or_url elif method == "url": raise web.redirect(file_or_url) else: raise web.notfound()
Download a file or a directory
def _bd_(self): if getattr(self, '_bd_cache', None) is None: self._bd_cache = BetterDictLookUp(self) return self._bd_cache
Property that allows dot lookups of otherwise hidden attributes.
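The compute-once caching pattern _bd_ intends can be sketched with a plain dict standing in for BetterDictLookUp (the class and attribute names here are illustrative, not from the original code; note the original checked `'__bd__'` but assigned the name-mangled `self.__bd`, so its cache check could never succeed):

```python
class LazyLookup:
    """Compute a helper object on first access and reuse it afterwards,
    using one consistent attribute name for the cache."""
    def __init__(self, data):
        self._data = data
        self._cache = None

    @property
    def bd(self):
        # Build the lookup helper only once; dict() stands in for
        # BetterDictLookUp here.
        if self._cache is None:
            self._cache = dict(self._data)
        return self._cache
```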
def write(path, data, binary=False): mode = "w" if binary: mode = "wb" with open(path, mode) as f: f.write(data)
Writes the given data to a file located at the given path.
def status(self): orig_dict = self._get(self._service_url('status')) orig_dict['implementation_version'] = orig_dict.pop('Implementation-Version') orig_dict['built_from_git_sha1'] = orig_dict.pop('Built-From-Git-SHA1') return Status(orig_dict)
Get the status of Alerting Service :return: Status object
def _collect_zipimporter_cache_entries(normalized_path, cache): result = [] prefix_len = len(normalized_path) for p in cache: np = normalize_path(p) if (np.startswith(normalized_path) and np[prefix_len:prefix_len + 1] in (os.sep, '')): result.append(p) return result
Return zipimporter cache entry keys related to a given normalized path. Alternative path spellings (e.g. those using different character case or those using alternative path separators) related to the same path are included. Any sub-path entries are included as well, i.e. those corresponding to zip archives embedded in other zip archives.
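The prefix-matching rule (exact path, or sub-path separated by os.sep) can be shown without the zipimporter cache itself. A simplified sketch that omits the per-key normalize_path call of the original:

```python
import os

def entries_under(normalized_path, cache_keys):
    """Return cache keys equal to or nested under normalized_path,
    following _collect_zipimporter_cache_entries above (case and
    separator normalization of keys omitted for brevity)."""
    n = len(normalized_path)
    # The character right after the prefix must be a separator (sub-path)
    # or empty (exact match), so '/a/bc' does not match prefix '/a/b'.
    return [k for k in cache_keys
            if k.startswith(normalized_path) and k[n:n + 1] in (os.sep, '')]
```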
def promote_s3app(self): utils.banner("Promoting S3 App") primary_region = self.configs['pipeline']['primary_region'] s3obj = s3.S3Deployment( app=self.app, env=self.env, region=self.region, prop_path=self.json_path, artifact_path=self.artifact_path, artifact_version=self.artifact_version, primary_region=primary_region) s3obj.promote_artifacts(promote_stage=self.promote_stage)
promotes S3 deployment to LATEST
def form_valid(self, form, formsets): new_object = False if not self.object: new_object = True instance = getattr(form, 'instance', None) auto_tags, changed_tags, old_tags = tag_handler.get_tags_from_data( form.data, self.get_tags(instance)) tag_handler.set_auto_tags_for_form(form, auto_tags) with transaction.commit_on_success(): self.object = self.save_form(form) self.save_formsets(form, formsets, auto_tags=auto_tags) url = self.get_object_url() self.log_action(self.object, CMSLog.SAVE, url=url) msg = self.write_message() if not new_object and changed_tags and old_tags: tag_handler.update_changed_tags(changed_tags, old_tags) return self.success_response(msg)
Response for valid form. In one transaction this will save the current form and formsets, log the action and message the user. Returns the results of calling the `success_response` method.