def handle_status(self):
    status = self.get_status()
    if status:
        self.zones['main'].update_status(status)
Handle status from device
def absorption_coefficient( dielectric ): energies_in_eV = np.array( dielectric[0] ) real_dielectric = parse_dielectric_data( dielectric[1] ) imag_dielectric = parse_dielectric_data( dielectric[2] ) epsilon_1 = np.mean( real_dielectric, axis=1 ) epsilon_2 = np.mean( imag_dielectric, axis=1 ) ret...
Calculate the optical absorption coefficient from an input set of pymatgen vasprun dielectric constant data. Args: dielectric (list): A list containing the dielectric response function in the pymatgen vasprun format. | element 0: list of energies ...
def menu(self, prompt, choices): menu = [prompt] + [ "{0}. {1}".format(*choice) for choice in enumerate(choices, start=1) ] command = 'inputlist({})'.format(repr(menu)) choice = int(self._vim.eval(command)) if not 0 < choice < len(menu): return ret...
Presents a selection menu and returns the user's choice. Args: prompt (str): Text to ask the user what to select. choices (Sequence[str]): Values for the user to select from. Returns: The value selected by the user, or ``None``. Todo: Nice oppor...
def format_call(self, api_version, api_call): api_call = api_call.lstrip('/') api_call = api_call.rstrip('?') logger.debug('api_call post strip =\n%s' % api_call) if (api_version == 2 and api_call[-1] != '/'): logger.debug('Adding "/" to api_call.') api_call += '/...
Return properly formatted QualysGuard API call according to api_version etiquette.
def cache_security_group_exists(name, region=None, key=None, keyid=None, profile=None):
    return bool(describe_cache_security_groups(name=name, region=region, key=key,
                                               keyid=keyid, profile=profile))
Check to see if an ElastiCache security group exists. Example: .. code-block:: bash salt myminion boto3_elasticache.cache_security_group_exists mysecuritygroup
def set_stepdown_window(self, start, end, enabled=True, scheduled=True, weekly=True): if not start < end: raise TypeError('Parameter "start" must occur earlier in time than "end".') week_delta = datetime.timedelta(days=7) if not ((end - start) <= week_delta): raise TypeEr...
Set the stepdown window for this instance. Date times are assumed to be UTC, so use UTC date times. :param datetime.datetime start: The datetime which the stepdown window is to open. :param datetime.datetime end: The datetime which the stepdown window is to close. :param bool enabled: ...
def can_proceed(self):
    now = datetime.datetime.now()
    delta = datetime.timedelta(days=self.update_interval)
    return now >= self.last_update + delta
Checks whether app can proceed :return: True iff the time since the last update is at least the app update interval
def nb_per_chunk(item_size, item_dim, chunk_size):
    size = chunk_size * 10.**6  # chunk size in bytes
    ratio = int(round(size / (item_size * item_dim)))
    return max(10, ratio)
Return the number of items that can be stored in one chunk. :param int item_size: Size of an item's scalar component in Bytes (e.g. for np.float64 this is 8) :param int item_dim: Items dimension (length of the second axis) :param float chunk_size: The size of a chunk given in MBytes.
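A quick usage sketch of the chunk-sizing arithmetic above (self-contained reimplementation for illustration; the floor of 10 items per chunk is taken from the code):

```python
def nb_per_chunk(item_size, item_dim, chunk_size):
    """Number of items fitting in one chunk of `chunk_size` MBytes."""
    size = chunk_size * 10.**6          # chunk size in bytes
    ratio = int(round(size / (item_size * item_dim)))
    return max(10, ratio)               # never fewer than 10 items per chunk

# np.float64 scalars are 8 bytes; items of dimension 40, 1 MB chunks:
print(nb_per_chunk(8, 40, 1))  # 3125
```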
def get_mapping(self, index, doc_type=None):
    mapping = self.es.indices.get_mapping(index=index, doc_type=doc_type)
    return next(iter(mapping.values()))
Get mapping for index. :param index: index name
def session_end_pb(status, end_time_secs=None): if end_time_secs is None: end_time_secs = time.time() session_end_info = plugin_data_pb2.SessionEndInfo(status=status, end_time_secs=end_time_secs) return _summary(metadata.SESSION_END_INFO_TAG, ...
Constructs a SessionEndInfo protobuffer. Creates a summary that contains status information for a completed training session. Should be exported after the training session is completed. One such summary per training session should be created. Each should have a different run. Args: status: A tensorboard...
def _mesh_to_material(mesh, metallic=0.0, rough=0.0): try: color = mesh.visual.main_color except BaseException: color = np.array([100, 100, 100, 255], dtype=np.uint8) color = color.astype(float32) / np.iinfo(color.dtype).max material = { "pbrMetallicRoughness": { "bas...
Create a simple GLTF material for a mesh using the most commonly occurring color in that mesh. Parameters ------------ mesh: trimesh.Trimesh Mesh to create a material from Returns ------------ material: dict In GLTF material format
def CheckInputArgs(*interfaces): l = len(interfaces) def wrapper(func): def check_args(self, *args, **kw): for i in range(len(args)): if (l > i and interfaces[i].providedBy(args[i])) or interfaces[-1].providedBy(args[i]): continue if l > i:...
Must provide at least one interface, the last one may be repeated.
def daily(self):
    if self._daily is None:
        self._daily = DailyList(self._version, account_sid=self._solution['account_sid'])
    return self._daily
Access the daily :returns: twilio.rest.api.v2010.account.usage.record.daily.DailyList :rtype: twilio.rest.api.v2010.account.usage.record.daily.DailyList
def get_data_with_timestamps(self):
    result = []
    for t, d in zip(self.timestamps, self.data_points):
        # append a (timestamp, value) tuple; the original passed two
        # arguments to list.append, which raises TypeError
        result.append((t, round(d, self.lr)))
    return result
Returns the data points with timestamps. Returns: A list of tuples in the format of (timestamp, data)
def validate_data_columns(self, data_columns, min_itemsize): if not len(self.non_index_axes): return [] axis, axis_labels = self.non_index_axes[0] info = self.info.get(axis, dict()) if info.get('type') == 'MultiIndex' and data_columns: raise ValueError("cannot use...
Take the input data_columns and min_itemsize and create a data columns spec
def get_channel_property(self, channel_id, property_name): if isinstance(channel_id, (int, np.integer)): if channel_id in self.get_channel_ids(): if channel_id not in self._channel_properties: self._channel_properties[channel_id] = {} if isinstance...
This function returns the data stored under the property name from the given channel. Parameters ---------- channel_id: int The channel id for which the property will be returned property_name: str A property stored by the RecordingExtractor (location, et...
def x_y_by_col_lbl(df, y_col_lbl):
    x_cols = [col for col in df.columns if col != y_col_lbl]
    return df[x_cols], df[y_col_lbl]
Returns an X dataframe and a y series by the given column name. Parameters ---------- df : pandas.DataFrame The dataframe to split. y_col_lbl : object The label of the y column. Returns ------- X, y : pandas.DataFrame, pandas.Series A dataframe made up of all column...
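A usage sketch of the X/y split above (the function is reproduced verbatim; the toy frame and column names are illustrative):

```python
import pandas as pd

def x_y_by_col_lbl(df, y_col_lbl):
    # All columns except the label column form X; the label column is y.
    x_cols = [col for col in df.columns if col != y_col_lbl]
    return df[x_cols], df[y_col_lbl]

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'target': [0, 1]})
X, y = x_y_by_col_lbl(df, 'target')
print(list(X.columns))  # ['a', 'b']
```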
def remove_unsafe_chars(text):
    if isinstance(text, six.string_types):
        text = UNSAFE_RE.sub('', text)
    return text
Remove unsafe unicode characters from a piece of text.
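`UNSAFE_RE` is not shown in the source, so this standalone sketch assumes a pattern matching C0/C1 control characters (minus tab, newline, and CR); it also drops the `six` shim in favour of a plain `str` check:

```python
import re

# Assumed pattern: the real UNSAFE_RE is not visible in the source.
UNSAFE_RE = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\x9f]')

def remove_unsafe_chars(text):
    # Only strings are filtered; other types pass through unchanged.
    if isinstance(text, str):
        text = UNSAFE_RE.sub('', text)
    return text

print(remove_unsafe_chars('abc\x00def'))  # abcdef
```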
def vi_score(self, x, index):
    if index == 0:
        return self.vi_loc_score(x)
    elif index == 1:
        return self.vi_scale_score(x)
Wrapper function for selecting the appropriate score Parameters ---------- x : float A random variable index : int 0 or 1 depending on which latent variable Returns ---------- The gradient of the selected latent variable (location for 0, scale for 1) at x
def merge(self, other): if not getattr(other, '_catalog', None): return if self._catalog is None: self.plural = other.plural self._info = other._info.copy() self._catalog = other._catalog.copy() else: self._catalog.update(other._catalog...
Merge another translation into this catalog.
def add_weight(cls): @functools.wraps(cls.add_weight) def _add_weight(self, name=None, shape=None, dtype=None, initializer=None, regularizer=None, **kwargs): if isinstance(initializer, tf.keras.layers.Lay...
Decorator for Layers, overriding add_weight for trainable initializers.
def allowed_entries(self, capability): index = re.match(r'(.+)index$', capability) archive = re.match(r'(.+)\-archive$', capability) if (capability == 'capabilitylistindex'): return([]) elif (index): return([index.group(1)]) elif (archive): ret...
Return list of allowed entries for given capability document. Includes handling of capability = *index where the only acceptable entries are *.
def lex(args): if len(args) == 0 or args[0] == SHOW: return [(SHOW, None)] elif args[0] == LOG: return [(LOG, None)] elif args[0] == ECHO: return [(ECHO, None)] elif args[0] == SET and args[1] == RATE: return tokenizeSetRate(args[2:]) elif args[0] == SET and args[1] =...
Lex input and return a list of actions to perform.
def _download(self): min_x, max_x, min_y, max_y = self.gssha_grid.bounds(as_geographic=True) if self.era_download_data == 'era5': log.info("Downloading ERA5 data ...") download_era5_for_gssha(self.lsm_input_folder_path, self.download_start_date...
download ERA5 data for GSSHA domain
def _save_vocab_file(vocab_file, subtoken_list):
    with tf.gfile.Open(vocab_file, mode="w") as f:
        for subtoken in subtoken_list:
            f.write("'%s'\n" % _unicode_to_native(subtoken))
Save subtokens to file.
def _is_env_per_bucket():
    buckets = _get_buckets()
    if isinstance(buckets, dict):
        return True
    elif isinstance(buckets, list):
        return False
    else:
        raise ValueError('Incorrect s3.buckets type given in config')
Return the configuration mode, either buckets per environment or a list of buckets that have environment dirs in their root
def read_csv_from_file(filename): logger_csvs.info("enter read_csv_from_file") d = {} l = [] try: logger_csvs.info("open file: {}".format(filename)) with open(filename, 'r') as f: r = csv.reader(f, delimiter=',') for idx, col in enumerate(next(r)): ...
Opens the target CSV file and creates a dictionary with one list for each CSV column. :param str filename: :return list of lists: column values
def get_api_service(self, name=None): try: svc = self.services_by_name.get(name, None) if svc is None: raise ValueError(f"Couldn't find the API service configuration") return svc except: raise Exception(f"Failed to retrieve the API service ...
Returns the specific service config definition
def list_services(self, limit=None, marker=None):
    return self._services_manager.list(limit=limit, marker=marker)
List CDN services.
def _update_dict(self, to_dict, from_dict): for key, value in from_dict.items(): if key in to_dict and isinstance(to_dict[key], dict) and \ isinstance(from_dict[key], dict): self._update_dict(to_dict[key], from_dict[key]) else: to_dict[...
Recursively merges the fields for two dictionaries. Args: to_dict (dict): The dictionary onto which the merge is executed. from_dict (dict): The dictionary merged into to_dict
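A standalone sketch of the recursive merge described above (free function rather than a method, otherwise the same logic):

```python
def update_dict(to_dict, from_dict):
    # Recursively merge from_dict into to_dict in place: nested dicts
    # are merged key by key, everything else is overwritten.
    for key, value in from_dict.items():
        if key in to_dict and isinstance(to_dict[key], dict) \
                and isinstance(value, dict):
            update_dict(to_dict[key], value)
        else:
            to_dict[key] = value

a = {'db': {'host': 'localhost', 'port': 5432}, 'debug': False}
b = {'db': {'port': 5433}, 'debug': True}
update_dict(a, b)
print(a)  # {'db': {'host': 'localhost', 'port': 5433}, 'debug': True}
```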
def get(self): if (self.obj is None) or (time.time() >= self.expires): with self.lock: self.expires, self.obj = self.factory() if isinstance(self.obj, BaseException): self.exception = self.obj else: self.exceptio...
Get the wrapped object.
def get_processor_name(): if platform.system() == "Linux": with open("/proc/cpuinfo", "rb") as cpuinfo: all_info = cpuinfo.readlines() for line in all_info: if b'model name' in line: return re.sub(b'.*model name.*:', b'', line, 1) return platfo...
Returns the processor name in the system
def responses_callback(request): method = request.method headers = CaseInsensitiveDict() request_headers = CaseInsensitiveDict() request_headers.update(request.headers) request.headers = request_headers uri = request.url return StackInABox.call_into(method, r...
Responses Request Handler. Converts a call intercepted by Responses to the Stack-In-A-Box infrastructure :param request: request object :returns: tuple - (int, dict, string) containing: int - the HTTP response status code dict - the headers for the HTTP res...
def _prepare_colors(color, values, limits_c, colormap, alpha, chan=None): if values is not None: if limits_c is None: limits_c = array([-1, 1]) * nanmax(abs(values)) norm_values = normalize(values, *limits_c) cm = get_colormap(colormap) colors = cm[norm_values] elif c...
Return colors for all the channels based on various inputs. Parameters ---------- color : tuple 3-, 4-element tuple, representing RGB and alpha, between 0 and 1 values : ndarray array with values for each channel limits_c : tuple of 2 floats, optional min and max values to n...
def validate_value(self, value): if value not in (None, self._unset): super(ReferenceField, self).validate_value(value) if value.app != self.target_app: raise ValidationError( self.record, "Reference field '{}' has target app '{}', ...
Validate provided record is a part of the appropriate target app for the field
def wheelEvent(self, event):
    initial_state = event.isAccepted()
    event.ignore()
    self.mouse_wheel_activated.emit(event)
    if not event.isAccepted():
        event.setAccepted(initial_state)
        super(CodeEdit, self).wheelEvent(event)
Emits the mouse_wheel_activated signal. :param event: QMouseEvent
def generate(self):
    header = ' '.join('=' * self.width[i] for i in range(self.w))
    lines = [' '.join(row[i].ljust(self.width[i]) for i in range(self.w))
             for row in self.rows]
    return [header] + lines + [header]
Generate a list of strings representing the table in RST format.
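A standalone sketch of the same RST simple-table layout (free function; widths computed from the rows instead of `self.width`, which is an illustrative simplification):

```python
def rst_table(rows):
    # Column widths from the longest cell in each column.
    w = len(rows[0])
    width = [max(len(row[i]) for row in rows) for i in range(w)]
    header = ' '.join('=' * width[i] for i in range(w))
    lines = [' '.join(row[i].ljust(width[i]) for i in range(w)).rstrip()
             for row in rows]
    return [header] + lines + [header]

for line in rst_table([['name', 'value'], ['pi', '3.14']]):
    print(line)
# ==== =====
# name value
# pi   3.14
# ==== =====
```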
def transaction_error_code(self):
    error = self.response_doc.find('transaction_error')
    if error is not None:
        code = error.find('error_code')
        if code is not None:
            return code.text
The machine-readable error code for a transaction error.
def image_get(auth=None, **kwargs):
    cloud = get_operator_cloud(auth)
    kwargs = _clean_kwargs(**kwargs)
    return cloud.get_image(**kwargs)
Get a single image CLI Example: .. code-block:: bash salt '*' glanceng.image_get name=image1 salt '*' glanceng.image_get name=0e4febc2a5ab4f2c8f374b054162506d
def installedUniqueRequirements(self, target): myDepends = dependentsOf(self.__class__) for dc in self.store.query(_DependencyConnector, _DependencyConnector.target==target): if dc.installee is self: continue depends = dependentsOf(dc.installee.__class_...
Return an iterable of things installed on the target that this item requires and are not required by anything else.
def next_chunks(self): with self.chunk_available: while True: playing_sounds = [s for s in self.sounds if s.playing] chunks = [] for s in playing_sounds: try: chunks.append(next(s.chunks)) ...
Gets a new chunk from all playing sounds and mixes them together.
def try_enqueue(conn, queue_name, msg): logger.debug('Getting Queue URL for queue %s', queue_name) qurl = conn.get_queue_url(QueueName=queue_name)['QueueUrl'] logger.debug('Sending message to queue at: %s', qurl) resp = conn.send_message( QueueUrl=qurl, MessageBody=msg, DelaySeco...
Try to enqueue a message. If it succeeds, return the message ID. :param conn: SQS API connection :type conn: :py:class:`botocore:SQS.Client` :param queue_name: name of queue to put message in :type queue_name: str :param msg: JSON-serialized message body :type msg: str :return: message ID ...
def _get_auth_token(self): url = '/%s/oauth2/token' % getattr( settings, 'RESTCLIENTS_O365_TENANT', 'test') headers = {'Accept': 'application/json'} data = { "grant_type": "client_credentials", "client_id": getattr(settings, 'R...
Given the Office 365 tenant, client id, and client secret, acquire a new authorization token
def remove_from_queue(self, index):
    updid = '0'
    objid = 'Q:0/' + str(index + 1)
    self.avTransport.RemoveTrackFromQueue([
        ('InstanceID', 0),
        ('ObjectID', objid),
        ('UpdateID', updid),
    ])
Remove a track from the queue by index. The index number is required as an argument, where the first index is 0. Args: index (int): The (0-based) index of the track to remove
def _convert_to_clusters(c): new_dict = {} n_cluster = 0 logger.debug("_convert_to_cluster: loci %s" % c.loci2seq.keys()) for idl in c.loci2seq: n_cluster += 1 new_c = cluster(n_cluster) new_c.loci2seq[idl] = c.loci2seq[idl] new_dict[n_cluster] = new_c logger.debug("_...
Return one cluster per locus
def read(self):
    if not self.ready_to_read():
        return None
    data = self._read()
    if data is None:
        return None
    return self._parse_message(data)
If there is data available to be read from the transport, reads the data and tries to parse it as a protobuf message. If the parsing succeeds, return a protobuf object. Otherwise, returns None.
def loc(self):
    try:
        return '{}:{}'.format(*ParseError.loc_info(self.text, self.index))
    except ValueError:
        return '<out of bounds index {!r}>'.format(self.index)
Locate the error position in the source code text.
def send_status(self, payload): answer = {} data = [] if self.paused: answer['status'] = 'paused' else: answer['status'] = 'running' if len(self.queue) > 0: data = deepcopy(self.queue.queue) for key, item in data.items(): ...
Send the daemon status and the current queue for displaying.
def _tracebacks(score_matrix, traceback_matrix, idx): score = score_matrix[idx] if score == 0: yield () return directions = traceback_matrix[idx] assert directions != Direction.NONE, 'Tracebacks with direction NONE should have value 0!' row, col = idx if directions & Direction.UP...
Implementation of traceback. This version can produce empty tracebacks, which we generally don't want users seeing, so the higher-level `tracebacks` filters those out.
def _parse_content_type(content_type: Optional[str]) -> Tuple[Optional[str], str]:
    if not content_type:
        return None, "utf-8"
    else:
        type_, parameters = cgi.parse_header(content_type)
        encoding = parameters.get("charset", "utf-8")
        return type_, encoding
Tease out the content-type and character encoding. A default character encoding of UTF-8 is used, so the content-type must be used to determine if any decoding is necessary to begin with.
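A minimal stand-in for the `cgi.parse_header` call above, useful since the `cgi` module has been deprecated; this sketch only handles the charset parameter (the lowercasing of the media type is a simplification, not part of the original):

```python
def parse_content_type(content_type):
    # Split off the media type, then look for a charset parameter.
    # UTF-8 is the fallback, matching the function above.
    if not content_type:
        return None, "utf-8"
    parts = content_type.split(";")
    type_ = parts[0].strip().lower()
    encoding = "utf-8"
    for param in parts[1:]:
        name, _, value = param.partition("=")
        if name.strip().lower() == "charset":
            encoding = value.strip().strip('"')
    return type_, encoding

print(parse_content_type("text/html; charset=ISO-8859-1"))
# ('text/html', 'ISO-8859-1')
```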
def get_similar(self, limit=None): params = self._get_params() if limit: params["limit"] = limit doc = self._request(self.ws_prefix + ".getSimilar", True, params) names = _extract_all(doc, "name") matches = _extract_all(doc, "match") artists = [] for i...
Returns the similar artists on the network.
def resolve(self, working_set=None): working_set = working_set or global_working_set if self._plugin_requirements: for plugin_location in self._resolve_plugin_locations(): if self._is_wheel(plugin_location): plugin_location = self._activate_wheel(plugin_location) working_set.add_...
Resolves any configured plugins and adds them to the global working set. :param working_set: The working set to add the resolved plugins to instead of the global working set (for testing). :type: :class:`pkg_resources.WorkingSet`
def OnDestroy(self, event): if hasattr(self, 'cardmonitor'): self.cardmonitor.deleteObserver(self.cardtreecardobserver) if hasattr(self, 'readermonitor'): self.readermonitor.deleteObserver(self.readertreereaderobserver) self.cardmonitor.deleteObserver(self.readertreec...
Called on panel destruction.
def get_effective_target_sdk_version(self):
    target_sdk_version = self.get_target_sdk_version()
    if not target_sdk_version:
        target_sdk_version = self.get_min_sdk_version()
    try:
        return int(target_sdk_version)
    except (ValueError, TypeError):
        return 1
Return the effective targetSdkVersion, always returns int > 0. If the targetSdkVersion is not set, it defaults to 1. This is set based on defaults as defined in: https://developer.android.com/guide/topics/manifest/uses-sdk-element.html :rtype: int
def GET_savedtimegrid(self) -> None:
    try:
        self._write_timegrid(state.timegrids[self._id])
    except KeyError:
        self._write_timegrid(hydpy.pub.timegrids.init)
Get the previously saved simulation period.
def append(self, value):
    if not self.need_free:
        raise ValueError("Stack is read-only")
    if not isinstance(value, X509):
        raise TypeError('StackOfX509 can contain only X509 objects')
    sk_push(self.ptr, libcrypto.X509_dup(value.cert))
Adds certificate to stack
def rank_dated_files(pattern, dir, descending=True):
    files = glob.glob(op.join(dir, pattern))
    return sorted(files, reverse=descending)
Search a directory for files that match a pattern. Return an ordered list of these files by filename. Args: pattern: The glob pattern to search for. dir: Path to directory where the files will be searched for. descending: Default True, will sort alphabetically by descending order. Retu...
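A runnable sketch of the glob-and-sort pattern above; the date-stamped filenames are illustrative, and sort chronologically only because of their `YYYY-MM-DD` prefix:

```python
import glob
import os.path as op
import tempfile

def rank_dated_files(pattern, dir, descending=True):
    files = glob.glob(op.join(dir, pattern))
    return sorted(files, reverse=descending)

with tempfile.TemporaryDirectory() as d:
    for name in ('2023-01-05_run.log', '2024-02-10_run.log'):
        open(op.join(d, name), 'w').close()
    # descending=True puts the newest date first
    newest = rank_dated_files('*_run.log', d)[0]
    print(op.basename(newest))  # 2024-02-10_run.log
```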
def _no_primary(max_staleness, selection): smax = selection.secondary_with_max_last_write_date() if not smax: return selection.with_server_descriptions([]) sds = [] for s in selection.server_descriptions: if s.server_type == SERVER_TYPE.RSSecondary: staleness = (smax.last_wri...
Apply max_staleness, in seconds, to a Selection with no known primary.
def has_permission(self, request, view): if not self.global_permissions: return True serializer_class = view.get_serializer_class() assert serializer_class.Meta.model is not None, ( "global_permissions set to true without a model " "set on the serializer for '...
Overrides the standard function and figures out methods to call for global permissions.
def targetpop(upper_density, coul, target_cf, slsp, n_tot): if upper_density < 0.503: return 0. trypops=population_distri(upper_density, n_tot) slsp.set_filling(trypops) slsp.selfconsistency(coul,0) efm_free = dos_bethe_find_crystalfield(trypops, slsp.param['hopping']) orb_ener = slsp.param['lam...
Restriction on finding the right populations that leave the crystal field unchanged
def rav2xf(rot, av):
    rot = stypes.toDoubleMatrix(rot)
    av = stypes.toDoubleVector(av)
    xform = stypes.emptyDoubleMatrix(x=6, y=6)
    libspice.rav2xf_c(rot, av, xform)
    return stypes.cMatrixToNumpy(xform)
This routine determines a state transformation matrix from a rotation matrix and the angular velocity of the rotation. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/rav2xf_c.html :param rot: Rotation matrix. :type rot: 3x3-Element Array of floats :param av: Angular velocity vector. ...
def no_coroutine(f): @functools.wraps(f) def _no_coroutine(*args, **kwargs): generator = f(*args, **kwargs) if not isinstance(generator, types.GeneratorType): return generator previous = None first = True while True: element = None t...
This is not a coroutine ;) Use as a decorator: @no_coroutine def foo(): five = yield 5 print(yield "hello") The function passed should be a generator yielding whatever you feel like. The yielded values instantly get passed back into the generator. It's basically the same as if ...
def get_cache_time(
    self, path: str, modified: Optional[datetime.datetime], mime_type: str
) -> int:
    return self.CACHE_MAX_AGE if "v" in self.request.arguments else 0
Override to customize cache control behavior. Return a positive number of seconds to make the result cacheable for that amount of time or 0 to mark resource as cacheable for an unspecified amount of time (subject to browser heuristics). By default returns cache expiry of 10 yea...
def find_cached_dm(self): pmag_dir = find_pmag_dir.get_pmag_dir() if pmag_dir is None: pmag_dir = '.' model_file = os.path.join(pmag_dir, 'pmagpy', 'data_model', 'data_model.json') if not os.path.isfile(model_file): model_file = o...
Find filename where cached data model json is stored. Returns --------- model_file : str data model json file location
def _in_tag(self, tagname, attributes=None): node = self.cur_node while not node is None: if node.tag == tagname: if attributes and node.attrib == attributes: return True elif attributes: return False ret...
Determine if we are already in a certain tag. If we give attributes, make sure they match.
def datapath(self):
    path = self._fields['path']
    if not path:
        path = self.fetch('directory')
        if path and not self._fields['is_multi_file']:
            path = os.path.join(path, self._fields['name'])
    return os.path.expanduser(fmt.to_unicode(path))
Get an item's data path.
def get_gallery_favorites(self):
    url = (self._imgur._base_url +
           "/3/account/{0}/gallery_favorites".format(self.name))
    resp = self._imgur._send_request(url)
    return [Image(img, self._imgur) for img in resp]
Get a list of the images in the gallery this user has favorited.
def participant_policy(self, value):
    old_policy = self.participant_policy
    new_policy = value
    self._participant_policy = new_policy
    notify(ParticipationPolicyChangedEvent(self, old_policy, new_policy))
Changing participation policy fires a "ParticipationPolicyChanged" event
def sell(self, product_id, order_type, **kwargs):
    return self.place_order(product_id, 'sell', order_type, **kwargs)
Place a sell order. This is included to maintain backwards compatibility with older versions of cbpro-Python. For maximum support from docstrings and function signatures see the order type-specific functions place_limit_order, place_market_order, and place_stop_order. Args: ...
def job_delayed(self, job, queue): delayed_until = job.delayed_until.hget() if delayed_until: try: delayed_until = compute_delayed_until(delayed_until=parse(delayed_until)) except (ValueError, TypeError): delayed_until = None if not delayed...
Called if a job, before trying to run it, has the "delayed" status, or, after run, if its status was set to "delayed" If delayed_until was not set, or is invalid, set it to 60sec in the future
def _resolve_deps(self, depmap): deps = defaultdict(lambda: OrderedSet()) for category, depspecs in depmap.items(): dependencies = deps[category] for depspec in depspecs: dep_address = Address.parse(depspec) try: self.context.build_graph.maybe_inject_address_closure(dep_add...
Given a map of gen-key=>target specs, resolves the target specs into references.
def infer(self, ob):
    self._add_to_stack(ob)
    logits, vf = self.infer_from_frame_stack(self._frame_stack)
    return logits, vf
Add new observation to frame stack and infer policy. Args: ob: array of shape (height, width, channels) Returns: logits and vf.
def remove_entity(self, entity, second=False): if entity in self._entities: if second: for group in self._groups.keys(): if entity in self._groups[group]: self.deregister_entity_from_group(entity, group) self._entities.remov...
Removes entity from world and kills entity
def E_Advective_Dispersion(t, Pe):
    # Coerce to an array so the zero-replacement works for lists and
    # scalars alike; the original boolean indexing fails on a plain list.
    t = np.asarray(t, dtype=float)
    t = np.where(t == 0, 10**(-10), t)
    return (Pe/(4*np.pi*t))**(0.5)*np.exp((-Pe*((1-t)**2))/(4*t))
Calculate a dimensionless measure of the output tracer concentration from a spike input to reactor with advection and dispersion. :param t: The time(s) at which to calculate the effluent concentration. Time can be made dimensionless by dividing by the residence time of the CMFR. :type t: float or numpy.arr...
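A worked check of the formula above: at one residence time (t = 1) the exponential term is exp(0) = 1, so choosing Pe = 4π makes the prefactor 1 as well. The list-handling coercion here is a sketch, not the original's:

```python
import numpy as np

def E_advective_dispersion(t, Pe):
    # Dimensionless residence-time distribution for advection-dispersion.
    t = np.asarray(t, dtype=float)
    t = np.where(t == 0, 1e-10, t)  # avoid division by zero at t = 0
    return (Pe / (4 * np.pi * t))**0.5 * np.exp(-Pe * (1 - t)**2 / (4 * t))

# At t = 1 with Pe = 4*pi both factors reduce to 1:
print(float(E_advective_dispersion(1.0, 4 * np.pi)))  # 1.0
```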
def dump_tree(self, statement=None, indent_level=0): out = u"" indent = u" "*indent_level if statement is None: for root_statement in self.statements: out += self.dump_tree(root_statement, indent_level) else: out += indent + str(statement) + u'\n' ...
Dump the AST for this parsed file. Args: statement (SensorGraphStatement): the statement to print if this function is called recursively. indent_level (int): The number of spaces to indent this statement. Used for recursively printing blocks of ...
def maybe_reduce(nodes): r _, num_nodes = nodes.shape if num_nodes < 2: return False, nodes elif num_nodes == 2: projection = _PROJECTION0 denom = _PROJ_DENOM0 elif num_nodes == 3: projection = _PROJECTION1 denom = _PROJ_DENOM1 elif num_nodes == 4: ...
r"""Reduce nodes in a curve if they are degree-elevated. .. note:: This is a helper for :func:`_full_reduce`. Hence there is no corresponding Fortran speedup. We check if the nodes are degree-elevated by projecting onto the space of degree-elevated curves of the same degree, then comparin...
def thresholdBlocks(self, blocks, recall_weight=1.5): candidate_records = itertools.chain.from_iterable(self._blockedPairs(blocks)) probability = core.scoreDuplicates(candidate_records, self.data_model, self.classifier...
Returns the threshold that maximizes the expected F score, a weighted average of precision and recall for a sample of blocked data. Arguments: blocks -- Sequence of tuples of records, where each tuple is a set of records covered by a blocking predicate recall...
def resize(self, size, disk=None):
    if isinstance(size, Size):
        size = size.slug
    opts = {"disk": disk} if disk is not None else {}
    return self.act(type='resize', size=size, **opts)
Resize the droplet :param size: a size slug or a `Size` object representing the size to resize to :type size: string or `Size` :param bool disk: Set to `True` for a permanent resize, including disk changes :return: an `Action` representing the in-progress operati...
def image_create(cmptparms, cspace):
    lst = [ctypes.c_int, ctypes.POINTER(ImageComptParmType), ctypes.c_int]
    OPENJPEG.opj_image_create.argtypes = lst
    OPENJPEG.opj_image_create.restype = ctypes.POINTER(ImageType)
    image = OPENJPEG.opj_image_create(len(cmptparms), cmptparms, cspace)
    return image
Wrapper for openjpeg library function opj_image_create.
def boolean(self): try: return self._boolean except AttributeError: nbits = len(self.bits) boolean = numpy.zeros((self.size, nbits), dtype=bool) for i, sample in enumerate(self.value): boolean[i, :] = [int(sample) >> j & 1 for j in range(nb...
A mapping of this `StateVector` to a 2-D array containing all binary bits as booleans, for each time point.
def _is_cow(path):
    dirname = os.path.dirname(path)
    return 'C' not in __salt__['file.lsattr'](dirname)[path]
Check if the subvolume is copy on write
def check_query(query):
    q = query.lower()
    if "select " not in q:
        raise InvalidQuery("SELECT word not found in the query: {0}".format(query))
    if " from " not in q:
        raise InvalidQuery("FROM word not found in the query: {0}".format(query))
Check query sanity Args: query: query string Returns: None
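A self-contained sketch of the sanity check above (the `InvalidQuery` exception class is assumed to be a plain `Exception` subclass, since its definition is not shown):

```python
class InvalidQuery(Exception):
    pass

def check_query(query):
    # Case-insensitive check for the two mandatory SQL keywords.
    q = query.lower()
    if "select " not in q:
        raise InvalidQuery("SELECT word not found in the query: {0}".format(query))
    if " from " not in q:
        raise InvalidQuery("FROM word not found in the query: {0}".format(query))

check_query("SELECT id FROM users")  # passes silently
try:
    check_query("DELETE FROM users")
except InvalidQuery as e:
    print(e)  # SELECT word not found in the query: DELETE FROM users
```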
def rbac_policy_create(request, **kwargs):
    body = {'rbac_policy': kwargs}
    rbac_policy = neutronclient(request).create_rbac_policy(
        body=body).get('rbac_policy')
    return RBACPolicy(rbac_policy)
Create a RBAC Policy. :param request: request context :param target_tenant: target tenant of the policy :param tenant_id: owner tenant of the policy(Not recommended) :param object_type: network or qos_policy :param object_id: object id of policy :param action: access_as_shared or access_as_exte...
def on_data(self, raw_data): try: data = json.loads(raw_data) except ValueError: logger.error('value error: %s' % raw_data) return unique_id = data.get('id') if self._error_handler(data, unique_id): return False operation = data['op...
Called when raw data is received from connection. Override this method if you wish to manually handle the stream data :param raw_data: Received raw data :return: Return False to stop stream and close connection
def __calculate_boltzmann_factor(self, state_key, next_action_list): sigmoid = self.__calculate_sigmoid() q_df = self.q_df[self.q_df.state_key == state_key] q_df = q_df[q_df.isin(next_action_list)] q_df["boltzmann_factor"] = q_df["q_value"] / sigmoid q_df["boltzmann_factor"] = q_...
Calculate boltzmann factor. Args: state_key: The key of state. next_action_list: The possible action in `self.t+1`. If the length of this list is 0, all action should be possible. Returns: [(`The key of action`,...
def _authenticate(self): try: hosted_zones = self.r53_client.list_hosted_zones_by_name()[ 'HostedZones' ] hosted_zone = next( hz for hz in hosted_zones if self.filter_zone(hz) ) self.domain_id = hosted_zo...
Determine the hosted zone id for the domain.
def handle_message(self, msg): if msg.msg_id not in self.msg_types: self.report_message_type(msg) self.msg_types.add(msg.msg_id) self.tc.message('inspection', typeId=msg.msg_id, message=msg.msg, file=os.path.relpath(msg.abspath).replace('\\', '/'), ...
Issues an `inspection` service message based on a PyLint message. Registers each message type upon first encounter. :param utils.Message msg: a PyLint message
def get_posix(self, i): index = i.index value = ['['] try: c = next(i) if c != ':': raise ValueError('Not a valid property!') else: value.append(c) c = next(i) if c == '^': val...
Get POSIX.
def many_nodes( lexer: Lexer, open_kind: TokenKind, parse_fn: Callable[[Lexer], Node], close_kind: TokenKind, ) -> List[Node]: expect_token(lexer, open_kind) nodes = [parse_fn(lexer)] append = nodes.append while not expect_optional_token(lexer, close_kind): append(parse_fn(lexer)...
Fetch matching nodes, at least one. Returns a non-empty list of parse nodes, determined by the `parse_fn`. This list begins with a lex token of `open_kind` and ends with a lex token of `close_kind`. Advances the parser to the next lex token after the closing token.
def save_statement(self, statement): if not isinstance(statement, Statement): statement = Statement(statement) request = HTTPRequest( method="POST", resource="statements" ) if statement.id is not None: request.method = "PUT" req...
Save statement to LRS and update statement id if necessary :param statement: Statement object to be saved :type statement: :class:`tincan.statement.Statement` :return: LRS Response object with the saved statement as content :rtype: :class:`tincan.lrs_response.LRSResponse`
def _start_reader(self):
    while True:
        message = yield From(self.pipe.read_message())
        self._process(message)
Read messages from the Win32 pipe server and handle them.
def format_norm(kwargs, current=None): norm = kwargs.pop('norm', current) or 'linear' vmin = kwargs.pop('vmin', None) vmax = kwargs.pop('vmax', None) clim = kwargs.pop('clim', (vmin, vmax)) or (None, None) clip = kwargs.pop('clip', None) if norm == 'linear': norm = colors.Normalize() ...
Format a `~matplotlib.colors.Normalize` from a set of kwargs Returns ------- norm, kwargs the formatted `Normalize` instance, and the remaining keywords
def OnDrawBackground(self, dc, rect, item, flags): if (item & 1 == 0 or flags & (wx.combo.ODCB_PAINTING_CONTROL | wx.combo.ODCB_PAINTING_SELECTED)): try: wx.combo.OwnerDrawnComboBox.OnDrawBackground(self, dc, ...
Called for drawing the background area of each item Overridden from OwnerDrawnComboBox
def sign_off(self): try: logger.info("Bot player signing off.") feedback = WebDriverWait(self.driver, 20).until( EC.presence_of_element_located((By.ID, "submit-questionnaire")) ) self.complete_questionnaire() feedback.click() ...
Submit questionnaire and finish. This uses Selenium to click the submit button on the questionnaire and return to the original window.
def to_yaml(value) -> str: stream = yaml.io.StringIO() dumper = ConfigDumper(stream, default_flow_style=True, width=sys.maxsize) val = None try: dumper.open() dumper.represent(value) val = stream.getvalue().strip() dumper.close() finally: dumper.dispose() ...
Convert a given value to a YAML string.
def _make_version(major, minor, micro, releaselevel, serial): assert releaselevel in ['alpha', 'beta', 'candidate', 'final'] version = "%d.%d" % (major, minor) if micro: version += ".%d" % (micro,) if releaselevel != 'final': short = {'alpha': 'a', 'beta': 'b', 'candidate': 'rc'}[release...
Create a readable version string from version_info tuple components.
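The code row above is cut off mid-suffix; a standalone sketch of the full pattern follows, where the suffix handling after the visible cut (short tag plus serial number) is an assumption based on the visible `{'alpha': 'a', ...}` mapping:

```python
def make_version(major, minor, micro, releaselevel, serial):
    """Create a readable version string from version_info components."""
    assert releaselevel in ['alpha', 'beta', 'candidate', 'final']
    version = "%d.%d" % (major, minor)
    if micro:
        version += ".%d" % (micro,)
    if releaselevel != 'final':
        # Assumed tail: one/two-letter tag plus the serial number.
        short = {'alpha': 'a', 'beta': 'b', 'candidate': 'rc'}[releaselevel]
        version += "%s%d" % (short, serial)
    return version

print(make_version(4, 5, 2, 'beta', 3))  # 4.5.2b3
```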
def _build_auth_request(self, verify=False, **kwargs): json = { 'domain': self.domain } credential = self.credential params = {} if credential.provider_name.startswith('lms'): params = dict( login=credential._login, pwd=cred...
Build the authentication request to SMC
def finalize(self, result):
    runtime = int(time.time() * 1000) - self.execution_start_time
    self.testcase_manager.update_execution_data(self.execution_guid, runtime)
At the end of the run, we want to update the DB row with the execution time.
def ListAssets(logdir, plugin_name):
    plugin_dir = PluginDirectory(logdir, plugin_name)
    try:
        return [x.rstrip('/') for x in tf.io.gfile.listdir(plugin_dir)]
    except tf.errors.NotFoundError:
        return []
List all the assets that are available for given plugin in a logdir. Args: logdir: A directory that was created by a TensorFlow summary.FileWriter. plugin_name: A string name of a plugin to list assets for. Returns: A string list of available plugin assets. If the plugin subdirectory does not exis...
def _register_allocator(self, plugin_name, plugin_instance): for allocator in plugin_instance.get_allocators().keys(): if allocator in self._allocators: raise PluginException("Allocator with name {} already exists! unable to add " "allocators fro...
Register an allocator. :param plugin_name: Allocator name :param plugin_instance: RunPluginBase :return: