Columns: code (string, lengths 20–4.93k) · docstring (string, lengths 33–1.27k) · source (3 classes)
def _get_contexts_for_squash(self, batch_signature): batch = self._batches_by_id[batch_signature].batch index = self._batches.index(batch) contexts = [] txns_added_predecessors = [] for b in self._batches[index::-1]: batch_is_valid = True context...
Starting with the batch referenced by batch_signature, iterate back through the batches and for each valid batch collect the context_id. At the end remove contexts for txns that are other txn's predecessors. Args: batch_signature (str): The batch to start from, moving back through the batches in the scheduler Returns...
juraj-google-style
def VerifyStructure(self, parser_mediator, line): try: structure = self._LINE.parseString(line) except pyparsing.ParseException: logger.debug('Not a SkyDrive old log file') return False day_of_month, month, year, hours, minutes, seconds, milliseconds = ( structure.date_time) ...
Verify that this file is a SkyDrive old log file. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. line (str): line from a text file. Returns: bool: True if the line is in the expected format, False if not.
juraj-google-style
def filter_children(self, ctype: ContentType=None) -> List[SchemaNode]: if (ctype is None): ctype = self.content_type() return [c for c in self.children if ((not isinstance(c, (RpcActionNode, NotificationNode))) and ((c.content_type().value & ctype.value) != 0))]
Return receiver's children based on content type. Args: ctype: Content type.
codesearchnet
def _process_book(html_chunk): title, url = _parse_title_url(html_chunk) book_format, pages, isbn = _parse_format_pages_isbn(html_chunk) pub = Publication( title=title, authors=_parse_authors(html_chunk), price=_parse_price(html_chunk), publisher="Grada" ) ...
Parse available information about the book from the book details page. Args: html_chunk (obj): HTMLElement containing slice of the page with details. Returns: obj: :class:`structures.Publication` instance with book details.
juraj-google-style
def warning_handler(self, handler): if (not self.opened()): handler = (handler or util.noop) self._warning_handler = enums.JLinkFunctions.LOG_PROTOTYPE(handler) self._dll.JLINKARM_SetWarnOutHandler(self._warning_handler)
Setter for the warning handler function. If the DLL is open, this function is a no-op, so it should be called prior to calling ``open()``. Args: self (JLink): the ``JLink`` instance handler (function): function to call on warning messages Returns: ``None``
codesearchnet
def circuit_to_instruction(circuit): instruction = Instruction(name=circuit.name, num_qubits=sum([qreg.size for qreg in circuit.qregs]), num_clbits=sum([creg.size for creg in circuit.cregs]), params=[]) instruction.control = None def find_bit_position(bit): 'find the index of a given bit (Register,...
Build an ``Instruction`` object from a ``QuantumCircuit``. The instruction is anonymous (not tied to a named quantum register), and so can be inserted into another circuit. The instruction will have the same string name as the circuit. Args: circuit (QuantumCircuit): the input circuit. Return: Instruction: an instru...
codesearchnet
def random_string_generator(size=6, chars=string.ascii_uppercase): try: return ''.join((random.choice(chars) for _ in range(size))) except: (line, filename, synerror) = trace() raise ArcRestHelperError({'function': 'random_string_generator', 'line': line, 'filename': filename, 'synerror'...
Generates a random string from a set of characters. Args: size (int): The length of the resultant string. Defaults to 6. chars (str): The characters to be used by :py:func:`random.choice`. Defaults to :py:const:`string.ascii_uppercase`. Returns: str: The randomly generated string. Examples: >>> arcresthelper.common....
codesearchnet
def wait_for_job(self, job, poll=5): desc = _wait_until_training_done((lambda last_desc: _train_done(self.sagemaker_client, job, last_desc)), None, poll) self._check_job_status(job, desc, 'TrainingJobStatus') return desc
Wait for an Amazon SageMaker training job to complete. Args: job (str): Name of the training job to wait for. poll (int): Polling interval in seconds (default: 5). Returns: (dict): Return value from the ``DescribeTrainingJob`` API. Raises: ValueError: If the training job fails.
codesearchnet
def get_push_pop_stack(): push = copy.deepcopy(PUSH_STACK) pop = copy.deepcopy(POP_STACK) anno.setanno(push, 'pop', pop) anno.setanno(push, 'gen_push', True) anno.setanno(pop, 'push', push) op_id = _generate_op_id() return (push, pop, op_id)
Create pop and push nodes for substacks that are linked. Returns: A push and pop node which have `push_func` and `pop_func` annotations respectively, identifying them as such. They also have a `pop` and `push` annotation respectively, which links the push node to the pop node and vice versa.
codesearchnet
def get_locations_list(self, lower_bound=0, upper_bound=None): real_upper_bound = upper_bound if (upper_bound is None): real_upper_bound = self.nbr_of_sub_locations() try: return self._locations_list[lower_bound:real_upper_bound] except TypeError: return list()
Return the internal location list. Args: lower_bound (int): index of the first location to return. Defaults to 0. upper_bound (int): index just past the last location to return. Defaults to the number of sub-locations. Returns: list: the requested slice of the internal location list, or an empty list if the bounds are invalid.
codesearchnet
def serialize_to_display(self, doc_format='pretty-xml', *args, **kwargs): return super(ResourceMap, self).serialize(*args, format=doc_format, encoding=None, **kwargs).decode('utf-8')
Serialize ResourceMap to an XML doc that is pretty printed for display. Args: doc_format: str One of: ``xml``, ``n3``, ``turtle``, ``nt``, ``pretty-xml``, ``trix``, ``trig`` and ``nquads``. args and kwargs: Optional arguments forwarded to rdflib.ConjunctiveGraph.serialize(). Returns: str: Pretty printed Resource Map...
codesearchnet
def _print_tensor_info(tensor_info, indent=0): indent_str = ' ' * indent def in_print(s): print(indent_str + s) in_print(' dtype: ' + {value: key for key, value in types_pb2.DataType.items()}[tensor_info.dtype]) if tensor_info.tensor_shape.unknown_rank: shape = 'unknown_rank' el...
Prints details of the given tensor_info. Args: tensor_info: TensorInfo object to be printed. indent: How far (in increments of 2 spaces) to indent each line of output.
github-repos
def create_stub(generated_create_stub, channel=None, service_path=None, service_port=None, credentials=None, scopes=None, ssl_credentials=None): if (channel is None): target = '{}:{}'.format(service_path, service_port) if (credentials is None): credentials = _grpc_google_auth.get_default...
Creates a gRPC client stub. Args: generated_create_stub (Callable): The generated gRPC method to create a stub. channel (grpc.Channel): A Channel object through which to make calls. If None, a secure channel is constructed. If specified, all remaining arguments are ignored. service_path (str): The domain name of the A...
codesearchnet
def query(self, minhash, k): if k <= 0: raise ValueError("k must be positive") if len(minhash) < self.k*self.l: raise ValueError("The num_perm of MinHash is out of range") results = set() r = self.k while r > 0: for key in self._query(min...
Return the approximate top-k keys that have the highest Jaccard similarities to the query set. Args: minhash (datasketch.MinHash): The MinHash of the query set. k (int): The maximum number of keys to return. Returns: `list` of at most k keys.
juraj-google-style
def add_region_feature(self, start_resnum, end_resnum, feat_type=None, feat_id=None, qualifiers=None): if self.feature_file: raise ValueError('Feature file associated with sequence, please remove file association to append ' 'additional features.') if n...
Add a feature to the features list describing a region of the protein sequence. Args: start_resnum (int): Start residue number of the protein sequence feature end_resnum (int): End residue number of the protein sequence feature feat_type (str, optional): Optional description of the feature type (ie. 'binding domain') ...
juraj-google-style
def get_concept_item_mapping(self, concepts=None, lang=None): if (concepts is None): concepts = self.filter(active=True) if (lang is not None): concepts = concepts.filter(lang=lang) if (lang is None): languages = set([concept.lang for concept in concepts]) if (len(lan...
Get mapping of concepts to items belonging to concept. Args: concepts (list of Concept): Defaults to None meaning all concepts lang (str): language of concepts, if None use language of concepts Returns: dict: concept (int) -> list of item ids (int)
codesearchnet
def convert_selu(params, w_name, scope_name, inputs, layers, weights, names): print('Converting selu ...') if names == 'short': tf_name = 'SELU' + random_string(4) elif names == 'keep': tf_name = w_name else: tf_name = w_name + str(random.random()) selu = keras.layers....
Convert selu layer. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
juraj-google-style
def spherical_vert(script, radius=1.0, center_pt=(0.0, 0.0, 0.0)): function = 'sqrt((x-{})^2+(y-{})^2+(z-{})^2)<={}'.format(center_pt[0], center_pt[1], center_pt[2], radius) vert_function(script, function=function) return None
Select all vertices within a spherical radius Args: radius (float): radius of the sphere center_pt (3 coordinate tuple or list): center point of the sphere Layer stack: No impacts MeshLab versions: 2016.12 1.3.4BETA
codesearchnet
def recipe_bigquery_function(config, auth, function, dataset): bigquery(config, {'auth': auth, 'function': function, 'to': {'dataset': dataset}})
Add a custom function or table to a dataset. Args: auth (authentication) - Credentials used for writing function. function (choice) - Function or table to create. dataset (string) - Existing BigQuery dataset.
github-repos
def apply_grads(self, grads, variables): ops = [] for (grad, var) in zip(grads, variables): ops.extend(self.apply_grad(grad, var)) if (not ops): return ops return variables[0].graph.combine_assignments(ops)
Apply gradients to variables. Call this function externally instead of apply_grad(). This causes the operations to be combined, which is necessary for stacking variables see mtf.rewrite_stack_variables(). Args: grads: a list of Tensor variables: a list of Variables Returns: a list of Operations
codesearchnet
def merge(self, dataset): def merge_data(source, dest): for (key, value) in source.items(): if isinstance(value, dict): merge_data(value, dest.setdefault(key, {})) else: dest[key] = value return dest merge_data(dataset.data, self._data) ...
Merge the specified dataset on top of the existing data. This replaces all values in the existing dataset with the values from the given dataset. Args: dataset (TaskData): A reference to the TaskData object that should be merged on top of the existing object.
codesearchnet
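The recursive merge in the row above can be sketched as a standalone helper (hypothetical free function; the original operates on a TaskData object's internal `_data`):

```python
# Minimal sketch of the deep merge: values from `source` overwrite
# values in `dest`, while nested dicts are merged key by key instead
# of being replaced wholesale.
def merge_data(source, dest):
    for key, value in source.items():
        if isinstance(value, dict):
            # Recurse into nested dicts, creating the target dict if needed.
            merge_data(value, dest.setdefault(key, {}))
        else:
            dest[key] = value
    return dest
```

Note that non-dict values (including lists) are replaced, not concatenated, matching the "replaces all values" wording of the docstring.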
def score_intersect(self, term1, term2, **kwargs): t1_kde = self.kde(term1, **kwargs) t2_kde = self.kde(term2, **kwargs) overlap = np.minimum(t1_kde, t2_kde) return np.trapz(overlap)
Compute the geometric area of the overlap between the kernel density estimates of two terms. Args: term1 (str) term2 (str) Returns: float
codesearchnet
def call(self, hidden_states: tf.Tensor, attention_mask: np.ndarray | tf.Tensor | None, layer_head_mask: tf.Tensor | None, training: Optional[bool]=False) -> tf.Tensor: residual = hidden_states hidden_states, self_attn_weights, _ = self.self_attn(hidden_states=hidden_states, attention_mask=attention_mask, layer...
Args: hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`tf.Tensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`tf.Tensor`): mask for attention heads in a given layer of...
github-repos
def _ParseFileHeader(self, file_object): file_header_map = self._GetDataTypeMap('chrome_cache_data_block_file_header') try: (file_header, _) = self._ReadStructureFromFileObject(file_object, 0, file_header_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError('Unab...
Parses the file header. Args: file_object (dfvfs.FileIO): a file-like object to parse. Raises: ParseError: if the file header cannot be read.
codesearchnet
def get_authorization_url(self, client_id=None, instance_id=None, redirect_uri=None, region=None, scope=None, state=None): client_id = client_id or self.client_id instance_id = instance_id or self.instance_id redirect_uri = red...
Generate authorization URL. Args: client_id (str): OAuth2 client ID. Defaults to ``None``. instance_id (str): App Instance ID. Defaults to ``None``. redirect_uri (str): Redirect URI. Defaults to ``None``. region (str): App Region. Defaults to ``None``. scope (str): Permissions. Defaults to ``None``. state (str): UUID ...
juraj-google-style
def get_type(self): raise NotImplementedError('Base class should not be called directly!')
This function returns the type of the sniffer. Returns: The type (string) of the sniffer. Corresponds to the 'Type' key of the sniffer configuration.
github-repos
def create_heroku_connect_schema(using=DEFAULT_DB_ALIAS): connection = connections[using] with connection.cursor() as cursor: cursor.execute(_SCHEMA_EXISTS_QUERY, [settings.HEROKU_CONNECT_SCHEMA]) schema_exists = cursor.fetchone()[0] if schema_exists: return False cur...
Create Heroku Connect schema. Note: This function is only meant to be used for local development. In a production environment the schema will be created by Heroku Connect. Args: using (str): Alias for database connection. Returns: bool: ``True`` if the schema was created, ``False`` if the schema already exists.
codesearchnet
def parse_date(date_string, ignoretz=True): try: return parser.parse(date_string, ignoretz=ignoretz) except TypeError: return None
Parse a string as a date. If the string fails to parse, `None` will be returned instead >>> parse_date('2017-08-15T18:24:31') datetime.datetime(2017, 8, 15, 18, 24, 31) Args: date_string (`str`): Date in string format to parse ignoretz (`bool`): If set ``True``, ignore time zones and return a naive :class:`datetime` ...
juraj-google-style
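A stdlib-only sketch of the same contract (swapping dateutil's flexible parser for `datetime.fromisoformat`, so only ISO-8601 strings are handled and the `ignoretz` knob is dropped in this stand-in):

```python
from datetime import datetime

# Mirrors the parse_date contract: return a datetime on success,
# None when the input cannot be parsed (the original catches TypeError
# from dateutil; fromisoformat also raises ValueError on bad strings).
def parse_date(date_string):
    try:
        return datetime.fromisoformat(date_string)
    except (TypeError, ValueError):
        return None
```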
def isubset(self, *keys): return ww.g((key, self[key]) for key in keys)
Return key, self[key] as generator for key in keys. Raise KeyError if a key does not exist Args: keys: Iterable containing keys Example: >>> from ww import d >>> list(d({1: 1, 2: 2, 3: 3}).isubset(1, 3)) [(1, 1), (3, 3)]
juraj-google-style
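The `isubset` row can be sketched on a plain `dict` subclass (the original wraps the generator in the `ww` library's `g` type; here the generator is returned directly, and a missing key raises KeyError only when the generator is consumed):

```python
# Hypothetical stand-in for ww's d class: yield (key, value) pairs
# for the requested keys, in the order the keys were given.
class SubsetDict(dict):
    def isubset(self, *keys):
        return ((key, self[key]) for key in keys)
```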
def init_cache(self, batch_size, max_length, encoder_outputs): decoder_input_ids = jnp.ones((batch_size, max_length), dtype='i4') decoder_attention_mask = jnp.ones_like(decoder_input_ids) decoder_position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(decoder_input_ids).shape[-1]), decoder_input_ids.shape...
Args: batch_size (`int`): batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache. max_length (`int`): maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized cache. encoder_outputs (`Union[FlaxBaseModelOutput, tuple(tuple(jnp.ndarr...
github-repos
def beam_sql(self, line: str, cell: Optional[str]=None) -> Optional[PValue]: input_str = line if cell: input_str += ' ' + cell parsed = self._parser.parse(input_str.strip().split()) if not parsed: return output_name = parsed.output_name verbose = parsed.verbose query = parsed...
The beam_sql line/cell magic that executes a Beam SQL. Args: line: the string on the same line after the beam_sql magic. cell: everything else in the same notebook cell as a string. If None, beam_sql is used as line magic. Otherwise, cell magic. Returns None if running into an error or waiting for user input (running...
github-repos
def get_experiment_from_key(self, experiment_key): experiment = self.experiment_key_map.get(experiment_key) if experiment: return experiment self.logger.error(('Experiment key "%s" is not in datafile.' % experiment_key)) self.error_handler.handle_error(exceptions.InvalidExperimentException(enums...
Get experiment for the provided experiment key. Args: experiment_key: Experiment key for which experiment is to be determined. Returns: Experiment corresponding to the provided experiment key.
codesearchnet
def get_type_info(obj): if isinstance(obj, primitive_types): return ('primitive', type(obj).__name__) if isinstance(obj, sequence_types): return ('sequence', type(obj).__name__) if isinstance(obj, array_types): return ('array', type(obj).__name__) if isinstance(obj, key_valu...
Get type information for a Python object Args: obj: The Python object Returns: tuple: (object type "category", object type name)
juraj-google-style
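A runnable sketch of the dispatch above; the original relies on module-level type tuples (`primitive_types`, `sequence_types`, ...) that are not shown, so plausible definitions are assumed here:

```python
# Assumed type tuples (not from the original source).
primitive_types = (bool, int, float, complex, str, bytes)
sequence_types = (list, tuple, set, frozenset)
key_value_types = (dict,)

def get_type_info(obj):
    # Check categories in order; the first matching tuple wins.
    if isinstance(obj, primitive_types):
        return ('primitive', type(obj).__name__)
    if isinstance(obj, sequence_types):
        return ('sequence', type(obj).__name__)
    if isinstance(obj, key_value_types):
        return ('key-value', type(obj).__name__)
    return ('other', type(obj).__name__)
```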
async def freeze(self, *args, **kwargs): uid = kwargs.get('uid', 0) coinid = kwargs.get('coinid') amount = kwargs.get('amount') address = kwargs.get('address') try: coinid = coinid.replace('TEST', '') except: pass try: uid = int(uid) except: return (await ...
Freeze user's balance Accepts: - uid [integer] (users id from main server) - coinid [string] (blockchain type in uppercase) - amount [integer] (amount for freezing) Returns: - uid [integer] (users id from main server) - coinid [string] (blockchain type in uppercase) - amount_active [integer] (active users amount) - a...
codesearchnet
def delete_panel(self, panel_obj): res = self.panel_collection.delete_one({'_id': panel_obj['_id']}) LOG.warning(('Deleting panel %s, version %s' % (panel_obj['panel_name'], panel_obj['version']))) return res
Delete a panel by '_id'. Args: panel_obj(dict) Returns: res(pymongo.DeleteResult)
codesearchnet
def unique_timestamps(self: EventSetOrNode) -> EventSetOrNode: from temporian.core.operators.unique_timestamps import unique_timestamps return unique_timestamps(self)
Removes events with duplicated timestamps from an [`EventSet`][temporian.EventSet]. Returns a feature-less EventSet where each timestamp from the original one only appears once. If the input is indexed, the unique operation is applied independently for each index. Usage example: ```python >>> a = tp.event_set(timesta...
github-repos
def group_by_mimetype(content: ProcessorContent) -> dict[str, ProcessorContent]: grouped_content = {} for mimetype, part in content.items(): if mimetype not in grouped_content: grouped_content[mimetype] = ProcessorContent() grouped_content[mimetype] += part return grouped_content
Groups content by mimetype. The order of parts within each mimetype grouping is preserved, maintaining the same order as they appeared in the original input `content`. Args: content: The content to group. Returns: A dictionary mapping mimetypes to ProcessorContent objects, with the same order as in the original inpu...
github-repos
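The grouping pattern above can be sketched with plain dicts and lists standing in for `ProcessorContent` (hypothetical shapes: `content` as a list of `(mimetype, part)` pairs mimicking `content.items()`, each group a plain list):

```python
# Group parts per mimetype while preserving the original order of
# parts within each group, as the docstring promises.
def group_by_mimetype(content):
    grouped = {}
    for mimetype, part in content:
        grouped.setdefault(mimetype, []).append(part)
    return grouped
```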
def compare_versions(ver1='', oper='==', ver2=''): if not ver1: raise SaltInvocationError('compare_version, ver1 is blank') if not ver2: raise SaltInvocationError('compare_version, ver2 is blank') if ver1 == 'latest': ver1 = six.text_type(sys.maxsize) if ver2 == 'lates...
Compare software package versions Args: ver1 (str): A software version to compare oper (str): The operand to use to compare ver2 (str): A software version to compare Returns: bool: True if the comparison is valid, otherwise False CLI Example: .. code-block:: bash salt '*' pkg.compare_versions 1.2 >= 1.3
juraj-google-style
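A stdlib-only sketch of numeric version comparison (the original also special-cases `latest` and delegates to Salt's comparison helpers, both omitted here):

```python
import operator

# Compare dotted numeric versions by converting each to a tuple of
# ints, so '1.10' correctly sorts after '1.9'.
def compare_versions(ver1, oper, ver2):
    ops = {'==': operator.eq, '!=': operator.ne, '<': operator.lt,
           '<=': operator.le, '>': operator.gt, '>=': operator.ge}
    as_tuple = lambda v: tuple(int(p) for p in v.split('.'))
    return ops[oper](as_tuple(ver1), as_tuple(ver2))
```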
def colored(cls, color, message): return getattr(cls, color.upper()) + message + cls.DEFAULT
Small function to wrap a string around a color Args: color (str): name of the color to wrap the string with, must be one of the class properties message (str): String to wrap with the color Returns: str: the colored string
juraj-google-style
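A self-contained sketch of the colour wrapper; the ANSI escape codes used as class properties are assumptions, since the original class body is not shown:

```python
# Hypothetical colour table: class attributes hold ANSI codes, and
# colored() looks one up by name and resets to DEFAULT afterwards.
class Colors:
    RED = '\033[31m'
    GREEN = '\033[32m'
    DEFAULT = '\033[0m'

    @classmethod
    def colored(cls, color, message):
        return getattr(cls, color.upper()) + message + cls.DEFAULT
```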
def commit_author(sha1=''): with conf.within_proj_dir(): cmd = 'git show -s --format="%an||%ae" {}'.format(sha1) result = shell.run(cmd, capture=True, never_pretend=True).stdout (name, email) = result.split('||') return Author(name, email)
Return the author of the given commit. Args: sha1 (str): The sha1 of the commit to query. If not given, it will return the sha1 for the current commit. Returns: Author: A named tuple ``(name, email)`` with the commit author details.
codesearchnet
def remove_config(self, id): url = self._url('/configs/{0}', id) res = self._delete(url) self._raise_for_status(res) return True
Remove a config Args: id (string): Full ID of the config to remove Returns (boolean): True if successful Raises: :py:class:`docker.errors.NotFound` if no config with that ID exists
juraj-google-style
def _transpose_batch_time(x): x_static_shape = x.get_shape() if x_static_shape.rank is not None and x_static_shape.rank < 2: return x x_rank = array_ops.rank(x) x_t = array_ops.transpose(x, array_ops.concat(([1, 0], math_ops.range(2, x_rank)), axis=0)) x_t.set_shape(tensor_shape.TensorShape(...
Transposes the batch and time dimensions of a Tensor. If the input tensor has rank < 2 it returns the original tensor. Retains as much of the static shape information as possible. Args: x: A Tensor. Returns: x transposed along the first two dimensions.
github-repos
def _forward_over_back_hessian(f, params, use_pfor, dtype=None): return _vectorize_parameters(functools.partial(_hvp, f, params), params, use_pfor=use_pfor, dtype=dtype)
Computes the full Hessian matrix for the scalar-valued f(*params). Args: f: A function taking `params` and returning a scalar. params: A possibly nested structure of tensors. use_pfor: If true, uses `tf.vectorized_map` calls instead of looping. dtype: Required if `use_pfor=False`. A possibly nested structure of dtypes...
github-repos
def get(self, *, search, limit=0, headers=None): return self.transport.forward_request( method='GET', path=self.path, params={'search': search, 'limit': limit}, headers=headers )
Retrieves the assets that match a given text search string. Args: search (str): Text search string. limit (int): Limit the number of returned documents. Defaults to zero meaning that it returns all the matching assets. headers (dict): Optional headers to pass to the request. Returns: :obj:`list` of :obj:`dict`: List ...
juraj-google-style
def delete_asset(self, asset_id, asset_type): return self.asset(asset_id, asset_type=asset_type, action='DELETE')
Delete the asset with the provided asset_id. Args: asset_id: The id of the asset. asset_type: The asset type. Returns:
juraj-google-style
def __new__(cls, name, parents, dct): newClass = super(CommandMeta, cls).__new__(cls, name, parents, dct) if name != 'Command': for attribute in ['name', 'description', 'help']: if attribute not in dct or dct[attribute] is None: raise ValueError(...
Creates a new Command class and validates it. Args: cls (Class): the class object being created name (name): the name of the class being created parents (list): list of parent classes dct (dictionary): class attributes Returns: ``Class``
juraj-google-style
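The validating metaclass above can be sketched end to end (the error message wording is an assumption; the original's is truncated):

```python
# Subclasses of Command must define non-None 'name', 'description'
# and 'help' attributes; the check runs at class-creation time and
# skips the abstract Command base itself.
class CommandMeta(type):
    def __new__(cls, name, parents, dct):
        new_class = super().__new__(cls, name, parents, dct)
        if name != 'Command':
            for attribute in ('name', 'description', 'help'):
                if dct.get(attribute) is None:
                    raise ValueError(
                        'Command class %r must define %r' % (name, attribute))
        return new_class

class Command(metaclass=CommandMeta):
    pass
```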
def wrap_in_placeholder(self, arg, shape_info): if shape_info == 'known': return arg if isinstance(arg, ragged_tensor.RaggedTensor): return arg.with_flat_values(self.wrap_in_placeholder(arg.flat_values, shape_info)) if isinstance(arg, tensor_shape.TensorShape): if arg.ndims is None: ...
Wraps `arg` in a placeholder to limit static shape info. Args: arg: The value to wrap. A Tensor, RaggedTensor, or TensorShape. shape_info: One of ['known', 'unknown_dims', 'unknown_rank']. Returns: * If shape_info is 'known': returns `arg`. * If shape_info is 'unknown_dims': returns a placeholder wrapping `arg` wher...
github-repos
def tensor_summary(name, tensor, summary_description=None, collections=None, summary_metadata=None, family=None, display_name=None): if summary_metadata is None: summary_metadata = _SummaryMetadata() if summary_description is not None: summary_metadata.summary_description = summary_description ...
Outputs a `Summary` protocol buffer with a serialized tensor.proto. Args: name: A name for the generated node. If display_name is not set, it will also serve as the tag name in TensorBoard. (In that case, the tag name will inherit tf name scopes.) tensor: A tensor of any type and shape to serialize. summary_descriptio...
github-repos
def _update_graph(self, vertex_dict=None, edge_dict=None): def set_attrs(ref, attrs): for attr_name, attr_val in attrs.items(): ref.set(attr_name, attr_val) with self._lock: if vertex_dict: for vertex, vertex_attrs in vertex_dict.items(): set_attrs(self._...
Updates the pydot.Dot object with the given attribute updates. Args: vertex_dict: (Dict[str, Dict[str, str]]) maps vertex names to attributes edge_dict: either (Dict[str, Dict[str, str]]), which maps edge names to attributes, or (Dict[(str, str), Dict[str, str]]), which maps vertex pairs to edge attributes
github-repos
def invite_by_email(self, email, user, organization, **kwargs): try: invitee = self.user_model.objects.get(email__iexact=email) except self.user_model.DoesNotExist: invitee = None user_invitation = self.invitation_model.objects.create( ...
Primary interface method by which one user invites another to join Args: email: email address of the invitee user: the user issuing the invitation organization: the organization the invitee is invited to join **kwargs: additional fields passed to the invitation Returns: an invitation instance Raises: MultipleObjectsReturned if multiple matching users are found
juraj-google-style
def add_presence_listener(self, callback): listener_uid = uuid4() self.presence_listeners[listener_uid] = callback return listener_uid
Add a presence listener that will send a callback when the client receives a presence update. Args: callback (func(roomchunk)): Callback called when a presence update arrives. Returns: uuid.UUID: Unique id of the listener, can be used to identify the listener.
codesearchnet
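The listener-registry pattern in the row above, sketched as a minimal class (`PresenceRegistry` and `dispatch` are hypothetical names; the original lives on a Matrix client object):

```python
import uuid

# Callbacks are keyed by a fresh UUID so each one can later be
# identified and removed individually.
class PresenceRegistry:
    def __init__(self):
        self.presence_listeners = {}

    def add_presence_listener(self, callback):
        listener_uid = uuid.uuid4()
        self.presence_listeners[listener_uid] = callback
        return listener_uid

    def dispatch(self, event):
        # Fan the event out to every registered callback.
        for callback in self.presence_listeners.values():
            callback(event)
```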
def position(msg0, msg1, t0, t1, lat_ref=None, lon_ref=None): tc0 = typecode(msg0) tc1 = typecode(msg1) if (5<=tc0<=8 and 5<=tc1<=8): if (not lat_ref) or (not lon_ref): raise RuntimeError("Surface position encountered, a reference \ position lat/lon r...
Decode position from a pair of even and odd position message (works with both airborne and surface position messages) Args: msg0 (string): even message (28 bytes hexadecimal string) msg1 (string): odd message (28 bytes hexadecimal string) t0 (int): timestamps for the even message t1 (int): timestamps for the odd messa...
juraj-google-style
def hex_to_name(hexx): for (n, h) in defaults.COLOURS.items(): if ((len(n) > 1) and (h == hexx.upper())): return n.lower() return None
Convert hex to a color name, using matplotlib's colour names. Args: hexx (str): A hexadecimal colour, starting with '#'. Returns: str: The name of the colour, or None if not found.
codesearchnet
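A runnable sketch of the hex-to-name lookup, with a tiny stand-in for matplotlib's colour table (the original iterates `defaults.COLOURS`):

```python
# Assumed miniature colour table; single-letter aliases like 'R' are
# skipped by the len(name) > 1 guard, exactly as in the original.
COLOURS = {'RED': '#FF0000', 'NAVY': '#000080', 'R': '#FF0000'}

def hex_to_name(hexx):
    for name, code in COLOURS.items():
        if len(name) > 1 and code == hexx.upper():
            return name.lower()
    return None
```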
def save_summaries(frames, keys, selected_summaries, batch_dir, batch_name): if not frames: logger.info("Could not save summaries - no summaries to save!") logger.info("You have no frames - aborting") return None if not keys: logger.info("Could not save summaries - no summaries to s...
Writes the summaries to csv-files Args: frames: list of ``cellpy`` summary DataFrames keys: list of indexes (typically run-names) for the different runs selected_summaries: list defining which summary data to save batch_dir: directory to save to batch_name: the batch name (will be used for making the file-name(s)) Re...
juraj-google-style
def check_num_tasks(chain, task_count): errors = [] min_decision_tasks = 1 if (task_count['decision'] < min_decision_tasks): errors.append('{} decision tasks; we must have at least {}!'.format(task_count['decision'], min_decision_tasks)) raise_on_errors(errors)
Make sure there are a specific number of specific task types. Currently we only check decision tasks. Args: chain (ChainOfTrust): the chain we're operating on task_count (dict): mapping task type to the number of links. Raises: CoTError: on failure.
codesearchnet
def load_institute(adapter, internal_id, display_name, sanger_recipients=None): institute_obj = build_institute( internal_id=internal_id, display_name=display_name, sanger_recipients=sanger_recipients ) log.info("Loading institute {0} with display name {1}" \ " int...
Load an institute into the database Args: adapter(MongoAdapter) internal_id(str) display_name(str) sanger_recipients(list(email))
juraj-google-style
def _Identity(tensor, name=None): tensor = ops.internal_convert_to_tensor_or_composite(tensor, as_ref=True) tensor = variable_utils.convert_variables_to_tensors(tensor) if isinstance(tensor, tensor_lib.Tensor): if tensor.dtype._is_ref_dtype: return gen_array_ops.ref_identity(tensor, name...
Return a tensor with the same shape and contents as the input tensor. Args: tensor: A Tensor. name: A name for this operation (optional). Returns: A Tensor with the same type and value as the input Tensor.
github-repos
def __init__(self, input_reader=None, output_writer=None): super(Log2TimelineTool, self).__init__( input_reader=input_reader, output_writer=output_writer) self._command_line_arguments = None self._enable_sigsegv_handler = False self._number_of_extraction_workers = 0 self._storage_serial...
Initializes a log2timeline CLI tool. Args: input_reader (Optional[InputReader]): input reader, where None indicates that the stdin input reader should be used. output_writer (Optional[OutputWriter]): output writer, where None indicates that the stdout output writer should be used.
juraj-google-style
def save_attributes_to_hdf5_group(group, name, data): bad_attributes = [x for x in data if len(x) > HDF5_OBJECT_HEADER_LIMIT] if bad_attributes: raise RuntimeError('The following attributes cannot be saved to HDF5 file because they are larger than %d bytes: %s' % (HDF5_OBJECT_HEADER_LIMIT, ', '.join(bad...
Saves attributes (data) of the specified name into the HDF5 group. This method deals with an inherent problem of HDF5 file which is not able to store data larger than HDF5_OBJECT_HEADER_LIMIT bytes. Args: group: A pointer to a HDF5 group. name: A name of the attributes to save. data: Attributes data to store. Raises...
github-repos
def create_from_binary(cls, load_dataruns, binary_view): (attr_type, attr_len, non_resident, name_len, name_offset, flags, attr_id, start_vcn, end_vcn, rl_offset, compress_usize, alloc_sstream, curr_sstream, init_sstream) = cls._REPR.unpack(binary_view[:cls._REPR.size]) if name_len: name = binary_view[n...
Creates a new object NonResidentAttrHeader from a binary stream. The binary stream can be represented by a byte string, bytearray or a memoryview of the bytearray. Args: load_dataruns (bool) - Indicates if the dataruns are to be loaded binary_view (memoryview of bytearray) - A binary stream with the information of the...
codesearchnet
def gen_encoder_output_proposals(self, enc_output, padding_mask, spatial_shapes): batch_size = enc_output.shape[0] proposals = [] _cur = 0 level_ids = [] for level, (height, width) in enumerate(spatial_shapes): mask_flatten_ = padding_mask[:, _cur:_cur + height * width].view(batch_size, heig...
Generate the encoder output proposals from encoded enc_output. Args: enc_output (Tensor[batch_size, sequence_length, hidden_size]): Output of the encoder. padding_mask (Tensor[batch_size, sequence_length]): Padding mask for `enc_output`. spatial_shapes (Tensor[num_feature_levels, 2]): Spatial shapes of the feature map...
github-repos
def login_with_password(self, username, password, limit=10): warn('login_with_password is deprecated. Use login with sync=True.', DeprecationWarning) return self.login(username, password, limit, sync=True)
Deprecated. Use ``login`` with ``sync=True``. Login to the homeserver. Args: username (str): Account username password (str): Account password limit (int): Deprecated. How many messages to return when syncing. This will be replaced by a filter API in a later release. Returns: str: Access token Raises: MatrixRequest...
codesearchnet
def get(self, record_id): record_url = self.record_url(record_id) return self._get(record_url)
Retrieves a record by its id >>> record = airtable.get('recwPQIfs4wKPyc9D') Args: record_id(``str``): Airtable record id Returns: record (``dict``): Record
juraj-google-style
def parse_config(self, config): prefix = self.argument_prefix self.sources = config.get_sources(prefix) self.smart_sources = [self._get_smart_filename(s) for s in self.sources] self.index = config.get_index(prefix) self.source_roots = OrderedSet(config.get_paths(('%s_source_roots' % prefix))) fo...
Override this, making sure to chain up first, if your extension adds its own custom command line arguments, or you want to do any further processing on the automatically added arguments. The default implementation will set attributes on the extension: - 'sources': a set of absolute paths to source files for this exten...
codesearchnet
def calculate_uncertainty(self, logits: torch.Tensor) -> torch.Tensor: uncertainty_scores = -torch.abs(logits) return uncertainty_scores
In Mask2Former paper, uncertainty is estimated as L1 distance between 0.0 and the logit prediction in 'logits' for the foreground class in `classes`. Args: logits (`torch.Tensor`): A tensor of shape (R, 1, ...) for class-specific or class-agnostic, where R is the total number of predicted masks in all images and C is:...
github-repos
def __similarity(s1, s2, ngrams_fn, n=3): (ngrams1, ngrams2) = (set(ngrams_fn(s1, n=n)), set(ngrams_fn(s2, n=n))) matches = ngrams1.intersection(ngrams2) return ((2 * len(matches)) / (len(ngrams1) + len(ngrams2)))
The fraction of n-grams matching between two sequences Args: s1: a string s2: another string n: an int for the n in n-gram Returns: float: the fraction of n-grams matching
codesearchnet
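The n-gram similarity above is a Dice coefficient over n-gram sets. A minimal standalone sketch, where `char_ngrams` is a hypothetical helper standing in for the `ngrams_fn` argument:

```python
def char_ngrams(s, n=3):
    # Sliding-window character n-grams of s.
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def ngram_similarity(s1, s2, n=3):
    # Dice coefficient over the two n-gram sets: 2*|A & B| / (|A| + |B|).
    ngrams1, ngrams2 = set(char_ngrams(s1, n)), set(char_ngrams(s2, n))
    matches = ngrams1.intersection(ngrams2)
    return 2 * len(matches) / (len(ngrams1) + len(ngrams2))
```

Identical strings score 1.0, disjoint strings 0.0, and partial overlaps fall in between.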
def find_all_template(im_source, im_search, threshold=0.5, maxcnt=0, rgb=False, bgremove=False): method = cv2.TM_CCOEFF_NORMED if rgb: s_bgr = cv2.split(im_search) i_bgr = cv2.split(im_source) weight = (0.3, 0.3, 0.4) resbgr = [0, 0, 0] for i in range(3):...
Locate image position with cv2.templateFind. Use pixel match to find pictures. Args: im_source(string): source image (the image to search in) im_search(string): the template image to look for threshold: similarity threshold; matches scoring below it are ignored Returns: A tuple of found [(point, score), ...] Raises: IOError: when file read error
juraj-google-style
def get_membership(self, uuid=None): group_id = self.get_group_id(uuid=uuid) uri = 'group/{group_id}/member' mbr_data = self.get(uri.format(group_id=group_id), params=None) return mbr_data
Get membership data based on uuid. Args: uuid (str): optional uuid. defaults to self.cuuid Raises: PyLmodUnexpectedData: No data was returned. requests.RequestException: Exception connection error Returns: dict: membership json
codesearchnet
def mktemp(self, container: Container) -> str: logger.debug('creating a temporary file inside container %s', container.uid) response = self.command(container, 'mktemp') if (response.code != 0): msg = 'failed to create temporary file for container {}: [{}] {}' msg = msg.format(container.uid, response.c...
Creates a named temporary file within a given container. Returns: the absolute path to the created temporary file.
codesearchnet
def masks_to_boxes(masks: np.ndarray) -> np.ndarray: if masks.size == 0: return np.zeros((0, 4)) h, w = masks.shape[-2:] y = np.arange(0, h, dtype=np.float32) x = np.arange(0, w, dtype=np.float32) y, x = np.meshgrid(y, x, indexing='ij') x_mask = masks * np.expand_dims(x, axis=0) x_ma...
Compute the bounding boxes around the provided panoptic segmentation masks. Args: masks: masks in format `[number_masks, height, width]` where `number_masks` is the number of masks Returns: boxes: bounding boxes in format `[number_masks, 4]` in xyxy format
github-repos
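The coordinate-grid trick in `masks_to_boxes` reduces to taking min/max pixel coordinates per mask. A simplified pure-Python sketch (assumes each mask contains at least one foreground pixel; same xyxy convention as above):

```python
def masks_to_boxes(masks):
    # masks: list of 2D binary masks (lists of lists of 0/1).
    # Returns one [x_min, y_min, x_max, y_max] box per mask.
    boxes = []
    for mask in masks:
        xs = [x for row in mask for x, v in enumerate(row) if v]
        ys = [y for y, row in enumerate(mask) if any(row)]
        boxes.append([min(xs), min(ys), max(xs), max(ys)])
    return boxes
```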
def session_new(self, **kwargs): path = self._get_path('session_new') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Generate a session id for user based authentication. A session id is required in order to use any of the write methods. Args: request_token: The token you generated for the user to approve. The token needs to be approved before being used here. Returns: A dict respresentation of the JSON returned from the API.
juraj-google-style
def DeregisterPlugin(cls, plugin_class): name = getattr( plugin_class, 'ARTIFACT_DEFINITION_NAME', plugin_class.__name__) name = name.lower() if name not in cls._plugins: raise KeyError( 'Artifact plugin class not set for name: {0:s}.'.format(name)) del cls._plugins[name] ...
Deregisters a preprocess plugin class. Args: plugin_class (type): preprocess plugin class. Raises: KeyError: if plugin class is not set for the corresponding name. TypeError: if the source type of the plugin class is not supported.
juraj-google-style
def smartupgrade(self, restart=True, dependencies=False, prerelease=False): if not self.check(): if self.verbose: print("Package {} already up-to-date!".format(self.pkg)) return if self.verbose: print("Upgrading {} ...".format(self.pkg)) ...
Upgrade the package if there is a later version available. Args: restart: restart app if True dependencies: update package dependencies if True (see pip --no-deps) prerelease: update to pre-release and development versions
juraj-google-style
def save_config(config, logdir=None): if logdir: with config.unlocked: config.logdir = logdir message = 'Start a new run and write summaries and checkpoints to {}.' tf.logging.info(message.format(config.logdir)) tf.gfile.MakeDirs(config.logdir) config_path = os.path.join(config.logdir, 'c...
Save a new configuration by name. If a logging directory is specified, it will be created and the configuration will be stored there. Otherwise, a log message will be printed. Args: config: Configuration object. logdir: Location for writing summaries and checkpoints if specified. Returns: Configuration object.
juraj-google-style
def run(argv=None, save_main_session=True, test_pipeline=None) -> PipelineResult: known_args, pipeline_args = parse_known_args(argv) pipeline_options = PipelineOptions(pipeline_args) pipeline_options.view_as(SetupOptions).save_main_session = save_main_session milk_quality_data = pandas.read_csv(known_ar...
Args: argv: Command line arguments defined for this example. save_main_session: Used for internal testing. test_pipeline: Used for internal testing.
github-repos
def easeInElastic(n, amplitude=1, period=0.3): _checkRange(n) return (1 - easeOutElastic((1 - n), amplitude=amplitude, period=period))
An elastic tween function that begins with an increasing wobble and then snaps into the destination. Args: n (float): The time progress, starting at 0.0 and ending at 1.0. Returns: (float) The line progress, starting at 0.0 and ending at 1.0. Suitable for passing to getPointOnLine().
codesearchnet
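`easeInElastic` mirrors an out-easing around the midpoint. A sketch of the pair using the standard elastic formula, with explicit endpoint guards (an assumption here, so the boundary values are exact):

```python
import math

def ease_out_elastic(n, amplitude=1, period=0.3):
    # Decaying sine wobble that settles at 1.0; endpoints returned exactly.
    if n in (0, 1):
        return n
    if amplitude < 1:
        amplitude, s = 1, period / 4
    else:
        s = period / (2 * math.pi) * math.asin(1 / amplitude)
    return amplitude * 2 ** (-10 * n) * math.sin((n - s) * (2 * math.pi) / period) + 1

def ease_in_elastic(n, amplitude=1, period=0.3):
    # The mirrored form used by the snippet above.
    return 1 - ease_out_elastic(1 - n, amplitude=amplitude, period=period)
```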
def text(self, tag, textdata, step=None): if (step is None): step = self._step else: self._step = step smd = SummaryMetadata(plugin_data=SummaryMetadata.PluginData(plugin_name='text')) if isinstance(textdata, (str, bytes)): tensor = tf.make_tensor_proto(values=[textdata.encode(en...
Saves a text summary. Args: tag: str: label for this data textdata: string, or 1D/2D list/numpy array of strings step: int: training step Note: markdown formatting is rendered by tensorboard.
codesearchnet
def _get_tf2_flags(parser): input_file_group = parser.add_mutually_exclusive_group() input_file_group.add_argument('--saved_model_dir', type=str, help='Full path of the directory containing the SavedModel.') input_file_group.add_argument('--keras_model_file', type=str, help='Full filepath of HDF5 file conta...
Returns ArgumentParser for tflite_convert for TensorFlow 2.0. Args: parser: ArgumentParser
github-repos
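The `add_mutually_exclusive_group` pattern above makes argparse reject command lines that supply more than one input source. A minimal sketch of the same idea (the two flag names are taken from the snippet; the parser itself is illustrative):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser()
    # At most one of the two input sources may appear on a command line.
    group = parser.add_mutually_exclusive_group()
    group.add_argument('--saved_model_dir', type=str,
                       help='Full path of the directory containing the SavedModel.')
    group.add_argument('--keras_model_file', type=str,
                       help='Full filepath of an HDF5 file containing the model.')
    return parser
```

Passing both flags at once makes argparse print an error and exit (a `SystemExit`).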
def _write_input(self, input_dir="."): with open(os.path.join(input_dir, self.input_file), 'wt', encoding="utf-8") as inp: for k, v in self.control_params.items(): inp.write('{} {}\n'.format(k, self._format_param_val(v))) fo...
Write the packmol input file to the input directory. Args: input_dir (string): path to the input directory
juraj-google-style
def get_samples_live_last(self, sensor_id): url = "https: headers = self.__gen_headers() headers["Content-Type"] = "application/json" params = { "sensorId": sensor_id } url = self.__append_url_params(url, params) r = requests.get(url, headers=headers) return r.json()
Get the last sample recorded by the sensor. Args: sensor_id (string): hexadecimal id of the sensor to query, e.g. ``0x0013A20040B65FAD`` Returns: list: dictionary objects containing sample data
juraj-google-style
def recode(self, table: pd.DataFrame, validate=False) -> pd.DataFrame: return self._recode_output(self._recode_input(table, validate=validate), validate=validate)
Pass the appropriate columns through each recoder function sequentially and return the final result. Args: table (pd.DataFrame): A dataframe on which to apply recoding logic. validate (bool): If ``True``, recoded table must pass validation tests.
juraj-google-style
def read_cs_raw_symmetrized_tensors(self): header_pattern = '\\s+-{50,}\\s+\\s+Absolute Chemical Shift tensors\\s+\\s+-{50,}$' first_part_pattern = '\\s+UNSYMMETRIZED TENSORS\\s+$' row_pattern = '\\s+'.join((['([-]?\\d+\\.\\d+)'] * 3)) unsym_footer_pattern = '^\\s+SYMMETRIZED TENSORS\\s+$' with zope...
Parse the matrix form of the NMR tensor before it is corrected into a table. Returns: unsymmetrized tensors list in the order of atoms.
codesearchnet
def save(self, target, format=None, encoding=None, **options): if (encoding is None): encoding = config.DEFAULT_ENCODING if (format is None): (_, format) = helpers.detect_scheme_and_format(target) writer_class = self.__custom_writers.get(format) if (writer_class is None): if (for...
Save stream to the local filesystem. Args: target (str): Path where to save the stream. format (str, optional): The format the stream will be saved as. If None, detects from the ``target`` path. Defaults to None. encoding (str, optional): Saved file encoding. Defaults to ``config.DEFAULT_ENCODING``. **options: Extra o...
codesearchnet
def single_device(cl_device_type='GPU', platform=None, fallback_to_any_device_type=False): if isinstance(cl_device_type, str): cl_device_type = device_type_from_string(cl_device_type) device = None if (platform is None): platforms = cl.get_platforms() else: platforms = [platform]...
Get a list containing a single device environment, for a device of the given type on the given platform. This will only fetch devices that support double precision (possibly requiring a pragma to be defined, but double must be supported). Args: cl_device_type (cl.device_type.* or string): The type of the device we want...
codesearchnet
def emit_region(self, timestamp: int, duration: int, pid: int, tid: int, category: str, name: str, args: Dict[str, Any]) -> None: event = self._create_event('X', category, name, pid, tid, timestamp) event['dur'] = duration event['args'] = args self._events.append(event)
Adds a region event to the trace. Args: timestamp: The start timestamp of this region as a long integer. duration: The duration of this region as a long integer. pid: Identifier of the process generating this event as an integer. tid: Identifier of the thread generating this event as an integer. category: The even...
github-repos
def readMonthTariffs(self, months_type): self.setContext("readMonthTariffs") try: req_type = binascii.hexlify(str(months_type).zfill(1)) req_str = "01523102303031" + req_type + "282903" work_table = self.m_mons if months_type == ReadMonths.kWhRev...
Serial call to read month tariffs block into meter object buffer. Args: months_type (int): A :class:`~ekmmeters.ReadMonths` value. Returns: bool: True on completion.
juraj-google-style
def accuracy_score(gold, pred, ignore_in_gold=[], ignore_in_pred=[]): gold, pred = _preprocess(gold, pred, ignore_in_gold, ignore_in_pred) if len(gold) and len(pred): acc = np.sum(gold == pred) / len(gold) else: acc = 0 return acc
Calculate (micro) accuracy. Args: gold: A 1d array-like of gold labels pred: A 1d array-like of predicted labels (assuming abstain = 0) ignore_in_gold: A list of labels for which elements having that gold label will be ignored. ignore_in_pred: A list of labels for which elements having that pred label will be ignored. ...
juraj-google-style
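The `_preprocess` step referenced by `accuracy_score` drops ignored labels before computing micro accuracy. A pure-Python sketch of the combined behavior (assuming `_preprocess` simply filters positions by the ignore lists):

```python
def accuracy_score(gold, pred, ignore_in_gold=(), ignore_in_pred=()):
    # Drop positions whose gold or pred label is in the ignore lists,
    # then compute micro accuracy over what remains.
    pairs = [(g, p) for g, p in zip(gold, pred)
             if g not in ignore_in_gold and p not in ignore_in_pred]
    if not pairs:
        return 0
    return sum(g == p for g, p in pairs) / len(pairs)
```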
def reassign_label(cls, destination_cluster, label): conn = Qubole.agent(version=Cluster.api_version) data = {'destination_cluster': destination_cluster, 'label': label} return conn.put((cls.rest_entity_path + '/reassign-label'), data)
Reassign a label from one cluster to another. Args: `destination_cluster`: id/label of the cluster to move the label to `label`: label to be moved from the source cluster
codesearchnet
async def runCmdLine(self, line): if self.echoline: self.outp.printf(f'{self.cmdprompt}{line}') ret = None name = line.split(None, 1)[0] cmdo = self.getCmdByName(name) if (cmdo is None): self.printf(('cmd not found: %s' % (name,))) return try: ret = (await cmdo.ru...
Run a single command line. Args: line (str): Line to execute. Examples: Execute the 'woot' command with the 'help' switch: await cli.runCmdLine('woot --help') Returns: object: Arbitrary data from the cmd class.
codesearchnet
def sub_location(self, nbr): assert (nbr > (- 1)), 'Sub location number must be greater or equal to 0!' assert (nbr < (self.nbr_of_sub_locations() - 1)), ('Sub location number must be lower than %d!' % (self.nbr_of_sub_locations() - 1)) return self._locations_list[nbr]
Return a given sub location, 0-based. Args: nbr: 0-based index of the sub location to return. Returns: the sub location at the given index.
codesearchnet
def read_avg_core_poten(self): def pairwise(iterable): 's -> (s0, s1), (s2, s3), ...' a = iter(iterable) return zip(a, a) with zopen(self.filename, 'rt') as foutcar: line = foutcar.readline() aps = [] while (line != ''): line = foutcar.readlin...
Read the core potential at each ionic step. Returns: A list for each ionic step containing a list of the average core potentials for each atom: [[avg core pot]]. Example: The average core potential of the 2nd atom of the structure at the last ionic step is: [-1][1]
codesearchnet
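The nested `pairwise` helper in the `read_avg_core_poten` snippet shares a single iterator between both arguments of `zip`, so consecutive items are consumed in non-overlapping chunks rather than a sliding window. A standalone sketch:

```python
def pairwise(iterable):
    # One iterator feeds both zip slots, so each call to zip pulls two
    # fresh items: s -> (s0, s1), (s2, s3), ...  A trailing odd item is dropped.
    a = iter(iterable)
    return zip(a, a)
```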
def disconnect(self, container, *args, **kwargs): if isinstance(container, Container): container = container.id return self.client.api.disconnect_container_from_network( container, self.id, *args, **kwargs )
Disconnect a container from this network. Args: container (str): Container to disconnect from this network, as either an ID, name, or :py:class:`~docker.models.containers.Container` object. force (bool): Force the container to disconnect from a network. Default: ``False`` Raises: :py:class:`docker.errors.APIError` If...
juraj-google-style
def WriteEvent(self, event): self.WriteEventStart() try: self.WriteEventBody(event) except errors.NoFormatterFound as exception: error_message = 'unable to retrieve formatter with error: {0!s}'.format( exception) self._ReportEventError(event, error_message) except err...
Writes the event to the output. Args: event (EventObject): event.
juraj-google-style
def get_enumerations_from_bit_mask(enumeration, mask): return [x for x in enumeration if ((x.value & mask) == x.value)]
A utility function that creates a list of enumeration values from a bit mask for a specific mask enumeration class. Args: enumeration (class): The enumeration class from which to draw enumeration values. mask (int): The bit mask from which to identify enumeration values. Returns: list: A list of enumeration values co...
codesearchnet
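The bit-mask filter above keeps every enumeration member whose bits are fully contained in the mask. A sketch with a hypothetical flag enumeration (`StorageStatus` is invented for this illustration):

```python
import enum

class StorageStatus(enum.IntEnum):
    # Hypothetical flag values used only to demonstrate the filter.
    ONLINE = 0x01
    READABLE = 0x02
    WRITABLE = 0x04

def get_enumerations_from_bit_mask(enumeration, mask):
    # A member matches when masking it leaves its value unchanged,
    # i.e. all of its bits are set in the mask.
    return [x for x in enumeration if (x.value & mask) == x.value]
```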
def _process_worker(call_queue, result_queue, shutdown): while True: try: call_item = call_queue.get(block=True, timeout=0.1) except queue.Empty: if shutdown.is_set(): return else: try: r = call_item() excep...
Evaluates calls from call_queue and places the results in result_queue. This worker is run in a separate process. Args: call_queue: A multiprocessing.Queue of _CallItems that will be read and evaluated by the worker. result_queue: A multiprocessing.Queue of _ResultItems that will be written to by the worker. shutdown: A...
juraj-google-style
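The worker's poll-with-timeout loop generalizes beyond multiprocessing. A thread-based sketch of the same pattern, swapped to `threading`/`queue` so it is self-contained (the original uses multiprocessing primitives with the same shape):

```python
import queue

def worker(call_queue, result_queue, shutdown):
    # Poll for work; exit only when a poll comes up empty AND shutdown is set,
    # so items enqueued before shutdown are still processed.
    while True:
        try:
            call_item = call_queue.get(block=True, timeout=0.1)
        except queue.Empty:
            if shutdown.is_set():
                return
            continue
        try:
            result_queue.put(call_item())
        except Exception as exc:  # report failures instead of killing the worker
            result_queue.put(exc)
```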
def _infer_shape(self, dimensions): n = np.prod(dimensions) m = np.prod(abs(np.array(self._shape))) v = np.array(self._shape) v[v == -1] = n
Replaces the -1 wildcard in the output shape vector. This function infers the correct output shape given the input dimensions. Args: dimensions: List of input non-batch dimensions. Returns: Tuple of non-batch output dimensions.
codesearchnet
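The -1 wildcard is resolved by dividing the total element count by the product of the known (absolute) shape entries. A numpy-free sketch of the same inference:

```python
import math

def infer_shape(dimensions, shape):
    # Replace the single -1 wildcard in `shape` so the total number of
    # elements matches prod(dimensions); abs() makes the -1 contribute 1.
    n = math.prod(dimensions)
    m = math.prod(abs(d) for d in shape)
    return tuple(n // m if d == -1 else d for d in shape)
```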
def get_servo_temperature(self): data = [] data.append(0x09) data.append(self.servoid) data.append(RAM_READ_REQ) data.append(TEMPERATURE_RAM) data.append(BYTE2) send_data(data) rxdata = [] try: rxdata = SERPORT.read(13) ...
Gets the current temperature of Herkulex Args: none Returns: int: the current temperature register of Herkulex Raises: SerialException: Error occured while opening serial port
juraj-google-style
def __rmod__(self, other): try: other = as_dimension(other) except (TypeError, ValueError): return NotImplemented return other % self
Returns `other` modulo `self`. Args: other: Another Dimension, or a value accepted by `as_dimension`. Returns: A Dimension whose value is `other` modulo `self`.
juraj-google-style
def __init__(self, datastore_client, entity_kind_batches, entity_kind_images): self._datastore_client = datastore_client self._entity_kind_batches = entity_kind_batches self._entity_kind_images = entity_kind_images self._data = {}
Initialize ImageBatchesBase. Args: datastore_client: instance of the CompetitionDatastoreClient entity_kind_batches: Cloud Datastore entity kind which is used to store batches of images. entity_kind_images: Cloud Datastore entity kind which is used to store individual images.
juraj-google-style
def generators_from_logdir(logdir): subdirs = io_wrapper.GetLogdirSubdirectories(logdir) generators = [ itertools.chain(*[ generator_from_event_file(os.path.join(subdir, f)) for f in tf.io.gfile.listdir(subdir) if io_wrapper.IsTensorFlowEventsFile(os.path.join(subdir, f)) ...
Returns a list of event generators for subdirectories with event files. The number of generators returned should equal the number of directories within logdir that contain event files. If only logdir contains event files, returns a list of length one. Args: logdir: A log directory that contains event files. Returns:...
juraj-google-style