Columns: code (string, lengths 20–4.93k) · docstring (string, lengths 33–1.27k) · source (string, 3 classes)
def get(self, key, default=None): if (key.count('.') == 0): return super(DotDict, self).get(key, default) value = default (first, remainder) = key.split('.', 1) if (first in self): value = super(DotDict, self).get(first, default) if isinstance(value, (dict, DotDict)): ...
Get a value from the `DotDict`. The `key` parameter can either be a regular string key, e.g. "foo", or it can be a string key with dot notation, e.g. "foo.bar.baz", to signify a nested lookup. The default value is returned if any level of the key's components are not found. Args: key (str): The key to get the value ...
codesearchnet
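The truncated `DotDict.get` above resolves dotted keys one level at a time; a minimal runnable sketch of that pattern (simplified from the row's snippet, not the full implementation):

```python
class DotDict(dict):
    """A dict whose get() accepts dotted keys like "foo.bar.baz" for nested lookup."""

    def get(self, key, default=None):
        # Plain key: fall back to normal dict behaviour.
        if '.' not in key:
            return super().get(key, default)
        first, remainder = key.split('.', 1)
        if first not in self:
            return default
        value = super().get(first)
        # Recurse only if the intermediate value is itself a mapping.
        if isinstance(value, dict):
            return DotDict(value).get(remainder, default)
        return default

d = DotDict({'foo': {'bar': {'baz': 42}}})
```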
def create_interconnect(self, location_entries, timeout=(- 1)): return self._helper.create(location_entries, uri=self.locations_uri, timeout=timeout)
Creates an interconnect at the given location. Warning: It does not create the LOGICAL INTERCONNECT itself. It will fail if no interconnect is already present at the specified position. Args: location_entries (dict): Dictionary with location entries. timeout: Timeout in seconds. Wait for task completion by default. T...
codesearchnet
def identify(self, token): payload = { 'op': 2, 'd': { 'token': self.token, 'properties': { '$os': sys.platform, '$browser': 'legobot', '$device': 'legobot' }, ...
Identifies to the websocket endpoint Args: token (string): Discord bot token
juraj-google-style
def assert_split_at_fraction_exhaustive(source, start_position=None, stop_position=None, perform_multi_threaded_test=True): expected_items = read_from_source(source, start_position, stop_position) if not expected_items: raise ValueError('Source %r is empty.' % source) if len(expected_items) == 1: ...
Performs and tests dynamic work rebalancing exhaustively. Asserts that for each possible start position, a source can be split at every interesting fraction (halfway between two fractions that differ by at least one item) and the results are consistent if a split succeeds. Verifies multi threaded splitting as well. A...
github-repos
def search(cls, session, queries, out_type): cls._check_implements('search') domain = cls.get_search_domain(queries) return cls(('/search/%s.json' % cls.__endpoint__), data={'query': str(domain)}, session=session, out_type=out_type)
Search for a record given a domain. Args: session (requests.sessions.Session): Authenticated session. queries (helpscout.models.Domain or iter): The queries for the domain. If a ``Domain`` object is provided, it will simply be returned. Otherwise, a ``Domain`` object will be generated from the complex queries. In this...
codesearchnet
def _pack_images(images, rows, cols): shape = onp.shape(images) (width, height, depth) = shape[(- 3):] images = onp.reshape(images, ((- 1), width, height, depth)) batch = onp.shape(images)[0] rows = onp.minimum(rows, batch) cols = onp.minimum((batch // rows), cols) images = images[:(rows * cols)] image...
Helper utility to make a tiled field of images from numpy arrays. Args: images: Image tensor in shape [N, W, H, C]. rows: Number of images per row in tiled image. cols: Number of images per column in tiled image. Returns: A tiled image of shape [W * rows, H * cols, C]. Truncates incomplete rows.
codesearchnet
def remove_pos_arg_placeholders(alias_command): split_command = shlex.split(alias_command) boundary_index = len(split_command) for (i, subcommand) in enumerate(split_command): if ((not re.match('^[a-z]', subcommand.lower())) or (i > COLLISION_CHECK_LEVEL_DEPTH)): boundary_index = i ...
Remove positional argument placeholders from alias_command. Args: alias_command: The alias command to remove from. Returns: The alias command string without positional argument placeholder.
codesearchnet
def get_special_tokens_mask(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None, already_has_special_tokens: bool=False) -> List[int]: if already_has_special_tokens: return super().get_special_tokens_mask(token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True) ...
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_spe...
github-repos
def get_key(self, match_key, num_results=1, best=True, **dfilter): return get_key(match_key, self.keys(), num_results=num_results, best=best, **dfilter)
Get multiple fully-specified keys that match the provided query. Args: key (DatasetID): DatasetID of query parameters to use for searching. Any parameter that is `None` is considered a wild card and any match is accepted. Can also be a string representing the dataset name or a number representing the dataset wavelengt...
codesearchnet
def CheckPath(self, path, path_segment_separator=None): if (not self._case_sensitive): path = path.lower() if (path_segment_separator is None): path_segment_separator = self._path_segment_separator path_segments = path.split(path_segment_separator) number_of_path_segments = len(path_segm...
Checks if a path matches the scan tree-based path filter. Args: path: a string containing the path. path_segment_separator: optional string containing the path segment separator. None defaults to the path segment separator that was set when the path filter scan tree was initialized. Returns: A boolean indicating if t...
codesearchnet
def post(self, service, data): url = self._url_format(service) data = Base._data_to_json(data) headers = {'content-type': 'application/json'} return self.rest_action(self._session.post, url, data=data, headers=headers)
Generic POST operation for sending data to Learning Modules API. Data should be a JSON string or a dict. If it is not a string, it is turned into a JSON string for the POST body. Args: service (str): The endpoint service to use, i.e. gradebook data (json or dict): the data payload Raises: requests.RequestException:...
codesearchnet
def save_state_regularly(self, fname, frequency=600): self.save_state(fname) loop = asyncio.get_event_loop() self.save_state_loop = loop.call_later(frequency, self.save_state_regularly, fname, ...
Save the state of node with a given regularity to the given filename. Args: fname: File name to save regularly to frequency: Frequency in seconds that the state should be saved. By default, 10 minutes.
juraj-google-style
def latch_config_variables(self): return {desc.name: desc.latch() for desc in self._config_variables.values()}
Latch the current value of all config variables as python objects. This function will capture the current value of all config variables at the time that this method is called. It must be called after start() has been called so that any default values in the config variables have been properly set otherwise DataError ...
codesearchnet
def is_end_node(node): return (isinstance(node, ast.Expr) and isinstance(node.value, ast.Name) and node.value.id == 'end')
Checks if a node is the "end" keyword. Args: node: AST node. Returns: True if the node is the "end" keyword, otherwise False.
juraj-google-style
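The `is_end_node` check above is self-contained and can be exercised directly against parsed source; a short usage sketch:

```python
import ast

def is_end_node(node):
    """True when an AST statement is the bare name `end` used as a keyword."""
    return (isinstance(node, ast.Expr)
            and isinstance(node.value, ast.Name)
            and node.value.id == 'end')

# Parse a tiny body and check each top-level statement.
tree = ast.parse("x = 1\nend\n")
flags = [is_end_node(stmt) for stmt in tree.body]
```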
def _next_layer_gather_index(bc, original_rp, broadcast_rp): old_value_rowids = array_ops.gather(bc.gather_index, broadcast_rp.value_rowids()) def gi_no_broadcast(): old_row_starts = array_ops.gather(original_rp.row_splits(), old_value_rowids) expected_row_lengths = array_ops.gather(params=orig...
Create the next layer gather_index whether or not a broadcast happens.

    *----------bc-------->*
    |                     |
original_rp         broadcast_rp
    |                     |
   \|/                   \|/
    *--next_broadcaster-->*

Args: bc: the old broadcaster. original_rp: the original row partition. broadcast_rp: the ...
github-repos
def set(self, *args): assert (len(args) in (1, 2)) if (len(args) == 1): value = args[0] self._impl.set(value) else: (index, value) = args if isinstance(value, Real): self._impl.setTplDbl(Tuple(index)._impl, value) elif isinstance(value, basestring): ...
Set the value of a single instance of this parameter. Args: args: value if the parameter is scalar, index and value otherwise. Raises: RuntimeError: If the entity has been deleted in the underlying AMPL. TypeError: If the parameter is not scalar and the index is not provided.
codesearchnet
def has_title(self, title, **kwargs): try: self.assert_title(title, **kwargs) return True except ExpectationNotMet: return False
Checks if the page has the given title. Args: title (str | RegexObject): The string or regex that the title should match. **kwargs: Arbitrary keyword arguments for :class:`TitleQuery`. Returns: bool: Whether it matches.
juraj-google-style
def draw_text(img, pos, text, color, font_scale=0.4): img = img.astype(np.uint8) (x0, y0) = (int(pos[0]), int(pos[1])) font = cv2.FONT_HERSHEY_SIMPLEX ((text_w, text_h), _) = cv2.getTextSize(text, font, font_scale, 1) if ((x0 + text_w) > img.shape[1]): x0 = (img.shape[1] - text_w) if ((y...
Draw text on an image. Args: pos (tuple): x, y; the position of the text text (str): font_scale (float): color (tuple): a 3-tuple BGR color in [0, 255]
codesearchnet
def run(self, gin): with ScratchDir("."): p = subprocess.Popen( self._gulp_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE ) out, err = p.communicate(bytearray(gin, "utf-8")) out = out.decode("ut...
Run GULP using the gin as input Args: gin: GULP input string Returns: gout: GULP output string
juraj-google-style
async def init(self, name, conf=None): tank = self.tanks.get(name) if (tank is not None): return tank iden = s_common.guid() logger.info('Creating new tank: %s', name) path = s_common.genpath(self.dirn, 'tanks', iden) tank = (await CryoTank.anit(path, conf)) node = (await self.names....
Generate a new CryoTank with a given name or get a reference to an existing CryoTank. Args: name (str): Name of the CryoTank. Returns: CryoTank: A CryoTank instance.
codesearchnet
def load_disease_terms(adapter, genemap_lines, genes=None, hpo_disease_lines=None): if (not genes): genes = adapter.genes_by_alias() disease_terms = get_mim_phenotypes(genemap_lines=genemap_lines) if (not hpo_disease_lines): hpo_disease_lines = fetch_hpo_phenotype_to_terms() hpo_diseases...
Load the omim phenotypes into the database Parse the phenotypes from genemap2.txt and find the associated hpo terms from ALL_SOURCES_ALL_FREQUENCIES_diseases_to_genes_to_phenotypes.txt. Args: adapter(MongoAdapter) genemap_lines(iterable(str)) genes(dict): Dictionary with all genes found in database hpo_disease_lines(...
codesearchnet
def contains(self, key): try: self._api.objects_get(self._bucket, key) except datalab.utils.RequestException as e: if (e.status == 404): return False raise e except Exception as e: raise e return True
Checks if the specified item exists. Args: key: the key of the item to lookup. Returns: True if the item exists; False otherwise. Raises: Exception if there was an error requesting information about the item.
codesearchnet
def is_compatible(self, other: 'ValueSpec') -> bool:
Returns True if values acceptable to `other` are acceptable to this spec. Args: other: Other value spec. Returns: True if values that are applicable to the other value spec can be applied to the current spec. Otherwise False.
github-repos
def list_(return_yaml=True, include_pillar=True, include_opts=True, **kwargs): beacons = None try: eventer = salt.utils.event.get_event('minion', opts=__opts__) res = __salt__['event.fire']({'func': 'list', 'include_pillar': include_pillar, 'include_opts': include_opts}, 'manage_beacons') ...
List the beacons currently configured on the minion. Args: return_yaml (bool): Whether to return YAML formatted output, default ``True``. include_pillar (bool): Whether to include beacons that are configured in pillar, default is ``True``. include_opts (bool): Whether to include beacons that are configured in opts,...
codesearchnet
class AutoformerFeatureEmbedder(nn.Module): def __init__(self, cardinalities: List[int], embedding_dims: List[int]) -> None: super().__init__() self.num_features = len(cardinalities) self.embedders = nn.ModuleList([nn.Embedding(c, d) for c, d in zip(cardinalities, embedding_dims)]) def...
Embed a sequence of categorical features. Args: cardinalities (`list[int]`): List of cardinalities of the categorical features. embedding_dims (`list[int]`): List of embedding dimensions of the categorical features.
github-repos
def _ContainsNone(self, fail_verb, excluded): present = [] if len(excluded) == 1: if excluded[0] in self._actual: present.extend(excluded) elif excluded: try: actual_set = set(self._actual) except TypeError: actual_set = self._actual for i ...
Determines if the subject contains none of the excluded elements. Helper function for ContainsNoneIn() and ContainsNoneOf(). Args: fail_verb: string describing how the excluded elements should be excluded. excluded: iterable of objects that should not be contained in the subject. Returns: None if the subject contain...
github-repos
def read(in_path): assert os.path.exists(in_path), "The following GRP file can't be found. in_path: {}".format(in_path) with open(in_path, 'r') as f: lines = f.readlines() grp = [line.strip() for line in lines if (line and (not re.match('^#', line)))] return grp
Read a grp file at the path specified by in_path. Args: in_path (string): path to GRP file Returns: grp (list)
codesearchnet
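The comment-filtering step in the GRP reader above can be tested in isolation; a small sketch (the `parse_grp` helper name is hypothetical, factored out from the row's snippet):

```python
import re

def parse_grp(lines):
    """Keep non-empty lines that are not '#' comments, stripped of whitespace."""
    return [line.strip() for line in lines
            if line.strip() and not re.match(r'^#', line)]

grp = parse_grp(['# header\n', 'GENE1\n', '\n', 'GENE2\n'])
```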
def validate(self, proxy_scanner, expected_num=20, queue_timeout=3, val_timeout=5): while (self.proxy_num() < expected_num): try: candidate_proxy = proxy_scanner.proxy_queue.get(timeout=queue_timeout) except queue.Empty: if proxy_scanner.is_scanning(): continu...
Target function of validation threads Args: proxy_scanner: A ProxyScanner object. expected_num: Max number of valid proxies to be scanned. queue_timeout: Timeout for getting a proxy from the queue. val_timeout: An integer passed to `is_valid` as argument `timeout`.
codesearchnet
def __init__(self, task_queue, verbose=True): multiprocessing.Process.__init__(self) self._task_queue = task_queue self.total_task = self._task_queue.qsize() self.current_state = None self.verbose = verbose
Construct an instance of TaskTracker Args: task_queue (multiprocessing.JoinableQueue): A queue of the input data. verbose (bool, optional): Set to False to disable verbose output.
juraj-google-style
def _update_task(self, task): self.task = task self.task.data.update(self.task_data) self.task_type = task.task_spec.__class__.__name__ self.spec = task.task_spec self.task_name = task.get_name() self.activity = getattr(self.spec, 'service_class', '') self._set_lane_data()
Assigns current task step to self.task then updates the task's data with self.task_data Args: task: Task object.
codesearchnet
def check_partitioners(partitioners, keys): if (partitioners is None): return {} _assert_is_dictlike(partitioners, valid_keys=keys) keys = set(keys) if (not (set(partitioners) <= keys)): extra_keys = (set(partitioners) - keys) raise KeyError('Invalid partitioner keys {}, partitio...
Checks the given partitioners. This checks that `partitioners` is a dictionary that only contains keys in `keys`, and furthermore the entries in `partitioners` are functions or further dictionaries (the latter used, for example, in passing partitioners to modules inside modules) that must satisfy the same constraints....
codesearchnet
def most_recent(path, startswith=None, endswith=None): candidate_files = [] for filename in all_files_in_directory(path): if startswith and not os.path.basename(filename).startswith(startswith): continue if endswith and not filename.endswith(endswith): continue ...
Recursively inspect all files under a directory and return the most recent Args: path (str): the path of the directory to traverse startswith (str): prefix the file name must start with (optional) endswith (str): suffix the file name must end with (optional) Returns: the most recent file within the subdirectory
juraj-google-style
def remove(self, keys, name=None): if keys.dtype != self._key_dtype: raise TypeError(f'Dtype of argument `keys` must be {self._key_dtype}, received: {keys.dtype}') with ops.name_scope(name, '%s_lookup_table_remove' % self.name, (self.resource_handle, keys, self._default_value)): op = gen_lookup_...
Removes `keys` and its associated values from the table. If a key is not present in the table, it is silently ignored. Args: keys: Keys to remove. Can be a tensor of any shape. Must match the table's key type. name: A name for the operation (optional). Returns: The created Operation. Raises: TypeError: when `keys` ...
github-repos
def copy_results(self, copy_to_dir, rename_model_to=None, force_rerun=False): if (not rename_model_to): rename_model_to = self.model_to_use new_model_path = op.join(copy_to_dir, '{}.pdb'.format(rename_model_to)) if self.structure_path: if ssbio.utils.force_rerun(flag=force_rerun, outfile=new...
Copy the raw information from I-TASSER modeling to a new folder. Copies all files in the list _attrs_to_copy. Args: copy_to_dir (str): Directory to copy the minimal set of results per sequence. rename_model_to (str): New file name (without extension) force_rerun (bool): If existing models and results should be overwr...
codesearchnet
def __init__(self): self._last_step_outputs = {} self._last_step_outputs_reduce_ops = {} self._non_tensor_outputs = {}
Initialize an output context. Returns: A context object.
github-repos
async def is_try_or_pull_request(context, task): if is_github_task(task): return await is_pull_request(context, task) else: return is_try(task, context.config['source_env_prefix'])
Determine if a task is a try or a pull-request-like task (restricted privs). Checks are the ones done in ``is_try`` and ``is_pull_request`` Args: context (scriptworker.context.Context): the scriptworker context. task (dict): the task definition to check. Returns: bool: True if it's a pull-request or a try task
juraj-google-style
def _MergeIdenticalCaseInsensitive(self, a, b): if (a.lower() != b.lower()): raise MergeError(("values must be the same (case insensitive) ('%s' vs '%s')" % (transitfeed.EncodeUnicode(a), transitfeed.EncodeUnicode(b)))) return b
Tries to merge two strings. The strings are required to be the same ignoring case. The second string is always used as the merged value. Args: a: The first string. b: The second string. Returns: The merged string. This is equal to the second string. Raises: MergeError: The strings were not the same ignoring case.
codesearchnet
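The case-insensitive merge above is simple enough to sketch standalone (function and exception names lifted from the row; the `transitfeed.EncodeUnicode` call is dropped for self-containment):

```python
class MergeError(Exception):
    pass

def merge_identical_case_insensitive(a, b):
    """Merge two strings that must match ignoring case; the second always wins."""
    if a.lower() != b.lower():
        raise MergeError(
            "values must be the same (case insensitive) (%r vs %r)" % (a, b))
    return b

merged = merge_identical_case_insensitive('Main St', 'MAIN ST')
```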
def reset_logger(name, level=None, handler=None): if level is None: level = logging.INFO logger = logging.getLogger(name) logger.setLevel(level) handler = handler or logging.StreamHandler() handler.setFormatter(logging.Formatter(_DEFAULT_LOG_FORMAT)) logger.handlers = [handler] ret...
Make a standard python logger object with default formatter, handler, etc. Defaults are: - level == logging.INFO - handler == logging.StreamHandler() Args: name: a logger name. level: an optional initial log level for this logger. handler: an optional initial handler for this logger. Returns: a standard python logge...
juraj-google-style
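The `reset_logger` row above replaces a logger's handlers wholesale; a runnable sketch (the `_DEFAULT_LOG_FORMAT` value is an assumption, since the original constant is not shown):

```python
import logging

_DEFAULT_LOG_FORMAT = '%(levelname)s %(name)s: %(message)s'  # assumed format

def reset_logger(name, level=None, handler=None):
    """Configure a logger with a single handler and a default formatter."""
    if level is None:
        level = logging.INFO
    logger = logging.getLogger(name)
    logger.setLevel(level)
    handler = handler or logging.StreamHandler()
    handler.setFormatter(logging.Formatter(_DEFAULT_LOG_FORMAT))
    # Replace any handlers added by earlier calls.
    logger.handlers = [handler]
    return logger

log = reset_logger('demo')
```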
def genes_by_alias(hgnc_genes): alias_genes = {} for hgnc_id in hgnc_genes: gene = hgnc_genes[hgnc_id] hgnc_symbol = gene['hgnc_symbol'] for alias in gene['previous_symbols']: true_id = None if (alias == hgnc_symbol): true_id = hgnc_id ...
Return a dictionary with hgnc symbols as keys Value of the dictionaries are information about the hgnc ids for a symbol. If the symbol is primary for a gene then 'true_id' will exist. A list of hgnc ids that the symbol points to is in ids. Args: hgnc_genes(dict): a dictionary with hgnc_id as key and gene info as valu...
codesearchnet
def is_stopped(self): resp = self._client.send(Request(action='is_dag_stopped', payload={'dag_name': self._dag_name})) return resp.payload['is_stopped']
Check whether the task received a stop signal from the workflow. Tasks can use the stop flag to gracefully terminate their work. This is particularly important for long running tasks and tasks that employ an infinite loop, such as trigger tasks. Returns: bool: True if the task should be stopped.
codesearchnet
def dispatch(self, message): for validator, callback in self.validators: if not validator.matches(message): continue callback(message) return raise ArgumentError("No handler was registered for message", message=message)
Dispatch a message to a callback based on its schema. Args: message (dict): The message to dispatch
juraj-google-style
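The dispatch loop above tries each registered validator in order and raises when none matches; a minimal sketch of the pattern (the `Dispatcher` class and plain-predicate validators are simplifications, assumed for illustration):

```python
class ArgumentError(Exception):
    pass

class Dispatcher:
    """Schema-based dispatch: the first validator that matches wins."""

    def __init__(self):
        self.validators = []  # list of (predicate, callback) pairs

    def add(self, predicate, callback):
        self.validators.append((predicate, callback))

    def dispatch(self, message):
        for predicate, callback in self.validators:
            if predicate(message):
                return callback(message)
        raise ArgumentError("No handler was registered for message")

d = Dispatcher()
d.add(lambda m: m.get('type') == 'ping', lambda m: 'pong')
result = d.dispatch({'type': 'ping'})
```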
def register(self, cmd: Type[Command]) -> None: self.commands[cmd.command] = cmd
Register a new IMAP command. Args: cmd: The new command type.
codesearchnet
def editline_with_regex(self, regex_tgtline, to_replace): for (idx, line) in enumerate(self._swp_lines): mobj = re.match(regex_tgtline, line) if mobj: self._swp_lines[idx] = to_replace return
find the first matched line, then replace it Args: regex_tgtline (str): regular expression used to match the target line to_replace (str): the line to use as the replacement
codesearchnet
def exists(self, path): self.__validate_storage_path(path) try: metadata = self.api_client.get_entity_by_query(path=path) except StorageNotFoundException: return False return metadata and 'uuid' in metadata
Check if a certain path exists in the storage service. Args: path (str): The path to be checked Returns: True if the path exists, False otherwise Raises: StorageArgumentException: Invalid arguments StorageForbiddenException: Server response code 403 StorageNotFoundException: Server response code 404 StorageException...
juraj-google-style
def get_lock_state_transaction(self, transaction_id): response = None try: response = requests.get( urls.get_lockstate_transaction(self._giid, transaction_id), headers={ 'Accept': 'application/json, text/javascript, */*; q=0.01', ...
Get lock state transaction status Args: transaction_id: Transaction ID received from set_lock_state
juraj-google-style
def ping(hostname: str, timeout_s: int = 5) -> bool: if sys.platform == "win32": timeout_ms = timeout_s * 1000 args = [ "ping", hostname, "-n", "1", "-w", str(timeout_ms), ] elif sys.platform.startswith('linux'): args = [ ...
Pings a host, using OS tools. Args: hostname: host name or IP address timeout_s: timeout in seconds Returns: was the ping successful?
juraj-google-style
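The ping row above builds different argv vectors per platform (Windows `-n`/`-w` in milliseconds vs Linux `-c`/`-W` in seconds); a sketch of just the argument construction, with no network call (`build_ping_args` is a hypothetical helper name):

```python
import sys

def build_ping_args(hostname, timeout_s=5):
    """Build the platform-specific ping argv; flags differ between OSes."""
    if sys.platform == "win32":
        # Windows: -n count, -w timeout in milliseconds.
        return ["ping", hostname, "-n", "1", "-w", str(timeout_s * 1000)]
    # Linux: -c count, -W timeout in seconds.
    return ["ping", hostname, "-c", "1", "-W", str(timeout_s)]

args = build_ping_args("127.0.0.1", timeout_s=2)
```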
def setup_config(cfg, config_filenames=None, env_var_name=None): if (env_var_name is None): env_var_name = 'BB_CONFIG_FILE' config_path = os.getenv(env_var_name, None) if (not config_path): config_path = find_config(defaults=config_filenames) if config_path: cfg.load(config_path)...
This will initialize the given configuration object. The following resources are available in the same order: 1) Default settings. 2) Config file. 3) Environment variables. WARNING: Environment variables do _not_ take precedence over the config file right now. (init_from_env will refuse to update the value, if there ...
codesearchnet
def get_diff(repo: Repo, base_commit: str, commits: List[str]) -> List[str]: print('\n### DIFF ###\n') code_diff = [] for commit in commits: for diff_obj in commit.diff(base_commit): if diff_obj.change_type == 'A' and diff_obj.b_path.endswith('.py'): code_diff.append(diff_obj.b_path) ...
Get the diff between a base commit and one or several commits. Args: repo (`git.Repo`): A git repository (for instance the Transformers repo). base_commit (`str`): The commit reference of where to compare for the diff. This is the current commit, not the branching point! commits (`List[str]`): The list of commits with...
github-repos
def wait_until_final(self, poll_interval=1, timeout=60): start_time = time.time() elapsed = 0 while ((self.status != 'complete') and ((timeout <= 0) or (elapsed < timeout))): time.sleep(poll_interval) self.refresh() elapsed = (time.time() - start_time)
Polls the URL at the given interval to grab the latest status resource, until the status is final or the timeout elapses. Args: poll_interval (int): how often to poll the status service. timeout (int): how long to poll the URL until giving up. Use <= 0 to wait forever
codesearchnet
def create_feed_dict_from_input_data(input_data: RepresentativeSample, signature_def: meta_graph_pb2.SignatureDef) -> Mapping[str, np.ndarray]: feed_dict = {} for input_key, input_value in input_data.items(): input_tensor_name = signature_def.inputs[input_key].name value = input_value if...
Constructs a feed_dict from input data. Note: This function should only be used in graph mode. This is a helper function that converts an 'input key -> input value' mapping to a feed dict. A feed dict is an 'input tensor name -> input value' mapping and can be directly passed to the `feed_dict` argument of `sess.run(...
github-repos
def get_extra_args(): g = ops.get_default_graph() if isinstance(g, _FuncGraph): return g.extra_args else: return []
Returns the corresponding function arguments for the captured inputs. Returns: If the default graph is being used to define a function, the returned list of place holders are those used inside the function body corresponding those returned by get_extra_inputs(). Otherwise, returns an empty list.
github-repos
def get(self, item, alt=None): try: val = self[item] except ValueError: return alt return val if val is not None else alt
Standard dict-like .get() method. Args: item (str): See :meth:`.__getitem__` for details. alt (default None): Alternative value, if item is not found. Returns: obj: `item` or `alt`, if item is not found.
juraj-google-style
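The `.get()` row above wraps `__getitem__` so that a `ValueError` falls back to the alternative; a self-contained sketch (the `RecordMap` class is hypothetical, standing in for the original container):

```python
class RecordMap(dict):
    """Hypothetical mapping whose __getitem__ raises ValueError for bad keys
    and returns None for missing ones."""

    def __getitem__(self, item):
        if not isinstance(item, str):
            raise ValueError('keys must be strings')
        return dict.__getitem__(self, item) if item in self else None

    def get(self, item, alt=None):
        try:
            val = self[item]
        except ValueError:
            return alt
        # A stored None also falls back to the alternative.
        return val if val is not None else alt

r = RecordMap({'a': 1})
```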
def get_train_examples(self, data_dir, filename=None): if data_dir is None: data_dir = '' if self.train_file is None: raise ValueError('SquadProcessor should be instantiated via SquadV1Processor or SquadV2Processor') with open(os.path.join(data_dir, self.train_file if filename is None else f...
Returns the training examples from the data directory. Args: data_dir: Directory containing the data files used for training and evaluating. filename: None by default, specify this if the training file has a different name than the original one which is `train-v1.1.json` and `train-v2.0.json` for squad versions 1.1 an...
github-repos
def ExtractEvents( self, parser_mediator, registry_key, codepage='cp1252', **kwargs): self._ParseSubKey(parser_mediator, registry_key, [], codepage=codepage)
Extracts events from a Windows Registry key. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. registry_key (dfwinreg.WinRegistryKey): Windows Registry key. codepage (Optional[str]): extended ASCII string codepage.
juraj-google-style
def moments_v2(x, axes, shift=None, keepdims=False, name=None): return moments(x=x, axes=axes, shift=shift, name=name, keep_dims=keepdims)
Calculates the mean and variance of `x`. The mean and variance are calculated by aggregating the contents of `x` across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean and variance of a vector. Note: shift is currently not used; the true mean is computed and used. When using these moments for batch nor...
github-repos
def init_log(log_file): log = None try: log = open(log_file, 'a') sys.stdout = Tee(sys.stdout, log) except: pass return log
Creates log file on disk and "Tees" :py:class:`sys.stdout` to console and disk Args: log_file (str): The path on disk to append or create the log file. Returns: file: The opened log file.
juraj-google-style
def intersect(self, second_iterable, selector=identity): if self.closed(): raise ValueError('Attempt to call intersect() on a closed Queryable.') if (not is_iterable(second_iterable)): raise TypeError('Cannot compute intersect() with second_iterable of non-iterable {0}'.format(str(type(second_it...
Returns those elements which are both in the source sequence and in the second_iterable. Note: This method uses deferred execution. Args: second_iterable: Elements are returned if they are also in the sequence. selector: An optional single argument function which is used to project the elements in the source and sec...
codesearchnet
def success(channel, image, hex_str): hex_number = int(hex_str, 16) gui = ui_embed.UI( channel, "", "#{}".format(hex_str), modulename=modulename, colour=hex_number, thumbnail=image, ) return gui
Creates an embed UI containing a hex color message Args: channel (discord.Channel): The Discord channel to bind the embed to image (str): The url of the image to add hex_str (str): The hex value Returns: ui (ui_embed.UI): The embed UI object that was created
juraj-google-style
def is_back_tracking(neurite): def pair(segs): ' Pairs the input list into triplets' return zip(segs, segs[1:]) def coords(node): ' Returns the first three values of the tree that correspond to the x, y, z coordinates' return node[COLS.XYZ] def max_radius(seg): ' R...
Check if a neurite process backtracks to a previous node. Back-tracking takes place when a daughter of a branching process goes back and either overlaps with a previous point, or lies inside the cylindrical volume of the latter. Args: neurite(Neurite): neurite to operate on Returns: True Under the following scenarios:...
codesearchnet
def sheets_values_batch_update(config, auth, sheet_url_or_name, data): sheet_id = sheets_id(config, auth, sheet_url_or_name) API_Sheets(config, auth).spreadsheets().values().batchUpdate(spreadsheetId=sheet_id, body=data).execute()
Helper for performing batch value operations. Args: config - see starthinker/util/configuration.py auth - user or service sheet_url_or_name - one of: URL, document title, or id data - JSON data for sending to batch request No Return
github-repos
def xresnet18(pretrained=False, **kwargs): model = XResNet(BasicBlock, [2, 2, 2, 2], **kwargs) if pretrained: model.load_state_dict(model_zoo.load_url(model_urls['xresnet18'])) return model
Constructs a XResNet-18 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
juraj-google-style
def create_tree_from_string(line): depth = 0 current_word = "" root = None current_node = root for char in line: if char == '(': if current_node is not None and len(current_word) > 0: attribute_text_label(current_node, current_word) ...
Parse and convert a string representation of an example into a LabeledTree datastructure. Arguments: ---------- line : str, string version of the tree. Returns: -------- LabeledTree : parsed tree.
juraj-google-style
def sample_from_discretized_mix_logistic(pred, seed=None): (logits, locs, log_scales, coeffs) = split_to_discretized_mix_logistic_params(pred) num_mixtures = shape_list(logits)[(- 1)] gumbel_noise = (- tf.log((- tf.log(tf.random_uniform(tf.shape(logits), minval=1e-05, maxval=(1.0 - 1e-05), seed=seed))))) ...
Sampling from a discretized mixture of logistics. Args: pred: A [batch, height, width, num_mixtures*10] tensor of floats comprising one unconstrained mixture probability, three means (one per channel), three standard deviations (one per channel), and three coefficients which linearly parameterize dependence across cha...
codesearchnet
def match(self, message) -> bool: if (self.to and (message.to != self.to)): return False if (self.sender and (message.sender != self.sender)): return False if (self.body and (message.body != self.body)): return False if (self.thread and (message.thread != self.thread)): r...
Returns whether a message matches this template or not. The message can be a Message object or a Template object. Args: message (spade.message.Message): the message to match to Returns: bool: whether the message matches or not
codesearchnet
def sample_g_values(self, ngram_keys: torch.LongTensor) -> torch.LongTensor: sampling_table_size, = self.sampling_table.shape sampling_table = self.sampling_table.reshape((1, 1, sampling_table_size)) ngram_keys = ngram_keys % sampling_table_size return torch.take_along_dim(sampling_table, indices=ngram_...
Samples g values from Bernoulli distribution. It is not possible to pass random keys in a vectorized way in torch. Instead we pre-compute a random sampling table, and apply modulo table size to map from ngram keys (int64) to g values. Args: ngram_keys (`torch.LongTensor`): Random keys (batch_size, num_ngrams, dep...
github-repos
def city(self, value=None): if (value is not None): try: value = str(value) except ValueError: raise ValueError('value {} need to be of type str for field `city`'.format(value)) if (',' in value): raise ValueError('value should not contain a comma for fiel...
Corresponds to IDD Field `city` Args: value (str): value for IDD Field `city` if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def build_image(registry, image): if ':' in image['name']: _, tag = image['name'].split(':', 1) else: _, tag = image['name'], None values = { 'registry': '' if registry is None else registry + '/', 'image': image['name'], 'tag': tag, } if tag is No...
Build docker image. Args: registry (str): The name of the registry this image belongs to. If not given, the resulting image will have a name without the registry. image (dict[str, Any]): The dict containing the information about the built image. This is the same dictionary as defined in DOCKER_IMAGES variable.
juraj-google-style
def save_as_json(total: list, name='data.json', sort_by: str = None, no_duplicate=False, order='asc'): if sort_by: reverse = order == 'desc' total = sorted(total, key=itemgetter(sort_by), reverse=reverse) if no_duplicate: ...
Save what you crawled as a json file. Args: total (list): Total of data you crawled. name (str, optional): Defaults to 'data.json'. The name of the file. sort_by (str, optional): Defaults to None. Sort items by a specific key. no_duplicate (bool, optional): Defaults to False. If True, it will remove duplicated data. o...
juraj-google-style
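The sort/de-duplicate logic can be exercised independently of file I/O; `prepare_items` below is a hypothetical helper mirroring the `sort_by`/`no_duplicate` handling above, assuming items are JSON-serializable dicts (the result would then be written with `json.dump`).

```python
import json
from operator import itemgetter

def prepare_items(total, sort_by=None, no_duplicate=False, order="asc"):
    # Sort by the requested key, descending when order == "desc".
    if sort_by:
        total = sorted(total, key=itemgetter(sort_by), reverse=(order == "desc"))
    if no_duplicate:
        seen, unique = set(), []
        for item in total:
            fp = json.dumps(item, sort_keys=True)  # hashable fingerprint of a dict
            if fp not in seen:
                seen.add(fp)
                unique.append(item)
        total = unique
    return total
```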
def VerifyStructure(self, parser_mediator, line): self._last_month = 0 self._year_use = parser_mediator.GetEstimatedYear() key = 'header' try: structure = self._MAC_WIFI_HEADER.parseString(line) except pyparsing.ParseException: structure = None if not structure: key = '...
Verify that this file is a Mac Wifi log file. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. line (str): line from a text file. Returns: bool: True if the line is in the expected format, False if not.
juraj-google-style
def parse(source): if isinstance(source, str): return parse_stream(six.StringIO(source)) else: return parse_stream(source)
Parses source code and returns an array of instructions suitable for optimization and execution by a Machine. Args: source: A string or stream containing source code.
codesearchnet
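The string-or-stream normalization pattern above can be sketched with the stdlib `io.StringIO` instead of `six`; `parse_stream` here is a stand-in that merely tokenizes (the real one emits machine instructions).

```python
import io

def parse(source):
    # Accept either a string or an open file-like object, normalizing the
    # string case by wrapping it in an in-memory text stream.
    if isinstance(source, str):
        return parse_stream(io.StringIO(source))
    return parse_stream(source)

def parse_stream(stream):
    # Stand-in parser: one whitespace-separated token per "instruction".
    return stream.read().split()
```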
def _logspace_mean(log_values): center = tf.stop_gradient(_sample_max(log_values)) centered_values = tf.math.exp(log_values - center) log_mean_of_values = tf.math.log(_sample_mean(centered_values)) + center return log_mean_of_values
Evaluate `Log[E[values]]` in a stable manner. Args: log_values: `Tensor` holding `Log[values]`. Returns: `Tensor` of same `dtype` as `log_values`, reduced across dim 0. `Log[Mean[values]]`.
juraj-google-style
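The centering trick (subtract the max before exponentiating, add it back after the log) keeps the exponentials in range even for large log-values. A hypothetical scalar-list analogue of the TF version:

```python
import math

def logspace_mean(log_values):
    # log(mean(exp(v))) computed stably: factor out the largest value so the
    # exponentials stay representable, then add it back after the log.
    center = max(log_values)
    centered = [math.exp(v - center) for v in log_values]
    return math.log(sum(centered) / len(centered)) + center
```

Note that `math.exp(1000.0)` on its own overflows, while the centered form does not.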
def resolve(self, keys: Iterable[str]) -> Tuple[Dict[KeySpec, List[str]], List[str]]: keys = list(keys) input_keyset = set(keys) nonconst_key_specs = [k for k in self._fields.keys() if not k.is_const] nonconst_keys = {k: [] for k in nonconst_key_specs} unmatched_keys = [] keys_by_key_spec = dict...
Resolve keys by grouping them by their matched fields. Args: keys: A list of string keys. Returns: A tuple of matched key results and unmatched keys. Matched key results are an ordered dict of KeySpec to matched keys, in field declaration order. Unmatched keys are strings from input.
github-repos
def extend(self, *iterables): for value in iterables: list.extend(self, value) return self
Add all values of all iterables at the end of the list Args: iterables: iterable which content to add at the end Example: >>> from ww import l >>> lst = l([]) >>> lst.extend([1, 2]) [1, 2] >>> lst [1, 2] >>> lst.extend([3, 4]).extend([5, 6]) [1, 2, 3, 4, 5, 6] >>> lst [1, 2, 3, 4, 5, 6]
juraj-google-style
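A self-contained sketch of the chainable-`extend` idea, using a plain `list` subclass (here named `L`, a hypothetical stand-in for `ww.l`):

```python
class L(list):
    def extend(self, *iterables):
        # Unlike list.extend, accept several iterables and return self so
        # calls can be chained: L([]).extend(a).extend(b).
        for it in iterables:
            list.extend(self, it)
        return self
```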
def filter_values(cls, part_info): filtered = [] for info_list in cls.filter_parts(part_info).values(): filtered += info_list return filtered
Filter the part_info dict list looking for instances of our class Args: part_info (dict): {part_name: [Info] or None} as returned from Controller.run_hook() Returns: list: [info] where info is a subclass of cls
codesearchnet
def register_read_multiple(self, register_indices): num_regs = len(register_indices) buf = (ctypes.c_uint32 * num_regs)(*register_indices) data = (ctypes.c_uint32 * num_regs)(0) statuses = (ctypes.c_uint8 * num_regs)(0) res = self._dll.JLINKARM_ReadRegs(buf, data, statuses, num_regs) if (res < 0...
Retrieves the values from the registers specified. Args: self (JLink): the ``JLink`` instance register_indices (list): list of registers to read Returns: A list of values corresponding one-to-one for each of the given register indices. The returned list of values are the values in order of which the indices were spe...
codesearchnet
def _follow_link(self, link_path_components, link): link_path = link.contents sep = self._path_separator(link_path) if not self._starts_with_root_path(link_path): components = link_path_components[:-1] components.append(link_path) link_path = sep.join(components) return self...
Follow a link w.r.t. a path resolved so far. The component is either a real file, which is a no-op, or a symlink. In the case of a symlink, we have to modify the path as built up so far /a/b => ../c should yield /a/../c (which will normalize to /a/c) /a/b => x should yield /a/x /a/b => /x/y/z should yield /x/y/z ...
codesearchnet
def headers(self, headers=None, **kw): headers = kw if kw else headers self._request.headers = headers self.add_matcher(matcher('HeadersMatcher', headers))
Defines a dictionary of headers to match. Header keys are case insensitive. Arguments: headers (dict): headers to match. **kw (dict): headers to match as variadic keyword arguments. Returns: self: current Mock instance.
juraj-google-style
def _ReadStructure(self, file_object, file_offset, data_size, data_type_map, description): data = self._ReadData(file_object, file_offset, data_size, description) return self._ReadStructureFromByteStream(data, file_offset, data_type_map, description)
Reads a structure. Args: file_object (FileIO): file-like object. file_offset (int): offset of the data relative from the start of the file-like object. data_size (int): data size of the structure. data_type_map (dtfabric.DataTypeMap): data type map of the structure. description (str): description of the structure. Re...
codesearchnet
def size(self): gate_ops = 0 for instr, _, _ in self.data: if instr.name not in ['barrier', 'snapshot']: gate_ops += 1 return gate_ops
Returns total number of gate operations in circuit. Returns: int: Total number of gate operations.
codesearchnet
def commit_offsets_async(self, offsets, callback=None): self._invoke_completed_offset_commit_callbacks() if (not self.coordinator_unknown()): future = self._do_commit_offsets_async(offsets, callback) else: future = self.lookup_coordinator() future.add_callback((lambda r: functools.pa...
Commit specific offsets asynchronously. Arguments: offsets (dict {TopicPartition: OffsetAndMetadata}): what to commit callback (callable, optional): called as callback(offsets, response) response will be either an Exception or a OffsetCommitResponse struct. This callback can be used to trigger custom actions when a co...
codesearchnet
def modutf7_decode(data: bytes) -> str: parts = [] is_usascii = True buf = memoryview(data) while buf: byte = buf[0] if is_usascii: if (buf[0:2] == b'&-'): parts.append('&') buf = buf[2:] elif (byte == 38): is_usasci...
Decode the bytestring using modified UTF-7. Args: data: The encoded bytestring to decode.
codesearchnet
def implement(cls, implementations, for_type=None, for_types=None): for type_ in cls.__get_type_args(for_type, for_types): cls._implement_for_type(for_type=type_, implementations=implementations)
Provide protocol implementation for a type. Register all implementations of multimethod functions in this protocol and add the type into the abstract base class of the protocol. Arguments: implementations: A dict of (function, implementation), where each function is multimethod and each implementation is a callable. ...
codesearchnet
def static(self, root, path, media_type=None, charset='UTF-8'): root = os.path.abspath(os.path.join(root, '')) path = os.path.abspath(os.path.join(root, path.lstrip('/\\'))) self.response.state['filename'] = os.path.basename(path) if (not path.startswith(root)): return 403 elif (not os.path....
Send content of a static file as response. The path to the document root directory should be specified as the root argument. This is very important to prevent directory traversal attack. This method guarantees that only files within the document root directory are served and no files outside this directory can be acce...
codesearchnet
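The traversal guard is the heart of `static`. A hedged POSIX sketch using `os.path.commonpath`, which avoids the `/var/www` vs `/var/www2` prefix pitfall of a bare `startswith` check (`safe_join` is a hypothetical helper):

```python
import os

def safe_join(root, path):
    # Resolve the requested path under the document root and refuse any
    # result that escapes it (the original responds with 403 in that case).
    root = os.path.abspath(root)
    full = os.path.abspath(os.path.join(root, path.lstrip("/\\")))
    if os.path.commonpath([root, full]) != root:
        return None
    return full
```

Stripping leading slashes before joining means an "absolute" request like `/abs/leak` is still anchored under the root rather than replacing it.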
def shape(self): return self._dense_shape_default
Get the `TensorShape` representing the shape of the dense tensor. Returns: A `TensorShape` object.
github-repos
def file_modified_time(file_name) -> pd.Timestamp: return pd.to_datetime(time.ctime(os.path.getmtime(filename=file_name)))
File modified time in python Args: file_name: file name Returns: pd.Timestamp
juraj-google-style
def check_trace_mode(device_type, trace_mode): if trace_mode == tensor_tracer_flags.TRACE_MODE_FULL_TENSOR_SUMMARY: if device_type != _DEVICE_TYPE_TPU: raise ValueError('Device_type "%s" is not yet supported for trace mode "%s"' % (device_type, trace_mode))
Checks if the given trace mode work on the given device type. Args: device_type: Device type, TPU, GPU, CPU. trace_mode: Tensor tracer trace mode. Raises: ValueError: If the given trace mode is not supported for the device.
github-repos
def state_estimation_ensemble(data, k, n_runs=10, M_list=[], **se_params): if (len(M_list) == 0): M_list = [] for i in range(n_runs): (M, W, ll) = poisson_estimate_state(data, k, **se_params) M_list.append(M) M_stacked = np.hstack(M_list) (M_new, W_new, ll) = poisson_...
Runs an ensemble method on the list of M results... Args: data: genes x cells array k: number of classes n_runs (optional): number of random initializations of state estimation M_list (optional): list of M arrays from state estimation se_params (optional): optional poisson_estimate_state params Returns: M_new W_new l...
codesearchnet
def __init__(self, certificate_type, value, masks=None, name='Certificate'): super(Certificate, self).__init__() self._object_type = enums.ObjectType.CERTIFICATE self.value = value self.certificate_type = certificate_type self.names = [name] i...
Create a Certificate. Args: certificate_type(CertificateType): An enumeration defining the type of the certificate. value(bytes): The bytes representing the certificate. masks(list): A list of CryptographicUsageMask enumerations defining how the certificate will be used. name(string): The string name of the certificat...
juraj-google-style
def count_up_to(ref, limit, name=None): if ref.dtype._is_ref_dtype: return gen_state_ops.count_up_to(ref, limit=limit, name=name) return gen_state_ops.resource_count_up_to(ref.handle, limit, T=ref.dtype, name=name)
Increments 'ref' until it reaches 'limit'. Args: ref: A Variable. Must be one of the following types: `int32`, `int64`. Should be from a scalar `Variable` node. limit: An `int`. If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error. name: A name for the operation (optional). Returns:...
github-repos
def get_model_test_files() -> List[str]: _ignore_files = ['test_modeling_common', 'test_modeling_encoder_decoder', 'test_modeling_flax_encoder_decoder', 'test_modeling_flax_speech_encoder_decoder', 'test_modeling_marian', 'test_modeling_tf_common', 'test_modeling_tf_encoder_decoder'] test_files = [] model_t...
Get the model test files. Returns: `List[str]`: The list of test files. The returned files will NOT contain the `tests` (i.e. `PATH_TO_TESTS` defined in this script). They will be considered as paths relative to `tests`. A caller has to use `os.path.join(PATH_TO_TESTS, ...)` to access the files.
github-repos
def AddRoute(self, short_name, long_name, route_type, route_id=None): if route_id is None: route_id = util.FindUniqueId(self.routes) route = self._gtfs_factory.Route(short_name=short_name, long_name=long_name, route_type=route_type, route_id=route_id) route.agency_id = sel...
Add a route to this schedule. Args: short_name: Short name of the route, such as "71L" long_name: Full name of the route, such as "NW 21st Ave/St Helens Rd" route_type: A type such as "Tram", "Subway" or "Bus" route_id: id of the route or None, in which case a unique id is picked Returns: A new Route object
juraj-google-style
def _group_sentences(total_nb_sentences, group_length): sentences_groups = [] current_sentence_group = [] for i in range(0, total_nb_sentences): if i % group_length == 0: if len(current_sentence_group) > 0: sentences_groups.append(current...
Split sentences into groups of a given maximum length. Args: total_nb_sentences (int): Total number of available sentences. group_length (int): Maximum number of sentences per group. Returns: list: Contains groups (lists) of sentence indices.
juraj-google-style
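The grouping above is standard index chunking; an equivalent sketch (returning groups of sentence indices, as the original does) can be written with stepped `range` slicing:

```python
def group_sentences(total_nb_sentences, group_length):
    # Chunk indices [0, total) into consecutive groups of at most group_length.
    indices = list(range(total_nb_sentences))
    return [indices[i:i + group_length]
            for i in range(0, total_nb_sentences, group_length)]
```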
def is_table(engine, sql): return engine.dialect.has_table(engine, sql)
Check whether the given sql arg is a table name or a query. Args: engine: SQLAlchemy connection engine sql: SQL query or table name Returns: True for table or False if not
juraj-google-style
def get_sequence_dense_tensor(self, transformation_cache, state_manager): pass
Returns a `TensorSequenceLengthPair`. Args: transformation_cache: A `FeatureTransformationCache` object to access features. state_manager: A `StateManager` to create / access resources such as lookup tables.
github-repos
def ParseOptions(self, options): helpers_manager.ArgumentHelperManager.ParseOptions(options, self, names=['data_location']) signature_identifiers = self.ParseStringOption(options, 'signature_identifiers') if (signature_identifiers == 'list'): self.list_signature_identifiers = True if self.list_s...
Parses the options and initializes the front-end. Args: options (argparse.Namespace): command line arguments. Raises: BadConfigOption: if the options are invalid.
codesearchnet
def fig_to_svg(fig): buf = io.StringIO() fig.savefig(buf, format='svg') buf.seek(0) return buf.getvalue()
Helper function to convert a matplotlib figure to an SVG string. Args: fig: matplotlib figure to convert. Returns: str: figure as SVG string
codesearchnet
def _parse_schema_field(field): schema = bigquery.TableFieldSchema() schema.name = field['name'] schema.type = field['type'] if 'mode' in field: schema.mode = field['mode'] else: schema.mode = 'NULLABLE' if 'description' in field: schema.description = field['description']...
Parse a single schema field from dictionary. Args: field: Dictionary object containing serialized schema. Returns: A TableFieldSchema for a single column in BigQuery.
github-repos
def ParseForwardedIps(self, forwarded_ips): addresses = [] forwarded_ips = forwarded_ips or [] for ip in forwarded_ips: if ip and (IP_REGEX.match(ip) or IP_ALIAS_REGEX.match(ip)): addresses.extend([str(addr) for addr in list(netaddr.IPNetwork(ip))]) else: self.logger.warning...
Parse and validate forwarded IP addresses. Args: forwarded_ips: list, the IP address strings to parse. Returns: list, the valid IP address strings.
juraj-google-style
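A stdlib analogue of the validate-and-expand step, using `ipaddress` in place of `netaddr` and the regex checks (`parse_forwarded_ips` is a hypothetical helper; the original additionally logs a warning for invalid entries):

```python
import ipaddress

def parse_forwarded_ips(forwarded_ips):
    # Keep valid addresses/ranges, expanding CIDR blocks into individual
    # addresses; skip anything that fails to parse.
    addresses = []
    for ip in forwarded_ips or []:
        try:
            network = ipaddress.ip_network(ip, strict=False)
        except ValueError:
            continue  # invalid entry -- the original warns here
        addresses.extend(str(addr) for addr in network)
    return addresses
```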
def get_checkpoint_state(checkpoint_dir, latest_filename=None): if isinstance(checkpoint_dir, os.PathLike): checkpoint_dir = os.fspath(checkpoint_dir) ckpt = None coord_checkpoint_filename = _GetCheckpointFilename(checkpoint_dir, latest_filename) f = None try: if file_io.file_exists(...
Returns CheckpointState proto from the "checkpoint" file. If the "checkpoint" file contains a valid CheckpointState proto, returns it. Args: checkpoint_dir: The directory of checkpoints. latest_filename: Optional name of the checkpoint file. Default to 'checkpoint'. Returns: A CheckpointState if the state was avail...
github-repos
def testHeatEquation_WithDefaultBoundaryCondtion(self, lower_bc_type, upper_bc_type): def final_cond_fn(x): return math.e * math.sin(x) def expected_result_fn(x): return tf.sin(x) @neumann def boundary_fn(t, x): del x return -tf.exp(t) lower_boundary_fn = boundary_...
Test for default boundary conditions. Tests solving the heat equation with boundary conditions involving the default boundary `u_xx(0, t) = 0` or `u_xx(5 pi, t) = 0`. The exact solution is `u(x, t) = e^t sin(x)`. Args: lower_bc_type: Lower boundary condition type. upper_bc_type: Upper boundary condition type.
github-repos