Dataset columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes).
def outer_graph(self): current = self._weak_outer_graph() if current is None: return self._fallback_outer_graph return current
The Graph this FuncGraph is nested in. Functions may capture Tensors from graphs they are nested in (transitive). Returns: A Graph object. Initially set to the current default graph when the FuncGraph was created. If the previous `outer_graph` was deleted because the function that owns it was deleted, `outer_graph` i...
github-repos
def to_value_set_codes(self, fhir_context: context.FhirPathContext) -> Optional[ValueSetCodes]: if self.code_values is not None: return ValueSetCodes(self.value_set_url, self.value_set_version, self.code_values) value_set_proto = fhir_context.get_value_set(self.value_set_url) if value_set_proto is N...
Builds a representation of the value set given to the memberOf call. If memberOf was called with a value set proto, returns a ValueSetCodes object using the fields from that proto. If memberOf was called with a URL string, attempt to retrieve a value set proto from `fhir_context` and use it to build the ValueSetCodes ...
github-repos
def from_b58check(private_key): b58dec = base58.b58decode_check(private_key) version = b58dec[0] assert (version in [PrivateKey.TESTNET_VERSION, PrivateKey.MAINNET_VERSION]) return PrivateKey(int.from_bytes(b58dec[1:], 'big'))
Decodes a Base58Check encoded private-key. Args: private_key (str): A Base58Check encoded private key. Returns: PrivateKey: A PrivateKey object
codesearchnet
def find_all(self, selector, **kwargs): self.debug_log(('Finding elements with selector: %s' % selector)) raise_exception = kwargs.get('raise_exception', BROME_CONFIG['proxy_driver']['raise_exception']) self.debug_log(('effective raise_exception: %s' % raise_exception)) wait_until_present = kwargs.get('...
Return all the elements found with a selector Args: selector (str): the selector used to find the element Kwargs: wait_until_present (bool) default configurable via proxy_driver:wait_until_present_before_find wait_until_visible (bool) default configurable via proxy_driver:wait_until_visible_before_find raise_exceptio...
codesearchnet
def patch_fromText(self, textline): if type(textline) == unicode: textline = textline.encode("ascii") patches = [] if not textline: return patches text = textline.split('\n') while len(text) != 0: m = re.match("^@@ -(\d+),?(\d*) \+(\d+),?(\d*) @@$", text[0]) ...
Parse a textual representation of patches and return a list of patch objects. Args: textline: Text representation of patches. Returns: Array of Patch objects. Raises: ValueError: If invalid input.
juraj-google-style
def extract_channel(k, cdim): n = cdim perm = tuple(list(range(k)) + [n - 1] + list(range(k, n - 1))) return CPermutation.create(perm)
Create a :class:`CPermutation` that extracts channel `k` Return a permutation circuit that maps the k-th (zero-based) input to the last output, while preserving the relative order of all other channels. Args: k (int): Extracted channel index cdim (int): The circuit dimension (number of channels) Returns: Circuit: Pe...
juraj-google-style
def install_requirements(self, path, index=None): cmd = 'install -r {0}'.format(path) if index: cmd = 'install --index-url {0} -r {1}'.format(index, path) self.pip(cmd)
Install packages from a requirements.txt file. Args: path (str): The path to the requirements file. index (str): The URL for a pypi index to use.
codesearchnet
def eulers_totient(n): if (not isinstance(n, int)): raise TypeError('Expecting a strictly positive integer') if (n <= 0): raise ValueError('Expecting a strictly positive integer') if (n == 1): return 1 result = 0 for i in range(1, n): if (gcd(i, n) == 1): ...
Calculate the value of Euler's totient for a given integer Args: n (int): strictly positive integer Returns: The value of Euler's totient for n Raises: TypeError: If n is not an integer ValueError: If n is not strictly positive
codesearchnet
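A self-contained sketch of the totient computation documented in the row above, using `math.gcd` in place of the row's unshown `gcd` helper (assumed equivalent):

```python
from math import gcd

def eulers_totient(n):
    """Count the integers in [1, n) that are coprime to n; phi(1) == 1."""
    if not isinstance(n, int):
        raise TypeError('Expecting a strictly positive integer')
    if n <= 0:
        raise ValueError('Expecting a strictly positive integer')
    if n == 1:
        return 1
    # phi(n) = |{ i in [1, n) : gcd(i, n) == 1 }|
    return sum(1 for i in range(1, n) if gcd(i, n) == 1)

print(eulers_totient(10))  # phi(10) = 4, from {1, 3, 7, 9}
```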
def parse_arguments(*args, **options): days = options.get('days', 1) enterprise_customer_uuid = options.get('enterprise_customer_uuid') enterprise_customer = None if enterprise_customer_uuid: try: enterprise_customer = EnterpriseCustomer.objects.get(uuid=enterprise_customer_uuid) ...
Parse and validate arguments for send_course_enrollments command. Arguments: *args: Positional arguments passed to the command **options: optional arguments passed to the command Returns: A tuple containing parsed values for 1. days (int): Integer showing number of days to lookup enterprise enrollments, course comple...
codesearchnet
def post_message(self, level, message, count=1, timestamp=None, now_reference=None): if ((len(self.messages) > 0) and (self.messages[(- 1)].message == message)): self.messages[(- 1)].count += 1 else: msg_object = ServiceMessage(level, message, self._last_message_id, timestamp, now_reference) ...
Post a new message for service. Args: level (int): The level of the message (info, warning, error) message (string): The message contents count (int): The number of times the message has been repeated timestamp (float): An optional monotonic value in seconds for when the message was created now_reference (float): If t...
codesearchnet
def get_sample(self, md5): sample = self.data_store.get_sample(md5) if not sample: return {'sample_set': {'md5_list': self.get_sample_set(md5)}} return {'sample': sample}
Get a sample from the DataStore. Args: md5: the md5 of the sample Returns: A dictionary of meta data about the sample which includes a ['raw_bytes'] key that contains the raw bytes. Raises: Workbench.DataNotFound if the sample is not found.
juraj-google-style
def make_list_of_op(tops, check_graph=True, allow_graph=True, ignore_ts=False): if isinstance(tops, ops.Graph): if allow_graph: return tops.get_operations() else: raise TypeError('allow_graph is False: cannot convert a tf.Graph.') else: if not is_iterable(tops): ...
Convert ops to a list of `tf.Operation`. Args: tops: can be an iterable of `tf.Operation`, a `tf.Graph` or a single operation. check_graph: if `True` check if all the operations belong to the same graph. allow_graph: if `False` a `tf.Graph` cannot be converted. ignore_ts: if True, silently ignore `tf.Tensor`. Returns:...
github-repos
def isdir(path): system = get_instance(path) return system.isdir(system.ensure_dir_path(path))
Return True if path is an existing directory. Equivalent to "os.path.isdir". Args: path (path-like object): Path or URL. Returns: bool: True if directory exists.
juraj-google-style
def create_base_for_fuse_batchnorm(self, pattern_match_mode='MATCH_ALL'): with self.cached_session() as sess: data_format = 'NHWC' if pattern_match_mode == 'MISMATCH_FORMAT': data_format = 'NCHW' inputs = [1, 4, 2, 5, 3, 6, -1, -4, -2, -5, -3, -6] input_op = constant_op.c...
Create testing graph and compute the result from original graph. Args: pattern_match_mode: A label string to indicate which batchnorm composition pattern to create in the resulting graph. "MATCH_ALL" - Create a graph matching the decomposed batchnorm pattern with full set of primitive ops. "MATCH_NO_GAMMA" - Create a ...
github-repos
def _run_query(client, query, job_config=None): start_time = time.time() query_job = client.query(query, job_config=job_config) print('Executing query with job ID: {}'.format(query_job.job_id)) while True: print('\rQuery executing: {:0.2f}s'.format((time.time() - start_time)), end='') tr...
Runs a query while printing status updates Args: client (google.cloud.bigquery.client.Client): Client to bundle configuration needed for API requests. query (str): SQL query to be executed. Defaults to the standard SQL dialect. Use the ``job_config`` parameter to change dialects. job_config (google.cloud.bigquery.job....
codesearchnet
def get_video_transcript_data(video_id, language_code): video_transcript = VideoTranscript.get_or_none(video_id, language_code) if video_transcript: try: return dict(file_name=video_transcript.filename, content=video_transcript.transcript.file.read()) except Exception: lo...
Get video transcript data Arguments: video_id(unicode): An id identifying the Video. language_code(unicode): it will be the language code of the requested transcript. Returns: A dict containing transcript file name and its content.
codesearchnet
def _JoinKeyPath(self, path_segments): path_segments = [ segment.split(definitions.KEY_PATH_SEPARATOR) for segment in path_segments] path_segments = [ element for sublist in path_segments for element in sublist] path_segments = filter(None, path_segme...
Joins the path segments into key path. Args: path_segments (list[str]): Windows Registry key path segments. Returns: str: key path.
juraj-google-style
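The split-flatten-filter-join pattern in `_JoinKeyPath` can be sketched standalone; the `'\\'` separator is an assumption standing in for `definitions.KEY_PATH_SEPARATOR`:

```python
KEY_PATH_SEPARATOR = '\\'  # assumed value of definitions.KEY_PATH_SEPARATOR

def join_key_path(path_segments):
    """Split each segment on the separator, flatten, drop empties, rejoin."""
    path_segments = [
        segment.split(KEY_PATH_SEPARATOR) for segment in path_segments]
    path_segments = [
        element for sublist in path_segments for element in sublist]
    path_segments = [segment for segment in path_segments if segment]
    return KEY_PATH_SEPARATOR.join(path_segments)

print(join_key_path(['HKEY_LOCAL_MACHINE\\Software', 'Microsoft']))
```

Splitting before joining normalizes segments that already contain separators or stray leading/trailing ones.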
def from_label(cls, label): z = np.zeros(len(label), dtype=np.bool) x = np.zeros(len(label), dtype=np.bool) for (i, char) in enumerate(label): if (char == 'X'): x[((- i) - 1)] = True elif (char == 'Z'): z[((- i) - 1)] = True elif (char == 'Y'): z[(...
r"""Take a Pauli string to construct a Pauli. The qubit index of the pauli label is q_{n-1} ... q_0. E.g., a pauli is $P_{n-1} \otimes ... \otimes P_0$ Args: label (str): pauli label Returns: Pauli: the constructed pauli Raises: QiskitError: invalid character in the label
codesearchnet
def GetName(self): if (self.AssetType == AssetType.GoverningToken): return 'NEO' elif (self.AssetType == AssetType.UtilityToken): return 'NEOGas' if (type(self.Name) is bytes): return self.Name.decode('utf-8') return self.Name
Get the asset name based on its type. Returns: str: 'NEO' or 'NEOGas'
codesearchnet
def box(self, x0, y0, width, height): assert width > 1 assert height > 1 width -= 1 height -= 1 for x in range(x0, x0 + width): self.point(x, y0, "-") self.point(x, y0 + height, "-") for y in range(y0, y0 + height): self.poi...
Create a box on ASCII canvas. Args: x0 (int): x coordinate of the box corner. y0 (int): y coordinate of the box corner. width (int): box width. height (int): box height.
juraj-google-style
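A minimal canvas class showing how the `box` routine above composes with a `point` primitive; the `AsciiCanvas` container and its `render` method are illustrative assumptions:

```python
class AsciiCanvas:
    """Minimal sketch of a canvas that supports the box() routine above."""

    def __init__(self, cols, lines):
        self.canvas = [[' '] * cols for _ in range(lines)]

    def point(self, x, y, char):
        self.canvas[y][x] = char

    def box(self, x0, y0, width, height):
        assert width > 1 and height > 1
        width -= 1
        height -= 1
        for x in range(x0, x0 + width):       # top and bottom edges
            self.point(x, y0, '-')
            self.point(x, y0 + height, '-')
        for y in range(y0, y0 + height):      # left and right edges
            self.point(x0, y, '|')
            self.point(x0 + width, y, '|')
        for x, y in ((x0, y0), (x0 + width, y0),
                     (x0, y0 + height), (x0 + width, y0 + height)):
            self.point(x, y, '+')             # corners overwrite edges

    def render(self):
        return '\n'.join(''.join(row) for row in self.canvas)

canvas = AsciiCanvas(5, 3)
canvas.box(0, 0, 5, 3)
print(canvas.render())
```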
def Process(self, parser_mediator, root_item=None, **kwargs): super(SummaryInformationOLECFPlugin, self).Process( parser_mediator, **kwargs) if not root_item: raise ValueError('Root item not set.') root_creation_time, root_modification_time = self._GetTimestamps(root_item) for...
Parses a summary information OLECF item. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. root_item (Optional[pyolecf.item]): root item of the OLECF file. Raises: ValueError: If the root item is not set.
juraj-google-style
def rollaxis(a, axis, start=0): if isinstance(a, np.ndarray): return np.rollaxis(a, axis, start) if axis not in range(a.ndim): raise ValueError( 'rollaxis: axis (%d) must be >=0 and < %d' % (axis, a.ndim)) if start not in range(a.ndim + 1): raise ValueError( ...
Roll the specified axis backwards, until it lies in a given position. Args: a (array_like): Input array. axis (int): The axis to roll backwards. The positions of the other axes do not change relative to one another. start (int, optional): The axis is rolled until it lies before this position. The default, 0, results...
juraj-google-style
def symm_group_cubic(mat): sym_group = np.zeros([24, 3, 3]) sym_group[0, :] = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] sym_group[1, :] = [[1, 0, 0], [0, -1, 0], [0, 0, -1]] sym_group[2, :] = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]] sym_group[3, :] = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]] sym_group[4, :] =...
Obtain cubic symmetric equivalents of the list of vectors. Args: mat (lattice matrix, n by 3 array/matrix) Returns: cubic symmetric equivalents of the list of vectors.
juraj-google-style
def __init__(self, line, line_num=-1, var_map=None): self.match = '' self.regex = '' self.regex_obj = None self.line_op = '' self.record_op = '' self.new_state = '' self.line_num = line_num line = line.strip() if not line: raise TextF...
Initialise a new rule object. Args: line: (str), a template rule line to parse. line_num: (int), Optional line reference included in error reporting. var_map: Map for template (${var}) substitutions. Raises: TextFSMTemplateError: If 'line' is not a valid format for a Value entry.
juraj-google-style
def __init__(self, dataset, class_weight=None, distribution=None): from keras.src.utils.module_utils import tensorflow as tf if not isinstance(dataset, (tf.data.Dataset, tf.distribute.DistributedDataset)): raise ValueError(f'Expected argument `dataset` to be a tf.data.Dataset. Received: {dataset}') ...
Initialize the TFDatasetAdapter. Args: dataset: The input `tf.data.Dataset` instance. class_weight: A map where the keys are integer class ids and values are the class weights, e.g. `{0: 0.2, 1: 0.6, 2: 0.3}`. distribution: A `keras.distribution.Distribution` instance. Used to shard the input dataset into per worker/p...
github-repos
def increase_volume(percentage): if ((percentage > 100) or (percentage < 0)): raise ValueError('percentage must be an integer between 0 and 100') if (system.get_name() == 'windows'): pass elif (system.get_name() == 'mac'): volume_int = (percentage / 10) old_volume = get() ...
Increase the volume. Increase the volume by a given percentage. Args: percentage (int): The percentage (as an integer between 0 and 100) to increase the volume by. Raises: ValueError: if the percentage is >100 or <0.
codesearchnet
def filesizes(images): while True: img = yield marv.pull(images) if img is None: break yield marv.push(img.size)
Stat filesize of files. Args: images: stream of marv image files Returns: Stream of filesizes
juraj-google-style
def _print_results(file, status): file_color = c.Fore.GREEN status_color = c.Fore.RED if status == 'Success': status_color = c.Fore.GREEN elif status == 'Skipped': status_color = c.Fore.YELLOW print( '{}{!s:<13}{}{!s:<35}{}{!s:<8}{}{}...
Print the download results. Args: file (str): The filename. status (str): The file download status.
juraj-google-style
def exception(self, timeout=None): if (not self._completed.wait(timeout=timeout)): raise exceptions.TimeoutError('Timed out waiting for result.') if (self._result != self._SENTINEL): return None return self._exception
Return the exception raised by the call, if any. This blocks until the message has successfully been published, and returns the exception. If the call succeeded, return None. Args: timeout (Union[int, float]): The number of seconds before this call times out and raises TimeoutError. Raises: TimeoutError: If the requ...
codesearchnet
def summarize_dist_params(dist, name, name_scope="dist_params"): with tf.compat.v1.name_scope(name_scope): tf.compat.v2.summary.histogram( name="{}/{}".format(name, "mean"), data=dist.mean(), step=tf.compat.v1.train.get_or_create_global_step()) tf.compat.v2.summary.histogram( ...
Summarize the parameters of a distribution. Args: dist: A Distribution object with mean and standard deviation parameters. name: The name of the distribution. name_scope: The name scope of this summary.
juraj-google-style
def exists(self, filename): result = True for repo in self._children: if (not repo.exists(filename)): result = False return result
Report whether a file exists on all distribution points. Determines file type by extension. Args: filename: Filename you wish to check. (No path! e.g.: "AdobeFlashPlayer-14.0.0.176.pkg") Returns: Boolean
codesearchnet
def _set_mtu_to_nics(self, conf): for (dom_name, dom_spec) in conf.get('domains', {}).items(): for (idx, nic) in enumerate(dom_spec.get('nics', [])): net = self._get_net(conf, dom_name, nic) mtu = net.get('mtu', 1500) if (mtu != 1500): nic['mtu'] = mtu
For all the nics of all the domains in the conf that have MTU set, save the MTU on the NIC definition. Args: conf (dict): Configuration spec to extract the domains from Returns: None
codesearchnet
def transform_framerate(self, in_fps, out_fps): if in_fps <= 0 or out_fps <= 0: raise ValueError("Framerates must be positive, cannot transform %f -> %f" % (in_fps, out_fps)) ratio = in_fps / out_fps for line in self: line.start = int(round(line.start * ratio)) ...
Rescale all timestamps by ratio of in_fps/out_fps. Can be used to fix files converted from frame-based to time-based with wrongly assumed framerate. Arguments: in_fps (float) out_fps (float) Raises: ValueError: Non-positive framerate given.
juraj-google-style
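The rescaling logic above can be sketched outside its subtitle-file class; `SubtitleLine` is a hypothetical stand-in for the real line objects, with start/end in milliseconds:

```python
class SubtitleLine:
    """Hypothetical line object with start/end timestamps in milliseconds."""

    def __init__(self, start, end):
        self.start, self.end = start, end

def transform_framerate(lines, in_fps, out_fps):
    """Rescale all timestamps by the ratio in_fps / out_fps."""
    if in_fps <= 0 or out_fps <= 0:
        raise ValueError('Framerates must be positive, cannot transform '
                         '%f -> %f' % (in_fps, out_fps))
    ratio = in_fps / out_fps
    for line in lines:
        line.start = int(round(line.start * ratio))
        line.end = int(round(line.end * ratio))

# Fix a file wrongly assumed to be 23.976 fps when it was really 25 fps.
lines = [SubtitleLine(0, 1000), SubtitleLine(1000, 2500)]
transform_framerate(lines, 25.0, 23.976)
print(lines[1].end)
```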
def AssertIterableType(iterable, expected_item_type): if isinstance(iterable, collections.Iterator): message = 'Expected iterable container but got iterator `%s` instead' message %= iterable raise TypeError(message) AssertType(iterable, collections.Iterable) for item in iterable: ...
Ensures that given iterable container has certain type. Args: iterable: An iterable container to assert the type for. expected_item_type: An expected type of the container items. Raises: TypeError: If given container is not an iterable or its items do not have the expected type.
codesearchnet
def builtin(cls, name): names = {'nsdoe': LEGEND__NSDOE, 'canstrat': LEGEND__Canstrat, 'nagmdm__6_2': LEGEND__NAGMDM__6_2, 'nagmdm__6_1': LEGEND__NAGMDM__6_1, 'nagmdm__4_3': LEGEND__NAGMDM__4_3, 'sgmc': LEGEND__SGMC} return cls.from_csv(text=names[name.lower()])
Generate a default legend. Args: name (str): The name of the legend you want. Not case sensitive. 'nsdoe': Nova Scotia Dept. of Energy 'canstrat': Canstrat 'nagmdm__6_2': USGS N. Am. Geol. Map Data Model 6.2 'nagmdm__6_1': USGS N. Am. Geol. Map Data Model 6.1 'nagmdm__4_3': USGS N. Am. Geol. Map Data Model 4.3 'sgmc':...
codesearchnet
def GetAttributeValuesString(self): attributes = [] for (attribute_name, attribute_value) in sorted(self.__dict__.items()): if ((attribute_name[0] == '_') or (attribute_value is None)): continue if isinstance(attribute_value, dict): attribute_value = sorted(attribute_valu...
Retrieves a comparable string of the attribute values. Returns: str: comparable string of the attribute values.
codesearchnet
def prepare_for_translation(localization_bundle_path): logging.info('Preparing for translation..') for strings_file in os.listdir(os.path.join(localization_bundle_path, DEFAULT_LANGUAGE_DIRECTORY_NAME)): if (not strings_file.endswith('.strings')): continue strings_path = os.path.join...
Prepares the localization bundle for translation. This means, after creating the strings files using genstrings.sh, this will produce '.pending' files, that contain the files that are yet to be translated. Args: localization_bundle_path (str): The path to the localization bundle.
codesearchnet
def __call__(self, environ, start_response): start_response('200 OK', [('Content-type', 'text/plain')]) self.last_request_uri = wsgiref.util.request_uri(environ) return [self._success_message.encode('utf-8')]
WSGI Callable. Args: environ (Mapping[str, Any]): The WSGI environment. start_response (Callable[str, list]): The WSGI start_response callable. Returns: Iterable[bytes]: The response body.
juraj-google-style
def flipcheck(content): punct = tamperdict = str.maketrans('', '', punct) tamperproof = content.translate(tamperdict) if "(╯°□°)╯︵" in tamperproof: if "┻┻" in tamperproof: length = 0 for letter in content: if letter ...
Checks a string for anger and soothes said anger Args: content (str): The message to be flipchecked Returns: putitback (str): The righted table or text
juraj-google-style
def get_metalpdb_info(metalpdb_lig_file): pdb_metals = ['CU', 'ZN', 'MN', 'FE', 'MG', 'CO', 'SE', 'YB', 'SF4', 'FES', 'F3S', 'NI', 'FE2'] coordination_number = 0 endogenous_ligands = [] exogenous_ligands = [] ss = StructProp(ident='metalpdb', structure_path=metalpdb_lig_file, file_...
Parse a MetalPDB .lig file and return a tuple of the chain ID it represents, along with metal binding information. Args: metalpdb_lig_file (str): Path to .lig file Returns: tuple: (str, dict) of the chain ID and the parsed metal binding site information
juraj-google-style
def create_html_from_fragment(tag): try: assert isinstance(tag, bs4.element.Tag) except AssertionError: raise TypeError try: assert (tag.find_all('body') == []) except AssertionError: raise ValueError soup = BeautifulSoup('<html><head></head><body></body></html>', 'ht...
Creates full html tree from a fragment. Assumes that tag should be wrapped in a body and is currently not Args: tag: a bs4.element.Tag Returns: bs4.element.Tag: A bs4 tag representing a full html document
codesearchnet
def repository_blob(self, sha, **kwargs): path = '/projects/%s/repository/blobs/%s' % (self.get_id(), sha) return self.manager.gitlab.http_get(path, **kwargs)
Return a file by blob SHA. Args: sha(str): ID of the blob **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabGetError: If the server failed to perform the request Returns: dict: The blob content and metadata
juraj-google-style
def add_unref(self, timestamp: int) -> None: self._unref_times.append(timestamp)
Adds an unref to this tensor with the specified timestamp. Args: timestamp: Timestamp of object unreference as an integer.
github-repos
def run_scratch(self, path_to_scratch, num_cores=1, outname=None, outdir=None, force_rerun=False): if not outname: outname = self.project_name if not outdir: outdir = '' outname = op.join(outdir, outname) self.out_sspro = '{}.ss'.format(outname) ...
Run SCRATCH on the sequence_file that was loaded into the class. Args: path_to_scratch: Path to the SCRATCH executable, run_SCRATCH-1D_predictors.sh outname: Prefix to name the output files outdir: Directory to store the output files force_rerun: Flag to force rerunning of SCRATCH even if the output files exist Retur...
juraj-google-style
def _post(self, url, data, scope): self._create_session(scope) response = self.session.post(url, data=data) return response.status_code, response.text
Make a POST request using the session object to a Degreed endpoint. Args: url (str): The url to send a POST request to. data (str): The json encoded payload to POST. scope (str): Must be one of the scopes Degreed expects: - `CONTENT_PROVIDER_SCOPE` - `COMPLETION_PROVIDER_SCOPE`
juraj-google-style
def extract_changelog_items(text, tags): patterns = {x['header']: tag_re(x['tag']) for x in tags} items = {x['header']: [] for x in tags} curr_tag = None curr_text = '' for line in text.splitlines(): if not line.strip(): if curr_tag is not None: items[...
Extract all tagged items from text. Args: text (str): Text to extract the tagged items from. Each tagged item is a paragraph that starts with a tag. It can also be a text list item. Returns: tuple[list[str], list[str], list[str]]: A tuple of `(features, changes, fixes)` extracted from the given text. The tagged item...
juraj-google-style
def initialize_from_assignments(assignments, k, max_assign_weight=0.75): cells = len(assignments) init_W = np.zeros((k, cells)) for (i, a) in enumerate(assignments): init_W[(a, i)] = max_assign_weight for a2 in range(k): if (a2 != a): init_W[(a2, i)] = ((1 - max_a...
Creates a weight initialization matrix from Poisson clustering assignments. Args: assignments (array): 1D array of integers, of length cells k (int): number of states/clusters max_assign_weight (float, optional): between 0 and 1 - how much weight to assign to the highest cluster. Default: 0.75 Returns: init_W (array)...
codesearchnet
def ParseContainersTable(self, parser_mediator, database=None, table=None, **unused_kwargs): if (database is None): raise ValueError('Missing database value.') if (table is None): raise ValueError('Missing table value.') for esedb_record in table.records: if parser_mediator.abort: ...
Parses the Containers table. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. database (Optional[pyesedb.file]): ESE database. table (Optional[pyesedb.table]): table. Raises: ValueError: if the database or table value is missing.
codesearchnet
def profile_update(self, profile): if (profile.get('install_json') is None): print('{}{}Missing install_json parameter for profile {}.'.format(c.Style.BRIGHT, c.Fore.YELLOW, profile.get('profile_name'))) self.profile_update_args_v2(profile) self.profile_update_args_v3(profile) self.profile_updat...
Update an existing profile with new parameters or remove deprecated parameters. Args: profile (dict): The dictionary containing the profile settings.
codesearchnet
def exponential(x): return ops.exp(x)
Exponential activation function. Args: x: Input tensor.
github-repos
def ensure_dir_path(self, path, relative=False): if not relative: rel_path = self.relpath(path) else: rel_path = path if self.is_locator(rel_path, relative=True): path = path.rstrip('/') elif rel_path: path = pa...
Ensure the path is a dir path. Should end with '/' except for schemes and locators. Args: path (str): Path or URL. relative (bool): Path is relative to current root. Returns: path: dir path
juraj-google-style
def get_items_for_config_file_output(self, source_to_settings, parsed_namespace): config_file_items = OrderedDict() for source, settings in source_to_settings.items(): if source == _COMMAND_LINE_SOURCE_KEY: _, existing_command...
Converts the given settings back to a dictionary that can be passed to ConfigFormatParser.serialize(..). Args: source_to_settings: the dictionary described in parse_known_args() parsed_namespace: namespace object created within parse_known_args() Returns: an OrderedDict where keys are strings and values are either str...
juraj-google-style
def save(tiff_filename, numpy_data): tiff_filename = os.path.expanduser(tiff_filename) if type(numpy_data) is str: fp = open(tiff_filename, "wb") fp.write(numpy_data) fp.close() return tiff_filename try: img = tiff.imsave(tiff_filename, numpy_data) excep...
Export a numpy array to a TIFF file. Arguments: tiff_filename: A filename to which to save the TIFF data numpy_data: The numpy array to save to TIFF Returns: String. The expanded filename that now holds the TIFF data
juraj-google-style
def _ReadParserPresetValues(self, preset_definition_values): if (not preset_definition_values): raise errors.MalformedPresetError('Missing preset definition values.') name = preset_definition_values.get('name', None) if (not name): raise errors.MalformedPresetError('Invalid preset definition...
Reads a parser preset from a dictionary. Args: preset_definition_values (dict[str, object]): preset definition values. Returns: ParserPreset: a parser preset. Raises: MalformedPresetError: if the format of the preset definition is not set or incorrect, or the preset of a specific operating system has already been se...
codesearchnet
def compute_inv_covariance(L_aug, Y, k, p): return np.linalg.inv(compute_covariance(L_aug, Y, k, p))
Given label matrix L and labels Y, compute the inverse covariance. Args: L: (np.array) [n, d] The augmented (indicator) label matrix Y: (np.array int) [n] The true labels in {1,...,k}
codesearchnet
def enhance_pubmed_annotations(pubmed: Mapping[str, Any]) -> Mapping[str, Any]: text = pubmed["title"] + pubmed["abstract"] annotations = {} for nsarg in pubmed["annotations"]: url = f'{config["bel_api"]["servers"]["api_url"]}/terms/{url_path_param_quoting(nsarg)}' log.info(f"URL: {u...
Enhance pubmed namespace IDs Add additional entity and annotation types to annotations Use preferred id for namespaces as needed Add strings from Title, Abstract matching Pubtator BioConcept spans NOTE - basically duplicated code with bel_api:api.services.pubmed Args: pubmed Returns: pubmed object
juraj-google-style
def set_server_def_retries(retries): context().set_server_def_retries(retries)
Set the number of retries to use when calling SetServerDef. In cases where many servers run in high-preemption environments, jobs could be preempted during startup and initial connection via SetServerDef. Retries allow for more robust connection in these environments. Args: retries: int specifying the number of conn...
github-repos
def create_sns_event(app_name, env, region, rules): session = boto3.Session(profile_name=env, region_name=region) sns_client = session.client('sns') topic_name = rules.get('topic') lambda_alias_arn = get_lambda_alias_arn(app=app_name, account=env, region=region) topic_arn = get_sns_topic_arn(t...
Create SNS lambda event from rules. Args: app_name (str): name of the lambda function env (str): Environment/Account for lambda function region (str): AWS region of the lambda function rules (str): Trigger rules from the settings
juraj-google-style
def phase_crossings(ts, phi=0.0): ts = ts.squeeze() if (ts.ndim != 1): raise ValueError('Currently can only use on single variable timeseries') ts = mod2pi((ts - phi)) tsa = ts[0:(- 1)] tsb = ts[1:] p2 = (np.pi / 2) zc = (np.nonzero((((((tsa > (- p2)) & (tsa < 0)) & (tsb >= 0)) &...
For a single variable timeseries representing the phase of an oscillator, find the times at which the phase crosses angle phi, with the condition that the phase must visit phi+pi between crossings. (Thus if noise causes the phase to wander back and forth across angle phi without the oscillator doing a full revolution,...
codesearchnet
def get_processors(processor_cat, prop_defs, data_attr=None): processor_defs = prop_defs.get(processor_cat, []) processor_list = [] for processor in processor_defs: proc_class = PropertyProcessor[processor['rdf_type'][0]] processor_list.append(proc_class(processor.get('kds_params', [{}]), da...
Reads the prop defs and adds applicable processors for the property Args: processor_cat (str): The category of processors to retrieve prop_defs: property definitions as defined by the rdf definitions data_attr: the attr to manipulate during processing. Returns: list: a list of processors
codesearchnet
def load(self, cellpy_file, parent_level="CellpyData"): try: self.logger.debug("loading cellpy-file (hdf5):") self.logger.debug(cellpy_file) new_datasets = self._load_hdf5(cellpy_file, parent_level) self.logger.debug("cellpy-file loaded") except ...
Loads a cellpy file. Args: cellpy_file (path, str): Full path to the cellpy file. parent_level (str, optional): Parent level
juraj-google-style
def format_info(variant, variant_type='snv'): observations = variant.get('observations',0) homozygotes = variant.get('homozygote') hemizygotes = variant.get('hemizygote') vcf_info = f"Obs={observations}" if homozygotes: vcf_info += f";Hom={homozygotes}" if hemizygotes: ...
Format the info field for SNV variants Args: variant(dict) variant_type(str): snv or sv Returns: vcf_info(str): A VCF formatted info field
juraj-google-style
def default(self, obj): if isinstance(obj, datetime): return obj.isoformat() if isinstance(obj, Enum): return obj.value to_json = getattr(obj, 'to_json', None) if to_json: out = obj.to_json() if issubclass(obj.__class__, Model): out.update(
Default object encoder function Args: obj (:obj:`Any`): Object to be serialized Returns: JSON string
codesearchnet
def umask(self, new_mask): if not is_int_type(new_mask): raise TypeError('an integer is required') old_umask = self.filesystem.umask self.filesystem.umask = new_mask return old_umask
Change the current umask. Args: new_mask: (int) The new umask value. Returns: The old umask. Raises: TypeError: if new_mask is of an invalid type.
juraj-google-style
def write_int8(self, value, little_endian=True): if little_endian: endian = "<" else: endian = ">" return self.pack('%sb' % endian, value)
Pack the value as a signed byte and write 1 byte to the stream. Args: value (int): the signed value to pack. little_endian (bool): specify the endianness. (Default) Little endian. Returns: int: the number of bytes written.
juraj-google-style
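Stripped of its stream wrapper, the packing step above is plain `struct.pack`; endianness is moot for a single byte but the prefix is kept for symmetry with the wider-integer writers:

```python
import struct

def write_int8(value, little_endian=True):
    """Pack value as a signed byte; returns the 1-byte result."""
    endian = '<' if little_endian else '>'
    return struct.pack('%sb' % endian, value)

print(write_int8(-5))  # two's complement: -5 -> 0xfb
```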
def absent(name): ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''} comment_bridge_deleted = 'Bridge {0} deleted.'.format(name) comment_bridge_notdeleted = 'Unable to delete bridge: {0}.'.format(name) comment_bridge_notexists = 'Bridge {0} does not exist.'.format(name) ...
Ensures that the named bridge does not exist, deleting it if necessary. Args: name: The name of the bridge.
juraj-google-style
def downsample(data, percent): n_genes = data.shape[0] n_cells = data.shape[1] new_data = data.copy() total_count = float(data.sum()) to_remove = (total_count * percent) cell_sums = data.sum(0).astype(float) cell_gene_probs = (data / cell_sums) cell_probs = np.array((cell_sums / total_co...
Downsample the data by removing a given percentage of the reads. Args: data: genes x cells array or sparse matrix percent: float between 0 and 1
codesearchnet
def read_header(self, file_handle, nextdata_offset=0): header = {'FCS format': file_handle.read(6)} file_handle.read(4) for field in ('text start', 'text end', 'data start', 'data end', 'analysis start', 'analysis end'): s = file_handle.read(8) try: field_value = int(s) e...
Read the header of the FCS file. The header specifies where the annotation, data and analysis are located inside the binary file. Args: file_handle: buffer containing FCS file. nextdata_offset: byte offset, from the start of the file, of an additional data set header as specified by $NEXTDATA
codesearchnet
def _flatten_beam_dim(tensor): shape = _shape_list(tensor) shape[0] *= shape[1] shape.pop(1) return tf.reshape(tensor, shape)
Reshapes the first two dimensions into a single dimension. Args: tensor: Tensor to reshape of shape [A, B, ...] Returns: Reshaped tensor of shape [A*B, ...]
juraj-google-style
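The same first-two-axes merge in NumPy, as a sketch of what `_flatten_beam_dim` does:

```python
import numpy as np

def flatten_beam_dim(arr):
    # Merge the first two axes [A, B, ...] -> [A*B, ...],
    # the NumPy analogue of the TF helper above.
    shape = list(arr.shape)
    shape[0] *= shape[1]
    shape.pop(1)
    return arr.reshape(shape)
```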
def update_hash(a_hash, mv): if mv.labels: signing.add_dict_to_hash(a_hash, encoding.MessageToPyValue(mv.labels)) money_value = mv.get_assigned_value(u'moneyValue') if money_value is not None: a_hash.update(b'\x00') a_hash.update(money_value.currencyCode.encode('utf-8'))
Adds ``mv`` to ``a_hash`` Args: a_hash (`Hash`): the secure hash, e.g. created by hashlib.md5 mv (:class:`MetricValue`): the instance to add to the hash
juraj-google-style
def CheckTaskToMerge(self, task): with self._lock: is_abandoned = (task.identifier in self._tasks_abandoned) is_processing = (task.identifier in self._tasks_processing) is_queued = (task.identifier in self._tasks_queued) if ((not is_queued) and (not is_processing) and (not is_abandon...
Checks if the task should be merged. Args: task (Task): task. Returns: bool: True if the task should be merged. Raises: KeyError: if the task was not queued, processing or abandoned.
codesearchnet
def timestamp(method='iso8601'): if method == 'iso8601': tz_hour = time.timezone // 3600 utc_offset = str(tz_hour) if tz_hour < 0 else '+' + str(tz_hour) stamp = time.strftime('%Y-%m-%dT%H%M%S') + utc_offset return stamp else: raise Value...
Make an ISO 8601 timestamp. Args: method (str): type of timestamp Example: >>> stamp = timestamp() >>> print('stamp = {!r}'.format(stamp)) stamp = ...-...-...T...
juraj-google-style
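A standalone sketch of the ISO 8601 helper above, with `time.timezone` (seconds west of UTC) reduced to whole hours:

```python
import time

def iso_timestamp():
    # Local time plus a crude hour offset; time.timezone is in seconds
    # west of UTC, hence the // 3600 (sign conventions vary by platform).
    tz_hour = time.timezone // 3600
    utc_offset = str(tz_hour) if tz_hour < 0 else '+' + str(tz_hour)
    return time.strftime('%Y-%m-%dT%H%M%S') + utc_offset
```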
def collapse_phenotypes(self, input_phenotype_labels, output_phenotype_label, verbose=True): if isinstance(input_phenotype_labels, str): input_phenotype_labels = [input_phenotype_labels] bad_phenotypes = (set(input_phenotype_labels) - set(self.phenotypes)) if (len(bad_phenotypes) > 0): raise...
Rename one or more input phenotypes to a single output phenotype Args: input_phenotype_labels (list): A str name or list of names to combine output_phenotype_label (str): The name to change the phenotype names to verbose (bool): output more details Returns: CellDataFrame: The modified CellDataFrame.
codesearchnet
def track_metric(self, name, value, type=None, count=None, min=None, max=None, std_dev=None, properties=None): dataPoint = channel.contracts.DataPoint() dataPoint.name = (name or NULL_CONSTANT_STRING) dataPoint.value = (value or 0) dataPoint.kind = (type or channel.contracts.DataPointType.aggregation) ...
Send information about a single metric data point that was captured for the application. Args: name (str). the name of the metric that was captured. value (float). the value of the metric that was captured. type (:class:`channel.contracts.DataPointType`). the type of the metric. (defaults to: :func:`channel.contra...
codesearchnet
def get_cytoband_coord(chrom, pos): chrom = chrom[3:] if chrom.startswith('chr') else chrom pos = int(pos) result = None logger.debug("Finding Cytoband for chrom:{0} pos:{1}".format(chrom, pos)) if chrom in CYTOBANDS: for interval in CYTOBANDS[chrom][pos]: result = "{0}{1}".format(chrom, interval.data) ...
Get the cytoband coordinate for a position Args: chrom(str): A chromosome pos(int): The position Returns: cytoband(str): The cytoband coordinate for the position
juraj-google-style
def run_one_step(self, eig_init_vec_val, eig_num_iter_val, smooth_val, penalty_val, learning_rate_val): step_feed_dict = {self.eig_init_vec_placeholder: eig_init_vec_val, self.eig_num_iter_placeholder: eig_num_iter_val, self.smooth_placeholder...
Run one step of gradient descent for optimization. Args: eig_init_vec_val: Start value for eigen value computations eig_num_iter_val: Number of iterations to run for eigen computations smooth_val: Value of smoothness parameter penalty_val: Value of penalty for the current step learning_rate_val: Value of learning rate...
juraj-google-style
async def complete_task(context, result): args = [get_task_id(context.claim_task), get_run_id(context.claim_task)] reversed_statuses = get_reversed_statuses(context) try: if result == 0: log.info("Reporting task complete...") response = await context.temp_queue.reportCom...
Mark the task as completed in the queue. Decide whether to call reportCompleted, reportFailed, or reportException based on the exit status of the script. If the task has expired or been cancelled, we'll get a 409 status. Args: context (scriptworker.context.Context): the scriptworker context. Raises: taskcluster.exc...
juraj-google-style
def Filter(fn, *args, **kwargs): if not callable(fn): raise TypeError('Filter can be used only with callable objects. Received %r instead.' % fn) wrapper = lambda x, *args, **kwargs: [x] if fn(x, *args, **kwargs) else [] label = 'Filter(%s)' % ptransform.label_from_callable(fn) if hasattr(fn, '_...
:func:`Filter` is a :func:`FlatMap` with its callable filtering out elements. Filter accepts a function that keeps elements that return True, and filters out the remaining elements. Args: fn (``Callable[..., bool]``): a callable object. First argument will be an element. *args: positional arguments passed to the tran...
github-repos
def build_relative_position(query_size, key_size, bucket_size=-1, max_position=-1, device=None): q_ids = torch.arange(0, query_size, device=device) k_ids = torch.arange(0, key_size, device=device) rel_pos_ids = q_ids[:, None] - k_ids[None, :] if bucket_size > 0 and max_position > 0: rel_pos_ids ...
Build relative position according to the query and key We assume the absolute position of query \(P_q\) is range from (0, query_size) and the absolute position of key \(P_k\) is range from (0, key_size), The relative positions from query to key is \(R_{q \rightarrow k} = P_q - P_k\) Args: query_size (int): the length...
github-repos
def __init__(self, coord, timer_interval_secs, target=None, args=None, kwargs=None): if not isinstance(coord, Coordinator): raise ValueError("'coord' argument must be a Coordinator: %s" % coord) super(LooperThread, self).__init__() self.daemon = True self._coord = coord self._timer_interval_...
Create a LooperThread. Args: coord: A Coordinator. timer_interval_secs: Time boundaries at which to call Run(), or None if it should be called back to back. target: Optional callable object that will be executed in the thread. args: Optional arguments to pass to `target` when calling it. kwargs: Optional keyword argum...
github-repos
def nearest_neighbors(self, word, top_k=10): point = self[word] diff = (self.vectors - point) distances = np.linalg.norm(diff, axis=1) top_ids = distances.argsort()[1:(top_k + 1)] return [self.vocabulary.id_word[i] for i in top_ids]
Return the nearest k words to the given `word`. Args: word (string): single word. top_k (integer): decides how many neighbors to report. Returns: A list of words sorted by the distances. The closest is the first. Note: L2 metric is used to calculate distances.
codesearchnet
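A toy illustration of the L2 lookup in `nearest_neighbors`; the `[1:top_k+1]` slice skips the query word itself:

```python
import numpy as np

# vectors is (vocab, dim); row 0 plays the role of the query word.
vectors = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
query = vectors[0]
distances = np.linalg.norm(vectors - query, axis=1)
# argsort puts the query itself (distance 0) first, so drop index 0.
top_ids = distances.argsort()[1:3]
```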
def load_pickled_model(filename, dirname=None): if dirname is None: pkg_filename = pkgutil.get_loader('dragnet').get_filename('dragnet') pkg_dirname = os.path.dirname(pkg_filename) dirname = os.path.join(pkg_dirname, 'pickled_models', model_path) filepath = os.path.join(dirname, fil...
Load a pickled ``Extractor`` model from disk. Args: filename (str): Name of pickled model file under ``dirname``. dirname (str): Name of directory on disk containing the pickled model. If None, dragnet's default pickled model directory is used: /path/to/dragnet/pickled_models/[PY_VERSION]_[SKLEARN_VERSION] Returns: :...
juraj-google-style
def start_vm(access_token, subscription_id, resource_group, vm_name): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourceGroups/', resource_group, '/providers/Microsoft.Compute/virtualMachines/', vm_name, '/start', '?api-version=', COMP_API]) return do_post(endpoint, '', access...
Start a virtual machine. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. vm_name (str): Name of the virtual machine. Returns: HTTP response.
codesearchnet
def _padded_shape_to_tensor(padded_shape, input_component_shape): try: padded_shape_as_shape = tensor_shape.as_shape(padded_shape) ret = ops.convert_to_tensor([dim if dim is not None else -1 for dim in padded_shape_as_shape.as_list()], dtype=dtypes.int64) except (TypeError, ValueError) as e: ...
Converts `padded_shape` to a `tf.Tensor` representing that shape. Args: padded_shape: A shape-like object, which may be a `tf.TensorShape`, a Python sequence, or a 1-D `tf.Tensor` of `tf.int64` elements. input_component_shape: A `tf.TensorShape`, with which `padded_shape` must be compatible. Returns: A 1-D `tf.Tensor...
github-repos
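The core conversion in `_padded_shape_to_tensor` is mapping unknown dims to -1 before building the int64 tensor; in plain Python:

```python
# None marks an unknown dimension in a padded shape; the tensor
# representation uses -1 for the same purpose.
padded_shape = [None, 5, None]
as_tensor_input = [dim if dim is not None else -1 for dim in padded_shape]
```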
def env_problem(env_problem_name, **kwargs): ep_cls = Registries.env_problems[env_problem_name] ep = ep_cls() ep.initialize(**kwargs) return ep
Get and initialize the `EnvProblem` with the given name and batch size. Args: env_problem_name: string name of the registered env problem. **kwargs: forwarded to env problem's initialize method. Returns: an initialized EnvProblem with the given batch size.
codesearchnet
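A minimal sketch of the registry-then-initialize pattern behind `env_problem` (the names here are illustrative, not the actual tensor2tensor API):

```python
# Hypothetical registry: look up a class by name, instantiate it,
# and forward kwargs to its initialize method.
registry = {}

def register(name):
    def deco(cls):
        registry[name] = cls
        return cls
    return deco

@register('dummy')
class DummyProblem:
    def initialize(self, batch_size=1):
        self.batch_size = batch_size

def make(name, **kwargs):
    obj = registry[name]()
    obj.initialize(**kwargs)
    return obj
```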
def get_host(self, retry_count=3): try: default_timeout_set = False if (not socket.getdefaulttimeout()): socket.setdefaulttimeout(self.timeout) default_timeout_set = True log.debug('Host query for {0}'.format(self.address_str)) ret = socket.gethostbyaddr(self....
The function for retrieving host information for an IP address. Args: retry_count (:obj:`int`): The number of times to retry in case socket errors, timeouts, connection resets, etc. are encountered. Defaults to 3. Returns: namedtuple: :hostname (str): The hostname returned mapped to the given IP address. :aliaslist ...
codesearchnet
def visit(self, node): method = 'visit_' + node.__class__.__name__ if not hasattr(self, method): raise ValueError('Unknown node type: %s' % node.__class__.__name__) visitor = getattr(self, method) if anno.hasanno(node, 'active_in'): self.active_variables = anno.getanno(node, ...
Visit a node. This method is largely modelled after the ast.NodeTransformer class. Args: node: The node to visit. Returns: A tuple of the primal and adjoint, each of which is a node or a list of nodes.
juraj-google-style
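The `'visit_' + class name` dispatch above is the classic visitor pattern; a self-contained sketch over `ast` nodes:

```python
import ast

class Visitor:
    # Dispatch on the node's class name, raising for node types
    # without a handler, as in the visit method above.
    def visit(self, node):
        method = 'visit_' + node.__class__.__name__
        if not hasattr(self, method):
            raise ValueError('Unknown node type: %s' % node.__class__.__name__)
        return getattr(self, method)(node)

class ConstVisitor(Visitor):
    def visit_Constant(self, node):
        return node.value

node = ast.parse('42', mode='eval').body
```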
def PopTask(self): try: (_, task) = heapq.heappop(self._heap) except IndexError: return None self._task_identifiers.remove(task.identifier) return task
Retrieves and removes the first task from the heap. Returns: Task: the task or None if the heap is empty.
codesearchnet
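A runnable sketch of the heap-pop pattern in `PopTask`: entries are (priority, task) pairs and exhaustion is signalled by `IndexError` rather than a length check:

```python
import heapq

def pop_task(heap):
    # heappop raises IndexError on an empty heap; map that to None,
    # mirroring PopTask above.
    try:
        _, task = heapq.heappop(heap)
    except IndexError:
        return None
    return task

heap = []
heapq.heappush(heap, (2, 'low'))
heapq.heappush(heap, (1, 'high'))
```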
def linear_set_layer(layer_size, inputs, context=None, activation_fn=tf.nn.relu, dropout=0.0, name=None): with tf.variable_scope(name, default_name='linear_set_layer', values=[inputs]): outputs = conv1d(inputs, layer_size, 1, activation=None, name='set_conv') if (context is not None): if...
Basic layer type for doing funky things with sets. Applies a linear transformation to each element in the input set. If a context is supplied, it is concatenated with the inputs. e.g. One can use global_pool_1d to get a representation of the set which can then be used as the context for the next layer. TODO: Add bias...
codesearchnet
def complete(self, stream): assert (not self.is_complete()) self._marker.addInputPort(outputPort=stream.oport) self.stream.oport.schema = stream.oport.schema self._pending_schema._set(self.stream.oport.schema) stream.oport.operator._start_op = True
Complete the pending stream. Any connections made to :py:attr:`stream` are connected to `stream` once this method returns. Args: stream(Stream): Stream that completes the connection.
codesearchnet
def square(x): return math_ops.square(x)
Element-wise square. Args: x: Tensor or variable. Returns: A tensor.
github-repos
def __init__(self, cluster_resolver=None, communication_options=None): if communication_options is None: communication_options = collective_util.Options() super(CollectiveAllReduceStrategy, self).__init__(CollectiveAllReduceExtended(self, cluster_resolver=cluster_resolver, communication_options=communic...
Creates the strategy. Args: cluster_resolver: optional `tf.distribute.cluster_resolver.ClusterResolver`. If `None`, `tf.distribute.cluster_resolver.TFConfigClusterResolver` is used. communication_options: optional `tf.distribute.experimental.CommunicationOptions`. This configures the default options for cross device c...
github-repos
def inversion(origin=(0, 0, 0)): mat = -np.eye(4) mat[3, 3] = 1 mat[0:3, 3] = 2 * np.array(origin) return SymmOp(mat)
Inversion symmetry operation about a point. Args: origin (3x1 array): Origin of the inversion operation. Defaults to [0, 0, 0]. Returns: SymmOp representing an inversion operation about the origin.
juraj-google-style
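The affine matrix built by `inversion()` maps a point p to 2*origin - p; a NumPy check of that identity (the `apply` helper is illustrative, standing in for SymmOp's operate method):

```python
import numpy as np

def inversion_matrix(origin=(0, 0, 0)):
    # Same construction as inversion() above, returning the raw 4x4
    # affine matrix instead of a SymmOp.
    mat = -np.eye(4)
    mat[3, 3] = 1
    mat[0:3, 3] = 2 * np.array(origin)
    return mat

def apply(mat, point):
    # Apply the affine matrix to a 3D point via homogeneous coordinates.
    p = np.append(np.asarray(point, dtype=float), 1.0)
    return (mat @ p)[:3]
```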
def __init__(self, tensor: bk.TensorLike, qubits: Qubits = None, memory: Dict[Addr, Any] = None) -> None: if qubits is None: tensor = bk.astensorproduct(tensor) bits = bk.rank(tensor) qubits = range(bits) ...
Create a new State from a tensor of qubit amplitudes Args: tensor: A vector or tensor of state amplitudes qubits: A sequence of qubit names. (Defaults to integer indices, e.g. [0, 1, 2] for 3 qubits) memory: Classical memory.
juraj-google-style
def __init__(self, actions=None): super().__init__(InstructionType.OFPIT_WRITE_ACTIONS) self.actions = actions if actions else []
Create an InstructionWriteAction with the optional parameters below. Args: actions (:class:`~.actions.ListOfActions`): Actions associated with OFPIT_WRITE_ACTIONS.
juraj-google-style
def reply(self, text): data = {'text': text, 'vchannel_id': self['vchannel_id']} if self.is_p2p(): data['type'] = RTMMessageType.P2PMessage data['to_uid'] = self['uid'] else: data['type'] = RTMMessageType.ChannelMessage data['channel_id'] ...
Replies with a text message Args: text(str): message content Returns: RTMMessage
juraj-google-style
def taper_rate(p0, p1): return ((2 * abs((p0[COLS.R] - p1[COLS.R]))) / point_dist(p0, p1))
Compute the taper rate between points p0 and p1 Args: p0, p1: iterables with first 4 components containing (x, y, z, r) Returns: The taper rate, defined as the absolute value of the difference in the diameters of p0 and p1 divided by the Euclidean distance between them.
codesearchnet
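`taper_rate` in plain Python, assuming (x, y, z, r) points so the factor of 2 turns the radius difference into a diameter difference:

```python
import math

def taper_rate(p0, p1):
    # Diameter difference over Euclidean distance between the points.
    dist = math.dist(p0[:3], p1[:3])
    return 2 * abs(p0[3] - p1[3]) / dist
```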
def profile_update_schema(profile): if profile.get('autoclear') is None: print( '{}{}Profile Update: Adding new "autoclear" parameter.'.format( c.Style.BRIGHT, c.Fore.YELLOW ) ) profile['autoclear'] = True...
Update profile to latest schema. Args: profile (dict): The dictionary containing the profile settings.
juraj-google-style
def __init__(self, api_key: str, model: str, config: genai_types.GenerateContentConfig, output_dict: dict[EventTransition, content_api.ProcessorContentTypes | None], sensitivity: Optional[dict[EventTransition, int]]=None, max_images: int=5): self._client = genai.Client(api_key=api_key) self._model = model s...
Initializes the event detection processor. Args: api_key: The API key to use for the event detection model. model: The model to use for the event detection. config: The configuration to use for the event detection model. This configuration should contain the response schema for the event detection model. output_dict: ...
github-repos
def predict(self, df_data, graph=None, **kwargs): if (graph is None): return self.create_graph_from_data(df_data, **kwargs) elif isinstance(graph, nx.DiGraph): return self.orient_directed_graph(df_data, graph, **kwargs) elif isinstance(graph, nx.Graph): return self.orient_undirected_...
Orient a graph using the method defined by the arguments. Depending on the type of `graph`, this function proceeds to execute different functions: 1. If ``graph`` is a ``networkx.DiGraph``, then ``self.orient_directed_graph`` is executed. 2. If ``graph`` is a ``networkx.Graph``, then ``self.orient_undirected_graph`` i...
codesearchnet