Columns: code (string, lengths 20 to 4.93k) · docstring (string, lengths 33 to 1.27k) · source (string, 3 classes)
def get_instances(serials):
    _validate_device_existence(serials)
    results = []
    for s in serials:
        results.append(AndroidDevice(s))
    return results
Create AndroidDevice instances from a list of serials. Args: serials: A list of android device serials. Returns: A list of AndroidDevice objects.
github-repos
def Extract(self, components): rundll_index = -1 for index, component in enumerate(components): if component.lower().endswith("rundll32.exe"): rundll_index = index if rundll_index == -1: return [] components = components[(rundll_index + 1):] last_component...
Extracts interesting paths from a given path. Args: components: Source string represented as a list of components. Returns: A list of extracted paths (as strings).
juraj-google-style
def company(self, **kwargs):
    path = self._get_path('company')
    response = self._GET(path, kwargs)
    self._set_attrs_to_values(response)
    return response
Search for companies by name. Args: query: CGI escaped string. page: (optional) Minimum value of 1. Expected value is an integer. Returns: A dict representation of the JSON returned from the API.
juraj-google-style
def read(self, istream, kmip_version=enums.KMIPVersion.KMIP_1_0): super(Certificate, self).read(istream, kmip_version=kmip_version) tstream = BytearrayStream(istream.read(self.length)) self.certificate_type = CertificateType() self.certificate_value = CertificateValue() self.certificate_type.read(ts...
Read the data encoding the Certificate object and decode it into its constituent parts. Args: istream (Stream): A data stream containing encoded object data, supporting a read method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be de...
codesearchnet
def deconv_elems_1d(x, factor, out_depth=None): out_depth = out_depth or x.get_shape().as_list()[-1] x = tf.expand_dims(x, 1) x = layers().Conv2DTranspose( filters=out_depth, kernel_size=(1, factor), strides=(1, factor), padding="valid", data_format="channels_last", )(x) x...
Increase the length and change the dimensionality. Expand/project each position of dim depth of the input into factor*tokens of dim out_depth. Args: x (tf.Tensor): shape [batch_size, length, depth] factor (int): Multiplicative factor of each token. out_depth (int): Output depth (if None, keep depth constant) Return...
juraj-google-style
def conv_output_length(input_length, filter_size, padding, stride, dilation=1): if input_length is None: return None assert padding in {'same', 'valid', 'full'} dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1) if padding == 'same': output_length = input_length e...
Determines output length of a convolution given input length. Args: input_length: integer. filter_size: integer. padding: one of "same", "valid", "full". stride: integer. dilation: dilation rate, integer. Returns: The output length (integer).
github-repos
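The truncated helper above implements standard convolution arithmetic. A minimal, self-contained sketch of the same formula (assuming the usual Keras semantics for "same"/"valid"/"full" padding) is:

```python
def conv_output_length(input_length, filter_size, padding, stride, dilation=1):
    """Output length of a 1-D convolution along one spatial axis."""
    if input_length is None:
        return None
    assert padding in {'same', 'valid', 'full'}
    # Effective filter size once dilation spreads out the taps.
    dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)
    if padding == 'same':
        output_length = input_length
    elif padding == 'valid':
        output_length = input_length - dilated_filter_size + 1
    else:  # 'full': every partial overlap of filter and input counts
        output_length = input_length + dilated_filter_size - 1
    # Ceiling division accounts for the stride.
    return (output_length + stride - 1) // stride
```

For example, a length-10 input with a size-3 filter gives 8 ("valid"), 10 ("same"), or 12 ("full") outputs at stride 1.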
def wind_direction(self, value=999.0): if value is not None: try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float ' 'for field `wind_direction`'.format(value)) ...
Corresponds to IDD Field `wind_direction` Args: value (float): value for IDD Field `wind_direction` Unit: degrees value >= 0.0 value <= 360.0 Missing value: 999.0 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid valu...
juraj-google-style
class Sum(Metric): def __init__(self, name='sum', dtype=None): super().__init__(name=name, dtype=dtype) self.total = self.add_variable(shape=(), initializer=initializers.Zeros(), dtype=self.dtype, name='total') def update_state(self, values, sample_weight=None): values, _ = reduce_to_s...
Compute the (weighted) sum of the given values. For example, if `values` is `[1, 3, 5, 7]` then their sum is 16. If `sample_weight` was specified as `[1, 1, 0, 0]` then the sum would be 4. This metric creates one variable, `total`. This is ultimately returned as the sum value. Args: name: (Optional) string name of t...
github-repos
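The docstring's worked example (sum of [1, 3, 5, 7] is 16; with weights [1, 1, 0, 0] it is 4) can be reproduced with a framework-free sketch. The class name `PlainSum` is illustrative, not the Keras implementation:

```python
class PlainSum:
    """Framework-free sketch of a running (weighted) sum metric."""

    def __init__(self):
        self.total = 0.0  # the metric's single state variable

    def update_state(self, values, sample_weight=None):
        # With no weights, every value counts fully.
        if sample_weight is None:
            sample_weight = [1.0] * len(values)
        self.total += sum(v * w for v, w in zip(values, sample_weight))

    def result(self):
        return self.total
```

Calling `update_state` repeatedly accumulates into `total`, matching the single-variable design described above.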
def to_grid_locator(latitude, longitude, precision='square'): if precision not in ('square', 'subsquare', 'extsquare'): raise ValueError('Unsupported precision value %r' % precision) if not -90 <= latitude <= 90: raise ValueError('Invalid latitude value %r' % latitude) if not -180 <= l...
Calculate Maidenhead locator from latitude and longitude. Args: latitude (float): Position's latitude longitude (float): Position's longitude precision (str): Precision with which to generate the locator string Returns: str: Maidenhead locator for latitude and longitude Raises: ValueError: Invalid precision identifier Value...
juraj-google-style
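Since the body above is truncated, here is a from-scratch sketch of the Maidenhead encoding for the "square" and "subsquare" precisions ("extsquare" omitted for brevity). The function name `maidenhead` is hypothetical, not the library's `to_grid_locator`:

```python
def maidenhead(latitude, longitude, precision='square'):
    """Sketch of the Maidenhead grid locator (4 or 6 characters)."""
    if precision not in ('square', 'subsquare'):
        raise ValueError('Unsupported precision value %r' % precision)
    # Shift to positive ranges: longitude 0..360, latitude 0..180.
    lon = longitude + 180.0
    lat = latitude + 90.0
    # Field: 20 degrees of longitude, 10 of latitude, letters A-R.
    locator = chr(ord('A') + int(lon // 20)) + chr(ord('A') + int(lat // 10))
    # Square: 2-degree / 1-degree digits 0-9.
    locator += str(int((lon % 20) // 2)) + str(int(lat % 10))
    if precision == 'subsquare':
        # Subsquare: each square split 24x24, letters a-x.
        locator += chr(ord('a') + int((lon % 2) * 12))
        locator += chr(ord('a') + int((lat % 1) * 24))
    return locator
```

For instance, Munich (48.14666, 11.60833) encodes to "JN58" at square precision and "JN58td" at subsquare precision.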
def __init__(self, pyof_class, items=None):
    self._pyof_class = pyof_class
    super().__init__(items)
Create a FixedTypeList with the parameters that follow. Args: pyof_class (:obj:`type`): Class of the items to be stored. items (iterable, ``pyof_class``): Items to be stored.
juraj-google-style
def run_build_model(self, num_runs=5, silent=False, force_rerun=False): self.mutation_ddG_avg_outfile = 'Average_{}.fxout'.format(op.splitext(self.repaired_pdb_outfile)[0]) self.mutation_ddG_raw_outfile = 'Raw_{}.fxout'.format(op.splitext(self.repaired_pdb_outfile)[0]) foldx_build_model = 'foldx --command=B...
Run FoldX BuildModel command with a mutant file input. Original command:: foldx --command=BuildModel --pdb=4bxi_Repair.pdb --mutant-file=individual_list.txt --numberOfRuns=5 Args: num_runs (int): silent (bool): If FoldX output should be silenced from printing to the shell. force_rerun (bool): If FoldX BuildModel sho...
codesearchnet
def replace(template, **replacements): if not isinstance(template, str): raise ValueError('Expected string template, got %s' % type(template)) for k in replacements: replacements[k] = _convert_to_ast(replacements[k]) template_str = parser.STANDARD_PREAMBLE + textwrap.dedent(template) nod...
Replaces placeholders in a Python template. AST Name and Tuple nodes always receive the context that is inferred from the template. However, when replacing more complex nodes (that can potentially contain Name children), the caller is responsible for setting the appropriate context. Args: template: A string represe...
github-repos
def Get(self, request, global_params=None):
    config = self.GetMethodConfig('Get')
    return self._RunMethod(config, request, global_params=global_params)
Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service. Args: request: (CloudbuildOperationsGetRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (Operation) The res...
github-repos
def FromDictionary(cls, msg_dict): level = msg_dict.get('level') msg = msg_dict.get('message') now = msg_dict.get('now_time') created = msg_dict.get('created_time') count = msg_dict.get('count', 1) msg_id = msg_dict.get('id', 0) new_msg = ServiceMessage...
Create from a dictionary with kv pairs. Args: msg_dict (dict): A dictionary with information as created by to_dict() Returns: ServiceMessage: the converted message
juraj-google-style
def get_symmetrized_structure(self):
    ds = self.get_symmetry_dataset()
    sg = SpacegroupOperations(self.get_space_group_symbol(),
                              self.get_space_group_number(),
                              self.get_symmetry_operations())
    return SymmetrizedStructure(self._structure, sg,
                                ds['equivalent_atoms'], ds['wyckoffs'])
Get a symmetrized structure. A symmetrized structure is one where the sites have been grouped into symmetrically equivalent groups. Returns: :class:`pymatgen.symmetry.structure.SymmetrizedStructure` object.
codesearchnet
def score_cosine(self, term1, term2, **kwargs):
    t1_kde = self.kde(term1, **kwargs)
    t2_kde = self.kde(term2, **kwargs)
    return 1 - distance.cosine(t1_kde, t2_kde)
Compute a weighting score based on the cosine distance between the kernel density estimates of two terms. Args: term1 (str) term2 (str) Returns: float
juraj-google-style
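One minus the SciPy cosine distance is simply the cosine similarity of the two density vectors. A dependency-free sketch of that score (the name `cosine_score` is illustrative):

```python
import math

def cosine_score(v1, v2):
    """Cosine similarity: dot(v1, v2) / (|v1| * |v2|)."""
    # 1 - scipy.spatial.distance.cosine(v1, v2) computes exactly this.
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)
```

Identical directions score 1.0, orthogonal vectors score 0.0, regardless of magnitude.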
def Open(self, path, ascii_codepage='cp1252'):
    path_specification = self._path_resolver.ResolvePath(path)
    if path_specification is None:
        return None
    return self._OpenPathSpec(path_specification)
Opens the Windows Registry file specified by the path. Args: path (str): path of the Windows Registry file. ascii_codepage (Optional[str]): ASCII string codepage. Returns: WinRegistryFile: Windows Registry file or None.
juraj-google-style
def _ScanFileSystemForWindowsDirectory(self, path_resolver): result = False for windows_path in self._WINDOWS_DIRECTORIES: windows_path_spec = path_resolver.ResolvePath(windows_path) result = (windows_path_spec is not None) if result: self._windows_directory = windows_path ...
Scans a file system for a known Windows directory. Args: path_resolver (WindowsPathResolver): Windows path resolver. Returns: bool: True if a known Windows directory was found.
codesearchnet
def _encode_constraints(self, builder: expressions.Builder, element_definition: ElementDefinition) -> List[validation_pb2.SqlRequirement]: result: List[validation_pb2.SqlRequirement] = [] constraints: List[Constraint] = cast(Any, element_definition).constraint root_constraints: List[Constraint] = [] if ...
Returns a list of `SqlRequirement`s for FHIRPath constraints. Args: builder: The builder containing the element to encode constraints for. element_definition: Element definition passed from the parent. Returns: A list of `SqlRequirement`s expressing FHIRPath constraints defined on the `element_definition` and `builde...
github-repos
def create(self, uri, local_path): matches = self.schema_pattern.search(uri) if not matches: logger.error("Unknown uri schema: '%s'. Added schemas: %s", uri, list(self.handlers.keys())) return None schema = matches.group(1) url = matches.group(2) ...
Create a project handler Args: uri (str): schema://something formatted uri local_path (str): the project configs directory Return: ProjectHandler derived class instance
juraj-google-style
def _SanitizedArgSpec(obj): output_string = '' unsanitized_arg_spec = tf_inspect.getargspec(obj) for clean_attr in ('args', 'varargs', 'keywords'): output_string += '%s=%s, ' % (clean_attr, getattr(unsanitized_arg_spec, clean_attr)) if unsanitized_arg_spec.defaults: sanitized_defaults = ...
Get an ArgSpec string that is free of addresses. We have callables as function arg defaults. This results in addresses in getargspec output. This function returns a sanitized string list of base classes. Args: obj: A python routine for which to create the sanitized argspec. Returns: string, a string representation o...
github-repos
def timTuVi(cuc, ngaySinhAmLich):
    cungDan = 3
    cucBanDau = cuc
    if cuc not in [2, 3, 4, 5, 6]:
        raise Exception("Số cục phải là 2, 3, 4, 5, 6")
    while cuc < ngaySinhAmLich:
        cuc += cucBanDau
        cungDan += 1
    saiLech = cuc - ngaySinhAmLich
    if saiLech % 2 == 1:  # '==' instead of 'is': identity comparison on ints is unreliable
        sa...
Find the position of the Tử Vi star. Args: cuc (TYPE): Description ngaySinhAmLich (TYPE): Description Returns: TYPE: Description Raises: Exception: Description
juraj-google-style
def filter_lines(code, line_spec): code_lines = code.splitlines() line_specs = [line_denom.strip() for line_denom in line_spec.split(',')] single_lines = set(map(int, filter((lambda line: ('-' not in line)), line_specs))) line_ranges = set(filter((lambda line: ('-' in line)), line_specs)) for line_r...
Removes all lines not matching the line_spec. Args: code The code to filter line_spec The line specification. This should be a comma-separated string of lines or line ranges, e.g. 1,2,5-12,15 If a line range starts with -, all lines up to this line are included. If a line range ends with -, all lines from this line on...
codesearchnet
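The body above is cut off, so here is a complete sketch of the line-spec filtering the docstring describes, including the open-ended "-N" and "N-" ranges. The handling of those open ranges is an assumption based on the docstring, not the original tail:

```python
def filter_lines(code, line_spec):
    """Keep only the 1-based lines named by a spec like "1,2,5-12,15"."""
    code_lines = code.splitlines()
    keep = set()
    for part in line_spec.split(','):
        part = part.strip()
        if '-' not in part:
            keep.add(int(part))
        else:
            start, end = part.split('-')
            start = int(start) if start else 1           # "-5" means 1..5
            end = int(end) if end else len(code_lines)   # "5-" means 5..last
            keep.update(range(start, end + 1))
    return '\n'.join(line for num, line in enumerate(code_lines, 1)
                     if num in keep)
```

So for a five-line input, "1,3-4" keeps lines 1, 3 and 4, and "-2" keeps the first two lines.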
def post_create_app(cls, app, **settings):
    super(MarshmallowAwareApp, cls).post_create_app(app, **settings)
    marsh.init_app(app)
    return app
Automatically register and init the Flask Marshmallow extension. Args: app (flask.Flask): The application instance in which to initialize Flask Marshmallow upon. Kwargs: settings (dict): The settings passed to this method from the parent app. Returns: flask.Flask: The Flask application that was passed in.
codesearchnet
def _ln_rnn(x, gamma, beta):
    mean, variance = tf.nn.moments(x, axes=[len(x.get_shape()) - 1], keep_dims=True)
    x = (x - mean) / tf.sqrt(variance + tf.sg_eps)
    return gamma * x + beta
r"""Applies layer normalization. Normalizes the last dimension of the tensor `x`. Args: x: A `Tensor`. gamma: A constant `Tensor`. Scale parameter. Default is 1. beta: A constant `Tensor`. Offset parameter. Default is 0. Returns: A `Tensor` with the same shape as `x`.
juraj-google-style
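The normalization above can be checked without TensorFlow: subtract the mean of the last dimension, divide by the standard deviation (with a small epsilon for stability), then scale and shift. A plain-Python sketch over a single vector:

```python
import math

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-8):
    """Normalize a vector to zero mean / unit variance, then scale and offset."""
    mean = sum(x) / len(x)
    variance = sum((v - mean) ** 2 for v in x) / len(x)
    # gamma and beta are the learned scale and offset parameters.
    return [gamma * (v - mean) / math.sqrt(variance + eps) + beta for v in x]
```

After normalization the output has (approximately) zero mean and unit variance, which is what makes the subsequent `gamma`/`beta` parameters meaningful.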
def batch_decode(self, sequences: Union[List[int], List[List[int]], 'np.ndarray', 'torch.Tensor', 'tf.Tensor'], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, **kwargs) -> List[str]: return [self.decode(seq, skip_special_tokens=skip_special_tokens, clean_up_tokenization_spaces=c...
Convert a list of lists of token ids into a list of strings by calling decode. Args: sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`): List of tokenized input ids. Can be obtained using the `__call__` method. skip_special_tokens (`bool`, *optional*, defaults to `False`): Whether or ...
github-repos
def to_json(self, from_api: dict=None, from_json: dict=None, parents: dict={}) -> dict: if from_api: from_json = deepcopy(from_api) for key, value in from_json.items(): if not isinstance(value, dict): continue if '$ref' in value: ref = value['$ref'] pa...
Returns a Discovery API Document schema with all references extrapolated. Recursively crawls the discovery document reference tree to build the document. Leverages the recursion depth passed in the constructor to stop if necessary. Args: from_api: the api schema to extrapolate from_json: new object with references replaced, not p...
github-repos
def get_special_tokens_mask(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None, already_has_special_tokens: bool=False) -> List[int]: if already_has_special_tokens: return super().get_special_tokens_mask(token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True) ...
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_spe...
github-repos
def price(self, valuation_date, market, model=None, pricing_context=None, name=None): del model, pricing_context name = name or self._name + '_price' with tf.name_scope(name): discount_curve = market.discount_curve reference_curve = market.reference_curve libor_rate = rc.get_rate_ind...
Returns the present value of the stream on the valuation date. Args: valuation_date: A scalar `DateTensor` specifying the date on which valuation is being desired. market: A namedtuple of type `InterestRateMarket` which contains the necessary information for pricing the cashflow stream. model: Reserved for future use....
github-repos
def Serialize(self, writer): super(SpentCoinState, self).Serialize(writer) writer.WriteUInt256(self.TransactionHash) writer.WriteUInt32(self.TransactionHeight) writer.WriteVarInt(len(self.Items)) for item in self.Items: writer.WriteUInt16(item.index) ...
Serialize full object. Args: writer (neo.IO.BinaryWriter):
juraj-google-style
def stringify_default(default: Any) -> str: if isinstance(default, bool): return f'`{default}`' elif isinstance(default, enum.Enum): return f'`{str(default)}`' elif isinstance(default, int): return str(default) elif isinstance(default, float): result = str(default) ...
Returns the string representation of a default value, as used in docstrings: numbers are left as is, all other objects are wrapped in backticks. Args: default (`Any`): The default value to process Returns: `str`: The string representation of that default.
github-repos
def from_shape(cls, shape): if shape.__class__ is cls: return shape else: error = linearization_error(shape.nodes) if error < _ERROR_VAL: linearized = cls(shape, error) return linearized else: ...
Try to linearize a curve (or an already linearized curve). Args: shape (Union[SubdividedCurve, \ ~bezier._geometric_intersection.Linearization]): A curve or an already linearized curve. Returns: Union[SubdividedCurve, \ ~bezier._geometric_intersection.Linearization]: The (potentially linearized) curve.
juraj-google-style
def db_dp010(self, value=None):
    if value is not None:
        try:
            value = float(value)
        except ValueError:
            raise ValueError('value {} need to be of type float for field `db_dp010`'.format(value))
    self._db_dp010 = value
Corresponds to IDD Field `db_dp010` mean coincident dry-bulb temperature to Dew-point temperature corresponding to 1.0% annual cumulative frequency of occurrence Args: value (float): value for IDD Field `db_dp010` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missin...
codesearchnet
def transition_complete(self, pipeline_key): def txn(): pipeline_record = db.get(pipeline_key) if pipeline_record is None: logging.warning( 'Tried to mark pipeline ID "%s" as complete but it does not exist.', pipeline_key.name()) raise db.Rollback() if ...
Marks the given pipeline as complete. Does nothing if the pipeline is no longer in a state that can be completed. Args: pipeline_key: db.Key of the _PipelineRecord that has completed.
juraj-google-style
def add_or_update(self, section, key, value):
    updates = self.update(section, key, value)
    if updates == 0:
        self.add(section, key, value)
    return updates
Update the key or, if no previous value existed, add it. Returns: int: Number of updated lines.
codesearchnet
def delete_many(self, keys, noreply=None): if (not keys): return True if (noreply is None): noreply = self.default_noreply cmds = [] for key in keys: cmds.append((((b'delete ' + self.check_key(key)) + (b' noreply' if noreply else b'')) + b'\r\n')) self._misc_cmd(cmds, b'delet...
A convenience function to delete multiple keys. Args: keys: list(str), the list of keys to delete. noreply: optional bool, True to not wait for the reply (defaults to self.default_noreply). Returns: True. If an exception is raised then all, some or none of the keys may have been deleted. Otherwise all the keys have b...
codesearchnet
def changed(dirname, filename='.md5', args=None, glob=None): root = Path(dirname) if not root.exists(): return True cachefile = root / filename current_digest = cachefile.open().read() if cachefile.exists() else "" _digest = digest(dirname, glob=glob) if args and args...
Has anything matching `glob` changed in `dirname`? Args: dirname: directory to measure filename: filename to store checksum
juraj-google-style
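The truncated `changed` above compares a cached digest against a fresh one. A self-contained sketch of the same idea, with a hypothetical `digest` helper (the original's `digest` is not shown) that hashes matching file contents in sorted order:

```python
import hashlib
from pathlib import Path

def digest(dirname, glob='*'):
    """MD5 over the contents of every matching file, in sorted order for stability."""
    md5 = hashlib.md5()
    for path in sorted(Path(dirname).rglob(glob)):
        if path.is_file():
            md5.update(path.read_bytes())
    return md5.hexdigest()

def changed(dirname, filename='.md5', glob='*'):
    """Compare the current digest with the cached one; refresh the cache either way."""
    root = Path(dirname)
    if not root.exists():
        return True
    cachefile = root / filename
    current = cachefile.read_text() if cachefile.exists() else ''
    new = digest(dirname, glob=glob)
    cachefile.write_text(new)
    return new != current
```

Note the `glob` should exclude the cache file itself (e.g. `'*.txt'`), otherwise writing the cache changes the next digest.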
def __init__(self, client, base_path):
    self._client = client
    self._base_path = base_path
    self._queue_path = posixpath.join(self._base_path, 'queue', '')
    self._counter_path = posixpath.join(self._queue_path, 'counter')
    self._ensure_counter()
    self._ensure_queue()
Initialise the class. Args: client (:class:`consulate.Consul`): A :class:`consulate.Consul` instance. base_path (str): the base path to use in Consul.
juraj-google-style
def save(self, force=False): from time import time from datetime import datetime savefreq = TaskDB.get_option("savefreq", 2, int) if self.lastsave is not None: delta = (datetime.fromtimestamp(time()) - datetim...
Serializes the database file to disk. Args: force (bool): when True, the elapsed time since last save is ignored and the database is saved anyway (subject to global :data:`writeable` setting).
juraj-google-style
def sg_argmax(tensor, opt):
    opt += tf.sg_opt(axis=tensor.get_shape().ndims - 1)
    return tf.argmax(tensor, opt.axis, opt.name)
r"""Returns the indices of the maximum values along the specified axis. See `tf.argmax()` in tensorflow. Args: tensor: A `Tensor` (automatically given by chain). opt: axis: Target axis. Default is the last one. name: If provided, replace current tensor's name. Returns: A `Tensor`.
juraj-google-style
def all(self, data={}, **kwargs):
    return super(Invoice, self).all(data, **kwargs)
Fetch all Invoice entities Returns: Dictionary of Invoice data
codesearchnet
def _format_variant(self, case_id, gemini_variant, individual_objs, index=0, add_all_info=False): chrom = gemini_variant['chrom'] if (chrom.startswith('chr') or chrom.startswith('CHR')): chrom = chrom[3:] variant_dict = {'CHROM': chrom, 'POS': str(gemini_variant['start']), 'ID': gemini_variant['rs_i...
Make a puzzle variant from a gemini variant Args: case_id (str): related case id gemini_variant (GeminiQueryRow): The gemini variant individual_objs (list(dict)): A list of Individuals index(int): The index of the variant Returns: variant (dict): A Variant object
codesearchnet
def _ConvertDictToObject(cls, json_dict): class_type = json_dict.get('__type__', None) if (not class_type): return json_dict if (class_type == 'bytes'): return binascii.a2b_qp(json_dict['stream']) if (class_type == 'tuple'): return tuple(cls._ConvertListToObject(json_dict['values...
Converts a JSON dict into an object. The dictionary of the JSON serialized objects consists of: { '__type__': 'AttributeContainer' '__container_type__': ... ... } Here '__type__' indicates the object base type. In this case 'AttributeContainer'. '__container_type__' indicates the attribute container type. The rest ...
codesearchnet
def _deferred_pool_runner(has_chief, num_workers, initializer=None, share_gpu=True): container = [] def get_or_create(): if not container: cluster_spec = multi_worker_test_base.create_cluster_spec(has_chief=has_chief, num_workers=num_workers, num_ps=0, has_eval=False) runner = m...
Returns a callable that returns the pool runner. It creates the pool runner only upon first invocation. This avoids creating it when this file is imported. Args: has_chief: whether there should be a chief. num_workers: the number of workers excluding the chief. initializer: initializer of each process. share_gpu: whe...
github-repos
def normalize_version(version): rv = [] for x in version.split("."): try: rv.append(int(x)) except ValueError: for y in re.split("([0-9]+)", x): if y == '': continue try: rv.append(int(y)) ...
Helper function to normalize version. Returns a comparable object. Args: version (str) version, e.g. "0.1.0"
juraj-google-style
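The tail of the function above is truncated; here is a completed sketch of the visible logic. The fallback that keeps non-numeric fragments as strings is an assumption about the missing branch:

```python
import re

def normalize_version(version):
    """Split "1.2rc3" into [1, 2, 'rc', 3] so list comparison orders versions."""
    rv = []
    for x in version.split('.'):
        try:
            rv.append(int(x))
        except ValueError:
            # Mixed segments like "2rc3": peel off the numeric runs.
            for y in re.split(r'([0-9]+)', x):
                if y == '':
                    continue
                try:
                    rv.append(int(y))
                except ValueError:
                    rv.append(y)
    return rv
```

Because segments are compared as integers rather than strings, "0.10.0" correctly sorts after "0.1.0".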
def __init__(self, use_resource_alias: bool=False, value_set_codes_table: Optional[str]=None, value_set_codes_definitions: Optional[fhir_package.FhirPackageManager]=None) -> None: self._use_resource_alias = use_resource_alias self._value_set_codes_table = value_set_codes_table self._value_set_codes_definiti...
Creates a SparkSqlInterpreter. Args: use_resource_alias: Determines whether it is necessary to call the resource table directly through an alias. value_set_codes_table: The name of the database table containing value set code definitions. Used when building SQL for memberOf expressions. If given, value set definitions...
github-repos
def tf_solve(self, fn_x, x_init, base_value, target_value, estimated_improvement=None):
    return super(LineSearch, self).tf_solve(fn_x, x_init, base_value, target_value, estimated_improvement)
Iteratively optimizes $f(x)$ for $x$ on the line between $x'$ and $x_0$. Args: fn_x: A callable returning the value $f(x)$ at $x$. x_init: Initial solution guess $x_0$. base_value: Value $f(x')$ at $x = x'$. target_value: Value $f(x_0)$ at $x = x_0$. estimated_improvement: Estimated improvement for $x = x_0$, $f(x')$ ...
juraj-google-style
def list(self, *args, **kwargs):
    return [self.prepare_model(n) for n in self.client.api.nodes(*args, **kwargs)]
List swarm nodes. Args: filters (dict): Filters to process on the nodes list. Valid filters: ``id``, ``name``, ``membership`` and ``role``. Default: ``None`` Returns: A list of :py:class:`Node` objects. Raises: :py:class:`docker.errors.APIError` If the server returns an error. Example: >>> client.nodes.list(filter...
codesearchnet
def _MaybeNewName(self, name):
    if not name:
        return name
    if name == self._old[:-1]:
        return self._module_name
    before, match, after = name.partition(self._old)
    if match and not before:
        return self._new + after
    return name
Decides if a name should be replaced. Args: name: A name for which a prefix should be changed. Returns: If name is local to the module described by old_module_name the old_module_part will be replaced by new_module_name and returned, otherwise node.name will be returned.
github-repos
def __init__(self, state_regex, regex, actions, next_state, flags=re.I): self.state_regex = re.compile( state_regex, re.DOTALL | re.M | re.S | re.U | flags) self.regex = re.compile(regex, re.DOTALL | re.M | re.S | re.U | flags) self.re_str = regex self.actions = [] if actions: sel...
Initializes the token object. Args: state_regex: If this regular expression matches the current state this rule is considered. regex: A regular expression to try and match from the current point. actions: A comma separated list of method names in the Lexer to call. next_state: The next state we transition to if thi...
juraj-google-style
def shared_s3_app_bucket(self, include_region=False):
    if include_region:
        shared_s3_app_bucket = self.format['shared_s3_app_region_bucket'].format(**self.data)
    else:
        shared_s3_app_bucket = self.format['shared_s3_app_bucket'].format(**self.data)
    return shared_s3_app_bucket
Generate shared s3 application bucket name. Args: include_region (bool): Include region in the name generation.
codesearchnet
def checksum(self, path):
    try:
        return self._gcsIO().checksum(path)
    except Exception as e:
        raise BeamIOError('Checksum operation failed', {path: e})
Fetch checksum metadata of a file on the :class:`~apache_beam.io.filesystem.FileSystem`. Args: path: string path of a file. Returns: string containing checksum Raises: ``BeamIOError``: if path isn't a file or doesn't exist.
github-repos
def MultiDeleteAttributes(self, subjects, attributes, start=None, end=None, sync=True): for subject in subjects: self.DeleteAttributes( subject, attributes...
Remove all specified attributes from a list of subjects. Args: subjects: The list of subjects that will have these attributes removed. attributes: A list of attributes. start: A timestamp, attributes older than start will not be deleted. end: A timestamp, attributes newer than end will not be deleted. sync: If true we...
juraj-google-style
def _get_dump_file_path(dump_root, device_name, debug_node_name): dump_root = os.path.join(dump_root, debug_data.device_name_to_device_path(device_name)) if '/' in debug_node_name: dump_dir = os.path.join(dump_root, os.path.dirname(debug_node_name)) dump_file_name = re.sub(':', '_', os.path.base...
Get the file path of the dump file for a debug node. Args: dump_root: (str) Root dump directory. device_name: (str) Name of the device that the debug node resides on. debug_node_name: (str) Name of the debug node, e.g., cross_entropy/Log:0:DebugIdentity. Returns: (str) Full path of the dump file.
github-repos
def parse_table_name(bigquery_table):
    id_name = bigquery_table.split(':')
    if len(id_name) != 2:
        raise ValueError('Bigquery table name should be in the form '
                         'project_id:dataset.table_name. Got %s' % bigquery_table)
    return id_name[1]
Given a string a:b.c, returns b.c. Args: bigquery_table: full table name project_id:dataset.table_name Returns: dataset.table_name Raises: ValueError: if the input is not of the form project_id:dataset.table_name.
juraj-google-style
def setSeasonSchedules(self, cmd_dict=None, password="00000000"): result = False self.setContext("setSeasonSchedules") if not cmd_dict: cmd_dict = self.m_seasons_sched_params try: if not self.request(False): self.writeCmdMsg("Bad read CR...
Serial command to set seasons table. If no dictionary is passed, the meter object buffer is used. Args: cmd_dict (dict): Optional dictionary of season schedules. password (str): Optional password Returns: bool: True on completion and ACK.
juraj-google-style
def stations(self, station, limit=10): query = {'start': 1, 'S': (station + '?'), 'REQ0JourneyStopsB': limit} rsp = requests.get('http: return parse_stations(rsp.text)
Find stations for given queries Args: station (str): search query limit (int): limit number of results
codesearchnet
def _parse_batch_get(get_doc_response, reference_map, client): result_type = get_doc_response.WhichOneof('result') if (result_type == 'found'): reference = _get_reference(get_doc_response.found.name, reference_map) data = _helpers.decode_dict(get_doc_response.found.fields, client) snapsh...
Parse a `BatchGetDocumentsResponse` protobuf. Args: get_doc_response (~google.cloud.proto.firestore.v1beta1.\ firestore_pb2.BatchGetDocumentsResponse): A single response (from a stream) containing the "get" response for a document. reference_map (Dict[str, .DocumentReference]): A mapping (produced by :func:`_reference...
codesearchnet
def _replay(self, trial_id: int, dna: DNA, reward: Union[None, float, Tuple[float]]):
    del trial_id
    if reward is not None:
        self._feedback(dna, reward)
Replay a single DNA from the history for state recovery. The default implementation to call `DNAGenerator._feedback`. Subclasses that have states and can be recovered from replaying the history should override this method. See class `Sweeping` as an example. Args: trial_id: A zero-based integer as the trial ID for th...
github-repos
def start(period: int) -> threading.Event: global _heartbeat_timer if _heartbeat_timer is not None: logging.warning('A heartbeat thread is already running, skipping this one.') return _heartbeat_timer task_id = config.client_id() num_tasks = config.num_clients() if task_id == 0: ...
Starts a persistent thread exchanging heartbeats between workers. Args: period: Heartbeat interval in seconds. Heartbeat timeout is set to the larger of `period` - 10 and 2s. Returns: A threading.Event object. Users can choose to call its set() method to shut down the heartbeat service gracefully. This isn't necessar...
github-repos
def get_by_index(self, index):
    if index >= len(self._datasets):
        raise DataInvalidIndex('A dataset with index {} does not exist'.format(index))
    return self._datasets[index]
Return a dataset by its index. Args: index (int): The index of the dataset that should be returned. Raises: DataInvalidIndex: If the index does not represent a valid dataset.
juraj-google-style
def adaptive_enc_mask(x_len, chunk_start_idx, left_window=0, right_window=0): chunk_start_idx = torch.Tensor(chunk_start_idx).long() start_pad = torch.nn.functional.pad(chunk_start_idx, (1, 0)) end_pad = torch.nn.functional.pad(chunk_start_idx, (0, 1), value=x_len) seq_range = torch.arange(0, x_len).uns...
The function is very important for Transformer Transducer streaming mode. Args: x_len (int): sequence length chunk_start_idx (list): first idx of each chunk, such as [0,18,36,48]. It also supports adaptive chunk size [0,10,15,45] left_window (int): how many left chunks can be seen right_window (int): how many right chu...
github-repos
def register_command(self, name, handler, validator):
    self._commands[name] = (handler, validator)
Register a coroutine command handler. This handler will be called whenever a command message is received from the client, whose operation key matches ``name``. The handler will be called as:: response_payload = await handler(cmd_payload, context) If the coroutine returns, it will be assumed to have completed correc...
codesearchnet
class PromptDepthAnythingReassembleStage(nn.Module): def __init__(self, config): super().__init__() self.config = config self.layers = nn.ModuleList() for channels, factor in zip(config.neck_hidden_sizes, config.reassemble_factors): self.layers.append(PromptDepthAnything...
This class reassembles the hidden states of the backbone into image-like feature representations at various resolutions. This happens in 3 stages: 1. Take the patch embeddings and reshape them to image-like feature representations. 2. Project the channel dimension of the hidden states according to `config.neck_hidden_...
github-repos
def add_server(self, hostname, port, use_ssl, tls_ctx=None): if not use_ssl and tls_ctx: raise ValueError("Cannot specify a TLS context and not use SSL!") server = ldap3.Server( hostname, port=port, use_ssl=use_ssl, tls=tls_ctx ...
Add an additional server to the server pool and return the freshly created server. Args: hostname (str): Hostname of the server port (int): Port of the server use_ssl (bool): True if SSL is to be used when connecting. tls_ctx (ldap3.Tls): An optional TLS context object to use when connecting. Returns: ldap3.Server: T...
juraj-google-style
def GetSourceStrings(cls, event): formatter_object = cls.GetFormatterObject(event.data_type) return formatter_object.GetSources(event)
Retrieves the formatted source strings for a specific event object. Args: event (EventObject): event. Returns: list[str, str]: short and long version of the source of the event.
codesearchnet
def _FormatIPToken(self, token_data): data = ''.join(['{0:02x}'.format(byte) for byte in token_data.data]) return {'IPv4_Header': data}
Formats an IPv4 packet header token as a dictionary of values. Args: token_data (bsm_token_data_ip): AUT_IP token data. Returns: dict[str, str]: token values.
codesearchnet
def delete(self, json=None): return self._call('delete', url=self.endpoint, json=json)
Send a DELETE request and return the JSON decoded result. Args: json (dict, optional): Object to encode and send in request. Returns: mixed: JSON decoded response data.
juraj-google-style
def set_imu_callback(self, callback, data=None): self.imu_callback = callback self.imu_callback_data = data
Register a callback for incoming IMU data packets. This method allows you to pass in a callable which will be called on receipt of each IMU data packet sent by this SK8 device. Set to `None` to disable it again. Args: callback: a callable with the following signature: (acc, gyro, mag, imu_index, seq, timestamp, data...
codesearchnet
def _GenerateZipInfo(self, arcname=None, compress_type=None, st=None): if (st is None): st = os.stat_result((33188, 0, 0, 0, 0, 0, 0, 0, 0, 0)) mtime = time.localtime((st.st_mtime or time.time())) date_time = mtime[0:6] if (arcname is None): raise ValueError('An arcname must be provided....
Generate ZipInfo instance for the given name, compression and stat. Args: arcname: The name in the archive this should take. compress_type: Compression type (zipfile.ZIP_DEFLATED, or ZIP_STORED) st: An optional stat object to be used for setting headers. Returns: ZipInfo instance. Raises: ValueError: If arcname is n...
codesearchnet
def authorization_url(self, **kwargs): kwargs.setdefault('access_type', 'offline') (url, state) = self.oauth2session.authorization_url(self.client_config['auth_uri'], **kwargs) return (url, state)
Generates an authorization URL. This is the first step in the OAuth 2.0 Authorization Flow. The user's browser should be redirected to the returned URL. This method calls :meth:`requests_oauthlib.OAuth2Session.authorization_url` and specifies the client configuration's authorization URI (usually Google's authorizatio...
codesearchnet
def _concat(prefix, suffix, static=False): if isinstance(prefix, tensor.Tensor): p = prefix p_static = tensor_util.constant_value(prefix) if p.shape.ndims == 0: p = array_ops.expand_dims(p, 0) elif p.shape.ndims != 1: raise ValueError('prefix tensor must be ei...
Concat that enables int, Tensor, or TensorShape values. This function takes a size specification, which can be an integer, a TensorShape, or a Tensor, and converts it into a concatenated Tensor (if static = False) or a list of integers (if static = True). Args: prefix: The prefix; usually the batch size (and/or time ...
github-repos
def delete(self, key): dct = self keys = key.split('.') last_key = keys[-1] for k in keys: if k == last_key: del dct[k] break if isinstance(dct, DotDict): dct = super(...
Remove a value from the `DotDict`. The `key` parameter can either be a regular string key, e.g. "foo", or it can be a string key with dot notation, e.g. "foo.bar.baz", to signify a nested element. If the key does not exist in the `DotDict`, it will continue silently. Args: key (str): The key to remove.
juraj-google-style
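The dot-notation delete described in the docstring can be sketched with a plain nested dict: walk down all but the last key segment, then drop the final key, staying silent when any part of the path is missing. The helper name `dotted_delete` is assumed here for illustration; it is not the original `DotDict` class.

```python
def dotted_delete(d, key):
    """Delete a possibly nested key like 'foo.bar.baz' from dict d.
    Silently does nothing if the path does not exist."""
    keys = key.split('.')
    for k in keys[:-1]:
        d = d.get(k)
        if not isinstance(d, dict):
            return  # path does not exist; fail silently
    d.pop(keys[-1], None)

cfg = {'foo': {'bar': {'baz': 1, 'qux': 2}}}
dotted_delete(cfg, 'foo.bar.baz')      # removes the nested key
dotted_delete(cfg, 'foo.missing.key')  # no error for an absent path
```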
def max_intensity(item_a, time_a, item_b, time_b, max_value): intensity_a = item_a.max_intensity(time_a) intensity_b = item_b.max_intensity(time_b) diff = np.sqrt((intensity_a - intensity_b) ** 2) return np.minimum(diff, max_value) / float(max_value)
RMS difference in maximum intensity Args: item_a: STObject from the first set in ObjectMatcher time_a: Time integer being evaluated item_b: STObject from the second set in ObjectMatcher time_b: Time integer being evaluated max_value: Maximum distance value used as scaling value and upper constraint. Returns: Distance...
juraj-google-style
def filter_on_submodules(all_modules, submodules): filtered_modules = [] for mod in all_modules: for submodule in submodules: for package in PACKAGES: if package + submodule in mod.__name__: filtered_modules.append(mod) return filtered_modules
Filters all the modules based on the modules flag. The module flag has to be relative to the core package imported. For example, if `module=keras.layers` then, this function will return all the modules in the submodule. Args: all_modules: All the modules in the core package. submodules: Submodules to filter from all ...
github-repos
def get_transformed_output_time(self, window: 'BoundedWindow', input_timestamp: Timestamp) -> Timestamp: return input_timestamp
Given input time and output window, returns output time for window. If TimestampCombiner.OUTPUT_AT_EARLIEST_TRANSFORMED is used in the Windowing, the output timestamp for the given window will be the earliest of the timestamps returned by get_transformed_output_time() for elements of the window. Arguments: window: Ou...
github-repos
def _TensorArrayScatterGrad(op: ops.Operation, flow): handle = op.inputs[0] indices = op.inputs[1] dtype = op.get_attr('T') grad_source = _GetGradSource(flow) flow_out = array_ops.identity(op.outputs[0], 'flow_out') with ops.control_dependencies([flow_out]): flow = array_ops.identity(flo...
Gradient for TensorArrayScatter. Args: op: Forward TensorArrayScatter op. flow: Gradient `Tensor` flow to TensorArrayScatter. Returns: A grad `Tensor`, the gradient created in upstream ReadGrads or PackGrad.
github-repos
def fetch(self, invoice_id, data={}, **kwargs): return super(Invoice, self).fetch(invoice_id, data, **kwargs)
Fetch an Invoice for the given Id. Args: invoice_id: Id of the invoice to be retrieved. Returns: Invoice dict for the given invoice Id.
codesearchnet
def split_once(self, horizontal: bool, position: int) -> None: cdata = self._as_cdata() lib.TCOD_bsp_split_once(cdata, horizontal, position) self._unpack_bsp_tree(cdata)
Split this partition into 2 sub-partitions. Args: horizontal (bool): position (int):
juraj-google-style
def _get_new_global_index(self, index_override): if index_override is None: global_index = self._next_global_index else: if index_override in self._used_global_indices: raise ValueError('Index %d was already used by another call to add' % index_override) global_index = index_override self...
Return the next unused argument index in order or use an override. Args: index_override: An index to use instead of the next available or None to use the next available. Returns: A valid global_index to use for the next hint argument. Raises: ValueError: If the index_override is already used by another hint.
github-repos
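The allocate-or-override pattern above can be sketched as a small allocator: sequential indices by default, an explicit override when supplied, and a `ValueError` on reuse. `IndexAllocator` is an illustrative stand-in, not the original hint machinery.

```python
class IndexAllocator:
    """Hands out sequential indices, or honours an explicit override."""

    def __init__(self):
        self._next = 0
        self._used = set()

    def get(self, override=None):
        if override is None:
            idx = self._next
        elif override in self._used:
            raise ValueError('Index %d was already used' % override)
        else:
            idx = override
        self._used.add(idx)
        # Keep the sequential counter past every index handed out so far.
        self._next = max(self._next, idx + 1)
        return idx

a = IndexAllocator()
first = a.get()        # 0
override = a.get(5)    # explicit override
nxt = a.get()          # continues after the override: 6
dup_error = False
try:
    a.get(5)           # reusing an override raises
except ValueError:
    dup_error = True
```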
def restore_saved_local_scope(self, saved_variables, args_mapping, line_number): restore_nodes = list() for var in saved_variables: if (var.RHS in args_mapping): restore_nodes.append(RestoreNode(((var.RHS + ' = ') + args_mapping[var.RHS]), var.RHS, [var.LHS], line_number=line_number, path=se...
Restore the previously saved variables to their original values. Args: saved_variables(list[SavedVariable]) args_mapping(dict): A mapping of call argument to definition argument. line_number(int): Of the def of the function call about to be entered into. Note: We do not need connect_if_allowed because of the precedin...
codesearchnet
def default_pass_manager_simulator(basis_gates): pass_manager = PassManager() pass_manager.append(Unroller(basis_gates)) pass_manager.append([RemoveResetInZeroState(), Depth(), FixedPoint('depth')], do_while=lambda property_set: not property_set['depth_fixed_point']) retu...
The default pass manager without a coupling map. Args: basis_gates (list[str]): list of basis gate names to unroll to. Returns: PassManager: A passmanager that just unrolls, without any optimization.
juraj-google-style
def convert(self, obj): if isinstance(obj, pobjects.SymmetricKey): return self._build_core_key(obj, secrets.SymmetricKey) elif isinstance(obj, secrets.SymmetricKey): return self._build_pie_key(obj, pobjects.SymmetricKey) elif isinstance(obj, pobjects.PublicKey): ...
Convert a Pie object into a core secret object and vice versa. Args: obj (various): A Pie or core secret object to convert into the opposite object space. Required. Raises: TypeError: if the object type is unrecognized or unsupported.
juraj-google-style
def info_qry(tickers, flds) -> str: full_list = '\n'.join(([f'tickers: {tickers[:8]}'] + [f' {tickers[n:(n + 8)]}' for n in range(8, len(tickers), 8)])) return f
Logging info for given tickers and fields Args: tickers: tickers flds: fields Returns: str Examples: >>> print(info_qry( ... tickers=['NVDA US Equity'], flds=['Name', 'Security_Name'] ... )) tickers: ['NVDA US Equity'] fields: ['Name', 'Security_Name']
codesearchnet
def raise_io_error(self, errno, filename=None): raise IOError(errno, self._error_message(errno), filename)
Raises IOError. The error message is constructed from the given error code and shall start with the error in the real system. Args: errno: A numeric error code from the C variable errno. filename: The name of the affected file, if any.
juraj-google-style
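Building an error message that matches the real system, as the docstring describes, comes down to pairing the errno with `os.strerror`. A minimal sketch:

```python
import errno
import os

def raise_io_error(err, filename=None):
    """Raise an OSError whose message matches what the real system would
    produce for this errno (IOError is an alias of OSError in Python 3)."""
    raise OSError(err, os.strerror(err), filename)

try:
    raise_io_error(errno.ENOENT, '/tmp/missing.txt')
except OSError as e:
    caught = e
```

The exact message text is platform-dependent, so only the errno and filename are checked, not the wording.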
def parse_coach_ec_df(infile): ec_df = pd.read_table(infile, delim_whitespace=True, names=['pdb_template', 'tm_score', 'rmsd', 'seq_ident', 'seq_coverage', 'c_score', 'ec_number', 'binding_residues']) ec_df['pdb_template_id'] = ec_df['pdb_templat...
Parse the EC.dat output file of COACH and return a dataframe of results EC.dat contains the predicted EC number and active residues. The columns are: PDB_ID, TM-score, RMSD, Sequence identity, Coverage, Confidence score, EC number, and Active site residues Args: infile (str): Path to EC.dat Returns: DataFrame: Panda...
juraj-google-style
def file_md5(filename): with zopen(filename, 'r') as f: file_string = f.read() try: file_string = file_string.decode() except AttributeError: pass return md5sum(file_string)
Generate the md5 checksum for a file Args: filename (Str): The file to be checksummed. Returns: (Str): The hex checksum Notes: If the file is gzipped, the md5 checksum returned is for the uncompressed ASCII file.
juraj-google-style
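The note that a gzipped file hashes to the checksum of its uncompressed content can be demonstrated with `gzip` and `hashlib` directly (a standalone sketch of the `zopen` behaviour, not the original helpers):

```python
import gzip
import hashlib
import os
import tempfile

def file_md5(path):
    """md5 hex digest of a file's content; .gz files are hashed on
    their uncompressed bytes."""
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rb') as f:
        data = f.read()
    return hashlib.md5(data).hexdigest()

# A gzipped file and its plain counterpart share one checksum.
tmp = tempfile.mkdtemp()
plain = os.path.join(tmp, 'a.txt')
packed = os.path.join(tmp, 'a.txt.gz')
with open(plain, 'wb') as f:
    f.write(b'hello\n')
with gzip.open(packed, 'wb') as f:
    f.write(b'hello\n')
```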
def underlying_variable(t): t = underlying_variable_ref(t) assert t is not None if not hasattr(tf.get_default_graph(), "var_index"): tf.get_default_graph().var_index = {} var_index = tf.get_default_graph().var_index for v in tf.global_variables()[len(var_index):]: var_index[v.name] = v return ...
Find the underlying tf.Variable object. Args: t: a Tensor Returns: tf.Variable.
juraj-google-style
def push(self, document=None): if (self.document is None): if (document is None): doc = Document() else: doc = document elif (document is None): doc = self.document else: raise ValueError('Cannot push() a different document from existing session.docume...
Push the given document to the server and record it as session.document. If this is called more than once, the Document has to be the same (or None to mean "session.document"). .. note:: Automatically calls :func:`~connect` before pushing. Args: document (:class:`~bokeh.document.Document`, optional) : The document w...
codesearchnet
def get_mail_keys(message, complete=True): if complete: log.debug('Get all headers') all_headers_keys = {i.lower() for i in message.keys()} all_parts = ((ADDRESSES_HEADERS | OTHERS_PARTS) | all_headers_keys) else: log.debug('Get only mains headers') all_parts = (ADDRESSES...
Given an email.message.Message, return a set with all email parts to get Args: message (email.message.Message): email message object complete (bool): if True returns all email headers Returns: set with all email parts
codesearchnet
def dumpfile(item, path): with io.open(path, 'wb') as fd: fd.write(en(item))
Dump an object to a file by path. Args: item (object): The object to serialize. path (str): The file path to save. Returns: None
codesearchnet
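The dump-to-path pattern above depends on the project's own encoder (`en`); a round-trip sketch with `json` standing in for that serializer, purely for illustration:

```python
import json
import os
import tempfile

def dumpfile(item, path):
    # Stand-in serializer: the original uses the project's own encoder
    # ('en'); json is assumed here only to show the write-to-path shape.
    with open(path, 'wb') as fd:
        fd.write(json.dumps(item).encode('utf-8'))

path = os.path.join(tempfile.mkdtemp(), 'obj.json')
dumpfile({'a': 1}, path)
with open(path, 'rb') as fd:
    restored = json.loads(fd.read().decode('utf-8'))
```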
def supported_device(self, index=0): if ((not util.is_natural(index)) or (index >= self.num_supported_devices())): raise ValueError('Invalid index.') info = structs.JLinkDeviceInfo() result = self._dll.JLINKARM_DEVICE_GetInfo(index, ctypes.byref(info)) return info
Gets the device at the given ``index``. Args: self (JLink): the ``JLink`` instance index (int): the index of the device whose information to get Returns: A ``JLinkDeviceInfo`` describing the requested device. Raises: ValueError: if index is less than 0 or >= supported device count.
codesearchnet
def WaitUntilDone(self, timeout=None): utils.Poll(generator=self.GetState, condition=(lambda s: (s != self.__class__.STATE_RUNNING)), timeout=timeout) self.target_file = self.target_file.Get() return self
Wait until the operation is done. Args: timeout: timeout in seconds. None means default timeout (1 hour). 0 means no timeout (wait forever). Returns: Operation object with refreshed target_file. Raises: PollTimeoutError: if timeout is reached.
codesearchnet
def ExportClientsByKeywords(keywords, filename, token=None): r index = client_index.CreateClientIndex(token=token) client_list = index.LookupClients(keywords) logging.info("found %d clients", len(client_list)) if not client_list: return writer = csv.DictWriter([ u"client_id", u"hostname", ...
r"""A script to export clients summaries selected by a keyword search. This script does a client search for machines matching all of keywords and writes a .csv summary of the results to filename. Multi-value fields are '\n' separated. Args: keywords: a list of keywords to search for filename: the name of the file to ...
juraj-google-style
def ravel(x): if any_symbolic_tensors((x,)): return Ravel().symbolic_call(x) return backend.numpy.ravel(x)
Return a contiguous flattened tensor. A 1-D tensor, containing the elements of the input, is returned. Args: x: Input tensor. Returns: Output tensor.
github-repos
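Flattening into a contiguous 1-D sequence, as `ravel` does for tensors, can be mirrored for nested Python lists with a short recursive sketch (an analogue of the op, not the backend implementation):

```python
def ravel(nested):
    """Flatten an arbitrarily nested list into a flat list."""
    flat = []
    for item in nested:
        if isinstance(item, list):
            flat.extend(ravel(item))  # recurse into sublists
        else:
            flat.append(item)
    return flat
```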
def properties_with_values(self, include_defaults=True): return self.query_properties_with_values((lambda prop: prop.serialized), include_defaults)
Collect a dict mapping property names to their values. This method *always* traverses the class hierarchy and includes properties defined on any parent classes. Non-serializable properties are skipped and property values are in "serialized" format which may be slightly different from the values you would normally rea...
codesearchnet
def global_step(device=''): global_step_ref = tf.get_collection(tf.GraphKeys.GLOBAL_STEP) if global_step_ref: return global_step_ref[0] else: collections = [ VARIABLES_TO_RESTORE, tf.GraphKeys.GLOBAL_VARIABLES, tf.GraphKeys.GLOBAL_STEP, ] with tf.device(variable_dev...
Returns the global step variable. Args: device: Optional device to place the variable. It can be an string or a function that is called to get the device for the variable. Returns: the tensor representing the global step variable.
juraj-google-style
def retransmit(self, data): if data["euuid"] in self.event_uuids: self.event_uuids[data["euuid"]] += 1 if (self.event_uuids[data["euuid"]] > self.max_retries or data["cuuid"] not in self.regi...
Processes messages that have been delivered from the listener. Args: data (dict): A dictionary containing the uuid, euuid, and message response. E.g. {"cuuid": x, "euuid": y, "response": z}. Returns: None
juraj-google-style
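The retry bookkeeping above — count attempts per event uuid, give up once the budget is spent or the client is no longer registered — can be sketched standalone. `should_retransmit` and `MAX_RETRIES` are illustrative names, not the original listener API.

```python
MAX_RETRIES = 3

def should_retransmit(event_uuids, registered_cuuids, data):
    """Return True if the event should be resent; drop it from the
    pending table once retries are exhausted or the client is gone."""
    euuid, cuuid = data['euuid'], data['cuuid']
    if euuid not in event_uuids:
        return False  # already acknowledged, nothing to resend
    event_uuids[euuid] += 1
    if event_uuids[euuid] > MAX_RETRIES or cuuid not in registered_cuuids:
        del event_uuids[euuid]  # give up on this event
        return False
    return True

pending = {'e1': 0}
clients = {'c1'}
results = [should_retransmit(pending, clients,
                             {'euuid': 'e1', 'cuuid': 'c1'})
           for _ in range(5)]
```

Three resends succeed, the fourth exhausts the budget and clears the entry, and the fifth finds nothing pending.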
def time_stats(self, **kwargs): if 'time_stats' in self.attributes: return self.attributes['time_stats'] path = '%s/%s/time_stats' % (self.manager.path, self.get_id()) return self.manager.gitlab.http_get(path, **kwargs)
Get time stats for the object. Args: **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabTimeTrackingError: If the time tracking update cannot be done
codesearchnet
def random_subsets(self, relative_sizes, by_duration=False, balance_labels=False, label_list_ids=None): resulting_sets = {} next_bigger_subset = self.corpus for relative_size in reversed(relative_sizes): generator = SubsetGenerator(next_bigger_subset, random_seed=self.random_seed) if by_dura...
Create a bunch of subsets with the given sizes relative to the size or duration of the full corpus. Basically the same as calling ``random_subset`` or ``random_subset_by_duration`` multiple times with different values. But this method makes sure that every subset contains only utterances, that are also contained in the...
codesearchnet