Dataset preview columns: code (string, lengths 20–4.93k), docstring (string, lengths 33–1.27k), source (string, 3 classes).
def get_device_policy(): device_policy = context.context().device_policy if device_policy == context.DEVICE_PLACEMENT_SILENT: return 'silent' elif device_policy == context.DEVICE_PLACEMENT_SILENT_FOR_INT32: return 'silent_for_int32' elif device_policy == context.DEVICE_PLACEMENT_WARN: ...
Gets the current device policy. The device policy controls how operations requiring inputs on a specific device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1). This function only gets the device policy for the current thread. Any subsequently started thread will again use the default policy. Retur...
github-repos
def plot_all_stability_map(self, max_r, increments=50, delu_dict=None, delu_default=0, plt=None, labels=None, from_sphere_area=False, e_units='keV', r_units='nanometers', normalize=False, scale_per_atom=False): plt = (plt if plt else pretty_plot(width=8, height=7)) for (i, analyzer) in enumerate(self.se_analyze...
Returns the plot of the formation energy of particles of different polymorphs against their effective radius Args: max_r (float): The maximum radius of the particle to plot up to. increments (int): Number of plot points delu_dict (Dict): Dictionary of the chemical potentials to be set as constant. Note the key should be ...
codesearchnet
def activate(self, uid=None): if uid is not None: if not isinstance(uid, six.string_types): raise TypeError("uid must be a string") result = self.proxy.activate(uid) status = result.result_status.value if status == enums.ResultStat...
Activate a managed object stored by a KMIP appliance. Args: uid (string): The unique ID of the managed object to activate. Optional, defaults to None. Returns: None Raises: ClientConnectionNotOpen: if the client connection is unusable KmipOperationFailure: if the operation result is a failure TypeError: if the input...
juraj-google-style
def register_command_handler(self, prefix, handler, help_info, prefix_aliases=None): self._command_handler_registry.register_command_handler(prefix, handler, help_info, prefix_aliases=prefix_aliases) self._tab_completion_registry.extend_comp_items('', [prefix]) if prefix_aliases: self._tab_completio...
A wrapper around CommandHandlerRegistry.register_command_handler(). In addition to calling the wrapped register_command_handler() method, this method also registers the top-level tab-completion context based on the command prefixes and their aliases. See the doc string of the wrapped method for more details on the ar...
github-repos
def to_csv(pipe: BeamEventSet, file_path_prefix: str, schema: Schema, timestamp_key: str='timestamp', **wargs): header_values = [timestamp_key] + schema.index_names() + schema.feature_names() header_string = io.StringIO() header_writer = csv.writer(header_string) header_writer.writerow(header_values) ...
Writes a Beam EventSet to a file or set of csv files. Limitation: Timestamps are always stored as numerical values. TODO: Support datetime timestamps. Usage example: ``` input_node: tp.EventSetNode = ... ( p | tpb.from_csv("/input.csv", input_node.schema) | ... # processing | tpb.to_csv("/output.csv", output_node.sc...
github-repos
def sub_chempots(gamma_dict, chempots): coeffs = [gamma_dict[k] for k in gamma_dict.keys()] chempot_vals = [] for k in gamma_dict.keys(): if k not in chempots.keys(): chempot_vals.append(k) elif k == 1: chempot_vals.append(1) else: chempot_va...
Uses dot product of numpy array to sub chemical potentials into the surface grand potential. This is much faster than using the subs function in sympy. Args: gamma_dict (dict): Surface grand potential equation as a coefficient dictionary chempots (dict): Dictionary assigning each chemical potential (key) in gamma a val...
juraj-google-style
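The dot-product substitution described in the docstring above can be sketched without numpy or sympy by unrolling the dot product into a plain sum. `sub_chempots_sketch` is an assumption, not the library's implementation, and unlike the original it requires every non-constant key to be present in `chempots`:

```python
def sub_chempots_sketch(gamma_dict, chempots):
    """Substitute chemical potential values into a surface grand
    potential given as {symbol_or_constant: coefficient} (sketch)."""
    total = 0.0
    for term, coeff in gamma_dict.items():
        # A key of 1 stands for the constant term of the expression;
        # any other key is looked up in the chempots mapping.
        value = 1 if term == 1 else chempots[term]
        total += coeff * value
    return total
```

For example, `{1: 2.0, "mu_O": 3.0}` with `{"mu_O": -1.5}` evaluates to `2.0*1 + 3.0*(-1.5) = -2.5`.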
def get_cbm_vbm(self, tol=0.001, abs_tol=False, spin=None): if spin is None: tdos = self.y if len(self.ydim) == 1 else np.sum(self.y, axis=1) elif spin == Spin.up: tdos = self.y[:, 0] else: tdos = self.y[:, 1] if not abs_tol: ...
Expects a DOS object and finds the cbm and vbm. Args: tol: tolerance in occupations for determining the gap abs_tol: An absolute tolerance (True) and a relative one (False) spin: Possible values are None - finds the gap in the summed densities, Up - finds the gap in the up spin channel, Down - finds the gap in the dow...
juraj-google-style
def get_layer_timing_signal_learned_1d(channels, layer, num_layers): shape = [num_layers, 1, 1, channels] layer_embedding = ( tf.get_variable( "layer_embedding", shape, initializer=tf.random_normal_initializer(0, channels**-0.5)) * (channels**0.5)) return layer_embeddi...
get n-dimensional embedding as the layer (vertical) timing signal. Adds embeddings to represent the position of the layer in the tower. Args: channels: dimension of the timing signal layer: layer num num_layers: total number of layers Returns: a Tensor of timing signals [1, 1, channels].
juraj-google-style
def ready(self, node_id, metadata_priority=True): self.maybe_connect(node_id) return self.is_ready(node_id, metadata_priority=metadata_priority)
Check whether a node is connected and ok to send more requests. Arguments: node_id (int): the id of the node to check metadata_priority (bool): Mark node as not-ready if a metadata refresh is required. Default: True Returns: bool: True if we are ready to send to the given node
codesearchnet
def as_object(obj): LOGGER.debug('as_object(%s)', obj) if isinstance(obj, datetime.date): return as_date(obj) elif hasattr(obj, '__dict__'): out = {k: obj.__dict__[k] for k in obj.__dict__ if not k.startswith('_')} for k, v in ( (p, getattr(...
Return a JSON serializable type for ``obj``. Args: obj (:py:class:`object`): the object to be serialized. Raises: :py:class:`AttributeError`: when ``obj`` is not a Python object. Returns: (dict): JSON serializable type for the given object.
juraj-google-style
def map_seqprop_resnums_to_structprop_resnums(self, resnums, seqprop=None, structprop=None, chain_id=None, use_representatives=False): resnums = ssbio.utils.force_list(resnums) if use_representatives: seqprop = self.representative_sequence structprop = self.representative_structure chain...
Map a residue number in any SeqProp to the structure's residue number for a specified chain. Args: resnums (int, list): Residue numbers in the sequence seqprop (SeqProp): SeqProp object structprop (StructProp): StructProp object chain_id (str): Chain ID to map to use_representatives (bool): If the representative seque...
codesearchnet
def __init__(self, flag_desc, help): self.desc = flag_desc self.help = help self.default = '' self.tips = ''
Create the flag object. Args: flag_desc The command line forms this could take. (string) help The help text (string)
juraj-google-style
def scan_file(path): path = os.path.abspath(path) if settings.USE_CLAMD: return clamd.scan_file(path) else: return clamscan.scan_file(path)
Scan `path` for viruses using ``clamd`` or ``clamscan`` (depends on :attr:`settings.USE_CLAMD`). Args: path (str): Relative or absolute path of file/directory you need to scan. Returns: dict: ``{filename: ("FOUND", "virus type")}`` or blank dict. Raises: ValueError: When the server is not running. AssertionError: Whe...
juraj-google-style
def type_check(type_constraint, datum, is_input): datum_type = 'input' if is_input else 'output' try: check_constraint(type_constraint, datum) except CompositeTypeHintError as e: _, _, tb = sys.exc_info() raise TypeCheckError(e.args[0]).with_traceback(tb) except SimpleTypeHintErr...
Typecheck a PTransform related datum according to a type constraint. This function is used to optionally type-check either an input or an output to a PTransform. Args: type_constraint: An instance of a typehints.TypeConstraint, one of the white-listed builtin Python types, or a custom user class. datum: An instance of...
github-repos
def account_id(self, value): if value == self._defaults['ai.user.accountId'] and 'ai.user.accountId' in self._values: del self._values['ai.user.accountId'] else: self._values['ai.user.accountId'] = value
The account_id property. Args: value (string): the property value.
juraj-google-style
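The setter above keeps the backing store sparse: writing the default value deletes the key, so only overrides get serialized. A minimal self-contained sketch of the pattern (the `TelemetryContext` class name and its single key are assumptions):

```python
class TelemetryContext:
    """Sketch of a default-aware property: storing the default value
    removes the key so only non-default overrides survive."""

    def __init__(self):
        self._defaults = {'ai.user.accountId': ''}
        self._values = {}

    @property
    def account_id(self):
        key = 'ai.user.accountId'
        return self._values.get(key, self._defaults[key])

    @account_id.setter
    def account_id(self, value):
        key = 'ai.user.accountId'
        if value == self._defaults[key] and key in self._values:
            # Setting the default back removes the stored override.
            del self._values[key]
        else:
            self._values[key] = value
```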
def _remove_duplicate_points(points, groups): group_initial_ids = groups[:, GPFIRST] to_be_reduced = np.zeros(len(group_initial_ids)) to_be_removed = [] for (ig, g) in enumerate(groups): (iid, typ, pid) = (g[GPFIRST], g[GTYPE], g[GPID]) if ((pid != (- 1)) and (typ != 1) and (groups[pid...
Removes the duplicate points from the beginning of a section, if they are present in points-groups representation. Returns: points, groups with unique points.
codesearchnet
def get_chromosomes(self, sv=False): if sv: res = self.db.structural_variant.distinct('chrom') else: res = self.db.variant.distinct('chrom') return res
Return a list of all chromosomes found in the database Args: sv(bool): if sv variants should be chosen Returns: res(iterable(str)): An iterable with all chromosomes in the database
juraj-google-style
def sample(self, size=None): self._recompute() if size is None: n = np.random.randn(len(self._t)) else: n = np.random.randn(len(self._t), size) n = self.solver.dot_L(n) if size is None: return self.mean.get_value(self._t) + n[:, 0] ...
Sample from the prior distribution over datasets Args: size (Optional[int]): The number of samples to draw. Returns: array[n] or array[size, n]: The samples from the prior distribution over datasets.
juraj-google-style
def addon_name(self): with self.selenium.context(self.selenium.CONTEXT_CHROME): el = self.find_description() return el.find_element(By.CSS_SELECTOR, 'b').text
Provide access to the add-on name. Returns: str: Add-on name.
codesearchnet
def plot_structures(self, structures, fontsize=6, **kwargs): import matplotlib.pyplot as plt nrows = len(structures) (fig, axes) = plt.subplots(nrows=nrows, ncols=1, sharex=True, squeeze=False) for (i, (ax, structure)) in enumerate(zip(axes.ravel(), structures)): self.get_plot(structure, fontsiz...
Plot diffraction patterns for multiple structures on the same figure. Args: structures (Structure): List of structures two_theta_range ([float of length 2]): Tuple for range of two_thetas to calculate in degrees. Defaults to (0, 90). Set to None if you want all diffracted beams within the limiting sphere of radius 2 /...
codesearchnet
def _set_resultdir(name=None): resultdir_name = (name or ('enos_' + datetime.today().isoformat())) resultdir_path = os.path.abspath(resultdir_name) if os.path.isfile(resultdir_path): raise EnosFilePathError(resultdir_path, ('Result directory cannot be created due to existing file %s' % resultdir_pat...
Set or get the directory to store experiment results. Looks at the `name` and creates the directory if it doesn't exist or returns it in other cases. If the name is `None`, then the function generates a unique name for the results directory. Finally, it links the directory to `SYMLINK_NAME`. Args: name (str): file p...
codesearchnet
def _create_variables_and_slots(self) -> Dict[Text, Dict[Text, tf_variables.Variable]]: variables = {} for table in self._table_config: variables[table.name] = self._create_variables(table, trainable=True) return variables
Create variables for TPU embeddings. Note that this will always ensure that the variable is created under the TPUStrategy. Returns: A dict of dicts. The outer dict is keyed by the table names and the inner dicts are keyed by 'parameters' and the slot variable names.
github-repos
def is_video(mime: str) -> bool: return mime in INPUT_VIDEO_TYPES or mime.startswith('video/')
Returns whether the content is a video. Args: mime: The mime string. Returns: True if it is a video, False otherwise.
github-repos
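The check above is a whitelist membership test plus a prefix test. `INPUT_VIDEO_TYPES` below is a hypothetical stand-in, since the real constant's contents are not shown:

```python
# Hypothetical whitelist of mime types treated as video inputs even
# though they lack the 'video/' prefix (e.g. animated GIFs).
INPUT_VIDEO_TYPES = frozenset({'image/gif', 'application/mp4'})

def is_video(mime: str) -> bool:
    """Return True when the mime string denotes video content."""
    return mime in INPUT_VIDEO_TYPES or mime.startswith('video/')
```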
def matches_filters(self, node): visible = self.visible if self.options['text']: if isregex(self.options['text']): regex = self.options['text'] elif (self.exact_text is True): regex = re.compile('\\A{}\\Z'.format(re.escape(self.options['text']))) else: ...
Returns whether the given node matches all filters. Args: node (Element): The node to evaluate. Returns: bool: Whether the given node matches.
codesearchnet
def no_results(channel): gui = ui_embed.UI(channel, 'No results', ':c', modulename=modulename, colour=16746496) return gui
Creates an embed UI for when there were no results Args: channel (discord.Channel): The Discord channel to bind the embed to Returns: ui (ui_embed.UI): The embed UI object
codesearchnet
def DeletePendingNotification(self, timestamp): shown_notifications = self.Get(self.Schema.SHOWN_NOTIFICATIONS) if (not shown_notifications): shown_notifications = self.Schema.SHOWN_NOTIFICATIONS() pending = self.Get(self.Schema.PENDING_NOTIFICATIONS) if (not pending): return delete_...
Deletes the pending notification with the given timestamp. Args: timestamp: The timestamp of the notification. Assumed to be unique. Raises: UniqueKeyError: Raised if multiple notifications have the timestamp.
codesearchnet
def get(self, id): for obj in self.model.db: if obj["id"] == id: return self._cast_model(obj) return None
Get an object by id Args: id (int): Object id Returns: Object: Object with specified id None: If object not found
juraj-google-style
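The lookup is a linear scan over the backing list. As a standalone sketch without the `_cast_model` wrapping:

```python
def get_by_id(db, id):
    """Linear scan for a record whose 'id' key matches; returns the
    record dict or None (sketch of the lookup above)."""
    for obj in db:
        if obj['id'] == id:
            return obj
    return None
```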
def save(self, clean=True): ret = {} if clean: self._dirty = False else: ret['_dirty'] = self._dirty return ret
Serialize into raw representation. Clears the dirty bit by default. Args: clean (bool): Whether to clear the dirty bit. Returns: dict: Raw.
juraj-google-style
def Run(self, conf, args): try: options, args = self.parser.parse_args(args) except SystemExit as e: return e.code if options.maps: self.log.info('Setting configured maps to %s', options.maps) conf.maps = options.maps for map_name in conf.maps: if map_name == conf...
Run the Status command. See Command.Run() for full documentation on the Run() method. Args: conf: nss_cache.config.Config object args: list of arguments to be parsed by this command Returns: zero on success, nonzero on error
github-repos
def _get_resource_hash(zone_name, record): record_data = defaultdict(int, record) if type(record_data['GeoLocation']) == dict: record_data['GeoLocation'] = ":".join(["{}={}".format(k, v) for k, v in record_data['GeoLocation'].items()]) args = [ zone_name, ...
Returns the last ten digits of the sha256 hash of the combined arguments. Useful for generating unique resource IDs Args: zone_name (`str`): The name of the DNS Zone the record belongs to record (`dict`): A record dict to generate the hash from Returns: `str`
juraj-google-style
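The hashing scheme above can be sketched with the standard library. The `sorted()` flattening of `GeoLocation` and the use of `repr` to serialize the argument list are assumptions added here to make the hash deterministic; the original's exact serialization is truncated:

```python
import hashlib
from collections import defaultdict

def get_resource_hash(zone_name, record):
    """Return the last ten hex digits of the sha256 of the combined
    arguments, as a stable resource identifier (sketch)."""
    record_data = defaultdict(int, record)
    geo = record_data['GeoLocation']
    if isinstance(geo, dict):
        # Flatten the GeoLocation dict deterministically so equal
        # records always produce the same hash.
        record_data['GeoLocation'] = ':'.join(
            '{}={}'.format(k, v) for k, v in sorted(geo.items()))
    args = [zone_name, record_data['GeoLocation']]
    return hashlib.sha256(repr(args).encode()).hexdigest()[-10:]
```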
def FindExecutableOnPath(executable, path=None, pathext=None, allow_extensions=False): if not allow_extensions and os.path.splitext(executable)[1]: raise ValueError('FindExecutableOnPath({0},...) failed because first argument must not have an extension.'.format(executable)) if os.path.dirname(executable...
Searches for `executable` in the directories listed in `path` or $PATH. Executable must not contain a directory or an extension. Args: executable: The name of the executable to find. path: A list of directories to search separated by 'os.pathsep'. If None then the system PATH is used. pathext: An iterable of file na...
github-repos
def _sign_of(money): units = money.units nanos = money.nanos if units: if (units > 0): return 1 elif (units < 0): return (- 1) if nanos: if (nanos > 0): return 1 elif (nanos < 0): return (- 1) return 0
Determines the amount sign of a money instance Args: money (:class:`endpoints_management.gen.servicecontrol_v1_messages.Money`): the instance to test Return: int: 1, 0 or -1
codesearchnet
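The sign test reduces to two cascading comparisons, with `units` taking precedence over `nanos`. A standalone sketch over the two integer fields rather than the protobuf message:

```python
def sign_of(units, nanos):
    """Return 1, 0 or -1 for a money amount split into whole units
    and nanos (billionths), mirroring the logic above (sketch)."""
    if units:
        return 1 if units > 0 else -1
    if nanos:
        return 1 if nanos > 0 else -1
    return 0
```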
def get_html_titles(index_page): dom = dhtmlparser.parseString(index_page) title_tags = dom.find("title") return [ SourceString(tag.getContent().strip(), "HTML") for tag in title_tags if tag.getContent().strip() ]
Return list of titles parsed from HTML. Args: index_page (str): HTML content of the page you wish to analyze. Returns: list: List of :class:`.SourceString` objects.
juraj-google-style
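An equivalent of the title extraction using only the standard library's `html.parser` instead of `dhtmlparser` (the `TitleCollector` class name is an assumption, and `SourceString` tagging is dropped):

```python
from html.parser import HTMLParser

class TitleCollector(HTMLParser):
    """Collect the text content of every <title> tag."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == 'title':
            self._in_title = True
            self.titles.append('')

    def handle_endtag(self, tag):
        if tag == 'title':
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            # Text inside a title may arrive in several chunks.
            self.titles[-1] += data

def get_html_titles(index_page):
    """Return the non-empty, stripped titles found in the page."""
    parser = TitleCollector()
    parser.feed(index_page)
    return [t.strip() for t in parser.titles if t.strip()]
```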
def _get_vep_transcript(self, transcript_info): transcript = Transcript( hgnc_symbol = transcript_info.get('SYMBOL'), transcript_id = transcript_info.get('Feature'), ensembl_id = transcript_info.get('Gene'), biotype = transcript_info.get('...
Create a Transcript based on the vep annotation Args: transcript_info (dict): A dict with vep info Returns: transcript (puzzle.models.Transcript): A Transcript
juraj-google-style
def filterbanks(num_filter, coefficients, sampling_freq, low_freq=None, high_freq=None): high_freq = (high_freq or (sampling_freq / 2)) low_freq = (low_freq or 300) s = 'High frequency cannot be greater than half of the sampling frequency!' assert (high_freq <= (sampling_freq / 2)), s assert (low_fr...
Compute the Mel-filterbanks. Each filter will be stored in one rows. The columns correspond to fft bins. Args: num_filter (int): the number of filters in the filterbank, default 20. coefficients (int): (fftpoints//2 + 1). Default is 257. sampling_freq (float): the samplerate of the signal we are working with. It affec...
codesearchnet
def linear_interpolate_rank(tensor1, tensor2, coeffs, rank=1): _, _, _, num_channels = common_layers.shape_list(tensor1) diff_sq_sum = tf.reduce_sum((tensor1 - tensor2)**2, axis=(0, 1, 2)) _, feature_ranks = tf.math.top_k(diff_sq_sum, k=rank) feature_rank = feature_ranks[-1] channel_inds = tf.range(num_...
Linearly interpolate channel at "rank" between two tensors. The channels are ranked according to their L2 norm between tensor1[channel] and tensor2[channel]. Args: tensor1: 4-D Tensor, NHWC tensor2: 4-D Tensor, NHWC coeffs: list of floats. rank: integer. Returns: interp_latents: list of interpolated 4-D Tensors, shap...
juraj-google-style
def _slice_single_param(param, param_event_ndims, slices, dist_batch_shape): param_shape = tf.shape(input=param) insert_ones = tf.ones( [tf.size(input=dist_batch_shape) + param_event_ndims - tf.rank(param)], dtype=param_shape.dtype) new_param_shape = tf.concat([insert_ones, param_shape], axis=0)...
Slices a single parameter of a distribution. Args: param: A `Tensor`, the original parameter to slice. param_event_ndims: `int` event parameterization rank for this parameter. slices: A `tuple` of normalized slices. dist_batch_shape: The distribution's batch shape `Tensor`. Returns: new_param: A `Tensor`, batch-slice...
juraj-google-style
def _check_expiration(self, url: str, data: 'SavedEndpoint') -> 'SavedEndpoint': if data.expires_after < time.time(): del self.data[url] data = None return data
Checks the expiration time for data for a url. If the data has expired, it is deleted from the cache. Args: url: url to check data: page of data for that url Returns: value of either the passed data or None if it expired
juraj-google-style
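The eviction-on-read pattern above is self-contained enough to sketch; `SavedEndpoint` and `EndpointCache` below are minimal stand-ins for the original's types:

```python
import time

class SavedEndpoint:
    """Hypothetical cached entry: a page of data plus an absolute
    expiry time in seconds since the epoch."""

    def __init__(self, page, expires_after):
        self.page = page
        self.expires_after = expires_after

class EndpointCache:
    def __init__(self):
        self.data = {}

    def check_expiration(self, url, data):
        """Evict and return None when the entry for url has expired;
        otherwise hand back the entry unchanged (sketch)."""
        if data.expires_after < time.time():
            del self.data[url]
            data = None
        return data
```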
def install_event_handlers(self, categories=None, handlers=None): if ((categories is not None) and (handlers is not None)): raise ValueError('categories and handlers are mutually exclusive!') from .events import get_event_handler_classes if categories: raise NotImplementedError() han...
Install the `EventHandler`s for this `Node`. If no argument is provided the default list of handlers is installed. Args: categories: List of categories to install e.g. base + can_change_physics handlers: explicit list of :class:`EventHandler` instances. This is the most flexible way to install handlers. .. note:: cat...
codesearchnet
def _replica_ctx_all_reduce(self, reduce_op, value, options=None): if options is None: options = collective_util.Options() replica_context = get_replica_context() assert replica_context, '`StrategyExtended._replica_ctx_all_reduce` must be called in a replica context' def merge_fn(_, flat_value)...
All-reduce `value` across all replicas so that all get the final result. If `value` is a nested structure of tensors, all-reduces of these tensors will be batched when possible. `options` can be set to hint the batching behavior. This API must be called in a replica context. Args: reduce_op: A `tf.distribute.ReduceO...
github-repos
class HfDeepSpeedConfig(DeepSpeedConfig): def __init__(self, config_file_or_dict): set_hf_deepspeed_config(self) dep_version_check('accelerate') dep_version_check('deepspeed') super().__init__(config_file_or_dict)
This object contains a DeepSpeed configuration dictionary and can be quickly queried for things like zero stage. A `weakref` of this object is stored in the module's globals to be able to access the config from areas where things like the Trainer object is not available (e.g. `from_pretrained` and `_get_resized_embedd...
github-repos
def run_stages(self, stage_context: translations.TransformContext, stages: List[translations.Stage]) -> 'RunnerResult': worker_handler_manager = WorkerHandlerManager(stage_context.components.environments, self._provision_info) pipeline_metrics = MetricsContainer('') pipeline_metrics.get_counter(MetricName(s...
Run a list of topologically-sorted stages in batch mode. Args: stage_context (translations.TransformContext) stages (list[fn_api_runner.translations.Stage])
github-repos
def sg_summary_param(tensor, prefix=None, name=None): prefix = ('' if (prefix is None) else (prefix + '/')) name = ((prefix + _pretty_name(tensor)) if (name is None) else (prefix + name)) _scalar((name + '/abs'), tf.reduce_mean(tf.abs(tensor))) _histogram((name + '/abs-h'), tf.abs(tensor))
r"""Register `tensor` to summary report as `parameters` Args: tensor: A `Tensor` to log as parameters prefix: A `string`. A prefix to display in the tensor board web UI. name: A `string`. A name to display in the tensor board web UI. Returns: None
codesearchnet
def create_metadata(self, resource, keys_vals): self.metadata_service.set_auth(self._token_metadata) self.metadata_service.create(resource, keys_vals)
Associates new key-value pairs with the given resource. Will attempt to add all key-value pairs even if some fail. Args: resource (intern.resource.boss.BossResource) keys_vals (dictionary): Collection of key-value pairs to assign to given resource. Raises: HTTPErrorList on failure.
juraj-google-style
def __init__(self, channel): self.GetModelStatus = channel.unary_unary( '/tensorflow.serving.ModelService/GetModelStatus', request_serializer=tensorflow__serving_dot_apis_dot_get__model__status__pb2.GetModelStatusRequest.SerializeToString, response_deserializer=tensorflow__serving_dot_a...
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def create(self, python=None, system_site=False, always_copy=False): command = 'virtualenv' if python: command = '{0} --python={1}'.format(command, python) if system_site: command = '{0} --system-site-packages'.format(command) if always_copy: ...
Create a new virtual environment. Args: python (str): The name or path of a python interpreter to use while creating the virtual environment. system_site (bool): Whether or not to use the system site packages within the virtual environment. Default is False. always_copy (bool): Whether or not to force copying instead...
juraj-google-style
def sagemaker_auth(overrides={}, path='.'): api_key = overrides.get(env.API_KEY, Api().api_key) if (api_key is None): raise ValueError("Can't find W&B ApiKey, set the WANDB_API_KEY env variable or run `wandb login`") overrides[env.API_KEY] = api_key with open(os.path.join(path, 'secrets.env'), '...
Write a secrets.env file with the W&B ApiKey and any additional secrets passed. Args: overrides (dict, optional): Additional environment variables to write to secrets.env path (str, optional): The path to write the secrets file.
codesearchnet
def get_settings(category='All'): if (category.lower() in ['all', '*']): category = '*' elif (category.lower() not in [x.lower() for x in categories]): raise KeyError('Invalid category: "{0}"'.format(category)) cmd = '/get /category:"{0}"'.format(category) results = _auditpol_cmd(cmd) ...
Get the current configuration for all audit settings specified in the category Args: category (str): One of the nine categories to return. Can also be ``All`` to return the settings for all categories. Valid options are: - Account Logon - Account Management - Detailed Tracking - DS Access - Logon/Logoff - Object Acce...
codesearchnet
def register_filter(self, filter_name, filter_ref, force=False): if not force and (filter_name in self.filters_list()): self.log_warning("Filter %s already exists, ignore redefinition." % filter_name) return self.__jinja2_environment.filters[filter_name] = filter_ref
Add/register one filter. Args: filter_name (str): Filter name used inside :program:`Jinja2` tags. filter_ref: Reference to the filter itself, i.e. the corresponding :program:`Python` function. force (bool): If set to ``True``, forces the registration of a filter no matter if it already exists or not. Note: The list o...
juraj-google-style
def __init__(self, encoding_method=None, parent=None, **kwargs): if not encoding_method or not parent: raise ValueError('Missing encoding method or parent value.') super(EncodedStreamPathSpec, self).__init__(parent=parent, **kwargs) self.encoding_method = encoding_method
Initializes a path specification. Note that the encoded stream path specification must have a parent. Args: encoding_method (Optional[str]): method used to encode the data. parent (Optional[PathSpec]): parent path specification. Raises: ValueError: when encoding method or parent are not set.
juraj-google-style
def __tf_tensor__(self, dtype=None, name=None): pass
Converts this object to a Tensor. Args: dtype: data type for the returned Tensor name: a name for the operations which create the Tensor Returns: A Tensor.
github-repos
def add_header(self, key, value, **params): key = self.escape(key) ci_key = key.casefold() def quoted_params(items): for p in items: param_name = self.escape(p[0]) param_val = self.de_quote(self.escape(p[1])) yield param_name...
Add a header to the collection, including potential parameters. Args: key (str): The name of the header value (str): The value to store under that key params: Optional parameters to be appended to the value, automatically formatting them in a standard way
juraj-google-style
def split_recursive(self, depth: int, min_width: int, min_height: int, max_horizontal_ratio: float, max_vertical_ratio: float, seed: Optional[tcod.random.Random]=None) -> None: cdata = self._as_cdata() lib.TCOD_bsp_split_recursive(cdata, (seed or ffi.NULL), depth, min_width, min_height, max_horizontal_ratio, ma...
Divide this partition recursively. Args: depth (int): The maximum depth to divide this object recursively. min_width (int): The minimum width of any individual partition. min_height (int): The minimum height of any individual partition. max_horizontal_ratio (float): Prevent creating a horizontal ratio more extreme tha...
codesearchnet
def get_latest_package(name, range_=None, paths=None, error=False): it = iter_packages(name, range_=range_, paths=paths) try: return max(it, key=lambda x: x.version) except ValueError: if error: raise PackageFamilyNotFoundError("No such package fa...
Get the latest package for a given package name. Args: name (str): Package name. range_ (`VersionRange`): Version range to search within. paths (list of str, optional): paths to search for package families, defaults to `config.packages_path`. error (bool): If True, raise an error if no package is found. Returns: `Pac...
juraj-google-style
def instrument(self, package, options=None, runner=None, handler=None) -> bytes: if runner is None: runner = DEFAULT_INSTRUMENTATION_RUNNER if options is None: options = {} options_list = [] for option_key, option_value in options.items(): options_list.append('-e %s %s' % (option...
Runs an instrumentation command on the device. This is a convenience wrapper to avoid parameter formatting. Example: .. code-block:: python device.instrument( 'com.my.package.test', options = { 'class': 'com.my.package.test.TestSuite', }, ) Args: package: string, the package of the instrumentation tests. options: ...
github-repos
def add_residues_highlight_to_nglview(view, structure_resnums, chain, res_color='red'): chain = ssbio.utils.force_list(chain) if isinstance(structure_resnums, list): structure_resnums = list(set(structure_resnums)) elif isinstance(structure_resnums, int): structure_resnums = ssbio.utils.forc...
Add a residue number or numbers to an NGLWidget view object. Args: view (NGLWidget): NGLWidget view object structure_resnums (int, list): Residue number(s) to highlight, structure numbering chain (str, list): Chain ID or IDs of which residues are a part of. If not provided, all chains in the mapped_chains attribute wi...
codesearchnet
def stage(self, name, pipeline_counter=None): return Stage( self.server, pipeline_name=self.name, stage_name=name, pipeline_counter=pipeline_counter, )
Helper to instantiate a :class:`gocd.api.stage.Stage` object Args: name: The name of the stage pipeline_counter: Returns:
juraj-google-style
def create_customer(self, *, full_name, email): payload = {'fullName': full_name, 'email': email} return self.client._post((self.url + 'customers'), json=payload, headers=self.get_headers())
Creation of a customer in the system. Args: full_name: Customer's complete name. Alphanumeric. Max: 255. email: Customer's email address. Alphanumeric. Max: 255. Returns:
codesearchnet
def html_for_cgi_argument(argument, form): value = (form[argument].value if (argument in form) else None) return KEY_VALUE_TEMPLATE.format(argument, value)
Returns an HTML snippet for a CGI argument. Args: argument: A string representing a CGI argument name in a form. form: A CGI FieldStorage object. Returns: String HTML representing the CGI value and variable.
codesearchnet
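The snippet builder above can be sketched with a plain dict standing in for the CGI `FieldStorage` object; the `KEY_VALUE_TEMPLATE` markup below is a hypothetical stand-in, since the real template is not shown:

```python
# Hypothetical key/value markup; the original template is not shown.
KEY_VALUE_TEMPLATE = '<p><strong>{}:</strong> {}</p>'

def html_for_cgi_argument(argument, form):
    """Render a CGI argument and its value; a missing argument
    renders as None (sketch over a plain dict)."""
    value = form[argument] if argument in form else None
    return KEY_VALUE_TEMPLATE.format(argument, value)
```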
def get_jwt_key_data(): global __jwt_data if __jwt_data: return __jwt_data from cloud_inquisitor import config_path from cloud_inquisitor.config import dbconfig jwt_key_file = dbconfig.get('jwt_key_file_path', default='ssl/private.key') if (not os.path.isabs(jwt_key_file)): jwt_k...
Returns the data for the JWT private key used for encrypting the user login token as a string object Returns: `str`
codesearchnet
async def _perform_ping_timeout(self, delay: int): (await sleep(delay)) error = TimeoutError('Ping timeout: no data received from server in {timeout} seconds.'.format(timeout=self.PING_TIMEOUT)) (await self.on_data_error(error))
Handle timeout gracefully. Args: delay (int): delay before raising the timeout (in seconds)
codesearchnet
def event(self, name, **kwargs): group_obj = Event(name, **kwargs) return self._group(group_obj)
Add Event data to Batch object. Args: name (str): The name for this Group. date_added (str, kwargs): The date timestamp the Indicator was created. event_date (str, kwargs): The event datetime expression for this Group. status (str, kwargs): The status for this Group. xid (str, kwargs): The external id for this Group. ...
codesearchnet
def __init__(self, mac_addr): addr_info = mac_addr.lower().split(':') if len(addr_info) < 6: raise ValueError('Invalid mac address') addr_info[2] = 'EtherSync' self._addr = ''.join(addr_info[2:])
Construct a EtherSync object. Args: mac_addr: mac address of the Cambrionix unit for EtherSync.
juraj-google-style
def assignSchedule(self, schedule, period, hour, minute, tariff): if ((schedule not in range(Extents.Schedules)) or (period not in range(Extents.Tariffs)) or (hour < 0) or (hour > 23) or (minute < 0) or (minute > 59) or (tariff < 0)): ekm_log(...
Assign one schedule tariff period to meter buffer. Args: schedule (int): A :class:`~ekmmeters.Schedules` value or in range(Extents.Schedules). period (int): A :class:`~ekmmeters.Tariffs` value or in range(Extents.Tariffs). hour (int): Hour from 0-23. minute (int): Minute from 0-59. tariff (int): Rate value. Returns: b...
juraj-google-style
def __init__(self, url): if isinstance(url, Uri): self.uri = url else: self.uri = Uri(url)
Connect to an assembly that points to the assembly specified with the url. Args: - url (str): The url of the onshape item
juraj-google-style
def add(name, beacon_data, **kwargs): ret = {'comment': 'Failed to add beacon {0}.'.format(name), 'result': False} if (name in list_(return_yaml=False, **kwargs)): ret['comment'] = 'Beacon {0} is already configured.'.format(name) return ret if any((('beacon_module' in key) for key in beacon_...
Add a beacon on the minion Args: name (str): Name of the beacon to configure beacon_data (dict): Dictionary or list containing configuration for beacon. Returns: dict: Boolean and status message on success or failure of add. CLI Example: .. code-block:: bash salt '*' beacons.add ps "[{'processes': {'salt-master'...
codesearchnet
def _get_other_names(self, line): m = re.search(self.compound_regex['other_names'][0], line, re.IGNORECASE) if m: self.other_names.append(m.group(1).strip())
Parse and extract any other names that might be recorded for the compound Args: line (str): line of the msp file
codesearchnet
def build_backend(self, backend_node): proxy_name = backend_node.backend_header.proxy_name.text config_block_lines = self.__build_config_block( backend_node.config_block) return config.Backend(name=proxy_name, config_block=config_block_lines)
parse `backend` sections Args: backend_node (TreeNode): Description Returns: config.Backend: an object
juraj-google-style
def _FormatTag(self, event): tag = getattr(event, 'tag', None) if not tag: return '-' return ' '.join(tag.labels)
Formats the event tag. Args: event (EventObject): event. Returns: str: event tag field.
juraj-google-style
def scroll(self, direction='vertical', percent=0.6, duration=2.0): if direction not in ('vertical', 'horizontal'): raise ValueError('Argument `direction` should be one of "vertical" or "horizontal". Got {}' .format(repr(direction))) start = [0.5, 0.5] ...
Scroll from the lower part to the upper part of the entire screen. Args: direction (:py:obj:`str`): scrolling direction. "vertical" or "horizontal" percent (:py:obj:`float`): scrolling distance percentage of the entire screen height or width according to direction duration (:py:obj:`float`): time interval in which the...
juraj-google-style
def __init__(self, path, **kwargs): self.error_context = kwargs.pop('error_context', None) self.error_context = self.error_context or StatikErrorContext() if 'config' in kwargs and isinstance(kwargs['config'], dict): logger.debug("Loading project configuration from construc...
Constructor. Args: path: The full filesystem path to the base of the project.
juraj-google-style
def licenses(self): buf_size = self.MAX_BUF_SIZE buf = (ctypes.c_char * buf_size)() res = self._dll.JLINK_GetAvailableLicense(buf, buf_size) if res < 0: raise errors.JLinkException(res) return ctypes.string_at(buf).decode()
Returns a string of the built-in licenses the J-Link has. Args: self (JLink): the ``JLink`` instance Returns: String of the contents of the built-in licenses the J-Link has.
juraj-google-style
def AddVSSProcessingOptions(self, argument_group): argument_group.add_argument('--no_vss', '--no-vss', dest='no_vss', action='store_true', default=False, help='Do not scan for Volume Shadow Snapshots (VSS). This means that Volume Shadow Snapshots (VSS) are not processed.') argument_group.add_argument('--vss_onl...
Adds the VSS processing options to the argument group. Args: argument_group (argparse._ArgumentGroup): argparse argument group.
codesearchnet
def save(self, vleaf, fpath, cleanup=False, format=None): graph = self.create_graphviz_digraph(vleaf, format=format) graph.render(fpath, cleanup=cleanup)
Save the graph to a given file path. Args: vleaf (`nnabla.Variable`): End variable. All variables and functions which can be traversed from this variable are shown in the result. fpath (`str`): The file path used to save. cleanup (`bool`): Clean up the source file after rendering. Default is False. format (str): Force...
juraj-google-style
def _CheckIsDirectory(self, file_entry): if (definitions.FILE_ENTRY_TYPE_DIRECTORY not in self._file_entry_types): return False return file_entry.IsDirectory()
Checks the is_directory find specification. Args: file_entry (FileEntry): file entry. Returns: bool: True if the file entry matches the find specification, False if not.
codesearchnet
def convert_to_python_types(args): if isinstance(args, dict): return {k: convert_to_python_type(v) for k, v in args.items()} else: return [convert_to_python_type(v) for v in args]
Convert the given list or dictionary of args to python types. Args: args: Either an iterable of types, or a dictionary where the values are types. Returns: If given an iterable, a list of converted types. If given a dictionary, a dictionary with the same keys, and values which have been converted.
github-repos
def copy_rec(source, dest): if os.path.isdir(source): for child in os.listdir(source): new_dest = os.path.join(dest, child) os.makedirs(new_dest, exist_ok=True) copy_rec(os.path.join(source, child), new_dest) elif os.path.isfile(source): logging.info(' ...
Copy files between different directories. Copy one or more files to an existing directory. This function is recursive: if the source is a directory, all its subdirectories are created in the destination. Existing files in the destination are overwritten without any warning. Args: source (str): File or directory name. dest ...
juraj-google-style
def maximum(x1, x2): if any_symbolic_tensors((x1, x2)): return Maximum().symbolic_call(x1, x2) return backend.numpy.maximum(x1, x2)
Element-wise maximum of `x1` and `x2`. Args: x1: First tensor. x2: Second tensor. Returns: Output tensor, element-wise maximum of `x1` and `x2`.
github-repos
def count(self, val=True): return sum((elem.count(val) for elem in self._iter_components()))
Get the number of bits in the array with the specified value. Args: val: A boolean value to check against the array's value. Returns: An integer of the number of bits in the array equal to val.
codesearchnet
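The `count` entry sums per-component counts across an internal generator. A minimal sketch of the same idea, assuming components are plain lists of booleans (the original's component type is not shown):

```python
class BitArray:
    """Hypothetical segmented bit array; the component layout is an assumption."""

    def __init__(self, components):
        # Each component is a plain list of booleans.
        self._components = components

    def _iter_components(self):
        return iter(self._components)

    def count(self, val=True):
        # Sum the per-component counts, mirroring the snippet above.
        return sum(comp.count(val) for comp in self._iter_components())
```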
def auth_middleware(policy): assert isinstance(policy, AbstractAuthentication) async def _auth_middleware_factory(app, handler): async def _middleware_handler(request): request[POLICY_KEY] = policy response = await handler(request) ...
Returns an aiohttp_auth middleware factory for use by the aiohttp application object. Args: policy: An authentication policy with a base class of AbstractAuthentication.
juraj-google-style
def get_package_hashes(filename): log.debug('Getting package hashes') filename = os.path.abspath(filename) with open(filename, 'rb') as f: data = f.read() _hash = hashlib.sha256(data).hexdigest() log.debug('Hash for file %s: %s', filename, _hash) return _hash
Provides hash of given filename. Args: filename (str): Name of file to hash Returns: (str): sha256 hash
codesearchnet
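The `get_package_hashes` entry is self-contained enough to reproduce directly; this sketch keeps the same read-then-digest flow but drops the logging:

```python
import hashlib
import os

def get_package_hashes(filename):
    # Read the whole file into memory and return its SHA-256 hex digest.
    filename = os.path.abspath(filename)
    with open(filename, 'rb') as f:
        data = f.read()
    return hashlib.sha256(data).hexdigest()
```

Reading the entire file at once matches the snippet; for very large packages a chunked `hashlib.sha256()` update loop would avoid holding the file in memory.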
def _prepare_headers(self, additional_headers=None, **kwargs): user_agent = "pyseaweed/{version}".format(version=__version__) headers = {"User-Agent": user_agent} if additional_headers is not None: headers.update(additional_headers) return headers
Prepare headers for http communication. Return dict of header to be used in requests. Args: .. versionadded:: 0.3.2 **additional_headers**: (optional) Additional headers to be used with request Returns: Headers dict. Key and values are string
juraj-google-style
def _url_format(self, service): base_service_url = '{base}{service}'.format(base=self.urlbase, service=service) return base_service_url
Generate URL from urlbase and service. Args: service (str): The endpoint service to use, i.e. gradebook Returns: str: URL to where the request should be made
codesearchnet
def _get_non_space_email(self, doc) -> List: result_lst = [] for e in doc: if "mail:" in e.text.lower(): idx = e.text.lower().index("mail:") + 5 value = e.text[idx:] tmp_doc = self._nlp(value) tmp_email_matches = self._...
Deal with the corner case where the text contains an "email" string with no space around it Args: doc: List[Token] Returns: List
juraj-google-style
def _page_streamable(page_descriptor): def inner(a_func, settings, request, **kwargs): page_iterator = gax.PageIterator( a_func, page_descriptor, settings.page_token, request, **kwargs) if settings.flatten_pages: return gax.ResourceIterator(page_iterator) ...
Creates a function that yields an iterable to performs page-streaming. Args: page_descriptor (:class:`PageDescriptor`): indicates the structure of page streaming to be performed. Returns: Callable: A function that returns an iterator.
juraj-google-style
def replace_list(items, match, replacement): return [replace(item, match, replacement) for item in items]
Replaces occurrences of a match string in a given list of strings and returns a list of new strings. The match string can be a regex expression. Args: items (list): the list of strings to modify. match (str): the search expression. replacement (str): the string to replace with.
juraj-google-style
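`replace_list` delegates to an unshown `replace` helper; since the docstring says the match can be a regex, a plausible sketch uses `re.sub` (an assumption — the real helper may behave differently for non-string items):

```python
import re

def replace(item, match, replacement):
    # Treat `match` as a regular expression, as the docstring allows.
    return re.sub(match, replacement, item)

def replace_list(items, match, replacement):
    # Apply the substitution to every string in the list.
    return [replace(item, match, replacement) for item in items]
```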
def search_users(self, user): user_url = ('%s/%s/%s' % (self.url, 'user', user)) response = self.jss.get(user_url) return LDAPUsersResults(self.jss, response)
Search for LDAP users. Args: user: User to search for. It is not entirely clear how the JSS determines the results- are regexes allowed, or globbing? Returns: LDAPUsersResult object. Raises: Will raise a JSSGetError if no results are found.
codesearchnet
def initialize_resources(resource_list, name='init'): if resource_list: return control_flow_ops.group(*[r.create for r in resource_list], name=name) return control_flow_ops.no_op(name=name)
Initializes the resources in the given list. Args: resource_list: list of resources to initialize. name: name of the initialization op. Returns: op responsible for initializing all resources.
github-repos
def load_dictionary(self, filename, encoding="utf-8"): with load_file(filename, encoding) as data: self._dictionary.update(json.loads(data.lower(), encoding=encoding)) self._update_dictionary()
Load in a pre-built word frequency list Args: filename (str): The filepath to the json (optionally gzipped) \ file to be loaded encoding (str): The encoding of the dictionary
juraj-google-style
def testNoopElimination(self, init_dataset_fn, transformation, expected_name): dataset = init_dataset_fn() if expected_name: dataset = dataset.apply(testing.assert_next([expected_name, 'FiniteTake'])) else: dataset = dataset.apply(testing.assert_next(['FiniteTake'])) dataset = dataset.ap...
Runs a noop elimination test case. Args: init_dataset_fn: Function to create the initial dataset transformation: Transformation to apply expected_name: Name of the transformation if it is not eliminated
github-repos
def deref(value: base.Symbolic, recursive: bool=False) -> Any: if isinstance(value, Ref): value = value.value if recursive: def _deref(k, v, p): del k, p if isinstance(v, Ref): return deref(v.value, recursive=True) return v return valu...
Dereferences a symbolic value that may contain pg.Ref. Args: value: The input symbolic value. recursive: If True, dereference `pg.Ref` in the entire tree. Otherwise Only dereference the root node. Returns: The dereferenced root, or dereferenced tree if recursive is True.
github-repos
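The `deref` entry unwraps reference nodes in a symbolic tree. A plain-Python sketch of the same semantics, using a hypothetical `Ref` wrapper and ordinary lists/dicts in place of PyGlove's symbolic tree (the real implementation traverses via `rebind`):

```python
class Ref:
    """Hypothetical stand-in for pg.Ref: wraps a referenced value."""
    def __init__(self, value):
        self.value = value

def deref(value, recursive=False):
    # Always unwrap the root reference.
    if isinstance(value, Ref):
        value = value.value
    if recursive:
        # Walk plain containers, unwrapping any nested Ref.
        if isinstance(value, list):
            return [deref(v, recursive=True) for v in value]
        if isinstance(value, dict):
            return {k: deref(v, recursive=True) for k, v in value.items()}
    return value
```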
def sample_rate(self, value): if value == self._defaults['sampleRate'] and 'sampleRate' in self._values: del self._values['sampleRate'] else: self._values['sampleRate'] = value
The sample_rate property. Args: value (float). the property value.
juraj-google-style
def _create_field(self, uri , name, field_type, **kwargs): if not (name and (field_type in ['TEXT_INPUT', 'DATE', 'PERSON'])): return requests.codes.bad_request, {'success' : 'False', 'error': 'name needs to be provided and field_type needs to be \'TEXT_INPUT\', \'DATE\' or \'PERSON\''} kwarg...
Creates a field with the provided attributes. Args: uri base uri for the field (pipeline or box uri) name required name string field_type required type string [TEXT_INPUT, DATE or PERSON] kwargs {} return (status code, field dict)
juraj-google-style
def get_value_by_xy(self, x, y): if x < self.xMin or x > self.xMax or y < self.yMin or y > self.yMax: return None else: row = self.nRows - int(numpy.ceil((y - self.yMin) / self.dx)) col = int(numpy.floor((x - self.xMin) / self.dx)) va...
Get raster value by xy coordinates. Args: x: X Coordinate. y: Y Coordinate. Returns: raster value, None if the input are invalid.
juraj-google-style
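`get_value_by_xy` maps world coordinates to a row/column cell index; note it uses `self.dx` for both axes, implying square cells. A self-contained sketch under that assumption (the `Raster` constructor and attribute names here are hypothetical):

```python
import math

class Raster:
    """Minimal raster sketch assuming square cells of size dx, row 0 at the top."""

    def __init__(self, xmin, ymin, nrows, ncols, dx, data):
        self.xMin, self.yMin = xmin, ymin
        self.nRows, self.nCols = nrows, ncols
        self.dx = dx
        self.xMax = xmin + ncols * dx
        self.yMax = ymin + nrows * dx
        self.data = data  # row-major grid values

    def get_value_by_xy(self, x, y):
        # Reject coordinates outside the raster extent.
        if x < self.xMin or x > self.xMax or y < self.yMin or y > self.yMax:
            return None
        # y grows northward while row indices grow downward, hence the flip.
        row = self.nRows - int(math.ceil((y - self.yMin) / self.dx))
        col = int(math.floor((x - self.xMin) / self.dx))
        return self.data[row][col]
```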
def setZeroResettableKWH(self, password="00000000"): result = False self.setContext("setZeroResettableKWH") try: if not self.requestA(): self.writeCmdMsg("Bad read CRC on setting") else: if not self.serialCmdPwdAuth(password): ...
Serial call to zero resettable kWh registers. Args: password (str): Optional password. Returns: bool: True on completion and ACK.
juraj-google-style
def __init__(self, endpoint_name, sagemaker_session=None): super(TensorFlowPredictor, self).__init__(endpoint_name, sagemaker_session, tf_json_serializer, tf_json_deserializer)
Initialize an ``TensorFlowPredictor``. Args: endpoint_name (str): The name of the endpoint to perform inference on. sagemaker_session (sagemaker.session.Session): Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, the estimator creates one using th...
juraj-google-style
def get_vep_info(vep_string, vep_header): vep_annotations = [dict(zip(vep_header, vep_annotation.split('|'))) for vep_annotation in vep_string.split(',')] return vep_annotations
Make the vep annotations into a dictionaries A vep dictionary will have the vep column names as keys and the vep annotations as values. The dictionaries are stored in a list Args: vep_string (string): A string with the CSQ annotation vep_header (list): A list with the vep header Return: vep_annotations (list): A lis...
codesearchnet
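`get_vep_info` is a pure function and easy to reproduce: each comma-separated transcript annotation in the CSQ string is zipped against the header names to produce one dict per transcript:

```python
def get_vep_info(vep_string, vep_header):
    # Split the CSQ string into per-transcript annotations, then zip
    # the pipe-separated fields against the header names.
    return [dict(zip(vep_header, annotation.split('|')))
            for annotation in vep_string.split(',')]
```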
def _rowwise_unsorted_segment_sum(values, indices, n): batch, k = tf.unstack(tf.shape(indices), num=2) indices_flat = tf.reshape(indices, [-1]) + tf.div(tf.range(batch * k), k) * n ret_flat = tf.unsorted_segment_sum(tf.reshape(values, [-1]), indices_flat, batch * n) return tf.reshape(ret...
UnsortedSegmentSum on each row. Args: values: a `Tensor` with shape `[batch_size, k]`. indices: an integer `Tensor` with shape `[batch_size, k]`. n: an integer. Returns: A `Tensor` with the same type as `values` and shape `[batch_size, n]`.
codesearchnet
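The flat-index trick in `_rowwise_unsorted_segment_sum` offsets each row's indices by `row * n`, so the rows scatter into disjoint slices of one flat buffer of length `batch * n`. A plain-Python sketch of the same semantics (explicit loops in place of TensorFlow's scatter ops):

```python
def rowwise_unsorted_segment_sum(values, indices, n):
    # values, indices: lists of rows, each of equal length k.
    # Returns a [batch][n] grid where out[row][i] sums the values
    # whose index in that row equals i.
    out = [[0] * n for _ in values]
    for row, (vals, idxs) in enumerate(zip(values, indices)):
        for v, i in zip(vals, idxs):
            out[row][i] += v
    return out
```

Duplicate indices within a row accumulate, which is exactly why the TF version needs `unsorted_segment_sum` rather than a plain scatter.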
def aggregate_and_return_name_for_input(self, out_graphdef): del out_graphdef raise RuntimeError('Unimplemented abstract method.')
This adds the node(s) to out_graphdef and returns the input node name. Args: out_graphdef: A graphdef that is ready to have this input added. Returns: The output that the stub should use as an input for this operand. Raises: RuntimeError: if the method is not implemented.
github-repos
def read(self, index, name=None): return self._implementation.read(index, name=name)
Read the value at location `index` in the TensorArray. Args: index: 0-D. int32 tensor with the index to read from. name: A name for the operation (optional). Returns: The tensor at index `index`.
github-repos