Columns: code (string, 20–4.93k chars) · docstring (string, 33–1.27k chars) · source (string, 3 classes)
def __get_distribution_tags(self, client, arn): return {t['Key']: t['Value'] for t in client.list_tags_for_resource(Resource=arn)['Tags']['Items']}
Returns a dict containing the tags for a CloudFront distribution Args: client (botocore.client.CloudFront): Boto3 CloudFront client object arn (str): ARN of the distribution to get tags for Returns: `dict`
codesearchnet
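The tag-flattening in the row above (CloudFront's `{'Tags': {'Items': [...]}}` shape reduced to a plain dict) can be sketched without a live client; the sample payload is hypothetical:

```python
def tags_to_dict(tag_items):
    """Collapse a list of {'Key': ..., 'Value': ...} entries into a dict."""
    return {t['Key']: t['Value'] for t in tag_items}

# Hypothetical payload shaped like client.list_tags_for_resource(...)['Tags']['Items']
sample_items = [{'Key': 'env', 'Value': 'prod'}, {'Key': 'team', 'Value': 'infra'}]
tags = tags_to_dict(sample_items)
```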
def to_channel_dimension_format(image: np.ndarray, channel_dim: Union[ChannelDimension, str], input_channel_dim: Optional[Union[ChannelDimension, str]]=None) -> np.ndarray: if not isinstance(image, np.ndarray): raise TypeError(f'Input image must be of type np.ndarray, got {type(image)}') if input_channe...
Converts `image` to the channel dimension format specified by `channel_dim`. The input can have an arbitrary number of leading dimensions. Only the last three dimensions will be permuted to format the `image`. Args: image (`numpy.ndarray`): The image to have its channel dimension set. channel_dim (`ChannelDimension`): The cha...
github-repos
def cache_bottlenecks(sess, image_lists, image_dir, bottleneck_dir, jpeg_data_tensor, decoded_image_tensor, resized_input_tensor, bottleneck_tensor, module_name): how_many_bottlenecks = 0 ensure_dir_exists(bottleneck_dir) for (label_name, label_lists) in image_lists.items(): for category in ['traini...
Ensures all the training, testing, and validation bottlenecks are cached. Because we're likely to read the same image multiple times (if there are no distortions applied during training) it can speed things up a lot if we calculate the bottleneck layer values once for each image during preprocessing, and then just rea...
codesearchnet
def _new_named_tuple(self, class_name: str, fields: list[tuple[str, Any]]) -> pytd.Class: class_base = pytd.NamedType('typing.NamedTuple') class_constants = tuple((pytd.Constant(n, t) for n, t in fields)) return pytd.Class(name=class_name, keywords=(), bases=(class_base,), methods=(), constants=class_consta...
Generates a pytd class for a named tuple. Args: class_name: The name of the generated class fields: A list of (name, type) tuples. Returns: A generated class that describes the named tuple.
github-repos
def restore_app_connection(self, port=None): self.host_port = (port or utils.get_available_host_port()) self._retry_connect() self.ed = self._start_event_client()
Restores the sl4a connection after the device is disconnected. Instead of creating a new instance of the client: - Uses the given port (or finds a new available host_port if none is given). - Tries to connect to the remote server with the selected port. Args: port: If given, this is the host port from which to connect to remote device port. ...
codesearchnet
def decode_list(cls, obj, element_type): if not isinstance(obj, list): raise Exception('expected a python list') return [cls.do_decode(x, element_type) for x in obj]
Decodes json into a list, handling conversion of the elements. Args: obj: the json object to decode element_type: a class object which is the conjure type of the elements in this list. Returns: A python list where the elements are instances of type element_type.
codesearchnet
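The element-wise decoding above can be sketched with a stand-in decoder; passing `do_decode` in as a plain callable is an assumption for illustration:

```python
def decode_list(obj, element_type, do_decode):
    # Reject anything that is not a real Python list before mapping.
    if not isinstance(obj, list):
        raise TypeError('expected a python list')
    # Decode each element with the supplied decoder.
    return [do_decode(x, element_type) for x in obj]

decoded = decode_list(['1', '2', '3'], int, lambda x, t: t(x))
```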
def mean(self): chunk_iter = chunks(self.times, self.bestof) times = list(map(min, chunk_iter)) mean = (sum(times) / len(times)) return mean
The mean of the best results of each trial. Returns: float: mean of measured seconds Note: This is typically less informative than simply looking at the min. It is recommended to use min as the expectation value rather than mean in most cases. Example: >>> import math >>> self = Timerit(num=10, verbose=0) >>> self.c...
codesearchnet
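The best-of-chunks mean above can be reproduced with a small helper; `chunks` is re-implemented here since the original helper is not shown:

```python
def chunks(seq, size):
    # Yield consecutive slices of length `size`.
    return (seq[i:i + size] for i in range(0, len(seq), size))

def mean_of_best(times, bestof):
    # Keep the minimum of each best-of group, then average those minima.
    best = [min(chunk) for chunk in chunks(times, bestof)]
    return sum(best) / len(best)

result = mean_of_best([3.0, 1.0, 4.0, 1.0, 5.0, 9.0], bestof=2)
```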
def __init__(self, src_file, sync_dst_file, *async_dst_files): self._origin_stack = '\n'.join(traceback.format_stack()) self.tee_file = None self._src_file = src_file self._sync_dst_file = sync_dst_file self._async_dst_files = list(async_dst_files) sel...
Constructor. Args: src_file: file to read from. sync_dst_file: file to write to synchronously when `self.write()` is called. async_dst_files: files to write to asynchronously
juraj-google-style
def __x_google_quota_definitions_descriptor(self, limit_definitions): if not limit_definitions: return None definitions_list = [{ 'name': ld.metric_name, 'metric': ld.metric_name, 'unit': '1/min/{project}', 'values': {'STANDARD': ld.default_limit}, 'displayNam...
Describes the quota limit definitions for an API. Args: limit_definitions: List of endpoints.LimitDefinition tuples Returns: A dict descriptor of the API's quota limit definitions.
juraj-google-style
def _reshape(self, fused_qkv: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: batch_size, seq_length, three_times_hidden_size = fused_qkv.shape fused_qkv = fused_qkv.view(batch_size, seq_length, self.num_heads, 3, self.head_dim) query_layer = fused_qkv[..., 0, :].transpose(1, 2) key_la...
Splits the last dimension into (num_heads, head_dim) and reshapes to (bs, heads, len, dim) without making any copies; the results share the same memory storage as `fused_qkv`. Args: fused_qkv (`torch.tensor`): [batch_size, seq_length, num_heads * 3 * head_dim] Returns: query: [batch_size, num_heads, seq_length, head_dim]...
github-repos
def generate_code(meta, prefix=None, node=False, min=False): if isinstance(meta, dict): (url_prefix, auth_header, resources) = parse_meta(meta) else: (url_prefix, auth_header, resources) = meta if (prefix is not None): url_prefix = prefix core = render_core(url_prefix, auth_heade...
Generate res.js Args: meta: tuple(url_prefix, auth_header, resources) or metadata of API Returns: res.js source code
codesearchnet
def draw_ID(ID, idx_array, drawID_raster): for i in range(idx_array.shape[0]): x = idx_array[i, 0] y = idx_array[i, 1] drawID_raster[x, y] = ID return drawID_raster
Draw every pixel's ID. After computing the connectivity of all pixels with the given value, every pixel has an ID. These pixel IDs then need to be drawn on the not-yet-drawn raster file. Args: ID: given ID value idx_array: set of pixel positions that have the given ID value drawID_raster: raster file to draw on Return: drawID_raster: r...
juraj-google-style
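The ID-stamping loop above can be sketched on a plain list-of-lists raster in place of a numpy array:

```python
def draw_id(value, positions, raster):
    # Write `value` at every (row, col) position on the raster.
    for row, col in positions:
        raster[row][col] = value
    return raster

raster = [[0] * 3 for _ in range(3)]
draw_id(7, [(0, 0), (1, 2)], raster)
```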
def output_compressed_dinf(dinfflowang, compdinffile, weightfile): dinf_r = RasterUtilClass.read_raster(dinfflowang) data = dinf_r.data xsize = dinf_r.nCols ysize = dinf_r.nRows nodata_value = dinf_r.noDataValue cal_dir_code = frompyfunc(DinfUtil.compress_dinf, ...
Output compressed Dinf flow direction and weight to raster files Args: dinfflowang: Dinf flow direction raster file compdinffile: compressed Dinf flow code raster file weightfile: the corresponding weight raster file
juraj-google-style
def __init__(self, channel): self.CreateJob = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/CreateJob", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.CreateJobRequest.SerializeToString, response_deserializer=g...
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def load_identity_signer(key_dir, key_name): key_path = os.path.join(key_dir, '{}.priv'.format(key_name)) if (not os.path.exists(key_path)): raise LocalConfigurationError('No such signing key file: {}'.format(key_path)) if (not os.access(key_path, os.R_OK)): raise LocalConfigurationError('Ke...
Loads a private key from the key directory, based on a validator's identity. Args: key_dir (str): The path to the key directory. key_name (str): The name of the key to load. Returns: Signer: the cryptographic signer for the key
codesearchnet
def linear(x): return x
Linear activation function (pass-through). A "linear" activation is an identity function: it returns the input, unmodified. Args: x: Input tensor.
github-repos
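Since `linear` is the identity, it composes with itself without changing the input:

```python
def linear(x):
    # Identity activation: returns the input unmodified.
    return x

# Composing the identity any number of times is still the identity.
out = linear(linear([1.5, -2.0]))
```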
def register_model(cls, model): rest_name = model.rest_name resource_name = model.resource_name if (rest_name not in cls._model_rest_name_registry): cls._model_rest_name_registry[rest_name] = [model] cls._model_resource_name_registry[resource_name] = [model] elif (model not in cls._model...
Register a model class according to its remote name Args: model: the model to register
codesearchnet
def assert_text(self, *args, **kwargs): query = TextQuery(*args, **kwargs) @self.synchronize(wait=query.wait) def assert_text(): count = query.resolve_for(self) if (not (matches_count(count, query.options) and ((count > 0) or expects_none(query.options)))): raise ExpectationNotM...
Asserts that the page or current node has the given text content, ignoring any HTML tags. Args: *args: Variable length argument list for :class:`TextQuery`. **kwargs: Arbitrary keyword arguments for :class:`TextQuery`. Returns: True Raises: ExpectationNotMet: If the assertion hasn't succeeded during the wait time.
codesearchnet
def _ragged_stack_concat_axis_0(rt_inputs, stack_values): flat_values = [rt.flat_values for rt in rt_inputs] concatenated_flat_values = array_ops.concat(flat_values, axis=0) nested_splits = [rt.nested_row_splits for rt in rt_inputs] ragged_rank = rt_inputs[0].ragged_rank concatenated_nested_splits =...
Helper function to concatenate or stack ragged tensors along axis 0. Args: rt_inputs: A list of RaggedTensors, all with the same rank and ragged_rank. stack_values: Boolean. If true, then stack values; otherwise, concatenate them. Returns: A RaggedTensor.
github-repos
def add_argument(self, parser, bootstrap=False): if self.cli_expose: args = self._get_argparse_names(parser.prefix_chars) kwargs = self._get_argparse_kwargs(bootstrap) parser.add_argument(*args, **kwargs)
Add this item as an argument to the given parser. Args: parser (argparse.ArgumentParser): The parser to add this item to. bootstrap: Flag to indicate whether you only want to mark this item as required or not
codesearchnet
def setup(self, hosts, artifacts, extra_artifacts, use_tsk, reason, grr_server_url, grr_username, grr_password, approvers=None, verify=True): super(GRRArtifactCollector, self).setup(reason, grr_server_url, grr_username, grr_password, approvers=approvers, verify=verify) if (artifacts is not None): self.a...
Initializes a GRR artifact collector. Args: hosts: Comma-separated list of hostnames to launch the flow on. artifacts: list of GRR-defined artifacts. extra_artifacts: list of GRR-defined artifacts to append. use_tsk: toggle for use_tsk flag on GRR flow. reason: justification for GRR access. grr_server_url: GRR server ...
codesearchnet
def fasta_format_check(fasta_path, logger): header_count = 0 line_count = 1 nt_count = 0 with open(fasta_path) as f: for l in f: l = l.strip() if (l == ''): continue if (l[0] == '>'): header_count += 1 continue ...
Check that a file is valid FASTA format. - First non-blank line needs to begin with a '>' header character. - Sequence can only contain valid IUPAC nucleotide characters Args: fasta_path (str): Path to the FASTA file to check logger (logging.Logger): logger object Raises: Exception: If invalid FASTA format
codesearchnet
def install_app(app, target='/Applications/'): if (target[(- 4):] != '.app'): if (app[(- 1):] == '/'): base_app = os.path.basename(app[:(- 1)]) else: base_app = os.path.basename(app) target = os.path.join(target, base_app) if (not (app[(- 1)] == '/')): app...
Install an app file by moving it into the specified Applications directory Args: app (str): The location of the .app file target (str): The directory in which to install the package. Default is '/Applications/' Returns: str: The results of the rsync command CLI Example: .. code-block:: bash salt '*' macpackage.in...
codesearchnet
def Push(self, source_file, device_filename, mtime='0', timeout_ms=None, progress_callback=None, st_mode=None): if isinstance(source_file, str): if os.path.isdir(source_file): self.Shell(('mkdir ' + device_filename)) for f in os.listdir(source_file): self.Push(os.path...
Push a file or directory to the device. Args: source_file: Either a filename, a directory or file-like object to push to the device. device_filename: Destination on the device to write to. mtime: Optional, modification time to set on the file. timeout_ms: Expected timeout for any part of the push. st_mode: stat mode f...
codesearchnet
async def snap(self, user=None, view=None): if view is None: view = self.view if user is None: user = self.auth.getUserByName('root') snap = await view.snap(user) return snap
Return a transaction object (Snap) for the given view. Args: user (optional): The user for the snap. Defaults to the root user. view (optional): The view to snap. Defaults to the default view. Returns: (synapse.lib.snap.Snap) NOTE: This must be used in a with block.
juraj-google-style
async def evaluate_trained_model(state): return (await evaluate_model(state.train_model_path, state.best_model_path, os.path.join(fsdb.eval_dir(), state.train_model_name), state.seed))
Evaluate the most recently trained model against the current best model. Args: state: the RL loop State instance.
codesearchnet
def DocumentVersionsRow( self, parser_mediator, query, row, **unused_kwargs): query_hash = hash(query) version_path = self._GetRowValue(query_hash, row, 'version_path') path = self._GetRowValue(query_hash, row, 'path') paths = version_path.split('/') if len(paths) < 2 or not p...
Parses a document versions row. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row.
juraj-google-style
def get_dimension(self, dimension, default=None, strict=False): if ((dimension is not None) and (not isinstance(dimension, (int, basestring, Dimension)))): raise TypeError(('Dimension lookup supports int, string, and Dimension instances, cannot lookup Dimensions using %s type.' % type(dimension).__name__)) ...
Get a Dimension object by name or index. Args: dimension: Dimension to look up by name or integer index default (optional): Value returned if Dimension not found strict (bool, optional): Raise a KeyError if not found Returns: Dimension object for the requested dimension or default
codesearchnet
def add_redistribution(self, protocol, route_map_name=None): protocols = ['bgp', 'rip', 'static', 'connected'] if (protocol not in protocols): raise ValueError('redistributed protocol must be bgp, connected, rip or static') if (route_map_name is None): cmd = 'redistribute {}'.format(protocol)...
Adds a protocol redistribution to OSPF Args: protocol (str): protocol to redistribute route_map_name (str): route-map to be used to filter the protocols Returns: bool: True if the command completes successfully Exception: ValueError: This will be raised if the protocol pass is not one of the following: [rip, bgp, st...
codesearchnet
def send(self, content_type='HTML'): payload = self.api_representation(content_type) endpoint = 'https://...' self._make_api_call('post', endpoint=endpoint, data=json.dumps(payload))
Takes the recipients, body, and attachments of the Message and sends. Args: content_type: Can either be 'HTML' or 'Text', defaults to HTML.
juraj-google-style
def linear_interpolate_rank(tensor1, tensor2, coeffs, rank=1): (_, _, _, num_channels) = common_layers.shape_list(tensor1) diff_sq_sum = tf.reduce_sum(((tensor1 - tensor2) ** 2), axis=(0, 1, 2)) (_, feature_ranks) = tf.math.top_k(diff_sq_sum, k=rank) feature_rank = feature_ranks[(- 1)] channel_inds ...
Linearly interpolate channel at "rank" between two tensors. The channels are ranked according to their L2 norm between tensor1[channel] and tensor2[channel]. Args: tensor1: 4-D Tensor, NHWC tensor2: 4-D Tensor, NHWC coeffs: list of floats. rank: integer. Returns: interp_latents: list of interpolated 4-D Tensors, shap...
codesearchnet
def get_roles(client): done = False marker = None roles = [] while not done: if marker: response = client.list_roles(Marker=marker) else: response = client.list_roles() roles += response['Roles'] ...
Returns a list of all the roles for an account. Returns a list containing all the roles for the account. Args: client (:obj:`boto3.session.Session`): A boto3 Session object Returns: :obj:`list` of `dict`
juraj-google-style
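The marker-based pagination loop above can be exercised against a stub client; the two-page stub below is hypothetical:

```python
class StubClient:
    """Stand-in for an IAM-style client, returning two pages of roles."""
    def list_roles(self, Marker=None):
        if Marker is None:
            return {'Roles': [{'RoleName': 'a'}], 'IsTruncated': True, 'Marker': 'p2'}
        return {'Roles': [{'RoleName': 'b'}], 'IsTruncated': False}

def get_all_roles(client):
    # Follow the continuation marker until the listing is no longer truncated.
    roles, marker, done = [], None, False
    while not done:
        response = client.list_roles(Marker=marker) if marker else client.list_roles()
        roles += response['Roles']
        if response.get('IsTruncated'):
            marker = response['Marker']
        else:
            done = True
    return roles

roles = get_all_roles(StubClient())
```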
def tag_versions(repo_path): repo = dulwich.repo.Repo(repo_path) tags = get_tags(repo) maj_version = 0 feat_version = 0 fix_version = 0 last_maj_version = 0 last_feat_version = 0 result = [] for commit_sha, children in reversed( get_children_per_first_parent(repo_pa...
Given a repo will add a tag for each major version. Args: repo_path(str): path to the git repository to tag.
juraj-google-style
def sholl_crossings(neurites, center, radii): def _count_crossings(neurite, radius): 'count_crossings of segments in neurite with radius' r2 = (radius ** 2) count = 0 for (start, end) in iter_segments(neurite): (start_dist2, end_dist2) = (morphmath.point_dist2(center, st...
Calculate crossings of neurites. Args: neurites: neurites on which to perform the Sholl analysis center: center point from which the radii are measured radii (iterable of floats): radii for which crossings will be counted Returns: Array of the same length as radii, with a count of the number of crossings for the respective radius
codesearchnet
def get_sparse_tensors(self, transformation_cache, state_manager): sparse_tensors = self.categorical_column.get_sparse_tensors(transformation_cache, state_manager) return self._get_sparse_tensors_helper(sparse_tensors)
Returns an IdWeightPair. `IdWeightPair` is a pair of `SparseTensor`s which represents ids and weights. `IdWeightPair.id_tensor` is typically a `batch_size` x `num_buckets` `SparseTensor` of `int64`. `IdWeightPair.weight_tensor` is either a `SparseTensor` of `float` or `None` to indicate all weights should be taken to...
github-repos
def _is_valid_netmask(self, netmask): mask = netmask.split('.') if (len(mask) == 4): try: for x in mask: if (int(x) not in self._valid_mask_octets): return False except ValueError: return False for (idx, y) in enumerate(mask): ...
Verify that the netmask is valid. Args: netmask: A string, either a prefix or dotted decimal netmask. Returns: A boolean, True if the prefix represents a valid IPv4 netmask.
codesearchnet
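The octet-by-octet check above can also be phrased as a bit-contiguity test on the 32-bit mask; this is an alternative sketch, not the library's implementation:

```python
def is_valid_netmask(netmask):
    # A valid IPv4 netmask is a run of 1-bits followed only by 0-bits.
    try:
        octets = [int(o) for o in netmask.split('.')]
    except ValueError:
        return False
    if len(octets) != 4 or any(o < 0 or o > 255 for o in octets):
        return False
    mask = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    inverted = ~mask & 0xFFFFFFFF
    # The inverted mask must be of the form 2**k - 1 (all trailing ones).
    return inverted & (inverted + 1) == 0

checks = (is_valid_netmask('255.255.255.0'), is_valid_netmask('255.0.255.0'))
```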
def normalize_url(base_url, rel_url): if (not rel_url): return None if (not is_absolute_url(rel_url)): rel_url = rel_url.replace('../', '/') if ((not base_url.endswith('/')) and (not rel_url.startswith('/'))): return ((base_url + '/') + rel_url.replace('../', '/')) re...
Normalize the `url` - from relative, create absolute URL. Args: base_url (str): Domain with ``protocol://`` string rel_url (str): Relative or absolute url. Returns: str/None: Normalized URL or None if `url` is blank.
codesearchnet
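For comparison, the standard library's `urljoin` performs a similar relative-to-absolute resolution (with full `../` semantics); a sketch assuming the same empty-URL behavior as above:

```python
from urllib.parse import urljoin

def normalize(base_url, rel_url):
    # Empty relative URLs normalize to None, matching the function above.
    if not rel_url:
        return None
    # Ensure the base ends with '/' so the last path segment is kept.
    base = base_url if base_url.endswith('/') else base_url + '/'
    return urljoin(base, rel_url)

joined = normalize('http://example.com/docs', 'page.html')
```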
def assign_add(self, delta, use_locking=False, name=None, read_value=True): assign = state_ops.assign_add(self._variable, delta, use_locking=use_locking, name=name) if read_value: return assign return assign.op
Adds a value to this variable. This is essentially a shortcut for `assign_add(self, delta)`. Args: delta: A `Tensor`. The value to add to this variable. use_locking: If `True`, use locking during the operation. name: The name of the operation to be created read_value: if True, will return something which evaluates to...
github-repos
def create_config(sections, section_contents): sections_length, section_contents_length = len(sections), len(section_contents) if sections_length != section_contents_length: raise ValueError("Mismatch between argument lengths.\n" "len(sections) = {}\n" ...
Create a config file from the provided sections and key value pairs. Args: sections (List[str]): A list of section keys. section_contents (List[Dict[str, str]]): A list of dictionaries. Must be as long as the list of sections. That is to say, if there are two sections, there should be two dicts. Returns: configparser.Conf...
juraj-google-style
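The sections/contents pairing above maps directly onto `configparser`; a minimal sketch of the same idea:

```python
import configparser

def create_config(sections, section_contents):
    # One dict of key/value pairs per section, in matching order.
    if len(sections) != len(section_contents):
        raise ValueError('Mismatch between argument lengths.')
    config = configparser.ConfigParser()
    for section, contents in zip(sections, section_contents):
        config[section] = contents
    return config

cfg = create_config(['server'], [{'host': 'localhost', 'port': '8080'}])
```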
def connect_to(name): kwargs = config_for(name) if (not kwargs): raise AttributeError('connection profile not found in config') node = connect(return_node=True, **kwargs) return node
Creates a node instance based on an entry from the config This function will retrieve the settings for the specified connection from the config and return a Node instance. The configuration must be loaded prior to calling this function. Args: name (str): The name of the connection to load from the config. The name ...
codesearchnet
def fileToMD5(filename, block_size=256*128, binary=False): md5 = hashlib.md5() with open(filename,'rb') as f: for chunk in iter(lambda: f.read(block_size), b''): md5.update(chunk) if not binary: return md5.hexdigest() return md5.digest()
A function that calculates the MD5 hash of a file. Args: ----- filename: Path to the file. block_size: Chunks of suitable size. Block size directly depends on the block size of your filesystem to avoid performance issues. Blocks of 4096 octets (Default NTFS). binary: A boolean representing whether the returned info i...
juraj-google-style
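The chunked hashing above keeps memory flat for large files; it can be checked against a one-shot digest on a temporary file:

```python
import hashlib
import os
import tempfile

def file_md5(path, block_size=256 * 128):
    # Stream the file in blocks so large files never sit fully in memory.
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

payload = b'hello world' * 1000
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    tmp_path = tmp.name
digest = file_md5(tmp_path)
os.unlink(tmp_path)
```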
def _sort_records_map(records): ctx = context.get() l = len(records) key_records = [None] * l logging.debug("Parsing") for i in range(l): proto = kv_pb.KeyValue() proto.ParseFromString(records[i]) key_records[i] = (proto.key(), records[i]) logging.debug("Sorting") key_records.sort(cmp=_co...
Map function sorting records. Converts records to KeyValue protos, sorts them by key and writes them into new GCS file. Creates _OutputFile entity to record resulting file name. Args: records: list of records which are serialized KeyValue protos.
juraj-google-style
def DEFINE_boolean(name, default, help, flag_values=_flagvalues.FLAGS, module_name=None, **args): DEFINE_flag(_flag.BooleanFlag(name, default, help, **args), flag_values, module_name)
Registers a boolean flag. Such a boolean flag does not take an argument. If a user wants to specify a false value explicitly, the long option beginning with 'no' must be used: i.e. --noflag This flag will have a value of None, True or False. None is possible if default=None and the user does not specify the flag on...
codesearchnet
def case(pred_fn_pairs, default=None, exclusive=False, name='smart_case'): return control_flow_ops._case_helper(cond, pred_fn_pairs, default, exclusive, name, allow_python_preds=True)
Like tf.case, except attempts to statically evaluate predicates. If any predicate in `pred_fn_pairs` is a bool or has a constant value, the associated callable will be called or omitted depending on its value. Otherwise this functions like tf.case. Args: pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor...
codesearchnet
def mesh_axis_to_tensor_axis(self, mesh_ndims): ta2ma = self._tensor_axis_to_mesh_axis return tuple( [ta2ma.index(mesh_axis) if mesh_axis in ta2ma else None for mesh_axis in xrange(mesh_ndims)])
For each mesh axis, which Tensor axis maps to it. Args: mesh_ndims: int. Returns: Tuple of optional integers, with length mesh_ndims.
juraj-google-style
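The inverse lookup above (for each mesh axis, which tensor axis maps to it) can be sketched in plain Python:

```python
def mesh_axis_to_tensor_axis(tensor_axis_to_mesh_axis, mesh_ndims):
    # For each mesh axis, find the tensor axis mapped to it, or None.
    ta2ma = tensor_axis_to_mesh_axis
    return tuple(ta2ma.index(m) if m in ta2ma else None for m in range(mesh_ndims))

# Tensor axis 0 maps to mesh axis 1; tensor axis 2 maps to mesh axis 0.
mapping = mesh_axis_to_tensor_axis((1, None, 0), mesh_ndims=3)
```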
def FromString(cls, range_string): disjuncts = None range_string = range_string.strip() if (len(range_string) == 0): raise ArgumentError('You must pass a finite string to SemanticVersionRange.FromString', range_string=range_string) if ((len(range_string) == 1) and (range_string[0] == '*')): ...
Parse a version range string into a SemanticVersionRange Currently, the only possible range strings are: ^X.Y.Z - matches all versions with the same leading nonzero digit greater than or equal the given range. * - matches everything =X.Y.Z - matches only the exact version given Args: range_string (string): A string ...
codesearchnet
def ValidateFeedStartAndExpirationDates(self, problems, first_date, last_date, first_date_origin, last_date_origin, today): warning_cutoff = today + datetime.timedelta(days=60) if last_date < warning_cutoff: problem...
Validate the start and expiration dates of the feed. Issue a warning if it only starts in the future, or if it expires within 60 days. Args: problems: The problem reporter object first_date: A date object representing the first day the feed is active last_date: A date object representing the last day the feed is activ...
juraj-google-style
def update_variant(self, variant_obj): LOG.debug('Updating variant %s', variant_obj.get('simple_id')) new_variant = self.variant_collection.find_one_and_replace( {'_id': variant_obj['_id']}, variant_obj, return_document=pymongo.ReturnDocument.AFTER )...
Update one variant document in the database. This means that the variant in the database will be replaced by variant_obj. Args: variant_obj(dict) Returns: new_variant(dict)
juraj-google-style
def available_credit(context): notes = commerce.CreditNote.unclaimed().filter(invoice__user=user_for_context(context)) ret = (notes.values('amount').aggregate(Sum('amount'))['amount__sum'] or 0) return (0 - ret)
Calculates the sum of unclaimed credit from this user's credit notes. Returns: Decimal: the sum of the values of unclaimed credit notes for the current user.
codesearchnet
def get_megatron_sharded_states(args, tp_size, pp_size, pp_rank): tp_state_dicts = [] for i in range(tp_size): sub_dir_name = f'mp_rank_{i:02d}' if pp_size == 1 else f'mp_rank_{i:02d}_{pp_rank:03d}' for checkpoint_name in ['model_optim_rng.pt', 'model_rng.pt']: checkpoint_path = os.p...
Get sharded checkpoints from NVIDIA Megatron-LM checkpoint based on the provided tensor parallel size, pipeline parallel size and pipeline parallel rank. Args: args (argparse.Namespace): the arguments to the script tp_size (int): the tensor parallel size pp_size (int): the pipeline parallel size pp_rank (int): the pip...
github-repos
def __getattr__(self, name): return lambda *args, **kwargs: self._Execute(name, *args, **kwargs)
Handles transparent proxying to gdb subprocess. This returns a lambda which, when called, sends an RPC request to gdb Args: name: The method to call within GdbService Returns: The result of the RPC.
juraj-google-style
def get_object(tree): if isinstance(tree, Tree): if tree.label() == 'DT' or tree.label() == 'POS': return '' words = [] for child in tree: words.append(get_object(child)) return ' '.join([_f for _f in words if _f]) else: return tree
Get the object in the tree object. Method should remove unnecessary letters and words:: the a/an 's Args: tree (Tree): Parsed tree structure Returns: Resulting string of tree ``(Ex: "red car")``
juraj-google-style
def _object_url(self, objtype, objid): return "{base_url}/api/{api_version}/{controller}/{obj_id}".format( base_url=self._base_url(), api_version=self.api_version, controller=self._controller_name(objtype), obj_id=objid )
Generate the URL for the specified object Args: objtype (str): The object's type objid (int): The objects ID Returns: A string containing the URL of the object
juraj-google-style
def htmlcolor_to_rgb(str_color): if not str_color.startswith('#'): raise ValueError("Bad html color format. Expected: '#RRGGBB'") result = [((1.0 * int(n, 16)) / 255) for n in (str_color[1:3], str_color[3:5], str_color[5:])] return result
function to convert an HTML-style color string to RGB values Args: str_color: Color in HTML format ('#RRGGBB') Returns: list of three RGB color components
codesearchnet
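A runnable sketch of the hex-pair conversion above; the exact validation and error message are assumptions:

```python
def htmlcolor_to_rgb(str_color):
    # Expect '#RRGGBB'; convert each hex pair to a float in [0, 1].
    if not str_color.startswith('#') or len(str_color) != 7:
        raise ValueError("Bad html color format. Expected: '#RRGGBB'")
    return [int(pair, 16) / 255 for pair in (str_color[1:3], str_color[3:5], str_color[5:7])]

rgb = htmlcolor_to_rgb('#ff8000')
```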
def evaluate_hourly_forecasts(self): score_columns = ['Run_Date', 'Forecast_Hour', 'Ensemble Name', 'Model_Name', 'Forecast_Variable', 'Neighbor_Radius', 'Smoothing_Radius', 'Size_Threshold', 'ROC', 'Reliability'] all_scores = pd.DataFrame(columns=score_columns) for (h, hour) in enumerate(range(self.start_h...
Calculates ROC curves and Reliability scores for each forecast hour. Returns: A pandas DataFrame containing forecast metadata as well as DistributedROC and Reliability objects.
codesearchnet
def refresh_state(self, id_or_uri, configuration, timeout=-1): uri = self._client.build_uri(id_or_uri) + self.REFRESH_STATE_PATH return self._client.update(resource=configuration, uri=uri, timeout=timeout)
Refreshes a drive enclosure. Args: id_or_uri: Can be either the resource ID or the resource URI. configuration: Configuration timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: dict: Drive Enclosure
juraj-google-style
def has_implicit_access_to_enrollment_api(user, obj): request = get_request_or_stub() decoded_jwt = get_decoded_jwt_from_request(request) return request_user_has_implicit_access_via_jwt(decoded_jwt, ENTERPRISE_ENROLLMENT_API_ADMIN_ROLE, obj)
Check whether the request user has implicit access to the `ENTERPRISE_ENROLLMENT_API_ADMIN_ROLE` feature role. Returns: boolean: whether the request user has access or not
codesearchnet
def __eq__(self, other): if super().__eq__(other) and \ (self._samples == other._samples).all(): return True return False
Two SamplePulses are the same if they are of the same type and have the same name and samples. Args: other (SamplePulse): the other SamplePulse Returns: bool: whether self and other are equal.
juraj-google-style
def ListAssets(logdir, plugin_name): plugin_dir = PluginDirectory(logdir, plugin_name) try: return [x.rstrip('/') for x in tf.io.gfile.listdir(plugin_dir)] except tf.errors.NotFoundError: return []
List all the assets that are available for given plugin in a logdir. Args: logdir: A directory that was created by a TensorFlow summary.FileWriter. plugin_name: A string name of a plugin to list assets for. Returns: A string list of available plugin assets. If the plugin subdirectory does not exist (either because th...
codesearchnet
def parse(file_contents, file_name): try: yaml.safe_load(file_contents) except Exception: _, exc_value, _ = sys.exc_info() return ("Cannot Parse: {file_name}: \n {exc_value}" .format(file_name=file_name, exc_value=exc_value))
This takes the contents of an expected yaml file and tries to parse it, reporting any parsing issues. Args: file_contents (str): Contents of a yml file file_name (str): Name of the file, used in the error message Returns: str: An error message if the file contents cannot be parsed and interpreted as yaml, otherwise None
juraj-google-style
def trace_flush(self): cmd = enums.JLinkTraceCommand.FLUSH res = self._dll.JLINKARM_TRACE_Control(cmd, 0) if (res == 1): raise errors.JLinkException('Failed to flush the trace buffer.') return None
Flushes the trace buffer. After this method is called, the trace buffer is empty. This method is best called when the device is reset. Args: self (JLink): the ``JLink`` instance. Returns: ``None``
juraj-google-style
def validate_file(fn, options=None): file_results = FileValidationResults(filepath=fn) output.info(('Performing JSON schema validation on %s' % fn)) if (not options): options = ValidationOptions(files=fn) try: with open(fn) as instance_file: file_results.object_results = vali...
Validate the input document `fn` according to the options passed in. If any exceptions are raised during validation, no further validation will take place. Args: fn: The filename of the JSON file to be validated. options: An instance of ``ValidationOptions``. Returns: An instance of FileValidationResults.
codesearchnet
def get_all_apps(): LOG.info('Retrieving list of all Spinnaker applications') url = '{}/applications'.format(API_URL) response = requests.get(url, verify=GATE_CA_BUNDLE, cert=GATE_CLIENT_CERT) assert response.ok, 'Could not retrieve application list' pipelines = response.json() LOG.debug('All Ap...
Get a list of all applications in Spinnaker. Returns: requests.models.Response: Response from Gate containing list of all apps.
codesearchnet
def parse(cls, args): try: (options, args) = cls.optparser.parse_args(args) if options.mode not in ["1", "2"]: raise ParseError("mode must be either '1' or '2'", cls.optparser.format_help()) if (options.dbtap_id is N...
Parse command line arguments to construct a dictionary of command parameters that can be used to create a command Args: `args`: sequence of arguments Returns: Dictionary that can be used in create method Raises: ParseError: when the arguments are not correct
juraj-google-style
async def await_rpc(self, address, rpc_id, *args, **kwargs): self.verify_calling_thread(True, 'await_rpc must be called from **inside** the event loop') if isinstance(rpc_id, RPCDeclaration): arg_format = rpc_id.arg_format resp_format = rpc_id.resp_format rpc_id = rpc_id.rpc_id else:...
Send an RPC from inside the EmulationLoop. This is the primary method by which tasks running inside the EmulationLoop dispatch RPCs. The RPC is added to the queue of waiting RPCs to be drained by the RPC dispatch task and this coroutine will block until it finishes. **This method must only be called from inside the ...
codesearchnet
def __init__(self, *args, pubdate=None, excerpt=None, tags=None, allow_comments=True, **kwargs): super().__init__(*args, **kwargs) self.excerpt = excerpt or _get_excerpt(self.body) self.pubdate = pubdate self.tags = tags or [] self.allow_comments = allow_comments
Constructor. Also see Entry.__init__. Args: pubdate (datetime): When the post was published. excerpt (str): An excerpt of the post body. tags (list): A list of Tag objects associated with the post. allow_comments (bool): Whether to allow comments. Default True.
juraj-google-style
def get_file(self, filename, scope='all'): filename = os.path.abspath(os.path.join(self.root, filename)) layouts = self._get_layouts_in_scope(scope) for ly in layouts: if (filename in ly.files): return ly.files[filename] return None
Returns the BIDSFile object with the specified path. Args: filename (str): The path of the file to retrieve. Must be either an absolute path, or relative to the root of this BIDSLayout. scope (str, list): Scope of the search space. If passed, only BIDSLayouts that match the specified scope will be searched. See BIDSLa...
codesearchnet
def uncheck(self, locator=None, allow_label_click=None, **kwargs): self._check_with_label( "checkbox", False, locator=locator, allow_label_click=allow_label_click, **kwargs)
Find a check box and uncheck it. The check box can be found via name, id, or label text. :: page.uncheck("German") Args: locator (str, optional): Which check box to uncheck. allow_label_click (bool, optional): Attempt to click the label to toggle state if element is non-visible. Defaults to :data:`capybara.automatic_...
juraj-google-style
def has_request(self, request): queue_item = QueueItem(request, Response(request.url)) key = queue_item.get_hash() for status in QueueItem.STATUSES: if key in self.__get_var("items_" + status).keys(): return True return False
Check if the given request already exists in the queue. Args: request (:class:`nyawc.http.Request`): The request to check. Returns: bool: True if already exists, False otherwise.
juraj-google-style
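The `has_request` snippet above keys each request by a hash and checks every per-status bucket. A minimal self-contained sketch of that idea (the `request_key` hashing scheme and the status names here are assumptions, not nyawc's actual implementation):

```python
import hashlib

def request_key(method, url):
    # Derive a stable lookup key from the request's significant fields.
    return hashlib.sha256(f'{method} {url}'.encode()).hexdigest()

class Queue:
    STATUSES = ('queued', 'in_progress', 'finished')

    def __init__(self):
        # One dict of items per status, mirroring the "items_<status>" vars.
        self.items = {status: {} for status in self.STATUSES}

    def add(self, method, url, status='queued'):
        self.items[status][request_key(method, url)] = (method, url)

    def has_request(self, method, url):
        # A request "exists" if its key appears under any status.
        key = request_key(method, url)
        return any(key in self.items[status] for status in self.STATUSES)
```

Hashing the request rather than storing full objects keeps duplicate detection O(1) per status bucket.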
def find_module_defining_flag(self, flagname, default=None): registered_flag = self._flags().get(flagname) if registered_flag is None: return default for module, flags in six.iteritems(self.flags_by_module_dict()): for flag in flags: if (flag.name == regis...
Return the name of the module defining this flag, or default. Args: flagname: str, name of the flag to lookup. default: Value to return if flagname is not defined. Defaults to None. Returns: The name of the module which registered the flag with this name. If no such module exists (i.e. no flag with this name exists),...
juraj-google-style
def delete(self, reference, option=None): write_pb = _helpers.pb_for_delete(reference._document_path, option) self._add_write_pbs([write_pb])
Add a "change" to delete a document. See :meth:`~.firestore_v1beta1.document.DocumentReference.delete` for more information on how ``option`` determines how the change is applied. Args: reference (~.firestore_v1beta1.document.DocumentReference): A document reference that will be deleted in this batch. option (Optiona...
codesearchnet
def get_output_info_dict(self, signature=None): return self._spec.get_output_info_dict(signature=signature, tags=self._tags)
Describes the outputs provided by a signature. Args: signature: A string with the signature to get outputs information for. If None, the default signature is used if defined. Returns: The result of ModuleSpec.get_output_info_dict() for the given signature, and the graph variant selected by `tags` when this Module was ...
juraj-google-style
def _update_state_from_shard_states(self, state, shard_states, control): (state.active_shards, state.aborted_shards, state.failed_shards) = (0, 0, 0) total_shards = 0 processed_counts = [] processed_status = [] state.counters_map.clear() for s in shard_states: total_shards += 1 s...
Update mapreduce state by examining shard states. Args: state: current mapreduce state as MapreduceState. shard_states: an iterator over shard states. control: model.MapreduceControl entity.
codesearchnet
def update(self, media_blob: genai_types.Blob): if self.generation_start_sec is not None and self.ttft_sec is None: self.time_audio_start = time.perf_counter() self.ttft_sec = self.time_audio_start - self.generation_start_sec self.audio_duration += audio_duration_sec(media_blob.data, RECEIVE_SAM...
Updates the generation request with the new media data. Args: media_blob: The new media data.
github-repos
def _group(self, group_data): if isinstance(group_data, dict): xid = group_data.get('xid') else: xid = group_data.xid if (self.groups.get(xid) is not None): group_data = self.groups.get(xid) elif (self.groups_shelf.get(xid) is not None): group_data = self.groups_shelf.get...
Return previously stored group or new group. Args: group_data (dict|obj): A Group dict or instance of a Group object. Returns: dict|obj: The new Group dict/object or the previously stored dict/object.
codesearchnet
def remove_line(self, section, line): try: s = self._get_section(section, create=False) except KeyError: return 0 return s.remove(line)
Remove all instances of a line. Returns: int: the number of lines removed
codesearchnet
def list_key_values(input: t.Dict[str, str]) -> None: for cmd, desc in input.items(): print(f'{cmd} => {desc}')
Display key-value pairs from a dictionary. Args: input (Dict[str, str]): The dictionary containing key-value pairs.
github-repos
def peek_step(self, val: ArrayValue, sn: 'DataNode') -> Tuple[(ObjectValue, 'DataNode')]: keys = self.parse_keys(sn) for en in val: flag = True try: for k in keys: if (en[k] != keys[k]): flag = False break except KeyErro...
Return the entry addressed by the receiver + its schema node. Args: val: Current value (array). sn: Current schema node.
codesearchnet
def _initialize_pvariables(self, pvariables: Dict[(str, PVariable)], ordering: List[str], initializer: Optional[InitializerList]=None) -> List[Tuple[(str, TensorFluent)]]: if (initializer is not None): init = dict() for ((name, args), value) in initializer: arity = (len(args) if (args is...
Instantiates `pvariables` given an initialization list and returns a list of TensorFluents in the given `ordering`. Returns: List[Tuple[str, TensorFluent]]: A list of pairs of fluent name and fluent tensor.
codesearchnet
def parse(self, body): if isinstance(body, six.string_types): body = json.loads(body) version = body['version'] self.version = version session = body['session'] self.session.new = session['new'] self.session.session_id = session['s...
Parse JSON request, storing content in object attributes. Args: body: str. HTTP request body. Returns: self
juraj-google-style
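The `parse` snippet above accepts either a JSON string or a decoded dict, then copies fields onto attributes and returns `self` for chaining. A pared-down sketch under assumed field names (`sessionId` is a guess at the truncated key, and this `Request` class is hypothetical):

```python
import json

class Request:
    def parse(self, body):
        # Accept a raw JSON string or an already-decoded dict.
        if isinstance(body, str):
            body = json.loads(body)
        self.version = body['version']
        session = body['session']
        self.new_session = session['new']
        self.session_id = session['sessionId']
        return self  # returning self allows Request().parse(...) chaining

req = Request().parse('{"version": "1.0", "session": {"new": true, "sessionId": "abc"}}')
```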
def exists(self, pattern, **match_kwargs): ret = self.match(pattern, **match_kwargs) if (ret is None): return None if (not ret.matched): return None return ret
Check if the image exists on the screen. Returns: FindPoint if the image exists, or None if there is no match or result.confidence < self.image_match_threshold
codesearchnet
def register(name): if not isinstance(name, str): raise TypeError('Expected `name` to be a string; got %r' % (name,)) if not _REGISTERED_NAME_RE.match(name): raise ValueError("Registered name must have the form '{project_name}.{type_name}' (e.g. 'my_project.MyTypeSpec'); got %r." % name) de...
Decorator used to register a globally unique name for a TypeSpec subclass. Args: name: The name of the type spec. Must be globally unique. Must have the form `"{project_name}.{type_name}"`. E.g. `"my_project.MyTypeSpec"`. Returns: A class decorator that registers the decorated class with the given name.
github-repos
def reload_config(self, dockercfg_path=None): self._auth_configs = auth.load_config( dockercfg_path, credstore_env=self.credstore_env )
Force a reload of the auth configuration Args: dockercfg_path (str): Use a custom path for the Docker config file (default ``$HOME/.docker/config.json`` if present, otherwise ``$HOME/.dockercfg``) Returns: None
juraj-google-style
def load_ner_model(lang='en', version='2'): src_dir = 'ner{}'.format(version) p = locate_resource(src_dir, lang) fh = _open(p) try: return pickle.load(fh) except UnicodeDecodeError: fh.seek(0) return pickle.load(fh, encoding='latin1')
Return named entity extractor parameters for `lang` and of version `version`. Args: lang (string): language code. version (string): version of the parameters to be used.
codesearchnet
def group_modes(modes): if len(modes) > 0: previous = modes[0] grouped = [] for changep in modes[1:]: if changep['label'] != previous['label']: previous['to'] = changep['from'] grouped.append(previous) previous = changep ...
Groups consecutive transportation modes with the same label into one. Args: modes (:obj:`list` of :obj:`dict`) Returns: :obj:`list` of :obj:`dict`
juraj-google-style
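The `group_modes` snippet above is truncated; a complete standalone sketch of the same collapsing logic follows. The dict schema (`label`, `from`, `to`) is taken from the visible code; how the final group is closed is an assumption:

```python
def group_modes(modes):
    # Collapse runs of consecutive segments that share a label into one
    # segment spanning from the run's first 'from' to its last 'to'.
    if not modes:
        return modes
    grouped = []
    previous = dict(modes[0])
    for changep in modes[1:]:
        if changep['label'] != previous['label']:
            # A new label starts here; the previous group ends where it begins.
            previous['to'] = changep['from']
            grouped.append(previous)
            previous = dict(changep)
    previous['to'] = modes[-1]['to']
    grouped.append(previous)
    return grouped
```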
def _astimezone_ts(self, timezone): if (self.created.tzinfo is timezone): return self else: nw_obj = Timestamps(((None,) * 4)) nw_obj.created = self.created.astimezone(timezone) nw_obj.changed = self.changed.astimezone(timezone) nw_obj.mft_changed = self.mft_changed.astim...
Changes the time zones of all timestamps. Receives a new timezone and applies to all timestamps, if necessary. Args: timezone (:obj:`tzinfo`): Time zone to be applied Returns: A new ``Timestamps`` object if the time zone changes, otherwise returns ``self``.
codesearchnet
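The `_astimezone_ts` snippet converts timestamps only when the current tzinfo differs from the target, otherwise returning the object unchanged. The core pattern, reduced to a single stdlib-only helper (the `to_zone` name is illustrative):

```python
from datetime import datetime, timezone, timedelta

def to_zone(ts, tz):
    # Skip the conversion when the timestamp is already in the target zone.
    if ts.tzinfo is tz:
        return ts
    return ts.astimezone(tz)

utc_ts = datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc)
plus_two = timezone(timedelta(hours=2))
local_ts = to_zone(utc_ts, plus_two)  # same instant, wall clock shifted +2h
```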
def last_updated(self, path): raise NotImplementedError
Get UNIX Epoch time in seconds on the FileSystem. Args: path: string path of file. Returns: float UNIX Epoch time Raises: ``BeamIOError``: if path doesn't exist.
github-repos
def get_metadata(self, handle): response = self.open_url(url=handle, suffix='.metadata') try: return json.load(response) finally: response.close()
Returns the associated metadata info for the given handle, the metadata file must exist (``handle + '.metadata'``). If the given handle has an ``.xz`` extension, it will get removed when calculating the handle metadata path Args: handle (str): Path to the template to get the metadata from Returns: dict: Metadata for ...
juraj-google-style
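The metadata lookup above derives the sidecar path from the handle: append `.metadata`, first stripping a trailing `.xz` if present. That path rule, isolated as a hypothetical helper (the name `metadata_path` is not from the library):

```python
def metadata_path(handle):
    # An '.xz' extension on the handle is dropped before the suffix is added.
    if handle.endswith('.xz'):
        handle = handle[:-len('.xz')]
    return handle + '.metadata'
```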
def merge_dictionaries(dicts, merge_lists=False): dict1 = dicts[0] for other_dict in dicts[1:]: merge_two_dictionaries(dict1, other_dict, merge_lists=merge_lists) return dict1
Merges all dictionaries in dicts into a single dictionary and returns result Args: dicts (List[DictUpperBound]): Dictionaries to merge into the first one in the list merge_lists (bool): Whether to merge lists (True) or replace lists (False). Default is False. Returns: DictUpperBound: Merged dictionary
juraj-google-style
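`merge_dictionaries` folds every dict into the first, delegating pairwise work to `merge_two_dictionaries`, which is not shown. A sketch of the documented behavior, assuming nested dicts merge recursively and lists concatenate only when `merge_lists=True` (otherwise later values replace earlier ones):

```python
def merge_two(d1, d2, merge_lists=False):
    # Merge d2 into d1 in place; d1 wins structure, d2 wins scalar conflicts.
    for key, value in d2.items():
        if key in d1 and isinstance(d1[key], dict) and isinstance(value, dict):
            merge_two(d1[key], value, merge_lists=merge_lists)
        elif key in d1 and isinstance(d1[key], list) and isinstance(value, list) and merge_lists:
            d1[key] = d1[key] + value
        else:
            d1[key] = value
    return d1

def merge_dicts(dicts, merge_lists=False):
    # Fold every later dict into the first, as in the snippet above.
    first = dicts[0]
    for other in dicts[1:]:
        merge_two(first, other, merge_lists=merge_lists)
    return first
```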
def appendDirectory(self, directory, projectFilePath): lines = [] with open(projectFilePath, 'r') as original: for l in original: lines.append(l) with open(projectFilePath, 'w') as new: for line in lines: card = {} try: card = self._extract...
Append directory to relative paths in project file. By default, the project file paths are read and written as relative paths. Use this method to prepend a directory to all the paths in the project file. Args: directory (str): Directory path to prepend to file paths in project file. projectFilePath (str): Path to proj...
codesearchnet
def _process_kwargs_parameters(sig, func, parent_class, model_name_lowercase, documented_kwargs, indent_level, undocumented_parameters): docstring = '' source_args_dict = source_args_doc(ImageProcessorArgs) unroll_kwargs = func.__name__ in UNROLL_KWARGS_METHODS if not unroll_kwargs and parent_class is n...
Process **kwargs parameters if needed. Args: sig (`inspect.Signature`): Function signature func (`function`): Function the parameters belong to parent_class (`class`): Parent class of the function model_name_lowercase (`str`): Lowercase model name documented_kwargs (`dict`): Dictionary of kwargs that are already docum...
github-repos
def perform_extract_job(self, destination, job_id, table_reference, destination_format, project=None, include_header=True, compression=ExportCompression.NONE, use_avro_logical_types=False, job_labels=None): job_project = project or table_reference.projectId job_reference = bigquery.JobReference(jobId=job_id, pr...
Starts a job to export data from BigQuery. Returns: bigquery.JobReference with the information about the job that was started.
github-repos
def decode(self, codes): assert codes.ndim == 2 N, M = codes.shape assert M == self.M assert codes.dtype == self.code_dtype vecs = np.empty((N, self.Ds * self.M), dtype=np.float32) for m in range(self.M): vecs[:, m * self.Ds : (m+1) * self.Ds] = self...
Given PQ-codes, reconstruct original D-dimensional vectors approximately by fetching the codewords. Args: codes (np.ndarray): PQ-codes with shape=(N, M) and dtype=self.code_dtype. Each row is a PQ-code Returns: np.ndarray: Reconstructed vectors with shape=(N, D) and dtype=np.float32
juraj-google-style
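The `decode` method above is truncated where it indexes the codebooks. A free-function sketch of the same reconstruction, with the codebook tensor passed explicitly (the `(M, Ks, Ds)` layout and the name `pq_decode` are assumptions, not the library's API):

```python
import numpy as np

def pq_decode(codes, codewords):
    # codes: (N, M) integer sub-codes; codewords: (M, Ks, Ds) centroids.
    # Each sub-code selects one Ds-dim centroid from its sub-codebook, and
    # the M selections are concatenated into an approximate D = M*Ds vector.
    N, M = codes.shape
    _, _, Ds = codewords.shape
    vecs = np.empty((N, M * Ds), dtype=np.float32)
    for m in range(M):
        vecs[:, m * Ds:(m + 1) * Ds] = codewords[m][codes[:, m]]
    return vecs
```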
def set_size(self, height=220, width=350, height_threshold=120, width_threshold=160): self.set_integer("height", height) self.set_integer("width", width) self.set_integer("small_height_threshold", height_threshold) self.set_integer("small_width_...
Set the size of the chart. Args: height (int): height in pixels. width (int): width in pixels. height_threshold (int): height threshold in pixels width_threshold (int): width threshold in pixels
juraj-google-style
def window_design(self, window_length, beta): self.window = np.kaiser(window_length, beta) return self.window
Kaiser window design Args: window_length: Length of the window in number of samples beta: Beta value for Kaiser window design Returns: window: Window designed using the beta and length provided as inputs
juraj-google-style
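`window_design` is a thin wrapper over `np.kaiser(window_length, beta)`. A quick illustration of the window's shape (the length and beta values here are illustrative; for an odd-length window the center sample is exactly 1):

```python
import numpy as np

# beta controls sidelobe suppression: 0 gives a rectangular window,
# larger values (e.g. ~14) trade main-lobe width for lower sidelobes.
window = np.kaiser(51, 14.0)
```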
def __init__(self, model_data, image, env=None): self.model_data = model_data self.image = image self.env = env
Create a definition of a model which can be part of an Inference Pipeline Args: model_data (str): The S3 location of a SageMaker model data ``.tar.gz`` file. image (str): A Docker image URI. env (dict[str, str]): Environment variables to run with ``image`` when hosted in SageMaker (default: None).
juraj-google-style
def load_hdf5(path): with h5py.File(path, 'r') as f: is_sparse = f['issparse'][...] if is_sparse: shape = tuple(f['shape'][...]) data = f['data'][...] indices = f['indices'][...] indptr = f['indptr'][...] X = sparse.csr_matrix((data, ...
Load data from a HDF5 file. Args: path (str): A path to the HDF5 format file containing data. dense (boolean): An optional variable indicating if the return matrix should be dense. By default, it is false. Returns: Data matrix X and target vector y
juraj-google-style
def get_version(self, id=None, endpoint=None): return self._call_endpoint(GET_VERSION, id=id, endpoint=endpoint)
Get the current version of the endpoint. Note: Not all endpoints currently implement this method Args: id: (int, optional) id to use for response tracking endpoint: (RPCEndpoint, optional) endpoint to specify to use Returns: json object of the result or the error encountered in the RPC call
juraj-google-style
def get_dimension(self, key, value, **kwargs): return self._get_object_by_name(self._DIMENSION_ENDPOINT_SUFFIX, '{0}/{1}'.format(key, value), **kwargs)
get a dimension by key and value Args: key (string): key of the dimension value (string): value of the dimension Returns: dictionary of response
codesearchnet
def set_checkbox_value(w, value): save = w.blockSignals(True) try: w.setChecked(bool(value)) finally: w.blockSignals(save)
Sets a checkbox's "checked" property + signal blocking + value tolerance Args: w: QCheckBox instance value: something that can be converted to a bool
juraj-google-style