Dataset columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes)
def _embedding_dim(vocab_size):
    if not vocab_size or vocab_size <= 0:
        raise ValueError('Invalid vocab_size %g.' % vocab_size)
    return int(round(6.0 * math.sqrt(math.sqrt(vocab_size))))
Calculate a reasonable embedding size for a vocabulary. Rule of thumb is 6 * 4th root of vocab_size. Args: vocab_size: Size of the input vocabulary. Returns: The embedding size to use. Raises: ValueError: if `vocab_size` is invalid.
codesearchnet
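A quick sanity check of the rule of thumb above (usage sketch; assumes `math` is imported alongside the function):

import math

# 6 * vocab_size ** 0.25, rounded to an int:
print(_embedding_dim(16))     # 6 * 2  -> 12
print(_embedding_dim(10000))  # 6 * 10 -> 60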
def mlir_sparsify(input_data_str):
    return wrap_converter.wrapped_experimental_mlir_sparsify(input_data_str)
Sparsify `input_data_str` to encode sparse tensor with proper format. Args: input_data_str: Input data in serialized form (e.g. a TFLITE model). Returns: Sparsified model in serialized form (e.g. a TFLITE model).
github-repos
def _eager_run_fn(fn: PartFn, part: _T) -> AsyncIterable[_T]:
    q = asyncio.Queue[_T | _FinishedT]()

    async def call_fn():
        async for c in fn(part):
            q.put_nowait(c)
        q.put_nowait(_Finished)

    context.create_task(call_fn())

    async def result_iter():
        while (c := (await q.get())) is not _Finished:
            yield c

    return result_iter()
Executes fn on part in an asyncio task. Must be called in an async context. It eagerly schedules a task on the event loop to execute the whole of `fn` on the part. Results from the AsyncIterable returned by `fn` can be retrieved via the AsyncIterable returned by this method. Args: fn: the part function to execute on the part. part: the part to execute the function on. Returns: An AsyncIterable that can be used to retrieve the results of `fn` on `part` in order. NOTE: this method is non-blocking.
github-repos
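The queue-and-sentinel pattern above can be exercised standalone. In this sketch, plain `asyncio.create_task` stands in for `context.create_task`, and a local object stands in for the `_Finished` sentinel (both stand-ins are assumptions):

import asyncio

_FINISHED = object()  # local stand-in for the _Finished sentinel

async def triple(x):
    # A toy part function: an async generator yielding three results.
    for i in range(1, 4):
        yield x * i

async def main():
    q = asyncio.Queue()

    async def producer():
        async for item in triple(10):
            q.put_nowait(item)
        q.put_nowait(_FINISHED)

    asyncio.create_task(producer())  # eagerly schedule the whole run
    while (item := await q.get()) is not _FINISHED:
        print(item)  # 10, 20, 30

asyncio.run(main())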
def trading_dates(start, end, calendar='US'):
    kw = dict(
        start=pd.Timestamp(start, tz='UTC').date(),
        end=pd.Timestamp(end, tz='UTC').date(),
    )
    us_cal = getattr(sys.modules[__name__], f'{calendar}TradingCalendar')()
    return pd.bdate_range(**kw).drop(us_cal.holidays(**kw))
Trading dates for given exchange Args: start: start date end: end date calendar: exchange as string Returns: pd.DatetimeIndex: datetime index Examples: >>> bus_dates = ['2018-12-24', '2018-12-26', '2018-12-27'] >>> trd_dates = trading_dates(start='2018-12-23', end='2018-12-27') >>> assert len(trd_dates) == len(bus_dates) >>> assert pd.Series(trd_dates == pd.DatetimeIndex(bus_dates)).all()
juraj-google-style
def unbroadcast_numpy_to(array, shape):
    axis = create_unbroadcast_axis(shape, numpy.shape(array))
    return numpy.reshape(numpy.sum(array, axis=axis), shape)
Reverse the broadcasting operation. Args: array: An array. shape: A shape that could have been broadcasted to the shape of array. Returns: Array with dimensions summed to match `shape`.
juraj-google-style
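`create_unbroadcast_axis` lives elsewhere in the source module; below is a plausible implementation plus a round-trip check (the helper's exact behavior is an assumption made for illustration):

import numpy

def create_unbroadcast_axis(shape, broadcast_shape):
    # Hypothetical helper: axes of `broadcast_shape` that broadcasting
    # added (leading axes) or expanded from size 1.
    offset = len(broadcast_shape) - len(shape)
    return tuple(
        i for i in range(len(broadcast_shape))
        if i < offset or (shape[i - offset] == 1 and broadcast_shape[i] > 1))

grad = numpy.ones((3, 4))                  # e.g. gradient of a broadcasted (1, 4) array
print(unbroadcast_numpy_to(grad, (1, 4)))  # [[3. 3. 3. 3.]]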
def post(fqdn, package, result, entry, bound, ekey, *argl, **argd):
    global _atdepth_call, _cstack_call
    _cstack_call.pop()
    if len(_cstack_call) == 0:
        _atdepth_call = False
    r = _post_call(_atdepth_call, package, fqdn, result, entry, bound, ekey, argl, argd)
    return r
Adds logging for the post-call result of calling the method externally. Args: fqdn (str): fully-qualified domain name of the function being logged. package (str): name of the package we are logging for. Usually the first element of `fqdn.split('.')`. result: returned from calling the method we are logging. entry (dict): one of the values returned by :func:`pre`. bound (bool): true if the method is bound. ekey (str): key under which to store the entry in the database.
codesearchnet
def earliest_date(dates, full_date=False):
    min_date = min(PartialDate.loads(date) for date in dates)
    if not min_date.month and full_date:
        min_date.month = 1
    if not min_date.day and full_date:
        min_date.day = 1
    return min_date.dumps()
Return the earliest among the schema-compliant dates. This is a convenience wrapper around :ref:`PartialDate`, which should be used instead if more features are needed. Args: dates(list): List of dates from which oldest/earliest one will be returned full_date(bool): Adds month and/or day as "01" if they are missing Returns: str: Earliest date from provided list
juraj-google-style
def get_pattern_step_time(self, patternnumber, stepnumber):
    _checkPatternNumber(patternnumber)
    _checkStepNumber(stepnumber)
    address = _calculateRegisterAddress('time', patternnumber, stepnumber)
    return self.read_register(address, 0)
Get the step time. Args: * patternnumber (integer): 0-7 * stepnumber (integer): 0-7 Returns: The step time (int).
juraj-google-style
def assemble(cls, header_json, metadata_json, content_json):
    try:
        header = json_decode(header_json)
    except ValueError:
        raise MessageError("header could not be decoded")
    try:
        metadata = json_decode(metadata_json)
    except ValueError:
        raise MessageError("metadata could not be decoded")
    try:
        content = json_decode(content_json)
    except ValueError:
        raise MessageError("content could not be decoded")
    msg = cls(header, metadata, content)
    msg._header_json = header_json
    msg._metadata_json = metadata_json
    msg._content_json = content_json
    return msg
Creates a new message, assembled from JSON fragments. Args: header_json (``JSON``) : metadata_json (``JSON``) : content_json (``JSON``) : Returns: Message subclass Raises: MessageError
juraj-google-style
def _remove_squeezable_dimensions(labels, predictions, weights=None, expected_rank_diff=0):
    labels, predictions = confusion_matrix.remove_squeezable_dimensions(
        labels, predictions, expected_rank_diff=expected_rank_diff)
    if weights is not None:
        weights = ops.convert_to_tensor(weights)
        labels_rank = labels.get_shape().ndims
        weights_shape = weights.get_shape()
        weights_rank = weights_shape.ndims
        if labels_rank is not None and weights_rank is not None:
            # Use static rank.
            rank_diff = weights_rank - labels_rank
            if rank_diff == 1:
                weights = array_ops.squeeze(weights, [-1])
            return (labels, predictions, weights)
        # Use dynamic rank.
        rank_diff = array_ops.rank(weights) - array_ops.rank(labels)
        if weights_rank is None or (weights_rank > 0 and weights_shape.dims[-1].is_compatible_with(1)):
            weights = cond.cond(
                math_ops.equal(1, rank_diff),
                lambda: array_ops.squeeze(weights, [-1]),
                lambda: weights)
    return (labels, predictions, weights)
Internal version of _remove_squeezable_dimensions which handles weights. Squeezes `predictions` and `labels` if their ranks differ from expected by exactly 1. Squeezes `weights` if its rank is 1 more than the new rank of `predictions`. This will use static shape if available. Otherwise, it will add graph operations, which could result in a performance hit. Args: labels: Label values, a `Tensor` whose dimensions match `predictions`. predictions: Predicted values, a `Tensor` of arbitrary dimensions. weights: Optional weight `Tensor`. It will be squeezed if it's not scalar, and its rank is 1 more than the new rank of `labels`. expected_rank_diff: Expected result of `rank(predictions) - rank(labels)`. Returns: Tuple of `predictions`, `labels` and `weights`, possibly with the last dimension squeezed.
github-repos
def fleet_id_to_slug(did):
    try:
        fleet_slug = IOTileFleetSlug(did)
    except ValueError:
        raise ArgumentError("Unable to recognize {} as a fleet id".format(did))
    return str(fleet_slug)
Converts a fleet id into a correct fleet slug. Args: did (long) : A fleet id did (string) : A fleet slug in the form of XXXX, XXXX-XXXX-XXXX, g--XXXX, g--XXXX-XXXX-XXXX Returns: str: The fleet slug in the g--XXXX-XXXX-XXXX format Raises: ArgumentError: if the ID is not in the [1, 16**12] range, or if not a valid string
juraj-google-style
def partitions_for_topic(self, topic):
    if topic not in self._partitions:
        return None
    return set(self._partitions[topic].keys())
Return set of all partitions for topic (whether available or not) Arguments: topic (str): topic to check for partitions Returns: set: {partition (int), ...}
juraj-google-style
def _get_fields(mcs, bases, namespace):
    fields = [
        (name, namespace.pop(name))
        for (name, attribute) in list(namespace.items())
        if isinstance(attribute, BaseField)
    ]
    for base in reversed(bases):
        if hasattr(base, mcs._fields_storage_key):
            fields = list(getattr(base, mcs._fields_storage_key).items()) + fields
    return OrderedDict(fields)
Create fields dictionary to be used in resource class namespace. Pop all field objects from attributes dict (namespace) and store them under the `_fields_storage_key` attribute. Also collect all fields from base classes in an order that ensures fields can be overridden. Args: bases: all base classes of created serializer class namespace (dict): namespace as dictionary of attributes
codesearchnet
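A self-contained miniature of the same declarative-fields pattern; `BaseField`, `ResourceMeta`, and the storage key below are illustrative stand-ins for the library's actual classes, not its API:

from collections import OrderedDict

class BaseField:
    pass

class ResourceMeta(type):
    _fields_storage_key = '_declared_fields'

    def __new__(mcs, name, bases, namespace):
        # Pop declared fields out of the namespace, then prepend inherited ones.
        fields = [(k, namespace.pop(k)) for k, v in list(namespace.items())
                  if isinstance(v, BaseField)]
        for base in reversed(bases):
            if hasattr(base, mcs._fields_storage_key):
                fields = list(getattr(base, mcs._fields_storage_key).items()) + fields
        namespace[mcs._fields_storage_key] = OrderedDict(fields)
        return super().__new__(mcs, name, bases, namespace)

class Resource(metaclass=ResourceMeta):
    name = BaseField()

class Child(Resource):
    age = BaseField()

print(list(Child._declared_fields))  # ['name', 'age'] -- base fields come first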
def _load_chunk(dat_path, cat_path, info_path):
    dat_array = read_binary_matrix(dat_path)
    dat_array = np.expand_dims(dat_array, -1)
    cat_array = read_binary_matrix(cat_path)
    info_array = read_binary_matrix(info_path)
    info_array = np.copy(info_array)
    info_array[:, 2] = info_array[:, 2] / 2
    return (dat_array, cat_array, info_array)
Loads a data chunk as specified by the paths. Args: dat_path: Path to dat file of the chunk. cat_path: Path to cat file of the chunk. info_path: Path to info file of the chunk. Returns: Tuple with the dat, cat, info_arrays.
codesearchnet
def plogdet(K):
    egvals = eigvalsh(K)
    return npsum(log(egvals[egvals > epsilon]))
r"""Log of the pseudo-determinant. It assumes that ``K`` is a positive semi-definite matrix. Args: K (array_like): matrix. Returns: float: log of the pseudo-determinant.
juraj-google-style
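A quick numeric check with plain NumPy; `epsilon` is a module-level tolerance in the original, assumed here to be a small cutoff:

import numpy as np
from numpy.linalg import eigvalsh
from numpy import log, sum as npsum

epsilon = 1e-12  # assumed cutoff; the source module defines its own
K = np.array([[2.0, 0.0], [0.0, 0.0]])  # rank-1 PSD matrix
egvals = eigvalsh(K)
print(npsum(log(egvals[egvals > epsilon])))  # log(2) ~= 0.693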
def make_fake_movie(nframes, mask_shape=(64, 64), mask_center=None,
                    bg_intensity=0.1, mask_sigma=10, dt=0.02, rate=1.0,
                    tau=1.0, sigma=0.001, seed=None):
    gen = np.random.RandomState(seed)
    n = gen.poisson(rate * dt, size=nframes)
    gamma = np.exp(-dt / tau)
    c = signal.lfilter(np.r_[1], np.r_[1, -gamma], n, axis=0)
    nr, nc = mask_shape
    npix = nr * nc
    if mask_center is None:
        # NOTE: the original default was lost in extraction; the frame
        # centre is assumed here.
        mask_center = (nc // 2, nr // 2)
    a, b = mask_center
    y, x = np.ogrid[:nr, :nc]
    xs = (x - a) ** 2.0
    ys = (y - b) ** 2.0
    twoss = 2.0 * (mask_sigma ** 2.0)
    alpha = np.exp(-1 * ((xs / twoss) + (ys / twoss))).ravel()
    alpha /= alpha.sum()
    beta = gen.randn(npix) * bg_intensity
    lamb = rate
    epsilon = gen.randn(npix, nframes) * sigma
    F = (c[None, :] * alpha[:, None]) + beta[:, None] + epsilon
    theta = (sigma, alpha, beta, lamb, gamma)
    return (F, c, n, theta)
Generate 2D fake fluorescence movie Arguments: --------------------------------------------------------------------------- nframes: number of timebins to simulate mask_shape: tuple (nrows, ncols), shape of a single movie frame mask_center: tuple (x, y), pixel coords of cell center bg_intensity: scalar, amplitude of (static) baseline fluorescence mask_sigma: scalar, standard deviation of Gaussian mask dt: timestep (s) rate: mean spike rate (Hz) tau: time constant of decay in calcium concentration (s) sigma: SD of additive noise on fluorescence seed: Seed for RNG Returns: --------------------------------------------------------------------------- F: fluorescence [npixels, nframes] c: calcium concentration [nframes,] n: spike train [nframes,] theta: tuple of true model parameters: (sigma, alpha, beta, lambda, gamma)
codesearchnet
def single_slice_dim(self, shape):
    if not isinstance(shape, (tuple, list)):
        raise TypeError('`shape` must be a sequence (like tuple or list) instead of ' + type(shape).__name__)
    if len(shape) != len(self.full_shape):
        raise ValueError('Expected equal length, but received shape={} of length {} while self.full_shape={} is of length {}.'.format(shape, len(shape), self.full_shape, len(self.full_shape)))
    for i in range(len(shape)):
        if self.var_offset[i] + shape[i] > self.full_shape[i]:
            raise ValueError('With self.var_offset={}, a partition of shape={} would exceed self.full_shape={} in dimension {}.'.format(self.var_offset, shape, self.full_shape, i))
    slice_dim = None
    for i in range(len(shape)):
        if shape[i] == self.full_shape[i]:
            continue
        if slice_dim is not None:
            raise ValueError('Cannot use single_slice_dim() with shape={} and self.full_shape={} since slice dim could be either dimension {} or {}.'.format(shape, self.full_shape, i, slice_dim))
        slice_dim = i
    return slice_dim
Returns the slice dim when the variable is partitioned only in one dim. Args: shape: Tuple or list of `int` indicating the shape of one specific variable partition. Returns: `int` representing the dimension that the variable is partitioned in, or `None` if the variable doesn't seem to be partitioned at all. Raises: TypeError: If `shape` is not a sequence. ValueError: If `shape` is not the same length as `self.full_shape`. If the variable is partitioned in more than one dimension.
github-repos
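The logic can be exercised with a minimal stand-in object (a hypothetical harness; the real class carries `full_shape` and `var_offset` as attributes):

class _FakePartition:
    full_shape = (10, 4)
    var_offset = (0, 0)

# Called as a plain function for the demo:
print(single_slice_dim(_FakePartition, (5, 4)))   # 0 (split along dim 0)
print(single_slice_dim(_FakePartition, (10, 4)))  # None (not partitioned)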
def __setitem__(self, key: Union[str, int], value: Any) -> None:
    if not hasattr(self, '_sym_parent'):
        return
    if base.treats_as_sealed(self):
        raise base.WritePermissionError(self._error_message('Cannot modify field of a sealed Dict.'))
    if not base.writtable_via_accessors(self):
        raise base.WritePermissionError(self._error_message("Cannot modify Dict field by attribute or key while accessor_writable is set to False. Use 'rebind' method instead."))
    update = self._set_item_without_permission_check(key, value)
    if flags.is_change_notification_enabled() and update:
        self._notify_field_updates([update])
Set item in this Dict. Args: key: String key. (Please be noted that key path is not supported.) value: Value to be inserted. Raises: WritePermissionError: when Dict cannot be modified by accessor or is sealed. KeyError: Key is not allowed according to the value spec. ValueError: Value is not acceptable according to the value spec.
github-repos
def put(self, dash_id=0):
    data = request.get_json()
    updated = self._update_dash(dash_id, data)
    return build_response(dict(data=updated, code=200))
Update a dash meta and content, return updated dash content. Args: dash_id: dashboard id. Returns: A dict containing the updated content of that dashboard, not including the meta info.
codesearchnet
def get_student_current_grades(self, username, course_ids=None):
    if course_ids is None:
        enrollments_client = CourseEnrollments(self.requester, self.base_url)
        enrollments = enrollments_client.get_student_enrollments()
        course_ids = list(enrollments.get_enrolled_course_ids())
    all_current_grades = []
    for course_id in course_ids:
        try:
            all_current_grades.append(self.get_student_current_grade(username, course_id))
        except HTTPError as error:
            if error.response.status_code >= 500:
                raise
    return CurrentGradesByUser(all_current_grades)
Returns a CurrentGradesByUser object with the user current grades. Args: username (str): an edx user's username course_ids (list): a list of edX course ids. Returns: CurrentGradesByUser: object representing the student current grades
codesearchnet
def _construct_location_to_filter_list(match_query):
    location_to_filters = {}
    for match_traversal in match_query.match_traversals:
        for match_step in match_traversal:
            current_filter = match_step.where_block
            if current_filter is not None:
                current_location = match_step.as_block.location
                location_to_filters.setdefault(current_location, []).append(current_filter)
    return location_to_filters
Return a dict mapping location -> list of filters applied at that location. Args: match_query: MatchQuery object from which to extract location -> filters dict Returns: dict mapping each location in match_query to a list of Filter objects applied at that location
codesearchnet
def remove_all_servers(self):
    for server_id in list(self._servers.keys()):
        self.remove_server(server_id)
Remove all registered WBEM servers from the subscription manager. This also unregisters listeners from these servers and removes all owned indication subscriptions, owned indication filters, and owned listener destinations. Raises: Exceptions raised by :class:`~pywbem.WBEMConnection`.
codesearchnet
def humanize_time_delta(sec):
    if sec < 0:
        logger.warn('humanize_time_delta() obtains negative seconds!')
        return '{:.3g} seconds'.format(sec)
    if sec == 0:
        return '0 second'
    time = datetime(2000, 1, 1) + timedelta(seconds=int(sec))
    units = ['day', 'hour', 'minute', 'second']
    # NOTE: the days term was lost in extraction; `sec // 86400` is assumed.
    vals = [int(sec // 86400), time.hour, time.minute, time.second]
    if sec < 60:
        vals[-1] = sec

    def _format(v, u):
        return '{:.3g} {}{}'.format(v, u, 's' if v > 1 else '')

    ans = []
    for v, u in zip(vals, units):
        if v > 0:
            ans.append(_format(v, u))
    return ' '.join(ans)
Humanize timedelta given in seconds Args: sec (float): time difference in seconds. Must be positive. Returns: str - time difference as a readable string Example: .. code-block:: python print(humanize_time_delta(1)) # 1 second print(humanize_time_delta(60 + 1)) # 1 minute 1 second print(humanize_time_delta(87.6)) # 1 minute 27 seconds print(humanize_time_delta(0.01)) # 0.01 seconds print(humanize_time_delta(60 * 60 + 1)) # 1 hour 1 second print(humanize_time_delta(60 * 60 * 24 + 1)) # 1 day 1 second print(humanize_time_delta(60 * 60 * 24 + 60 * 2 + 60*60*9 + 3)) # 1 day 9 hours 2 minutes 3 seconds
codesearchnet
def delete_nsg_rule(access_token, subscription_id, resource_group, nsg_name, nsg_rule_name):
    endpoint = ''.join([
        get_rm_endpoint(),
        '/subscriptions/', subscription_id,
        '/resourceGroups/', resource_group,
        '/providers/Microsoft.Network/networkSecurityGroups/', nsg_name,
        '/securityRules/', nsg_rule_name,
        '?api-version=', NETWORK_API])
    return do_delete(endpoint, access_token)
Delete network security group rule. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. nsg_name (str): Name of the Network Security Group. nsg_rule_name (str): Name of the NSG rule. Returns: HTTP response.
juraj-google-style
def help_members(obj, use_other=False):
    import utool as ut
    attrnames = dir(obj)
    attr_list = [getattr(obj, attrname) for attrname in attrnames]
    attr_types = ut.lmap(ut.type_str, map(type, attr_list))
    unique_types, groupxs = ut.group_indices(attr_types)
    type_to_items = ut.dzip(unique_types, ut.apply_grouping(attr_list, groupxs))
    type_to_itemname = ut.dzip(unique_types, ut.apply_grouping(attrnames, groupxs))
    memtypes = ['instancemethod']
    func_mems = ut.dict_subset(type_to_items, memtypes, [])
    func_list = ut.flatten(func_mems.values())
    defsig_list = []
    num_unbound_args_list = []
    num_args_list = []
    for func in func_list:
        argspec = ut.get_func_argspec(func)
        args = argspec.args
        unbound_args = get_unbound_args(argspec)
        defsig = ut.func_defsig(func)
        defsig_list.append(defsig)
        num_unbound_args_list.append(len(unbound_args))
        num_args_list.append(len(args))
    group = ut.hierarchical_group_items(defsig_list, [num_unbound_args_list, num_args_list])
    print(repr(obj))
    print(ut.repr3(group, strvals=True))
    if use_other:
        other_mems = ut.delete_keys(type_to_items.copy(), memtypes)
        other_mems_attrnames = ut.dict_subset(type_to_itemname, other_mems.keys())
        named_other_attrs = ut.dict_union_combine(
            other_mems_attrnames, other_mems, lambda x, y: list(zip(x, y)))
        print(ut.repr4(named_other_attrs, nl=2, strvals=True))
r""" Inspects members of a class Args: obj (class or module): CommandLine: python -m utool.util_inspect help_members Example: >>> # ENABLE_DOCTEST >>> from utool.util_inspect import * # NOQA >>> import utool as ut >>> obj = ut.DynStruct >>> result = help_members(obj) >>> print(result)
codesearchnet
def is_supergroup(self, subgroup):
    warnings.warn("This is not fully functional. Only trivial subsets are "
                  "tested right now. ")
    return set(subgroup.symmetry_ops).issubset(self.symmetry_ops)
True if this group is a supergroup of the supplied group. Args: subgroup (SymmetryGroup): Subgroup to test. Returns: True if this group is a supergroup of the supplied group.
juraj-google-style
def description(self, description):
    self._data['description'] = description
    request = self._base_request
    request['description'] = description
    return self._tc_requests.update(request, owner=self.owner)
Updates the security label's description. Args: description: the new description value.
codesearchnet
def _exec_procedure_func(self, func, tr_record):
    func_name = func.__name__
    procedure_name = func_name[1:] if func_name[0] == '_' else func_name
    with self._log_test_stage(procedure_name):
        try:
            func(copy.deepcopy(tr_record))
        except signals.TestAbortSignal:
            raise
        except Exception as e:
            logging.exception('Exception happened when executing %s for %s.',
                              procedure_name, self.current_test_info.name)
            tr_record.add_error(procedure_name, e)
Executes a procedure function like on_pass, on_fail etc. This function will alter the 'Result' of the test's record if exceptions happened when executing the procedure function, but prevents procedure functions from altering test records themselves by only passing in a copy. This will let signals.TestAbortAll through so abort_all works in all procedure functions. Args: func: The procedure function to be executed. tr_record: The TestResultRecord object associated with the test executed.
github-repos
def solid_named(self, name):
    check.str_param(name, 'name')
    if name not in self._solid_dict:
        raise DagsterInvariantViolationError(
            'Pipeline {pipeline_name} has no solid named {name}.'.format(
                pipeline_name=self.name, name=name))
    return self._solid_dict[name]
Return the solid named "name". Throws if it does not exist. Args: name (str): Name of solid Returns: SolidDefinition: SolidDefinition with correct name.
codesearchnet
def parse_objective_coefficient(entry):
    for parameter in entry.kinetic_law_reaction_parameters:
        pid, name, value, units = parameter
        if pid == 'OBJECTIVE_COEFFICIENT' or name == 'OBJECTIVE_COEFFICIENT':
            return value
    return None
Return objective value for reaction entry. Detect objectives that are specified using the non-standardized kinetic law parameters which are used by many pre-FBC SBML models. The objective coefficient is returned for the given reaction, or None if undefined. Args: entry: :class:`SBMLReactionEntry`.
juraj-google-style
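A hypothetical stand-in entry is enough to exercise the scan; the 4-tuple layout `(pid, name, value, units)` is taken from the unpacking in the code above:

from collections import namedtuple

FakeEntry = namedtuple('FakeEntry', 'kinetic_law_reaction_parameters')
entry = FakeEntry([
    ('p1', 'LOWER_BOUND', 0.0, 'dimensionless'),
    ('p2', 'OBJECTIVE_COEFFICIENT', 1.0, 'dimensionless'),
])
print(parse_objective_coefficient(entry))  # 1.0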
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices
Sample a given number of frame indices from the video. Args: clip_len (`int`): Total number of frames to sample. frame_sample_rate (`int`): Sample every n-th frame. seg_len (`int`): Maximum allowed index of sample's last frame. Returns: indices (`List[int]`): List of sampled frame indices
github-repos
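Usage sketch with NumPy only; seeding makes the randomly chosen window reproducible:

import numpy as np

np.random.seed(0)
idx = sample_frame_indices(clip_len=8, frame_sample_rate=2, seg_len=100)
print(idx)  # 8 evenly spaced indices inside a random 16-frame window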
def get_plugin(self, identifier, cls=None):
    if (cls is None or cls == 'provider') and identifier in self.available_providers:
        return self.available_providers[identifier]
    elif (cls is None or cls == 'checker') and identifier in self.available_checkers:
        return self.available_checkers[identifier]
    return Config.load_local_plugin(identifier)
Return the plugin corresponding to the given identifier and type. Args: identifier (str): identifier of the plugin. cls (str): one of checker / provider. Returns: Checker/Provider: plugin class.
codesearchnet
def smoothing_cross_entropy_factored(a, b, labels, confidence):
    num_splits = 16
    vocab_size = shape_list(b)[0]
    labels = approximate_split(labels, num_splits)
    a = approximate_split(a, num_splits)
    parts = []
    for part in range(num_splits):
        with tf.control_dependencies(parts[-1:]):
            logits = tf.matmul(a[part], b, transpose_b=True)
            parts.append(
                smoothing_cross_entropy(logits, labels[part], vocab_size, confidence))
    return tf.concat(parts, 0)
Memory-efficient computation of smoothing cross-entropy. Avoids realizing the entire logits matrix at once. Args: a: a Tensor with shape [batch, inner_dim] b: a Tensor with shape [vocab_size, inner_dim] labels: an integer Tensor with shape [batch] confidence: a float Returns: A Tensor with shape [batch]
juraj-google-style
def _finalize_func(string_handle):
    iterator_resource = gen_dataset_ops.iterator_from_string_handle_v2(
        string_handle, **self._input_dataset._flat_structure)
    with ops.control_dependencies([
            resource_variable_ops.destroy_resource_op(
                iterator_resource, ignore_lookup_error=True)]):
        return array_ops.constant(0, dtypes.int64)
Destroys the iterator resource created. Args: string_handle: An iterator string handle created by _init_func Returns: Tensor constant 0
github-repos
def _add_scalar(self, scalar):
    encoded = EncodedNumber.encode(self.public_key, scalar, max_exponent=self.exponent)
    return self._add_encoded(encoded)
Returns E(a + b), given self=E(a) and b. Args: scalar: an int or float b, to be added to `self`. Returns: EncryptedNumber: E(a + b), calculated by encrypting b and taking the product of E(a) and E(b) modulo :attr:`~PaillierPublicKey.n` ** 2. Raises: ValueError: if scalar is out of range or precision.
juraj-google-style
def maybe_download_and_extract_dataset(self, data_url, dest_directory):
    if not data_url:
        return
    if not gfile.Exists(dest_directory):
        os.makedirs(dest_directory)
    filename = data_url.split('/')[-1]
    filepath = os.path.join(dest_directory, filename)
    if not gfile.Exists(filepath):

        def _progress(count, block_size, total_size):
            sys.stdout.write('\r>> Downloading %s %.1f%%' % (
                filename, float(count * block_size) / float(total_size) * 100.0))
            sys.stdout.flush()

        try:
            filepath, _ = urllib.request.urlretrieve(data_url, filepath, _progress)
        except:
            tf.compat.v1.logging.error(
                'Failed to download URL: {0} to folder: {1}. Please make sure you '
                'have enough free space and an internet connection'.format(data_url, filepath))
            raise
        print()
        statinfo = os.stat(filepath)
        tf.compat.v1.logging.info('Successfully downloaded {0} ({1} bytes)'.format(filename, statinfo.st_size))
        tarfile.open(filepath, 'r:gz').extractall(dest_directory)
Download and extract data set tar file. If the data set we're using doesn't already exist, this function downloads it from the TensorFlow.org website and unpacks it into a directory. If the data_url is none, don't download anything and expect the data directory to contain the correct files already. Args: data_url: Web location of the tar file containing the data set. dest_directory: File path to extract data to.
github-repos
def _prepare_feed_values(model, inputs, targets, sample_weights, mode):
    strategy = model._distribution_strategy
    inputs, targets, sample_weights = _get_input_from_iterator(inputs, model)
    if backend.is_tpu_strategy(strategy):
        if sample_weights is not None:
            raise ValueError('TPUStrategy does not support sample weights.')
    if isinstance(inputs, dict):
        inputs = [inputs[key] for key in model._feed_input_names]
    if is_distributing_by_cloning(model):
        inputs = flatten_per_replica_values(strategy, inputs)
        targets = flatten_per_replica_values(strategy, targets)
        inputs, targets = nest.map_structure(
            training_utils_v1.standardize_single_array, (inputs, targets))
    else:
        inputs = training_utils_v1.ModelInputs(inputs).as_list()
    if mode == ModeKeys.PREDICT:
        sample_weights = []
        targets = []
    elif sample_weights is not None and is_distributing_by_cloning(model):
        if context.executing_eagerly() and (not model._compile_distribution):
            raise NotImplementedError('`sample_weight` is not supported when using tf.distribute.Strategy in eager mode and cloning=True.')
        sample_weights = flatten_per_replica_values(strategy, sample_weights)
    ins = [inputs, targets, sample_weights]
    return tuple(ins)
Prepare feed values to the model execution function. Args: model: Model to prepare feed values for. inputs: List or dict of model inputs. targets: Optional list of model targets. sample_weights: Optional list of sample weight arrays. mode: One of ModeKeys.TRAIN/ModeKeys.TEST/ModeKeys.PREDICT. Returns: Feed values for the model in the given mode.
github-repos
def _get_num_nvidia_gpus():
    try:
        return len(os.environ['CUDA_VISIBLE_DEVICES'].split(','))
    except KeyError:
        pass
    try:
        output = subprocess.check_output(['nvidia-smi', '--list-gpus'], encoding='utf-8')
        return sum(l.startswith('GPU ') for l in output.strip().split('\n'))
    except subprocess.CalledProcessError as e:
        raise RuntimeError('Could not get number of GPUs from nvidia-smi. Maybe it is missing?\nOutput: %s' % e.output)
Gets the number of NVIDIA GPUs by using CUDA_VISIBLE_DEVICES and nvidia-smi. Returns: Number of GPUs available on the node Raises: RuntimeError if executing nvidia-smi failed
github-repos
def _init_volume_service(self, version):
    volume_cfg = self._load_config_section(CONFIG_VOLUME_SECTION)
    self._token_volume = volume_cfg[CONFIG_TOKEN]
    proto = volume_cfg[CONFIG_PROTOCOL]
    host = volume_cfg[CONFIG_HOST]
    self._volume = VolumeService(host, version)
    self._volume.base_protocol = proto
    self._volume.set_auth(self._token_volume)
Method to initialize the Volume Service from the config data Args: version (string): Version of Boss API to use. Returns: None Raises: (KeyError): if given invalid version.
codesearchnet
def post_process_travis_macos(journal_filename):
    travis_build_dir = os.environ.get('TRAVIS_BUILD_DIR', '')
    with open(journal_filename, 'r') as file_obj:
        content = file_obj.read()
    processed = content.replace(travis_build_dir, '${TRAVIS_BUILD_DIR}')
    with open(journal_filename, 'w') as file_obj:
        file_obj.write(processed)
Post-process a generated journal file on Travis macOS. Args: journal_filename (str): The name of the journal file.
codesearchnet
def predict(self, documents, **kwargs):
    if isinstance(documents, (str, bytes, unicode_, np.unicode_)):
        return self._predict_one(documents, **kwargs)
    else:
        return np.concatenate([self._predict_one(doc, **kwargs) for doc in documents])
Predict class (content=1 or not-content=0) of the blocks in one or many HTML document(s). Args: documents (str or List[str]): HTML document(s) Returns: ``np.ndarray`` or List[``np.ndarray``]: array of binary predictions for content (1) or not-content (0).
juraj-google-style
def handle_subscribe(self, request, path):
    ret = []
    if path:
        name = path[0]
        if name not in self.children:
            self.children[name] = NotifierNode(
                getattr(self.data, name, None), self)
        ret += self.children[name].handle_subscribe(request, path[1:])
    else:
        serialized = serialize_object(self.data)
        if request.delta:
            self.delta_requests.append(request)
            ret.append(request.delta_response([[[], serialized]]))
        else:
            self.update_requests.append(request)
            ret.append(request.update_response(serialized))
    return ret
Add to the list of requests to notify, and notify the initial value of the data held Args: request (Subscribe): The subscribe request path (list): The relative path from ourself Returns: list: [(callback, Response)] that need to be called
juraj-google-style
def format_sec_to_dhm(sec):
    rem_int, s_int = divmod(int(sec), 60)
    rem_int, m_int = divmod(rem_int, 60)
    d_int, h_int = divmod(rem_int, 24)
    return '{}d{:02d}h{:02d}m'.format(d_int, h_int, m_int)
Format seconds to days, hours, minutes. Args: sec: float or int Number of seconds in a period of time Returns: Period of time represented as a string of the form ``0d00h00m`` (seconds are computed but omitted).
codesearchnet
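A couple of worked values for the formatter above:

print(format_sec_to_dhm(100000))  # '1d03h46m' (1 day, 3 h, 46 min; the 40 s remainder is dropped)
print(format_sec_to_dhm(59))      # '0d00h00m' (under a minute leaves all fields zero)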
def get_data(self, url, *args, **kwargs):
    res = self._conn.get(url, headers=self._prepare_headers(**kwargs))
    if res.status_code == 200:
        return res.text
    else:
        return None
Gets data from url as text. Returns content under the provided url as text. Args: **url**: address of the wanted data .. versionadded:: 0.3.2 **additional_headers**: (optional) Additional headers to be used with request Returns: string
codesearchnet
def __init__(self, root, attached_dependencies=None):
    trackable_view.TrackableView.__init__(self, root)
    self._root_ref = root if isinstance(root, weakref.ref) else weakref.ref(root)
    self._attached_dependencies = attached_dependencies
Configure the graph view. Args: root: A `Trackable` object whose variables (including the variables of dependencies, recursively) should be saved. May be a weak reference. attached_dependencies: List of dependencies to attach to the root object. Used when saving a Checkpoint with a defined root object. To avoid reference cycles, this should use the WeakTrackableReference class.
github-repos
def set_defaults(self, defaults: Sequence[cfg.Variable]) -> 'PyTDSignature':
    defaults = list(defaults)
    params = []
    for param in reversed(self.pytd_sig.params):
        if defaults:
            defaults.pop()
            params.append(pytd.Parameter(
                name=param.name, type=param.type, kind=param.kind,
                optional=True, mutated_type=param.mutated_type))
        else:
            params.append(pytd.Parameter(
                name=param.name, type=param.type, kind=param.kind,
                optional=False, mutated_type=param.mutated_type))
    new_sig = pytd.Signature(
        params=tuple(reversed(params)),
        starargs=self.pytd_sig.starargs,
        starstarargs=self.pytd_sig.starstarargs,
        return_type=self.pytd_sig.return_type,
        exceptions=self.pytd_sig.exceptions,
        template=self.pytd_sig.template)
    self.pytd_sig = new_sig
    self.param_types = [
        self.ctx.convert.constant_to_value(
            p.type, subst=datatypes.AliasingDict(), node=self.ctx.root_node)
        for p in self.pytd_sig.params]
    self.signature = function.Signature.from_pytd(self.ctx, self.name, self.pytd_sig)
    return self
Set signature's default arguments. Requires rebuilding PyTD signature. Args: defaults: An iterable of function argument defaults. Returns: Self with an updated signature.
github-repos
def add_roles(self, databaseName, roleNames, collectionName=None):
    for roleName in roleNames:
        self.add_role(databaseName, roleName, collectionName)
Add multiple roles Args: databaseName (str): Database Name roleNames (list of RoleSpecs): roles Keyword Args: collectionName (str): Collection Raises: ErrRoleException: role not compatible with the databaseName and/or collectionName
juraj-google-style
def PrintMessage(self, message):
    fields = message.ListFields()
    if self.use_index_order:
        fields.sort(key=lambda x: x[0].index)
    for field, value in fields:
        if _IsMapEntry(field):
            for key in sorted(value):
                entry_submsg = field.message_type._concrete_class(key=key, value=value[key])
                self.PrintField(field, entry_submsg)
        elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
            for element in value:
                self.PrintField(field, element)
        else:
            self.PrintField(field, value)
Convert protobuf message to text format. Args: message: The protocol buffers message.
codesearchnet
def unsubscribe(self, subscription, max=None):
    if max is None:
        self._send('UNSUB %d' % subscription.sid)
        self._subscriptions.pop(subscription.sid)
    else:
        subscription.max = max
        self._send('UNSUB %d %s' % (subscription.sid, max))
Unsubscribe will remove interest in the given subject. If max is provided, the server processes an automatic Unsubscribe once max messages have been received. Args: subscription (pynats.Subscription): a Subscription object max (int=None): number of messages
codesearchnet
def describe_file_set(modules):
    descriptor = FileSet()
    file_descriptors = []
    for module in modules:
        file_descriptors.append(describe_file(module))
    if file_descriptors:
        descriptor.files = file_descriptors
    return descriptor
Build a file set from the specified Python modules. Args: modules: Iterable of Python modules to describe. Returns: Initialized FileSet instance describing the modules.
codesearchnet
def month(self, value=None):
    if value is not None:
        try:
            value = int(value)
        except ValueError:
            raise ValueError('value {} need to be of type int for field `month`'.format(value))
        if value < 1:
            raise ValueError('value need to be greater or equal 1 for field `month`')
        if value > 12:
            raise ValueError('value need to be smaller 12 for field `month`')
    self._month = value
Corresponds to IDD Field `month` Args: value (int): value for IDD Field `month` value >= 1 value <= 12 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def defaults(cls, *options, **kwargs):
    if kwargs and len(kwargs) != 1 and list(kwargs.keys())[0] != 'backend':
        raise Exception('opts.defaults only accepts "backend" keyword argument')
    cls._linemagic(cls._expand_options(merge_options_to_dict(options)), backend=kwargs.get('backend'))
Set default options for a session, whether in a Python script or a Jupyter notebook. Args: *options: Option objects used to specify the defaults. backend: The plotting extension the options apply to
codesearchnet
def create(cls, hashing_algorithm=HashingAlgorithmEnum.SHA_256, digest_value=b'', key_format_type=KeyFormatTypeEnum.RAW):
    algorithm = HashingAlgorithm(hashing_algorithm)
    value = DigestValue(bytearray(digest_value))
    format_type = KeyFormatType(key_format_type)
    return Digest(hashing_algorithm=algorithm, digest_value=value, key_format_type=format_type)
Construct a Digest object from provided digest values. Args: hashing_algorithm (HashingAlgorithm): An enumeration representing the hash algorithm used to compute the digest. Optional, defaults to HashingAlgorithm.SHA_256. digest_value (byte string): The bytes of the digest hash. Optional, defaults to the empty byte string. key_format_type (KeyFormatType): An enumeration representing the format of the key corresponding to the digest. Optional, defaults to KeyFormatType.RAW. Returns: Digest: The newly created Digest. Example: >>> x = Digest.create(HashingAlgorithm.MD5, b'\x00', ... KeyFormatType.RAW) >>> x.hashing_algorithm HashingAlgorithm(value=HashingAlgorithm.MD5) >>> x.digest_value DigestValue(value=bytearray(b'\x00')) >>> x.key_format_type KeyFormatType(value=KeyFormatType.RAW)
codesearchnet
def get_kpoint_weights(self, kpoints, atol=1e-5):
    kpts = np.array(kpoints)
    shift = []
    mesh = []
    for i in range(3):
        nonzero = [i for i in kpts[:, i] if abs(i) > 1e-5]
        if len(nonzero) != len(kpts):
            if not nonzero:
                mesh.append(1)
            else:
                m = np.abs(np.round(1 / np.array(nonzero)))
                mesh.append(int(max(m)))
            shift.append(0)
        else:
            m = np.abs(np.round(0.5 / np.array(nonzero)))
            mesh.append(int(max(m)))
            shift.append(1)
    mapping, grid = spglib.get_ir_reciprocal_mesh(
        np.array(mesh), self._cell, is_shift=shift, symprec=self._symprec)
    mapping = list(mapping)
    grid = (np.array(grid) + np.array(shift) * (0.5, 0.5, 0.5)) / mesh
    weights = []
    mapped = defaultdict(int)
    for k in kpoints:
        for i, g in enumerate(grid):
            if np.allclose(pbc_diff(k, g), (0, 0, 0), atol=atol):
                mapped[tuple(g)] += 1
                weights.append(mapping.count(mapping[i]))
                break
    if (len(mapped) != len(set(mapping))) or (
            not all([v == 1 for v in mapped.values()])):
        raise ValueError("Unable to find 1:1 corresponding between input "
                         "kpoints and irreducible grid!")
    return [w / sum(weights) for w in weights]
Calculate the weights for a list of kpoints. Args: kpoints (Sequence): Sequence of kpoints. np.arrays is fine. Note that the code does not check that the list of kpoints provided does not contain duplicates. atol (float): Tolerance for fractional coordinates comparisons. Returns: List of weights, in the SAME order as kpoints.
juraj-google-style
def rotate_view(self, axis_ind=0, angle=0):
    camera = self.ren.GetActiveCamera()
    if axis_ind == 0:
        camera.Roll(angle)
    elif axis_ind == 1:
        camera.Azimuth(angle)
    else:
        camera.Pitch(angle)
    self.ren_win.Render()
Rotate the camera view. Args: axis_ind: Index of axis to rotate. Defaults to 0, i.e., a-axis. angle: Angle to rotate by. Defaults to 0.
codesearchnet
async def get_random_popular_person(self, limit=500):
    index = random.randrange(limit)
    data = await self._get_popular_people_page()
    if data is None:
        return
    if index >= len(data['results']):
        page, index = self._calculate_page_index(index, data)
        data = await self._get_popular_people_page(page)
        if data is None:
            return
    json_data = data['results'][index]
    details = await self._get_person_json(json_data['id'])
    details.update(**json_data)
    return Person.from_json(details, self.config['data'].get('images'))
Randomly select a popular person. Notes: Requires at least two API calls. May require three API calls if the randomly-selected index isn't within the first page of required data. Arguments: limit (:py:class:`int`, optional): How many of the most popular people to make random choice from (defaults to top ``500``). Returns: :py:class:`~.Person`: A randomly-selected popular person.
juraj-google-style
def _Open(self, path_spec=None, mode='rb'):
    if not self._file_object_set_in_init and not path_spec:
        raise ValueError('Missing path specification.')
    if self._file_object_set_in_init:
        return
    self._file_object = self._OpenFileObject(path_spec)
    if not self._file_object:
        raise IOError('Unable to open missing file-like object.')
Opens the file-like object defined by path specification. Args: path_spec (Optional[PathSpec]): path specification. mode (Optional[str]): file access mode. Raises: AccessError: if the access to open the file was denied. IOError: if the file-like object could not be opened. OSError: if the file-like object could not be opened. PathSpecError: if the path specification is incorrect. ValueError: if the path specification is invalid.
juraj-google-style
def Artifacts(self, os_name=None, cpe=None, label=None):
    return [c.artifact for c in self.conditions
            if c.Artifacts(os_name, cpe, label)]
Find the artifacts that correspond with other trigger conditions. Args: os_name: An OS string. cpe: A CPE string. label: A label string. Returns: A list of artifacts to be processed.
juraj-google-style
def normal(self, shape, mean=0.0, stddev=1.0, dtype=dtypes.float32, name=None):
    with ops.name_scope(name, 'stateful_normal', [shape, mean, stddev]) as name:
        shape = _shape_tensor(shape)
        mean = ops.convert_to_tensor(mean, dtype=dtype, name='mean')
        stddev = ops.convert_to_tensor(stddev, dtype=dtype, name='stddev')
        rnd = self._standard_normal(shape, dtype=dtype)
        return math_ops.add(rnd * stddev, mean, name=name)
Outputs random values from a normal distribution. Args: shape: A 1-D integer Tensor or Python array. The shape of the output tensor. mean: A 0-D Tensor or Python value of type `dtype`. The mean of the normal distribution. stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation of the normal distribution. dtype: The type of the output. name: A name for the operation (optional). Returns: A tensor of the specified shape filled with random normal values.
github-repos
def add_string_parameters(self, string):
    if isinstance(string, list):
        for x in string:
            self.add_string_parameters(x)
        return
    self._parameters.append("{ \"value\": \"" + string + "\" }")
Add given string parameters to the internal list. Args: string (list of str or str): A string or list of strings to add to the parameters.
juraj-google-style
def __call__(self, y_true, y_pred, sample_weight=None, regularization_losses=None):
    y_true = self._conform_to_outputs(y_pred, y_true)
    sample_weight = self._conform_to_outputs(y_pred, sample_weight)
    if not self._built:
        self.build(y_pred)
    y_pred = nest.flatten(y_pred)
    y_true = nest.flatten(y_true)
    sample_weight = nest.flatten(sample_weight)
    loss_values = []
    loss_metric_values = []
    batch_dim = None
    zip_args = (y_true, y_pred, sample_weight, self._losses, self._loss_weights, self._per_output_metrics)
    for y_t, y_p, sw, loss_obj, loss_weight, metric_obj in zip(*zip_args):
        if y_t is None or loss_obj is None:
            continue
        y_t, y_p, sw = match_dtype_and_rank(y_t, y_p, sw)
        sw = apply_mask(y_p, sw, get_mask(y_p))
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
        loss_metric_value = loss_value
        if loss_obj.reduction == losses_utils.ReductionV2.SUM:
            loss_metric_value *= distribute_lib.get_strategy().num_replicas_in_sync
        if batch_dim is None:
            if tf_utils.is_ragged(y_t):
                batch_dim = y_t.nrows()
            else:
                batch_dim = array_ops.shape(y_t)[0]
        if metric_obj is not None:
            metric_obj.update_state(loss_metric_value, sample_weight=batch_dim)
        if loss_weight is not None:
            loss_value *= loss_weight
            loss_metric_value *= loss_weight
        if loss_obj.reduction == losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE or loss_obj.reduction == losses_utils.ReductionV2.AUTO:
            loss_value = losses_utils.scale_loss_for_distribution(loss_value)
        loss_values.append(loss_value)
        loss_metric_values.append(loss_metric_value)
    if regularization_losses:
        regularization_losses = losses_utils.cast_losses_to_common_dtype(regularization_losses)
        reg_loss = math_ops.add_n(regularization_losses)
        loss_metric_values.append(reg_loss)
        loss_values.append(losses_utils.scale_loss_for_distribution(reg_loss))
    if loss_values:
        loss_metric_values = losses_utils.cast_losses_to_common_dtype(loss_metric_values)
        total_loss_metric_value = math_ops.add_n(loss_metric_values)
        self._loss_metric.update_state(total_loss_metric_value, sample_weight=batch_dim)
        loss_values = losses_utils.cast_losses_to_common_dtype(loss_values)
        total_loss = math_ops.add_n(loss_values)
        return total_loss
    else:
        return array_ops.zeros(shape=())
Computes the overall loss. Args: y_true: An arbitrary structure of Tensors representing the ground truth. y_pred: An arbitrary structure of Tensors representing a Model's outputs. sample_weight: An arbitrary structure of Tensors representing the per-sample loss weights. If one Tensor is passed, it is used for all losses. If multiple Tensors are passed, the structure should match `y_pred`. regularization_losses: Additional losses to be added to the total loss. Returns: Tuple of `(total_loss, per_output_loss_list)`
github-repos
def ch_start_time(self, *channels: List[Channel]) -> int:
    return self.timeslots.ch_start_time(*channels)
Return minimum start time for supplied channels. Args: *channels: Supplied channels
juraj-google-style
def _create_datadict(cls, internal_name):
    if internal_name == 'LOCATION':
        return Location()
    if internal_name == 'DESIGN CONDITIONS':
        return DesignConditions()
    if internal_name == 'TYPICAL/EXTREME PERIODS':
        return TypicalOrExtremePeriods()
    if internal_name == 'GROUND TEMPERATURES':
        return GroundTemperatures()
    if internal_name == 'HOLIDAYS/DAYLIGHT SAVINGS':
        return HolidaysOrDaylightSavings()
    if internal_name == 'COMMENTS 1':
        return Comments1()
    if internal_name == 'COMMENTS 2':
        return Comments2()
    if internal_name == 'DATA PERIODS':
        return DataPeriods()
    raise ValueError('No DataDictionary known for {}'.format(internal_name))
Creates an object depending on `internal_name` Args: internal_name (str): IDD name Raises: ValueError: if `internal_name` cannot be matched to a data dictionary object
codesearchnet
def SetHasherNames(self, hasher_names_string):
    hasher_names = hashers_manager.HashersManager.GetHasherNamesFromString(
        hasher_names_string)
    debug_hasher_names = ', '.join(hasher_names)
    logger.debug('Got hasher names: {0:s}'.format(debug_hasher_names))
    self._hashers = hashers_manager.HashersManager.GetHashers(hasher_names)
    self._hasher_names_string = hasher_names_string
Sets the hashers that should be enabled. Args: hasher_names_string (str): comma separated names of hashers to enable.
juraj-google-style
def valUserCert(self, byts, cacerts=None):
    cert = crypto.load_certificate(crypto.FILETYPE_PEM, byts)
    if cacerts is None:
        cacerts = self.getCaCerts()
    store = crypto.X509Store()
    [store.add_cert(cacert) for cacert in cacerts]
    ctx = crypto.X509StoreContext(store, cert)
    ctx.verify_certificate()
    return cert
Validate the PEM encoded x509 user certificate bytes and return it. Args: byts (bytes): The bytes for the User Certificate. cacerts (tuple): A tuple of OpenSSL.crypto.X509 CA Certificates. Raises: OpenSSL.crypto.X509StoreContextError: If the certificate is not valid. Returns: OpenSSL.crypto.X509: The certificate, if it is valid.
juraj-google-style
def get(self, client_id, client_secret, code, redirect_uri):
    check_type(client_id, basestring, may_be_none=False)
    check_type(client_secret, basestring, may_be_none=False)
    check_type(code, basestring, may_be_none=False)
    check_type(redirect_uri, basestring, may_be_none=False)
    post_data = dict_from_items_with_values(
        grant_type='authorization_code',
        client_id=client_id,
        client_secret=client_secret,
        code=code,
        redirect_uri=redirect_uri)
    response = requests.post(self._endpoint_url, data=post_data, **self._request_kwargs)
    check_response_code(response, EXPECTED_RESPONSE_CODE['POST'])
    json_data = extract_and_parse_json(response)
    return self._object_factory(OBJECT_TYPE, json_data)
Exchange an Authorization Code for an Access Token. Exchange an Authorization Code for an Access Token that can be used to invoke the APIs. Args: client_id(basestring): Provided when you created your integration. client_secret(basestring): Provided when you created your integration. code(basestring): The Authorization Code provided by the user OAuth process. redirect_uri(basestring): The redirect URI used in the user OAuth process. Returns: AccessToken: An AccessToken object with the access token provided by the Webex Teams cloud. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.
codesearchnet
def GetVSSStoreIdentifiers(self, volume_system, volume_identifiers):
    print_header = True
    while True:
        if print_header:
            self._PrintVSSStoreIdentifiersOverview(volume_system, volume_identifiers)
            print_header = False
        self._output_writer.Write('\n')
        lines = self._textwrapper.wrap(self._USER_PROMPT_VSS)
        self._output_writer.Write('\n'.join(lines))
        self._output_writer.Write('\n\nVSS identifier(s): ')
        try:
            selected_volumes = self._ReadSelectedVolumes(volume_system, prefix='vss')
            if (not selected_volumes or
                    not set(selected_volumes).difference(volume_identifiers)):
                break
        except ValueError:
            pass
        self._output_writer.Write('\n')
        lines = self._textwrapper.wrap(
            'Unsupported VSS identifier(s), please try again or abort with Ctrl^C.')
        self._output_writer.Write('\n'.join(lines))
        self._output_writer.Write('\n\n')
    return selected_volumes
Retrieves VSS store identifiers. This method can be used to prompt the user to provide VSS store identifiers. Args: volume_system (VShadowVolumeSystem): volume system. volume_identifiers (list[str]): volume identifiers including prefix. Returns: list[str]: selected volume identifiers including prefix or None.
codesearchnet
def get_display_name(self, room=None):
    if room:
        try:
            return room.members_displaynames[self.user_id]
        except KeyError:
            return self.user_id
    if not self.displayname:
        self.displayname = self.api.get_display_name(self.user_id)
    return self.displayname or self.user_id
Get this user's display name. Args: room (Room): Optional. When specified, return the display name of the user in this room. Returns: The display name. Defaults to the user ID if not set.
juraj-google-style
def graph_op_digests(self, op_type=None):
    if op_type is not None:
        return [digest for digest in self._graph_op_digests
                if digest.op_type == op_type]
    else:
        return self._graph_op_digests
Get the list of the digests for graph-op creation so far. Args: op_type: Optional op type to filter the creation events with. Returns: A list of `GraphOpCreationDigest` objects.
github-repos
def _initialize_memory(self, policy_params):
    template = (
        self._batch_env.observ[0],
        self._batch_env.action[0],
        tools.nested.map(lambda x: x[0, 0], policy_params),
        self._batch_env.reward[0])
    with tf.variable_scope('ppo_temporary'):
        self._current_episodes = parts.EpisodeMemory(
            template, len(self._batch_env), self._config.max_length, 'episodes')
        self._finished_episodes = parts.EpisodeMemory(
            template, self._config.update_every, self._config.max_length, 'memory')
        self._num_finished_episodes = tf.Variable(0, False)
Initialize temporary and permanent memory. Args: policy_params: Nested tuple of policy parameters with all dimensions set. Initializes the attributes `self._current_episodes`, `self._finished_episodes`, and `self._num_finished_episodes`. The episodes memory serves to collect multiple episodes in parallel. Finished episodes are copied into the next free slot of the second memory. The memory index points to the next free slot.
codesearchnet
def add_header(self, key, value, **params):
    key = self.escape(key)
    ci_key = key.casefold()

    def quoted_params(items):
        for p in items:
            param_name = self.escape(p[0])
            param_val = self.de_quote(self.escape(p[1]))
            yield (param_name, param_val)

    sorted_items = sorted(params.items())
    quoted_iter = ('%s="%s"' % p for p in quoted_params(sorted_items))
    param_str = ' '.join(quoted_iter)
    if param_str:
        value = '%s; %s' % (value, param_str)
    self._header_data[ci_key] = (key, value)
Add a header to the collection, including potential parameters. Args: key (str): The name of the header value (str): The value to store under that key params: Optional parameters to be appended to the value, automatically formatted in a standard way
codesearchnet
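The parameter formatting can be sketched standalone; `escape`/`de_quote` are treated as identity functions here (an assumption made for illustration):

def format_header_value(value, **params):
    # Sort params for determinism, quote each value, join with spaces.
    if params:
        param_str = ' '.join('%s="%s"' % kv for kv in sorted(params.items()))
        value = '%s; %s' % (value, param_str)
    return value

print(format_header_value('text/plain', charset='utf-8'))
# text/plain; charset="utf-8"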
def render_chart_data(data):
    builder = HtmlBuilder()
    builder._render_objects(data, datatype='chartdata')
    return builder._to_html()
Return chart data formatted as HTML. Args: data: data in the form consumed by Google Charts.
juraj-google-style
def _create(cls, model_class, *args, **kwargs):
    manager = cls._get_manager(model_class)
    return manager.create_user(*args, **kwargs)
Create a new user instance. Args: model_class: The type of model to create an instance of. args: Positional arguments to create the instance with. kwargs: Keyword arguments to create the instance with. Returns: A new user instance of the type specified by ``model_class``.
juraj-google-style
def open_image(fn):
    flags = cv2.IMREAD_UNCHANGED + cv2.IMREAD_ANYDEPTH + cv2.IMREAD_ANYCOLOR
    if not os.path.exists(fn) and not str(fn).startswith("http"):
        raise OSError('No such file or directory: {}'.format(fn))
    elif os.path.isdir(fn) and not str(fn).startswith("http"):
        raise OSError('Is a directory: {}'.format(fn))
    elif isdicom(fn):
        slice = pydicom.read_file(fn)
        if slice.PhotometricInterpretation.startswith('MONOCHROME'):
            im = np.stack([slice.pixel_array] * 3, -1)
            return im / ((1 << slice.BitsStored) - 1)
        else:
            raise OSError('Unsupported DICOM image with PhotometricInterpretation=={}'.format(slice.PhotometricInterpretation))
    else:
        try:
            if str(fn).startswith("http"):
                req = urllib.urlopen(str(fn))
                image = np.asarray(bytearray(req.read()), dtype="uint8")
                im = cv2.imdecode(image, flags).astype(np.float32) / 255
            else:
                im = cv2.imread(str(fn), flags).astype(np.float32) / 255
            if im is None:
                raise OSError(f'File not recognized by opencv: {fn}')
            return cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
        except Exception as e:
            raise OSError('Error handling image at: {}'.format(fn)) from e
Opens an image using OpenCV given the file path. Arguments: fn: the file path of the image Returns: The image in RGB format as numpy array of floats normalized to range between 0.0 - 1.0
juraj-google-style
def get_model_field(model, field_name):
    meta = model._meta
    try:
        if DJANGO19:
            field = meta.get_field(field_name)
        else:
            field = meta.get_field_by_name(field_name)[0]
        return field
    except:
        if DJANGO19:
            related_objs = (
                f for f in meta.get_fields()
                if (f.one_to_many or f.one_to_one)
                and f.auto_created and not f.concrete
            )
            related_m2m_objs = (
                f for f in meta.get_fields(include_hidden=True)
                if f.many_to_many and f.auto_created
            )
        else:
            related_objs = meta.get_all_related_objects()
            related_m2m_objs = meta.get_all_related_many_to_many_objects()
        related_objects = {
            o.get_accessor_name(): o
            for o in chain(related_objs, related_m2m_objs)
        }
        if field_name in related_objects:
            return related_objects[field_name]
        else:
            if hasattr(meta, 'virtual_fields'):
                for field in meta.virtual_fields:
                    if field.name == field_name:
                        return field
            raise AttributeError(
                '%s is not a valid field for %s' % (field_name, model)
            )
Return a field given a model and field name. Arguments: model: a Django model field_name: the name of a field Returns: A Django field if `field_name` is a valid field for `model`, None otherwise.
juraj-google-style
def __init__(self, parent=None):
    super(SupportedDtypesTranslator, self).__init__(parent)
    self._strs = [(np.dtype(object), self.tr('text'))]
    self._ints = [(np.dtype(np.int8), self.tr('small integer (8 bit)')),
                  (np.dtype(np.int16), self.tr('small integer (16 bit)')),
                  (np.dtype(np.int32), self.tr('integer (32 bit)')),
                  (np.dtype(np.int64), self.tr('integer (64 bit)'))]
    self._uints = [(np.dtype(np.uint8), self.tr('unsigned small integer (8 bit)')),
                   (np.dtype(np.uint16), self.tr('unsigned small integer (16 bit)')),
                   (np.dtype(np.uint32), self.tr('unsigned integer (32 bit)')),
                   (np.dtype(np.uint64), self.tr('unsigned integer (64 bit)'))]
    self._floats = [(np.dtype(np.float16), self.tr('floating point number (16 bit)')),
                    (np.dtype(np.float32), self.tr('floating point number (32 bit)')),
                    (np.dtype(np.float64), self.tr('floating point number (64 bit)'))]
    self._datetime = [(np.dtype('<M8[ns]'), self.tr('date and time'))]
    self._bools = [(np.dtype(bool), self.tr('true/false value'))]
    self._all = self._strs + self._ints + self._uints + self._floats + self._bools + self._datetime
Constructs the object with the given parent. Args: parent (QtCore.QObject, optional): Causes the objected to be owned by `parent` instead of Qt. Defaults to `None`.
juraj-google-style
def __init__(self, vertex_out, vertex_in, weight=1):
    self.vertex_out = None
    self.vertex_in = None
    self.weight = weight
    self.go_from(vertex_out)
    self.go_in(vertex_in)
Initialization method. Args: vertex_out (Vertex): source vertex (edge going out). vertex_in (Vertex): target vertex (edge going in). weight (int): weight of the edge.
juraj-google-style
def rgb_to_grayscale(images, name=None):
    with ops.name_scope(name, 'rgb_to_grayscale', [images]) as name:
        images = ops.convert_to_tensor(images, name='images')
        orig_dtype = images.dtype
        flt_image = convert_image_dtype(images, dtypes.float32)
        rgb_weights = [0.2989, 0.587, 0.114]
        gray_float = math_ops.tensordot(flt_image, rgb_weights, [-1, -1])
        gray_float = array_ops.expand_dims(gray_float, -1)
        return convert_image_dtype(gray_float, orig_dtype, name=name)
Converts one or more images from RGB to Grayscale. Outputs a tensor of the same `DType` and rank as `images`. The size of the last dimension of the output is 1, containing the Grayscale value of the pixels. >>> original = tf.constant([[[1.0, 2.0, 3.0]]]) >>> converted = tf.image.rgb_to_grayscale(original) >>> print(converted.numpy()) [[[1.81...]]] Args: images: The RGB tensor to convert. The last dimension must have size 3 and should contain RGB values. name: A name for the operation (optional). Returns: The converted grayscale image(s).
github-repos
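Since this is the public TensorFlow API, the op can be exercised directly; dtype and rank are preserved while the channel axis collapses to 1:

import tensorflow as tf

rgb = tf.random.uniform([2, 32, 32, 3])       # batch of 32x32 RGB images
gray = tf.image.rgb_to_grayscale(rgb)
print(gray.shape)                              # (2, 32, 32, 1)
print(gray.dtype)                              # float32, same as the input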
def _wrap_2d_function(inputs, compute_op, dim=-1, name=None): def _swap_axis(input_tensor, dim_index, last_index, name=None): return array_ops.transpose(input_tensor, array_ops.concat([math_ops.range(dim_index), [last_index], math_ops.range(dim_index + 1, last_index), [dim_index]], 0), name=name) inputs = ops.convert_to_tensor(inputs) shape = inputs.get_shape() is_last_dim = dim == -1 or dim == shape.ndims - 1 if is_last_dim: return compute_op(inputs, name=name) dim_val = dim if isinstance(dim, tensor_lib.Tensor): dim_val = tensor_util.constant_value(dim) if dim_val is not None and (not -shape.ndims <= dim_val < shape.ndims): raise errors_impl.InvalidArgumentError(None, None, f'`dim` must be in the range [{-shape.ndims}, {shape.ndims}) where {shape.ndims} is the number of dimensions in the input. Received: dim={dim_val}') ndims = array_ops.rank(inputs) if not isinstance(dim, tensor_lib.Tensor): if dim < 0: dim += ndims else: dim = array_ops.where(math_ops.less(dim, 0), dim + ndims, dim) input_rank = array_ops.rank(inputs) dim_axis = dim % shape.ndims inputs = _swap_axis(inputs, dim_axis, math_ops.subtract(input_rank, 1)) def fix_output(output): output = _swap_axis(output, dim_axis, math_ops.subtract(input_rank, 1), name=name) output.set_shape(shape) return output outputs = compute_op(inputs) if isinstance(outputs, tuple): return tuple((fix_output(output) for output in outputs)) else: return fix_output(outputs)
Helper function for ops that accept and return 2d inputs of same shape.

It reshapes and transposes the inputs into a 2-D Tensor and then invokes the given function. The output would be transposed and reshaped back. If the given function returns a tuple of tensors, each of them will be transposed and reshaped.

Args:
  inputs: A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
  compute_op: The function to wrap. Must accept the input tensor as its first argument, and a second keyword argument `name`.
  dim: The dimension the wrapped op would be performed on. The default is -1 which indicates the last dimension.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same shape as inputs. If compute_op returns multiple tensors, each of them has the same shape as the input.

Raises:
  InvalidArgumentError: if `inputs` is empty or `dim` is beyond the last dimension of `inputs`.
github-repos
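A sketch of the wrapping mechanics (this helper is private to TensorFlow, so calling it directly is illustration only; any shape-preserving op that accepts a `name` keyword works as `compute_op`):

import tensorflow as tf

x = tf.random.uniform([3, 4, 5])
op = lambda t, name=None: tf.nn.softmax(t, name=name)

# For dim=0 the input is transposed so axis 0 becomes the last axis,
# the op runs over that axis, and the result is transposed back.
y = _wrap_2d_function(x, op, dim=0)
assert y.shape == x.shape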
def get_invalid_txn_info(self, batch_id): with self._lock: return [info.copy() for info in self._invalid.get(batch_id, [])]
Fetches the id of the Transaction that failed within a particular Batch, as well as any error message or other data about the failure. Args: batch_id (str): The id of the Batch containing an invalid txn Returns: list of dict: A list of dicts with three possible keys: * 'id' - the header_signature of the invalid Transaction * 'message' - the error message sent by the TP * 'extended_data' - any additional data sent by the TP
codesearchnet
def _get_overlaps_tensor(self, L):
    n, m = L.shape
    # One-hot indicator per label value: LY[y, i, j] = 1 iff L[i, j] == y.
    LY = np.array([np.where(L == y, 1, 0) for y in range(self.k_0, self.k + 1)])
    # Empirical three-way overlap rates, averaged over the n data points;
    # output axes are (LF_i, LF_j, LF_k, y1, y2, y3).
    O = np.einsum('abc,dbe,fbg->cegadf', LY, LY, LY) / n
    return torch.from_numpy(O).float()
Transforms the input label matrix to a three-way overlaps tensor. Args: L: (np.array) An n x m array of LF output labels, in {0,...,k} if self.abstains, else in {1,...,k}, generated by m conditionally independent LFs on n data points Outputs: O: (torch.Tensor) A (m, m, m, k, k, k) tensor of the label-specific empirical overlap rates; that is, O[i,j,k,y1,y2,y3] = P(\lf_i = y1, \lf_j = y2, \lf_k = y3) where this quantity is computed empirically by this function, based on the label matrix L.
codesearchnet
def derive_annotations(self, annotations): cls = type(self) return cls( self[0], self[1], self[2], self[3], annotations, self[5] )
Derives a new event from this one setting the ``annotations`` attribute. Args: annotations: (Sequence[Union[amazon.ion.symbols.SymbolToken, unicode]]): The annotations associated with the derived event. Returns: IonEvent: The newly generated event.
juraj-google-style
def register_key_flag_for_module(self, module_name, flag): key_flags_by_module = self.key_flags_by_module_dict() key_flags = key_flags_by_module.setdefault(module_name, []) if flag not in key_flags: key_flags.append(flag)
Specifies that a flag is a key flag for a module. Args: module_name: str, the name of a Python module. flag: Flag, the Flag instance that is key to the module.
juraj-google-style
def filter(self, scored_list):
    # Keep the `top_n` highest-scoring items, then restore positional order.
    top_n_list = sorted(scored_list, key=lambda x: x[1])[-self.top_n:]
    result_list = sorted(top_n_list, key=lambda x: x[0])
    return result_list
Filtering with top-n ranking.

Args:
    scored_list: The list of scored items as (position, score) pairs.

Returns:
    The list of filtered results.
codesearchnet
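Calling the unbound function with a minimal stand-in object shows the two-pass sort (the stand-in class and data are illustrative):

class _Holder:
    top_n = 2

scored = [(0, 0.9), (1, 0.1), (2, 0.5), (3, 0.8)]
# Keep the two highest scores, then restore positional order.
print(filter(_Holder(), scored))   # [(0, 0.9), (3, 0.8)]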
def peek(self, index, name=None): if name is None: name = '%s_peek' % self._name fn = lambda: gen_data_flow_ops.stage_peek(index, dtypes=self._dtypes, shared_name=self._name, name=name, capacity=self._capacity, memory_limit=self._memory_limit) return self.__internal_get(fn, name)
Peeks at an element in the staging area. If the staging area is too small to contain the element at the specified index, it will block until enough elements are inserted to complete the operation. The placement of the returned tensor will be determined by the current device scope when this function is called. Args: index: The index of the tensor within the staging area to look up. name: A name for the operation (optional). Returns: The tuple of tensors that was gotten.
github-repos
def get_numpy_iterator(self): raise NotImplementedError
Get a Python iterable for the `DataAdapter`, that yields NumPy arrays. Returns: A Python iterator.
github-repos
def save_model_to_hdf5(model, filepath, overwrite=True, include_optimizer=True): if h5py is None: raise ImportError('`save_model` requires h5py.') if len(model.weights) != len(model._undeduplicated_weights): logging.warning('Found duplicated `Variable`s in Model\'s `weights`. This is usually caused by `Variable`s being shared by Layers in the Model. These `Variable`s will be treated as separate `Variable`s when the Model is restored. To avoid this, please save with `save_format="tf"`.') if not isinstance(filepath, h5py.File): if not overwrite and os.path.isfile(filepath): proceed = ask_to_proceed_with_overwrite(filepath) if not proceed: return dirpath = os.path.dirname(filepath) if not os.path.exists(dirpath): gfile.MakeDirs(dirpath) f = h5py.File(filepath, mode='w') opened_new_file = True else: f = filepath opened_new_file = False try: model_metadata = saving_utils.model_metadata(model, include_optimizer) for k, v in model_metadata.items(): if isinstance(v, (dict, list, tuple)): f.attrs[k] = json.dumps(v, default=json_utils.get_json_type).encode('utf8') else: f.attrs[k] = v model_weights_group = f.create_group('model_weights') model_layers = model.layers save_weights_to_hdf5_group(model_weights_group, model_layers) if include_optimizer and model.optimizer and (not isinstance(model.optimizer, optimizer_v1.TFOptimizer)): save_optimizer_weights_to_hdf5_group(f, model.optimizer) f.flush() finally: if opened_new_file: f.close()
Saves a model to a HDF5 file. The saved model contains: - the model's configuration (topology) - the model's weights - the model's optimizer's state (if any) Thus the saved model can be reinstantiated in the exact same state, without any of the code used for model definition or training. Args: model: Keras model instance to be saved. filepath: One of the following: - String, path where to save the model - `h5py.File` object where to save the model overwrite: Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt. include_optimizer: If True, save optimizer's state together. Raises: ImportError: if h5py is not available.
github-repos
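In practice this saver is reached through the public Keras API rather than called directly:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
model.compile(optimizer='adam', loss='mse')
model.save('model.h5')                          # .h5 suffix selects the HDF5 saver
restored = tf.keras.models.load_model('model.h5')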
def convert_ids_to_tokens(self, ids: Union[int, list[int]], skip_special_tokens: bool=False) -> Union[str, list[str]]: if isinstance(ids, int): return self._tokenizer.id_to_token(ids) tokens = [] ids_to_skip = set(self.all_special_ids) if skip_special_tokens else set() for index in ids: index = int(index) if index in ids_to_skip: continue tokens.append(self._tokenizer.id_to_token(index)) return tokens
Converts a single index or a sequence of indices in a token or a sequence of tokens, using the vocabulary and added tokens. Args: ids (`int` or `List[int]`): The token id (or token ids) to convert to tokens. skip_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not to remove special tokens in the decoding. Returns: `str` or `List[str]`: The decoded token(s).
github-repos
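Typical round trip through a pretrained fast tokenizer:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('bert-base-uncased')
ids = tok.encode('hello world', add_special_tokens=True)
print(tok.convert_ids_to_tokens(ids))
# ['[CLS]', 'hello', 'world', '[SEP]']
print(tok.convert_ids_to_tokens(ids, skip_special_tokens=True))
# ['hello', 'world']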
def record_batch_metrics(self, requests_in_batch: List) -> None: if not _has_opentelemetry or not requests_in_batch: return decode_tokens = 0 prefill_tokens = 0 for state in requests_in_batch: if state.status == RequestStatus.DECODING: decode_tokens += 1 elif state.status in [RequestStatus.PREFILLING, RequestStatus.PREFILLING_SPLIT]: prefill_tokens += len(state.prompt_ids) total_batch_tokens = decode_tokens + prefill_tokens try: if prefill_tokens > 0: self.prefill_tokens_counter.add(prefill_tokens) if decode_tokens > 0: self.decode_tokens_counter.add(decode_tokens) if prefill_tokens > 0: ratio = decode_tokens / prefill_tokens self.decode_prefill_ratio_gauge.set(ratio) fill_percentage = total_batch_tokens / self.max_batch_tokens * 100.0 self.batch_fill_percentage_histogram.record(fill_percentage) logger.debug(f'Batch metrics: {decode_tokens} decode tokens, {prefill_tokens} prefill tokens, batch fill: {fill_percentage:.2f}% ({total_batch_tokens}/{self.max_batch_tokens})') except Exception as e: logger.warning(f'Failed to record batch metrics: {e}')
Record metrics about the batch composition including decode/prefill ratio and batch fill percentage. Args: requests_in_batch: List of request states in the current batch
github-repos
def remove(self, *dic): dicList = list(flatten(dic)) for d in dicList: di = [] for k in d: di.append(Pair(k, IntegerSingle(d[k]))) dictSingle = DictSingle(di) self._remove([dictSingle], self.l)
Remove a calendar config.

Args:
    *dic (dict): dictionary with format {'Day': 12, 'Hour': 34}. Available keys are Month, Day, Weekday, Hour, Minute. *Note the uppercase.* You can use gen(), genMix() to generate complex config dictionaries.
juraj-google-style
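Following the docstring's key format, a call might look like this (the owning `schedule` object is assumed to exist):

# Remove the entry matching the docstring's example config.
schedule.remove({'Day': 12, 'Hour': 34})

# Several configs can be removed in one call.
schedule.remove({'Month': 1, 'Day': 1}, {'Weekday': 0, 'Minute': 30})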
def get_random_numeric_tensor(self, dtype=None, min_size=_MIN_SIZE, max_size=_MAX_SIZE, min_val=_MIN_INT, max_val=_MAX_INT): if max_size > 8: raise tf.errors.InvalidArgumentError(None, None, 'Given size of {} will result in an OOM error'.format(max_size)) seed = self.get_int() shape = self.get_int_list(min_length=min_size, max_length=max_size, min_int=min_size, max_int=max_size) if dtype is None: dtype = self.get_tf_dtype(allowed_set=_TF_RANDOM_DTYPES) elif dtype not in _TF_RANDOM_DTYPES: raise tf.errors.InvalidArgumentError(None, None, 'Given dtype {} is not accepted in get_random_numeric_tensor'.format(dtype)) return tf.random.uniform(shape=shape, minval=min_val, maxval=max_val, dtype=dtype, seed=seed)
Return a tensor of random shape and values.

Generated tensors are capped at dimension sizes of 8, as 2^32 bytes of requested memory crashes the fuzzer (see b/34190148).

Returns only types that tf.random.uniform can generate. If you need a different type, consider using tf.cast.

Args:
  dtype: Type of tensor, must be one of the following types: float16, float32, float64, int32, or int64
  min_size: Minimum size of returned tensor
  max_size: Maximum size of returned tensor
  min_val: Minimum value in returned tensor
  max_val: Maximum value in returned tensor

Returns:
  Tensor of random shape filled with uniformly random numeric values.
github-repos
def export_gpx_file(self): gpx = create_elem('gpx', GPX_ELEM_ATTRIB) if (not self.metadata.bounds): self.metadata.bounds = [j for i in self for j in i] gpx.append(self.metadata.togpx()) track = create_elem('trk') gpx.append(track) for segment in self: chunk = create_elem('trkseg') track.append(chunk) for place in segment: chunk.append(place.togpx()) return etree.ElementTree(gpx)
Generate GPX element tree from ``Trackpoints``. Returns: etree.ElementTree: GPX element tree depicting ``Trackpoints`` objects
codesearchnet
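The returned element tree serializes like any other (the `trackpoints` container is assumed to be populated):

tree = trackpoints.export_gpx_file()
tree.write('route.gpx', encoding='utf-8', xml_declaration=True)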
def __init__(self, visitor): self._visitor = visitor self._root_name = 'tf' self._private_map = {'tf': ['compiler', 'core', 'security', 'dtensor', 'python', 'tsl'], 'tf.flags': ['cpp_flags']} self._do_not_descend_map = {'tf': ['examples', 'flags', 'platform', 'pywrap_tensorflow', 'user_ops', 'tools', 'tensorboard'], 'tf.app': ['flags'], 'tf.test': ['mock']}
Constructor. `visitor` should be a callable suitable as a visitor for `traverse`. It will be called only for members of the public TensorFlow API. Args: visitor: A visitor to call for the public API.
github-repos
def from_arrays(cls, path, trn, val, bs=64, tfms=(None, None), classes=None, num_workers=4, test=None, continuous=False): f = (ArraysIndexRegressionDataset if continuous else ArraysIndexDataset) datasets = cls.get_ds(f, trn, val, tfms, test=test) return cls(path, datasets, bs, num_workers, classes=classes)
Read in images and their labels given as numpy arrays Arguments: path: a root path of the data (used for storing trained models, precomputed values, etc) trn: a tuple of training data matrix and target label/classification array (e.g. `trn=(x,y)` where `x` has the shape of `(5000, 784)` and `y` has the shape of `(5000,)`) val: a tuple of validation data matrix and target label/classification array. bs: batch size tfms: transformations (for data augmentations). e.g. output of `tfms_from_model` classes: a list of all labels/classifications num_workers: a number of workers test: a matrix of test data (the shape should match `trn[0]`) Returns: ImageClassifierData
codesearchnet
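A sketch with synthetic arrays matching the shapes mentioned in the docstring (old fastai 0.7-style API; path and class list are illustrative):

import numpy as np

x_trn = np.random.rand(5000, 784).astype(np.float32)
y_trn = np.random.randint(0, 10, 5000)
x_val = np.random.rand(1000, 784).astype(np.float32)
y_val = np.random.randint(0, 10, 1000)

data = ImageClassifierData.from_arrays(
    'data/mnist/', (x_trn, y_trn), (x_val, y_val),
    bs=64, classes=list(range(10)))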
def start_dag(self, dag, *, data=None): return self._client.send( Request( action='start_dag', payload={'name': dag.name if isinstance(dag, Dag) else dag, 'data': data if isinstance(data, MultiTaskData) else None} ) ).payload['dag_name']
Schedule the execution of a dag by sending a signal to the workflow. Args: dag (Dag, str): The dag object or the name of the dag that should be started. data (MultiTaskData): The data that should be passed on to the new dag. Returns: str: The name of the successfully started dag.
juraj-google-style
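From inside a running workflow, starting a child dag might look like this (names and the data object are illustrative):

# `signal` is an object exposing start_dag; 'etl' is the name of a
# dag registered with the workflow.
started_name = signal.start_dag('etl', data=task_data)
print('started dag:', started_name)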
def forward(self, device_port, local_port=None): port = self._adb_device.forward(device_port, local_port) return (self._host, port)
Forward a device port to the local machine.

Args:
    device_port: port inside the device
    local_port: port on the PC; if None, a random free port will be picked.

Returns:
    tuple, (host, local_port)
juraj-google-style
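Typical use lets the host side pick a free port (construction of the device object is assumed):

host, port = device.forward(8080)   # device port 8080 -> auto-chosen local port
print('device service reachable at %s:%d' % (host, port))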
def pixel_image(shape, sd=None, init_val=None):
    if sd is not None and init_val is not None:
        warnings.warn(
            "`pixel_image` received both an initial value and a sd argument. Ignoring sd in favor of the supplied initial value."
        )
    sd = sd or 0.01
    # `init_val or ...` would fail here: truth-testing a numpy array raises
    # ValueError, so check against None explicitly.
    if init_val is None:
        init_val = np.random.normal(size=shape, scale=sd).astype(np.float32)
    return tf.Variable(init_val)
A naive, pixel-based image parameterization. Defaults to a random initialization, but can take a supplied init_val argument instead. Args: shape: shape of resulting image, [batch, width, height, channels]. sd: standard deviation of param initialization noise. init_val: an initial value to use instead of a random initialization. Needs to have the same shape as the supplied shape argument. Returns: tensor with shape from first argument.
juraj-google-style
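A sketch of parameterizing a single image (lucid targets TF1-style graphs, hence the session):

import tensorflow as tf

param = pixel_image([1, 128, 128, 3], sd=0.5)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    img = sess.run(param)
print(img.shape)   # (1, 128, 128, 3)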
def _get_bit(self, n, hash_bytes):
    # Locate the byte containing bit n, then shift so the requested bit
    # (counted MSB-first within each byte) lands in the lowest position.
    if hash_bytes[n // 8] >> (7 - (n % 8)) & 1 == 1:
        return True
    return False
Determines if the n-th bit of passed bytes is 1 or 0.

Arguments:
    n - Zero-based index of the bit to check, counted from the most significant bit of the first byte.
    hash_bytes - List of hash byte values for which the n-th bit value should be checked. Each element of the list should be an integer from 0 to 255.

Returns:
    True if the bit is 1. False if the bit is 0.
juraj-google-style
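The MSB-first indexing can be verified by hand, calling the unbound function with a dummy `self`:

hash_bytes = [0b10000001, 0b00000000]
assert _get_bit(None, 0, hash_bytes) is True    # leading bit of byte 0
assert _get_bit(None, 7, hash_bytes) is True    # trailing bit of byte 0
assert _get_bit(None, 8, hash_bytes) is False   # first bit of byte 1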
def update_dns_zone_record(env, zone_id, **kwargs): client = boto3.Session(profile_name=env).client('route53') response = {} hosted_zone_info = client.get_hosted_zone(Id=zone_id) zone_name = hosted_zone_info['HostedZone']['Name'].rstrip('.') dns_name = kwargs.get('dns_name') if (dns_name and dns_name.endswith(zone_name)): dns_name_aws = kwargs.get('dns_name_aws') dns_json = get_template(template_file='infrastructure/dns_upsert.json.j2', **kwargs) LOG.info('Attempting to create DNS record %s (%s) in Hosted Zone %s (%s)', dns_name, dns_name_aws, zone_id, zone_name) try: response = client.change_resource_record_sets(HostedZoneId=zone_id, ChangeBatch=json.loads(dns_json)) LOG.info('Upserted DNS record %s (%s) in Hosted Zone %s (%s)', dns_name, dns_name_aws, zone_id, zone_name) except botocore.exceptions.ClientError as error: LOG.info('Error creating DNS record %s (%s) in Hosted Zone %s (%s)', dns_name, dns_name_aws, zone_id, zone_name) LOG.debug(error) else: LOG.info('Skipping creating DNS record %s in non-matching Hosted Zone %s (%s)', dns_name, zone_id, zone_name) LOG.debug('Route53 JSON Response: \n%s', pformat(response))
Create a Route53 CNAME record in _env_ zone. Args: env (str): Deployment environment. zone_id (str): Route53 zone id. Keyword Args: dns_name (str): FQDN of application's dns entry to add/update. dns_name_aws (str): FQDN of AWS resource dns_ttl (int): DNS time-to-live (ttl)
codesearchnet
def __init__(self, input_queue, output_queue): super(WorkflowThread, self).__init__(input_queue, output_queue) self.pending = PendingBarriers() self.worker_threads = [] self.register(WorkflowItem, input_queue)
Initializer. Args: input_queue: Queue this worker consumes work from. These should be WorkflowItems to process, or any WorkItems registered with this class using the register() method. output_queue: Queue where this worker puts finished work items, if any.
juraj-google-style