Dataset columns:
  code: string, lengths 20 to 4.93k
  docstring: string, lengths 33 to 1.27k
  source: string, 3 classes
def AddNewSpecification(self, identifier):
    if identifier in self._format_specifications:
        raise KeyError(
            'Format specification {0:s} is already defined in store.'.format(
                identifier))
    self._format_specifications[identifier] = FormatSpecification(identifier)
    return self._format_specifications[identifier]
Adds a new format specification. Args: identifier (str): format identifier, which should be unique for the store. Returns: FormatSpecification: format specification. Raises: KeyError: if the store already contains a specification with the same identifier.
juraj-google-style
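A minimal, self-contained usage sketch of the entry above; the `FormatSpecificationStore` wrapper and the `FormatSpecification` stand-in are assumptions, only the `AddNewSpecification` body comes from the source entry.

```python
class FormatSpecification:
    # Stand-in for the real specification class (assumed shape).
    def __init__(self, identifier):
        self.identifier = identifier


class FormatSpecificationStore:
    # Hypothetical owner of the _format_specifications dict.
    def __init__(self):
        self._format_specifications = {}

    def AddNewSpecification(self, identifier):
        if identifier in self._format_specifications:
            raise KeyError(
                'Format specification {0:s} is already defined in store.'.format(
                    identifier))
        self._format_specifications[identifier] = FormatSpecification(identifier)
        return self._format_specifications[identifier]


store = FormatSpecificationStore()
spec = store.AddNewSpecification('regf')
assert spec.identifier == 'regf'
# Registering the same identifier twice raises KeyError.
```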
def createGroup(self, group, vendorSpecific=None):
    response = self.createGroupResponse(group, vendorSpecific)
    return self._read_boolean_response(response)
See Also: createGroupResponse() Args: group: vendorSpecific: Returns:
juraj-google-style
def __init__(self, pyregf_key, key_path=''):
    super(REGFWinRegistryKey, self).__init__(key_path=key_path)
    self._pyregf_key = pyregf_key
Initializes a Windows Registry key object. Args: pyregf_key (pyregf.key): pyregf key object. key_path (Optional[str]): Windows Registry key path.
juraj-google-style
class TFBaseModelOutputWithPoolingAndNoAttention(ModelOutput):
    last_hidden_state: Optional[tf.Tensor] = None
    pooler_output: Optional[tf.Tensor] = None
    hidden_states: Optional[Tuple[tf.Tensor, ...]] = None
Base class for model's outputs that also contains a pooling of the last hidden states. Args: last_hidden_state (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`): Sequence of hidden-states at the output of the last layer of the model. pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`): Last layer hidden-state after a pooling operation on the spatial dimensions. hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, num_channels, height, width)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
github-repos
def run_step(context):
    logger.debug('started')
    context.assert_key_has_value(key='contextClear', caller=__name__)
    for k in context['contextClear']:
        logger.debug(f'removing {k} from context')
        context.pop(k, None)
        logger.info(f'removed {k} from context')
    logger.debug('done')
Remove specified keys from context. Args: Context is a dictionary or dictionary-like. context['contextClear'] must exist. It's a dictionary. Will iterate context['contextClear'] and remove those keys from context. For example, say input context is: key1: value1 key2: value2 key3: value3 key4: value4 contextClear: - key2 - key4 - contextClear This will result in return context: key1: value1 key3: value3
codesearchnet
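A sketch of the contextClear semantics, with a plain dict standing in for the framework's context object (the real context also provides `assert_key_has_value`, which a dict does not):

```python
context = {
    'key1': 'value1', 'key2': 'value2',
    'key3': 'value3', 'key4': 'value4',
    'contextClear': ['key2', 'key4', 'contextClear'],
}
# Iterate over a copy: 'contextClear' removes its own key mid-loop.
for k in list(context['contextClear']):
    context.pop(k, None)
assert context == {'key1': 'value1', 'key3': 'value3'}
```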
def GetEntries(self, parser_mediator, top_level=None, **unused_kwargs):
    for root, key, datetime_value in interface.RecurseKey(top_level):
        if not isinstance(datetime_value, datetime.datetime):
            continue
        event_data = plist_event.PlistTimeEventData()
        event_data.key = key
        event_data.root = root
        event = time_events.PythonDatetimeEvent(
            datetime_value, definitions.TIME_DESCRIPTION_WRITTEN)
        parser_mediator.ProduceEventWithEventData(event, event_data)
Simple method to extract date values from a Plist. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. top_level (dict[str, object]): plist top-level key.
codesearchnet
def CreateWithLock(self, urn, aff4_type, token=None, age=NEWEST_TIME,
                   force_new_version=True, blocking=True,
                   blocking_lock_timeout=10, blocking_sleep_interval=1,
                   lease_time=100):
    if not data_store.AFF4Enabled():
        raise NotImplementedError('AFF4 data store has been disabled.')
    transaction = self._AcquireLock(
        urn, blocking=blocking, blocking_lock_timeout=blocking_lock_timeout,
        blocking_sleep_interval=blocking_sleep_interval, lease_time=lease_time)
    return self.Create(
        urn, aff4_type, mode='rw', token=token, age=age,
        force_new_version=force_new_version, transaction=transaction)
Creates a new object and locks it. Similar to OpenWithLock below, this creates a locked object. The difference is that when you call CreateWithLock, the object does not yet have to exist in the data store. Args: urn: The object to create. aff4_type: The desired type for this object. token: The Security Token to use for opening this item. age: The age policy used to build this object. Only makes sense when mode has "r". force_new_version: Forces the creation of a new object in the data_store. blocking: When True, wait and repeatedly try to grab the lock. blocking_lock_timeout: Maximum wait time when sync is True. blocking_sleep_interval: Sleep time between lock grabbing attempts. Used when blocking is True. lease_time: Maximum time the object stays locked. Lock will be considered released when this time expires. Returns: An AFF4 object of the desired type and mode. Raises: AttributeError: If the mode is invalid.
codesearchnet
def from_config(cls, config):
    config = config.copy()
    function_keys = [
        'kernel_posterior_fn',
        'kernel_posterior_tensor_fn',
        'kernel_prior_fn',
        'kernel_divergence_fn',
        'bias_posterior_fn',
        'bias_posterior_tensor_fn',
        'bias_prior_fn',
        'bias_divergence_fn',
    ]
    for function_key in function_keys:
        serial = config[function_key]
        function_type = config.pop(function_key + '_type')
        if serial is not None:
            config[function_key] = tfp_layers_util.deserialize_function(
                serial, function_type=function_type)
    return cls(**config)
Creates a layer from its config. This method is the reverse of `get_config`, capable of instantiating the same layer from the config dictionary. Args: config: A Python dictionary, typically the output of `get_config`. Returns: layer: A layer instance.
juraj-google-style
def HandleNetworkInterfaces(self, result):
    network_interfaces = self._ExtractInterfaceMetadata(result)
    if self.network_setup_enabled:
        self.network_setup.EnableNetworkInterfaces(
            [interface.name for interface in network_interfaces[1:]])
    for interface in network_interfaces:
        if self.ip_forwarding_enabled:
            self.ip_forwarding.HandleForwardedIps(
                interface.name, interface.forwarded_ips, interface.ip)
Called when network interface metadata changes. Args: result: dict, the metadata response with the network interfaces.
codesearchnet
def scatter_max(self, sparse_delta, use_locking=False, name=None):
    if not isinstance(sparse_delta, indexed_slices.IndexedSlices):
        raise TypeError('sparse_delta is not IndexedSlices: %s' % sparse_delta)
    return gen_state_ops.scatter_max(
        self._variable, sparse_delta.indices, sparse_delta.values,
        use_locking=use_locking, name=name)
Updates this variable with the max of `tf.IndexedSlices` and itself. Args: sparse_delta: `tf.IndexedSlices` to use as an argument of max with this variable. use_locking: If `True`, use locking during the operation. name: the name of the operation. Returns: A `Tensor` that will hold the new value of this variable after the scattered maximization has completed. Raises: TypeError: if `sparse_delta` is not an `IndexedSlices`.
github-repos
def join(self, other, *args, **kwarg):
    event = Event(*args, **kwarg)
    if self.intersects(other):
        if self.starts_within(other):
            event.begin = other.begin
        else:
            event.begin = self.begin
        if self.ends_within(other):
            event.end = other.end
        else:
            event.end = self.end
        return event
    raise ValueError(
        "Cannot join {} with {}: they don't intersect.".format(self, other))
Create a new event which covers the time range of two intersecting events All extra parameters are passed to the Event constructor. Args: other: the other event Returns: a new Event instance
codesearchnet
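A toy illustration of the join semantics (the union of two overlapping ranges), using a minimal stand-in for the Event type; the real class also provides `intersects`, `starts_within` and `ends_within`:

```python
from dataclasses import dataclass


@dataclass
class Interval:
    # Assumed shape: begin/end as comparable scalars.
    begin: float
    end: float

    def intersects(self, other):
        return self.begin <= other.end and other.begin <= self.end


a, b = Interval(1, 5), Interval(3, 8)
assert a.intersects(b)
joined = Interval(min(a.begin, b.begin), max(a.end, b.end))
assert (joined.begin, joined.end) == (1, 8)
```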
def apply_func_to_select_indices(self, axis, func, indices,
                                 keep_remaining=False):
    if self.partitions.size == 0:
        return np.array([[]])
    if isinstance(indices, dict):
        dict_indices = indices
        indices = list(indices.keys())
    else:
        dict_indices = None
    if not isinstance(indices, list):
        indices = [indices]
    partitions_dict = self._get_dict_of_block_index(
        axis, indices, ordered=not keep_remaining)
    if not axis:
        partitions_for_apply = self.partitions.T
    else:
        partitions_for_apply = self.partitions
    if dict_indices is not None:

        def local_to_global_idx(partition_id, local_idx):
            if partition_id == 0:
                return local_idx
            if axis == 0:
                cumulative_axis = np.cumsum(self.block_widths)
            else:
                cumulative_axis = np.cumsum(self.block_lengths)
            return cumulative_axis[partition_id - 1] + local_idx

        if not keep_remaining:
            result = np.array([
                self._apply_func_to_list_of_partitions(
                    func,
                    partitions_for_apply[o_idx],
                    func_dict={
                        i_idx: dict_indices[local_to_global_idx(o_idx, i_idx)]
                        for i_idx in list_to_apply if i_idx >= 0
                    })
                for o_idx, list_to_apply in partitions_dict
            ])
        else:
            result = np.array([
                partitions_for_apply[i] if i not in partitions_dict
                else self._apply_func_to_list_of_partitions(
                    func,
                    partitions_for_apply[i],
                    func_dict={
                        idx: dict_indices[local_to_global_idx(i, idx)]
                        for idx in partitions_dict[i] if idx >= 0
                    })
                for i in range(len(partitions_for_apply))
            ])
    elif not keep_remaining:
        result = np.array([
            self._apply_func_to_list_of_partitions(
                func, partitions_for_apply[idx], internal_indices=list_to_apply)
            for idx, list_to_apply in partitions_dict
        ])
    else:
        result = np.array([
            partitions_for_apply[i] if i not in partitions_dict
            else self._apply_func_to_list_of_partitions(
                func, partitions_for_apply[i],
                internal_indices=partitions_dict[i])
            for i in range(len(partitions_for_apply))
        ])
    return (self.__constructor__(result.T) if not axis
            else self.__constructor__(result))
Applies a function to select indices. Note: Your internal function must take a kwarg `internal_indices` for this to work correctly. This prevents information leakage of the internal index to the external representation. Args: axis: The axis to apply the func over. func: The function to apply to these indices. indices: The indices to apply the function to. keep_remaining: Whether or not to keep the other partitions. Some operations may want to drop the remaining partitions and keep only the results. Returns: A new BaseFrameManager object, the type of object that called this.
codesearchnet
def bucket(self, experiment, user_id, bucketing_id):
    if not experiment:
        return None
    if experiment.groupPolicy in GROUP_POLICIES:
        group = self.config.get_group(experiment.groupId)
        if not group:
            return None
        user_experiment_id = self.find_bucket(
            bucketing_id, experiment.groupId, group.trafficAllocation)
        if not user_experiment_id:
            self.config.logger.info('User "%s" is in no experiment.' % user_id)
            return None
        if user_experiment_id != experiment.id:
            self.config.logger.info(
                'User "%s" is not in experiment "%s" of group %s.' % (
                    user_id, experiment.key, experiment.groupId))
            return None
        self.config.logger.info(
            'User "%s" is in experiment %s of group %s.' % (
                user_id, experiment.key, experiment.groupId))
    variation_id = self.find_bucket(
        bucketing_id, experiment.id, experiment.trafficAllocation)
    if variation_id:
        variation = self.config.get_variation_from_id(
            experiment.key, variation_id)
        self.config.logger.info(
            'User "%s" is in variation "%s" of experiment %s.' % (
                user_id, variation.key, experiment.key))
        return variation
    self.config.logger.info('User "%s" is in no variation.' % user_id)
    return None
For a given experiment and bucketing ID determines variation to be shown to user. Args: experiment: Object representing the experiment for which user is to be bucketed. user_id: ID for user. bucketing_id: ID to be used for bucketing the user. Returns: Variation in which user with ID user_id will be put in. None if no variation.
codesearchnet
def from_np_datetimes(np_datetimes):
    ordinals = tf.constant(np_datetimes, dtype=tf.int32) + _ORDINAL_OF_1_1_1970
    return from_ordinals(ordinals, validate=False)
Creates DateTensor from a Numpy array of dtype datetime64. Args: np_datetimes: Numpy array of dtype datetime64. Returns: DateTensor object. #### Example ```python import datetime import numpy as np date_tensor_np = np.array( [[datetime.date(2019, 3, 25), datetime.date(2020, 6, 2)], [datetime.date(2020, 9, 15), datetime.date(2020, 12, 27)]], dtype=np.datetime64) date_tensor = tff.datetime.dates_from_np_datetimes(date_tensor_np) ```
github-repos
def read_from_hdx(identifier, configuration=None):
    showcase = Showcase(configuration=configuration)
    result = showcase._load_from_hdx('showcase', identifier)
    if result:
        return showcase
    return None
Reads the showcase given by identifier from HDX and returns Showcase object Args: identifier (str): Identifier of showcase configuration (Optional[Configuration]): HDX configuration. Defaults to global configuration. Returns: Optional[Showcase]: Showcase object if successful read, None if not
juraj-google-style
def fit(self, **kwargs):
    if self.fit_method is not None:
        fit_args = self._fit_params.copy()
        fit_args.update(kwargs)
        getattr(self.instance, self.fit_method)(**fit_args)
Call the fit method of the primitive. The given keyword arguments will be passed directly to the `fit` method of the primitive instance specified in the JSON annotation. If any of the arguments expected by the produce method had been given during the MLBlock initialization, they will be passed as well. If the fit method was not specified in the JSON annotation, or if the primitive is a simple function, this will be a noop. Args: **kwargs: Any given keyword argument will be directly passed to the primitive fit method. Raises: TypeError: A `TypeError` might be raised if any argument not expected by the primitive fit method is given.
codesearchnet
def _get_candidates(self):
    candidates = np.where(self.dpp_vector == 0)
    return None if len(candidates[0]) == 0 else candidates[0]
Finds the pipelines that are not yet tried. Returns: np.array: Indices corresponding to columns in ``dpp_matrix`` that haven't been tried on ``X``. ``None`` if all pipelines have been tried on X.
codesearchnet
def bounter(size_mb=None, need_iteration=True, need_counts=True,
            log_counting=None):
    if not need_counts:
        return CardinalityEstimator()
    if size_mb is None:
        raise ValueError('Max size in MB must be provided.')
    if need_iteration:
        if log_counting:
            raise ValueError(
                'Log counting is only supported with CMS implementation '
                '(need_iteration=False).')
        return HashTable(size_mb=size_mb)
    else:
        return CountMinSketch(size_mb=size_mb, log_counting=log_counting)
Factory method for bounter implementation. Args: size_mb (int): Desired memory footprint of the counter. need_iteration (Bool): With `True`, create a `HashTable` implementation which can iterate over inserted key/value pairs. With `False`, create a `CountMinSketch` implementation which performs better in limited-memory scenarios, but does not support iteration over elements. need_counts (Bool): With `True`, construct the structure normally. With `False`, ignore all remaining parameters and create a minimalistic cardinality counter based on hyperloglog which only takes 64KB memory. log_counting (int): Counting to use with `CountMinSketch` implementation. Accepted values are `None` (default counting with 32-bit integers), 1024 (16-bit), 8 (8-bit). See `CountMinSketch` documentation for details. Raises ValueError if not `None` and `need_iteration` is `True`.
codesearchnet
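Assuming this is the factory from the published `bounter` package, usage looks roughly like this:

```python
from bounter import bounter

counts = bounter(size_mb=64)          # default: iterable HashTable variant
counts.update(['tea', 'coffee', 'tea'])
print(counts['tea'])                  # 2

# CountMinSketch variant with 8-bit log counters; not iterable.
cms = bounter(size_mb=64, need_iteration=False, log_counting=8)
cms.update(['tea'])
```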
def mouse_event_callback(self, window, xpos, ypos):
    self.example.mouse_position_event(xpos, ypos)
Mouse event callback from glfw. Translates the events forwarding them to :py:func:`cursor_event`. Args: window: The window xpos: viewport x pos ypos: viewport y pos
juraj-google-style
def _modeIsValid(self, mode):
    try:
        return mode in self.modes.keys()
    except AttributeError:
        # The original repeated this membership test twice; once is enough.
        return mode in self.isValidMode.keys()
Verification of whether the mode is a correct option to be used. Args: ----- mode: Mode to be executed. Return: ------- True if the mode exists in the three main folders.
codesearchnet
def _build_url_filters(cls, session: AppSession):
    args = session.args
    filters = [
        HTTPSOnlyFilter() if args.https_only else SchemeFilter(),
        RecursiveFilter(
            enabled=args.recursive, page_requisites=args.page_requisites),
        FollowFTPFilter(follow=args.follow_ftp),
    ]
    if args.no_parent:
        filters.append(ParentFilter())
    if args.domains or args.exclude_domains:
        filters.append(BackwardDomainFilter(args.domains, args.exclude_domains))
    if args.hostnames or args.exclude_hostnames:
        filters.append(HostnameFilter(args.hostnames, args.exclude_hostnames))
    if args.tries:
        filters.append(TriesFilter(args.tries))
    if (args.level and args.recursive) or args.page_requisites_level:
        filters.append(LevelFilter(
            args.level, inline_max_depth=args.page_requisites_level))
    if args.accept_regex or args.reject_regex:
        filters.append(RegexFilter(args.accept_regex, args.reject_regex))
    if args.include_directories or args.exclude_directories:
        filters.append(DirectoryFilter(
            args.include_directories, args.exclude_directories))
    if args.accept or args.reject:
        filters.append(BackwardFilenameFilter(args.accept, args.reject))
    return filters
Create the URL filter instances. Returns: A list of URL filter instances
codesearchnet
def CheckAddressState(self, script_hash):
    for key, contract in self._contracts.items():
        if contract.ScriptHash.ToBytes() == script_hash.ToBytes():
            return AddressState.InWallet
    for watch in self._watch_only:
        if watch == script_hash:
            return AddressState.InWallet | AddressState.WatchOnly
    return AddressState.NoState
Determine the address state of the provided script hash. Args: script_hash (UInt160): a script hash to determine the address state of. Returns: AddressState: the address state.
juraj-google-style
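The return value combines bit flags, so callers test membership with bitwise AND. A sketch with assumed flag values (the actual enum values in the source wallet code may differ):

```python
from enum import IntFlag


class AddressState(IntFlag):
    NoState = 0
    InWallet = 1
    WatchOnly = 2


state = AddressState.InWallet | AddressState.WatchOnly
assert state & AddressState.InWallet     # in the wallet...
assert state & AddressState.WatchOnly    # ...but watch-only
```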
def convert(self):
    saved_model_convert_result = self._convert_as_saved_model()
    if saved_model_convert_result:
        return saved_model_convert_result
    return super(TFLiteKerasModelConverter, self).convert()
Converts a Keras model based on instance variables. Returns: The converted data in serialized format, either a TFLite Flatbuffer or a Graphviz graph depending on value in `output_format`. Raises: ValueError: Input shape is not specified. None value for dimension in input_tensor.
github-repos
def clean_registration_ids(self, registration_ids=None):
    # Use None instead of a mutable [] default to avoid shared state.
    registration_ids = registration_ids or []
    valid_registration_ids = []
    for registration_id in registration_ids:
        details = self.registration_info_request(registration_id)
        if details.status_code == 200:
            valid_registration_ids.append(registration_id)
    return valid_registration_ids
Checks registration ids and excludes inactive ids Args: registration_ids (list, optional): list of ids to be cleaned Returns: list: cleaned registration ids
juraj-google-style
def GetKeyByPath(self, key_path):
    key_path_upper = key_path.upper()
    if key_path_upper.startswith(self._key_path_prefix_upper):
        relative_key_path = key_path[self._key_path_prefix_length:]
    elif key_path.startswith(definitions.KEY_PATH_SEPARATOR):
        relative_key_path = key_path
        key_path = ''.join([self._key_path_prefix, key_path])
    else:
        return None
    try:
        regf_key = self._regf_file.get_key_by_path(relative_key_path)
    except IOError:
        regf_key = None
    if not regf_key:
        return None
    return REGFWinRegistryKey(regf_key, key_path=key_path)
Retrieves the key for a specific path. Args: key_path (str): Windows Registry key path. Returns: WinRegistryKey: Registry key or None if not available.
juraj-google-style
def _AddsAnalysisProcessStatusTableRow(self, process_status, table_view):
    used_memory = self._FormatSizeInUnitsOf1024(process_status.used_memory)
    events = ''
    if (process_status.number_of_consumed_events is not None and
            process_status.number_of_consumed_events_delta is not None):
        events = '{0:d} ({1:d})'.format(
            process_status.number_of_consumed_events,
            process_status.number_of_consumed_events_delta)
    event_tags = ''
    if (process_status.number_of_produced_event_tags is not None and
            process_status.number_of_produced_event_tags_delta is not None):
        event_tags = '{0:d} ({1:d})'.format(
            process_status.number_of_produced_event_tags,
            process_status.number_of_produced_event_tags_delta)
    reports = ''
    if (process_status.number_of_produced_reports is not None and
            process_status.number_of_produced_reports_delta is not None):
        reports = '{0:d} ({1:d})'.format(
            process_status.number_of_produced_reports,
            process_status.number_of_produced_reports_delta)
    table_view.AddRow([
        process_status.identifier, process_status.pid, process_status.status,
        used_memory, events, event_tags, reports])
Adds an analysis process status table row. Args: process_status (ProcessStatus): processing status. table_view (CLITabularTableView): table view.
codesearchnet
def crop_image_to_patches(self, images: 'torch.Tensor', min_patches: int,
                          max_patches: int, use_thumbnail: bool = True,
                          patch_size: Optional[Union[Tuple, int, dict]] = None,
                          interpolation: Optional['F.InterpolationMode'] = None):
    patch_size_height, patch_size_width = (patch_size.height, patch_size.width)
    original_height, original_width = images.shape[-2:]
    num_columns, num_rows = get_optimal_tiled_canvas(
        (original_height, original_width),
        (patch_size_height, patch_size_width), min_patches, max_patches)
    target_width = patch_size_width * num_columns
    target_height = patch_size_height * num_rows
    num_blocks = num_columns * num_rows
    resized_image = self.resize(
        images, SizeDict(height=target_height, width=target_width),
        interpolation=interpolation)
    processed_images = []
    for i in range(num_blocks):
        column = i % num_columns
        row = i // num_columns  # integer division; the original `row = i` was a bug
        box = (column * patch_size_width, row * patch_size_height,
               (column + 1) * patch_size_width, (row + 1) * patch_size_height)
        patch_image = resized_image[..., box[1]:box[3], box[0]:box[2]]
        processed_images.append(patch_image)
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = self.resize(images, patch_size,
                                    interpolation=interpolation)
        processed_images.append(thumbnail_img)
    processed_images = torch.stack(
        processed_images, dim=0).transpose(0, 1).contiguous()
    return processed_images
Crop the images to patches and return the cropped patches. The number of patches and their grid arrangement are determined by the original image size, the target patch size and the minimum and maximum number of patches. The aspect ratio of the patches grid is chosen to be the closest to the original image aspect ratio. Args: images (`torch.Tensor`): The images to be cropped. min_patches (`int`): The minimum number of patches to be extracted from the image. max_patches (`int`): The maximum number of patches to be extracted from the image. use_thumbnail (`bool`, *optional*, defaults to `True`): Whether to add a thumbnail image to the list of cropped patches. patch_size (`int`, `Tuple[int, int]`, `dict`, *optional*): The size of the output patches. interpolation (`F.InterpolationMode`, *optional*): The interpolation mode to use when resizing. Returns: `torch.Tensor`: The cropped patches stacked into a single tensor.
github-repos
def get_parameter_bounds(self, include_frozen=False):
    if include_frozen:
        return self.parameter_bounds
    return list(p for p, f in zip(self.parameter_bounds, self.unfrozen_mask)
                if f)
Get a list of the parameter bounds Args: include_frozen (Optional[bool]): Should the frozen parameters be included in the returned value? (default: ``False``)
juraj-google-style
def do_hook_actions(self, actions, hook_type):
    logger.log_debug('call {} hook actions.'.format(hook_type))
    for action in actions:
        if isinstance(action, dict) and len(action) == 1:
            var_name, hook_content = list(action.items())[0]
            hook_content_eval = self.session_context.eval_content(hook_content)
            logger.log_debug(
                'assignment with hook: {} = {} => {}'.format(
                    var_name, hook_content, hook_content_eval))
            self.session_context.update_test_variables(
                var_name, hook_content_eval)
        else:
            logger.log_debug('call hook function: {}'.format(action))
            self.session_context.eval_content(action)
call hook actions. Args: actions (list): each action in actions list maybe in two format. format1 (dict): assignment, the value returned by hook function will be assigned to variable. {"var": "${func()}"} format2 (str): only call hook functions. ${func()} hook_type (enum): setup/teardown
codesearchnet
def __dir__() -> list[str]:
    return ['__all__', 'LAZY_MODULES', 'print_current_imports']
`lazy_imports` public API. Because `globals()` contains hundreds of symbols, we overwrite `dir(module)` to avoid polluting the namespace during auto-completion. Returns: public symbols
github-repos
def reqs(amend: bool = False, stage: bool = False):
    changed_files = CTX.repo.changed_files()
    if ('requirements.txt' in changed_files or
            'requirements-dev.txt' in changed_files):
        LOGGER.error('Requirements have changed; cannot update them')
        sys.exit(-1)
    _write_reqs(amend, stage)
Write requirements files Args: amend: amend last commit with changes stage: stage changes
codesearchnet
def _transform_col(self, x, i):
    labels = self.label_encoder._transform_col(x, i)
    label_max = self.label_encoder.label_maxes[i]
    index = np.array(range(len(labels)))
    i = index[labels > 0]
    j = labels[labels > 0] - 1
    if len(i) > 0:
        return sparse.coo_matrix((np.ones_like(i), (i, j)),
                                 shape=(x.shape[0], label_max))
    else:
        return None
Encode one categorical column into sparse matrix with one-hot-encoding. Args: x (pandas.Series): a categorical column to encode i (int): column index Returns: X (scipy.sparse.coo_matrix): sparse matrix encoding a categorical variable into dummy variables
codesearchnet
def _gen_sentence(self, assetid_body_tuple):
    asset_id, body = assetid_body_tuple
    text = self._process(body)
    sentence = LabeledSentence(text, labels=['DOC_%s' % str(asset_id)])
    return sentence
Takes an assetid_body_tuple and returns a Doc2Vec LabeledSentence Args: assetid_body_tuple (tuple): (assetid, bodytext) pair
codesearchnet
def use_spec(self, spec: DNASpec) -> 'DNA':
    if not isinstance(spec, DNASpec):
        raise ValueError(
            f"Argument 'spec' must be a `pg.DNASpec` object. "
            f"Encountered: {spec!r}.")
    if self._spec is spec:
        return self

    def _use_spec_for_child_choices(spec: DNASpec, children: List[DNA]):
        assert spec.is_categorical, spec
        if spec.num_choices != len(children):
            raise ValueError(
                f'Number of choices ({spec.num_choices}) does not match with '
                f'the number of child values (len(children)). '
                f'Spec: {spec!r}, Children: {children!r}.')
        for i, child in enumerate(children):
            subchoice = spec.subchoice(i)
            child.use_spec(subchoice)
        child_values = [c.value for c in children]
        if spec.sorted and sorted(child_values) != child_values:
            raise ValueError(
                f'Child values {child_values!r} are not sorted. Spec: {spec!r}.')
        if spec.distinct and len(set(child_values)) != len(child_values):
            raise ValueError(
                f'Child values {child_values!r} are not distinct. '
                f'Spec: {spec!r}.')

    while spec.is_space and len(spec.elements) == 1:
        spec = spec.elements[0]
    if spec.is_space:
        if self.value is not None:
            raise ValueError(
                f'DNA value type mismatch. Value: {self.value}, Spec: {spec!r}.')
        if len(spec.elements) != len(self.children):
            raise ValueError(
                f'Length of DNA child values ({len(self.children)}) is '
                f'different from the number of elements '
                f'({len(spec.elements)}) in Spec: {spec!r}.')
        for i, elem_spec in enumerate(spec.elements):
            self.children[i].use_spec(elem_spec)
    elif spec.is_categorical:
        if spec.num_choices == 1:
            if not isinstance(self.value, int):
                raise ValueError(
                    f'DNA value type mismatch. Value: {self.value}, '
                    f'Spec: {spec!r}.')
            if self.value >= len(spec.candidates):
                raise ValueError(
                    f'Value of DNA is out of range according to the DNA spec. '
                    f'Value: {self.value}, Spec: {spec!r}.')
            chosen_candidate = spec.candidates[self.value]
            assert chosen_candidate.is_space, chosen_candidate
            if not chosen_candidate.elements and self.children:
                raise ValueError(
                    f'There is no DNA spec for child DNA values. '
                    f'Child values: {self.children}.')
            if len(chosen_candidate.elements) > 1:
                if len(chosen_candidate.elements) != len(self.children):
                    raise ValueError(
                        f'Number of elements in child templates '
                        f'({len(chosen_candidate.elements)}) does not match '
                        f'with the length of children ({len(self.children)}) '
                        f'from DNA: {self!r}, Spec: {chosen_candidate}.')
                for i, elem_spec in enumerate(chosen_candidate.elements):
                    self.children[i].use_spec(elem_spec)
            elif len(chosen_candidate.elements) == 1:
                sub_spec = chosen_candidate
                while sub_spec.is_space and len(sub_spec.elements) == 1:
                    sub_spec = sub_spec.elements[0]
                if sub_spec.is_numerical or sub_spec.is_custom_decision_point:
                    if len(self.children) != 1:
                        raise ValueError(
                            f'Encountered more than 1 value. '
                            f'Child value: {self.children}, Spec: {sub_spec}.')
                    self.children[0].use_spec(sub_spec)
                else:
                    assert sub_spec.is_categorical, sub_spec
                    _use_spec_for_child_choices(sub_spec, self.children)
        else:
            if self.value is not None:
                raise ValueError(
                    f'Cannot apply multi-choice DNA spec on value '
                    f'{self.value}: {spec!r}.')
            _use_spec_for_child_choices(spec, self.children)
    elif spec.is_numerical:
        if not isinstance(self.value, float):
            raise ValueError(
                f'DNA value type mismatch. Value: {self.value}, Spec: {spec!r}.')
        if self.value < spec.min_value:
            raise ValueError(
                f'DNA value should be no less than {spec.min_value}. '
                f'Encountered {self.value}, Spec: {spec!r}.')
        if self.value > spec.max_value:
            raise ValueError(
                f'DNA value should be no greater than {spec.max_value}. '
                f'Encountered {self.value}, Spec: {spec!r}.')
    else:
        assert spec.is_custom_decision_point, spec
        if not isinstance(self.value, str):
            raise ValueError(
                f'DNA value type mismatch, Value: {self.value!r}, '
                f'Spec: {spec!r}.')
    self._spec = spec
    return self
Use a DNA spec for this node and children recursively. Args: spec: DNA spec. Returns: Self. Raises: ValueError: current DNA tree does not conform to the DNA spec.
github-repos
def data_file(file_fmt, info=None, **kwargs):
    if isinstance(info, dict):
        kwargs['hash_key'] = hashlib.sha256(
            json.dumps(info).encode('utf-8')).hexdigest()
        kwargs.update(info)
    return utils.fstr(fmt=file_fmt, **kwargs)
Data file name for given information Args: file_fmt: file format in terms of f-strings info: dict, to be hashed and then passed to f-strings using 'hash_key'; this info will also be passed to f-strings **kwargs: arguments for f-strings Returns: str: data file name
codesearchnet
def _SkipFieldMessage(tokenizer):
    if tokenizer.TryConsume('<'):
        delimiter = '>'
    else:
        tokenizer.Consume('{')
        delimiter = '}'
    while not tokenizer.LookingAt('>') and not tokenizer.LookingAt('}'):
        _SkipField(tokenizer)
    tokenizer.Consume(delimiter)
Skips over a field message. Args: tokenizer: A tokenizer to parse the field name and values.
juraj-google-style
def get_callable_name(func):
    try:
        return meta_util_six.get_funcname(func)
    except AttributeError:
        if isinstance(func, type):
            return repr(func).replace("<type '", '').replace("'>", '')
        elif hasattr(func, '__name__'):
            return func.__name__
        else:
            raise NotImplementedError(
                'cannot get func_name of func=%r, type(func)=%r' % (
                    func, type(func)))
Works on most function-like objects, including str, which has no func_name Args: func (function): Returns: str: CommandLine: python -m utool.util_str --exec-get_callable_name Example: >>> # ENABLE_DOCTEST >>> from utool.util_str import * # NOQA >>> func = len >>> result = get_callable_name(func) >>> print(result) len
codesearchnet
def _CreateLogicalLines(tokens):
    formatted_tokens = []
    prev_tok = None
    for tok in tokens:
        tok = TokenInfo(*tok)
        if (prev_tok and prev_tok.line.rstrip().endswith('\\') and
                prev_tok.start[0] < tok.start[0]):
            ctok = TokenInfo(
                type=CONTINUATION,
                string='\\',
                start=(prev_tok.start[0], prev_tok.start[1] + 1),
                end=(prev_tok.end[0], prev_tok.end[0] + 2),
                line=prev_tok.line)
            ctok.lineno = ctok.start[0]
            ctok.column = ctok.start[1]
            ctok.value = '\\'
            formatted_tokens.append(
                format_token.FormatToken(ctok, 'CONTINUATION'))
        tok.lineno = tok.start[0]
        tok.column = tok.start[1]
        tok.value = tok.string
        formatted_tokens.append(
            format_token.FormatToken(tok, token.tok_name[tok.type]))
        prev_tok = tok
    logical_lines, cur_logical_line = [], []
    depth = 0
    for tok in formatted_tokens:
        if tok.type == tokenize.ENDMARKER:
            break
        if tok.type == tokenize.NEWLINE:
            logical_lines.append(
                logical_line.LogicalLine(depth, cur_logical_line))
            cur_logical_line = []
        elif tok.type == tokenize.INDENT:
            depth += 1
        elif tok.type == tokenize.DEDENT:
            depth -= 1
        elif tok.type == tokenize.NL:
            pass
        else:
            if (cur_logical_line and not tok.type == tokenize.COMMENT and
                    cur_logical_line[0].type == tokenize.COMMENT):
                logical_lines.append(
                    logical_line.LogicalLine(depth, cur_logical_line))
                cur_logical_line = []
            cur_logical_line.append(tok)
    for line in logical_lines:
        previous = line.first
        bracket_stack = [previous] if previous.OpensScope() else []
        for tok in line.tokens[1:]:
            tok.previous_token = previous
            previous.next_token = tok
            previous = tok
            if tok.OpensScope():
                bracket_stack.append(tok)
            elif tok.ClosesScope():
                bracket_stack[-1].matching_bracket = tok
                tok.matching_bracket = bracket_stack.pop()
    return logical_lines
Separate tokens into logical lines. Arguments: tokens: (list of tokenizer.TokenInfo) Tokens generated by tokenizer. Returns: A list of LogicalLines.
github-repos
def DecompressMessageList(cls, packed_message_list):
    compression = packed_message_list.compression
    if compression == rdf_flows.PackedMessageList.CompressionType.UNCOMPRESSED:
        data = packed_message_list.message_list
    elif compression == rdf_flows.PackedMessageList.CompressionType.ZCOMPRESSION:
        try:
            data = zlib.decompress(packed_message_list.message_list)
        except zlib.error as e:
            raise DecodingError('Failed to decompress: %s' % e)
    else:
        raise DecodingError('Compression scheme not supported')
    try:
        result = rdf_flows.MessageList.FromSerializedString(data)
    except rdfvalue.DecodeError:
        raise DecodingError('RDFValue parsing failed.')
    return result
Decompress the message data from packed_message_list. Args: packed_message_list: A PackedMessageList rdfvalue with some data in it. Returns: a MessageList rdfvalue. Raises: DecodingError: If decompression fails.
codesearchnet
def add_method(self, m, **kwargs):
    if isinstance(m, types.FunctionType):
        self['function', id(m)] = m
    else:
        f, obj = get_method_vars(m)
        wrkey = (f, id(obj))
        self[wrkey] = obj
Add an instance method or function Args: m: The instance method or function to store
juraj-google-style
def map_parser_to_rules(parser_name: str) -> Tuple[TypeParser, RulesMap]:
    parser: TypeParser
    usable_rules: dict[str, RuleWrapper]
    if parser_name == 'parse_str':
        parser = Parsers['parse_str']
        usable_rules = TextRules
    elif parser_name == 'parse_int':
        parser = Parsers['parse_int']
        usable_rules = NumericRules
    elif parser_name == 'parse_float':
        parser = Parsers['parse_float']
        usable_rules = NumericRules
    else:
        raise ValueError('Invalid parser specified.')
    parser.__name__ = parser_name
    return (parser, usable_rules)
Check if the chosen parser exists and return the matching parser function and available rule mappings. Args: * parser: string Returns: Tuple, with * TypeParser: Func that parses Any to Type * RulesMap: Dict of rule name to rule wrapper Raises: * ValueError: if non-existent parser name provided
github-repos
def make_edge_vectors(adjacency_matrix, num_edge_types, depth, name=None):
    with tf.variable_scope(name, default_name="edge_vectors"):
        att_adj_vectors_shape = [num_edge_types, depth]
        adjacency_matrix_shape = common_layers.shape_list(adjacency_matrix)
        adj_vectors = (
            tf.get_variable(
                "adj_vectors",
                att_adj_vectors_shape,
                initializer=tf.random_normal_initializer(0, depth**-0.5)) *
            (depth**0.5))
        adjacency_matrix_one_hot = tf.one_hot(adjacency_matrix, num_edge_types)
        att_adj_vectors = tf.matmul(
            tf.reshape(tf.to_float(adjacency_matrix_one_hot),
                       [-1, num_edge_types]),
            adj_vectors)
        return tf.reshape(
            att_adj_vectors,
            [adjacency_matrix_shape[0], adjacency_matrix_shape[1],
             adjacency_matrix_shape[2], depth])
Gets edge vectors for the edge types in the adjacency matrix. Args: adjacency_matrix: A [batch, num_nodes, num_nodes] tensor of ints. num_edge_types: Number of different edge types depth: Number of channels name: a string Returns: A [batch, num_nodes, num_nodes, depth] vector of tensors
juraj-google-style
def get_pattern_actual_step(self, patternnumber):
    _checkPatternNumber(patternnumber)
    address = _calculateRegisterAddress('actualstep', patternnumber)
    return self.read_register(address, 0)
Get the 'actual step' parameter for a given pattern. Args: patternnumber (integer): 0-7 Returns: The 'actual step' parameter (int).
codesearchnet
def constant_value(pred):
    if isinstance(pred, tensor.Tensor):
        return tensor_util.constant_value(pred)
    if pred in {0, 1}:
        return bool(pred)
    if isinstance(pred, bool):
        return pred
    if isinstance(pred, variables.Variable):
        return None
    raise TypeError(
        '`pred` must be a Tensor, or a Python bool, or 1 or 0. '
        'Found instead: %s' % type(pred))
Return the bool value for `pred`, or None if `pred` had a dynamic value. Args: pred: A scalar, either a Python bool or a TensorFlow boolean variable or tensor, or the Python integer 1 or 0. Returns: True or False if `pred` has a constant boolean value, None otherwise. Raises: TypeError: If `pred` is not a Variable, Tensor or bool, or Python integer 1 or 0.
github-repos
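A rough demonstration, assuming `constant_value` is importable from its TensorFlow module and that the placeholder case runs in graph mode:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

print(constant_value(tf.constant(True)))         # True: folded at graph-build time
print(constant_value(1), constant_value(False))  # True False
print(constant_value(tf.placeholder(tf.bool)))   # None: only known at run time
```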
def _convert_from_saved_model(self, graph_def):
    self._save_conversion_params_metric(graph_def)
    quant_mode = QuantizationMode(
        self.optimizations, self.target_spec, self.representative_dataset,
        graph_def, self._experimental_disable_per_channel,
        self.experimental_new_dynamic_range_quantizer,
        self._experimental_low_bit_qat,
        self._experimental_full_integer_quantization_bias_type,
        self._experimental_variable_quantization,
        self._experimental_strict_qdq)
    self._validate_inference_input_output_types(quant_mode)
    converter_kwargs = {
        'enable_tflite_resource_variables':
            self.experimental_enable_resource_variables
    }
    converter_kwargs.update(self._get_base_converter_args())
    converter_kwargs.update(quant_mode.converter_flags())
    result = _convert_saved_model(**converter_kwargs)
    return self._optimize_tflite_model(
        result, quant_mode,
        _build_conversion_flags(**converter_kwargs).debug_options,
        quant_io=self.experimental_new_quantizer)
Helper method that converts saved model. Args: graph_def: GraphDef object for the model, used only for stats. Returns: The converted TFLite model.
github-repos
def describe(self, **kwargs):
    description = {
        'label': self.label,
        'details': inspect.cleandoc(self.details),
        'required': self.required,
        'many': self.many,
        'spec': self.spec,
        'default': self.default,
        'type': self.type or 'unspecified',
    }
    description.update(kwargs)
    return description
Describe this parameter instance for purpose of self-documentation. Args: kwargs (dict): dictionary of additional description items for extending default description Returns: dict: dictionary of description items Suggested way for overriding description fields or extending it with additional items is calling super class method with new/overriden fields passed as keyword arguments like following: .. code-block:: python class DummyParam(BaseParam): def description(self, **kwargs): super().describe(is_dummy=True, **kwargs)
codesearchnet
def console_from_file(filename: str) -> tcod.console.Console:
    return tcod.console.Console._from_cdata(
        lib.TCOD_console_from_file(filename.encode("utf-8"))
    )
Return a new console object from a filename. The file format is automatically determined. This can load REXPaint `.xp`, ASCII Paint `.apf`, or Non-delimited ASCII `.asc` files. Args: filename (Text): The path to the file, as a string. Returns: A new :any:`Console` instance.
juraj-google-style
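Usage is a single call; the filename below is hypothetical:

```python
import tcod

# Format (.xp/.apf/.asc) is detected from the file contents.
console = console_from_file("title_screen.xp")
print(console.width, console.height)
```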
def untar(file_path, extract_folder=None):
    file_path = Path(file_path)
    if extract_folder is None:
        extract_folder = file_path.parent
    extract_folder = Path(extract_folder)
    tar = tarfile.open(file_path)
    tar.extractall(extract_folder)
    tar.close()
Simple tar archive extractor Args: file_path: path to the tar file to be extracted extract_folder: folder to which the files will be extracted
codesearchnet
def whois_emails(self, emails):
    api_name = 'opendns-whois-emails'
    fmt_url_path = u'whois/emails/{0}'
    return self._multi_get(api_name, fmt_url_path, emails)
Calls WHOIS Email end point Args: emails: An enumerable of string Emails Returns: A dict of {email: domain_result}
codesearchnet
def _gen_rpc_request(self, rpc_id, rpc_func_name, *args, **kwargs):
    data = {'id': rpc_id, 'method': rpc_func_name, 'params': args}
    if kwargs:
        data['kwargs'] = kwargs
    return json.dumps(data, sort_keys=True)
Generates the JSON RPC request. In the generated JSON string, the fields are sorted by keys in ascending order. Args: rpc_id: int, the id of this RPC. rpc_func_name: str, the name of the snippet function to execute on the server. *args: any, the positional arguments of the RPC. **kwargs: any, the keyword arguments of the RPC. Returns: A string of the JSON RPC request.
github-repos
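A free-function transcription of the method body shows the wire format; the snippet is for illustration only, and the RPC method names in the calls are made up:

```python
import json


def gen_rpc_request(rpc_id, rpc_func_name, *args, **kwargs):
    data = {'id': rpc_id, 'method': rpc_func_name, 'params': args}
    if kwargs:
        data['kwargs'] = kwargs
    return json.dumps(data, sort_keys=True)


print(gen_rpc_request(0, 'getBatteryLevel'))
# {"id": 0, "method": "getBatteryLevel", "params": []}
print(gen_rpc_request(1, 'setVolume', 7, unit='percent'))
# {"id": 1, "kwargs": {"unit": "percent"}, "method": "setVolume", "params": [7]}
```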
def load_local_config(filename):
    if not filename:
        return imp.new_module('local_pylint_config')
    module = imp.load_source('local_pylint_config', filename)
    return module
Loads the pylint.config.py file. Args: filename (str): The python file containing the local configuration. Returns: module: The loaded Python module.
codesearchnet
def threshold(image, block_size=DEFAULT_BLOCKSIZE, mask=None):
    if mask is None:
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        mask[:] = 255
    if len(image.shape) > 2 and image.shape[2] == 4:
        image = cv2.cvtColor(image, cv2.COLOR_BGRA2GRAY)
    res = _calc_block_mean_variance(image, mask, block_size)
    res = image.astype(np.float32) - res.astype(np.float32) + 255
    _, res = cv2.threshold(res, 215, 255, cv2.THRESH_BINARY)
    return res
Applies adaptive thresholding to the given image. Args: image: BGRA image. block_size: optional int block_size to use for adaptive thresholding. mask: optional mask. Returns: Thresholded image.
juraj-google-style
def dict_from_file(filename, key_type=str):
    mapping = {}
    with open(filename, 'r') as f:
        for line in f:
            items = line.rstrip('\n').split()
            assert len(items) >= 2
            key = key_type(items[0])
            val = items[1:] if len(items) > 2 else items[1]
            mapping[key] = val
    return mapping
Load a text file and parse the content as a dict. Each line of the text file will be two or more columns split by whitespaces or tabs. The first column will be parsed as dict keys, and the following columns will be parsed as dict values. Args: filename(str): Filename. key_type(type): Type of the dict's keys. str is used by default and type conversion will be performed if specified. Returns: dict: The parsed contents.
juraj-google-style
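A self-contained round trip: two-column lines map to a scalar value, three or more columns map to a list:

```python
with open('mapping.txt', 'w') as f:
    f.write('cat 1\n')
    f.write('dog 2 3\n')

print(dict_from_file('mapping.txt'))
# {'cat': '1', 'dog': ['2', '3']}
```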
def set_cellpy_datadir(self, directory=None):
    if directory is None:
        self.logger.info('no directory name given')
        return
    if not os.path.isdir(directory):
        self.logger.info('directory does not exist')
        return
    self.cellpy_datadir = directory
Set the directory containing .hdf5-files. Used for setting directory for looking for hdf5-files. A valid directory name is required. Args: directory (str): path to hdf5-directory Example: >>> d = CellpyData() >>> directory = "MyData/HDF5" >>> d.set_cellpy_datadir(directory)
codesearchnet
def transpose(self, *args, **kwargs):
    new_data = self.data.transpose(*args, **kwargs)
    new_manager = self.__constructor__(new_data, self.columns, self.index)
    new_manager._is_transposed = self._is_transposed ^ 1
    return new_manager
Transposes this DataManager. Returns: Transposed new DataManager.
codesearchnet
def get_transition_chempots(self, element):
    if element not in self.elements:
        raise ValueError(
            'get_transition_chempots can only be called with elements in the '
            'phase diagram.')
    critical_chempots = []
    for facet in self.facets:
        chempots = self._get_facet_chempots(facet)
        critical_chempots.append(chempots[element])
    clean_pots = []
    for c in sorted(critical_chempots):
        if len(clean_pots) == 0:
            clean_pots.append(c)
        elif abs(c - clean_pots[-1]) > PhaseDiagram.numerical_tol:
            clean_pots.append(c)
    clean_pots.reverse()
    return tuple(clean_pots)
Get the critical chemical potentials for an element in the Phase Diagram. Args: element: An element. Has to be in the PD in the first place. Returns: A sorted sequence of critical chemical potentials, from less negative to more negative.
codesearchnet
def save(self, branch, commit_message, **kwargs):
    self.branch = branch
    self.commit_message = commit_message
    self.file_path = self.file_path.replace('/', '%2F')
    super(ProjectFile, self).save(**kwargs)
Save the changes made to the file to the server. The object is updated to match what the server returns. Args: branch (str): Branch in which the file will be updated commit_message (str): Message to send with the commit **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabUpdateError: If the server cannot perform the request
juraj-google-style
def __format_error(self, error_list_tag):
    error = {'domain': self.domain(),
             'reason': self.reason(),
             'message': self.message()}
    error.update(self.extra_fields() or {})
    return {'error': {error_list_tag: [error],
                      'code': self.status_code(),
                      'message': self.message()}}
Format this error into a JSON response. Args: error_list_tag: A string specifying the name of the tag to use for the error list. Returns: A dict containing the reformatted JSON error response.
codesearchnet
async def download_cot_artifacts(chain):
    upstream_artifacts = chain.task['payload'].get('upstreamArtifacts', [])
    all_artifacts_per_task_id = get_all_artifacts_per_task_id(
        chain, upstream_artifacts)
    mandatory_artifact_tasks = []
    optional_artifact_tasks = []
    for task_id, paths in all_artifacts_per_task_id.items():
        for path in paths:
            coroutine = asyncio.ensure_future(
                download_cot_artifact(chain, task_id, path))
            if is_artifact_optional(chain, task_id, path):
                optional_artifact_tasks.append(coroutine)
            else:
                mandatory_artifact_tasks.append(coroutine)
    mandatory_artifacts_paths = await raise_future_exceptions(
        mandatory_artifact_tasks)
    succeeded_optional_artifacts_paths, failed_optional_artifacts = (
        await get_results_and_future_exceptions(optional_artifact_tasks))
    if failed_optional_artifacts:
        log.warning('Could not download {} artifacts: {}'.format(
            len(failed_optional_artifacts), failed_optional_artifacts))
    return mandatory_artifacts_paths + succeeded_optional_artifacts_paths
Call ``download_cot_artifact`` in parallel for each "upstreamArtifacts". Optional artifacts are allowed to not be downloaded. Args: chain (ChainOfTrust): the chain of trust object Returns: list: list of full paths to downloaded artifacts. Failed optional artifacts aren't returned Raises: CoTError: on chain of trust sha validation error, on a mandatory artifact BaseDownloadError: on download error on a mandatory artifact
codesearchnet
def update_media_assetfile(access_token, parent_asset_id, asset_id,
                           content_length, name):
    path = '/Files'
    full_path = ''.join([path, "('", asset_id, "')"])
    full_path_encoded = urllib.parse.quote(full_path, safe='')
    endpoint = ''.join([ams_rest_endpoint, full_path_encoded])
    body = ('{'
            '"ContentFileSize": "' + str(content_length) + '", '
            '"Id": "' + asset_id + '", '
            '"MimeType": "video/mp4", '
            '"Name": "' + name + '", '
            '"ParentAssetId": "' + parent_asset_id + '"'
            '}')
    return do_ams_patch(endpoint, full_path_encoded, body, access_token)
Update Media Service Asset File. Args: access_token (str): A valid Azure authentication token. parent_asset_id (str): A Media Service Asset Parent Asset ID. asset_id (str): A Media Service Asset Asset ID. content_length (str): A Media Service Asset Content Length. name (str): A Media Service Asset name. Returns: HTTP response. JSON body.
codesearchnet
def load_glossary(file_path: str, read_json=False) -> List[str]:
    if read_json:
        if file_path.endswith(".gz"):
            return json.load(gzip.open(file_path))
        return json.load(open(file_path))
    return open(file_path).read().splitlines()
A glossary is a text file, one entry per line. Args: file_path (str): path to a text file containing a glossary. read_json (bool): set True if the glossary is in json format Returns: List of the strings in the glossary.
juraj-google-style
def typify(value, type_hint=None):
    if isinstance(value, string_types):
        value = value.strip()
    elif type_hint is None:
        return value
    if isiterable(type_hint):
        if isinstance(type_hint, type) and issubclass(type_hint, Enum):
            try:
                return type_hint(value)
            except ValueError:
                return type_hint[value]
        type_hint = set(type_hint)
        if not (type_hint - NUMBER_TYPES_SET):
            return numberify(value)
        elif not (type_hint - STRING_TYPES_SET):
            return text_type(value)
        elif not (type_hint - {bool, NoneType}):
            return boolify(value, nullable=True)
        elif not (type_hint - (STRING_TYPES_SET | {bool})):
            return boolify(value, return_string=True)
        elif not (type_hint - (STRING_TYPES_SET | {NoneType})):
            value = text_type(value)
            return None if value.lower() == 'none' else value
        elif not (type_hint - {bool, int}):
            return typify_str_no_hint(text_type(value))
        else:
            raise NotImplementedError()
    elif type_hint is not None:
        try:
            return boolify(value) if type_hint == bool else type_hint(value)
        except ValueError as e:
            raise TypeCoercionError(value, text_type(e))
    else:
        return typify_str_no_hint(value)
Take a primitive value, usually a string, and try to make a more relevant type out of it. An optional type_hint will try to coerce the value to that type. Args: value (Any): Usually a string, not a sequence type_hint (type or Tuple[type]): Examples: >>> typify('32') 32 >>> typify('32', float) 32.0 >>> typify('32.0') 32.0 >>> typify('32.0.0') '32.0.0' >>> [typify(x) for x in ('true', 'yes', 'on')] [True, True, True] >>> [typify(x) for x in ('no', 'FALSe', 'off')] [False, False, False] >>> [typify(x) for x in ('none', 'None', None)] [None, None, None]
codesearchnet
def d_hkl(self, miller_index: Vector3Like) -> float:
    gstar = self.reciprocal_lattice_crystallographic.metric_tensor
    hkl = np.array(miller_index)
    return 1 / ((dot(dot(hkl, gstar), hkl.T)) ** (1 / 2))
Returns the distance between the hkl plane and the origin Args: miller_index ([h,k,l]): Miller index of plane Returns: d_hkl (float)
juraj-google-style
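For a cubic cell the general metric-tensor form reduces to the textbook d = a / sqrt(h^2 + k^2 + l^2); a quick check, assuming this method comes from pymatgen's `Lattice`:

```python
import numpy as np
from pymatgen.core import Lattice  # assumption: pymatgen is the source package

lattice = Lattice.cubic(4.0)
assert np.isclose(lattice.d_hkl([1, 1, 1]), 4.0 / np.sqrt(3))
```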
def GetGtfsClassByFileName(self, filename):
    if filename not in self._file_mapping:
        return None
    mapping = self._file_mapping[filename]
    class_list = mapping['classes']
    if len(class_list) > 1:
        raise problems.NonStandardMapping(filename)
    else:
        return self._class_mapping[class_list[0]]
Returns the transitfeed class corresponding to a GTFS file. Args: filename: The filename whose class is to be returned Raises: NonStandardMapping if the specified filename has more than one corresponding class
juraj-google-style
def piece_size(model_file=None, model_proto=None, name=None):
    return _gen_sentencepiece_processor_op.sentencepiece_get_piece_size(
        model_file=model_file, model_proto=model_proto, name=name)
Returns the piece size (vocabulary size). Args: model_file: The sentencepiece model file path. model_proto: The sentencepiece model serialized proto. Either `model_file` or `model_proto` must be set. name: The name argument that is passed to the op function. Returns: A scalar representing the vocabulary size.
juraj-google-style
def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int],
        token_ids_1: Optional[List[int]] = None) -> List[int]:
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]
    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BART does not make use of token type ids, therefore a list of zeros is returned. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of zeros.
github-repos
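Length check of the two cases, assuming a Hugging Face BART tokenizer instance (the checkpoint name is illustrative and the call downloads the vocab):

```python
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
ids_a, ids_b = [8, 9, 10], [11, 12]

# <s> a </s>  -> len + 2 zeros
assert tokenizer.create_token_type_ids_from_sequences(ids_a) == [0] * 5
# <s> a </s></s> b </s>  -> len_a + len_b + 4 zeros
assert tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b) == [0] * 9
```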
def __init__(self, resolver_context, file_system, path_spec,
             file_entry_type=None, is_root=False):
    super(FakeFileEntry, self).__init__(
        resolver_context, file_system, path_spec, is_root=is_root,
        is_virtual=True)
    self._date_time = dfdatetime_fake_time.FakeTime()
    self._name = None
    self.entry_type = file_entry_type
Initializes a file entry. Args: resolver_context (Context): resolver context. file_system (FileSystem): file system. path_spec (PathSpec): path specification. file_entry_type (Optional[str]): file entry type. is_root (Optional[bool]): True if the file entry is the root file entry of the corresponding file system.
juraj-google-style
def stats(data):
    return {'len': len(data),
            'mean': np.mean(data),
            'sum': np.sum(data),
            'std': np.std(data),
            'min': np.min(data),
            'max': np.max(data)}
Dictionary with summary stats for data Returns: dictionary with length, mean, sum, standard deviation, min and max of data
codesearchnet
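For example:

```python
import numpy as np

print(stats([1, 2, 3, 4]))
# {'len': 4, 'mean': 2.5, 'sum': 10, 'std': 1.118..., 'min': 1, 'max': 4}
```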
def user(self, user: str) -> 'ChildHTTPAPI':
    if self.is_real_user:
        raise ValueError("Can't get child of real user")
    try:
        return self.children[user]
    except KeyError:
        child = ChildHTTPAPI(user, self)
        self.children[user] = child
        return child
Get a child HTTPAPI instance. Args: user: The Matrix ID of the user whose API to get. Returns: A HTTPAPI instance that always uses the given Matrix ID.
codesearchnet
def _make_inc_temp(self, suffix="", prefix="", directory_name="/tmp/ray"):
    directory_name = os.path.expanduser(directory_name)
    index = self._incremental_dict[suffix, prefix, directory_name]
    while index < tempfile.TMP_MAX:
        if index == 0:
            filename = os.path.join(directory_name, prefix + suffix)
        else:
            filename = os.path.join(
                directory_name, prefix + "." + str(index) + suffix)
        index += 1
        if not os.path.exists(filename):
            self._incremental_dict[suffix, prefix, directory_name] = index
            return filename
    raise FileExistsError(errno.EEXIST,
                          "No usable temporary filename found")
Return a incremental temporary file name. The file is not created. Args: suffix (str): The suffix of the temp file. prefix (str): The prefix of the temp file. directory_name (str) : The base directory of the temp file. Returns: A string of file name. If there existing a file having the same name, the returned name will look like "{directory_name}/{prefix}.{unique_index}{suffix}"
juraj-google-style
def get_index_mapping(index):
    mappings_dir = get_setting("mappings_dir")
    filename = "%s.json" % index
    path = os.path.join(mappings_dir, filename)
    with open(path, "r") as f:
        return json.load(f)
Return the JSON mapping file for an index. Mappings are stored as JSON files in the mappings subdirectory of this app. They must be saved as {{index}}.json. Args: index: string, the name of the index to look for.
juraj-google-style
def handle_encodnig(html):
    encoding = _get_encoding(
        dhtmlparser.parseString(
            html.split("</head>")[0]
        )
    )
    if encoding == "utf-8":
        return html
    return html.decode(encoding).encode("utf-8")
Look for encoding in given `html`. Try to convert `html` to utf-8. Args: html (str): HTML code as string. Returns: str: HTML code encoded in UTF.
juraj-google-style
def __init__(self, name, buckets, description, *labels):
    super(Sampler, self).__init__('Sampler', _sampler_methods, len(labels),
                                  name, buckets.buckets, description, *labels)
Creates a new Sampler. Args: name: name of the new metric. buckets: bucketing strategy of the new metric. description: description of the new metric. *labels: The label list of the new metric.
github-repos
def propagate(self, date):
    if self.propagator.orbit is not self:
        self.propagator.orbit = self
    return self.propagator.propagate(date)
Propagate the orbit to a new date Args: date (Date) Return: Orbit
juraj-google-style
def _MakeParseFn(fn, metadata):
    fn_spec = inspectutils.GetFullArgSpec(fn)
    num_required_args = len(fn_spec.args) - len(fn_spec.defaults)
    required_kwonly = set(fn_spec.kwonlyargs) - set(fn_spec.kwonlydefaults)

    def _ParseFn(args):
        kwargs, remaining_kwargs, remaining_args = _ParseKeywordArgs(
            args, fn_spec)
        parsed_args, kwargs, remaining_args, capacity = _ParseArgs(
            fn_spec.args, fn_spec.defaults, num_required_args, kwargs,
            remaining_args, metadata)
        if fn_spec.varargs or fn_spec.varkw:
            capacity = True
        extra_kw = set(kwargs) - set(fn_spec.kwonlyargs)
        if fn_spec.varkw is None and extra_kw:
            raise FireError('Unexpected kwargs present:', extra_kw)
        missing_kwonly = set(required_kwonly) - set(kwargs)
        if missing_kwonly:
            raise FireError('Missing required flags:', missing_kwonly)
        if fn_spec.varargs is not None:
            varargs, remaining_args = remaining_args, []
        else:
            varargs = []
        for index, value in enumerate(varargs):
            varargs[index] = _ParseValue(value, None, None, metadata)
        varargs = parsed_args + varargs
        remaining_args += remaining_kwargs
        consumed_args = args[:len(args) - len(remaining_args)]
        return (varargs, kwargs), consumed_args, remaining_args, capacity

    return _ParseFn
Creates a parse function for fn. Args: fn: The function or class to create the parse function for. metadata: Additional metadata about the component the parse function is for. Returns: A parse function for fn. The parse function accepts a list of arguments and returns (varargs, kwargs), remaining_args. The original function fn can then be called with fn(*varargs, **kwargs). The remaining_args are the leftover args from the arguments to the parse function.
github-repos
def tetragonal(a: float, c: float): return Lattice.from_parameters(a, a, c, 90, 90, 90)
Convenience constructor for a tetragonal lattice. Args: a (float): *a* lattice parameter of the tetragonal cell. c (float): *c* lattice parameter of the tetragonal cell. Returns: Tetragonal lattice of dimensions a x a x c.
codesearchnet
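A usage sketch, assuming pymatgen is installed and this constructor lives on its Lattice class:

from pymatgen.core import Lattice

lat = Lattice.tetragonal(3.9, 6.6)   # a = b = 3.9, c = 6.6
print(lat.abc)                        # (3.9, 3.9, 6.6)
print(lat.angles)                     # (90.0, 90.0, 90.0)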
def _GetFieldAttributes(field): if (not isinstance(field, messages.Field)): raise TypeError(('Field %r to be copied not a ProtoRPC field.' % (field,))) positional_args = [] kwargs = {'required': field.required, 'repeated': field.repeated, 'variant': field.variant, 'default': field._Field__default} if isinstance(field, messages.MessageField): kwargs.pop('default') if (not isinstance(field, message_types.DateTimeField)): positional_args.insert(0, field.message_type) elif isinstance(field, messages.EnumField): positional_args.insert(0, field.type) return (positional_args, kwargs)
Decomposes field into the needed arguments to pass to the constructor. This can be used to create copies of the field or to compare if two fields are "equal" (since __eq__ is not implemented on messages.Field). Args: field: A ProtoRPC message field (potentially to be copied). Raises: TypeError: If the field is not an instance of messages.Field. Returns: A pair of relevant arguments to be passed to the constructor for the field type. The first element is a list of positional arguments for the constructor and the second is a dictionary of keyword arguments.
codesearchnet
def f(options, expected_tf_failures=0): test_parameters = [{'ksize': [[1, 1, 1, 1, 1], [1, 2, 2, 2, 1], [1, 2, 3, 4, 1]], 'strides': [[1, 1, 1, 1, 1], [1, 2, 1, 2, 1], [1, 2, 2, 4, 1]], 'input_shape': [[1, 1, 1, 1, 1], [1, 16, 15, 14, 1], [3, 16, 15, 14, 3]], 'padding': ['SAME', 'VALID'], 'data_format': ['NDHWC']}] def build_graph(parameters): input_tensor = tf.compat.v1.placeholder(dtype=tf.float32, name='input', shape=parameters['input_shape']) out = pool_op(input_tensor, ksize=parameters['ksize'], strides=parameters['strides'], data_format=parameters['data_format'], padding=parameters['padding']) return ([input_tensor], [out]) def build_inputs(parameters, sess, inputs, outputs): input_values = create_tensor_data(tf.float32, parameters['input_shape']) return ([input_values], sess.run(outputs, feed_dict=dict(zip(inputs, [input_values])))) extra_convert_options = ExtraConvertOptions() extra_convert_options.allow_custom_ops = True make_zip_of_tests(options, test_parameters, build_graph, build_inputs, extra_convert_options, expected_tf_failures=expected_tf_failures)
Actual function that generates examples. Args: options: An Options instance. expected_tf_failures: number of expected tensorflow failures.
github-repos
def run(argv=None, save_main_session=True): known_args, pipeline_args = parse_known_args(argv) pipeline_options = PipelineOptions(pipeline_args) pipeline_options.view_as(SetupOptions).save_main_session = save_main_session with beam.Pipeline(options=pipeline_options) as pipeline: _ = pipeline | 'Read Data' >> beam.io.ReadFromText(known_args.input) | 'Split data to make List' >> beam.Map(lambda x: x.split(',')) | 'Filter rows' >> beam.Filter(custom_filter) | 'Create Key' >> beam.ParDo(CreateKey()) | 'Group by education' >> beam.GroupByKey() | 'Prepare Data' >> beam.ParDo(PrepareDataforTraining()) | 'Train Model' >> beam.ParDo(TrainModel()) | 'Save' >> fileio.WriteToFiles(path=known_args.output, sink=ModelSink())
Build and run a pipeline that trains one model per education group.

Args:
    argv: Command line arguments defined for this example.
    save_main_session: Used for internal testing.
github-repos
def GetFormattedEventObject(cls, event): time_string = timelib.Timestamp.CopyToIsoFormat(event.timestamp) lines_of_text = [('+-' * 40), '[Timestamp]:', ' {0:s}'.format(time_string)] pathspec = getattr(event, 'pathspec', None) if pathspec: lines_of_text.append('[Pathspec]:') attribute_string = pathspec.comparable.replace('\n', '\n ') attribute_string = ' {0:s}\n'.format(attribute_string) lines_of_text.append(attribute_string) lines_of_text.append('[Reserved attributes]:') out_additional = ['[Additional attributes]:'] for (attribute_name, attribute_value) in sorted(event.GetAttributes()): if (attribute_name not in definitions.RESERVED_VARIABLE_NAMES): attribute_string = ' {{{0!s}}} {1!s}'.format(attribute_name, attribute_value) out_additional.append(attribute_string) elif (attribute_name not in ('pathspec', 'tag')): attribute_string = ' {{{0!s}}} {1!s}'.format(attribute_name, attribute_value) lines_of_text.append(attribute_string) lines_of_text.append('') out_additional.append('') lines_of_text.extend(out_additional) return '\n'.join(lines_of_text)
Retrieves a string representation of the event. Args: event (EventObject): event. Returns: str: string representation of the event.
codesearchnet
def ParseIfaddrs(ifaddrs): precondition.AssertOptionalType(ifaddrs, ctypes.POINTER(Ifaddrs)) ifaces = {} for ifaddr in IterIfaddrs(ifaddrs): ifname = ctypes.string_at(ifaddr.ifa_name).decode('utf-8') iface = ifaces.setdefault(ifname, rdf_client_network.Interface()) iface.ifname = ifname if (not ifaddr.ifa_addr): continue sockaddr = ctypes.cast(ifaddr.ifa_addr, ctypes.POINTER(Sockaddr)) iffamily = sockaddr.contents.sa_family if (iffamily == AF_INET): sockaddrin = ctypes.cast(ifaddr.ifa_addr, ctypes.POINTER(Sockaddrin)) address = rdf_client_network.NetworkAddress() address.address_type = rdf_client_network.NetworkAddress.Family.INET address.packed_bytes = struct.pack('=L', sockaddrin.contents.sin_addr) iface.addresses.append(address) elif (iffamily == AF_INET6): sockaddrin = ctypes.cast(ifaddr.ifa_addr, ctypes.POINTER(Sockaddrin6)) address = rdf_client_network.NetworkAddress() address.address_type = rdf_client_network.NetworkAddress.Family.INET6 address.packed_bytes = bytes(list(sockaddrin.contents.sin6_addr)) iface.addresses.append(address) elif (iffamily == AF_LINK): sockaddrdl = ctypes.cast(ifaddr.ifa_addr, ctypes.POINTER(Sockaddrdl)) nlen = sockaddrdl.contents.sdl_nlen alen = sockaddrdl.contents.sdl_alen iface.mac_address = bytes(sockaddrdl.contents.sdl_data[nlen:(nlen + alen)]) else: raise ValueError(('Unexpected socket address family: %s' % iffamily)) return itervalues(ifaces)
Parses contents of the intrusive linked list of `ifaddrs`. Args: ifaddrs: A pointer to the first node of `ifaddrs` linked list. Can be NULL. Returns: An iterator over instances of `rdf_client_network.Interface`.
codesearchnet
def block_matrix(A, B, C, D):
    return vstackm((hstackm((A, B)), hstackm((C, D))))
r"""Generate the operator matrix with quadrants .. math:: \begin{pmatrix} A B \\ C D \end{pmatrix} Args: A (Matrix): Matrix of shape ``(n, m)`` B (Matrix): Matrix of shape ``(n, k)`` C (Matrix): Matrix of shape ``(l, m)`` D (Matrix): Matrix of shape ``(l, k)`` Returns: Matrix: The combined block matrix ``[[A, B], [C, D]]``.
juraj-google-style
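The same quadrant layout can be illustrated with numpy.block in place of hstackm/vstackm:

import numpy as np

A = np.eye(2)              # (2, 2)
B = np.zeros((2, 3))       # (2, 3)
C = np.ones((1, 2))        # (1, 2)
D = np.full((1, 3), 2.0)   # (1, 3)

M = np.block([[A, B], [C, D]])
print(M.shape)             # (3, 5)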
def convert(in_file, out_file, in_fmt="", out_fmt=""):
    in_file = os.path.expanduser(in_file)
    out_file = os.path.expanduser(out_file)

    if not os.path.exists(in_file):
        raise IOError("Input file {0} does not exist, stopping..."
                      .format(in_file))

    # Guess formats from the file extensions when not given explicitly.
    in_fmt = in_fmt.lower() or _guess_format_from_extension(
        in_file.split('.')[-1].lower())
    out_fmt = out_fmt.lower() or _guess_format_from_extension(
        out_file.split('.')[-1].lower())

    if not in_fmt or not out_fmt:
        raise ValueError("Cannot determine conversion formats.")

    if in_fmt == out_fmt:
        # Same format on both sides: a plain file copy suffices.
        shutil.copyfile(in_file, out_file)
        return out_file

    if in_fmt == 'hdf5':
        from . import hdf5
        data = hdf5.load(in_file)
    elif in_fmt == 'tiff':
        from . import tiff
        data = tiff.load(in_file)
    elif in_fmt == 'png':
        from . import png
        data = png.load(in_file)
    else:
        return _fail_pair_conversion(in_fmt, out_fmt)

    if out_fmt == 'hdf5':
        from . import hdf5
        return hdf5.save(out_file, data)
    elif out_fmt == 'tiff':
        from . import tiff
        return tiff.save(out_file, data)
    elif out_fmt == 'png':
        from . import png
        return png.export_png(out_file, data)

    return _fail_pair_conversion(in_fmt, out_fmt)
Converts in_file to out_file, guessing datatype in the absence of in_fmt and out_fmt. Arguments: in_file: The name of the (existing) datafile to read out_file: The name of the file to create with converted data in_fmt: Optional. The format of incoming data, if not guessable out_fmt: Optional. The format of outgoing data, if not guessable Returns: String. Output filename
juraj-google-style
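A usage sketch; the paths are hypothetical and the formats are guessed from the extensions:

# TIFF -> PNG, both formats inferred from the file extensions
convert("~/data/stack.tiff", "~/data/stack.png")

# Explicit formats when an extension is ambiguous
convert("~/data/raw.dat", "~/data/volume.h5", in_fmt="tiff", out_fmt="hdf5")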
def handle(self, handler, req, resp, **kwargs): params = self.require_params(req) if getattr(self, '_with_context', False): handler = partial(handler, context=req.context) (meta, content) = self.require_meta_and_content(handler, params, **kwargs) self.make_body(resp, params, meta, content) return content
Handle given resource manipulation flow in consistent manner.

This mixin is intended to be used only as a base class in new flow
mixin classes. It ensures that regardless of resource manipulation
semantics (retrieve, get, delete etc.) the flow is always the same:

1. Decode and validate all request parameters from the query string
   using ``self.require_params()`` method.
2. Use ``self.require_meta_and_content()`` method to construct ``meta``
   and ``content`` dictionaries that will be later used to create
   serialized response body.
3. Construct serialized response body using ``self.make_body()`` method.

Args:
    handler (method): resource manipulation method handler.
    req (falcon.Request): request object instance.
    resp (falcon.Response): response object instance to be modified.
    **kwargs: additional keyword arguments retrieved from url template.

Returns:
    Content dictionary (preferably resource representation).
codesearchnet
def set_current(self, current): self.current = current self.input = current.input self.output = current.output self.cmd = current.task_data['cmd'] if (self.cmd and (NEXT_CMD_SPLITTER in self.cmd)): (self.cmd, self.next_cmd) = self.cmd.split(NEXT_CMD_SPLITTER) else: self.next_cmd = None
Creates some aliases for attributes of ``current``. Args: current: :attr:`~zengine.engine.WFCurrent` object.
codesearchnet
def plogdet(K): egvals = eigvalsh(K) return npsum(log(egvals[(egvals > epsilon)]))
r"""Log of the pseudo-determinant. It assumes that ``K`` is a positive semi-definite matrix. Args: K (array_like): matrix. Returns: float: log of the pseudo-determinant.
codesearchnet
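A self-contained numpy rendition of the same computation (the epsilon cutoff here is an assumed value):

import numpy as np

def plogdet_np(K, epsilon=1e-12):
    # Sum the logs of eigenvalues above the cutoff; zero eigenvalues are
    # skipped, which is what makes this the *pseudo*-determinant.
    egvals = np.linalg.eigvalsh(K)
    return np.sum(np.log(egvals[egvals > epsilon]))

K = np.array([[2.0, 0.0], [0.0, 0.0]])  # rank-deficient PSD matrix
print(plogdet_np(K))                    # log(2) ≈ 0.693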
def query_string_to_dict(query): query_params = {} for key_value in query.split("&"): key_value_pair = key_value.split("=", 1) key = key_value_pair[0] if len(key_value_pair) >= 1 else "" value = key_value_pair[1] if len(key_value_pair) == 2 else "" query_params[key] = value return query_params
Convert a query string to a dict of parameters.

Args:
    query (str): The query string.

Returns:
    dict: Mapping of query parameter names to values.

Note:
    This method behaves like urllib.parse.parse_qsl except that it does
    not percent-decode the values and it collapses duplicate keys to the
    last value seen.
juraj-google-style
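A usage sketch showing the undecoded values and the duplicate-key behavior:

print(query_string_to_dict("a=1&b=x%20y&flag"))
# {'a': '1', 'b': 'x%20y', 'flag': ''}  -- %20 stays encoded

print(query_string_to_dict("a=1&a=2"))
# {'a': '2'}  -- unlike parse_qsl, duplicates collapse to the last value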
def _benchmarkRunOpPrebuilt(self, name, target, iters): times = [] with ops.Graph().as_default(): v = variables.Variable(random_ops.random_normal([])) with session.Session(target) as sess: sess.run(v.initializer) runner = sess.make_callable(v.op) runner() for _ in range(iters): start_time = time.time() runner() end_time = time.time() times.append(end_time - start_time) print('%s %f' % (name, np.median(times))) self.report_benchmark(iters=1, wall_time=np.median(times), name=name)
Runs a microbenchmark to measure the cost of running an op. Reports the median cost of running a trivial (Variable) op. Args: name: A human-readable name for logging the output. target: The session target to use for the benchmark. iters: The number of iterations to perform.
github-repos
def _CreateArgItem(arg, docstring_info, spec): max_str_length = LINE_LENGTH - SECTION_INDENTATION - SUBSECTION_INDENTATION description = _GetArgDescription(arg, docstring_info) arg_string = formatting.BoldUnderline(arg.upper()) arg_type = _GetArgType(arg, spec) arg_type = f'Type: {arg_type}' if arg_type else '' available_space = max_str_length - len(arg_type) arg_type = formatting.EllipsisTruncate(arg_type, available_space, max_str_length) description = '\n'.join((part for part in (arg_type, description) if part)) return _CreateItem(arg_string, description, indent=SUBSECTION_INDENTATION)
Returns a string describing a positional argument. Args: arg: The name of the positional argument. docstring_info: A docstrings.DocstringInfo namedtuple with information about the containing function's docstring. spec: An instance of fire.inspectutils.FullArgSpec, containing type and default information about the arguments to a callable. Returns: A string to be used in constructing the help screen for the function.
github-repos
def _operations_list(self, ops_filter, max_tasks, page_size, page_token): max_page_size = 128 page_size = min(sz for sz in [page_size, max_page_size, max_tasks] if sz) api = self._service.projects().operations().list( name='projects/{}/operations'.format(self._project), filter=ops_filter, pageToken=page_token, pageSize=page_size) response = google_base.Api.execute(api) return [ GoogleOperation(op) for op in response.get('operations', []) if google_v2_operations.is_dsub_operation(op) ], response.get('nextPageToken')
Gets the list of operations for the specified filter. Args: ops_filter: string filter of operations to return max_tasks: the maximum number of job tasks to return or 0 for no limit. page_size: the number of operations to requested on each list operation to the pipelines API (if 0 or None, the API default is used) page_token: page token returned by a previous _operations_list call. Returns: Operations matching the filter criteria.
juraj-google-style
def mean(data, n=3, **kwargs): if len(data[-n:]) < n: forecast = np.nan else: forecast = np.mean(data[-n:]) return forecast
The mean forecast for the next point is the mean value of the previous ``n`` points in the series. Args: data (np.array): Observed data, presumed to be ordered in time. n (int): period over which to calculate the mean Returns: float: a single-valued forecast for the next value in the series.
juraj-google-style
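A usage sketch, assuming numpy is available:

import numpy as np

data = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
print(mean(data, n=3))       # mean of the last 3 points -> 12.0
print(mean(data[:2], n=3))   # fewer than n points -> nan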
def add_node(self, node_descriptor): if self._max_nodes is not None and len(self.nodes) >= self._max_nodes: raise ResourceUsageError("Maximum number of nodes exceeded", max_nodes=self._max_nodes) node, inputs, processor = parse_node_descriptor(node_descriptor, self.model) in_root = False for i, input_data in enumerate(inputs): selector, trigger = input_data walker = self.sensor_log.create_walker(selector) if walker.selector.inexhaustible: walker.reading = IOTileReading(0xFFFFFFFF, walker.selector.as_stream(), 0) node.connect_input(i, walker, trigger) if selector.input and not in_root: self.roots.append(node) in_root = True else: found = False for other in self.nodes: if selector.matches(other.stream): other.connect_output(node) found = True if not found and selector.buffered: raise NodeConnectionError("Node has input that refers to another node that has not been created yet", node_descriptor=node_descriptor, input_selector=str(selector), input_index=i) for other_node in self.nodes: for selector, trigger in other_node.inputs: if selector.matches(node.stream): node.connect_output(other_node) func = self.find_processing_function(processor) if func is None: raise ProcessingFunctionError("Could not find processing function in installed packages", func_name=processor) node.set_func(processor, func) self.nodes.append(node)
Add a node to the sensor graph based on the description given. The node_descriptor must follow the sensor graph DSL and describe a node whose input nodes already exist. Args: node_descriptor (str): A description of the node to be added including its inputs, triggering conditions, processing function and output stream.
juraj-google-style
def __init__(self, module, method_name=None, **kwargs): super(ModuleWrapper, self).__init__(**kwargs) if method_name is None: if hasattr(module, '__call__'): method_name = '__call__' elif hasattr(module, 'call'): method_name = 'call' if method_name is None or not hasattr(module, method_name): raise ValueError('{} is not defined on object {}'.format(method_name, module)) self._module = module self._method_name = method_name method = getattr(module, method_name) method_arg_spec = tf_inspect.getfullargspec(method) self._expects_training_arg = 'training' in method_arg_spec.args or method_arg_spec.varkw is not None self._expects_mask_arg = 'mask' in method_arg_spec.args or method_arg_spec.varkw is not None
Initializes the wrapper Layer for this module.

Args:
    module: The `tf.Module` instance to be wrapped.
    method_name: (Optional) str. The name of the method to use as the
        forward pass of the module. If not set, defaults to '__call__'
        if defined, or 'call'.
    **kwargs: Additional keyword arguments. See `tf.keras.layers.Layer`.

Raises:
    ValueError: If `method_name` is not defined on `module`.
github-repos
def rescale(self, image: np.ndarray, scale: Union[int, float], offset: bool=True, data_format: Optional[Union[str, ChannelDimension]]=None, input_data_format: Optional[Union[str, ChannelDimension]]=None, **kwargs): rescaled_image = rescale(image, scale=scale, data_format=data_format, input_data_format=input_data_format, **kwargs) if offset: rescaled_image = rescaled_image - 1 return rescaled_image
Rescale an image by a scale factor. If `offset` is `True`, the image has its values rescaled by `scale` and then offset by 1. If `scale` is 1/127.5, the image is rescaled between [-1, 1]. image = image * scale - 1 If `offset` is `False`, and `scale` is 1/255, the image is rescaled between [0, 1]. image = image * scale Args: image (`np.ndarray`): Image to rescale. scale (`int` or `float`): Scale to apply to the image. offset (`bool`, *optional*): Whether to scale the image in both negative and positive directions. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format of the image. If not provided, it will be the same as the input image. input_data_format (`ChannelDimension` or `str`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred.
github-repos
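The arithmetic of the two modes, shown directly with numpy:

import numpy as np

image = np.array([0.0, 127.5, 255.0])

print(image * (1 / 127.5) - 1)   # offset=True, scale=1/127.5 -> [-1.  0.  1.]
print(image * (1 / 255))         # offset=False, scale=1/255 -> [0.  0.5 1. ]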
def _add_op_node(self, op, qargs, cargs, condition=None): node_properties = { "type": "op", "op": op, "name": op.name, "qargs": qargs, "cargs": cargs, "condition": condition } self._max_node_id += 1 new_node = DAGNode(data_dict=node_properties, nid=self._max_node_id) self._multi_graph.add_node(new_node) self._id_to_node[self._max_node_id] = new_node
Add a new operation node to the graph and assign properties. Args: op (Instruction): the operation associated with the DAG node qargs (list): list of quantum wires to attach to. cargs (list): list of classical wires to attach to. condition (tuple or None): optional condition (ClassicalRegister, int)
juraj-google-style
def in_top_k(targets, predictions, k): if any_symbolic_tensors((targets, predictions)): return InTopK(k).symbolic_call(targets, predictions) return backend.math.in_top_k(targets, predictions, k)
Checks if the targets are in the top-k predictions. Args: targets: A tensor of true labels. predictions: A tensor of predicted labels. k: An integer representing the number of predictions to consider. Returns: A boolean tensor of the same shape as `targets`, where each element indicates whether the corresponding target is in the top-k predictions. Example: >>> targets = keras.ops.convert_to_tensor([2, 5, 3]) >>> predictions = keras.ops.convert_to_tensor( ... [[0.1, 0.4, 0.6, 0.9, 0.5], ... [0.1, 0.7, 0.9, 0.8, 0.3], ... [0.1, 0.6, 0.9, 0.9, 0.5]]) >>> in_top_k(targets, predictions, k=3) array([ True False True], shape=(3,), dtype=bool)
github-repos
def read_geom_h5(xdmf_file, snapshot): header = {} xdmf_root = xmlET.parse(str(xdmf_file)).getroot() if snapshot is None: return None, xdmf_root elt_snap = xdmf_root[0][0][snapshot] header['ti_ad'] = float(elt_snap.find('Time').get('Value')) header['mo_lambda'] = _maybe_get(elt_snap, 'mo_lambda', 'Value', float) header['mo_thick_sol'] = _maybe_get(elt_snap, 'mo_thick_sol', 'Value', float) header['ntb'] = 1 coord_h5 = [] coord_shape = [] twod = None for elt_subdomain in elt_snap.findall('Grid'): if elt_subdomain.get('Name').startswith('meshYang'): header['ntb'] = 2 break elt_geom = elt_subdomain.find('Geometry') if elt_geom.get('Type') == 'X_Y' and twod is None: twod = '' for data_item in elt_geom.findall('DataItem'): coord = data_item.text.strip()[-1] if coord in 'XYZ': twod += coord data_item = elt_geom.find('DataItem') coord_shape.append(_get_dim(data_item)) coord_h5.append( xdmf_file.parent / data_item.text.strip().split(':/', 1)[0]) _read_coord_h5(coord_h5, coord_shape, header, twod) return header, xdmf_root
Extract geometry information from hdf5 files. Args: xdmf_file (:class:`pathlib.Path`): path of the xdmf file. snapshot (int): snapshot number. Returns: (dict, root): geometry information and root of xdmf document.
juraj-google-style
def encoder_decoder_attention_loss(expected_attention_logits, actual_attentions, loss_type='kl_divergence', loss_multiplier=1.0): def combine_attentions(attention_list): 'Combine different layer attentions and then average over layers/heads.' attentions = tf.stack(attention_list) return tf.reduce_mean(attentions, [0, 2]) def kl_divergence_loss(expected_logits, actual_logits): p = tfp.distributions.Categorical(logits=expected_logits) q = tfp.distributions.Categorical(logits=actual_logits) return tfp.distributions.kl_divergence(p, q) def mse_loss(expected_logits, actual_weights): expected_weights = tf.nn.softmax(expected_logits) return tf.losses.mean_squared_error(expected_weights, actual_weights) loss = 0.0 if (loss_type == 'mse'): actual_encdec_attention_weights = [t for (layer_key, t) in actual_attentions.items() if (('encdec_attention' in layer_key) and (not layer_key.endswith('/logits')))] actual_attention_weights = combine_attentions(actual_encdec_attention_weights) loss = mse_loss(expected_attention_logits, actual_attention_weights) else: actual_encdec_attention_logits = [t for (layer_key, t) in actual_attentions.items() if (('encdec_attention' in layer_key) and layer_key.endswith('/logits'))] actual_attention_logits = combine_attentions(actual_encdec_attention_logits) loss = kl_divergence_loss(expected_attention_logits, actual_attention_logits) return (loss * loss_multiplier)
Computes encdec attention loss between expected and actual attentions. Args: expected_attention_logits: Tensor storing the expected encoder-decoder attention logits with shape [batch_size, target_length, input_length]. actual_attentions: Dictionary with actual attention logits for different attention types and hidden layers. loss_type: type of the loss function. loss_multiplier: multiplier for the attention loss. Returns: KL_divergence loss between the actual and expected attention logits.
codesearchnet
def get_volumes(blocks, layout_info): volumes = {} vol_blocks_lists = sort.by_vol_id(blocks, layout_info[2]) for vol_rec in blocks[layout_info[0]].vtbl_recs: vol_name = vol_rec.name.strip(b'\x00').decode('utf-8') if (vol_rec.rec_index not in vol_blocks_lists): vol_blocks_lists[vol_rec.rec_index] = [] volumes[vol_name] = description(vol_rec.rec_index, vol_rec, vol_blocks_lists[vol_rec.rec_index]) return volumes
Get a list of UBI volume objects from list of blocks Arguments: List:blocks -- List of layout block objects List:layout_info -- Layout info (indexes of layout blocks and associated data blocks.) Returns: Dict -- Of Volume objects by volume name, including any relevant blocks.
codesearchnet
def parse_multiple_json(json_file, offset=None):
    json_info_list = []
    offset = offset or 0
    if not os.path.exists(json_file):
        return json_info_list, offset

    try:
        with open(json_file, "r") as f:
            f.seek(offset)
            for line in f:
                if line[-1] != "\n":
                    # Incomplete trailing line; stop and resume here next time.
                    break
                json_info = json.loads(line)
                json_info_list.append(json_info)
                offset += len(line)
    except BaseException as e:
        logging.error(str(e))

    return json_info_list, offset
Parse multiple json records from the given file.

Seek to the offset as the start point before parsing if offset set.
Returns an empty list if the json file does not exist or an exception
occurs.

Args:
    json_file (str): File path to be parsed.
    offset (int): Initial seek position of the file.

Returns:
    A list of parsed json records and the new offset after parsing.
juraj-google-style
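A usage sketch with a throwaway file, showing incremental reads via the returned offset (assumes the tuple-returning version above):

import json
import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(json.dumps({"step": 1}) + "\n")
    path = f.name

records, offset = parse_multiple_json(path, offset=0)
print(records)   # [{'step': 1}]

with open(path, "a") as f:
    f.write(json.dumps({"step": 2}) + "\n")

records, offset = parse_multiple_json(path, offset=offset)
print(records)   # [{'step': 2}] -- only the newly appended line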