Columns: code (string, 20 to 4.93k chars); docstring (string, 33 to 1.27k chars); source (string, 3 classes)
def from_dict(cls, fields, mapping):
    iterable = [None] * len(fields)
    for key, value in mapping.items():
        try:
            index = fields.index(key)
        except KeyError:
            raise ItsdbError('Invalid field name(s): ' + key)
        iterable[index] = value
    return cls(fields, iterable)
Create a Record from a dictionary of field mappings. The *fields* object is used to determine the column indices of fields in the mapping. Args: fields: the Relation schema for the table of this record mapping: a dictionary or other mapping from field names to column values Returns: a :class:`Record` object
juraj-google-style
def _step(time, output_ta_t, *states):
    current_input = tuple(ta.read(time) for ta in input_ta)
    current_input = tree.pack_sequence_as(inputs, current_input)
    output, new_states = step_function(
        current_input, tuple(states) + tuple(constants))
    flat_new_state = tree.flatten(new_states)
    flat_output = tree.flatten(output)
    ta_index_to_write = time if return_all_outputs else 0
    output_ta_t = tuple(
        ta.write(ta_index_to_write, out)
        for ta, out in zip(output_ta_t, flat_output))
    new_states = tree.pack_sequence_as(initial_states, flat_new_state)
    return (time + 1, output_ta_t) + tuple(new_states)
RNN step function. Args: time: Current timestep value. output_ta_t: TensorArray. *states: List of states. Returns: Tuple: `(time + 1, output_ta_t) + tuple(new_states)`
github-repos
def SetEventTag(self, event_tag):
    event_identifier = event_tag.GetEventIdentifier()
    lookup_key = event_identifier.CopyToString()
    self._index[lookup_key] = event_tag.GetIdentifier()
Sets an event tag in the index. Args: event_tag (EventTag): event tag.
codesearchnet
def GetSoapXMLForComplexType(self, type_name, value):
    schema = self.suds_client.wsdl.schema
    definition_type = schema.elements[(type_name, self._namespace_override)]
    marshaller = suds.mx.literal.Literal(schema)
    content = suds.mx.Content(
        tag=type_name, value=value, name=type_name, type=definition_type)
    data = marshaller.process(content)
    return data
Return an XML string representing a SOAP complex type. Args: type_name: The name of the type with namespace prefix if necessary. value: A python dictionary to hydrate the type instance with. Returns: A string containing the SOAP XML for the type.
juraj-google-style
def flag_all(self, thresh_dict=None, include=None, exclude=None):
    if thresh_dict is None:
        thresh_dict = {}
    row_idx = set()
    col_idx = set()
    include = self.results if include is None else include
    include = list(set(include) - set(exclude)) if exclude is not None else include
    for diagnostic in include:
        if diagnostic in thresh_dict:
            flagged = self.flag(diagnostic, thresh_dict[diagnostic])
        else:
            flagged = self.flag(diagnostic)
        if diagnostic == 'RowMahalanobisDistances':
            row_idx = row_idx.union(flagged)
        else:
            col_idx = col_idx.union(flagged)
    return sorted(list(row_idx)), sorted(list(col_idx))
Returns indices of (rows, columns) that satisfy flag() on any diagnostic. Uses user-provided thresholds in thresh_dict. Args: thresh_dict (dict): dictionary of diagnostic -> threshold functions include (list): optional sublist of diagnostics to flag exclude (list): optional sublist of diagnostics to not flag
juraj-google-style
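A minimal usage sketch for flag_all; the `scrubber` object, diagnostic names, and threshold function below are assumptions for illustration, not part of the original API.

```python
# Hypothetical usage; `scrubber` stands in for whatever object exposes
# flag_all(), and the diagnostic names/threshold are illustrative.
thresholds = {'RowMahalanobisDistances': lambda x: x > 3.0}
row_idx, col_idx = scrubber.flag_all(
    thresh_dict=thresholds,
    include=['RowMahalanobisDistances', 'Variances'],
)
# row_idx: rows flagged by RowMahalanobisDistances
# col_idx: columns flagged by all other diagnostics
```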
def write_transcriptions(utterances: List[Utterance], tgt_dir: Path,
                         ext: str, lazy: bool) -> None:
    tgt_dir.mkdir(parents=True, exist_ok=True)
    for utter in utterances:
        out_path = tgt_dir / '{}.{}'.format(utter.prefix, ext)
        if lazy and out_path.is_file():
            continue
        with out_path.open('w') as f:
            print(utter.text, file=f)
Write the utterance transcriptions to files in the tgt_dir. If lazy is True, utterances whose output file already exists are skipped. Args: utterances: A list of Utterance objects to be written. tgt_dir: The directory in which to write the text of the utterances, one file per utterance. ext: The file extension for the utterances. Typically something like "phonemes", or "phonemes_and_tones". lazy: If True, do not overwrite existing files.
codesearchnet
def get_tracks_for_album(self, artist, album, full_album_art_uri=False):
    subcategories = [artist, album]
    result = self.get_album_artists(
        full_album_art_uri=full_album_art_uri,
        subcategories=subcategories,
        complete_result=True)
    result._metadata['search_type'] = 'tracks_for_album'
    return result
Get the tracks of an artist's album. Args: artist (str): an artist's name. album (str): an album name. full_album_art_uri: whether the album art URI should be absolute (i.e. including the IP address). Default `False`. Returns: A `SearchResult` instance.
juraj-google-style
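A usage sketch assuming a SoCo-style `music_library` entry point; the device handle and search terms are illustrative assumptions.

```python
# Assumed SoCo-style entry point; artist/album values are made up.
result = device.music_library.get_tracks_for_album(
    'Daft Punk', 'Discovery', full_album_art_uri=True)
for track in result:
    print(track.title)
```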
def on_delete(self, req, resp, handler=None, **kwargs):
    self.handle(handler or self.delete, req, resp, **kwargs)
    resp.status = falcon.HTTP_ACCEPTED
Respond on DELETE HTTP request assuming resource deletion flow. This request handler assumes that DELETE requests are associated with resource deletion. Thus default flow for such requests is: * Delete existing resource instance. * Set response status code to ``202 Accepted``. Args: req (falcon.Request): request object instance. resp (falcon.Response): response object instance to be modified handler (method): deletion method handler to be called. Defaults to ``self.delete``. **kwargs: additional keyword arguments retrieved from url template.
codesearchnet
def _project_TH3(self, hist: Hist) -> Any:
    if len(self.projection_axes) < 1 or len(self.projection_axes) > 2:
        raise ValueError(len(self.projection_axes), 'Invalid number of axes')
    projection_axis_name = ''
    for axis in self.projection_axes:
        proj_axis_name = axis.axis_type.name[:1]
        if proj_axis_name not in ['x', 'y', 'z']:
            raise ValueError(
                f"Projection axis name {proj_axis_name} is not 'x', 'y', or 'z'."
                " Please check your configuration.")
        projection_axis_name += proj_axis_name
    if len(self.projection_axes) == 2:
        projection_axis_name = projection_axis_name[::-1]
    logger.info(f'Projecting onto axes "{projection_axis_name}"'
                f' from hist {hist.GetName()}')
    projected_hist = hist.Project3D(projection_axis_name)
    return projected_hist
Perform the actual TH3 -> TH1 projection. This projection could be to 1D or 2D. Args: hist (ROOT.TH3): Histogram from which the projections should be performed. Returns: ROOT.TH1: The projected histogram.
codesearchnet
def load_stopwords(self, path):
    if path:
        with open(path) as f:
            self.stopwords = set(f.read().splitlines())
    else:
        self.stopwords = set(
            pkgutil
            .get_data('textplot', 'data/stopwords.txt')
            .decode('utf8')
            .splitlines()
        )
Load a set of stopwords. Args: path (str): The stopwords file path.
juraj-google-style
def _collapse_state(args: Dict[str, Any]):
    index = args['index']
    result = args['result']
    prob_one = args['prob_one']
    state = _state_shard(args)
    normalization = np.sqrt(prob_one if result else 1 - prob_one)
    state *= (_one_projector(args, index) * result
              + (1 - _one_projector(args, index)) * (1 - result))
    state /= normalization
Projects state shards onto the appropriate post measurement state. This function makes no assumptions about the interpretation of quantum theory. Args: args: The args from shard_num_args.
codesearchnet
def org(self, notification_type, priority='Low'):
    self._notification_type = notification_type
    self._recipients = None
    self._priority = priority
    self._is_organization = True
Set vars for the passed in data. Used for org notification. .. code-block:: javascript { "notificationType": notification_type, "priority": priority, "isOrganization": true } Args: notification_type (str): The notification type. priority (str): The priority: Low, Medium, High.
juraj-google-style
class DynamicBackend:

    def __init__(self, backend=None):
        self._backend = backend or backend_module.backend()

    def set_backend(self, backend):
        if backend not in ('tensorflow', 'jax', 'torch', 'numpy', 'openvino'):
            raise ValueError(
                f"Available backends are ('tensorflow', 'jax', 'torch', 'numpy'"
                f" and 'openvino'). Received: backend={backend}")
        self._backend = backend

    def reset(self):
        self._backend = backend_module.backend()

    @property
    def name(self):
        return self._backend

    def __getattr__(self, name):
        if self._backend == 'tensorflow':
            module = importlib.import_module('keras.src.backend.tensorflow')
            return getattr(module, name)
        if self._backend == 'jax':
            module = importlib.import_module('keras.src.backend.jax')
            return getattr(module, name)
        if self._backend == 'torch':
            module = importlib.import_module('keras.src.backend.torch')
            return getattr(module, name)
        if self._backend == 'numpy':
            if backend_module.backend() == 'numpy':
                return getattr(backend_module, name)
            else:
                raise NotImplementedError(
                    'Currently, we cannot dynamically import the numpy backend'
                    ' because it would disrupt the namespace of the import.')
        if self._backend == 'openvino':
            module = importlib.import_module('keras.src.backend.openvino')
            return getattr(module, name)
A class that can be used to switch from one backend to another. Example: ```python backend = DynamicBackend("tensorflow") y = backend.square(tf.constant(...)) backend.set_backend("jax") y = backend.square(jax.numpy.array(...)) ``` Args: backend: Initial backend to use (string).
github-repos
def delete_session_tensor(handle, name=None):
    handle_device = TensorHandle._get_device_name(handle)
    with ops.device(handle_device):
        holder = array_ops.placeholder(dtypes.string)
        deleter = gen_data_flow_ops.delete_session_tensor(holder, name=name)
    return (holder, deleter)
Delete the tensor for the given tensor handle. This is EXPERIMENTAL and subject to change. The tensor is produced in a previous run() and stored in the state of the session. Args: handle: The string representation of a persistent tensor handle. name: Optional name prefix for the return tensor. Returns: A pair of graph elements. The first is a placeholder for feeding a tensor handle and the second is a deletion operation.
github-repos
def load_virt_stream(virt_fd):
    try:
        virt_conf = json.load(virt_fd)
    except ValueError:
        virt_fd.seek(0)
        virt_conf = yaml.load(virt_fd)
    return deepcopy(virt_conf)
Loads the given conf stream into a dict, trying different formats if needed Args: virt_fd (str): file-like object with the virt config to load Returns: dict: Loaded virt config
juraj-google-style
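A usage sketch: the function accepts any file-like object, so an in-memory stream works; the config keys below are made up for illustration.

```python
import io

fd = io.StringIO('{"domains": {"vm-01": {"memory": 2048}}}')
conf = load_virt_stream(fd)  # parsed as JSON on the first attempt
assert conf['domains']['vm-01']['memory'] == 2048
```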
def declarations(cls, extra_defs=None):
    warnings.warn(
        "Factory.declarations is deprecated; use "
        "Factory._meta.pre_declarations instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    decls = cls._meta.pre_declarations.as_dict()
    decls.update(extra_defs or {})
    return decls
Retrieve a copy of the declared attributes. Args: extra_defs (dict): additional definitions to insert into the retrieved DeclarationDict.
juraj-google-style
def post(self, path, body, headers=None):
    response = requests.post(
        self._url_for(path),
        data=json.dumps(body),
        headers=self._headers(headers)
    )
    self._handle_errors(response)
    return response
Perform a POST request, providing a body, which will be JSON-encoded. Args: path (str): A path that gets appended to ``base_url``. body (dict): Dictionary that will be JSON-encoded and sent as the body. Example: api_client.post('/users', body={'name': 'Billy Jean'}) Returns: A requests ``Response`` object.
juraj-google-style
def merge(self, status: 'Status[Input, Output]') -> 'Status[Input, Output]':
    if status is None or status.farthest is None:
        pass
    elif self.farthest is None:
        self.farthest = status.farthest
        self.expected = status.expected
    elif status.farthest.position < self.farthest.position:
        pass
    elif status.farthest.position > self.farthest.position:
        self.farthest = status.farthest
        self.expected = status.expected
    else:
        self.expected = status.expected + self.expected
    return self
Merge the failure message from another status into this one. Whichever status represents parsing that has gone the farthest is retained. If both statuses have gone the same distance, then the expected values from both are retained. Args: status: The status to merge into this one. Returns: This ``Status`` which may have ``farthest`` and ``expected`` updated accordingly.
juraj-google-style
def _ParseBinaryDataAsString(self, parser_mediator, binary_data_value):
    if not binary_data_value:
        return None
    try:
        return binary_data_value.decode('utf-8')
    except UnicodeDecodeError:
        parser_mediator.ProduceExtractionWarning(
            'invalid binary data string value: {0:s}'.format(
                repr(binary_data_value)))
        return None
Parses a binary data value as a string. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. binary_data_value (bytes): binary data value (CSSM_DB_ATTRIBUTE_FORMAT_BLOB) Returns: str: binary data value formatted as a string or None if no string could be extracted or binary data value is None (NULL).
juraj-google-style
def __init__(self, options, log):
    self.options = options
    self.log = log
    self.compile_pattern()
Initializer. Subclass may override. Args: options: a dict containing the options passed to RefactoringTool that could be used to customize the fixer through the command line. log: a list to append warnings and other messages to.
juraj-google-style
def from_label(cls, label):
    z = np.zeros(len(label), dtype=bool)
    x = np.zeros(len(label), dtype=bool)
    for i, char in enumerate(label):
        if char == 'X':
            x[-i - 1] = True
        elif char == 'Z':
            z[-i - 1] = True
        elif char == 'Y':
            z[-i - 1] = True
            x[-i - 1] = True
        elif char != 'I':
            raise QiskitError("Pauli string must consist only of 'I', 'X', "
                              "'Y' or 'Z', but you have {}.".format(char))
    return cls(z=z, x=x)
Take a Pauli string to construct a Pauli. The qubit index of the Pauli label is q_{n-1} ... q_0. E.g., a Pauli is $P_{n-1} \otimes ... \otimes P_0$ Args: label (str): pauli label Returns: Pauli: the constructed pauli Raises: QiskitError: invalid character in the label
juraj-google-style
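A worked example of the label ordering, assuming the qiskit-style Pauli class above with boolean `.x` and `.z` arrays.

```python
# The leftmost label character acts on the highest-indexed qubit, so 'XZ'
# puts X on q1 and Z on q0.
p = Pauli.from_label('XZ')
print(p.x)  # [False  True] -> X on q1
print(p.z)  # [ True False] -> Z on q0
```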
def size(self, name=None):
    with ops.name_scope(name, '%s_Size' % self.name, [self.resource_handle]):
        return gen_lookup_ops.lookup_table_size_v2(self.resource_handle)
Compute the number of elements in this table. Args: name: A name for the operation (optional). Returns: A scalar tensor containing the number of elements in this table.
github-repos
def plot_normal_cdf(rbound=None, lbound=None, mean=0, sd=1):
    shade = rbound is not None or lbound is not None
    shade_left = rbound is not None and lbound is not None
    inf = 3.5 * sd
    step = 0.1
    rlabel = rbound
    llabel = lbound
    if rbound is None:
        rbound = inf + mean
        rlabel = "$\infty$"
    if lbound is None:
        lbound = -inf + mean
        llabel = "-$\infty$"
    pdf_range = np.arange(-inf + mean, inf + mean, step)
    plt.plot(pdf_range, stats.norm.pdf(pdf_range, loc=mean, scale=sd),
             color='k', lw=1)
    cdf_range = np.arange(lbound, rbound + step, step)
    if shade:
        plt.fill_between(cdf_range,
                         stats.norm.pdf(cdf_range, loc=mean, scale=sd),
                         color='gold')
    if shade_left:
        cdf_range = np.arange(-inf + mean, lbound + step, step)
        plt.fill_between(cdf_range,
                         stats.norm.pdf(cdf_range, loc=mean, scale=sd),
                         color='darkblue')
    plt.ylim(0, stats.norm.pdf(0, loc=0, scale=sd) * 1.25)
    plt.xlabel('z')
    plt.ylabel('$\phi$(z)', rotation=90)
    plt.title("Normal Curve ~ ($\mu$ = {0}, $\sigma$ = {1}) "
              "{2} < z < {3}".format(mean, sd, llabel, rlabel), fontsize=16)
    plt.show()
Plots a normal curve with specified parameters and area below curve shaded between ``lbound`` and ``rbound``. Args: ``rbound`` (numeric): right boundary of shaded region ``lbound`` (numeric): left boundary of shaded region; by default is negative infinity ``mean`` (numeric): mean/expectation of normal distribution ``sd`` (numeric): standard deviation of normal distribution
juraj-google-style
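A minimal usage sketch, assuming matplotlib, numpy, and scipy.stats are importable as in the function body.

```python
# Shade P(-1 < z < 1) under a standard normal; roughly 68% of the area
# falls between the bounds.
plot_normal_cdf(rbound=1, lbound=-1, mean=0, sd=1)
```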
def __init__(self, open_id_valid, jwks_uri):
    self._open_id_valid = open_id_valid
    self._jwks_uri = jwks_uri
Create an instance of IssuerUriConfig. Args: open_id_valid: indicates whether the corresponding issuer is valid for OpenId discovery. jwks_uri: the saved jwks_uri. Its value can be None if the OpenId discovery process has not begun or has already failed.
juraj-google-style
def update(self, *names: str) -> 'ListTree':
    for name in names:
        parts = name.split(self._delimiter)
        self._root.add(*parts)
    return self
Add all the mailbox names to the tree, filling in any missing nodes. Args: names: The names of the mailboxes.
codesearchnet
def convert_convtranspose(params, w_name, scope_name, inputs, layers, weights, names):
    print('Converting transposed convolution ...')
    if names == 'short':
        tf_name = 'C' + random_string(7)
    elif names == 'keep':
        tf_name = w_name
    else:
        tf_name = w_name + str(random.random())
    bias_name = '{0}.bias'.format(w_name)
    weights_name = '{0}.weight'.format(w_name)
    if len(weights[weights_name].numpy().shape) == 4:
        W = weights[weights_name].numpy().transpose(2, 3, 1, 0)
        height, width, n_filters, channels = W.shape
        n_groups = params['group']
        if n_groups > 1:
            raise AssertionError('Cannot convert conv1d with groups != 1')
        if params['dilations'][0] > 1:
            raise AssertionError('Cannot convert conv1d with dilation_rate != 1')
        if bias_name in weights:
            biases = weights[bias_name].numpy()
            has_bias = True
        else:
            biases = None
            has_bias = False
        input_name = inputs[0]
        if has_bias:
            weights = [W, biases]
        else:
            weights = [W]
        conv = keras.layers.Conv2DTranspose(
            filters=n_filters,
            kernel_size=(height, width),
            strides=(params['strides'][0], params['strides'][1]),
            padding='valid',
            output_padding=0,
            weights=weights,
            use_bias=has_bias,
            activation=None,
            dilation_rate=params['dilations'][0],
            bias_initializer='zeros',
            kernel_initializer='zeros',
            name=tf_name)
        layers[scope_name] = conv(layers[input_name])
        layers[scope_name].set_shape(layers[scope_name]._keras_shape)
        pads = params['pads']
        if pads[0] > 0:
            assert len(pads) == 2 or (pads[2] == pads[0] and pads[3] == pads[1])
            crop = keras.layers.Cropping2D(pads[:2], name=tf_name + '_crop')
            layers[scope_name] = crop(layers[scope_name])
    else:
        raise AssertionError('Layer is not supported for now')
Convert transposed convolution layer. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
codesearchnet
def _initialize_slots(self, seed, hashvalues):
    self.seed = seed
    self.hashvalues = self._parse_hashvalues(hashvalues)
Initialize the slots of the LeanMinHash. Args: seed (int): The random seed controls the set of random permutation functions generated for this LeanMinHash. hashvalues: The hash values is the internal state of the LeanMinHash.
codesearchnet
def __init__(self, scope, parent, result):
    CodeExpression.__init__(self, scope, parent, '(default)', result)
Constructor for default arguments. Args: scope (CodeEntity): The program scope where this object belongs. parent (CodeEntity): This object's parent in the program tree. result (str): The return type of the argument in the program.
juraj-google-style
def classes_in_module(module) -> List:
    md = module.__dict__
    return [
        md[c] for c in md
        if (isinstance(md[c], type)
            and issubclass(md[c], ETKModule)
            and md[c].__module__ == module.__name__)
    ]
Return all classes in the module whose superclass is ETKModule Args: module: The module to inspect. Returns: List of classes
codesearchnet
def read(self, n, echo=None):
    d = self.channel.read(n)
    if echo or (echo is None and self.echo):
        sys.stdout.write(d.decode('latin1'))
        sys.stdout.flush()
    return d
Read *n* bytes from the channel. Args: n(int): The number of bytes to read from the channel. echo(bool): Whether to write the read data to stdout. Returns: bytes: *n* bytes of data. Raises: EOFError: If the channel was closed.
juraj-google-style
def _get_executor_init(self, workers):
    def pool_fn(seqs):
        pool = get_pool_class(True)(
            workers,
            initializer=init_pool_generator,
            initargs=(seqs, self.random_seed, get_worker_id_queue()))
        _DATA_POOLS.add(pool)
        return pool
    return pool_fn
Gets the Pool initializer for multiprocessing. Args: workers: Number of workers. Returns: A function to initialize the pool
github-repos
def quad_genz_keister_24(order):
    order = sorted(GENZ_KEISTER_24.keys())[order]
    abscissas, weights = GENZ_KEISTER_24[order]
    abscissas = numpy.array(abscissas)
    weights = numpy.array(weights)
    weights /= numpy.sum(weights)
    abscissas *= numpy.sqrt(2)
    return abscissas, weights
Hermite Genz-Keister 24 rule. Args: order (int): The quadrature order. Must be in the interval (0, 8). Returns: (:py:data:typing.Tuple[numpy.ndarray, numpy.ndarray]): Abscissas and weights Examples: >>> abscissas, weights = quad_genz_keister_24(1) >>> print(numpy.around(abscissas, 4)) [-1.7321 0. 1.7321] >>> print(numpy.around(weights, 4)) [0.1667 0.6667 0.1667]
juraj-google-style
def profile_args(_args):
    if (_args.get('app', {}).get('optional') is not None
            or _args.get('app', {}).get('required') is not None):
        app_args_optional = _args.get('app', {}).get('optional', {})
        app_args_required = _args.get('app', {}).get('required', {})
        default_args = _args.get('default', {})
        _args = {}
        _args.update(app_args_optional)
        _args.update(app_args_required)
        _args.update(default_args)
    elif _args.get('app') is not None and _args.get('default') is not None:
        app_args = _args.get('app', {})
        default_args = _args.get('default', {})
        _args = {}
        _args.update(app_args)
        _args.update(default_args)
    return _args
Return args for v1, v2, or v3 structure. Args: _args (dict): The args section from the profile. Returns: dict: A collapsed version of the args dict.
juraj-google-style
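A worked example of collapsing the v3 profile layout; the key names below are illustrative, only the app/optional/required/default structure matters to the function.

```python
v3_args = {
    'app': {
        'optional': {'tc_log_level': 'debug'},
        'required': {'api_token': 'abc123'},
    },
    'default': {'tc_out_path': '/tmp/out'},
}
flat = profile_args(v3_args)
# {'tc_log_level': 'debug', 'api_token': 'abc123', 'tc_out_path': '/tmp/out'}
```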
def mkzip(archive, items, mode='w', save_full_paths=False):
    close = False
    try:
        if not isinstance(archive, zipfile.ZipFile):
            archive = zipfile.ZipFile(archive, mode, allowZip64=True)
            close = True
        logger.info('mkzip: Creating %s, from: %s', archive.filename, items)
        if isinstance(items, str):
            items = [items]
        for item in items:
            item = os.path.abspath(item)
            basename = os.path.basename(item)
            if os.path.isdir(item):
                for root, directories, filenames in os.walk(item):
                    for filename in filenames:
                        path = os.path.join(root, filename)
                        if save_full_paths:
                            archive_path = path.encode('utf-8')
                        else:
                            archive_path = os.path.join(
                                basename,
                                path.replace(item, '').strip('\\/')).encode('utf-8')
                        archive.write(path, archive_path)
            elif os.path.isfile(item):
                if save_full_paths:
                    archive_name = item.encode('utf-8')
                else:
                    archive_name = basename.encode('utf-8')
                archive.write(item, archive_name)
        return True
    except Exception as e:
        logger.error('Error occurred during mkzip: %s' % e)
        return False
    finally:
        if close:
            archive.close()
Recursively zip a directory. Args: archive (zipfile.ZipFile or str): ZipFile object add to or path to the output zip archive. items (str or list of str): Single item or list of items (files and directories) to be added to zipfile. mode (str): w for create new and write a for append to. save_full_paths (bool): Preserve full paths.
codesearchnet
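A usage sketch; the paths below are illustrative.

```python
# Entries are stored relative to each item's basename unless
# save_full_paths=True.
ok = mkzip('out.zip', ['/tmp/reports', '/tmp/notes.txt'])
print('archive written' if ok else 'archive failed')
```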
def increase_route_count(self, crawled_request):
    for route in self.__routing_options.routes:
        if re.compile(route).match(crawled_request.url):
            count_key = str(route) + crawled_request.method
            if count_key in self.__routing_count.keys():
                self.__routing_count[count_key] += 1
            else:
                self.__routing_count[count_key] = 1
            break
Increase the count that determines how many times a URL of a certain route has been crawled. Args: crawled_request (:class:`nyawc.http.Request`): The request that possibly matches a route.
juraj-google-style
def _get_entities(self, text, language=''):
    body = {
        'document': {
            'type': 'PLAIN_TEXT',
            'content': text,
        },
        'encodingType': 'UTF32',
    }
    if language:
        body['document']['language'] = language
    request = self.service.documents().analyzeEntities(body=body)
    response = request.execute()
    result = []
    for entity in response.get('entities', []):
        mentions = entity.get('mentions', [])
        if not mentions:
            continue
        entity_text = mentions[0]['text']
        offset = entity_text['beginOffset']
        for word in entity_text['content'].split():
            result.append({'content': word, 'beginOffset': offset})
            offset += len(word)
    return result
Returns the list of entities retrieved from the given text. Args: text (str): Input text. language (:obj:`str`, optional): Language code. Returns: List of entities.
juraj-google-style
def design_stat_heating(self, value='Heating'):
    if value is not None:
        try:
            value = str(value)
        except ValueError:
            raise ValueError(
                'value {} need to be of type str '
                'for field `design_stat_heating`'.format(value))
        if ',' in value:
            raise ValueError('value should not contain a comma '
                             'for field `design_stat_heating`')
        vals = set()
        vals.add('Heating')
        if value not in vals:
            raise ValueError(
                'value {} is not an accepted value for '
                'field `design_stat_heating`'.format(value))
    self._design_stat_heating = value
Corresponds to IDD Field `design_stat_heating` Args: value (str): value for IDD Field `design_stat_heating` Accepted values are: - Heating Default value: Heating if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def _verify_required_claims_exist(jwt_claims):
    for claim_name in [u"aud", u"exp", u"iss", u"sub"]:
        if claim_name not in jwt_claims:
            raise suppliers.UnauthenticatedException(
                u'Missing "%s" claim' % claim_name)
Verifies that the required claims exist. Args: jwt_claims: the JWT claims to be verified. Raises: UnauthenticatedException: if some claim doesn't exist.
juraj-google-style
def _GetPathSegmentIndexForOccurrenceWeights(self, occurrence_weights, value_weights):
    largest_weight = occurrence_weights.GetLargestWeight()
    if largest_weight > 0:
        occurrence_weight_indexes = occurrence_weights.GetIndexesForWeight(
            largest_weight)
        number_of_occurrence_indexes = len(occurrence_weight_indexes)
    else:
        number_of_occurrence_indexes = 0
    path_segment_index = None
    if number_of_occurrence_indexes == 0:
        path_segment_index = self._GetPathSegmentIndexForValueWeights(
            value_weights)
    elif number_of_occurrence_indexes == 1:
        path_segment_index = occurrence_weight_indexes[0]
    else:
        largest_weight = 0
        for occurrence_index in occurrence_weight_indexes:
            value_weight = value_weights.GetWeightForIndex(occurrence_index)
            if not path_segment_index or largest_weight < value_weight:
                largest_weight = value_weight
                path_segment_index = occurrence_index
    return path_segment_index
Retrieves the index of the path segment based on occurrence weights. Args: occurrence_weights: the occurrence weights object (instance of _PathSegmentWeights). value_weights: the value weights object (instance of _PathSegmentWeights). Returns: An integer containing the path segment index.
codesearchnet
def get_best_gain(mapping, candidate_mappings, weight_dict, instance_len, cur_match_num):
    largest_gain = 0
    use_swap = True
    node1 = None
    node2 = None
    unmatched = set(range(instance_len))
    for nid in mapping:
        if nid in unmatched:
            unmatched.remove(nid)
    for i, nid in enumerate(mapping):
        for nm in unmatched:
            if nm in candidate_mappings[i]:
                if veryVerbose:
                    print("Remap node", i, "from ", nid, "to", nm, file=DEBUG_LOG)
                mv_gain = move_gain(mapping, i, nid, nm, weight_dict, cur_match_num)
                if veryVerbose:
                    print("Move gain:", mv_gain, file=DEBUG_LOG)
                    new_mapping = mapping[:]
                    new_mapping[i] = nm
                    new_match_num = compute_match(new_mapping, weight_dict)
                    if new_match_num != cur_match_num + mv_gain:
                        print(mapping, new_mapping, file=ERROR_LOG)
                        print("Inconsistency in computing: move gain",
                              cur_match_num, mv_gain, new_match_num, file=ERROR_LOG)
                if mv_gain > largest_gain:
                    largest_gain = mv_gain
                    node1 = i
                    node2 = nm
                    use_swap = False
    for i, m in enumerate(mapping):
        for j in range(i + 1, len(mapping)):
            m2 = mapping[j]
            if veryVerbose:
                print("Swap node", i, "and", j, file=DEBUG_LOG)
                print("Before swapping:", i, "-", m, ",", j, "-", m2, file=DEBUG_LOG)
                print(mapping, file=DEBUG_LOG)
                print("After swapping:", i, "-", m2, ",", j, "-", m, file=DEBUG_LOG)
            sw_gain = swap_gain(mapping, i, m, j, m2, weight_dict, cur_match_num)
            if veryVerbose:
                print("Swap gain:", sw_gain, file=DEBUG_LOG)
                new_mapping = mapping[:]
                new_mapping[i] = m2
                new_mapping[j] = m
                print(new_mapping, file=DEBUG_LOG)
                new_match_num = compute_match(new_mapping, weight_dict)
                if new_match_num != cur_match_num + sw_gain:
                    print(mapping, new_mapping, file=ERROR_LOG)
                    print("Inconsistency in computing: swap gain",
                          cur_match_num, sw_gain, new_match_num, file=ERROR_LOG)
            if sw_gain > largest_gain:
                largest_gain = sw_gain
                node1 = i
                node2 = j
                use_swap = True
    cur_mapping = mapping[:]
    if node1 is not None:
        if use_swap:
            if veryVerbose:
                print("Use swap gain", file=DEBUG_LOG)
            temp = cur_mapping[node1]
            cur_mapping[node1] = cur_mapping[node2]
            cur_mapping[node2] = temp
        else:
            if veryVerbose:
                print("Use move gain", file=DEBUG_LOG)
            cur_mapping[node1] = node2
    else:
        if veryVerbose:
            print("no move/swap gain found", file=DEBUG_LOG)
    if veryVerbose:
        print("Original mapping", mapping, file=DEBUG_LOG)
        print("Current mapping", cur_mapping, file=DEBUG_LOG)
    return largest_gain, cur_mapping
Hill-climbing method to return the best gain a swap/move can get Arguments: mapping: current node mapping candidate_mappings: the list of candidate mappings weight_dict: the weight dictionary instance_len: the number of the nodes in AMR 2 cur_match_num: current triple match number Returns: the best gain we can get via swap/move operation
juraj-google-style
def num_accumulated(self, name=None):
    if name is None:
        name = '%s_NumAccumulated' % self._name
    return gen_data_flow_ops.accumulator_num_accumulated(
        self._accumulator_ref, name=name)
Number of gradients that have currently been aggregated in accumulator. Args: name: Optional name for the operation. Returns: Number of accumulated gradients currently in accumulator.
github-repos
def gallery_section(images, title):
    imgs = []
    while True:
        img = yield marv.pull(images)
        if img is None:
            break
        imgs.append({'src': img.relpath})
    if not imgs:
        return
    widget = {'title': images.title, 'gallery': {'images': imgs}}
    section = {'title': title, 'widgets': [widget]}
    yield marv.push(section)
Create detail section with gallery. Args: title (str): Title to be displayed for detail section. images: stream of marv image files Returns: One detail section.
codesearchnet
def __strip_extra_attributes(self, node: yaml.Node, known_attrs: List[str]) -> None:
    known_keys = list(known_attrs)
    known_keys.remove('self')
    if 'yatiml_extra' in known_keys:
        known_keys.remove('yatiml_extra')
    for key_node, value_node in node.value:
        if (not isinstance(key_node, yaml.ScalarNode)
                or key_node.tag != 'tag:yaml.org,2002:str'):
            raise RecognitionError(
                '{}{}Mapping keys that are not of type string are not'
                ' supported by YAtiML.'.format(node.start_mark, os.linesep))
        if key_node.value not in known_keys:
            self.__strip_tags(value_node)
Strips tags from extra attributes. This prevents nodes under attributes that are not part of our \ data model from being converted to objects. They'll be plain \ CommentedMaps instead, which then get converted to OrderedDicts \ for the user. Args: node: The node to process known_attrs: The attributes to not strip
codesearchnet
def post(self, headers={}, body=''):
    code, message = self.command('POST')
    if code != 340:
        raise NNTPReplyError(code, message)
    hdrs = utils.unparse_headers(headers)
    self.socket.sendall(hdrs)
    if isinstance(body, basestring):
        body = cStringIO.StringIO(body)
    illegal = False
    for line in body:
        if line.startswith('.'):
            line = '.' + line
        if line.endswith('\r\n'):
            line = line[:-2]
        elif line.endswith('\n'):
            line = line[:-1]
        if any(c in line for c in '\x00\r'):
            illegal = True
            break
        self.socket.sendall(line + '\r\n')
    self.socket.sendall('.\r\n')
    code, message = self.status()
    if illegal:
        raise NNTPDataError('Illegal characters found')
    if code != 240:
        raise NNTPReplyError(code, message)
    message_id = message.split(None, 1)[0]
    if message_id.startswith('<') and message_id.endswith('>'):
        return message_id
    return True
POST command. Args: headers: A dictionary of headers. body: A string or file like object containing the post content. Raises: NNTPDataError: If binary characters are detected in the message body. Returns: A value that evaluates to true if posting the message succeeded. (See note for further details) Note: '\\n' line terminators are converted to '\\r\\n' Note: Though not part of any specification it is common for usenet servers to return the message-id for a successfully posted message. If a message-id is identified in the response from the server then that message-id will be returned by the function, otherwise True will be returned. Note: Due to protocol issues if illegal characters are found in the body the message will still be posted but will be truncated as soon as an illegal character is detected. No illegal characters will be sent to the server. For information illegal characters include embedded carriage returns '\\r' and null characters '\\0' (because this function converts line feeds to CRLF, embedded line feeds are not an issue)
codesearchnet
def build_inputs_with_special_tokens(self, token_ids_0: List[int],
                                     token_ids_1: Optional[List[int]] = None,
                                     cls_token_id: Optional[int] = None,
                                     sep_token_id: Optional[int] = None) -> List[int]:
    cls = [self.cls_token_id] if cls_token_id is None else [cls_token_id]
    sep = [self.sep_token_id] if sep_token_id is None else [sep_token_id]
    if token_ids_1 is None:
        return cls + token_ids_0 + sep
    return cls + token_ids_0 + sep + token_ids_1 + sep
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format: - single sequence: `[CLS] X [SEP]` - pair of sequences: `[CLS] A [SEP] B [SEP]` Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
github-repos
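A worked example; the token IDs follow bert-base-uncased conventions ([CLS]=101, [SEP]=102), and `tokenizer` is an assumed instance of the class above.

```python
ids = tokenizer.build_inputs_with_special_tokens([7592], [2088])
# -> [101, 7592, 102, 2088, 102], i.e. [CLS] A [SEP] B [SEP]
```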
def handle_block(
        mediator_state: MediatorTransferState,
        state_change: Block,
        channelidentifiers_to_channels: ChannelMap,
        pseudo_random_generator: random.Random,
) -> TransitionResult[MediatorTransferState]:
    expired_locks_events = events_to_remove_expired_locks(
        mediator_state,
        channelidentifiers_to_channels,
        state_change.block_number,
        pseudo_random_generator,
    )
    secret_reveal_events = events_for_onchain_secretreveal_if_dangerzone(
        channelmap=channelidentifiers_to_channels,
        secrethash=mediator_state.secrethash,
        transfers_pair=mediator_state.transfers_pair,
        block_number=state_change.block_number,
        block_hash=state_change.block_hash,
    )
    unlock_fail_events = events_for_expired_pairs(
        channelidentifiers_to_channels=channelidentifiers_to_channels,
        transfers_pair=mediator_state.transfers_pair,
        waiting_transfer=mediator_state.waiting_transfer,
        block_number=state_change.block_number,
    )
    iteration = TransitionResult(
        mediator_state,
        unlock_fail_events + secret_reveal_events + expired_locks_events,
    )
    return iteration
After Raiden learns about a new block this function must be called to handle expiration of the hash time locks. Args: state: The current state. Return: TransitionResult: The resulting iteration
juraj-google-style
def load_data(self):
    units = ''
    if self.file_objects[0] is None:
        raise IOError()
    var_name, z_index = self.format_var_name(
        self.variable, list(self.file_objects[0].variables.keys()))
    ntimes = 0
    if 'time' in self.file_objects[0].variables[var_name].dimensions:
        ntimes = len(self.file_objects[0].dimensions['time'])
    if ntimes > 1:
        if z_index is None:
            data = self.file_objects[0].variables[var_name][
                self.forecast_hours].astype(np.float32)
        else:
            data = self.file_objects[0].variables[var_name][
                self.forecast_hours, z_index].astype(np.float32)
    else:
        y_dim, x_dim = self.file_objects[0].variables[var_name].shape[-2:]
        data = np.zeros((len(self.valid_dates), y_dim, x_dim), dtype=np.float32)
        for f, file_object in enumerate(self.file_objects):
            if file_object is not None:
                if z_index is None:
                    data[f] = file_object.variables[var_name][0]
                else:
                    data[f] = file_object.variables[var_name][0, z_index]
    if hasattr(self.file_objects[0].variables[var_name], 'units'):
        units = self.file_objects[0].variables[var_name].units
    return data, units
Load data from netCDF file objects or list of netCDF file objects. Handles special variable name formats. Returns: Array of data loaded from files in (time, y, x) dimensions, Units
codesearchnet
def get_engine(self, filepath, kind=None):
    if not kind:
        extension = os.path.splitext(filepath)[1]
        if not extension:
            msg = 'Unable to discover settings format from an empty file extension: {}'
            raise SettingsDiscoveryError(msg.format(filepath))
        elif extension[1:] not in self.extensions:
            msg = 'Settings file extension is unknown from available backends: {}'
            raise SettingsDiscoveryError(msg.format(filepath))
        kind = self.extensions[extension[1:]]
    elif kind not in self.engines:
        msg = 'Given settings format is unknown: {}'
        raise SettingsDiscoveryError(msg.format(kind))
    return self.engines[kind]
From given filepath try to discover which backend format to use. Discovering is pretty naive as it finds the format from the file extension. Args: filepath (str): Settings filepath or filename. Keyword Arguments: kind (str): A format name to enforce a specific backend. Can be any value from attribute ``_kind_name`` of available backend engines. Raises: boussole.exceptions.SettingsDiscoveryError: If the extension is unknown or if the given format name is unknown. Returns: object: Backend engine class.
codesearchnet
def is_directory_v2(path):
    try:
        return _pywrap_file_io.IsDirectory(compat.path_to_bytes(path))
    except errors.OpError:
        return False
Returns whether the path is a directory or not. Args: path: string, path to a potential directory Returns: True, if the path is a directory; False otherwise
github-repos
def convert_prediction_values(values, serving_bundle, model_spec=None):
    if serving_bundle.model_type == 'classification':
        response = classification_pb2.ClassificationResponse()
        for example_index in range(len(values)):
            classification = response.result.classifications.add()
            for class_index in range(len(values[example_index])):
                class_score = classification.classes.add()
                class_score.score = values[example_index][class_index]
                class_score.label = str(class_index)
    else:
        response = regression_pb2.RegressionResponse()
        for example_index in range(len(values)):
            regression = response.result.regressions.add()
            regression.value = values[example_index]
    if model_spec:
        response.model_spec.CopyFrom(model_spec)
    return response
Converts tensor values into ClassificationResponse or RegressionResponse. Args: values: For classification, a 2D list of numbers. The first dimension is for each example being predicted. The second dimension are the probabilities for each class ID in the prediction. For regression, a 1D list of numbers, with a regression score for each example being predicted. serving_bundle: A `ServingBundle` object that contains the information about the serving request that the response was generated by. model_spec: Optional model spec to put into the response. Returns: A ClassificationResponse or RegressionResponse.
codesearchnet
def filter_bboxes_by_visibility(original_shape, bboxes, transformed_shape,
                                transformed_bboxes, threshold=0.0, min_area=0.0):
    img_height, img_width = original_shape[:2]
    transformed_img_height, transformed_img_width = transformed_shape[:2]
    visible_bboxes = []
    for bbox, transformed_bbox in zip(bboxes, transformed_bboxes):
        if not all(0.0 <= value <= 1.0 for value in transformed_bbox[:4]):
            continue
        bbox_area = calculate_bbox_area(bbox, img_height, img_width)
        transformed_bbox_area = calculate_bbox_area(
            transformed_bbox, transformed_img_height, transformed_img_width)
        if transformed_bbox_area < min_area:
            continue
        visibility = transformed_bbox_area / bbox_area
        if visibility >= threshold:
            visible_bboxes.append(transformed_bbox)
    return visible_bboxes
Filter bounding boxes and return only those boxes whose visibility after transformation is above the threshold and whose minimal area of the bounding box in pixels is more than min_area. Args: original_shape (tuple): original image shape bboxes (list): original bounding boxes transformed_shape (tuple): transformed image shape transformed_bboxes (list): transformed bounding boxes threshold (float): visibility threshold. Should be a value in the range [0.0, 1.0]. min_area (float): Minimal area threshold.
codesearchnet
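A usage sketch with toy normalized boxes in (x_min, y_min, x_max, y_max) order; the shapes and values are illustrative.

```python
kept = filter_bboxes_by_visibility(
    original_shape=(480, 640),
    bboxes=[(0.1, 0.1, 0.5, 0.5)],
    transformed_shape=(480, 640),
    transformed_bboxes=[(0.1, 0.1, 0.3, 0.5)],  # half the original width
    threshold=0.4,
)
# Visibility is 0.5 (half the area survives), which is >= 0.4, so the box
# is kept.
```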
def inverse_transform(self, y, lengths=None):
    y = np.argmax(y, -1)
    inverse_y = [self._label_vocab.id2doc(ids) for ids in y]
    if lengths is not None:
        inverse_y = [iy[:l] for iy, l in zip(inverse_y, lengths)]
    return inverse_y
Return label strings. Args: y: label id matrix. lengths: sentences length. Returns: list: list of list of strings.
codesearchnet
def partial_trace(tensor: np.ndarray, keep_indices: List[int]) -> np.ndarray:
    ndim = tensor.ndim // 2
    if not all(tensor.shape[i] == tensor.shape[i + ndim] for i in range(ndim)):
        raise ValueError('Tensors must have shape (d_0,...,d_{{k-1}},d_0,...,'
                         'd_{{k-1}}) but had shape ({}).'.format(tensor.shape))
    if not all(i < ndim for i in keep_indices):
        raise ValueError('keep_indices were {} but must be in first half, '
                         'i.e. have index less than {}.'.format(keep_indices, ndim))
    keep_set = set(keep_indices)
    keep_map = dict(zip(keep_indices, sorted(keep_indices)))
    left_indices = [keep_map[i] if i in keep_set else i for i in range(ndim)]
    right_indices = [ndim + i if i in keep_set else i for i in left_indices]
    return np.einsum(tensor, left_indices + right_indices)
Takes the partial trace of a given tensor. The input tensor must have shape `(d_0, ..., d_{k-1}, d_0, ..., d_{k-1})`. The trace is done over all indices that are not in keep_indices. The resulting tensor has shape `(d_{i_0}, ..., d_{i_r}, d_{i_0}, ..., d_{i_r})` where `i_j` is the `j`th element of `keep_indices`. Args: tensor: The tensor to sum over. This tensor must have a shape `(d_0, ..., d_{k-1}, d_0, ..., d_{k-1})`. keep_indices: Which indices to not sum over. These are only the indices of the first half of the tensors indices (i.e. all elements must be between `0` and `tensor.ndims / 2 - 1` inclusive). Raises: ValueError: if the tensor is not of the correct shape or the indices are not from the first half of valid indices for the tensor.
codesearchnet
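A worked example: tracing out qubit 1 of a Bell state (|00> + |11>)/sqrt(2) leaves the maximally mixed single-qubit state.

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)  # (d_0, d_1, d_0, d_1)
reduced = partial_trace(rho, keep_indices=[0])
print(reduced)  # [[0.5 0. ]
                #  [0.  0.5]]
```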
def _get_instance_attributes(self):
    for name, value in self.__dict__.items():
        if name in map(lambda x: x[0], self.get_class_attributes()):
            yield (name, value)
Return a generator for instance attributes' name and value. .. code-block:: python3 for _name, _value in self._get_instance_attributes(): print("attribute name: {}".format(_name)) print("attribute value: {}".format(_value)) Returns: generator: tuples with attribute name and value.
codesearchnet
def _set_label(self, which, label, **kwargs):
    prop_default = {'fontsize': 18}
    for prop, default in prop_default.items():
        kwargs[prop] = kwargs.get(prop, default)
    setattr(self.label, which, label)
    setattr(self.label, which + '_kwargs', kwargs)
    return
Private method for setting labels. Args: which (str): The indicator of which part of the plots to adjust. This currently handles `xlabel`/`ylabel`, and `title`. label (str): The label to be added. fontsize (int, optional): Fontsize for associated label. Default is 18.
codesearchnet
def abs(self: EventSetOrNode) -> EventSetOrNode:
    from temporian.core.operators.unary import abs
    return abs(self)
Gets the absolute value of an [`EventSet`][temporian.EventSet]'s features. Example: ```python >>> a = tp.event_set( ... timestamps=[1, 2, 3], ... features={"M":[np.nan, -1., 2.], "N": [-1, -3, 5]}, ... ) >>> a.abs() indexes: ... 'M': [nan 1. 2.] 'N': [1 3 5] ... ``` Returns: EventSet with positive valued features.
github-repos
def read(self, input_stream, kmip_version=enums.KMIPVersion.KMIP_1_0):
    super(ObtainLeaseResponsePayload, self).read(
        input_stream, kmip_version=kmip_version)
    local_stream = utils.BytearrayStream(input_stream.read(self.length))
    if self.is_tag_next(enums.Tags.UNIQUE_IDENTIFIER, local_stream):
        self._unique_identifier = primitives.TextString(
            tag=enums.Tags.UNIQUE_IDENTIFIER)
        self._unique_identifier.read(local_stream, kmip_version=kmip_version)
    if self.is_tag_next(enums.Tags.LEASE_TIME, local_stream):
        self._lease_time = primitives.Interval(tag=enums.Tags.LEASE_TIME)
        self._lease_time.read(local_stream, kmip_version=kmip_version)
    if self.is_tag_next(enums.Tags.LAST_CHANGE_DATE, local_stream):
        self._last_change_date = primitives.DateTime(
            tag=enums.Tags.LAST_CHANGE_DATE)
        self._last_change_date.read(local_stream, kmip_version=kmip_version)
    self.is_oversized(local_stream)
Read the data encoding the ObtainLease response payload and decode it into its constituent parts. Args: input_stream (stream): A data stream containing encoded object data, supporting a read method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be decoded. Optional, defaults to KMIP 1.0. Raises: ValueError: Raised if the data attribute is missing from the encoded payload.
codesearchnet
def add_dspam_headers(self, results):
    for header in self.headers:
        hname = self.header_prefix + header
        if header.lower() in results:
            hvalue = results[header.lower()]
            logger.debug(
                '<{}> Adding header {}: {}'.format(self.id, hname, hvalue))
            self.addheader(hname, hvalue)
        elif header == 'Processed':
            hvalue = datetime.datetime.now().strftime('%a %b %d %H:%M:%S %Y')
            logger.debug(
                '<{}> Adding header {}: {}'.format(self.id, hname, hvalue))
            self.addheader(hname, hvalue)
        else:
            logger.warning(
                '<{}> Not adding header {}, no data available in '
                'DSPAM results'.format(self.id, hname))
Format DSPAM headers with passed results, and add them to the message. Args: results -- A results dictionary from DspamClient.
juraj-google-style
def _SparseSliceGrad(op: ops.Operation, *grads):
    backprop_val_grad = grads[1]
    input_indices = op.inputs[0]
    input_start = op.inputs[3]
    output_indices = op.outputs[0]
    val_grad = gen_sparse_ops.sparse_slice_grad(
        backprop_val_grad, input_indices, input_start, output_indices)
    val_grad.set_shape(op.inputs[1].get_shape())
    return (None, val_grad, None, None, None)
The backward operator for the SparseSlice op. This op takes in the upstream gradient w.r.t. non-empty values of the sliced `SparseTensor`, and outputs the gradients w.r.t. the non-empty values of input `SparseTensor`. Args: op: the SparseSlice op *grads: the incoming gradients, one element per output of `op` Returns: Gradient for each of the 5 input tensors of SparseSlice: (indices, values, shape, start, size) The gradients for the indices, shape, start and the size are None.
github-repos
def regex(self, *patterns, **kwargs):
    start = kwargs.pop("start", 0)
    stop = kwargs.pop("stop", None)
    keys_only = kwargs.pop("keys_only", False)
    flags = kwargs.pop("flags", 0)
    results = {pattern: [] for pattern in patterns}
    stop = stop if stop is not None else -1
    for i, line in enumerate(self[start:stop]):
        for pattern in patterns:
            grps = re.search(pattern, line, flags=flags)
            if grps and keys_only:
                results[pattern].append(i)
            elif grps and grps.groups():
                for group in grps.groups():
                    results[pattern].append((i, group))
            elif grps:
                results[pattern].append((i, line))
    if len(patterns) == 1:
        return results[patterns[0]]
    return results
Search the editor for lines matching the regular expression. re.MULTILINE is not currently supported. Args: \*patterns: Regular expressions to search each line for keys_only (bool): Only return keys flags (re.FLAG): flags passed to re.search Returns: results (dict): Dictionary of pattern keys, line values (or groups - default)
juraj-google-style
def __init__(self, channel):
    self.Dump = channel.unary_stream(
        '/debug.Debug/Dump',
        request_serializer=client_dot_debug_dot_debug__pb2.DumpRequest.SerializeToString,
        response_deserializer=google_dot_protobuf_dot_wrappers__pb2.BytesValue.FromString,
    )
    self.Profile = channel.unary_stream(
        '/debug.Debug/Profile',
        request_serializer=client_dot_debug_dot_debug__pb2.ProfileRequest.SerializeToString,
        response_deserializer=google_dot_protobuf_dot_wrappers__pb2.BytesValue.FromString,
    )
    self.Binary = channel.unary_stream(
        '/debug.Debug/Binary',
        request_serializer=client_dot_debug_dot_debug__pb2.BinaryRequest.SerializeToString,
        response_deserializer=google_dot_protobuf_dot_wrappers__pb2.BytesValue.FromString,
    )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def _show_inputs_outputs_mgd(meta_graph_def, signature_def_key, indent):
    inputs_tensor_info = _get_inputs_tensor_info_from_meta_graph_def(
        meta_graph_def, signature_def_key)
    outputs_tensor_info = _get_outputs_tensor_info_from_meta_graph_def(
        meta_graph_def, signature_def_key)
    indent_str = '  ' * indent

    def in_print(s):
        print(indent_str + s)

    in_print('The given SavedModel SignatureDef contains the following input(s):')
    for input_key, input_tensor in sorted(inputs_tensor_info.items()):
        in_print("  inputs['%s'] tensor_info:" % input_key)
        _print_tensor_info(input_tensor, indent + 1)
    in_print('The given SavedModel SignatureDef contains the following output(s):')
    for output_key, output_tensor in sorted(outputs_tensor_info.items()):
        in_print("  outputs['%s'] tensor_info:" % output_key)
        _print_tensor_info(output_tensor, indent + 1)
    in_print('Method name is: %s' %
             meta_graph_def.signature_def[signature_def_key].method_name)
Prints input and output TensorInfos. Prints the details of input and output TensorInfos for the SignatureDef mapped by the given signature_def_key. Args: meta_graph_def: MetaGraphDef to inspect. signature_def_key: A SignatureDef key string. indent: How far (in increments of 2 spaces) to indent each line of output.
github-repos
def read_blocks(file_path, start=0.0, end=float('inf'), buffer_size=5760000):
    buffer = []
    n_buffer = 0
    n_samples = 0
    with audioread.audio_open(file_path) as input_file:
        n_channels = input_file.channels
        sr_native = input_file.samplerate
        start_sample = int(np.round(sr_native * start)) * n_channels
        end_sample = end
        if end_sample != np.inf:
            end_sample = int(np.round(sr_native * end)) * n_channels
        for block in input_file:
            block = librosa.util.buf_to_float(block)
            n_prev = n_samples
            n_samples += len(block)
            if n_samples < start_sample:
                continue
            if n_prev > end_sample:
                break
            if n_samples > end_sample:
                block = block[:(end_sample - n_prev)]
            if n_prev <= start_sample <= n_samples:
                block = block[(start_sample - n_prev):]
            n_buffer += len(block)
            buffer.append(block)
            if n_buffer >= buffer_size:
                yield process_buffer(buffer, n_channels)
                buffer = []
                n_buffer = 0
        if len(buffer) > 0:
            yield process_buffer(buffer, n_channels)
Read an audio file block after block. The blocks are yielded one by one. Args: file_path (str): Path to the file to read. start (float): Start in seconds to read from. end (float): End in seconds to read to. ``inf`` means to the end of the file. buffer_size (int): Number of samples to load into memory at once and return as a single block. The exact number of loaded samples depends on the block size of the audioread library, so it can be up to x samples higher, where x is typically 1024 or 4096. Returns: Generator: A generator yielding the samples for every block.
codesearchnet
def metrics(expected_box_encodings, expected_scores, actual_box_encodings, actual_scores):
    squashed_expected_scores = tf.math.divide(
        1.0, 1.0 + tf.math.exp(-expected_scores))
    squashed_actual_scores = tf.math.divide(
        1.0, 1.0 + tf.math.exp(-actual_scores))
    kld_metric = kl_divergence.symmetric_kl_divergence(
        expected_scores, actual_scores)
    high_scoring_indices = tf.math.logical_or(
        tf.math.greater(squashed_expected_scores, 0.1),
        tf.math.greater(squashed_actual_scores, 0.1))
    high_scoring_actual_boxes = tf.where(
        condition=tf.broadcast_to(
            input=high_scoring_indices, shape=tf.shape(actual_box_encodings)),
        x=actual_box_encodings,
        y=expected_box_encodings)
    box_diff = high_scoring_actual_boxes - expected_box_encodings
    box_squared_diff = tf.math.pow(box_diff, 2)
    box_mse = tf.divide(
        tf.math.reduce_sum(box_squared_diff),
        tf.math.maximum(
            tf.math.count_nonzero(high_scoring_indices, dtype=tf.float32), 1.0))
    ok = tf.logical_and(kld_metric < 0.1, box_mse < 0.01)
    return [kld_metric, box_mse, ok]
Calculate metrics from expected and actual blazeface outputs. Args: expected_box_encodings: box encodings from model expected_scores: classifications from model actual_box_encodings: golden box encodings actual_scores: golden classifications Returns: three-item list with classification error, localization error, and an overall ok flag
github-repos
def read_file(*components, **kwargs):
    must_exist = kwargs.get("must_exist", True)
    if must_exist:
        path = fs.must_exist(*components)
    else:
        path = fs.path(*components)
    try:
        with open(path) as infile:
            return loads(infile.read())
    except ValueError as e:
        raise ValueError(
            "malformed JSON file '{path}'. Message from parser: {err}"
            .format(path=fs.basename(path), err=str(e)))
    except IOError:
        if not must_exist:
            return {}
        raise
Load a JSON data blob. Arguments: path (str): Path to file. must_exist (bool, optional): If False, return empty dict if file does not exist. Returns: array or dict: JSON data. Raises: File404: If path does not exist, and must_exist is True. InvalidFile: If JSON is malformed.
juraj-google-style
def get_array_for_fit(observables: dict, track_pt_bin: int, jet_pt_bin: int) -> histogram.Histogram1D:
    for name, observable in observables.items():
        if observable.track_pt_bin == track_pt_bin and observable.jet_pt_bin == jet_pt_bin:
            return histogram.Histogram1D.from_existing_hist(observable.hist)
    raise ValueError(
        f"Cannot find fit with jet pt bin {jet_pt_bin} and track pt bin {track_pt_bin}")
Get a Histogram1D associated with the selected jet and track pt bins. This is often used to retrieve data for fitting. Args: observables (dict): The observables from which the hist should be retrieved. track_pt_bin (int): Track pt bin of the desired hist. jet_pt_bin (int): Jet pt bin of the desired hist. Returns: Histogram1D: Converted TH1 or uproot histogram. Raises: ValueError: If the requested observable couldn't be found.
juraj-google-style
def encode(self, inputs, attention_bias):
    with tf.name_scope("encode"):
        embedded_inputs = self.embedding_softmax_layer(inputs)
        inputs_padding = model_utils.get_padding(inputs)
        with tf.name_scope("add_pos_encoding"):
            length = tf.shape(embedded_inputs)[1]
            pos_encoding = model_utils.get_position_encoding(
                length, self.params.hidden_size)
            encoder_inputs = embedded_inputs + pos_encoding
        if self.train:
            mlperf_log.transformer_print(
                key=mlperf_log.MODEL_HP_LAYER_POSTPROCESS_DROPOUT,
                value=self.params.layer_postprocess_dropout)
            encoder_inputs = tf.nn.dropout(
                encoder_inputs, 1 - self.params.layer_postprocess_dropout)
        return self.encoder_stack(encoder_inputs, attention_bias, inputs_padding)
Generate continuous representation for inputs. Args: inputs: int tensor with shape [batch_size, input_length]. attention_bias: float tensor with shape [batch_size, 1, 1, input_length] Returns: float tensor with shape [batch_size, input_length, hidden_size]
juraj-google-style
def Open(self, path, ascii_codepage='cp1252'):
    path_specification = self._path_resolver.ResolvePath(path)
    if path_specification is None:
        return None
    return self._OpenPathSpec(path_specification)
Opens the Windows Registry file specified by the path. Args: path (str): path of the Windows Registry file. ascii_codepage (Optional[str]): ASCII string codepage. Returns: WinRegistryFile: Windows Registry file or None.
codesearchnet
def generate_nb_data(P, R, n_cells, assignments=None):
    genes, clusters = P.shape
    output = np.zeros((genes, n_cells))
    if assignments is None:
        cluster_probs = np.ones(clusters) / clusters
    labels = []
    for i in range(n_cells):
        if assignments is None:
            c = np.random.choice(range(clusters), p=cluster_probs)
        else:
            c = assignments[i]
        labels.append(c)
        output[:, i] = np.random.negative_binomial(R[:, c], 1.0 - P[:, c])
    return output, np.array(labels)
Generates negative binomial data Args: P (array): genes x clusters R (array): genes x clusters n_cells (int): number of cells assignments (list): cluster assignment of each cell. Default: random uniform Returns: data array with shape genes x cells labels - array of cluster labels
juraj-google-style
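A usage sketch with toy parameters; the values below are illustrative.

```python
import numpy as np

# 3 genes, 2 clusters, 5 cells.
P = np.array([[0.3, 0.6], [0.5, 0.2], [0.4, 0.4]])  # success probabilities
R = np.array([[2.0, 5.0], [3.0, 1.0], [4.0, 2.0]])  # dispersions
data, labels = generate_nb_data(P, R, n_cells=5)
print(data.shape)    # (3, 5)
print(labels.shape)  # (5,)
```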
def _FindAugmentingPath(self, queue):
    while queue:
        x = queue.popleft()
        for y in self.right - self.t:
            if not self._InEqualitySubgraph(x, y):
                continue
            if y not in self.matches:
                return (True, x, y)
            self.t.add(y)
            queue.append(self.matches[y])
            self._AddToTree(self.matches[y], x)
    return (False, None, None)
Find an augmenting path for the current labeling. Perform a BFS to find an augmenting path for the current labeling. Args: queue: Queue for performing BFS traversal. Returns: found: True if path was found. x: Left vertex of final path edge. y: Right vertex of final path edge.
github-repos
def shannon_entropy(pvec, base=2):
    if base == 2:
        def logfn(x):
            return -x * np.log2(x)
    elif base == np.e:
        def logfn(x):
            return -x * np.log(x)
    else:
        def logfn(x):
            return -x * np.log(x) / np.log(base)
    h = 0.0
    for x in pvec:
        if 0 < x < 1:
            h += logfn(x)
    return h
Compute the Shannon entropy of a probability vector.

The Shannon entropy of a probability vector pv is defined as
$H(pv) = -\\sum_j pv[j] \\log_b(pv[j])$ where $0 \\log_b 0 = 0$.

Args:
    pvec (array_like): a probability vector.
    base (int): the base of the logarithm.

Returns:
    float: The Shannon entropy H(pvec).
codesearchnet
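For example — the values below are standard identities, easy to verify by hand:

import numpy as np

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 -- uniform over 4 outcomes, in bits
print(shannon_entropy([0.5, 0.5], base=np.e))     # ~0.6931 nats (ln 2)
print(shannon_entropy([1.0, 0.0]))                # 0.0 -- a certain outcome carries no entropy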
def parse_account(config, auth, account):
    network_id = account
    advertiser_ids = None
    profile_id = None

    try:
        network_id, profile_id = network_id.split('@', 1)
    except (AttributeError, ValueError):
        profile_id = None

    try:
        network_id, advertiser_ids = network_id.split(':', 1)
    except (AttributeError, ValueError):
        pass

    if network_id is not None:
        network_id = int(network_id)

    if advertiser_ids is not None:
        advertiser_ids = [int(advertiser_id.strip()) for advertiser_id in advertiser_ids.split(',')]

    return (network_id, advertiser_ids)
Breaks a [account:advertiser@profile] string into parts if supplied.

This function was created to accommodate supplying advertiser and profile
information as a single token. It needs to be refactored as this approach
is messy.

Possible variants include:
* [account:advertiser@profile]
* [account:advertiser]
* [account@profile]

Args:
* config: unused by the parsing logic.
* auth: (string) Either user or service.
* account: (string) A string representing [account:advertiser@profile]

Returns:
* (network_id, advertiser_ids) after parsing the account token. The
  profile portion is split off but not returned.
github-repos
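The token variants from the docstring parse as follows. The first two arguments are not consulted by the parsing logic, so placeholders are passed here:

print(parse_account(None, 'user', '1234:100,200@5678'))  # (1234, [100, 200])
print(parse_account(None, 'user', '1234:100'))           # (1234, [100])
print(parse_account(None, 'user', '1234@5678'))          # (1234, None)
print(parse_account(None, 'user', '1234'))               # (1234, None)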
def _minigui_report_search_status(self, leaves): root = self._player.get_root() msg = { "id": hex(id(root)), "n": int(root.N), "q": float(root.Q), } msg["childQ"] = [int(round(q * 1000)) for q in root.child_Q] msg["childN"] = [int(n) for n in root.child_N] ranked_children = root.rank_children() variations = {} for i in ranked_children[:15]: if root.child_N[i] == 0 or i not in root.children: break c = coords.to_gtp(coords.from_flat(i)) child = root.children[i] nodes = child.most_visited_path_nodes() moves = [coords.to_gtp(coords.from_flat(m.fmove)) for m in nodes] variations[c] = { "n": int(root.child_N[i]), "q": float(root.child_Q[i]), "moves": [c] + moves, } if leaves: path = [] leaf = leaves[0] while leaf != root: path.append(leaf.fmove) leaf = leaf.parent if path: path.reverse() variations["live"] = { "n": int(root.child_N[path[0]]), "q": float(root.child_Q[path[0]]), "moves": [coords.to_gtp(coords.from_flat(m)) for m in path] } if variations: msg["variations"] = variations dbg("mg-update:%s" % json.dumps(msg, sort_keys=True))
Prints the current MCTS search status to stderr. Reports the current search path, root node's child_Q, root node's child_N, the most visited path in a format that can be parsed by one of the STDERR_HANDLERS in minigui.ts. Args: leaves: list of leaf MCTSNodes returned by tree_search().
juraj-google-style
def __init__(self, entries, elements=None): if elements is None: elements = set() for entry in entries: elements.update(entry.composition.elements) elements = list(elements) dim = len(elements) get_reduced_comp = lambda e: e.composition.reduced_composition entries = sorted(entries, key=get_reduced_comp) el_refs = {} min_entries = [] all_entries = [] for c, g in itertools.groupby(entries, key=get_reduced_comp): g = list(g) min_entry = min(g, key=lambda e: e.energy_per_atom) if c.is_element: el_refs[c.elements[0]] = min_entry min_entries.append(min_entry) all_entries.extend(g) if len(el_refs) != dim: raise PhaseDiagramError( "There are no entries associated with a terminal element!.") data = np.array([ [e.composition.get_atomic_fraction(el) for el in elements] + [ e.energy_per_atom] for e in min_entries ]) vec = [el_refs[el].energy_per_atom for el in elements] + [-1] form_e = -np.dot(data, vec) inds = np.where(form_e < -self.formation_energy_tol)[0].tolist() inds.extend([min_entries.index(el) for el in el_refs.values()]) qhull_entries = [min_entries[i] for i in inds] qhull_data = data[inds][:, 1:] extra_point = np.zeros(dim) + 1 / dim extra_point[-1] = np.max(qhull_data) + 1 qhull_data = np.concatenate([qhull_data, [extra_point]], axis=0) if dim == 1: self.facets = [qhull_data.argmin(axis=0)] else: facets = get_facets(qhull_data) finalfacets = [] for facet in facets: if max(facet) == len(qhull_data) - 1: continue m = qhull_data[facet] m[:, -1] = 1 if abs(np.linalg.det(m)) > 1e-14: finalfacets.append(facet) self.facets = finalfacets self.simplexes = [Simplex(qhull_data[f, :-1]) for f in self.facets] self.all_entries = all_entries self.qhull_data = qhull_data self.dim = dim self.el_refs = el_refs self.elements = elements self.qhull_entries = qhull_entries self._stable_entries = set(self.qhull_entries[i] for i in set(itertools.chain(*self.facets)))
Standard constructor for phase diagram.

Args:
    entries ([PDEntry]): A list of PDEntry-like objects having an
        energy, energy_per_atom and composition.
    elements ([Element]): Optional list of elements in the phase
        diagram. If set to None, the elements are determined from
        the entries themselves.
juraj-google-style
def FindServiceByName(self, full_name): full_name = _NormalizeFullyQualifiedName(full_name) if (full_name not in self._service_descriptors): self._FindFileContainingSymbolInDb(full_name) return self._service_descriptors[full_name]
Loads the named service descriptor from the pool. Args: full_name: The full name of the service descriptor to load. Returns: The service descriptor for the named service. Raises: KeyError: if the service cannot be found in the pool.
codesearchnet
def get_all_checkpoints(rundir='runinfo'): if (not os.path.isdir(rundir)): return [] dirs = sorted(os.listdir(rundir)) checkpoints = [] for runid in dirs: checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, runid)) if os.path.isdir(checkpoint): checkpoints.append(checkpoint) return checkpoints
Finds the checkpoints from all previous runs.

Note that checkpoints are incremental, and this helper will not find
previous checkpoints from earlier than the most recent run. It probably
should be made to do so.

Kwargs:
    - rundir (str): Path to the runinfo directory

Returns:
    - a list suitable for the checkpointFiles parameter of the
      DataFlowKernel constructor
codesearchnet
def merge_strings_files(old_strings_file, new_strings_file): old_localizable_dict = generate_localization_key_to_entry_dictionary_from_file(old_strings_file) output_file_elements = [] f = open_strings_file(new_strings_file, "r+") for header_comment, comments, key, value in extract_header_comment_key_value_tuples_from_file(f): if len(header_comment) > 0: output_file_elements.append(Comment(header_comment)) localize_value = value if key in old_localizable_dict: localize_value = old_localizable_dict[key].value output_file_elements.append(LocalizationEntry(comments, key, localize_value)) f.close() write_file_elements_to_strings_file(old_strings_file, output_file_elements)
Merges the old strings file with the new one. Args: old_strings_file (str): The path to the old strings file (previously produced, and possibly altered) new_strings_file (str): The path to the new strings file (newly produced).
juraj-google-style
def stack_template_url(bucket_name, blueprint, endpoint): key_name = stack_template_key_name(blueprint) return "%s/%s/%s" % (endpoint, bucket_name, key_name)
Produces an S3 URL for a given blueprint.

Args:
    bucket_name (string): The name of the S3 bucket where the resulting
        templates are stored.
    blueprint (:class:`stacker.blueprints.base.Blueprint`): The blueprint
        object to create the URL to.
    endpoint (string): The S3 endpoint used for the bucket.

Returns:
    string: S3 URL.
juraj-google-style
def merge_from(self, lam_dict, op): for key, val in lam_dict.items(): if key in self: self[key] = op(self[key], val, key) else: self[key] = val for cur_id in range(lam_dict.aliases.latest_id): parent_id = lam_dict.aliases.parent[cur_id] cur_name = lam_dict.aliases.id2name[cur_id] parent_name = lam_dict.aliases.id2name[parent_id] if self.aliases.find_by_name(cur_name) != self.aliases.find_by_name(parent_name): self.add_alias(cur_name, parent_name, op)
Merge another `AliasingDict` into the current instance.

Args:
    lam_dict: The dict to merge from.
    op: The function used to merge the values.
github-repos
def _update_album_art_to_full_uri(self, item): if getattr(item, 'album_art_uri', False): item.album_art_uri = self.build_album_art_full_uri( item.album_art_uri)
Update an item's Album Art URI to be an absolute URI. Args: item: The item to update the URI for
juraj-google-style
def compose_tree_path(tree, issn=False): if issn: return join('/', ISSN_DOWNLOAD_KEY, basename(tree.issn)) return join('/', PATH_DOWNLOAD_KEY, quote_plus(tree.path).replace('%2F', '/'))
Compose absolute path for given `tree`.

Args:
    tree (obj): :class:`.Tree` instance.
    issn (bool, default False): Compose URL using ISSN.

Returns:
    str: Absolute path of the tree, without server's address and protocol.
codesearchnet
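A sketch with a hypothetical stand-in for the Tree instance — only the two attributes the function reads are assumed; PATH_DOWNLOAD_KEY and ISSN_DOWNLOAD_KEY are the module's own constants, shown as placeholders in the comments:

from collections import namedtuple

Tree = namedtuple('Tree', ['path', 'issn'])  # hypothetical stand-in
tree = Tree(path='edeposit/nkp/Periodical Title', issn='1234-5678')

print(compose_tree_path(tree))             # /<PATH_DOWNLOAD_KEY>/edeposit/nkp/Periodical+Title
print(compose_tree_path(tree, issn=True))  # /<ISSN_DOWNLOAD_KEY>/1234-5678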
def _neigh_template(parameters, index, left=True, required=False, notfoundmsg=None):
    fn_string = 'has_neigh(%s, left=%s)' % (repr(parameters.fn_params)[1:-1], repr(left))

    output = IND + 'el = dom.find(\n'
    output += IND + IND + '%s,\n' % repr(parameters.tag_name)
    if parameters.params:
        output += IND + IND + '%s,\n' % repr(parameters.params)
    output += IND + IND + 'fn=%s\n' % fn_string
    output += IND + ')\n\n'

    if required:
        return output + _required_idiom(parameters.fn_params[0], index, notfoundmsg)
    return output + _index_idiom('el', index)
Generate neighbour matching call for HTMLElement, which returns only
elements with required neighbours.

Args:
    parameters (list): List of parameters for ``.match()``.
    index (int): Index of the item you want to get from ``.match()`` call.
    left (bool, default True): Look for the neighbour on the left side
        of el.
    required (bool, default False): Use :func:`_required_idiom` for the
        returned data.
    notfoundmsg (str, default None): Message which will be used for
        :func:`_required_idiom` if the item is not found.

Returns:
    str: Python code.
codesearchnet
def unpad_image(tensor, original_size):
    if not isinstance(original_size, (list, tuple)):
        if not isinstance(original_size, (torch.Tensor, np.ndarray)):
            raise TypeError(f'image_size invalid type: {type(original_size)} not valid, should be either list, tuple, np.ndarray or tensor')
        original_size = original_size.tolist()
    original_height, original_width = original_size
    current_height, current_width = tensor.shape[1:]
    original_aspect_ratio = original_width / original_height
    current_aspect_ratio = current_width / current_height
    if original_aspect_ratio > current_aspect_ratio:
        scale_factor = current_width / original_width
        new_height = int(round(original_height * scale_factor, 7))
        # half the slack on each side, since the image was centered when padded
        padding = (current_height - new_height) // 2
        unpadded_tensor = tensor[:, padding:current_height - padding, :]
    else:
        scale_factor = current_height / original_height
        new_width = int(round(original_width * scale_factor, 7))
        padding = (current_width - new_width) // 2
        unpadded_tensor = tensor[:, :, padding:current_width - padding]
    return unpadded_tensor
Unpads a PyTorch tensor of a padded and resized image. Args: tensor (`torch.Tensor`): The image tensor, assumed to be of shape (num_channels, height, width). original_size (`tuple`): The original size of the image (height, width). Returns: `torch.Tensor`: The unpadded image tensor.
github-repos
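A self-contained shape check of the padding removal; the 4:3 image size below is illustrative:

import torch

# A 480x640 (height x width) image letterboxed into a square 336x336 patch:
padded = torch.zeros(3, 336, 336)
unpadded = unpad_image(padded, (480, 640))
print(unpadded.shape)  # torch.Size([3, 252, 336]) -- 42 rows of padding stripped per side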
def stop(self) -> None: self._stop()
Stops the server. Raises: tf.errors.OpError: Or one of its subclasses if an error occurs while stopping the server.
github-repos
def one_hot(indices, num_classes): return array_ops.one_hot(indices, depth=num_classes, axis=-1)
Computes the one-hot representation of an integer tensor.

Args:
    indices: nD integer tensor of shape
        `(batch_size, dim1, dim2, ... dim(n-1))`
    num_classes: Integer, number of classes to consider.

Returns:
    (n + 1)D one-hot representation of the input with shape
    `(batch_size, dim1, dim2, ... dim(n-1), num_classes)`
github-repos
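For example — this is a thin wrapper, so the call mirrors tf.one_hot with axis=-1:

import tensorflow as tf

indices = tf.constant([0, 2, 1])
print(one_hot(indices, num_classes=3))
# tf.Tensor(
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]], shape=(3, 3), dtype=float32)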
def get_file_contents(self, file_key): self._raise_unimplemented_error() uri = '/'.join([self.api_uri, self.files_suffix, file_key, self.file_contents_suffix, ]) return self._req('get', uri)
Gets file contents.

Args:
    file_key: key for the file

Returns:
    (status code, ?)
juraj-google-style
def cmd_list(options):
    (i_info, param_str) = gather_data(options)
    if i_info:
        awsc.get_all_aminames(i_info)
        param_str = 'Instance List - ' + param_str + '\n'
        list_instances(i_info, param_str)
    else:
        print('No instances found with parameters: {}'.format(param_str))
Gather data for instances matching args and call display func. Args: options (object): contains args and data from parser.
codesearchnet
def load_data(path, dense=False): catalog = {'.csv': load_csv, '.sps': load_svmlight_file, '.h5': load_hdf5} ext = os.path.splitext(path)[1] func = catalog[ext] X, y = func(path) if dense and sparse.issparse(X): X = X.todense() return X, y
Load data from a CSV, LibSVM or HDF5 file based on the file extension. Args: path (str): A path to the CSV, LibSVM or HDF5 format file containing data. dense (boolean): An optional variable indicating if the return matrix should be dense. By default, it is false. Returns: Data matrix X and target vector y
juraj-google-style
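A usage sketch — the file names are placeholders and the files must exist with the matching extensions; dispatch is purely by extension:

X, y = load_data('train.csv')                    # dispatched to load_csv
X, y = load_data('train.sps')                    # LibSVM format -> scipy sparse matrix
X_dense, y = load_data('train.sps', dense=True)  # densified before returning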
def kill_plasma_store(self, check_alive=True): self._kill_process_type(ray_constants.PROCESS_TYPE_PLASMA_STORE, check_alive=check_alive)
Kill the plasma store. Args: check_alive (bool): Raise an exception if the process was already dead.
codesearchnet
def ReadFileObject(self, artifacts_reader, file_object): for artifact_definition in artifacts_reader.ReadFileObject(file_object): self.RegisterDefinition(artifact_definition)
Reads artifact definitions into the registry from a file-like object. Args: artifacts_reader (ArtifactsReader): an artifacts reader. file_object (file): file-like object to read from.
codesearchnet
def call_function_with_args(self, node, val, args): assert isinstance(val.data, abstract.INTERPRETER_FUNCTION_TYPES) with val.data.record_calls(): new_node, ret = self._call_function_in_frame(node, val, *attrs.astuple(args, recurse=False)) return (new_node, ret)
Call a function. Args: node: The given node. val: A cfg.Binding containing the function. args: A function.Args object. Returns: A tuple of (1) a node and (2) a cfg.Variable of the return value.
github-repos
def get_feature_variable_integer(self, feature_key, variable_key, user_id, attributes=None): variable_type = entities.Variable.Type.INTEGER return self._get_feature_variable_for_type(feature_key, variable_key, variable_type, user_id, attributes)
Returns value for a certain integer variable attached to a feature flag. Args: feature_key: Key of the feature whose variable's value is being accessed. variable_key: Key of the variable whose value is to be accessed. user_id: ID for user. attributes: Dict representing user attributes. Returns: Integer value of the variable. None if: - Feature key is invalid. - Variable key is invalid. - Mismatch with type of variable.
juraj-google-style
def create_image_lists(image_dir, testing_percentage, validation_percentage): if (not tf.gfile.Exists(image_dir)): tf.logging.error((("Image directory '" + image_dir) + "' not found.")) return None result = collections.OrderedDict() sub_dirs = sorted((x[0] for x in tf.gfile.Walk(image_dir))) is_root_dir = True for sub_dir in sub_dirs: if is_root_dir: is_root_dir = False continue extensions = sorted(set((os.path.normcase(ext) for ext in ['JPEG', 'JPG', 'jpeg', 'jpg', 'png']))) file_list = [] dir_name = os.path.basename((sub_dir[:(- 1)] if sub_dir.endswith('/') else sub_dir)) if (dir_name == image_dir): continue tf.logging.info((("Looking for images in '" + dir_name) + "'")) for extension in extensions: file_glob = os.path.join(image_dir, dir_name, ('*.' + extension)) file_list.extend(tf.gfile.Glob(file_glob)) if (not file_list): tf.logging.warning('No files found') continue if (len(file_list) < 20): tf.logging.warning('WARNING: Folder has less than 20 images, which may cause issues.') elif (len(file_list) > MAX_NUM_IMAGES_PER_CLASS): tf.logging.warning('WARNING: Folder {} has more than {} images. Some images will never be selected.'.format(dir_name, MAX_NUM_IMAGES_PER_CLASS)) label_name = re.sub('[^a-z0-9]+', ' ', dir_name.lower()) training_images = [] testing_images = [] validation_images = [] for file_name in file_list: base_name = os.path.basename(file_name) hash_name = re.sub('_nohash_.*$', '', file_name) hash_name_hashed = hashlib.sha1(tf.compat.as_bytes(hash_name)).hexdigest() percentage_hash = ((int(hash_name_hashed, 16) % (MAX_NUM_IMAGES_PER_CLASS + 1)) * (100.0 / MAX_NUM_IMAGES_PER_CLASS)) if (percentage_hash < validation_percentage): validation_images.append(base_name) elif (percentage_hash < (testing_percentage + validation_percentage)): testing_images.append(base_name) else: training_images.append(base_name) result[label_name] = {'dir': dir_name, 'training': training_images, 'testing': testing_images, 'validation': validation_images} return result
Builds a list of training images from the file system. Analyzes the sub folders in the image directory, splits them into stable training, testing, and validation sets, and returns a data structure describing the lists of images for each label and their paths. Args: image_dir: String path to a folder containing subfolders of images. testing_percentage: Integer percentage of the images to reserve for tests. validation_percentage: Integer percentage of images reserved for validation. Returns: An OrderedDict containing an entry for each label subfolder, with images split into training, testing, and validation sets within each label. The order of items defines the class indices.
codesearchnet
def dump_data(data, filename=None, file_type='json', klazz=YapconfError,
              open_kwargs=None, dump_kwargs=None):
    _check_file_type(file_type, klazz)
    open_kwargs = open_kwargs or {'encoding': 'utf-8'}
    dump_kwargs = dump_kwargs or {}
    if filename:
        with open(filename, 'w', **open_kwargs) as conf_file:
            _dump(data, conf_file, file_type, **dump_kwargs)
    else:
        _dump(data, sys.stdout, file_type, **dump_kwargs)
Dump data given to file or stdout in file_type.

Args:
    data (dict): The dictionary to dump.
    filename (str, optional): Defaults to None. The filename to write
        the data to. If none is provided, it will be written to STDOUT.
    file_type (str, optional): Defaults to 'json'. Can be any of
        yapconf.FILE_TYPES
    klazz (optional): Defaults to YapconfError. The error type to raise
        when something goes wrong.
    open_kwargs (dict, optional): Keyword arguments to open.
    dump_kwargs (dict, optional): Keyword arguments to dump.
codesearchnet
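For example — assuming 'json' and 'yaml' are both in yapconf.FILE_TYPES (true for stock yapconf) and that dump_kwargs are forwarded to the underlying serializer, so json.dump options like indent pass through:

config = {'db': {'host': 'localhost', 'port': 5432}}

dump_data(config)                                            # JSON to stdout
dump_data(config, filename='config.yaml', file_type='yaml')  # YAML to a file
dump_data(config, filename='config.json', file_type='json',
          dump_kwargs={'indent': 2, 'sort_keys': True})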
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): output = [self.cls_token_id] + token_ids_0 + [self.sep_token_id] if token_ids_1 is not None: output += token_ids_1 + [self.sep_token_id] return output
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A LayoutLM sequence has the following format: - single sequence: `[CLS] X [SEP]` - pair of sequences: `[CLS] A [SEP] B [SEP]` Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
github-repos
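A sketch with a pretrained tokenizer; this requires downloading the microsoft/layoutlm-base-uncased vocab, and 101/102 are the usual BERT-style CLS/SEP ids in that vocab:

from transformers import LayoutLMTokenizer

tok = LayoutLMTokenizer.from_pretrained('microsoft/layoutlm-base-uncased')
ids_a = tok.convert_tokens_to_ids(['hello', 'world'])
ids_b = tok.convert_tokens_to_ids(['invoice'])

print(tok.build_inputs_with_special_tokens(ids_a))         # [101, ..., 102]
print(tok.build_inputs_with_special_tokens(ids_a, ids_b))  # [101, ..., 102, ..., 102]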
def forward(self, hidden_state): residual = hidden_state hidden_state = self.norm(hidden_state) if self.self_attn: batch_size, n_vars, num_patches, d_model = hidden_state.shape hidden_state_reshaped = hidden_state.reshape(batch_size * n_vars, num_patches, d_model) x_attn, _, _ = self.self_attn_layer(hidden_state_reshaped, output_attentions=False) x_attn = x_attn.reshape(batch_size, n_vars, num_patches, d_model) hidden_state = hidden_state.transpose(2, 3) hidden_state = self.mlp(hidden_state) if self.gated_attn: hidden_state = self.gating_block(hidden_state) hidden_state = hidden_state.transpose(2, 3) if self.self_attn: hidden_state = self.norm_attn(hidden_state + x_attn) out = hidden_state + residual return out
Args: hidden_state (`torch.Tensor`): Input tensor. Returns: `torch.Tensor`: Transformed tensor.
github-repos
def tokenize_numbers(text_array: List[str]) -> List[str]: tokenized = [] for i in range(len(text_array)): reg, sub = MATCH_NUMBERS replaced = re.sub(reg, sub, text_array[i]).split() tokenized.extend(replaced) return tokenized
Splits large comma-separated numbers and floating point values. This is done by replacing commas with ' @,@ ' and dots with ' @.@ '. Args: text_array: An already tokenized text as list. Returns: A list of strings with tokenized numbers. Example: ```python >>> tokenize_numbers(["$", "5,000", "1.73", "m"]) ['$', '5', '@,@', '000', '1', '@.@', '73', 'm'] ```
github-repos
def fromkeys(cls, iterable, value=None): if not callable(value): return cls(dict.fromkeys(iterable, value)) return cls((key, value(key)) for key in iterable)
Create a new d from an iterable of keys.

Args:
    iterable: Iterable containing keys
    value: value to associate with each key.
        If callable, the stored value will be value(key)

Returns:
    new DictWrapper

Example:

    >>> from ww import d
    >>> sorted(d.fromkeys('123', value=4).items())
    [('1', 4), ('2', 4), ('3', 4)]
    >>> sorted(d.fromkeys(range(3), value=lambda e:e**2).items())
    [(0, 0), (1, 1), (2, 4)]
juraj-google-style
def alignment(self, align):
    if align == 'left':
        align = '0'
    elif align == 'center':
        align = '1'
    elif align == 'right':
        align = '2'
    elif align == 'justified':
        align = '3'
    else:
        raise RuntimeError('Invalid alignment in function alignment')
    self.send(chr(27) + 'a' + align)
Sets the alignment of the printer. Args: align: desired alignment. Options are 'left', 'center', 'right', and 'justified'. Anything else will throw an error. Returns: None Raises: RuntimeError: Invalid alignment.
juraj-google-style
def get_reference_points(spatial_shapes, valid_ratios, device): reference_points_list = [] for lvl, (height, width) in enumerate(spatial_shapes): ref_y, ref_x = torch.meshgrid(torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device), torch.linspace(0.5, width - 0.5, width, dtype=valid_ratios.dtype, device=device)) ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * height) ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * width) ref = torch.stack((ref_x, ref_y), -1) reference_points_list.append(ref) reference_points = torch.cat(reference_points_list, 1) reference_points = reference_points[:, :, None] * valid_ratios[:, None] return reference_points
Get reference points for each feature map. Used in decoder. Args: spatial_shapes (`torch.LongTensor` of shape `(num_feature_levels, 2)`): Spatial shapes of each feature map. valid_ratios (`torch.FloatTensor` of shape `(batch_size, num_feature_levels, 2)`): Valid ratios of each feature map. device (`torch.device`): Device on which to create the tensors. Returns: `torch.FloatTensor` of shape `(batch_size, num_queries, num_feature_levels, 2)`
github-repos
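A shape check with dummy inputs — two pyramid levels, batch of two, fully valid feature maps:

import torch

spatial_shapes = [(32, 32), (16, 16)]  # (height, width) per level
valid_ratios = torch.ones(2, 2, 2)     # (batch_size, num_levels, 2), no padding
refs = get_reference_points(spatial_shapes, valid_ratios, device='cpu')
print(refs.shape)  # torch.Size([2, 1280, 2, 2]) -- 32*32 + 16*16 = 1280 queries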