Columns: code (string, lengths 20–4.93k) · docstring (string, lengths 33–1.27k) · source (string, 3 classes)
def reverse(self, transfer_id, data={}, **kwargs):
    url = "{}/{}/reversals".format(self.base_url, transfer_id)
    return self.post_url(url, data, **kwargs)
Reverse a Transfer with the given id. Args: transfer_id: Id of the transfer to be reversed. Returns: Dict of the Transfer which was reversed.
juraj-google-style
def coverage_score(gold, pred, ignore_in_gold=[], ignore_in_pred=[]):
    gold, pred = _preprocess(gold, pred, ignore_in_gold, ignore_in_pred)
    return np.sum(pred != 0) / len(pred)
Calculate (global) coverage. Args: gold: A 1d array-like of gold labels pred: A 1d array-like of predicted labels (assuming abstain = 0) ignore_in_gold: A list of labels for which elements having that gold label will be ignored. ignore_in_pred: A list of labels for which elements having that pred label will be ignored. Returns: A float, the (global) coverage score
juraj-google-style
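The coverage computation above reduces to one line of NumPy once preprocessing is done. A minimal standalone sketch (the `coverage` helper name is illustrative, not from the source):

```python
import numpy as np

def coverage(pred):
    """Fraction of predictions that are not abstains (label 0)."""
    pred = np.asarray(pred)
    return np.sum(pred != 0) / len(pred)

print(coverage([0, 1, 2, 0]))  # 2 of 4 points receive a label -> 0.5
```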
def _environment_variables(**kwargs):
    hdx_key = os.getenv('HDX_KEY')
    if hdx_key is not None:
        kwargs['hdx_key'] = hdx_key
    hdx_url = os.getenv('HDX_URL')
    if hdx_url is not None:
        kwargs['hdx_url'] = hdx_url
    else:
        hdx_site = os.getenv('HDX_SITE')
        if hdx_site is not None:
            kwargs['hdx_site'] = hdx_site
    return kwargs
Overwrite keyword arguments with environment variables Args: **kwargs: See below hdx_url (str): HDX url to use. Overrides hdx_site. hdx_site (str): HDX site to use eg. prod, test. Defaults to test. hdx_key (str): Your HDX key. Ignored if hdx_read_only = True. Returns: kwargs: Changed keyword arguments
juraj-google-style
def concat(self, array_like):
    arr = list(array_like)
    if len(set([x.microns_per_pixel for x in arr])) != 1:
        raise ValueError("Multiple microns per pixel set")
    cdf = CellDataFrame(pd.concat([pd.DataFrame(x) for x in arr]))
    cdf.microns_per_pixel = arr[0].microns_per_pixel
    return cdf
Concatenate multiple CellDataFrames. Throws an error if the microns_per_pixel is not uniform across the frames. Args: array_like (list): a list of one or more CellDataFrames Returns: CellDataFrame
juraj-google-style
def ParseFileObject(self, parser_mediator, file_object):
    filename = parser_mediator.GetFilename()
    if not filename.startswith('INFO2'):
        return
    file_header_map = self._GetDataTypeMap('recycler_info2_file_header')
    try:
        file_header, _ = self._ReadStructureFromFileObject(
            file_object, 0, file_header_map)
    except (ValueError, errors.ParseError) as exception:
        raise errors.UnableToParseFile(
            'Unable to parse Windows Recycler INFO2 file header with error: '
            '{0!s}'.format(exception))
    if file_header.unknown1 != 5:
        parser_mediator.ProduceExtractionWarning('unsupported format signature.')
        return
    file_entry_size = file_header.file_entry_size
    if file_entry_size not in (280, 800):
        parser_mediator.ProduceExtractionWarning(
            'unsupported file entry size: {0:d}'.format(file_entry_size))
        return
    file_offset = file_object.get_offset()
    file_size = file_object.get_size()
    while file_offset < file_size:
        self._ParseInfo2Record(
            parser_mediator, file_object, file_offset, file_entry_size)
        file_offset += file_entry_size
Parses a Windows Recycler INFO2 file-like object. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. file_object (dfvfs.FileIO): file-like object. Raises: UnableToParseFile: when the file cannot be parsed.
codesearchnet
def detect_changepoints(points, min_time, data_processor=acc_difference):
    data = data_processor(points)
    changepoints = pelt(normal_mean(data, np.std(data)), len(data))
    changepoints.append(len(points) - 1)
    result = []
    for start, end in pairwise(changepoints):
        time_diff = points[end].time_difference(points[start])
        if time_diff > min_time:
            result.append(start)
    result.append(0)
    result.append(len(points) - 1)
    return sorted(list(set(result)))
Detects changepoints on points that have at least a specific duration Args: points (:obj:`Point`) min_time (float): Min time that a sub-segment, bounded by two changepoints, must have data_processor (function): Function to extract data to feed to the changepoint algorithm. Defaults to `acc_difference` Returns: :obj:`list` of int: Indexes of changepoints
codesearchnet
def sample(self, bqm, beta_range=None, num_reads=10, num_sweeps=1000):
    if not isinstance(num_reads, int):
        raise TypeError("'samples' should be a positive integer")
    if num_reads < 1:
        raise ValueError("'samples' should be a positive integer")
    h, J, offset = bqm.to_ising()
    samples = []
    energies = []
    for __ in range(num_reads):
        sample, energy = ising_simulated_annealing(h, J, beta_range, num_sweeps)
        samples.append(sample)
        energies.append(energy)
    response = SampleSet.from_samples(samples, Vartype.SPIN, energies)
    response.change_vartype(bqm.vartype, offset, inplace=True)
    return response
Sample from low-energy spin states using simulated annealing. Args: bqm (:obj:`.BinaryQuadraticModel`): Binary quadratic model to be sampled from. beta_range (tuple, optional): Beginning and end of the beta schedule (beta is the inverse temperature) as a 2-tuple. The schedule is applied linearly in beta. Default is chosen based on the total bias associated with each node. num_reads (int, optional, default=10): Number of reads. Each sample is the result of a single run of the simulated annealing algorithm. num_sweeps (int, optional, default=1000): Number of sweeps or steps. Returns: :obj:`.SampleSet` Note: This is a reference implementation, not optimized for speed and therefore not an appropriate sampler for benchmarking.
codesearchnet
def maybe_append_oov_vectors(embeddings, num_oov_buckets):
    num_embeddings = np.shape(embeddings)[0]
    embedding_dim = np.shape(embeddings)[1]
    embeddings.resize(
        [num_embeddings + num_oov_buckets, embedding_dim], refcheck=False)
Adds zero vectors for oov buckets if num_oov_buckets > 0. Since we are assigning zero vectors, adding more than one oov bucket is only meaningful if we perform fine-tuning. Args: embeddings: Embeddings to extend. num_oov_buckets: Number of OOV buckets in the extended embedding.
codesearchnet
def cmAccuracy(cm):
    cm = cm.type(torch.float64)
    return cm.diag().sum() / (cm.sum() + 1e-15)
Calculates accuracy using :class:`~ignite.metrics.ConfusionMatrix` metric. Args: cm (ConfusionMatrix): instance of confusion matrix metric Returns: MetricsLambda
juraj-google-style
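The accuracy above is just the trace of the confusion matrix over its total count, with a small epsilon to guard against division by zero. A framework-free sketch of the same arithmetic in NumPy (the torch-based original operates on an ignite ConfusionMatrix instead):

```python
import numpy as np

def cm_accuracy(cm, eps=1e-15):
    """Accuracy from a confusion matrix: correct (diagonal) over total."""
    cm = np.asarray(cm, dtype=np.float64)
    return cm.trace() / (cm.sum() + eps)

# 2 + 3 correct out of 6 total predictions
print(cm_accuracy([[2, 0], [1, 3]]))
```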
def trace_sync(self, data, timeout=5.0):
    done = AwaitableResponse()
    self.trace(data, callback=done.set_result)
    return done.wait(timeout)
Send tracing data and wait for it to finish. This awaitable coroutine wraps VirtualIOTileDevice.trace() and turns the callback into an awaitable object. The appropriate usage of this method is by calling it inside the event loop as: await device.trace_sync(data) Args: data (bytes): The raw data that should be traced. timeout (float): The maximum number of seconds to wait before timing out. Returns: awaitable: An awaitable object with the result. The result will be True if the data was sent successfully or False if the data could not be sent in its entirety. When False is returned, there is no guarantee about how much of the data was sent, if any, just that it was not known to be successfully sent.
codesearchnet
def pandas_dataframe(self, start, stop, ncol, **kwargs):
    try:
        int(start)
        int(stop)
    except TypeError:
        print('start and stop must be ints')
    try:
        ncol = int(ncol)
        return pd.read_csv(six.StringIO('\n'.join(self[start:stop])),
                           delim_whitespace=True, names=range(ncol), **kwargs)
    except TypeError:
        try:
            ncol = list(ncol)
            return pd.read_csv(six.StringIO('\n'.join(self[start:stop])),
                               delim_whitespace=True, names=ncol, **kwargs)
        except TypeError:
            print('Cannot pandas_dataframe if ncol is {}, '
                  'must be int or list'.format(type(ncol)))
Returns the result of tab-separated pandas.read_csv on a subset of the file. Args: start (int): line number where structured data starts stop (int): line number where structured data stops ncol (int or list): the number of columns in the structured data or a list of that length with column names Returns: pd.DataFrame: structured data
juraj-google-style
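The core of the function above is feeding a slice of text lines to `pd.read_csv` through an in-memory buffer. A minimal sketch of that pattern (the sample lines are made up; `sep=r'\s+'` is the modern equivalent of the deprecated `delim_whitespace=True`):

```python
import io

import pandas as pd

# three lines of whitespace-separated values, standing in for the
# self[start:stop] slice of the file
lines = ["1.0 2.0 3.0", "4.0 5.0 6.0", "7.0 8.0 9.0"]
df = pd.read_csv(io.StringIO('\n'.join(lines)), sep=r'\s+', names=range(3))
print(df.shape)  # (3, 3)
```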
def write(self, fb):
    print('[{}.{}]'.format(fb.module, fb.func.__name__), file=self.file)
    print('class = {}'.format(fb.func_ins.name), file=self.file)
    print('inspecs = {}'.format(repr(fb.inspecs)), file=self.file)
    print('func_args = {}'.format(repr(fb.func_args)), file=self.file)
    print('func_kwargs = {}'.format(repr(fb.func_kwargs)), file=self.file)
    print('ext = ({}, {})'.format(
        repr(fb.ext), repr(fb.ext_kwargs)), file=self.file)
    if self.setup_stat is not None:
        self._write_a_stat('setup', self.setup_stat)
    if self.forward_stat is not None:
        self._write_a_stat('forward', self.forward_stat)
    if self.backward_stat is not None:
        self._write_a_stat('backward', self.backward_stat)
Write a single function benchmark. Args: fb (FunctionBenchmark): FunctionBenchmark class instance. Before passing to this, you should call ``fb.benchmark()``.
juraj-google-style
def ReportStatus(self, request, global_params=None):
    config = self.GetMethodConfig('ReportStatus')
    return self._RunMethod(config, request, global_params=global_params)
Reports the status of dataflow WorkItems leased by a worker. Args: request: (DataflowProjectsLocationsJobsWorkItemsReportStatusRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (ReportWorkItemStatusResponse) The response message.
github-repos
def render_parse_load(raw_config, environment=None, validate=True):
    pre_rendered = render(raw_config, environment)
    rendered = process_remote_sources(pre_rendered, environment)
    config = parse(rendered)
    if config.namespace is None:
        namespace = environment.get("namespace")
        if namespace:
            logger.warn("DEPRECATION WARNING: specifying namespace in the "
                        "environment is deprecated. See "
                        "https: "
                        "for more info.")
            config.namespace = namespace
    if validate:
        config.validate()
    return load(config)
Encapsulates the render -> parse -> validate -> load process. Args: raw_config (str): the raw stacker configuration string. environment (dict, optional): any environment values that should be passed to the config validate (bool): if provided, the config is validated before being loaded. Returns: :class:`Config`: the parsed stacker config.
juraj-google-style
def interpret_obj(
    self,
    obj,
    v_level_indexes,
    h_level_indexes,
    v_level_visibility,
    h_level_visibility,
    v_level_sort_keys,
    h_level_sort_keys,
    v_level_titles,
    h_level_titles,
):
    if not isinstance(obj, NonStringIterable):
        raise self.error("Cannot make a table from object {!r}".format(obj))
    rectangular_rows = tabulate(
        obj,
        v_level_indexes=v_level_indexes,
        h_level_indexes=h_level_indexes,
        v_level_visibility=v_level_visibility,
        h_level_visibility=h_level_visibility,
        v_level_sort_keys=v_level_sort_keys,
        h_level_sort_keys=h_level_sort_keys,
        v_level_titles=v_level_titles,
        h_level_titles=h_level_titles,
    )
    assert is_rectangular(rectangular_rows)
    num_rows, num_cols = size(rectangular_rows)
    return rectangular_rows, num_cols
Interpret the given Python object as a table. Args: obj: A sequence (later a mapping, too) Returns: A list of lists represents rows of cells. Raises: TypeError: If the type couldn't be interpreted as a table.
juraj-google-style
def _ParseEntryObject(self, file_object, file_offset):
    entry_object_map = self._GetDataTypeMap('systemd_journal_entry_object')
    try:
        entry_object, _ = self._ReadStructureFromFileObject(
            file_object, file_offset, entry_object_map)
    except (ValueError, errors.ParseError) as exception:
        raise errors.ParseError(
            'Unable to parse entry object at offset: 0x{0:08x} with error: '
            '{1!s}'.format(file_offset, exception))
    if entry_object.object_type != self._OBJECT_TYPE_ENTRY:
        raise errors.ParseError(
            'Unsupported object type: {0:d}.'.format(entry_object.object_type))
    if entry_object.object_flags != 0:
        raise errors.ParseError(
            'Unsupported object flags: 0x{0:02x}.'.format(
                entry_object.object_flags))
    return entry_object
Parses an entry object. Args: file_object (dfvfs.FileIO): a file-like object. file_offset (int): offset of the entry object relative to the start of the file-like object. Returns: systemd_journal_entry_object: entry object. Raises: ParseError: if the entry object cannot be parsed.
codesearchnet
def __init__(self, input_energy: energy.BernoulliEnergy,
             num_expectation_samples: int,
             initial_seed: Union[None, tf.Tensor] = None,
             name: Union[None, str] = None):
    super().__init__(input_energy, num_expectation_samples, initial_seed, name)
    self._logits_variable = tf.Variable(input_energy.logits, trainable=False)
    self._distribution = tfd.Bernoulli(
        logits=self._logits_variable, dtype=tf.int8)
Initializes a BernoulliEnergyInference. Args: input_energy: The parameterized energy function which defines this distribution via the equations of an energy based model. This class assumes that all parameters of `energy` are `tf.Variable`s and that they are all returned by `energy.variables`. num_expectation_samples: Number of samples to draw and use for estimating the expectation value. initial_seed: PRNG seed; see tfp.random.sanitize_seed for details. This seed will be used in the `sample` method. If None, the seed is updated after every inference call. Otherwise, the seed is fixed. name: Optional name for the model.
github-repos
def __init__(self, msg, url, headers, status_code, writer=None):
    self.msg = msg
    self.url = url
    self.http_headers = headers
    self.status_code = status_code
    self._init_writer(writer)
Set the main attributes and instantiate the writer if given. Args: msg(pandasdmx.model.Message): the SDMX message url(str): the URL, if any, that had been sent to the SDMX server headers(dict): http headers status_code(int): the status code returned by the server writer(str): the module path for the writer class
juraj-google-style
def ffn_self_attention_layer(x, filter_depth, output_depth, num_parts,
                             dropout_rate, share_kv=False, name=None):
    with tf.variable_scope(name, default_name='feedforward_self_attention',
                           values=[x]):
        x_shape = common_layers.shape_list(x)
        # the filter depth is broken into num_parts contiguous parts
        part_depth = filter_depth // num_parts
        if not share_kv:
            combined = common_layers.dense(
                x, filter_depth * 3, use_bias=False, name='qkv_transform')
            combined = tf.expand_dims(combined, axis=2)
            q, k, v = tf.split(combined, 3, axis=3)
        else:
            q = tf.expand_dims(
                common_layers.dense(x, filter_depth, use_bias=False,
                                    name='q_transform'), axis=2)
            kv_combined = tf.expand_dims(
                common_layers.dense(tf.concat([x, x], axis=1), filter_depth,
                                    use_bias=False, name='kv_transform'),
                axis=2)
            k, v = tf.split(kv_combined, [x_shape[1], x_shape[1]], axis=1)
        batch_q = tf.reshape(q, [-1, 1, num_parts, part_depth])
        batch_k = tf.reshape(k, [-1, 1, num_parts, part_depth])
        batch_v = tf.reshape(v, [-1, 1, num_parts, part_depth])
        batch_q *= part_depth ** -0.5
        bias = None
        x = dot_product_attention(batch_q, batch_k, batch_v, bias, dropout_rate)
        x = tf.reshape(x, [x_shape[0], x_shape[1], filter_depth])
        x = common_layers.dense(x, output_depth, use_bias=False,
                                name='output_transform')
        return x
Self-attention feedforward layer. We use self-attention to do feedforward computations. We apply this function positionwise where for each position, we linearly transform the output to have depth filter_depth, and break up the result depth-wise into num_parts contiguous parts. The parts self-attend, we concatenate the results depth-wise, and we linearly transform to a depth of output_depth. The goal is to get multiplicative interactions between components of a representation. Args: x: a Tensor with shape [batch, length, channels] filter_depth: an integer output_depth: an integer num_parts: an integer dividing filter depth dropout_rate: a floating point number share_kv: Share the key value transform name: an optional string Returns: A Tensor with shape [batch, length, output_depth].
codesearchnet
def calibrate(self, dataset_gen):
    self._feed_tensors(dataset_gen, resize_input=True)
    return self._calibrator.Calibrate()
Calibrates the model with specified generator. Args: dataset_gen: A generator that generates calibration samples. Returns: A model with min and max calibration stats.
github-repos
def quoted_tweet(self):
    quote_tweet = tweet_embeds.get_quoted_tweet(self)
    if quote_tweet is not None:
        try:
            return Tweet(quote_tweet)
        except NotATweetError as nate:
            raise NotATweetError(
                "The quote-tweet payload appears malformed."
                " Failed with '{}'".format(nate))
    else:
        return None
The quoted Tweet as a Tweet object. If the Tweet is not a quote Tweet, return None. If the quoted Tweet payload cannot be loaded as a Tweet, this will raise a "NotATweetError" Returns: Tweet: A Tweet representing the quoted status (or None) (see tweet_embeds.get_quoted_tweet, this is that value as a Tweet) Raises: NotATweetError: if quoted tweet is malformed
codesearchnet
def get_privkey(self, address: AddressHex, password: str) -> PrivateKey:
    address = add_0x_prefix(address).lower()
    if not self.address_in_keystore(address):
        raise ValueError('Keystore file not found for %s' % address)
    with open(self.accounts[address]) as data_file:
        data = json.load(data_file)
    acc = Account(data, password, self.accounts[address])
    return acc.privkey
Find the keystore file for an account, unlock it and get the private key Args: address: The Ethereum address for which to find the keyfile in the system password: Mostly for testing purposes. A password can be provided as the function argument here. If it's not then the user is interactively queried for one. Returns: The private key associated with the address
juraj-google-style
def atoms_string_from_file(filename):
    with zopen(filename, "rt") as fobject:
        f = fobject.readlines()
    coords = 0
    atoms_str = []
    for line in f:
        if coords == 0:
            find_atoms = line.find("ATOMS")
            if find_atoms >= 0:
                coords = 1
        if coords == 1 and "END" not in line:
            atoms_str.append(line.replace("\r", ""))
    return ''.join(atoms_str)
Reads atomic shells from file such as feff.inp or ATOMS file The lines are arranged as follows: x y z ipot Atom Symbol Distance Number with distance being the shell radius and ipot an integer identifying the potential used. Args: filename: File name containing atomic coord data. Returns: Atoms string.
juraj-google-style
def download_archive(self, name, file_path):
    uri = self.URI + "/archive/" + name
    return self._client.download(uri, file_path)
Download archived logs of the OS Volume. Args: name: Name of the OS Volume. file_path (str): Destination file path. Returns: bool: Indicates if the resource was successfully downloaded.
juraj-google-style
def _log_band_edge_information(bs, edge_data):
    if bs.is_spin_polarized:
        spins = edge_data['band_index'].keys()
        b_indices = [', '.join([str(i + 1)
                                for i in edge_data['band_index'][spin]])
                     + '({})'.format(spin.name.capitalize()) for spin in spins]
        b_indices = ', '.join(b_indices)
    else:
        b_indices = ', '.join(
            [str(i + 1) for i in edge_data['band_index'][Spin.up]])
    kpoint = edge_data['kpoint']
    kpoint_str = kpt_str.format(k=kpoint.frac_coords)
    k_indices = ', '.join(map(str, edge_data['kpoint_index']))
    if kpoint.label:
        k_loc = kpoint.label
    else:
        branch = bs.get_branch(edge_data['kpoint_index'][0])[0]
        k_loc = 'between {}'.format(branch['name'])
    logging.info('  Energy: {:.3f} eV'.format(edge_data['energy']))
    logging.info('  k-point: {}'.format(kpoint_str))
    logging.info('  k-point location: {}'.format(k_loc))
    logging.info('  k-point indices: {}'.format(k_indices))
    logging.info('  Band indices: {}'.format(b_indices))
Log data about the valence band maximum or conduction band minimum. Args: bs (:obj:`~pymatgen.electronic_structure.bandstructure.BandStructureSymmLine`): The band structure. edge_data (dict): The :obj:`dict` from ``bs.get_vbm()`` or ``bs.get_cbm()``
juraj-google-style
def get_panel_info(panel_lines=None, panel_id=None, institute=None,
                   version=None, date=None, display_name=None):
    panel_info = {
        'panel_id': panel_id,
        'institute': institute,
        'version': version,
        'date': date,
        'display_name': display_name,
    }
    if panel_lines:
        for line in panel_lines:
            line = line.rstrip()
            # metadata header lines are expected to start with a two-character
            # comment prefix ('##'), matching the line[2:] offset below
            if not line.startswith('##'):
                break
            info = line[2:].split('=')
            field = info[0]
            value = info[1]
            if not panel_info.get(field):
                panel_info[field] = value
    panel_info['date'] = get_date(panel_info['date'])
    return panel_info
Parse metadata for a gene panel For historical reasons it is possible to include all information about a gene panel in the header of a panel file. This function parses the header. Args: panel_lines(iterable(str)) Returns: panel_info(dict): Dictionary with panel information
juraj-google-style
def download(self, chunk_size=1024):
    stream = BytesIO()
    response = self._swimlane.request(
        'get',
        'attachment/download/{}'.format(self.file_id),
        stream=True
    )
    for chunk in response.iter_content(chunk_size):
        stream.write(chunk)
    stream.seek(0)
    return stream
Download attachment Args: chunk_size (int): Byte-size of chunked download request stream Returns: BytesIO: Stream ready for reading containing the attachment file contents
juraj-google-style
def nr_genes(self, build=None):
    if build:
        LOG.info('Fetching all genes from build %s', build)
    else:
        LOG.info('Fetching all genes')
    return self.hgnc_collection.find({'build': build}).count()
Return the number of hgnc genes in the collection. If build is used, return the number of genes of a certain build. Args: build(str, optional): genome build to count genes for Returns: int: the number of matching genes
codesearchnet
def find_signature(self, signature_id=None, signer_email_address=None):
    if self.signatures:
        for signature in self.signatures:
            if (signature.signature_id == signature_id
                    or signature.signer_email_address == signer_email_address):
                return signature
Return a signature for the given parameters Args: signature_id (str): Id of the signature to retrieve. signer_email_address (str): Email address of the associated signer for the signature to retrieve. Returns: A Signature object or None
juraj-google-style
def verify_repo_matches_url(repo, url):
    repo_parts = urlparse(repo)
    url_parts = urlparse(url)
    errors = []
    repo_path_parts = repo_parts.path.split('/')
    url_path_parts = url_parts.path.split('/')
    if repo_parts.hostname != url_parts.hostname:
        errors.append(
            "verify_repo_matches_url: Hostnames don't match! {} {}".format(
                repo_parts.hostname, url_parts.hostname))
    if not url_parts.path.startswith(repo_parts.path) or \
            url_path_parts[:len(repo_path_parts)] != repo_path_parts:
        errors.append(
            "verify_repo_matches_url: Paths don't match! {} {}".format(
                repo_parts.path, url_parts.path))
    if errors:
        log.warning("\n".join(errors))
        return False
    return True
Verify ``url`` is a part of ``repo``. We were using ``startswith()`` for a while, which isn't a good comparison. This function allows us to ``urlparse`` and compare host and path. Args: repo (str): the repo url url (str): the url to verify is part of the repo Returns: bool: ``True`` if the repo matches the url.
juraj-google-style
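The key idea in the function above is comparing hostname and path *components* rather than using a raw `startswith()` on the URL string, which would let `https://host/repository` pass as a match for `https://host/repo`. A self-contained sketch of that comparison (function name and URLs here are illustrative):

```python
from urllib.parse import urlparse

def repo_matches_url(repo, url):
    """True if url lives under repo: same host, repo path is a path prefix."""
    repo_parts, url_parts = urlparse(repo), urlparse(url)
    repo_path = repo_parts.path.rstrip('/').split('/')
    url_path = url_parts.path.split('/')
    return (repo_parts.hostname == url_parts.hostname
            and url_path[:len(repo_path)] == repo_path)

# component-wise comparison rejects the startswith() false positive:
print(repo_matches_url('https://h.example.com/repo',
                       'https://h.example.com/repository/f.txt'))  # False
```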
def _check_currency_format(self, format=None):
    defaults = self.settings['currency']['format']
    if hasattr(format, '__call__'):
        format = format()
    if is_str(format) and re.match('%v', format):
        return {
            'pos': format,
            'neg': format.replace("-", "").replace("%v", "-%v"),
            'zero': format
        }
    elif not format or not format['pos'] or not re.match('%v', format['pos']):
        self.settings['currency']['format'] = {
            'pos': defaults,
            'neg': defaults.replace("%v", "-%v"),
            'zero': defaults
        }
        return self.settings
    return format
Check the currency format setting, falling back to the default when the given format is missing or invalid. Args: format (str or callable, optional): a currency format string containing `%v`, or a callable returning one Returns: dict: format strings for positive, negative and zero values (or the updated settings when falling back to the defaults)
juraj-google-style
def _get_source_chunks(self, input_text, language=None):
    chunks = ChunkList()
    seek = 0
    result = self._get_annotations(input_text, language=language)
    tokens = result['tokens']
    language = result['language']
    for i, token in enumerate(tokens):
        word = token['text']['content']
        begin_offset = token['text']['beginOffset']
        label = token['dependencyEdge']['label']
        pos = token['partOfSpeech']['tag']
        if begin_offset > seek:
            chunks.append(Chunk.space())
            seek = begin_offset
        chunk = Chunk(word, pos, label)
        if chunk.label in _DEPENDENT_LABEL:
            chunk.dependency = i < token['dependencyEdge']['headTokenIndex']
        if chunk.is_punct():
            chunk.dependency = chunk.is_open_punct()
        chunks.append(chunk)
        seek += len(word)
    return chunks, language
Returns a chunk list retrieved from Syntax Analysis results. Args: input_text (str): Text to annotate. language (:obj:`str`, optional): Language of the text. Returns: A chunk list. (:obj:`budou.chunk.ChunkList`)
juraj-google-style
def validate(self):
    errors = []
    for cls in self.OPTIONS:
        if 'validate' in cls.__dict__ and callable(cls.__dict__['validate']):
            errors.extend(self.options.view_as(cls).validate(self))
    return errors
Calls validate on subclasses and returns a list of errors. validate will call the validate method on subclasses, accumulate the returned lists of errors, and return the aggregate list. Returns: Aggregate list of errors after calling all possible validate methods.
github-repos
def set_memory_growth(device, enable):
    context.context().set_memory_growth(device, enable)
Set if memory growth should be enabled for a `PhysicalDevice`. If memory growth is enabled for a `PhysicalDevice`, the runtime initialization will not allocate all memory on the device. Memory growth cannot be configured on a `PhysicalDevice` with virtual devices configured. For example: >>> physical_devices = tf.config.list_physical_devices('GPU') >>> try: ... tf.config.experimental.set_memory_growth(physical_devices[0], True) ... except: ... # Invalid device or cannot modify virtual devices once initialized. ... pass Args: device: `PhysicalDevice` to configure enable: (Boolean) Whether to enable or disable memory growth Raises: ValueError: Invalid `PhysicalDevice` specified. RuntimeError: Runtime is already initialized.
github-repos
def ProcessAst(serializable_ast, module_map):
    serializable_ast = _LookupClassReferences(
        serializable_ast, module_map, serializable_ast.ast.name)
    serializable_ast = serializable_ast.Replace(class_type_nodes=None)
    serializable_ast = FillLocalReferences(
        serializable_ast,
        {'': serializable_ast.ast,
         serializable_ast.ast.name: serializable_ast.ast})
    return serializable_ast.ast
Postprocess a pickled ast. Postprocessing will either just fill the ClassType references from module_map or if module_name changed between pickling and loading rename the module internal references to the new module_name. Renaming is more expensive than filling references, as the whole AST needs to be rebuild. Args: serializable_ast: A SerializableAst instance. module_map: Used to resolve ClassType.cls links to already loaded modules. The loaded module will be added to the dict. Returns: A pytd.TypeDeclUnit, this is either the input raw_ast with the references set or a newly created AST with the new module_name and the references set. Raises: AssertionError: If module_name is already in module_map, which means that module_name is already loaded. UnrestorableDependencyError: If no concrete module exists in module_map for one of the references from the pickled ast.
github-repos
def merge_input_csv_forecast_json(input_csv_file, forecast_json_path,
                                  condition_models, dist_models):
    try:
        run_date = input_csv_file[:-4].split('_')[-1]
        print(run_date)
        ens_member = '_'.join(
            input_csv_file.split('/')[-1][:-4].split('_')[3:-1])
        ens_name = input_csv_file.split('/')[-1].split('_')[2]
        input_data = pd.read_csv(input_csv_file, index_col='Step_ID')
        full_json_path = forecast_json_path + '{0}/{1}/'.format(
            run_date, ens_member)
        track_ids = sorted(input_data['Track_ID'].unique())
        model_pred_cols = []
        condition_models_ns = []
        dist_models_ns = []
        gamma_params = ['Shape', 'Location', 'Scale']
        for condition_model in condition_models:
            model_pred_cols.append(
                condition_model.replace(' ', '-') + '_Condition')
            condition_models_ns.append(condition_model.replace(' ', '-'))
        for dist_model in dist_models:
            dist_models_ns.append(dist_model.replace(' ', '-'))
            for param in gamma_params:
                model_pred_cols.append(
                    dist_model.replace(' ', '-') + '_' + param)
        pred_data = pd.DataFrame(index=input_data.index,
                                 columns=model_pred_cols, dtype=float)
        for track_id in track_ids:
            track_id_num = track_id.split('_')[-1]
            json_filename = (
                full_json_path + '{0}_{1}_{2}_model_track_{3}.json'.format(
                    ens_name, run_date, ens_member, track_id_num))
            json_file = open(json_filename)
            json_data = json.load(json_file)
            json_file.close()
            for s, step in enumerate(json_data['features']):
                step_id = track_id + '_{0:02d}'.format(s)
                for cond_model in condition_models_ns:
                    pred_data.loc[step_id, cond_model + '_Condition'] = \
                        step['properties']['condition_' + cond_model]
                for dist_model in dist_models_ns:
                    pred_data.loc[
                        step_id,
                        [dist_model + '_' + p for p in gamma_params]] = \
                        step['properties']['dist_' + dist_model]
        out_data = input_data.merge(pred_data, left_index=True,
                                    right_index=True)
        return out_data, ens_name, ens_member
    except Exception as e:
        print(traceback.format_exc())
        raise e
Reads forecasts from json files and merges them with the input data from the step csv files. Args: input_csv_file: Name of the input data csv file being processed forecast_json_path: Path to the forecast json files toplevel directory condition_models: List of models used to forecast hail or no hail dist_models: List of models used to forecast the hail size distribution Returns: Tuple of (merged DataFrame, ensemble name, ensemble member)
codesearchnet
def add_slab(self, height, n_background=1., position='top'):
    assert position in ('top', 'bottom')
    name = str(self.slab_count)
    if not callable(n_background):
        n_back = lambda wl: n_background
    else:
        n_back = n_background
    # snap the height to the y discretisation grid (the exact rounding
    # expression was truncated in the source; this is an assumption)
    height_discretised = self.y_step * round(height / self.y_step)
    y_min = self._next_start
    y_max = y_min + height_discretised
    self.slabs[name] = Slab(name, self.x_step, self.y_step, self.x_max,
                            y_max, self.x_min, y_min, n_back, self._wl)
    self.y_max = y_max
    self._next_start = y_min + height_discretised
    self.slab_count += 1
    if position == 'bottom':
        slabs = {}
        for k in self.slabs.keys():
            slabs[str(int(k) + 1)] = self.slabs[k]
        slabs['0'] = slabs.pop(str(self.slab_count))
        self.slabs = slabs
    return name
Creates and adds a :class:`Slab` object. Args: height (float): Height of the slab. n_background (float): The nominal refractive index of the slab. Default is 1 (air). position (str): 'top' or 'bottom'; end of the stack at which to add the slab. Default is 'top'. Returns: str: The name of the slab.
juraj-google-style
def __call__(self, shardable_tensors: Sequence[sharding_util.ShardableTensor]
             ) -> Sequence[sharding_util.Shard]:
    tensors_by_task = {}
    for shardable_tensor in shardable_tensors:
        tensor = shardable_tensor.tensor
        checkpoint_key = shardable_tensor.checkpoint_key
        slice_spec = shardable_tensor.slice_spec
        tensors_by_task.setdefault(checkpoint_key, {})[slice_spec] = tensor
    return [tensors_by_task]
Callback to split tensors into shards based on their device spec task. Args: shardable_tensors: A list of ShardableTensors. Returns: List of shard dicts containing tensors. [ {checkpoint key: {slice_spec: tensor} } ]
github-repos
def __init__(self, logger=logging, instance_config_metadata=None):
    self.logger = logger
    self.instance_config_metadata = instance_config_metadata
    self.instance_config_header %= (
        self.instance_config_script, self.instance_config_template)
    super(InstanceConfig, self).__init__(
        config_file=self.instance_config_template,
        config_header=self.instance_config_header)
    config_files = [self.instance_config, self.instance_config_distro]
    config_defaults = []
    if self.instance_config_metadata:
        config = parser.Parser()
        try:
            config.read_file(stringio.StringIO(self.instance_config_metadata))
        except parser.Error as e:
            self.logger.error('Error parsing metadata configs: %s', str(e))
        else:
            config_defaults.append(
                dict((s, dict(config.items(s))) for s in config.sections()))
    for config_file in config_files:
        if os.path.exists(config_file):
            config = parser.Parser()
            try:
                config.read(config_file)
            except parser.Error as e:
                self.logger.error('Error parsing config file: %s', str(e))
            else:
                config_defaults.append(
                    dict((s, dict(config.items(s)))
                         for s in config.sections()))
    config_defaults.append(self.instance_config_options)
    for defaults in config_defaults:
        for section, options in sorted(defaults.items()):
            for option, value in sorted(options.items()):
                super(InstanceConfig, self).SetOption(
                    section, option, value, overwrite=False)
Constructor. Inherit from the ConfigManager class. Read the template for instance defaults and write new sections and options. This prevents package updates from overriding user set defaults. Args: logger: logger object, used to write to SysLog and serial port. instance_config_metadata: string, a config file specified in metadata.
juraj-google-style
def _StubMethod(self, stub, method_descriptor, rpc_controller, request, callback): return stub.rpc_channel.CallMethod( method_descriptor, rpc_controller, request, method_descriptor.output_type._concrete_class, callback)
The body of all service methods in the generated stub class. Args: stub: Stub instance. method_descriptor: Descriptor of the invoked method. rpc_controller: Rpc controller to execute the method. request: Request protocol message. callback: A callback to execute when the method finishes. Returns: Response message (in case of blocking call).
juraj-google-style
def calculate(self, token_list_x, token_list_y): match_list = [tanimoto_value for tanimoto_value in token_list_x if tanimoto_value in token_list_y] return float(len(match_list) / (len(token_list_x) + len(token_list_y) - len(match_list)))
Calculate similarity with the Tanimoto coefficient. Concrete method. Args: token_list_x: [token, token, token, ...] token_list_y: [token, token, token, ...] Returns: Similarity.
juraj-google-style
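A self-contained sketch of the same similarity measure over unique tokens (the method above counts matches against the raw lists; this set-based variant coincides with it whenever the lists contain no duplicate tokens):

```python
def tanimoto(tokens_x, tokens_y):
    # Tanimoto coefficient: |X & Y| / (|X| + |Y| - |X & Y|) over unique tokens
    sx, sy = set(tokens_x), set(tokens_y)
    inter = len(sx & sy)
    return inter / (len(sx) + len(sy) - inter)
```

For example, `tanimoto(["a", "b"], ["b", "c"])` shares one token out of three distinct ones, giving 1/3.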
def get(self, value): config = self.get_block(('vrf definition %s' % value)) if (not config): return None response = dict(vrf_name=value) response.update(self._parse_rd(config)) response.update(self._parse_description(config)) config = self.get_block(('no ip routing vrf %s' % value)) if config: response['ipv4_routing'] = False else: response['ipv4_routing'] = True config = self.get_block(('no ipv6 unicast-routing vrf %s' % value)) if config: response['ipv6_routing'] = False else: response['ipv6_routing'] = True return response
Returns the VRF configuration as a resource dict. Args: value (string): The vrf name to retrieve from the running configuration. Returns: A Python dict object containing the VRF attributes as key/value pairs.
codesearchnet
def __xor__(self, other: 'TensorFluent') -> 'TensorFluent': return self._binary_op(self, other, tf.logical_xor, tf.bool)
Returns a TensorFluent for the xor logical operator. Args: self: The first operand. other: The second operand. Returns: A TensorFluent wrapping the operator's output.
juraj-google-style
def op_and(self, *elements): expression = self.add_operator(Operator(';')) for element in elements: expression.add_element(element) return expression
Update the ``Expression`` by joining the specified additional ``elements`` using an "AND" ``Operator`` Args: *elements (BaseExpression): The ``Expression`` and/or ``Constraint`` elements which the "AND" ``Operator`` applies to. Returns: Expression: ``self`` or related ``Expression``.
codesearchnet
def send_file(self, file_name, remote_destination=None, **kwargs): if not remote_destination: remote_destination = file_name return SubprocessTask( self._rsync_cmd() + ['-ut', file_name, '%s:%s' % (self.hostname, remote_destination)], **kwargs)
Send a file to a remote host with rsync. Args: file_name (str): The relative location of the file on the local host. remote_destination (str): The destination for the file on the remote host. If `None`, will be assumed to be the same as **file_name**. Default `None`. **kwargs: Passed to ``SubprocessTask``'s init method. Return: ``pyrem.task.SubprocessTask``: The resulting task.
juraj-google-style
def VerifyStructure(self, parser_mediator, lines): try: structure = self._SDF_HEADER.parseString(lines) except pyparsing.ParseException: logger.debug('Not a SkyDrive log file') return False try: dfdatetime_time_elements.TimeElementsInMilliseconds(time_elements_tuple=structure.header_date_time) except ValueError: logger.debug('Not a SkyDrive log file, invalid date and time: {0!s}'.format(structure.header_date_time)) return False return True
Verify that this file is a SkyDrive log file. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. lines (str): one or more lines from the text file. Returns: bool: True if this is the correct parser, False otherwise.
codesearchnet
def key_exists(self, namespace, key): return namespace in self.__data and key in self.__data[namespace]
Checks a namespace for the existence of a specific key Args: namespace (str): Namespace to check in key (str): Name of the key to check for Returns: `True` if key exists in the namespace, else `False`
juraj-google-style
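A minimal stand-alone sketch of the nested-dict layout this check assumes (`__data` maps namespace -> {key: value}); the class and method names here are illustrative, not the original API:

```python
class NamespacedStore:
    """Toy namespaced key-value store illustrating the existence check."""

    def __init__(self):
        self._data = {}

    def set(self, namespace, key, value):
        # create the namespace dict on first use
        self._data.setdefault(namespace, {})[key] = value

    def key_exists(self, namespace, key):
        # True only if both the namespace and the key inside it exist
        return namespace in self._data and key in self._data[namespace]
```

The short-circuiting `and` is what makes the check safe when the namespace itself is missing.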
def add(self, path, compress=None): if os.path.isdir(path): self.add_dir(path, compress) else: self.add_file(path, compress)
Add `path` to the MAR file. If `path` is a file, it will be added directly. If `path` is a directory, it will be traversed recursively and all files inside will be added. Args: path (str): path to file or directory on disk to add to this MAR file compress (str): One of 'xz', 'bz2', or None. Defaults to None.
codesearchnet
def ensure(self, func, *args, **kwargs): data = self.tryload() if (data is None): data = func(*args, **kwargs) self.save(data) return data
r""" Wraps around a function. A cfgstr must be stored in the base cacher. Args: func (callable): function that will compute data on cache miss *args: passed to func **kwargs: passed to func Example: >>> from ubelt.util_cache import * # NOQA >>> def func(): >>> return 'expensive result' >>> fname = 'test_cacher_ensure' >>> cfgstr = 'func params' >>> cacher = Cacher(fname, cfgstr) >>> cacher.clear() >>> data1 = cacher.ensure(func) >>> data2 = cacher.ensure(func) >>> assert data1 == 'expensive result' >>> assert data1 == data2 >>> cacher.clear()
codesearchnet
def delete_tag(self, key, update_session=True): existing_tags = {x.key: x for x in self.tags} if (key in existing_tags): if update_session: db.session.delete(existing_tags[key]) self.tags.remove(existing_tags[key]) return True return False
Removes a tag from a resource based on the tag key. Returns `True` if the tag was removed or `False` if the tag didn't exist Args: key (str): Key of the tag to delete update_session (bool): Automatically add the change to the SQLAlchemy session. Default: True Returns: bool: `True` if the tag was removed, `False` if it did not exist
codesearchnet
def __init__(self, resolver_context, file_object=None): super(VHDIFile, self).__init__(resolver_context, file_object=file_object) self._parent_vhdi_files = [] self._sub_file_objects = []
Initializes a file-like object. Args: resolver_context (Context): resolver context. file_object (Optional[FileIO]): file-like object.
juraj-google-style
def cloud_train(train_dataset, eval_dataset, analysis_dir, output_dir, features, model_type, max_steps, num_epochs, train_batch_size, eval_batch_size, min_eval_frequency, top_n, layer_sizes, learning_rate, epsilon, job_name, job_name_prefix, config): import google.datalab.ml as ml if ((len(train_dataset.input_files) != 1) or (len(eval_dataset.input_files) != 1)): raise ValueError('CsvDataSets must be built with a file pattern, not list of files.') if file_io.file_exists(output_dir): raise ValueError('output_dir already exists. Use a new output path.') if isinstance(features, dict): if (not file_io.file_exists(output_dir)): file_io.recursive_create_dir(output_dir) features_file = os.path.join(output_dir, 'features_file.json') file_io.write_string_to_file(features_file, json.dumps(features)) else: features_file = features if (not isinstance(config, ml.CloudTrainingConfig)): raise ValueError('cloud should be an instance of google.datalab.ml.CloudTrainingConfig for cloud training.') _assert_gcs_files([output_dir, train_dataset.input_files[0], eval_dataset.input_files[0], features_file, analysis_dir]) args = [('--train-data-paths=%s' % train_dataset.input_files[0]), ('--eval-data-paths=%s' % eval_dataset.input_files[0]), ('--preprocess-output-dir=%s' % analysis_dir), ('--transforms-file=%s' % features_file), ('--model-type=%s' % model_type), ('--max-steps=%s' % str(max_steps)), ('--train-batch-size=%s' % str(train_batch_size)), ('--eval-batch-size=%s' % str(eval_batch_size)), ('--min-eval-frequency=%s' % str(min_eval_frequency)), ('--learning-rate=%s' % str(learning_rate)), ('--epsilon=%s' % str(epsilon))] if num_epochs: args.append(('--num-epochs=%s' % str(num_epochs))) if top_n: args.append(('--top-n=%s' % str(top_n))) if layer_sizes: for i in range(len(layer_sizes)): args.append(('--layer-size%s=%s' % ((i + 1), str(layer_sizes[i])))) job_request = {'package_uris': [_package_to_staging(output_dir), _TF_GS_URL, _PROTOBUF_GS_URL], 'python_module': 'mltoolbox._structured_data.trainer.task', 'job_dir': output_dir, 'args': args} job_request.update(dict(config._asdict())) if (not job_name): job_name = (job_name_prefix or 'structured_data_train') job_name += ('_' + datetime.datetime.now().strftime('%y%m%d_%H%M%S')) job = ml.Job.submit_training(job_request, job_name) print('Job request sent. View status of job at') print(('https: return job
Train model using CloudML. See local_train() for a description of the args. Args: config: A CloudTrainingConfig object. job_name: Training job name. A default will be picked if None.
codesearchnet
def metric_streaming(self): if (not self.__metric_streaming): self.__metric_streaming = MetricStreaming(self.__connection) return self.__metric_streaming
Gets the MetricStreaming API client. Returns: MetricStreaming:
codesearchnet
def load_extra(cls, filename): try: with open(filename, 'rb') as configuration_file: cls.load_extra_data(configuration_file.read()) sys.stderr.write('Config successfully loaded from {0:s}\n'.format(filename)) return True except IOError: return False
Loads extra JSON configuration parameters from a file on the filesystem. Args: filename: str, the filename to open. Returns: bool: True if the extra configuration parameters were read.
codesearchnet
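The control flow above (return `False` on I/O error, `True` on success) can be sketched stand-alone; here plain JSON parsing stands in for the class's `load_extra_data`, so unlike the original a malformed file raises rather than returning `False`:

```python
import json


def load_extra(filename):
    """Return True if the JSON config at `filename` was read, False on I/O error."""
    try:
        with open(filename, "rb") as f:
            json.loads(f.read())  # stand-in for cls.load_extra_data(...)
        return True
    except OSError:  # IOError is an alias of OSError in Python 3
        return False
```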
def list_outputs(self, args, screen_info=None): _ = screen_info parsed = self._arg_parsers['list_outputs'].parse_args(args) output = self._list_inputs_or_outputs(parsed.recursive, parsed.node_name, parsed.depth, parsed.control, parsed.op_type, do_outputs=True) node_name = debug_graphs.get_node_name(parsed.node_name) _add_main_menu(output, node_name=node_name, enable_list_outputs=False) return output
Command handler for list_outputs. Show outputs of a given node. Args: args: Command-line arguments, excluding the command prefix, as a list of str. screen_info: Optional dict input containing screen information such as cols. Returns: Output text lines as a RichTextLines object.
github-repos
def log_cert_info(logger, msg_str, cert_obj): list( map( logger, ["{}:".format(msg_str)] + [ " {}".format(v) for v in [ "Subject: {}".format( _get_val_str(cert_obj, ["subject", "value"], reverse=True) ), "Issuer: {}".format( _get_val_str(cert_obj, ["issuer", "value"], reverse=True) ), "Not Valid Before: {}".format( cert_obj.not_valid_before.isoformat() ), "Not Valid After: {}".format(cert_obj.not_valid_after.isoformat()), "Subject Alt Names: {}".format( _get_ext_val_str( cert_obj, "SUBJECT_ALTERNATIVE_NAME", ["value", "value"] ) ), "CRL Distribution Points: {}".format( _get_ext_val_str( cert_obj, "CRL_DISTRIBUTION_POINTS", ["value", "full_name", "value", "value"], ) ), "Authority Access Location: {}".format( extract_issuer_ca_cert_url(cert_obj) or "<not found>" ), ] ], ) )
Dump basic certificate values to the log. Args: logger: Logger Logger to which to write the certificate values. msg_str: str A message to write to the log before the certificate values. cert_obj: cryptography.Certificate Certificate containing values to log. Returns: None
juraj-google-style
def _replace_event_shape_in_shape_tensor(input_shape, event_shape_in, event_shape_out, validate_args): (output_tensorshape, is_validated) = _replace_event_shape_in_tensorshape(tensorshape_util.constant_value_as_shape(input_shape), event_shape_in, event_shape_out) validation_dependencies = (map(tf.identity, (event_shape_in, event_shape_out)) if validate_args else ()) if (tensorshape_util.is_fully_defined(output_tensorshape) and (is_validated or (not validate_args))): with tf.control_dependencies(validation_dependencies): output_shape = tf.convert_to_tensor(value=output_tensorshape, name='output_shape', dtype_hint=tf.int32) return (output_shape, output_tensorshape) with tf.control_dependencies(validation_dependencies): event_shape_in_ndims = (tf.size(input=event_shape_in) if (tensorshape_util.num_elements(event_shape_in.shape) is None) else tensorshape_util.num_elements(event_shape_in.shape)) (input_non_event_shape, input_event_shape) = tf.split(input_shape, num_or_size_splits=[(- 1), event_shape_in_ndims]) additional_assertions = [] if is_validated: pass elif validate_args: mask = (event_shape_in >= 0) explicit_input_event_shape = tf.boolean_mask(tensor=input_event_shape, mask=mask) explicit_event_shape_in = tf.boolean_mask(tensor=event_shape_in, mask=mask) additional_assertions.append(assert_util.assert_equal(explicit_input_event_shape, explicit_event_shape_in, message='Input `event_shape` does not match `event_shape_in`.')) with tf.control_dependencies(additional_assertions): output_shape = tf.concat([input_non_event_shape, event_shape_out], axis=0, name='output_shape') return (output_shape, output_tensorshape)
Replaces the rightmost dims in a `Tensor` representing a shape. Args: input_shape: a rank-1 `Tensor` of integers event_shape_in: the event shape expected to be present in rightmost dims of `shape_in`. event_shape_out: the event shape with which to replace `event_shape_in` in the rightmost dims of `input_shape`. validate_args: Python `bool` indicating whether arguments should be checked for correctness. Returns: output_shape: A rank-1 integer `Tensor` with the same contents as `input_shape` except for the event dims, which are replaced with `event_shape_out`.
codesearchnet
def __init__(self, config=None, all_linters=None): self._classes = all_linters or LINTERS self._config = config or Config(self._classes) LinterRunner.config = self._config
Initialize the only Config object and assign it to other classes. Args: config (Config): Config object. all_linters (dict): Names and classes of all available linters.
juraj-google-style
def getFilesFromAFolder(path): from os import listdir from os.path import isfile, join onlyFiles = [] for f in listdir(path): if isfile(join(path, f)): onlyFiles.append(f) return onlyFiles
Get all the files in a folder. Args: ----- path: The path in which to look for the files Returns: -------- list: The list of filenames found.
codesearchnet
def get_urls(self): urls = self.get_subfields('856', 'u', i1='4', i2='2') return map((lambda x: x.replace('&amp;', '&')), urls)
Content of field ``856u42``. Typically a URL pointing to the producer's homepage. Returns: list: List of URLs defined by the producer.
codesearchnet
def build_offset_mapping_with_special_tokens(self, offset_mapping_0, offset_mapping_1=None): if offset_mapping_1 is None: return [(0, 0)] + offset_mapping_0 + [(0, 0)] return [(0, 0)] + offset_mapping_0 + [(0, 0), (0, 0)] + offset_mapping_1 + [(0, 0)]
Build offset map from a pair of offset map by concatenating and adding offsets of special tokens. An Ernie-M offset_mapping has the following format: - single sequence: `(0,0) X (0,0)` - pair of sequences: `(0,0) A (0,0) (0,0) B (0,0)` Args: offset_mapping_ids_0 (`List[tuple]`): List of char offsets to which the special tokens will be added. offset_mapping_ids_1 (`List[tuple]`, *optional*): Optional second list of wordpiece offsets for offset mapping pairs. Returns: `List[tuple]`: List of wordpiece offsets with the appropriate offsets of special tokens.
github-repos
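The offset layout is easy to see in isolation; this stand-alone sketch repeats the method's logic without the tokenizer class (every special token carries a `(0, 0)` offset, and sequence pairs get two of them between the segments):

```python
def build_offset_mapping_with_special_tokens(offsets_0, offsets_1=None):
    # single sequence: (0,0) A (0,0); pair: (0,0) A (0,0) (0,0) B (0,0)
    if offsets_1 is None:
        return [(0, 0)] + offsets_0 + [(0, 0)]
    return [(0, 0)] + offsets_0 + [(0, 0), (0, 0)] + offsets_1 + [(0, 0)]
```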
def lchmod(self, path, mode): if self.filesystem.is_windows_fs: raise NameError("name 'lchmod' is not defined") self.filesystem.chmod(path, mode, follow_symlinks=False)
Change the permissions of a file as encoded in integer mode. If the file is a link, the permissions of the link are changed. Args: path: (str) Path to the file. mode: (int) Permissions.
juraj-google-style
def mark_locations(h,section,locs,markspec='or',**kwargs): xyz = get_section_path(h,section) (r,theta,phi) = sequential_spherical(xyz) rcum = np.append(0,np.cumsum(r)) if type(locs) is float or type(locs) is np.float64: locs = np.array([locs]) if type(locs) is list: locs = np.array(locs) lengths = locs*rcum[-1] xyz_marks = [] for targ_length in lengths: xyz_marks.append(find_coord(targ_length,xyz,rcum,theta,phi)) xyz_marks = np.array(xyz_marks) line, = plt.plot(xyz_marks[:,0], xyz_marks[:,1], \ xyz_marks[:,2], markspec, **kwargs) return line
Marks one or more locations along a section. Could be used to mark the location of a recording or electrical stimulation. Args: h = hocObject to interface with neuron section = reference to section locs = float between 0 and 1, or array of floats optional arguments specify details of marker Returns: line = reference to plotted markers
juraj-google-style
def populate_native_libraries(version): with open(BINARY_EXT_TEMPLATE, 'r') as file_obj: template = file_obj.read() contents = template.format(revision=version) with open(BINARY_EXT_FILE, 'w') as file_obj: file_obj.write(contents)
Populates ``binary-extension.rst`` with release-specific data. Args: version (str): The current version.
codesearchnet
def custom(colors, bins=None, bin_method=BinMethod.quantiles): return { 'colors': colors, 'bins': bins if bins is not None else len(colors), 'bin_method': bin_method, }
Create a custom scheme. Args: colors (list of str): List of hex values for styling data bins (int, optional): Number of bins to style by. If not given, the number of colors will be used. bin_method (str, optional): Classification method. One of the values in :obj:`BinMethod`. Defaults to `quantiles`, which only works with quantitative data.
juraj-google-style
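The dict the scheme builder returns can be exercised without the library; in this sketch a plain string stands in for the `BinMethod.quantiles` enum value:

```python
QUANTILES = "quantiles"  # stand-in for BinMethod.quantiles


def custom(colors, bins=None, bin_method=QUANTILES):
    # bins defaults to the number of colors supplied
    return {
        "colors": colors,
        "bins": bins if bins is not None else len(colors),
        "bin_method": bin_method,
    }


scheme = custom(["#e66101", "#fdb863", "#b2abd2"])
```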
def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None) -> List[int]: if token_ids_1 is None: return [self.cls_token_id] + token_ids_0 + [self.sep_token_id] cls = [self.cls_token_id] sep = [self.sep_token_id] return cls + token_ids_0 + sep + token_ids_1 + sep
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A DeBERTa sequence has the following format: - single sequence: [CLS] X [SEP] - pair of sequences: [CLS] A [SEP] B [SEP] Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
github-repos
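The `[CLS] A [SEP]` / `[CLS] A [SEP] B [SEP]` layout can be checked in isolation; the ids below are placeholders (the real values come from the tokenizer's vocabulary):

```python
CLS_ID, SEP_ID = 1, 2  # placeholder special-token ids


def build_inputs_with_special_tokens(ids_0, ids_1=None):
    if ids_1 is None:
        return [CLS_ID] + ids_0 + [SEP_ID]
    # pair of sequences shares one CLS and ends each segment with SEP
    return [CLS_ID] + ids_0 + [SEP_ID] + ids_1 + [SEP_ID]
```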
class MusicgenProcessor(ProcessorMixin): feature_extractor_class = 'EncodecFeatureExtractor' tokenizer_class = ('T5Tokenizer', 'T5TokenizerFast') def __init__(self, feature_extractor, tokenizer): super().__init__(feature_extractor, tokenizer) self.current_processor = self.feature_extractor self._in_target_context_manager = False def get_decoder_prompt_ids(self, task=None, language=None, no_timestamps=True): return self.tokenizer.get_decoder_prompt_ids(task=task, language=language, no_timestamps=no_timestamps) def __call__(self, *args, **kwargs): if self._in_target_context_manager: return self.current_processor(*args, **kwargs) audio = kwargs.pop('audio', None) sampling_rate = kwargs.pop('sampling_rate', None) text = kwargs.pop('text', None) if len(args) > 0: audio = args[0] args = args[1:] if audio is None and text is None: raise ValueError('You need to specify either an `audio` or `text` input to process.') if text is not None: inputs = self.tokenizer(text, **kwargs) if audio is not None: audio_inputs = self.feature_extractor(audio, *args, sampling_rate=sampling_rate, **kwargs) if audio is None: return inputs elif text is None: return audio_inputs else: inputs['input_values'] = audio_inputs['input_values'] if 'padding_mask' in audio_inputs: inputs['padding_mask'] = audio_inputs['padding_mask'] return inputs def batch_decode(self, *args, **kwargs): audio_values = kwargs.pop('audio', None) padding_mask = kwargs.pop('padding_mask', None) if len(args) > 0: audio_values = args[0] args = args[1:] if audio_values is not None: return self._decode_audio(audio_values, padding_mask=padding_mask) else: return self.tokenizer.batch_decode(*args, **kwargs) def decode(self, *args, **kwargs): return self.tokenizer.decode(*args, **kwargs) def _decode_audio(self, audio_values, padding_mask: Optional=None) -> List[np.ndarray]: audio_values = to_numpy(audio_values) bsz, channels, seq_len = audio_values.shape if padding_mask is None: return list(audio_values) padding_mask = to_numpy(padding_mask) difference = seq_len - padding_mask.shape[-1] padding_value = 1 - self.feature_extractor.padding_value padding_mask = np.pad(padding_mask, ((0, 0), (0, difference)), 'constant', constant_values=padding_value) audio_values = audio_values.tolist() for i in range(bsz): sliced_audio = np.asarray(audio_values[i])[padding_mask[i][None, :] != self.feature_extractor.padding_value] audio_values[i] = sliced_audio.reshape(channels, -1) return audio_values
Constructs a MusicGen processor which wraps an EnCodec feature extractor and a T5 tokenizer into a single processor class. [`MusicgenProcessor`] offers all the functionalities of [`EncodecFeatureExtractor`] and [`T5Tokenizer`]. See [`~MusicgenProcessor.__call__`] and [`~MusicgenProcessor.decode`] for more information. Args: feature_extractor (`EncodecFeatureExtractor`): An instance of [`EncodecFeatureExtractor`]. The feature extractor is a required input. tokenizer (`T5Tokenizer`): An instance of [`T5Tokenizer`]. The tokenizer is a required input.
github-repos
def run_generate(verbose=True): parser = argparse.ArgumentParser() parser.add_argument('model_name', type=str, help='like facebook/bart-large-cnn,google-t5/t5-base, etc.') parser.add_argument('input_path', type=str, help='like cnn_dm/test.source') parser.add_argument('save_path', type=str, help='where to save summaries') parser.add_argument('--reference_path', type=str, required=False, help='like cnn_dm/test.target') parser.add_argument('--score_path', type=str, required=False, default='metrics.json', help='where to save metrics') parser.add_argument('--device', type=str, required=False, default=DEFAULT_DEVICE, help='cuda, cuda:1, cpu etc.') parser.add_argument('--prefix', type=str, required=False, default=None, help='will be added to the beginning of src examples') parser.add_argument('--task', type=str, default='summarization', help='used for task_specific_params + metrics') parser.add_argument('--bs', type=int, default=8, required=False, help='batch size') parser.add_argument('--n_obs', type=int, default=-1, required=False, help='How many observations. Defaults to all.') parser.add_argument('--fp16', action='store_true') parser.add_argument('--dump-args', action='store_true', help='print the custom hparams with the results') parser.add_argument('--info', nargs='?', type=str, const=datetime_now(), help="use in conjunction w/ --dump-args to print with the results whatever other info you'd like, e.g. lang=en-ru. If no value is passed, the current datetime string will be used.") args, rest = parser.parse_known_args() parsed_args = parse_numeric_n_bool_cl_kwargs(rest) if parsed_args and verbose: print(f'parsed the following generate kwargs: {parsed_args}') examples = [' ' + x.rstrip() if 't5' in args.model_name else x.rstrip() for x in open(args.input_path).readlines()] if args.n_obs > 0: examples = examples[:args.n_obs] Path(args.save_path).parent.mkdir(exist_ok=True) if args.reference_path is None and Path(args.score_path).exists(): warnings.warn(f'score_path {args.score_path} will be overwritten unless you type ctrl-c.') if args.device == 'cpu' and args.fp16: raise ValueError("Can't mix --fp16 and --device cpu") runtime_metrics = generate_summaries_or_translations(examples, args.save_path, args.model_name, batch_size=args.bs, device=args.device, fp16=args.fp16, task=args.task, prefix=args.prefix, **parsed_args) if args.reference_path is None: return {} score_fn = calculate_bleu if 'translation' in args.task else calculate_rouge output_lns = [x.rstrip() for x in open(args.save_path).readlines()] reference_lns = [x.rstrip() for x in open(args.reference_path).readlines()][:len(output_lns)] scores: dict = score_fn(output_lns, reference_lns) scores.update(runtime_metrics) if args.dump_args: scores.update(parsed_args) if args.info: scores['info'] = args.info if verbose: print(scores) if args.score_path is not None: json.dump(scores, open(args.score_path, 'w')) return scores
Takes input text, generates output, and then using reference calculates the BLEU scores. The results are saved to a file and returned to the caller, and printed out unless ``verbose=False`` is passed. Args: verbose (:obj:`bool`, `optional`, defaults to :obj:`True`): print results to stdout Returns: a tuple: ``(scores, params)`` - ``scores``: a dict of scores data ``{'bleu': 39.6501, 'n_obs': 2000, 'runtime': 186, 'seconds_per_sample': 0.093}`` - ``params``: a dict of custom params, e.g. ``{'num_beams': 5, 'length_penalty': 0.8}``
github-repos
def add_slot(self, var, slot_name, initializer='zeros', shape=None): if slot_name not in self._slot_names: self._slot_names.append(slot_name) var_key = _var_key(var) slot_dict = self._slots.setdefault(var_key, {}) weight = slot_dict.get(slot_name, None) if weight is None: if isinstance(initializer, str) or callable(initializer): initializer = initializers.get(initializer) if isinstance(initializer, trackable.CheckpointInitialValueCallable) or shape is not None: slot_shape = shape else: slot_shape = var.shape initial_value = functools.partial(initializer, shape=slot_shape, dtype=var.dtype) else: initial_value = initializer with self._distribution_strategy_scope(): strategy = distribute_lib.get_strategy() if not strategy.extended.variable_created_in_scope(var): raise ValueError("Trying to create optimizer slot variable under the scope for tf.distribute.Strategy ({}), which is different from the scope used for the original variable ({}). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope".format(strategy, var)) with strategy.extended.colocate_vars_with(var): weight = tf_variables.Variable(name='%s/%s' % (var._shared_name, slot_name), dtype=var.dtype, trainable=False, initial_value=initial_value) backend.track_variable(weight) slot_dict[slot_name] = weight self._restore_slot_variable(slot_name=slot_name, variable=var, slot_variable=weight) self._weights.append(weight) return weight
Add a new slot variable for `var`. A slot variable is an additional variable associated with `var` to train. It is allocated and managed by optimizers, e.g. `Adam`. Args: var: a `Variable` object. slot_name: name of the slot variable. initializer: initializer of the slot variable shape: (Optional) shape of the slot variable. If not set, it will default to the shape of `var`. Returns: A slot variable.
github-repos
def __init__(self, channel): self.ParseResume = channel.unary_unary( "/google.cloud.talent.v4beta1.ResumeService/ParseResume", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_resume__service__pb2.ParseResumeRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_resume__service__pb2.ParseResumeResponse.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def putfile(self, filepath, buildroot, metahash): def gen_obj_path(filename): filehash = util.hash_file(filepath).hexdigest() return (filehash, os.path.join(self.obj_cachedir, filehash[0:2], filehash[2:4], filehash)) filepath_relative = filepath.split(buildroot)[1][1:] incachepath = self._genpath(filepath_relative, metahash) (filehash, obj_path) = gen_obj_path(filepath) if (not os.path.exists(obj_path)): obj_dir = os.path.dirname(obj_path) if (not os.path.exists(obj_dir)): os.makedirs(obj_dir) log.debug('Adding to obj cache: %s -> %s', filepath, obj_path) os.link(filepath, obj_path) if os.path.exists(incachepath): existingfile_hash = util.hash_file(incachepath).hexdigest() if (filehash != existingfile_hash): log.warn('File found in mh cache, but checksum differs. Replacing with this new version. (File: %s)', filepath) log.warn('Possible reasons for this:') log.warn(' 1. This build is not hermetic, and something differs about the build environment compared to the previous build.') log.warn(' 2. This file has a timestamp or other build-time related data encoded into it, which will always cause the checksum to differ when built.') log.warn(' 3. Everything is terrible and nothing works.') os.unlink(incachepath) if (not os.path.exists(incachepath)): log.debug('Adding to mh cache: %s -> %s', filepath, incachepath) if (not os.path.exists(os.path.dirname(incachepath))): os.makedirs(os.path.dirname(incachepath)) os.link(obj_path, incachepath)
Put a file in the cache. Args: filepath: Path to file on disk. buildroot: Path to buildroot metahash: hash object
codesearchnet
def device(self, name): if isinstance(name, LogicalDevice): name = name.name elif pydev.is_device_spec(name): name = name.to_string() return _EagerDeviceContext(self, name)
Context-manager to force placement of operations and Tensors on a device. Args: name: Name of the device or None to get default placement. Returns: Context manager that forces device placement. Raises: ValueError: If name is not a string or is an invalid device name. RuntimeError: If device scopes are not properly nested.
github-repos
def _value_and_batch_jacobian(f, x): if tf.executing_eagerly(): with tf.GradientTape() as tape: tape.watch(x) value = f(x) batch_jacobian = tape.batch_jacobian(value, x) else: value = f(x) batch_jacobian = gradients.batch_jacobian(value, x) return (value, batch_jacobian)
Enables uniform interface to value and batch jacobian calculation. Works in both eager and graph modes. Args: f: The scalar function to evaluate. x: The value at which to compute the value and the batch jacobian. Returns: A tuple (f(x), J(x)), where J(x) is the batch jacobian.
codesearchnet
def read_first_header(self): self.file_obj.seek(0) (header_dict, pos) = self.read_header() self.file_obj.seek(0) return header_dict
Read first header in file Returns: header (dict): keyword:value pairs of header metadata
codesearchnet
def update(self, sparql_query_only=False, auto_refresh=None, update_binary=True): self._diff_graph() sq = SparqlUpdate(self.rdf.prefixes, self.rdf.diffs) if sparql_query_only: return sq.build_query() response = self.repo.api.http_request('PATCH', ('%s/fcr:metadata' % self.uri), data=sq.build_query(), headers={'Content-Type': 'application/sparql-update'}) if (response.status_code != 204): logger.debug(response.content) raise Exception(('HTTP %s, expecting 204' % response.status_code)) if ((type(self) == NonRDFSource) and update_binary and (type(self.binary.data) != requests.models.Response)): self.binary._prep_binary() binary_data = self.binary.data binary_response = self.repo.api.http_request('PUT', self.uri, data=binary_data, headers={'Content-Type': self.binary.mimetype}) if ((not auto_refresh) and (not self.repo.default_auto_refresh)): logger.debug('not refreshing resource RDF, but updated binary, so must refresh binary data') updated_self = self.repo.get_resource(self.uri) self.binary.refresh(updated_self) if hasattr(self, '_post_update'): self._post_update() '\n\t\tIf not updating binary, pass that bool to refresh as refresh_binary flag to avoid touching binary data\n\t\t' if auto_refresh: self.refresh(refresh_binary=update_binary) elif (auto_refresh == None): if self.repo.default_auto_refresh: self.refresh(refresh_binary=update_binary) return True
Method to update resources in repository. Firing this method computes the difference in the local modified graph and the original one, creates an instance of SparqlUpdate and builds a sparql query that represents these differences, and sends this as a PATCH request. Note: send PATCH request, regardless of RDF or NonRDF, to [uri]/fcr:metadata If the resource is NonRDF (Binary), this also method also updates the binary data. Args: sparql_query_only (bool): If True, returns only the sparql query string and does not perform any actual updates auto_refresh (bool): If True, refreshes resource after update. If left None, defaults to repo.default_auto_refresh update_binary (bool): If True, and resource is NonRDF, updates binary data as well Returns: (bool)
codesearchnet
def liquid_precipitation_depth(self, value=999.0): if (value is not None): try: value = float(value) except ValueError: raise ValueError('value {} needs to be of type float for field `liquid_precipitation_depth`'.format(value)) self._liquid_precipitation_depth = value
Corresponds to IDD Field `liquid_precipitation_depth` Args: value (float): value for IDD Field `liquid_precipitation_depth` Unit: mm Missing value: 999.0 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def promote_artifacts(self, promote_stage='latest'): if promote_stage.lower() == 'alpha': self._sync_to_uri(self.s3_canary_uri) elif promote_stage.lower() == 'canary': self._sync_to_uri(self.s3_latest_uri) else: self._sync_to_uri(self.s3_latest_uri)
Promote artifact version to the destination stage. Args: promote_stage (string): Stage that is being promoted
juraj-google-style
def kde_partition_data(data, estimate_tails=True): kde = stats.kde.gaussian_kde(data) evaluation_bins = np.linspace(start=(np.min(data) - (kde.covariance_factor() / 2)), stop=(np.max(data) + (kde.covariance_factor() / 2)), num=np.floor((((np.max(data) - np.min(data)) / kde.covariance_factor()) + 1)).astype(int)) cdf_vals = [kde.integrate_box_1d((- np.inf), x) for x in evaluation_bins] evaluation_weights = np.diff(cdf_vals) if estimate_tails: bins = np.concatenate(([(np.min(data) - (1.5 * kde.covariance_factor()))], evaluation_bins, [(np.max(data) + (1.5 * kde.covariance_factor()))])) else: bins = np.concatenate(([(- np.inf)], evaluation_bins, [np.inf])) weights = np.concatenate(([cdf_vals[0]], evaluation_weights, [(1 - cdf_vals[(- 1)])])) return {'bins': bins, 'weights': weights}
Convenience method for building a partition and weights using a gaussian Kernel Density Estimate and default bandwidth. Args: data (list-like): The data from which to construct the estimate estimate_tails (bool): Whether to estimate the tails of the distribution to keep the partition object finite Returns: A new partition_object:: { "bins": (list) The endpoints of the partial partition of reals, "weights": (list) The densities of the bins implied by the partition. }
codesearchnet
def get_soup_response(self): if (self.response is not None): if (self.__response_soup is None): result = BeautifulSoup(self.response.text, 'lxml') if self.decomposed: return result else: self.__response_soup = BeautifulSoup(self.response.text, 'lxml') return self.__response_soup
Get the response as a cached BeautifulSoup container. If the response has been decomposed, a fresh, uncached container is returned instead. Returns: obj: The BeautifulSoup container.
codesearchnet
def MakeNewConfig(self): result = self.__class__() result.type_infos = self.type_infos result.defaults = self.defaults result.context = self.context result.valid_contexts = self.valid_contexts return result
Creates a new configuration option based on this one. Note that it is not normally possible to just instantiate the config object because it will have an empty set of type descriptors (i.e. no config options will be defined). Config options are normally defined at import time, and then they get added to the _CONFIG global in this module. To obtain a new configuration object, inheriting the regular config options, this method must be called from the global _CONFIG object, to make a copy. Returns: A new empty config object. which has the same parameter definitions as this one.
codesearchnet
def register(self, managed_object): if (not isinstance(managed_object, pobjects.ManagedObject)): raise TypeError('managed object must be a Pie ManagedObject') object_attributes = list() if hasattr(managed_object, 'cryptographic_usage_masks'): if (managed_object.cryptographic_usage_masks is not None): mask_attribute = self.attribute_factory.create_attribute(enums.AttributeType.CRYPTOGRAPHIC_USAGE_MASK, managed_object.cryptographic_usage_masks) object_attributes.append(mask_attribute) if hasattr(managed_object, 'operation_policy_name'): if (managed_object.operation_policy_name is not None): opn_attribute = self.attribute_factory.create_attribute(enums.AttributeType.OPERATION_POLICY_NAME, managed_object.operation_policy_name) object_attributes.append(opn_attribute) if hasattr(managed_object, 'names'): if managed_object.names: for name in managed_object.names: name_attribute = self.attribute_factory.create_attribute(enums.AttributeType.NAME, name) object_attributes.append(name_attribute) template = cobjects.TemplateAttribute(attributes=object_attributes) object_type = managed_object.object_type secret = self.object_factory.convert(managed_object) result = self.proxy.register(object_type, template, secret) status = result.result_status.value if (status == enums.ResultStatus.SUCCESS): return result.uuid else: reason = result.result_reason.value message = result.result_message.value raise exceptions.KmipOperationFailure(status, reason, message)
Register a managed object with a KMIP appliance. Args: managed_object (ManagedObject): A managed object to register. An instantiatable subclass of ManagedObject from the Pie API. Returns: string: The uid of the newly registered managed object. Raises: ClientConnectionNotOpen: if the client connection is unusable KmipOperationFailure: if the operation result is a failure TypeError: if the input argument is invalid
codesearchnet
def unpack(self, buff, offset=0): try: unpacked_data = struct.unpack('!4B', buff[offset:offset+4]) self._value = '.'.join([str(x) for x in unpacked_data]) except struct.error as exception: raise exceptions.UnpackException('%s; %s: %s' % (exception, offset, buff))
Unpack a binary message into this object's attributes. Unpack the binary value *buff* and update this object attributes based on the results. Args: buff (bytes): Binary data package to be unpacked. offset (int): Where to begin unpacking. Raises: UnpackException: If there is a struct unpacking error.
juraj-google-style
def dot(matrix, vector, matrix_ty, vector_ty): weld_obj = WeldObject(encoder_, decoder_) matrix_var = weld_obj.update(matrix) if isinstance(matrix, WeldObject): matrix_var = matrix.obj_id weld_obj.dependencies[matrix_var] = matrix vector_var = weld_obj.update(vector) loopsize_annotation = "" if isinstance(vector, WeldObject): vector_var = vector.obj_id weld_obj.dependencies[vector_var] = vector if isinstance(vector, np.ndarray): loopsize_annotation = "@(loopsize: %dL)" % len(vector) weld_template = weld_obj.weld_code = weld_template % {"matrix": matrix_var, "vector": vector_var, "matrix_ty": matrix_ty, "vector_ty": vector_ty, "loopsize_annotation": loopsize_annotation} return weld_obj
Computes the dot product between a matrix and a vector. Args: matrix (WeldObject / Numpy.ndarray): 2-d input matrix vector (WeldObject / Numpy.ndarray): 1-d input vector matrix_ty (WeldType): Type of each element in the input matrix vector_ty (WeldType): Type of each element in the input vector Returns: A WeldObject representing this computation
juraj-google-style
def time_and_memory(min_micros=1, min_bytes=1, min_accelerator_micros=0, min_cpu_micros=0, min_peak_bytes=0, min_residual_bytes=0, min_output_bytes=0): return {'max_depth': 10000, 'min_bytes': min_bytes, 'min_peak_bytes': min_peak_bytes, 'min_residual_bytes': min_residual_bytes, 'min_output_bytes': min_output_bytes, 'min_micros': min_micros, 'min_accelerator_micros': min_accelerator_micros, 'min_cpu_micros': min_cpu_micros, 'min_params': 0, 'min_float_ops': 0, 'min_occurrence': 0, 'order_by': 'micros', 'account_type_regexes': ['.*'], 'start_name_regexes': ['.*'], 'trim_name_regexes': [], 'show_name_regexes': ['.*'], 'hide_name_regexes': [], 'account_displayed_op_only': True, 'select': ['micros', 'bytes'], 'step': -1, 'output': 'stdout'}
Show operation time and memory consumptions. Args: min_micros: Only show profiler nodes with execution time no less than this. It sums accelerator and cpu times. min_bytes: Only show profiler nodes requested to allocate no less bytes than this. min_accelerator_micros: Only show profiler nodes spend no less than this time on accelerator (e.g. GPU). min_cpu_micros: Only show profiler nodes spend no less than this time on cpu. min_peak_bytes: Only show profiler nodes using no less than this bytes at peak (high watermark). For profiler nodes consist of multiple graph nodes, it sums the graph nodes' peak_bytes. min_residual_bytes: Only show profiler nodes have no less than this bytes not being de-allocated after Compute() ends. For profiler nodes consist of multiple graph nodes, it sums the graph nodes' residual_bytes. min_output_bytes: Only show profiler nodes have no less than this bytes output. The output are not necessarily allocated by this profiler nodes. Returns: A dict of profiling options.
github-repos
def __init__(self, req): super(Gateway, self).__init__(req) self.started_response = False self.env = self.get_environ() self.remaining_bytes_out = None
Initialize WSGI Gateway instance with request. Args: req (HTTPRequest): current HTTP request
juraj-google-style
def _check_preconditions(self, state: Sequence[tf.Tensor], action: Sequence[tf.Tensor], bound_constraints: Dict[(str, Constraints)], default: Sequence[tf.Tensor]) -> Tuple[(tf.Tensor, Sequence[tf.Tensor], tf.Tensor)]: def condition(i, a, checking): not_checking = tf.reduce_any(tf.logical_not(checking)) return not_checking def body(i, a, checking): new_action = [] new_sampled_action = self._sample_action(bound_constraints, default) new_preconds_checking = self.compiler.compile_action_preconditions_checking(state, new_sampled_action) for (action_fluent, new_sampled_action_fluent) in zip(a, new_sampled_action): new_action_fluent = tf.where(checking, action_fluent, new_sampled_action_fluent) new_action.append(new_action_fluent) new_action = tuple(new_action) new_checking = tf.logical_or(checking, new_preconds_checking) return ((i + 1), new_action, new_checking) i0 = tf.constant(0) preconds_checking = self.compiler.compile_action_preconditions_checking(state, action) return tf.while_loop(condition, body, loop_vars=[i0, action, preconds_checking])
Samples action fluents until all preconditions are satisfied. Checks action preconditions for the sampled `action` and current `state`, and iff all preconditions are satisfied it returns the sampled action fluents. Args: state (Sequence[tf.Tensor]): A list of state fluents. action (Sequence[tf.Tensor]): A list of action fluents. bound_constraints (Dict[str, Tuple[Optional[TensorFluent], Optional[TensorFluent]]]): The bounds for each action fluent. default (Sequence[tf.Tensor]): The default action fluents. Returns: Tuple[tf.Tensor, Sequence[tf.Tensor], tf.Tensor]: A tuple with an integer tensor corresponding to the number of samples, action fluents and a boolean tensor for checking all action preconditions.
codesearchnet
def add_multiple(self, flags): if (not isinstance(flags, list)): raise TypeError('Expected list of flags, got object of type {}'.format(type(flags))) for flag in flags: if isinstance(flag, Flag): self.add_item(flag) elif isinstance(flag, tuple): try: item = Flag(*flag) self.add_item(item) except TypeError as e: raise TypeError('Invalid arguments to initialize a flag definition, expect ({0} [, {1}]) but got {2}'.format(', '.join(Flag.REQUIRED_FIELDS), ', '.join(Flag.OPTIONAL_FIELDS), flag))
Add multiple command line flags Arguments: flags (:obj:`list` of :obj:`tuple`): List of flags in tuples (name, flag_type, description, (optional) default) Raises: TypeError: Raised when `flags` is not a list, or when a tuple's contents cannot initialize a flag definition
codesearchnet
def _reduced_stack(istart=3, iend=5, ipython=True): import inspect return [i[istart:iend] for i in inspect.stack() if _decorated_path(i[1])]
Returns the reduced function call stack that includes only relevant function calls (i.e., ignores any that are not part of the specified package or acorn). Args: istart (int): start index of the fields kept from each `inspect.stack` frame record. iend (int): end index of the fields kept from each `inspect.stack` frame record. ipython (bool): whether the code is running under IPython (currently unused).
codesearchnet
def DEFINE_multi_enum_class(name, default, enum_class, help, flag_values=_flagvalues.FLAGS, module_name=None, **args): DEFINE_flag(_flag.MultiEnumClassFlag(name, default, help, enum_class), flag_values, module_name, **args)
Registers a flag whose value can be a list of enum members. Use the flag on the command line multiple times to place multiple enum values into the list. Args: name: str, the flag name. default: Union[Iterable[Enum], Iterable[Text], Enum, Text, None], the default value of the flag; see `DEFINE_multi`; only differences are documented here. If the value is a single Enum, it is treated as a single-item list of that Enum value. If it is an iterable, text values within the iterable will be converted to the equivalent Enum objects. enum_class: class, the Enum class with all the possible values for the flag. help: str, the help message. flag_values: FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name: A string, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. **args: Dictionary with extra keyword args that are passed to the Flag __init__.
codesearchnet
def account(transition, direction=Direction.BIDIRECTIONAL): if (direction != Direction.BIDIRECTIONAL): return directed_account(transition, direction) return Account((directed_account(transition, Direction.CAUSE) + directed_account(transition, Direction.EFFECT)))
Return the set of all causal links for a |Transition|. Args: transition (Transition): The transition of interest. Keyword Args: direction (Direction): By default the account contains actual causes and actual effects. Returns: Account: The set of causal links.
codesearchnet
def marquee(text="", width=78, mark='*'): if not text: return (mark*width)[:width] nmark = (width-len(text)-2)
Return the input string centered in a 'marquee'. Args: text (str): Input string width (int): Width of final output string. mark (str): Character used to fill string. :Examples: >>> marquee('A test', width=40) '**************** A test ****************' >>> marquee('A test', width=40, mark='-') '---------------- A test ----------------' marquee('A test',40, ' ') ' A test '
juraj-google-style
def _StartWorkerProcess(self, process_name, storage_writer): process_name = 'Worker_{0:02d}'.format(self._last_worker_number) logger.debug('Starting worker process {0:s}'.format(process_name)) if self._use_zeromq: queue_name = '{0:s} task queue'.format(process_name) task_queue = zeromq_queue.ZeroMQRequestConnectQueue( delay_open=True, linger_seconds=0, name=queue_name, port=self._task_queue_port, timeout_seconds=self._TASK_QUEUE_TIMEOUT_SECONDS) else: task_queue = self._task_queue process = worker_process.WorkerProcess( task_queue, storage_writer, self._artifacts_filter_helper, self.knowledge_base, self._session_identifier, self._processing_configuration, enable_sigsegv_handler=self._enable_sigsegv_handler, name=process_name) for handler in logging.root.handlers: logging.root.removeHandler(handler) handler.close() process.start() loggers.ConfigureLogging( debug_output=self._debug_output, filename=self._log_filename, mode='a', quiet_mode=self._quiet_mode) try: self._StartMonitoringProcess(process) except (IOError, KeyError) as exception: pid = process.pid logger.error(( 'Unable to monitor replacement worker process: {0:s} ' '(PID: {1:d}) with error: {2!s}').format( process_name, pid, exception)) self._TerminateProcess(process) return None self._RegisterProcess(process) self._last_worker_number += 1 return process
Creates, starts, monitors and registers a worker process. Args: process_name (str): process name. storage_writer (StorageWriter): storage writer for a session storage used to create task storage. Returns: MultiProcessWorkerProcess: extraction worker process or None if the process could not be started.
juraj-google-style
def expand_tile(units, axis): assert axis in (1, 2) n_time_steps = K.int_shape(units)[1] repetitions = [1, 1, 1, 1] repetitions[axis] = n_time_steps if axis == 1: expanded = Reshape(target_shape=( (1,) + K.int_shape(units)[1:] ))(units) else: expanded = Reshape(target_shape=(K.int_shape(units)[1:2] + (1,) + K.int_shape(units)[2:]))(units) return K.tile(expanded, repetitions)
Expand and tile tensor along a given axis Args: units: tf tensor with dimensions [batch_size, time_steps, n_input_features] axis: axis along which to expand and tile. Must be 1 or 2
juraj-google-style
def segmentation_images(self,*args,**kwargs): if not self.db: raise ValueError("Need to set db") segs = SegmentationImages.read_cellframe(self,*args,**kwargs) segs.microns_per_pixel = self.microns_per_pixel return segs
Use the segmented images to create per-image graphics Args: verbose (bool): output more details if true Returns: SegmentationImages: returns a class used to construct the image graphics
juraj-google-style
def run_analysis(args): import google.datalab.bigquery as bq if args.bigquery_table: table = bq.Table(args.bigquery_table) schema_list = table.schema._bq_schema else: schema_list = json.loads( file_io.read_file_to_string(args.schema_file).decode()) table = bq.ExternalDataSource( source=args.input_file_pattern, schema=bq.Schema(schema_list)) for col_schema in schema_list: col_type = col_schema['type'].lower() if col_type != 'string' and col_type != 'integer' and col_type != 'float': raise ValueError('Schema contains an unsupported type %s.' % col_type) run_numerical_analysis(table, schema_list, args) run_categorical_analysis(table, schema_list, args) file_io.write_string_to_file( os.path.join(args.output_dir, SCHEMA_FILE), json.dumps(schema_list, indent=2, separators=(',', ': ')))
Builds an analysis file for training. Uses BiqQuery tables to do the analysis. Args: args: command line args Raises: ValueError if schema contains unknown types.
juraj-google-style
def post(self, resource): response = self.api.execute( "POST", self.endpoint, json=(resource.as_dict())) if not response.ok: raise Error.parse(response.json()) return self._cls.parse(response.json())
Creates a new instance of the resource. Args: resource - gophish.models.Model - The resource instance Returns: The newly created resource instance, parsed from the API response
juraj-google-style
def diff_bisectSplit(self, text1, text2, x, y, deadline): text1a = text1[:x] text2a = text2[:y] text1b = text1[x:] text2b = text2[y:] diffs = self.diff_main(text1a, text2a, False, deadline) diffsb = self.diff_main(text1b, text2b, False, deadline) return diffs + diffsb
Given the location of the 'middle snake', split the diff in two parts and recurse. Args: text1: Old string to be diffed. text2: New string to be diffed. x: Index of split point in text1. y: Index of split point in text2. deadline: Time at which to bail if not yet complete. Returns: Array of diff tuples.
juraj-google-style
def set_join_rule(self, room_id, join_rule): content = { "join_rule": join_rule } return self.send_state_event(room_id, "m.room.join_rules", content)
Set the rule for users wishing to join the room. Args: room_id(str): The room to set the rules for. join_rule(str): The chosen rule. One of: ["public", "knock", "invite", "private"]
juraj-google-style
def compressuser(path, home='~'): path = normpath(path) userhome_dpath = userhome() if path.startswith(userhome_dpath): if (len(path) == len(userhome_dpath)): path = home elif (path[len(userhome_dpath)] == os.path.sep): path = (home + path[len(userhome_dpath):]) return path
Inverse of `os.path.expanduser` Args: path (PathLike): path in system file structure home (str): symbol used to replace the home path. Defaults to '~', but you might want to use '$HOME' or '%USERPROFILE%' instead. Returns: PathLike: path: shortened path replacing the home directory with a tilde CommandLine: xdoctest -m ubelt.util_path compressuser Example: >>> path = expanduser('~') >>> assert path != '~' >>> assert compressuser(path) == '~' >>> assert compressuser(path + '1') == path + '1' >>> assert compressuser(path + '/1') == join('~', '1') >>> assert compressuser(path + '/1', '$HOME') == join('$HOME', '1')
codesearchnet
def remove_vtep(self, name, vtep, vlan=None): if (not vlan): cmd = 'vxlan flood vtep remove {}'.format(vtep) else: cmd = 'vxlan vlan {} flood vtep remove {}'.format(vlan, vtep) return self.configure_interface(name, cmd)
Removes a VTEP endpoint from the global or local flood list EosVersion: 4.13.7M Args: name (str): The name of the interface to configure vtep (str): The IP address of the remote VTEP endpoint to remove vlan (str): The VLAN ID associated with this VTEP. If the VLAN keyword is used, then the VTEP is removed from that VLAN's local flood list Returns: True if the command completes successfully
codesearchnet