def save_variation_for_experiment(self, experiment_id, variation_id):
    self.experiment_bucket_map.update({
        experiment_id: {
            self.VARIATION_ID_KEY: variation_id
        }
    })
Helper method to save new experiment/variation as part of the user's profile. Args: experiment_id: ID for experiment for which the decision is to be stored. variation_id: ID for variation that the user saw.
juraj-google-style
def callback_handler(self):
    decorator = self

    class OAuth2Handler(webapp.RequestHandler):
        """Handler for the redirect_uri of the OAuth 2.0 dance."""

        @login_required
        def get(self):
            error = self.request.get('error')
            if error:
                errormsg = self.request.get('error_description', error)
                self.response.out.write(
                    'The authorization request failed: {0}'.format(
                        _safe_html(errormsg)))
            else:
                user = users.get_current_user()
                decorator._create_flow(self)
                credentials = decorator.flow.step2_exchange(
                    self.request.params)
                decorator._storage_class(
                    decorator._credentials_class, None,
                    decorator._credentials_property_name,
                    user=user).put(credentials)
                redirect_uri = _parse_state_value(
                    str(self.request.get('state')), user)
                if redirect_uri is None:
                    self.response.out.write('The authorization request failed')
                    return
                if (decorator._token_response_param and
                        credentials.token_response):
                    resp_json = json.dumps(credentials.token_response)
                    redirect_uri = _helpers._add_query_parameter(
                        redirect_uri, decorator._token_response_param,
                        resp_json)
                self.redirect(redirect_uri)

    return OAuth2Handler
RequestHandler for the OAuth 2.0 redirect callback. Usage:: app = webapp.WSGIApplication([ ('/index', MyIndexHandler), ..., (decorator.callback_path, decorator.callback_handler()) ]) Returns: A webapp.RequestHandler that handles the redirect back from the server during the OAuth 2.0 dance.
codesearchnet
def get_uri(dir_name):
    fullpath = os.path.abspath(dir_name)
    try:
        hostname = socket.gethostbyaddr(socket.gethostname())[0]
    except OSError:
        # Reverse lookup can fail (e.g. no reverse DNS entry); fall back to
        # the bare hostname.
        hostname = socket.gethostname()
    return "{}:{}".format(hostname, fullpath)
Returns the URI path for a directory. This allows files hosted on different file servers to have distinct locations. Args: dir_name: A directory name. Returns: Full URI path, e.g., fileserver.host.com:/full/path/of/dir_name.
juraj-google-style
def load_model(file_path: str) -> NormalizedModel:
    with open(file_path) as f:
        model = json.load(f)
    model_flat = OrderedDict()
    for category in model:
        for item in model[category]:
            model_flat['%s:%s' % (category, item)] = model[category][item]
    weights = jnp.array(list(model_flat.values()))
    weights = weights / weights.std()
    weights = weights - weights.mean()
    keys = list(model_flat.keys())
    return NormalizedModel(keys, weights)
Loads a model as a pair of a features list and a normalized weight vector. Args: file_path: A file path for the model JSON file. Returns: A normalized model, which is a pair of a list of feature identifiers and a normalized weight vector.
github-repos
def parse_meta(filename, data):
    if "." not in filename:
        raise MetaParsingException(
            "Can't recognize type of your metadata ('%s')!" % filename
        )
    suffix = filename.rsplit(".", 1)[1].lower()
    if suffix not in SUPPORTED_FILES:
        raise MetaParsingException("Can't parse file of type '%s'!" % suffix)
    fp = validator.FieldParser()
    for key, val in SUPPORTED_FILES[suffix](data).items():
        fp.process(key, val)
    return fp.get_epublication()
Parse `data` to EPublication. Args: filename (str): Used to choose the right parser based on the suffix. data (str): Content of the metadata file. Returns: EPublication: object.
juraj-google-style
def CheckTaskReadyForMerge(self, task):
    if self._storage_type != definitions.STORAGE_TYPE_SESSION:
        raise IOError('Unsupported storage type.')
    if not self._processed_task_storage_path:
        raise IOError('Missing processed task storage path.')
    processed_storage_file_path = self._GetProcessedStorageFilePath(task)
    try:
        stat_info = os.stat(processed_storage_file_path)
    except (IOError, OSError):
        return False
    task.storage_file_size = stat_info.st_size
    return True
Checks if a task is ready for merging with this session storage. If the task is ready to be merged, this method also sets the task's storage file size. Args: task (Task): task. Returns: bool: True if the task is ready to be merged. Raises: IOError: if the storage type is not supported or if the processed task storage path is missing.
codesearchnet
def AddRow(self, values):
    super(CLITableView, self).AddRow(values)
    value_length = len(values[0])
    if value_length > self._column_width:
        self._column_width = value_length
Adds a row of values. Args: values (list[object]): values. Raises: ValueError: if the number of values is out of bounds.
juraj-google-style
def _register_and_parse_flags_with_usage(argv=None,
                                         flags_parser=parse_flags_with_usage):
    if _register_and_parse_flags_with_usage.done:
        raise SystemError('Flag registration can be done only once.')
    define_help_flags()
    original_argv = sys.argv if argv is None else argv
    args_to_main = flags_parser(original_argv)
    if not FLAGS.is_parsed():
        raise Error('FLAGS must be parsed after flags_parser is called.')
    if FLAGS.only_check_args:
        sys.exit(0)
    if FLAGS['verbosity'].using_default_value:
        FLAGS.verbosity = 0
    _register_and_parse_flags_with_usage.done = True
    return args_to_main
Registers help flags, parses arguments and shows usage if appropriate. This also calls sys.exit(0) if flag --only_check_args is True. Args: argv: [str], a non-empty list of the command line arguments including program name, sys.argv is used if None. flags_parser: Callable[[List[Text]], Any], the function used to parse flags. The return value of this function is passed to `main` untouched. It must guarantee FLAGS is parsed after this function is called. Returns: The return value of `flags_parser`. When using the default `flags_parser`, it returns the following: [str], a non-empty list of remaining command line arguments after parsing flags, including program name. Raises: Error: Raised when flags_parser is called, but FLAGS is not parsed. SystemError: Raised when it's called more than once.
codesearchnet
def _policy_equivalent_to_dtype(policy):
    return (type(policy) == Policy and
            list(policy.get_config().keys()) == ['name'] and
            (policy.name == '_infer' or
             _is_convertible_to_dtype(policy.name)))
Returns True if the Policy is equivalent to a single dtype. A policy is equivalent to a single dtype if the policy's compute and variable dtypes are the same and the policy's type is Policy and not a subclass of Policy (such as PolicyV1). The "_infer" policy is considered equivalent to a single dtype. Args: policy: A Policy. Returns: True, if the policy is equivalent to a single dtype.
github-repos
async def retry_async(func, attempts=5,
                      sleeptime_callback=calculate_sleep_time,
                      retry_exceptions=Exception, args=(), kwargs=None,
                      sleeptime_kwargs=None):
    kwargs = kwargs or {}
    attempt = 1
    while True:
        try:
            return await func(*args, **kwargs)
        except retry_exceptions:
            attempt += 1
            if attempt > attempts:
                log.warning('retry_async: {}: too many retries!'.format(
                    func.__name__))
                raise
            sleeptime_kwargs = sleeptime_kwargs or {}
            sleep_time = sleeptime_callback(attempt, **sleeptime_kwargs)
            log.debug(
                'retry_async: {}: sleeping {} seconds before retry'.format(
                    func.__name__, sleep_time))
            await asyncio.sleep(sleep_time)
Retry ``func``, where ``func`` is an awaitable. Args: func (function): an awaitable function. attempts (int, optional): the number of attempts to make. Default is 5. sleeptime_callback (function, optional): the function to use to determine how long to sleep after each attempt. Defaults to ``calculate_sleep_time``. retry_exceptions (list or exception, optional): the exception(s) to retry on. Defaults to ``Exception``. args (list, optional): the args to pass to ``func``. Defaults to (). kwargs (dict, optional): the kwargs to pass to ``func``. Defaults to {}. sleeptime_kwargs (dict, optional): the kwargs to pass to ``sleeptime_callback``. If None, use {}. Defaults to None. Returns: object: the value from a successful ``func`` call. Raises: Exception: the exception from a failed ``func`` call, either outside of the retry_exceptions, or one of those if we pass the max ``attempts``.
codesearchnet
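A minimal, self-contained sketch of the retry loop behind ``retry_async`` above. The fixed ``sleep_time`` argument here stands in for the ``sleeptime_callback`` machinery, and the logging is dropped, so this illustrates the pattern rather than the real API.

```python
import asyncio

async def retry_async(func, attempts=5, sleep_time=0.0,
                      retry_exceptions=Exception, args=(), kwargs=None):
    # Simplified stand-in: a fixed sleep instead of a sleeptime_callback.
    kwargs = kwargs or {}
    attempt = 1
    while True:
        try:
            return await func(*args, **kwargs)
        except retry_exceptions:
            attempt += 1
            if attempt > attempts:
                raise
            await asyncio.sleep(sleep_time)

calls = {'n': 0}

async def flaky():
    # Fails twice, then succeeds.
    calls['n'] += 1
    if calls['n'] < 3:
        raise ValueError('transient failure')
    return 'ok'

result = asyncio.run(retry_async(flaky, attempts=5))
print(result)      # 'ok' after two retries
print(calls['n'])  # 3
```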
def GetSubkeyByPath(self, key_path):
    pyregf_key = self._pyregf_key.get_sub_key_by_path(key_path)
    if not pyregf_key:
        return None
    key_path = key_paths.JoinKeyPath([self._key_path, key_path])
    return REGFWinRegistryKey(pyregf_key, key_path=key_path)
Retrieves a subkey by path. Args: key_path (str): path of the subkey. Returns: WinRegistryKey: Windows Registry subkey or None if not found.
juraj-google-style
def __init__(self, dataFrame=None, editable=False):
    super(ColumnDtypeModel, self).__init__()
    self.headers = ['column', 'data type']
    self._editable = editable
    self._dataFrame = pandas.DataFrame()
    if dataFrame is not None:
        self.setDataFrame(dataFrame)
The __init__ method. Args: dataFrame (pandas.core.frame.DataFrame, optional): initializes the model with the given DataFrame. If none is given, an empty DataFrame will be set. Defaults to None. editable (bool, optional): apply changes while changing dtype. Defaults to False.
juraj-google-style
def set_tick(self, index, interval):
    name = self.tick_name(index)
    if name is None:
        return pack_error(ControllerSubsystem.SENSOR_GRAPH,
                          Error.INVALID_ARRAY_KEY)
    self.ticks[name] = interval
    return Error.NO_ERROR
Update a tick's interval. Args: index (int): The index of the tick that you want to update. interval (int): The number of seconds between ticks. Setting this to 0 will disable the tick. Returns: int: An error code.
juraj-google-style
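A self-contained harness illustrating the ``set_tick`` pattern above. ``TickManager``, the tick names, and the error constants are hypothetical stand-ins for the real controller API (``tick_name``, ``pack_error``, ``Error``), not the actual implementation.

```python
# Assumed error values for illustration only.
NO_ERROR = 0
INVALID_ARRAY_KEY = 0x8002

class TickManager:
    def __init__(self):
        # Two hypothetical ticks, both initially disabled (interval 0).
        self._names = ['fast_tick', 'user_tick']
        self.ticks = {name: 0 for name in self._names}

    def tick_name(self, index):
        # Map an index to a tick name, or None if out of range.
        if 0 <= index < len(self._names):
            return self._names[index]
        return None

    def set_tick(self, index, interval):
        name = self.tick_name(index)
        if name is None:
            return INVALID_ARRAY_KEY
        self.ticks[name] = interval
        return NO_ERROR

mgr = TickManager()
assert mgr.set_tick(0, 10) == NO_ERROR
assert mgr.ticks['fast_tick'] == 10
assert mgr.set_tick(5, 10) == INVALID_ARRAY_KEY  # index out of range
```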
def save(self, data):
    if self.__nested:
        raise ConfigLoaderException(
            "Cannot save the config if the 'nested' parameter is True!")
    if self.__loaded_config_file is None:
        raise ConfigLoaderException("Load not called yet!")
    try:
        with open(self.__loaded_config_file, 'w') as f:
            f.write(self.__formatter.encode(data))
    except Exception as e:
        raise ConfigLoaderException("Config data is not serializable: %s" % e)
Save the config data. Args: data: any serializable config data. Raises: ConfigLoaderException: if ConfigLoader.load was not called (so there is no config file name), if the data is not serializable, or if the loader is nested.
juraj-google-style
def _handle_location(self, location):
    if not isinstance(location, ElementTree.Element):
        element = self.find(location)
        if element is None:
            raise ValueError("Invalid path!")
    else:
        element = location
    return element
Return an element located at location with flexible args. Args: location: String xpath to use in an Element.find search OR an Element (which is simply returned). Returns: The found Element. Raises: ValueError if the location is a string that results in a find of None.
juraj-google-style
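The flexible-argument pattern in ``_handle_location`` above (accept either an xpath string or an already-found Element) can be sketched with a minimal wrapper; ``XMLWrapper`` and its ``find`` are hypothetical, only the stdlib ``ElementTree`` types are real.

```python
import xml.etree.ElementTree as ElementTree

class XMLWrapper:
    # Hypothetical minimal host class for the pattern.
    def __init__(self, xml_text):
        self.root = ElementTree.fromstring(xml_text)

    def find(self, xpath):
        return self.root.find(xpath)

    def _handle_location(self, location):
        # Accept either an xpath string or an Element.
        if not isinstance(location, ElementTree.Element):
            element = self.find(location)
            if element is None:
                raise ValueError("Invalid path!")
        else:
            element = location
        return element

doc = XMLWrapper('<root><child name="a"/></root>')
by_path = doc._handle_location('child')       # string xpath
by_element = doc._handle_location(by_path)    # Element passes straight through
assert by_path is by_element
assert by_path.get('name') == 'a'
```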
def do_post(self, uri, resource, timeout, custom_headers):
    self.validate_resource_uri(uri)
    task, entity = self._connection.post(uri, resource,
                                         custom_headers=custom_headers)
    if not task:
        return entity
    return self._task_monitor.wait_for_task(task, timeout)
Helps to make post requests. Args: uri: URI of the resource. resource: Resource data to post. timeout: Timeout for the request in seconds. custom_headers: Allows adding custom HTTP headers. Returns: Task object.
juraj-google-style
def write(self, ostream, kmip_version=enums.KMIPVersion.KMIP_1_0):
    tstream = BytearrayStream()
    self.extension_name.write(tstream, kmip_version=kmip_version)
    if self.extension_tag is not None:
        self.extension_tag.write(tstream, kmip_version=kmip_version)
    if self.extension_type is not None:
        self.extension_type.write(tstream, kmip_version=kmip_version)
    self.length = tstream.length()
    super(ExtensionInformation, self).write(
        ostream, kmip_version=kmip_version
    )
    ostream.write(tstream.buffer)
Write the data encoding the ExtensionInformation object to a stream. Args: ostream (Stream): A data stream in which to encode object data, supporting a write method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0.
juraj-google-style
def replace_by_etree(self, root_el, el_idx=0):
    el = self.get_element_by_name(root_el.tag, el_idx)
    el[:] = list(root_el)
    el.attrib = root_el.attrib
Replace element. Select the element that has the same name as ``root_el``, then replace the selected element with ``root_el``. ``root_el`` can be a single element or the root of an element tree. Args: root_el: element. New element that will replace the existing element.
juraj-google-style
def label_sequential_regions(inlist):
    import more_itertools as mit

    df = pd.DataFrame(inlist).set_index(0)
    labeled = {}
    for label in df[1].unique():
        iterable = df[df[1] == label].index.tolist()
        labeled.update({
            '{}{}'.format(label, i + 1): items
            for i, items in enumerate(
                [list(group) for group in mit.consecutive_groups(iterable)])
        })
    return labeled
Input a list of labeled tuples and return a dictionary of sequentially labeled regions. Args: inlist (list): A list of tuples with the first number representing the index and the second the index label. Returns: dict: Dictionary of labeled regions. Examples: >>> label_sequential_regions([(1, 'O'), (2, 'O'), (3, 'O'), (4, 'M'), (5, 'M'), (6, 'I'), (7, 'M'), (8, 'O'), (9, 'O')]) {'O1': [1, 2, 3], 'M1': [4, 5], 'I1': [6], 'M2': [7], 'O2': [8, 9]}
codesearchnet
def aggregate_periods(self, periods):
    try:
        fieldname = self.raster_field.name
    except TypeError:
        raise exceptions.FieldDoesNotExist('Raster field not found')
    arrays = self.arrays(fieldname)
    arr = arrays[0]
    if len(arrays) > 1:
        if getattr(arr, 'ndim', 0) > 2:
            arrays = np.vstack(arrays)
        fill = getattr(arr, 'fill_value', None)
        arr = np.ma.masked_values(arrays, fill, copy=False)
    try:
        means = arr.reshape((periods, -1)).mean(axis=1)
    except ValueError:
        means = np.array([a.mean() for a in np.array_split(arr, periods)])
    obj = self[0]
    setattr(obj, fieldname, means)
    return [obj]
Returns list of ndarrays averaged to a given number of periods. Arguments: periods -- desired number of periods as int
juraj-google-style
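The numerical core of ``aggregate_periods`` above is the reshape-and-mean trick with an ``array_split`` fallback for lengths that are not evenly divisible; extracted as a standalone sketch (the helper name here is illustrative):

```python
import numpy as np

def average_to_periods(arr, periods):
    # Fast path: reshape into (periods, -1) and average each row.
    try:
        return arr.reshape((periods, -1)).mean(axis=1)
    except ValueError:
        # Length not divisible by `periods`: fall back to uneven chunks.
        return np.array([a.mean() for a in np.array_split(arr, periods)])

evenly = average_to_periods(np.arange(12, dtype=float), 3)
print(evenly)    # [1.5 5.5 9.5]
unevenly = average_to_periods(np.arange(10, dtype=float), 3)
print(unevenly)  # chunks of size 4, 3, 3 -> [1.5 5.  8. ]
```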
def log_warning(self, msg):
    if self.__logger:
        self.__logger.warning(msg)
    if self.__raise_exception_on_warning:
        raise RuntimeError(msg)
Log a warning if ``logger`` exists. Args: msg: Warning to log. Warning: Can raise a ``RuntimeError`` if this was asked in the constructor.
codesearchnet
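The ``log_warning`` pattern above (log if a logger exists, optionally escalate to an exception) in a self-contained sketch; ``WarningMixin`` and its single-underscore attributes are illustrative stand-ins for the host class.

```python
import logging

class WarningMixin:
    # Hypothetical host class for the warn-or-raise pattern.
    def __init__(self, logger=None, raise_exception_on_warning=False):
        self._logger = logger
        self._raise_exception_on_warning = raise_exception_on_warning

    def log_warning(self, msg):
        if self._logger:
            self._logger.warning(msg)
        if self._raise_exception_on_warning:
            raise RuntimeError(msg)

quiet = WarningMixin()
quiet.log_warning('ignored')  # no logger configured, nothing raised

strict = WarningMixin(logger=logging.getLogger('demo'),
                      raise_exception_on_warning=True)
try:
    strict.log_warning('boom')
    raised = False
except RuntimeError:
    raised = True
assert raised
```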
def hgnc_genes(self, hgnc_symbol, build='37', search=False):
    LOG.debug("Fetching genes with symbol %s" % hgnc_symbol)
    if search:
        full_query = self.hgnc_collection.find({
            '$or': [
                {'aliases': hgnc_symbol},
                {'hgnc_id': (int(hgnc_symbol)
                             if hgnc_symbol.isdigit() else None)},
            ],
            'build': build,
        })
        if full_query.count() != 0:
            return full_query
        return self.hgnc_collection.find({
            'aliases': {'$regex': hgnc_symbol, '$options': 'i'},
            'build': build,
        })
    return self.hgnc_collection.find({'build': build,
                                      'aliases': hgnc_symbol})
Fetch all hgnc genes that match a hgnc symbol. Checks both hgnc_symbol and aliases. Args: hgnc_symbol(str) build(str): The build in which to search search(bool): if partial searching should be used Returns: pymongo.Cursor: the matching gene documents.
codesearchnet
def get_index(uid, i):
    return _SHARED_SEQUENCES[uid][i]
Get the value from the Sequence `uid` at index `i`. To allow multiple Sequences to be used at the same time, we use `uid` to get a specific one. A single Sequence would cause the validation to overwrite the training Sequence. Args: uid: int, Sequence identifier i: index Returns: The value at index `i`.
github-repos
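The shared-registry pattern behind ``get_index`` above can be shown in a few lines: a module-level dict maps a ``uid`` to a sequence so worker code can address a specific sequence without holding a direct reference. The sequence contents here are made-up placeholders.

```python
# Module-level registry, as in the entry above.
_SHARED_SEQUENCES = {}

def get_index(uid, i):
    # Look up the sequence registered under `uid`, then index into it.
    return _SHARED_SEQUENCES[uid][i]

# Register two hypothetical sequences under distinct uids so that, e.g.,
# a training and a validation sequence do not overwrite each other.
_SHARED_SEQUENCES[0] = ['train_batch_0', 'train_batch_1']
_SHARED_SEQUENCES[1] = ['val_batch_0']

assert get_index(0, 1) == 'train_batch_1'
assert get_index(1, 0) == 'val_batch_0'
```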
def bfs(self, graph, start):
    newstatediag = {}
    queue = []
    visited = []
    queue.append(start)
    while queue:
        state = queue.pop(0)
        visited.append(state.id)
        for key in state.trans:
            if state.trans[key] != []:
                if key not in visited:
                    for nextstate in graph:
                        if graph[nextstate].id == key:
                            queue.append(graph[nextstate])
                            break
    i = 0
    for state in graph:
        if graph[state].id in visited:
            newstatediag[i] = graph[state]
            i = i + 1
    return newstatediag
Performs a BFS operation for eliminating useless loop transitions. Args: graph (PDA): the PDA object. start (PDA state): the PDA initial state. Returns: dict: a cleaned, smaller mapping of the PDA states reachable from the start state.
juraj-google-style
def perspective(img, startpoints, endpoints, interpolation=Image.BICUBIC):
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))
    coeffs = _get_perspective_coeffs(startpoints, endpoints)
    return img.transform(img.size, Image.PERSPECTIVE, coeffs, interpolation)
Perform perspective transform of the given PIL Image. Args: img (PIL Image): Image to be transformed. startpoints: four corner points of the original image that define the perspective transform. endpoints: four corner points that the startpoints are mapped to. interpolation: Default is Image.BICUBIC. Returns: PIL Image: Perspectively transformed Image.
juraj-google-style
def parse_options(cls, options):
    d = {}
    for filename_check, dictionary in cls.filename_checks.items():
        filename_data = getattr(options, filename_check)
        if len(filename_data) != 0:
            parsed_params = {}
            for single_line in filename_data:
                a = [s.strip() for s in single_line.split('=')]
                if a[0] in ['filter_regex', 'filename_regex']:
                    parsed_params[a[0]] = a[1]
            d[filename_check] = parsed_params
    cls.filename_checks.update(d)
    cls.filename_checks = {x: y for x, y in cls.filename_checks.items()
                           if len(y) > 0}
Required by flake8; parses the options. Called after add_options. Args: options (dict): options to be parsed.
codesearchnet
def predict_undirected_graph(self, data):
    graph = Graph()
    for idx_i, i in enumerate(data.columns):
        for idx_j, j in enumerate(data.columns[idx_i + 1:]):
            score = self.predict(data[i].values, data[j].values)
            if abs(score) > 0.001:
                graph.add_edge(i, j, weight=score)
    return graph
Build a skeleton using a pairwise independence criterion. Args: data (pandas.DataFrame): Raw data table Returns: networkx.Graph: Undirected graph representing the skeleton.
juraj-google-style
def __process_node(self, node: yaml.Node,
                   expected_type: Type) -> yaml.Node:
    logger.info('Processing node {} expecting type {}'.format(
        node, expected_type))

    recognized_types, message = self.__recognizer.recognize(
        node, expected_type)
    if len(recognized_types) != 1:
        raise RecognitionError(message)
    recognized_type = recognized_types[0]

    logger.debug('Savorizing node {}'.format(node))
    if recognized_type in self._registered_classes.values():
        node = self.__savorize(node, recognized_type)
    logger.debug('Savorized, now {}'.format(node))

    logger.debug('Recursing into subnodes')
    if is_generic_list(recognized_type):
        if node.tag != 'tag:yaml.org,2002:seq':
            raise RecognitionError('{}{}Expected a {} here'.format(
                node.start_mark, os.linesep, type_to_desc(expected_type)))
        for item in node.value:
            self.__process_node(item, generic_type_args(recognized_type)[0])
    elif is_generic_dict(recognized_type):
        if node.tag != 'tag:yaml.org,2002:map':
            raise RecognitionError('{}{}Expected a {} here'.format(
                node.start_mark, os.linesep, type_to_desc(expected_type)))
        for _, value_node in node.value:
            self.__process_node(value_node,
                                generic_type_args(recognized_type)[1])
    elif recognized_type in self._registered_classes.values():
        if (not issubclass(recognized_type, enum.Enum)
                and not issubclass(recognized_type, str)
                and not issubclass(recognized_type, UserString)):
            for attr_name, type_, _ in class_subobjects(recognized_type):
                cnode = Node(node)
                if cnode.has_attribute(attr_name):
                    subnode = cnode.get_attribute(attr_name)
                    new_subnode = self.__process_node(
                        subnode.yaml_node, type_)
                    cnode.set_attribute(attr_name, new_subnode)
    else:
        logger.debug('Not a generic class or a user-defined class, not'
                     ' recursing')

    node.tag = self.__type_to_tag(recognized_type)
    logger.debug('Finished processing node {}'.format(node))
    return node
Processes a node. This is the main function that implements yatiml's functionality. It figures out how to interpret this node (recognition), then applies syntactic sugar, and finally recurses to the subnodes, if any. Args: node: The node to process. expected_type: The type we expect this node to be. Returns: The transformed node, or a transformed copy.
juraj-google-style
def line_line_collide(line1, line2):
    s, t, success = segment_intersection(
        line1[:, 0], line1[:, 1], line2[:, 0], line2[:, 1]
    )
    if success:
        return (_helpers.in_interval(s, 0.0, 1.0) and
                _helpers.in_interval(t, 0.0, 1.0))
    else:
        disjoint, _ = parallel_lines_parameters(
            line1[:, 0], line1[:, 1], line2[:, 0], line2[:, 1]
        )
        return not disjoint
Determine if two line segments meet. This is a helper for :func:`convex_hull_collide` in the special case that the two convex hulls are actually just line segments. (Even in this case, this is only problematic if both segments are on a single line.) Args: line1 (numpy.ndarray): ``2 x 2`` array of start and end nodes. line2 (numpy.ndarray): ``2 x 2`` array of start and end nodes. Returns: bool: Indicating if the line segments collide.
juraj-google-style
def _convert_to_hashable(data, types=True):
    if data is None:
        hashable = b'NONE'
        prefix = b'NULL'
    elif isinstance(data, six.binary_type):
        hashable = data
        prefix = b'TXT'
    elif isinstance(data, six.text_type):
        hashable = data.encode('utf-8')
        prefix = b'TXT'
    elif isinstance(data, _intlike):
        hashable = _int_to_bytes(data)
        prefix = b'INT'
    elif isinstance(data, float):
        a, b = float(data).as_integer_ratio()
        hashable = _int_to_bytes(a) + b'/' + _int_to_bytes(b)
        prefix = b'FLT'
    else:
        hash_func = _HASHABLE_EXTENSIONS.lookup(data)
        prefix, hashable = hash_func(data)
    if types:
        return prefix, hashable
    else:
        return b'', hashable
r""" Converts `data` into a hashable byte representation if an appropriate hashing function is known. Args: data (object): ordered data with structure types (bool): include type prefixes in the hash Returns: tuple(bytes, bytes): prefix, hashable: a prefix hinting the original data type and the byte representation of `data`. Raises: TypeError : if data has no registered hash methods Example: >>> assert _convert_to_hashable(None) == (b'NULL', b'NONE') >>> assert _convert_to_hashable('string') == (b'TXT', b'string') >>> assert _convert_to_hashable(1) == (b'INT', b'\x01') >>> assert _convert_to_hashable(1.0) == (b'FLT', b'\x01/\x01') >>> assert _convert_to_hashable(_intlike[-1](1)) == (b'INT', b'\x01')
codesearchnet
def register(self, name, option):
    if name in self._options:
        raise ValueError("Option {0} already exists.".format(name))
    if not isinstance(option, opt.Option):
        raise TypeError("Options must be of type Option.")
    self._options[name] = option
Register a new option with the namespace. Args: name (str): The name to register the option under. option (option.Option): The option object to register. Raises: TypeError: If the option is not an option.Option object. ValueError: If the name is already registered.
juraj-google-style
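The registration pattern in ``register`` above (duplicate check first, then type check, then store) in a self-contained sketch; ``Option`` and ``Namespace`` here are stand-ins for ``option.Option`` and its host namespace, not the real classes.

```python
class Option:
    # Stand-in for option.Option.
    def __init__(self, default=None):
        self.default = default

class Namespace:
    def __init__(self):
        self._options = {}

    def register(self, name, option):
        # Reject duplicates before validating the type, as above.
        if name in self._options:
            raise ValueError("Option {0} already exists.".format(name))
        if not isinstance(option, Option):
            raise TypeError("Options must be of type Option.")
        self._options[name] = option

ns = Namespace()
ns.register('timeout', Option(default=30))

try:
    ns.register('timeout', Option())   # duplicate name
    duplicate_ok = True
except ValueError:
    duplicate_ok = False
assert not duplicate_ok

try:
    ns.register('retries', 3)          # wrong type
    type_ok = True
except TypeError:
    type_ok = False
assert not type_ok
```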
def _batch_static_inner_shape(
        old_shape: tensor_shape.TensorShape,
        batch_size: Optional[int]) -> tensor_shape.TensorShape:
    head_dim = tensor_shape.dimension_at_index(old_shape, 0) * batch_size
    return head_dim + old_shape[1:]
Returns a copy of old_shape with axis=0 multiplied by batch_size. Only use if this is the inner_shape of a DynamicRaggedShape.Spec with one or more row partitions. Args: old_shape: the original inner_shape. batch_size: the batch size. Returns: a new shape.
github-repos
def _get_subclass_names(self, classname, namespace, deep_inheritance):
    assert classname is None or isinstance(
        classname, (six.string_types, CIMClassName))
    if isinstance(classname, CIMClassName):
        classname = classname.classname
    try:
        classes = self.classes[namespace]
    except KeyError:
        classes = NocaseDict()
    if classname is None:
        rtn_classnames = [
            cl.classname for cl in six.itervalues(classes)
            if cl.superclass is None]
    else:
        rtn_classnames = [
            cl.classname for cl in six.itervalues(classes)
            if cl.superclass and cl.superclass.lower() == classname.lower()]
    if deep_inheritance:
        subclass_names = []
        if rtn_classnames:
            for cn in rtn_classnames:
                subclass_names.extend(
                    self._get_subclass_names(cn, namespace, deep_inheritance))
        rtn_classnames.extend(subclass_names)
    return rtn_classnames
Get class names that are subclasses of the classname input parameter from the repository. If deep_inheritance is False, get only the classes in the repository for the defined namespace for which this class is a direct superclass (the next level of the hierarchy). If deep_inheritance is True, get all direct and indirect subclasses. Returns: list of strings with the names of all subclasses of `classname`.
codesearchnet
def get_generated_cols(X_original, X_transformed, to_transform):
    original_cols = list(X_original.columns)
    if len(to_transform) > 0:
        for c in to_transform:
            original_cols.remove(c)
    current_cols = list(X_transformed.columns)
    if len(original_cols) > 0:
        for c in original_cols:
            current_cols.remove(c)
    return current_cols
Returns a list of the generated/transformed columns. Arguments: X_original: df the original (input) DataFrame. X_transformed: df the transformed (current) DataFrame. to_transform: [str] a list of columns that were transformed (as in the original DataFrame), commonly self.cols. Output: a list of columns that were transformed (as in the current DataFrame).
juraj-google-style
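A worked example of the ``get_generated_cols`` logic above. ``Frame`` is a tiny stand-in exposing only the ``.columns`` attribute the helper reads (a real pandas DataFrame would behave the same), and the filtering shown is equivalent to the remove-based version for the common case of unique column names.

```python
class Frame:
    # Minimal stand-in with just the .columns attribute.
    def __init__(self, columns):
        self.columns = columns

def get_generated_cols(X_original, X_transformed, to_transform):
    # Columns that existed before and were not transformed...
    original_cols = [c for c in X_original.columns if c not in to_transform]
    # ...are removed from the transformed frame's columns; the rest are new.
    return [c for c in X_transformed.columns if c not in original_cols]

X_original = Frame(['color', 'size'])
# 'color' was one-hot encoded by some transformer; 'size' passed through.
X_transformed = Frame(['size', 'color_r', 'color_g'])
generated = get_generated_cols(X_original, X_transformed, ['color'])
print(generated)  # ['color_r', 'color_g']
```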
def _attach_files(filepaths, email_):
    for filepath in filepaths:
        base = os.path.basename(filepath)
        with open(filepath, 'rb') as file:
            part = MIMEApplication(file.read(), Name=base)
        part['Content-Disposition'] = 'attachment; filename="%s"' % base
        email_.attach(part)
Take a list of filepaths and attach the files to a MIMEMultipart. Args: filepaths (list(str)): A list of filepaths. email_ (email.MIMEMultipart): A MIMEMultipart email_.
codesearchnet
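The ``_attach_files`` helper above can be exercised end to end with only the standard library; the temporary CSV file and its name are made up for the demonstration.

```python
import os
import tempfile
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart

def _attach_files(filepaths, email_):
    for filepath in filepaths:
        base = os.path.basename(filepath)
        with open(filepath, 'rb') as file:
            part = MIMEApplication(file.read(), Name=base)
        part['Content-Disposition'] = 'attachment; filename="%s"' % base
        email_.attach(part)

with tempfile.TemporaryDirectory() as tmpdir:
    # Create a throwaway file to attach.
    path = os.path.join(tmpdir, 'report.csv')
    with open(path, 'wb') as f:
        f.write(b'a,b\n1,2\n')
    msg = MIMEMultipart()
    _attach_files([path], msg)

attachments = msg.get_payload()
assert len(attachments) == 1
assert attachments[0].get_filename() == 'report.csv'
```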
def _ContainsExactlyElementsIn(self, expected, warn_elements_in=False): if not expected: if self._actual: self._FailWithProposition('is empty') return _InOrder() missing = _DuplicateCounter() extra = _DuplicateCounter() actual_iter = iter(self._actual) expected_iter = iter(expected) warning = '' if warn_elements_in: warning = ' Passing a single iterable to ContainsExactly(*expected) is often not the correct thing to do. Did you mean to call ContainsExactlyElementsIn(Iterable) instead?' while True: try: actual_element = next(actual_iter) except StopIteration: break try: expected_element = next(expected_iter) except StopIteration: extra.Increment(actual_element) break if actual_element != expected_element: missing.Increment(expected_element) for m in expected_iter: missing.Increment(m) if actual_element in missing: missing.Decrement(actual_element) else: extra.Increment(actual_element) for e in actual_iter: if e in missing: missing.Decrement(e) else: extra.Increment(e) if missing: if extra: self._FailWithProposition('contains exactly <{0!r}>. It is missing <{1}> and has unexpected items <{2}>'.format(expected, missing, extra), suffix=warning) else: self._FailWithBadResults('contains exactly', expected, 'is missing', missing, suffix=warning) if extra: self._FailWithBadResults('contains exactly', expected, 'has unexpected items', extra, suffix=warning) return _NotInOrder(self._actual, 'contains exactly these elements in order', expected) for e in actual_iter: extra.Increment(e) if extra: self._FailWithBadResults('contains exactly', expected, 'has unexpected items', extra, suffix=warning) for m in expected_iter: missing.Increment(m) if missing: self._FailWithBadResults('contains exactly', expected, 'is missing', missing, suffix=warning) return _InOrder()
Determines if the subject contains exactly the expected elements. Helper function for ContainsExactly() and ContainsExactlyElementsIn(). Args: expected: iterable of objects that should be contained in the subject. warn_elements_in: boolean, default False. If True, and the assertion fails, and the developer invoked ContainsExactly() with a single iterable, warn that this usage is error-prone. Returns: If the subject does contain exactly the expected elements, returns an _Ordered predicate on which .InOrder() can be subsequently called. Raises: TruthAssertionError: the subject is missing any of the expected elements, or the subject contains any element not in the expected elements.
github-repos
def problem_id(self, value):
    if value == self._defaults['problemId'] and 'problemId' in self._values:
        del self._values['problemId']
    else:
        self._values['problemId'] = value
The problem_id property. Args: value (string): the property value.
juraj-google-style
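The defaults-aware setter above implements a common pattern: writing the default value back removes the explicit entry, so only non-default values are kept (e.g. for serialization). A self-contained sketch with a hypothetical ``Report`` host class:

```python
class Report:
    # Hypothetical host class for the defaults-aware property pattern.
    def __init__(self):
        self._defaults = {'problemId': None}
        self._values = {}

    @property
    def problem_id(self):
        return self._values.get('problemId', self._defaults['problemId'])

    @problem_id.setter
    def problem_id(self, value):
        # Setting the default removes the explicit entry; anything else
        # is stored explicitly.
        if (value == self._defaults['problemId']
                and 'problemId' in self._values):
            del self._values['problemId']
        else:
            self._values['problemId'] = value

r = Report()
r.problem_id = 'crash-42'
assert r._values == {'problemId': 'crash-42'}
r.problem_id = None  # back to the default: the explicit entry is dropped
assert r._values == {}
```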
def get_inspection_units(logdir='', event_file='', tag=''):
    if logdir:
        subdirs = io_wrapper.GetLogdirSubdirectories(logdir)
        inspection_units = []
        for subdir in subdirs:
            generator = itertools.chain(*[
                generator_from_event_file(os.path.join(subdir, f))
                for f in tf.io.gfile.listdir(subdir)
                if io_wrapper.IsTensorFlowEventsFile(os.path.join(subdir, f))
            ])
            inspection_units.append(
                InspectionUnit(
                    name=subdir,
                    generator=generator,
                    field_to_obs=get_field_to_observations_map(generator,
                                                               tag)))
        if inspection_units:
            print('Found event files in:\n{}\n'.format(
                '\n'.join([u.name for u in inspection_units])))
        elif io_wrapper.IsTensorFlowEventsFile(logdir):
            print('It seems that {} may be an event file instead of a '
                  'logdir. If this is the case, use --event_file instead of '
                  '--logdir to pass it in.'.format(logdir))
        else:
            print('No event files found within logdir {}'.format(logdir))
        return inspection_units
    elif event_file:
        generator = generator_from_event_file(event_file)
        return [
            InspectionUnit(
                name=event_file,
                generator=generator,
                field_to_obs=get_field_to_observations_map(generator, tag))
        ]
    return []
Returns a list of InspectionUnit objects given either logdir or event_file. If logdir is given, the number of InspectionUnits should equal the number of directories or subdirectories that contain event files. If event_file is given, the number of InspectionUnits should be 1. Args: logdir: A log directory that contains event files. event_file: Or, a particular event file path. tag: An optional tag name to query for. Returns: A list of InspectionUnit objects.
codesearchnet
def uniform_binning_correction(x, n_bits=8):
    n_bins = 2**n_bits
    batch_size, height, width, n_channels = common_layers.shape_list(x)
    hwc = float(height * width * n_channels)

    x = x + tf.random_uniform(
        shape=(batch_size, height, width, n_channels),
        minval=0.0, maxval=1.0 / n_bins)
    objective = -np.log(n_bins) * hwc * tf.ones(batch_size)
    return x, objective
Replaces x^i with q^i(x) = U(x, x + 1.0 / 256.0). Args: x: 4-D Tensor of shape (NHWC) n_bits: optional. Returns: x: x ~ U(x, x + 1.0 / 256) objective: Equivalent to -q(x)*log(q(x)).
juraj-google-style
def _ParseSourcePathOption(self, options):
    self._source_path = self.ParseStringOption(options, self._SOURCE_OPTION)
    if not self._source_path:
        raise errors.BadConfigOption('Missing source path.')
    self._source_path = os.path.abspath(self._source_path)
Parses the source path option. Args: options (argparse.Namespace): command line arguments. Raises: BadConfigOption: if the options are invalid.
codesearchnet
def provide(self, cls):
    support.verify_class_type(cls, 'cls')
    if not self._is_injectable_fn(cls):
        provide_loc = locations.get_back_frame_loc()
        raise errors.NonExplicitlyBoundClassError(provide_loc, cls)
    try:
        return self._obj_provider.provide_class(
            cls, self._injection_context_factory.new(cls.__init__),
            direct_init_pargs=[], direct_init_kwargs={})
    except errors.Error as e:
        if self._use_short_stack_traces:
            raise e
        else:
            raise
Provides an instance of the given class. Args: cls: a class (not an instance) Returns: an instance of cls Raises: Error: an instance of cls is not providable
codesearchnet
def transform_function(self, fn, user_context):
    cache_subkey = self.get_caching_key(user_context)
    if self._cache.has(fn, cache_subkey):
        factory = self._cached_factory(fn, cache_subkey)
    else:
        with self._cache_lock:
            if self._cache.has(fn, cache_subkey):
                factory = self._cached_factory(fn, cache_subkey)
            else:
                logging.log(1, '%s is not cached for subkey %s', fn,
                            cache_subkey)
                nodes, ctx = super(PyToPy, self).transform_function(
                    fn, user_context)
                if isinstance(nodes, gast.Lambda):
                    nodes = gast.Assign(
                        targets=[
                            gast.Name(
                                ctx.info.name,
                                ctx=gast.Store(),
                                annotation=None,
                                type_comment=None)
                        ],
                        value=nodes)
                else:
                    nodes.name = ctx.info.name
                if logging.has_verbosity(2):
                    logging.log(2, 'Transformed %s:\n\n%s\n', fn,
                                parser.unparse(nodes))
                factory = _PythonFnFactory(
                    ctx.info.name, fn.__code__.co_freevars,
                    self.get_extra_locals())
                factory.create(
                    nodes, ctx.namer,
                    future_features=ctx.info.future_features)
                self._cache[fn][cache_subkey] = factory
    transformed_fn = factory.instantiate(
        globals_=fn.__globals__,
        closure=fn.__closure__ or (),
        defaults=fn.__defaults__,
        kwdefaults=getattr(fn, '__kwdefaults__', None))
    return (transformed_fn, factory.module, factory.source_map)
Transforms a function. See GenericTranspiler.transform_function. This overload wraps the parent's `transform_function`, adding caching and facilities to instantiate the output as a Python object. It also adds facilities to make new symbols available to the generated Python code, visible as local variables - see `get_extra_locals`. Args: fn: A function or lambda. user_context: An opaque object (may be None) that is forwarded to transform_ast, through the ctx.user attribute. Returns: A tuple: * A function or lambda with the same signature and closure as `fn` * The temporary module into which the transformed function was loaded * The source map as a Dict[origin_info.LineLocation, origin_info.OriginInfo]
github-repos
def write(self, output_stream, kmip_version=enums.KMIPVersion.KMIP_1_0): local_stream = BytearrayStream() if self._wrapping_method: self._wrapping_method.write( local_stream, kmip_version=kmip_version ) else: raise ValueError( "Invalid struct missing the wrapping method attribute." ) if self._encryption_key_information: self._encryption_key_information.write( local_stream, kmip_version=kmip_version ) if self._mac_signature_key_information: self._mac_signature_key_information.write( local_stream, kmip_version=kmip_version ) if self._mac_signature: self._mac_signature.write( local_stream, kmip_version=kmip_version ) if self._iv_counter_nonce: self._iv_counter_nonce.write( local_stream, kmip_version=kmip_version ) if self._encoding_option: self._encoding_option.write( local_stream, kmip_version=kmip_version ) self.length = local_stream.length() super(KeyWrappingData, self).write( output_stream, kmip_version=kmip_version ) output_stream.write(local_stream.buffer)
Write the data encoding the KeyWrappingData struct to a stream. Args: output_stream (stream): A data stream in which to encode object data, supporting a write method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0.
juraj-google-style
def _find_penultimate_layer(model, layer_idx, penultimate_layer_idx): if (penultimate_layer_idx is None): for (idx, layer) in utils.reverse_enumerate(model.layers[:(layer_idx - 1)]): if isinstance(layer, Wrapper): layer = layer.layer if isinstance(layer, (_Conv, _Pooling1D, _Pooling2D, _Pooling3D)): penultimate_layer_idx = idx break if (penultimate_layer_idx is None): raise ValueError('Unable to determine penultimate `Conv` or `Pooling` layer for layer_idx: {}'.format(layer_idx)) if (layer_idx < 0): layer_idx = (len(model.layers) + layer_idx) if (penultimate_layer_idx > layer_idx): raise ValueError('`penultimate_layer_idx` needs to be before `layer_idx`') return model.layers[penultimate_layer_idx]
Searches for the nearest penultimate `Conv` or `Pooling` layer. Args: model: The `keras.models.Model` instance. layer_idx: The layer index within `model.layers`. penultimate_layer_idx: The pre-layer to `layer_idx`. If set to None, the nearest penultimate `Conv` or `Pooling` layer is used. Returns: The penultimate layer.
codesearchnet
def Close(self, abort=False): if abort: self._queue.cancel_join_thread() self._queue.close() self._queue.join_thread()
Closes the queue. This needs to be called from any process or thread putting items onto the queue. Args: abort (Optional[bool]): True if the close was issued on abort.
juraj-google-style
def poll(self, transaction_hash: bytes): if (len(transaction_hash) != 32): raise ValueError('transaction_hash must be a 32 byte hash') transaction_hash = encode_hex(transaction_hash) last_result = None while True: transaction = self.web3.eth.getTransaction(transaction_hash) if ((transaction is None) and (last_result is not None)): raise Exception('invalid transaction, check gas price') if (transaction and (transaction['blockNumber'] is not None)): last_result = transaction transaction_block = transaction['blockNumber'] confirmation_block = (transaction_block + self.default_block_num_confirmations) block_number = self.block_number() if (block_number >= confirmation_block): return transaction gevent.sleep(1.0)
Wait until the `transaction_hash` is applied or rejected. Args: transaction_hash: Transaction hash that we are waiting for.
codesearchnet
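The confirmation arithmetic inside `poll` is simple enough to check in isolation; a minimal sketch (the helper name is mine, not part of the client):

```python
def is_confirmed(tx_block, current_block, num_confirmations):
    # Mirrors poll(): a transaction mined in block B is treated as
    # final once the chain head reaches B + num_confirmations.
    return current_block >= tx_block + num_confirmations

print(is_confirmed(100, 105, 5))  # True
print(is_confirmed(100, 104, 5))  # False
```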
def relevant_connections(n, _from, to): cm = np.zeros((n, n)) if ((not _from) or (not to)): return cm cm[np.ix_(_from, to)] = 1 return cm
Construct a connectivity matrix. Args: n (int): The dimensions of the matrix _from (tuple[int]): Nodes with outgoing connections to ``to`` to (tuple[int]): Nodes with incoming connections from ``_from`` Returns: np.ndarray: An |n x n| connectivity matrix with the |i,jth| entry is ``1`` if |i| is in ``_from`` and |j| is in ``to``, and 0 otherwise.
codesearchnet
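Because the body depends only on NumPy, the `np.ix_` open-mesh indexing it relies on can be exercised directly:

```python
import numpy as np

def relevant_connections(n, _from, to):
    # Writes 1 at every (i, j) with i in `_from` and j in `to`, else 0;
    # np.ix_ builds the open mesh covering exactly those entries.
    cm = np.zeros((n, n))
    if not _from or not to:
        return cm
    cm[np.ix_(_from, to)] = 1
    return cm

cm = relevant_connections(3, (0, 1), (2,))
print(cm)
```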
def _export_work_errors(self, work, output_file): errors = set() for v in itervalues(work.work): if (v['is_completed'] and (v['error'] is not None)): errors.add(v['error']) with open(output_file, 'w') as f: for e in sorted(errors): f.write(e) f.write('\n')
Saves errors for given work pieces into file. Args: work: instance of either AttackWorkPieces or DefenseWorkPieces output_file: name of the output file
codesearchnet
async def do_run_task(context, run_cancellable, to_cancellable_process): status = 0 try: if context.config['verify_chain_of_trust']: chain = ChainOfTrust(context, context.config['cot_job_type']) (await run_cancellable(verify_chain_of_trust(chain))) status = (await run_task(context, to_cancellable_process)) generate_cot(context) except asyncio.CancelledError: log.info('CoT cancelled asynchronously') raise WorkerShutdownDuringTask except ScriptWorkerException as e: status = worst_level(status, e.exit_code) log.error('Hit ScriptWorkerException: {}'.format(e)) except Exception as e: log.exception('SCRIPTWORKER_UNEXPECTED_EXCEPTION task {}'.format(e)) raise return status
Run the task logic.

Returns the integer status of the task.

Args:
    context (scriptworker.context.Context): the scriptworker context.
    run_cancellable (typing.Callable): wraps future such that it'll cancel upon worker shutdown
    to_cancellable_process (typing.Callable): wraps ``TaskProcess`` such that it will stop if the worker is shutting down

Raises:
    Exception: on unexpected exception.

Returns:
    int: exit status
codesearchnet
def setNetworkIDTimeout(self, iNwkIDTimeOut):
    print('%s call setNetworkIDTimeout' % self.port)
    print(iNwkIDTimeOut)
    iNwkIDTimeOut //= 1000
    try:
        cmd = 'networkidtimeout %s' % str(iNwkIDTimeOut)
        print(cmd)
        return self.__sendCommand(cmd)[0] == 'Done'
    except Exception as e:
        ModuleHelper.WriteIntoDebugLogger("setNetworkIDTimeout() Error: " + str(e))
Set the network ID timeout for a Thread device.

Args:
    iNwkIDTimeOut: a given NETWORK_ID_TIMEOUT

Returns:
    True: successfully set NETWORK_ID_TIMEOUT
    False: failed to set NETWORK_ID_TIMEOUT
juraj-google-style
def present(name, save=False, **kwargs): ret = {'name': name, 'result': True, 'changes': {}, 'comment': []} current_beacons = __salt__['beacons.list'](return_yaml=False, **kwargs) beacon_data = [{k: v} for (k, v) in six.iteritems(kwargs)] if (name in current_beacons): if (beacon_data == current_beacons[name]): ret['comment'].append('Job {0} in correct state'.format(name)) elif (('test' in __opts__) and __opts__['test']): kwargs['test'] = True result = __salt__['beacons.modify'](name, beacon_data, **kwargs) ret['comment'].append(result['comment']) ret['changes'] = result['changes'] else: result = __salt__['beacons.modify'](name, beacon_data, **kwargs) if (not result['result']): ret['result'] = result['result'] ret['comment'] = result['comment'] return ret elif ('changes' in result): ret['comment'].append('Modifying {0} in beacons'.format(name)) ret['changes'] = result['changes'] else: ret['comment'].append(result['comment']) elif (('test' in __opts__) and __opts__['test']): kwargs['test'] = True result = __salt__['beacons.add'](name, beacon_data, **kwargs) ret['comment'].append(result['comment']) else: result = __salt__['beacons.add'](name, beacon_data, **kwargs) if (not result['result']): ret['result'] = result['result'] ret['comment'] = result['comment'] return ret else: ret['comment'].append('Adding {0} to beacons'.format(name)) if save: __salt__['beacons.save'](**kwargs) ret['comment'].append('Beacon {0} saved'.format(name)) ret['comment'] = '\n'.join(ret['comment']) return ret
Ensure beacon is configured with the included beacon data. Args: name (str): The name of the beacon ensure is configured. save (bool): ``True`` updates the beacons.conf. Default is ``False``. Returns: dict: A dictionary of information about the results of the state Example: .. code-block:: yaml ps_beacon: beacon.present: - name: ps - save: True - enable: False - services: salt-master: running apache2: stopped
codesearchnet
def __ge__(self, other): if other.__class__ is not self.__class__: return NotImplemented return not self < other
Test if self is greater than or equal an object of the same class. Args: other: The object to compare against. Returns: True if self is greater than or equal to other; else False. Raises: TypeError: Raised if the objects are not of the same class.
juraj-google-style
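The `not self < other` derivation above is only valid when `__lt__` defines a strict total order on the class; a minimal sketch with a hypothetical `Version` class:

```python
class Version:
    # Minimal total ordering built from __lt__ plus the
    # `not self < other` trick used by __ge__ above.
    def __init__(self, n):
        self.n = n

    def __lt__(self, other):
        if other.__class__ is not self.__class__:
            return NotImplemented
        return self.n < other.n

    def __ge__(self, other):
        if other.__class__ is not self.__class__:
            return NotImplemented
        return not self < other

print(Version(2) >= Version(1))  # True
```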
def get_next_of_type(self, processor_type): with self._condition: if (processor_type not in self): self.wait_for_registration(processor_type) try: processor = self[processor_type].next_processor() except NoProcessorVacancyError: processor = self.wait_for_vacancy(processor_type) processor.inc_occupancy() return processor
Get the next available processor of a particular type and increment its occupancy counter. Args: processor_type (ProcessorType): The processor type associated with a zmq identity. Returns: (Processor): Information about the transaction processor
codesearchnet
def prop(pode, prop): form = pode[0][0] if prop.startswith(form): prop = prop[len(form):] if prop[0] == ':': prop = prop[1:] return pode[1]['props'].get(prop)
Return the valu of a given property on the node.

Args:
    pode (tuple): A packed node.
    prop (str): Property to retrieve.

Notes:
    The prop argument may be the full property name (foo:bar:baz), relative property name (:baz), or the unadorned property name (baz).

Returns:
    The property value if present, None otherwise.
juraj-google-style
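`prop` is a pure function of the packed-node tuple, so it can be exercised standalone; the node shape below — `((form, valu), {'props': {...}})` — is inferred from the accessors in the body, not taken from the library's docs:

```python
def prop(pode, prop):
    # Strip the form prefix ('foo:bar') and any leading ':' so the
    # full, relative, and unadorned spellings all resolve to 'baz'.
    form = pode[0][0]
    if prop.startswith(form):
        prop = prop[len(form):]
    if prop[0] == ':':
        prop = prop[1:]
    return pode[1]['props'].get(prop)

pode = (('foo:bar', 'node0'), {'props': {'baz': 10}})
print(prop(pode, 'foo:bar:baz'), prop(pode, ':baz'), prop(pode, 'baz'))  # 10 10 10
```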
def initialize_write(self): raise NotImplementedError
Initializes the sink before writing begins. Invoked before any data is written to the sink. Please see documentation in ``iobase.Sink`` for an example. Returns: An object that contains any sink specific state generated by initialization. This object will be passed to open_writer() and finalize_write() methods.
github-repos
def create_graph_from_data(self, data, **kwargs): self.arguments['{SCORE}'] = self.scores[self.score] self.arguments['{CUTOFF}'] = str(self.cutoff) self.arguments['{VARSEL}'] = str(self.variablesel).upper() self.arguments['{SELMETHOD}'] = self.var_selection[self.selmethod] self.arguments['{PRUNING}'] = str(self.pruning).upper() self.arguments['{PRUNMETHOD}'] = self.var_selection[self.prunmethod] self.arguments['{NJOBS}'] = str(self.nb_jobs) self.arguments['{VERBOSE}'] = str(self.verbose).upper() results = self._run_cam(data, verbose=self.verbose) return nx.relabel_nodes(nx.DiGraph(results), {idx: i for (idx, i) in enumerate(data.columns)})
Apply causal discovery on observational data using CAM. Args: data (pandas.DataFrame): DataFrame containing the data Returns: networkx.DiGraph: Solution given by the CAM algorithm.
codesearchnet
def set_db_row(db, start, size, _bytearray): client.db_write(db, start, size, _bytearray)
Here we replace a piece of data in a db block with new data

Args:
    db (int): The db to use
    start (int): The start within the db
    size (int): The size of the data in bytes
    _bytearray (enumerable): The data to put in the db
juraj-google-style
def BuildTypeDescriptor(self, value_cls): result = ApiRDFValueDescriptor( name=value_cls.__name__, parents=[klass.__name__ for klass in value_cls.__mro__], doc=value_cls.__doc__ or "", kind="PRIMITIVE") result.default = self.BuildDefaultValue(value_cls) return result
Renders metadata of a given value class. Args: value_cls: Metadata of this class will be rendered. This class has to be (or to be a subclass of) a self.value_class (i.e. a class that this renderer is capable of rendering). Returns: Dictionary with class metadata.
juraj-google-style
def GetEntries(self, parser_mediator, match=None, **unused_kwargs): shortcuts = match.get('UserShortcuts', {}) for search_text, data in iter(shortcuts.items()): datetime_value = data.get('LAST_USED', None) if not datetime_value: continue display_name = data.get('DISPLAY_NAME', '<DISPLAY_NAME>') path = data.get('PATH', '<PATH>') event_data = plist_event.PlistTimeEventData() event_data.desc = ( 'Spotlight term searched "{0:s}" associate to {1:s} ({2:s})').format( search_text, display_name, path) event_data.key = search_text event_data.root = '/UserShortcuts' event = time_events.PythonDatetimeEvent( datetime_value, definitions.TIME_DESCRIPTION_WRITTEN) parser_mediator.ProduceEventWithEventData(event, event_data)
Extracts relevant Spotlight entries. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. match (Optional[dict[str: object]]): keys extracted from PLIST_KEYS.
juraj-google-style
def switch(data, pred, dtype=None, name=None): with ops.name_scope(name, 'Switch', [data, pred]) as name: data = ops.internal_convert_to_tensor_or_composite(data, dtype=dtype, name='data', as_ref=True) pred = ops.convert_to_tensor(pred, name='pred') if isinstance(data, tensor_lib.Tensor): return gen_control_flow_ops.switch(data, pred, name=name) else: if not isinstance(data, composite_tensor.CompositeTensor): raise TypeError(f"'data' must be a Tensor or CompositeTensor. Received: {type(data)}.") tensors = nest.flatten(data, expand_composites=True) mapped = [gen_control_flow_ops.switch(tensor, pred) for tensor in tensors] mapped_f, mapped_t = zip(*mapped) return (nest.pack_sequence_as(data, mapped_f, expand_composites=True), nest.pack_sequence_as(data, mapped_t, expand_composites=True))
Forwards `data` to an output determined by `pred`. If `pred` is false, the `data` input is forwarded to the first output. Otherwise, the data goes to the second output. This op handles `Tensor`s and `IndexedSlices`. Args: data: The tensor to be forwarded to the appropriate output. pred: A scalar that specifies which output port will receive data. dtype: Optional element type for the returned tensor. If missing, the type is inferred from the type of `value`. name: A name for this operation (optional). Returns: `(output_false, output_true)`: If `pred` is true, data will be forwarded to `output_true`, otherwise it goes to `output_false`.
github-repos
def __init__(self, latent_size, hidden_size): super(EncoderStatic, self).__init__() self.latent_size = latent_size self.hidden_size = hidden_size self.bilstm = tf.keras.layers.Bidirectional( tf.keras.layers.LSTM(hidden_size), merge_mode="sum") self.output_layer = tf.keras.layers.Dense(2*latent_size)
Constructs an encoder for `f`. Args: latent_size: An integer corresponding to the dimensionality of the distribution. hidden_size: Dimensionality of the LSTM, RNN, and affine function parameters.
juraj-google-style
def show_rules(cls, *names, attr=None): from qnet.printing import srepr try: if attr is None: attr = cls._rules_attr() rules = getattr(cls, attr) except TypeError: rules = {} for (name, rule) in rules.items(): if len(names) > 0 and name not in names: continue pat, repl = rule print(name) print(" PATTERN:") print(textwrap.indent( textwrap.dedent(srepr(pat, indented=True)), prefix=" "*8)) print(" REPLACEMENT:") print(textwrap.indent( textwrap.dedent(inspect.getsource(repl).rstrip()), prefix=" "*8))
Print algebraic rules used by :class:`create`

Print a summary of the algebraic rules with the given names, or all rules if no names are given.

Args:
    names (str): Names of rules to show
    attr (None or str): Name of the class attribute from which to get the rules. Cf. :meth:`add_rule`.

Raises:
    AttributeError: If invalid `attr`
juraj-google-style
class Wav2Vec2DecoderWithLMOutput(ModelOutput): text: Union[List[List[str]], List[str], str] logit_score: Union[List[List[float]], List[float], float] = None lm_score: Union[List[List[float]], List[float], float] = None word_offsets: Union[List[List[ListOfDict]], List[ListOfDict], ListOfDict] = None
Output type of [`Wav2Vec2DecoderWithLM`], with transcription. Args: text (list of `str` or `str`): Decoded logits in text from. Usually the speech transcription. logit_score (list of `float` or `float`): Total logit score of the beams associated with produced text. lm_score (list of `float`): Fused lm_score of the beams associated with produced text. word_offsets (list of `List[Dict[str, Union[int, str]]]` or `List[Dict[str, Union[int, str]]]`): Offsets of the decoded words. In combination with sampling rate and model downsampling rate word offsets can be used to compute time stamps for each word.
github-repos
def start_tpot(automated_run, session, path): module = functions.import_string_code_as_module(automated_run.source) extraction = session.query(models.Extraction).first() (X, y) = extraction.return_train_dataset() tpot_learner = module.tpot_learner tpot_learner.fit(X, y) temp_filename = os.path.join(path, 'tpot-temp-export-{}'.format(os.getpid())) tpot_learner.export(temp_filename) with open(temp_filename) as f: base_learner_source = f.read() base_learner_source = (constants.tpot_learner_docstring + base_learner_source) try: os.remove(temp_filename) except OSError: pass blo = models.BaseLearnerOrigin(source=base_learner_source, name='TPOT Learner', meta_feature_generator='predict') session.add(blo) session.commit()
Starts a TPOT automated run that exports directly to base learner setup Args: automated_run (xcessiv.models.AutomatedRun): Automated run object session: Valid SQLAlchemy session path (str, unicode): Path to project folder
codesearchnet
def set_style(self, style): if style is not None: try: self.style.update(style) except ValueError: for s in style.split(';'): k, v = s.split(':', 1) self.style[k.strip()] = v.strip()
Allows to set style properties for the widget. Args: style (str or dict): The style property dictionary or json string.
juraj-google-style
def HasBalance(self, assetId): for (key, fixed8) in self.Balances.items(): if (key == assetId): return True return False
Flag indicating if the asset has a balance.

Args:
    assetId (UInt256): The asset ID to look up.

Returns:
    bool: True if a balance is present. False otherwise.
codesearchnet
def iter_variants_by_names(self, names): for name in names: for result in self.get_variant_by_name(name): yield result
Iterates over the genotypes for variants using a list of names. Args: names (list): The list of names for variant extraction.
juraj-google-style
def authorize(self, scheme, **params): if scheme not in self.schemes: return False for field, value in iteritems(params): setattr(self, field, value) if field in self.schemes[scheme][u'params'].keys() and value: self.schemes[scheme][u'params'][field] = value return True
Store credentials required to satisfy a given auth scheme.

Args:
    scheme (str): The name of the Authentication scheme.
    **params: parameters for the specified scheme.

Returns:
    True if parameters are set successfully (note that this doesn't mean the credentials are valid); False if the scheme specified is not supported.
juraj-google-style
def separate_words(text, acronyms=None): words, _case, _sep = case_parse.parse_case(text, acronyms, preserve_case=True) return ' '.join(words)
Return text in "separate words" style.

Args:
    text: input string to convert case
    acronyms: a list of acronyms to detect

>>> separate_words("HELLO_WORLD")
'HELLO WORLD'
>>> separate_words("helloHTMLWorld", ["HTML"])
'hello HTML World'
juraj-google-style
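`case_parse.parse_case` is not shown here, so the sketch below substitutes a simple regex-based splitter; it reproduces the docstring examples but is an approximation, not the library's actual parser:

```python
import re

def separate_words_sketch(text):
    # Split on explicit separators, then on case boundaries: runs of
    # uppercase (acronyms), Capitalized words, lowercase runs, digits.
    words = []
    for part in re.split(r'[_\-\s]+', text):
        words.extend(re.findall(r'[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+', part))
    return ' '.join(words)

print(separate_words_sketch('HELLO_WORLD'))     # HELLO WORLD
print(separate_words_sketch('helloHTMLWorld'))  # hello HTML World
```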
async def _sync_all_conversations(client): conv_states = [] sync_timestamp = None request = hangouts_pb2.SyncRecentConversationsRequest(request_header=client.get_request_header(), max_conversations=CONVERSATIONS_PER_REQUEST, max_events_per_conversation=1, sync_filter=[hangouts_pb2.SYNC_FILTER_INBOX, hangouts_pb2.SYNC_FILTER_ARCHIVED]) for _ in range(MAX_CONVERSATION_PAGES): logger.info('Requesting conversations page %s', request.last_event_timestamp) response = (await client.sync_recent_conversations(request)) conv_states = (list(response.conversation_state) + conv_states) sync_timestamp = parsers.from_timestamp(response.response_header.current_server_time) if (response.continuation_end_timestamp == 0): logger.info('Reached final conversations page') break else: request.last_event_timestamp = response.continuation_end_timestamp else: logger.warning('Exceeded maximum number of conversation pages') logger.info('Synced %s total conversations', len(conv_states)) return (conv_states, sync_timestamp)
Sync all conversations by making paginated requests. Conversations are ordered by ascending sort timestamp. Args: client (Client): Connected client. Raises: NetworkError: If the requests fail. Returns: tuple of list of ``ConversationState`` messages and sync timestamp
codesearchnet
def from_backbone_config(cls, backbone_config: PretrainedConfig, **kwargs): return cls(backbone_config=backbone_config, **kwargs)
Instantiate a [`DetrConfig`] (or a derived class) from a pre-trained backbone model configuration. Args: backbone_config ([`PretrainedConfig`]): The backbone configuration. Returns: [`DetrConfig`]: An instance of a configuration object
github-repos
def gen_subject_info_tree(subject_info_pyxb, authn_subj, include_duplicates=False): class State(): 'self.' pass state = State() state.subject_info_pyxb = subject_info_pyxb state.include_duplicates = include_duplicates state.visited_set = set() state.tree = SubjectInfoNode('Root', TYPE_NODE_TAG) _add_subject(state, state.tree, authn_subj) symbolic_node = state.tree.add_child('Symbolic', TYPE_NODE_TAG) _add_subject(state, symbolic_node, d1_common.const.SUBJECT_AUTHENTICATED) _trim_tree(state) return state.tree
Convert the flat, self referential lists in the SubjectInfo to a tree structure. Args: subject_info_pyxb: SubjectInfo PyXB object authn_subj: str The authenticated subject that becomes the root subject in the tree of subjects built from the SubjectInfo. Only subjects that are authenticated by a direct or indirect connection to this subject are included in the tree. include_duplicates: Include branches of the tree that contain subjects that have already been included via other branches. If the tree is intended for rendering, including the duplicates will provide a more complete view of the SubjectInfo. Returns: SubjectInfoNode : Tree of nodes holding information about subjects that are directly or indirectly connected to the authenticated subject in the root.
codesearchnet
def _apply_conv(self, inputs, w): outputs = tf.nn.convolution(inputs, w, strides=self._stride, padding=self._conv_op_padding, dilation_rate=self._rate, data_format=self._data_format) return outputs
Apply a convolution operation on `inputs` using variable `w`. Args: inputs: A Tensor of shape `data_format` and of type `tf.float16`, `tf.bfloat16` or `tf.float32`. w: A weight matrix of the same type as `inputs`. Returns: outputs: The result of the convolution operation on `inputs`.
codesearchnet
def __init__(self, embedding_shape, initializer, weight_collections=None, trainable=True, name=None, **kwargs): super(_EmbeddingColumnLayer, self).__init__(trainable=trainable, name=name, **kwargs) self._embedding_shape = embedding_shape self._initializer = initializer self._weight_collections = weight_collections
Constructor. Args: embedding_shape: Shape of the embedding variable used for lookup. initializer: A variable initializer function to be used in embedding variable initialization. weight_collections: A list of collection names to which the Variable will be added. Note that, variables will also be added to collections `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`. trainable: If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`). name: Name of the layer **kwargs: keyword named properties.
github-repos
def gene_filter(self, query, mongo_query): LOG.debug('Adding panel and genes-related parameters to the query') gene_query = [] if query.get('hgnc_symbols') and query.get('gene_panels'): gene_query.append({'hgnc_symbols': {'$in': query['hgnc_symbols']}}) gene_query.append({'panels': {'$in': query['gene_panels']}}) mongo_query['$or']=gene_query else: if query.get('hgnc_symbols'): hgnc_symbols = query['hgnc_symbols'] mongo_query['hgnc_symbols'] = {'$in': hgnc_symbols} LOG.debug("Adding hgnc_symbols: %s to query" % ', '.join(hgnc_symbols)) if query.get('gene_panels'): gene_panels = query['gene_panels'] mongo_query['panels'] = {'$in': gene_panels} return gene_query
Adds gene-related filters to the query object Args: query(dict): a dictionary of query filters specified by the users mongo_query(dict): the query that is going to be submitted to the database Returns: mongo_query(dict): returned object contains gene and panel-related filters
juraj-google-style
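The branching above reduces to a small dict transform; a Mongo-free sketch of the same logic (written as a standalone function rather than the adapter method, and without the logging):

```python
def gene_filter(query):
    # Symbols AND panels together become an $or of two $in filters;
    # either one alone becomes a plain $in filter on its own field.
    mongo_query = {}
    if query.get('hgnc_symbols') and query.get('gene_panels'):
        mongo_query['$or'] = [
            {'hgnc_symbols': {'$in': query['hgnc_symbols']}},
            {'panels': {'$in': query['gene_panels']}},
        ]
    elif query.get('hgnc_symbols'):
        mongo_query['hgnc_symbols'] = {'$in': query['hgnc_symbols']}
    elif query.get('gene_panels'):
        mongo_query['panels'] = {'$in': query['gene_panels']}
    return mongo_query

print(gene_filter({'hgnc_symbols': ['ADK'], 'gene_panels': ['panel1']}))
```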
async def shuffle_participants(self): res = (await self.connection('POST', 'tournaments/{}/participants/randomize'.format(self._id))) self._refresh_participants_from_json(res)
Shuffle participants' seeds |methcoro| Note: |from_api| Randomize seeds among participants. Only applicable before a tournament has started. Raises: APIException
codesearchnet
def get_available_host_port(): logging.warning('The method mobly.utils.get_available_host_port is deprecated because it is unreliable. Pass "tcp:0" to adb forward instead.') from mobly.controllers.android_device_lib import adb port = portpicker.pick_unused_port() if not adb.is_adb_available(): return port for _ in range(MAX_PORT_ALLOCATION_RETRY): if port not in adb.list_occupied_adb_ports(): return port port = portpicker.pick_unused_port() raise Error('Failed to find available port after {} retries'.format(MAX_PORT_ALLOCATION_RETRY))
Gets a host port number available for adb forward. DEPRECATED: This method is unreliable. Pass `tcp:0` to adb forward instead. Returns: An integer representing a port number on the host available for adb forward. Raises: Error: when no port is found after MAX_PORT_ALLOCATION_RETRY times.
github-repos
def add_args(self, args): for (key, value) in vars(args).items(): if (value is not None): setattr(self, key.upper(), value)
Add the args Args: args (namespace): The commandline args
codesearchnet
def get_message(self, block=False, timeout=None): try: message = self._inbox.get(block=block, timeout=timeout) return message except Exception: return None
Removes and returns a RTMMessage from self._inbox Args: block(bool): if True block until a RTMMessage is available, else it will return None when self._inbox is empty timeout(int): it blocks at most timeout seconds Returns: RTMMessage if self._inbox is not empty, else None
juraj-google-style
def get_hash(self, handle): response = self.open_url(url=handle, suffix='.hash') try: return response.read() finally: response.close()
Get the associated hash for the given handle, the hash file must exist (``handle + '.hash'``). Args: handle (str): Path to the template to get the hash from Returns: str: Hash for the given handle
juraj-google-style
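The `handle + '.hash'` sidecar convention is easy to demonstrate against the local filesystem, substituting plain `open` for `open_url`; the sha256 choice below is illustrative only, the store does not mandate a digest algorithm:

```python
import hashlib
import os
import tempfile

def get_hash(handle):
    # Local stand-in for open_url(url=handle, suffix='.hash'):
    # the hash for a template lives beside it in '<handle>.hash'.
    with open(handle + '.hash') as f:
        return f.read()

with tempfile.TemporaryDirectory() as d:
    tmpl = os.path.join(d, 'base.xml')
    with open(tmpl, 'w') as f:
        f.write('<template/>')
    digest = hashlib.sha256(b'<template/>').hexdigest()
    with open(tmpl + '.hash', 'w') as f:
        f.write(digest)
    result = get_hash(tmpl)

print(result == digest)  # True
```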
def top_1(x, reduced_dim, dtype=tf.int32, name=None): reduced_dim = convert_to_dimension(reduced_dim) with tf.name_scope(name, default_name='top_1'): max_val = reduce_max(x, reduced_dim=reduced_dim) is_max = to_float(equal(x, max_val)) pos = mtf_range(x.mesh, reduced_dim, tf.float32) ret = reduce_max((is_max * pos), reduced_dim=reduced_dim) ret = cast(ret, dtype) return (ret, max_val)
Argmax and Max.

Args:
    x: a Tensor
    reduced_dim: a Dimension in x.shape.dims
    dtype: a tf.dtype (for the output)
    name: an optional string

Returns:
    indices: a Tensor with given dtype
    values: a Tensor equal to mtf.reduce_max(x, reduced_dim=reduced_dim)
codesearchnet
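A NumPy analogue of the argmax-and-max pair is a two-liner. One behavioural difference worth noting: on ties the mtf version's reduce_max over positions returns the *highest* matching index, while `np.argmax` returns the lowest:

```python
import numpy as np

def top_1(x, axis=-1):
    # NumPy sketch of mtf.top_1: the argmax index and the max value
    # along the reduced axis, returned as a pair.
    return np.argmax(x, axis=axis), np.max(x, axis=axis)

idx, val = top_1(np.array([[1.0, 5.0, 2.0], [7.0, 0.0, 3.0]]))
print(idx, val)  # [1 0] [5. 7.]
```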
def from_json(cls, json_data): return cls(user=json_data['client_email'], keydata=json_data['private_key'], token_uri=json_data['token_uri'])
Create an uploader given (parsed) JSON data. Note that this is a JSON-formatted key file downloaded from Google when the service account key is created, *NOT* a json-encoded oauth2client.client.SignedJwtAssertionCredentials object. Args: json_data: Dict containing the loaded JSON key data. Returns: a MfgInspectorCallback with credentials.
juraj-google-style
def _step(self, actions): self.assert_common_preconditions() assert (len(actions) == len(self._envs)) observations = [] rewards = [] dones = [] infos = [] for (env, action) in zip(self._envs, actions): (observation, reward, done, info) = env.step(action) observations.append(observation) rewards.append(reward) dones.append(done) infos.append(info) return tuple(map(np.stack, [observations, rewards, dones, infos]))
Takes a step in all environments, shouldn't pre-process or record. Subclasses should override this to do the actual step if something other than the default implementation is desired. Args: actions: (np.ndarray) with first dimension equal to the batch size. Returns: a tuple of stacked raw observations, raw rewards, dones and infos.
codesearchnet
def make_prior(num_topics, initial_value): def _softplus_inverse(x): return np.log(np.expm1(x)) logit_concentration = tf.compat.v1.get_variable( "logit_concentration", shape=[1, num_topics], initializer=tf.compat.v1.initializers.constant( _softplus_inverse(initial_value))) concentration = _clip_dirichlet_parameters( tf.nn.softplus(logit_concentration)) def prior(): return tfd.Dirichlet(concentration=concentration, name="topics_prior") prior_variables = [logit_concentration] return prior, prior_variables
Create the prior distribution. Args: num_topics: Number of topics. initial_value: The starting value for the prior parameters. Returns: prior: A `callable` that returns a `tf.distribution.Distribution` instance, the prior distribution. prior_variables: A `list` of `Variable` objects, the trainable parameters of the prior.
juraj-google-style
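The `_softplus_inverse` helper is just `log(expm1(x))`, the exact inverse of softplus — so initializing the logit at `softplus_inverse(initial_value)` makes `softplus(logit)` start at `initial_value`. A quick NumPy check of the round trip:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def softplus_inverse(x):
    # Same identity as _softplus_inverse above: log(exp(x) - 1).
    return np.log(np.expm1(x))

x = np.array([0.1, 0.7, 3.0])
print(np.allclose(softplus(softplus_inverse(x)), x))  # True
```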
def add_messages(self, validation): if (not isinstance(validation, Validation)): raise TypeError('Argument must be of type Validation') self.messages.extend(validation.messages)
Adds all the messages in the specified `Validation` object to this instance's messages array. Args: validation (Validation): An object containing the messages to add to this instance's messages.
codesearchnet
def get_public_tokens(self): r = self.remote_utils.get_url(self.url() + "public_tokens/") return r.json()
Get a list of public tokens available on this server. Arguments: None Returns: str[]: list of public tokens
juraj-google-style
def should_execute_combination(self, kwargs): del kwargs return (True, None)
Indicates whether the combination of test arguments should be executed. If the environment doesn't satisfy the dependencies of the test combination, then it can be skipped. Args: kwargs: Arguments that are passed to the test combination. Returns: A tuple boolean and an optional string. The boolean False indicates that the test should be skipped. The string would indicate a textual description of the reason. If the test is going to be executed, then this method returns `None` instead of the string.
github-repos
def process_column(body: ProcessColumnRequest) -> ResponseReturnValue: credentials = get_credentials(body.auth_config) logger: Logger if not body.log_table: logger = PrintLogger() else: logger = BigQueryLogger(body.log_table, body.auth_config) logger.set_base_log(__version__, body.workflow_execution_id, body.display_source_table, datetime.utcnow()) bq_read_client = get_bq_read_client(credentials) parser, usable_rules = map_parser_to_rules(body.column_config['parser']) rules = generate_selected_rules(body.column_config['rules'], usable_rules) column_name = body.column_config['column'] cells_iterator = get_cells_iterator(bq_read_client, body.source_table, column_name) row_counter = 0 parse_failures = 0 rule_errors = 0 check_violations = 0 for cell in cells_iterator: try: value = parser(cell) except Exception as e: logger.parser(column_name, parser.__name__, str(e), cell) parse_failures += 1 else: for rule in rules: try: result = rule(value) except Exception as e: logger.rule(column_name, rule.__name__, str(e), value, rule.__kwdefaults__) rule_errors += 1 else: if result is not None: logger.rule(column_name, rule.__name__, result, value, rule.__kwdefaults__) check_violations += 1 row_counter += 1 logger.flush(force=True) if row_counter == 0: raise RuntimeError('Source table was empty.') message = f'DQM processed {row_counter} rows, with {parse_failures} parse failures, {rule_errors} rule errors, {check_violations} rule check violations.' return (DQMResponse(name='', description=message, code=200), 200)
Process a given column from the specified table. Args: * body: ProcessColumnRequest HTTP request body Returns: * DQMResponse for the run with a 200 status code Raises: * MalformedConfigError: if the request body was malformed
github-repos
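The control flow of `process_column`'s inner loop — parse each cell, then run every selected rule on the parsed value, counting failures separately — can be sketched in isolation (the parser, rule, and cell values below are illustrative stand-ins, not DQM's real configuration):

```python
def check_cells(cells, parser, rules):
    """Return (rows, parse_failures, rule_errors, violations) counters."""
    rows = parse_failures = rule_errors = violations = 0
    for cell in cells:
        try:
            value = parser(cell)
        except Exception:
            parse_failures += 1          # cell could not even be parsed
        else:
            for rule in rules:
                try:
                    result = rule(value)
                except Exception:
                    rule_errors += 1     # the rule itself crashed
                else:
                    if result is not None:
                        violations += 1  # rule reported a violation message
        rows += 1
    return rows, parse_failures, rule_errors, violations

# Example rule: integers must be non-negative.
non_negative = lambda v: None if v >= 0 else "negative value"
counts = check_cells(["1", "-2", "x"], int, [non_negative])
```

Note that, as in the original, a parse failure skips the rules for that cell but still counts the row.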
def merge_pot1_files(self, delete_source=True): natom = len(self[0].input.structure) max_pertcase = 3 * natom pot1_files = [] for task in self: if not isinstance(task, DfptTask): continue paths = task.outdir.list_filepaths(wildcard="*_POT*") for path in paths: i = path.rindex("_POT") pertcase = int(path[i+4:].replace(".nc", "")) if pertcase <= max_pertcase: pot1_files.append(path) if not pot1_files: return None self.history.info("Will call mrgdvdb to merge %s files:" % len(pot1_files)) out_dvdb = self.outdir.path_in("out_DVDB") if len(pot1_files) == 1: shutil.copy(pot1_files[0], out_dvdb) else: mrgdvdb = wrappers.Mrgdvdb(manager=self[0].manager, verbose=0) mrgdvdb.merge(self.outdir.path, pot1_files, out_dvdb, delete_source=delete_source) return out_dvdb
This method is called when all the q-points have been computed. It runs `mrgdvdb` sequentially on the local machine to produce the final DVDB file in the outdir of the `Work`. Args: delete_source: True if POT1 files should be removed after a (successful) merge. Returns: path to the output DVDB file. None if no DFPT POT file is found.
juraj-google-style
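The pertcase filter above — keep only files whose `_POT<n>.nc` suffix encodes a perturbation index at most `3 * natom` — can be exercised on its own (the file names are made up for illustration):

```python
def select_pot1_files(paths, natom):
    """Keep paths whose _POT<n>.nc index is within the 3*natom perturbations."""
    max_pertcase = 3 * natom
    selected = []
    for path in paths:
        i = path.rindex("_POT")
        pertcase = int(path[i + 4:].replace(".nc", ""))
        if pertcase <= max_pertcase:
            selected.append(path)
    return selected

paths = ["out_POT1.nc", "out_POT4.nc", "out_POT7.nc"]
kept = select_pot1_files(paths, natom=2)   # max_pertcase = 6
```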
def localization_diff(localizable_file, translated_file, excluded_strings_file, output_translation_file):
    old_translated_file_dictionary = generate_localization_key_to_entry_dictionary_from_file(translated_file)
    if excluded_strings_file is not None and os.path.isfile(excluded_strings_file):
        excluded_file_dictionary = generate_localization_key_to_entry_dictionary_from_file(excluded_strings_file)
    else:
        excluded_file_dictionary = {}
    translated_list = old_translated_file_dictionary.keys()
    output_dictionary = {}
    output_file_elements = []
    f = open_strings_file(localizable_file, 'r')
    # NOTE: the original comment texts were lost; bare %s placeholders keep
    # the format-string arity valid.
    output_file_elements.append(Comment(u'%s\n\n' % (VALUE_PLACEHOLDER,)))
    for _header_comment, comments, key, value in extract_header_comment_key_value_tuples_from_file(f):
        if key in translated_list or key in excluded_file_dictionary:
            if key in old_translated_file_dictionary:
                old_translated_file_dictionary.pop(key)
        elif value in output_dictionary:
            output_dictionary[value].add_comments(comments)
            output_file_elements.append(Comment(u'%s\n' % value))
        else:
            loc_obj = LocalizationEntry(comments, value, VALUE_PLACEHOLDER)
            output_dictionary[value] = loc_obj
            output_file_elements.append(loc_obj)
    for key, removed_trans in old_translated_file_dictionary.items():
        output_file_elements.append(Comment(u'%s %s %s\n\n' % (', '.join(removed_trans.comments), removed_trans.key, removed_trans.value)))
    write_file_elements_to_strings_file(output_translation_file, output_file_elements)
Generates a strings file representing the strings that were yet to be translated. Args: localizable_file (str): The path to the localization strings file, meaning the file that represents the strings that require translation. translated_file (str): The path to the translated strings file, meaning the file containing the strings that were already translated. excluded_strings_file (str): The path to a file that contains all the strings we want to exclude from this and from future diffs. output_translation_file (str): The path to the output file, which will contain the strings that require translation but are not in the already given translation file.
codesearchnet
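Stripped of the .strings parsing, the diff reduces to a key filter: keep only entries that are neither already translated nor explicitly excluded. A minimal sketch over plain dicts (the entries below are illustrative):

```python
def keys_needing_translation(localizable, translated, excluded):
    """Return localizable keys that are neither translated nor excluded."""
    done = set(translated) | set(excluded)
    return [key for key in localizable if key not in done]

localizable = {"hello": "Hello", "bye": "Bye", "ok": "OK"}
translated = {"hello": "Bonjour"}   # already translated
excluded = {"ok"}                   # deliberately skipped
todo = keys_needing_translation(localizable, translated, excluded)
```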
def all_to_all_v3(communicator, t, group_assignment=None, timeout_seconds=None): if group_assignment is None: group_assignment = [] return gen_collective_ops.collective_all_to_all_v3(communicator=communicator, input=t, group_assignment=group_assignment, timeout_seconds=timeout_seconds)
Exchanges tensors mutually. Args: communicator: the resource `tf.Tensor` returned from `initialize_communicator`. t: a `tf.Tensor`. The first dimension should have length equal to the size of the group. `t[i]` is sent to `rank i` within the group. group_assignment: Optional int32 `tf.Tensor` with shape [num_groups, num_ranks_per_group]. `group_assignment[i]` represents the ranks in the `ith` subgroup. timeout_seconds: If set to a non-zero value, sets a completion timeout (in seconds) to detect staleness. If the timer goes off, a DeadlineExceededError is raised. This feature is experimental. Returns: a `tf.Tensor`. `t[i]` is sent from `rank i` within the group.
github-repos
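The all-to-all data movement is easy to state without TensorFlow: if rank i initially holds chunks `t_i`, afterwards rank i holds `[t_0[i], t_1[i], ...]`. A single-process simulation of just that exchange (a sketch of the semantics, not of the collective implementation):

```python
def simulate_all_to_all(inputs):
    """inputs[i][j] is the chunk rank i sends to rank j."""
    n = len(inputs)
    # Rank dst receives, in rank order, the dst-th chunk from every source.
    return [[inputs[src][dst] for src in range(n)] for dst in range(n)]

inputs = [["a0", "a1"], ["b0", "b1"]]   # rank 0 holds a*, rank 1 holds b*
outputs = simulate_all_to_all(inputs)
```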
def listen_forever(
    self,
    timeout_ms: int = 30000,
    exception_handler: Callable[[Exception], None] = None,
    bad_sync_timeout: int = 5,
):
    _bad_sync_timeout = bad_sync_timeout
    self.should_listen = True
    while self.should_listen:
        try:
            self._sync(timeout_ms)
            _bad_sync_timeout = bad_sync_timeout
        except MatrixRequestError as e:
            log.warning('A MatrixRequestError occurred during sync.')
            if e.code >= 500:
                log.warning(
                    'Problem occurred server-side. Waiting',
                    wait_for=_bad_sync_timeout,
                )
                gevent.sleep(_bad_sync_timeout)
                _bad_sync_timeout = min(_bad_sync_timeout * 2, self.bad_sync_timeout_limit)
            else:
                raise
        except MatrixHttpLibError:
            log.exception('A MatrixHttpLibError occurred during sync.')
            if self.should_listen:
                gevent.sleep(_bad_sync_timeout)
                _bad_sync_timeout = min(_bad_sync_timeout * 2, self.bad_sync_timeout_limit)
        except Exception as e:
            log.exception('Exception thrown during sync')
            if exception_handler is not None:
                exception_handler(e)
            else:
                raise
Keep listening for events forever. Args: timeout_ms: How long to poll the Home Server for before retrying. exception_handler: Optional exception handler function which can be used to handle exceptions in the caller thread. bad_sync_timeout: Base time to wait after an error before retrying. Will be increased according to exponential backoff.
juraj-google-style
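The retry policy in `listen_forever` is exponential backoff with a cap: each server-side failure doubles the wait, bounded by `bad_sync_timeout_limit`, and a successful sync resets it. The backoff schedule alone, with the default base of 5 and an assumed limit of 30:

```python
def next_backoff(current, limit):
    """Double the wait, never exceeding the configured limit."""
    return min(current * 2, limit)

waits = []
wait = 5                       # initial bad_sync_timeout
for _ in range(4):             # four consecutive failures
    waits.append(wait)         # sleep this long before retrying
    wait = next_backoff(wait, limit=30)
```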
def extract(self, html_text: str, strategy: Strategy=Strategy.ALL_TEXT) \ -> List[Extraction]: if html_text: if strategy == Strategy.ALL_TEXT: soup = BeautifulSoup(html_text, 'html.parser') texts = soup.findAll(text=True) visible_texts = filter(self._tag_visible, texts) all_text = u" ".join(t.strip() for t in visible_texts) return [Extraction(all_text, self.name)] else: relax = strategy == Strategy.MAIN_CONTENT_RELAXED readable = Document(html_text, recallPriority=relax).summary(html_partial=False) clean_text = BeautifulSoup(readable.encode('utf-8'), 'lxml').strings readability_text = ' '.join(clean_text) return [Extraction(readability_text, self.name)] else: return []
Extracts text from an HTML page using a variety of strategies Args: html_text (str): the HTML page as a string strategy (Strategy): one of Strategy.ALL_TEXT, Strategy.MAIN_CONTENT_STRICT, or Strategy.MAIN_CONTENT_RELAXED Returns: List[Extraction]: typically a singleton list with the extracted text
juraj-google-style
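The ALL_TEXT strategy joins the visible text nodes of a page while skipping non-rendered containers such as `<script>` and `<style>`. A stdlib-only sketch of that idea (the extractor itself uses BeautifulSoup's `findAll(text=True)` plus a visibility filter):

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    SKIP = {"script", "style", "head", "title"}

    def __init__(self):
        super().__init__()
        self.depth = 0          # nesting depth inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        # Keep only non-empty text that is not inside a skipped container.
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

parser = VisibleText()
parser.feed("<html><head><title>t</title></head>"
            "<body><script>var x;</script><p>Hello</p> world</body></html>")
text = " ".join(parser.chunks)
```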
def write(self, b): if not self._writable: raise UnsupportedOperation('write') size = len(b) b_view = memoryview(b) size_left = size buffer_size = self._buffer_size max_buffers = self._max_buffers with self._seek_lock: end = self._buffer_seek buffer_view = memoryview(self._write_buffer) while size_left > 0: start = end end = start + size_left if end > buffer_size: end = buffer_size flush = True else: flush = False buffer_range = end - start b_start = size - size_left size_left -= buffer_range buffer_view[start:end] = b_view[b_start: b_start + buffer_range] if flush: self._buffer_seek = end self._seek += 1 if max_buffers: futures = self._write_futures flush_wait = self._FLUSH_WAIT while sum(1 for future in futures if not future.done()) >= max_buffers: sleep(flush_wait) with handle_os_exceptions(): self._flush() self._write_buffer = bytearray(buffer_size) buffer_view = memoryview(self._write_buffer) end = 0 self._buffer_seek = end return size
Write the given bytes-like object, b, to the underlying raw stream, and return the number of bytes written. Args: b (bytes-like object): Bytes to write. Returns: int: The number of bytes written.
juraj-google-style
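The core of `write` is chunking: incoming bytes fill a fixed-size buffer, and every time the buffer fills it is flushed and replaced. A simplified synchronous sketch of that logic (no futures or background flush workers):

```python
def chunk_writes(data, buffer_size):
    """Return (flushed buffers, final partial buffer) for a write of data."""
    flushed, buf = [], bytearray()
    for byte in data:
        buf.append(byte)
        if len(buf) == buffer_size:
            flushed.append(bytes(buf))   # full buffer -> flush to backend
            buf = bytearray()            # start a fresh buffer
    return flushed, bytes(buf)

flushed, partial = chunk_writes(b"abcdefgh", buffer_size=3)
```

As in the original, the last partial buffer stays in memory until a later write fills it or the stream is closed.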
def get_requires(self, build_requires=False, private_build_requires=False): requires = self.requires or [] if build_requires: requires = requires + (self.build_requires or []) if private_build_requires: requires = requires + (self.private_build_requires or []) return requires
Get the requirements of the variant. Args: build_requires (bool): If True, include build requirements. private_build_requires (bool): If True, include private build requirements. Returns: List of `Requirement` objects.
juraj-google-style
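The method is a straightforward concatenation of the optional requirement lists. A standalone sketch with plain strings standing in for `Requirement` objects:

```python
def combine_requires(requires, build_requires=None, private_build_requires=None,
                     include_build=False, include_private=False):
    """Concatenate the requested requirement lists, treating None as empty."""
    result = list(requires or [])
    if include_build:
        result += build_requires or []
    if include_private:
        result += private_build_requires or []
    return result

reqs = combine_requires(["python-3"], build_requires=["cmake"],
                        private_build_requires=["lint"], include_build=True)
```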
def get_okeeffe_params(el_symbol):
    el = Element(el_symbol)
    if el not in BV_PARAMS:
        raise RuntimeError("Could not find O'Keeffe parameters for element"
                           " \"{}\" in \"BV_PARAMS\" dictionary"
                           " provided by pymatgen".format(el_symbol))
    return BV_PARAMS[el]
Returns the elemental parameters related to atom size and electronegativity which are used for estimating bond-valence parameters (bond lengths) of pairs of atoms, on the basis of data provided in 'Atom Sizes and Bond Lengths in Molecules and Crystals' (O'Keeffe & Brese, 1991). Args: el_symbol (str): element symbol. Returns: (dict): atom-size ('r') and electronegativity-related ('c') parameters.
juraj-google-style
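The lookup itself is a guarded dictionary access keyed by element. A sketch with a toy parameter table (the values below are illustrative, not real O'Keeffe/Brese data):

```python
TOY_PARAMS = {"H": {"r": 0.38, "c": 5.14}}   # illustrative values only

def lookup_params(symbol, table):
    """Return the size/electronegativity entry or fail loudly."""
    if symbol not in table:
        raise RuntimeError(
            "Could not find O'Keeffe parameters for element %r" % symbol)
    return table[symbol]

params = lookup_params("H", TOY_PARAMS)
```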
def data_impl(self, request):
    run = request.args.get('run')
    tool = request.args.get('tag')
    host = request.args.get('host')
    run_dir = self._run_dir(run)
    profile_run = os.path.basename(run_dir)
    if tool not in TOOLS:
      return None
    self.start_grpc_stub_if_necessary()
    if tool == 'trace_viewer@' and self.stub is not None:
      from tensorflow.contrib.tpu.profiler import tpu_profiler_analysis_pb2
      grpc_request = tpu_profiler_analysis_pb2.ProfileSessionDataRequest()
      grpc_request.repository_root = run_dir
      grpc_request.session_id = profile_run[:-1]
      grpc_request.tool_name = 'trace_viewer'
      grpc_request.host_name = host.rstrip('.')
      grpc_request.parameters['resolution'] = request.args.get('resolution')
      if request.args.get('start_time_ms') is not None:
        grpc_request.parameters['start_time_ms'] = request.args.get(
            'start_time_ms')
      if request.args.get('end_time_ms') is not None:
        grpc_request.parameters['end_time_ms'] = request.args.get('end_time_ms')
      grpc_response = self.stub.GetSessionToolData(grpc_request)
      return grpc_response.output
    tool_name = str(host) + TOOLS[tool]
    asset_path = os.path.join(run_dir, tool_name)
    raw_data = None
    try:
      with tf.io.gfile.GFile(asset_path, 'rb') as f:
        raw_data = f.read()
    except tf.errors.NotFoundError:
      logger.warning('Asset path %s not found', asset_path)
    except tf.errors.OpError as e:
      logger.warning("Couldn't read asset path: %s, OpError %s", asset_path, e)
    if raw_data is None:
      return None
    if tool == 'trace_viewer':
      return process_raw_trace(raw_data)
    if tool in _RAW_DATA_TOOLS:
      return raw_data
    return None
Retrieves and processes the tool data for a run and a host. Args: request: XMLHttpRequest Returns: A string that can be served to the frontend tool or None if tool, run or host is invalid.
juraj-google-style
def sub_index(self, sub, start=0, end=None): start_index = self.index(sub[0], start, end) end = self._fix_end_index(end) if start_index + len(sub) > end: raise ValueError for i in range(1, len(sub)): if sub[i] != self[start_index + i]: raise ValueError return start_index
Return the index of a subsequence. This runs in O(len(sub)) Args: sub (Sequence): An Iterable to search for Returns: int: The index of the first element of sub Raises: ValueError: If sub isn't a subsequence TypeError: If sub isn't iterable IndexError: If start or end are out of range
juraj-google-style
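The search is a naive subsequence match anchored at the first occurrence of `sub[0]`; a mismatch or an overrun raises ValueError rather than retrying at a later anchor. A standalone sketch over plain lists:

```python
def sub_index(seq, sub, start=0):
    """Index of sub in seq, anchored at the first occurrence of sub[0]."""
    start_index = seq.index(sub[0], start)    # raises ValueError if absent
    if start_index + len(sub) > len(seq):
        raise ValueError("subsequence runs past the end")
    for i in range(1, len(sub)):
        if sub[i] != seq[start_index + i]:
            raise ValueError("subsequence mismatch")
    return start_index

idx = sub_index([9, 1, 2, 3, 4], [2, 3])
```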
def training(loss_op): global_step = tf.Variable(0, name='global_step', trainable=False) with tf.name_scope('train'): optimizer = tf.train.AdamOptimizer(epsilon=0.001) train_op = optimizer.minimize(loss_op, global_step) return train_op, global_step
Sets up the training Ops. Creates an Adam optimizer and applies gradients to minimize the loss, incrementing the global step with each update. Args: loss_op: Loss tensor to minimize. Returns: train_op: The Op for training. global_step: The global step counter variable.
juraj-google-style
def from_comm(cls, pub): filename = None if pub.b64_data: filename = cls._save_to_unique_filename(pub) return cls( isbn=pub.isbn, uuid=pub.uuid, aleph_id=pub.aleph_id, dir_pointer=filename )
Convert communication namedtuple to this class. Args: pub (obj): :class:`.Archive` instance which will be converted. Returns: obj: :class:`DBArchive` instance.
juraj-google-style