code: string, lengths 20–4.93k | docstring: string, lengths 33–1.27k | source: string, 3 classes
def most_recent(path, startswith=None, endswith=None):
    candidate_files = []
    for filename in all_files_in_directory(path):
        if startswith and not os.path.basename(filename).startswith(startswith):
            continue
        if endswith and not filename.endswith(endswith):
            continue
        ...
Recursively inspect all files under a directory and return the most recent. Args: path (str): the path of the directory to traverse startswith (str): the file name starts with (optional) endswith (str): the file name ends with (optional) Returns: the most recent file within the subdirectory
codesearchnet
def discover():
    candidate_path = os.path.abspath(os.path.join(os.curdir, os.pardir, 'data'))
    if os.path.exists(candidate_path):
        return Project(os.path.abspath(os.path.join(candidate_path, os.pardir)))
    candidate_path = os.path.abspath(os.path.join(os.curdir, 'data'))
    if os.path.exists(candidate_p...
Automatically discover the paths to various data folders in this project and compose a Project instance. Returns: A constructed Project object. Raises: ValueError: if the paths could not be figured out automatically. In this case, you have to create a Project manually using the initializer.
codesearchnet
def DeregisterAnalyzer(cls, analyzer_class):
    analyzer_name = analyzer_class.NAME.lower()
    if analyzer_name not in cls._analyzer_classes:
        raise KeyError('analyzer class not set for name: {0:s}'.format(
            analyzer_class.NAME))
    del cls._analyzer_classes[analyzer_name]
Deregisters an analyzer class. The analyzer classes are identified based on their lower case name. Args: analyzer_class (type): class object of the analyzer. Raises: KeyError: if analyzer class is not set for the corresponding name.
juraj-google-style
def read(self, size=None):
    data = EMPTY
    if size == 0:
        return data
    while True:
        if size and len(data) >= size:
            return data
        if not self.buffer:
            self._fetch()
        if not self.buffer:
            ...
Read a chunk from rfile buffer and return it. Args: size (int): amount of data to read Returns: bytes: Chunk from rfile, limited by size if specified.
juraj-google-style
def restore_variables(self, sess, saver, import_scope=None):
    with sess.graph.as_default():
        if saver is None and not variables._all_saveable_objects(scope=import_scope):
            tf_logging.info('The specified SavedModel has no variables; no checkpoints were restored.')
        elif isinstance(saver, tf...
Restore SavedModel variable values into the session. Args: sess: tf.compat.v1.Session to restore variable values. saver: a tf.compat.v1.train.Saver object. Can be None if there are no variables in graph. This may be the saver returned by the load_graph() function, or a default `tf.compat.v1.train.Saver()`. import_scop...
github-repos
def scan_chain_len(self, scan_chain):
    res = self._dll.JLINKARM_MeasureSCLen(scan_chain)
    if res < 0:
        raise errors.JLinkException(res)
    return res
Retrieves and returns the number of bits in the scan chain. Args: self (JLink): the ``JLink`` instance scan_chain (int): scan chain to be measured Returns: Number of bits in the specified scan chain. Raises: JLinkException: on error.
codesearchnet
def get_properties_of_kind(kind, start=None, end=None):
    q = Property.query(ancestor=Property.key_for_kind(kind))
    if start is not None and start != '':
        q = q.filter(Property.key >= Property.key_for_property(kind, start))
    if end is not None:
        if end == '':
            return []
        ...
Return all properties of kind in the specified range. NOTE: This function does not return unindexed properties. Args: kind: name of kind whose properties you want. start: only return properties >= start if start is not None. end: only return properties < end if end is not None. Returns: A list of property names of k...
codesearchnet
def resize_num_qa_labels(self, num_labels):
    cur_qa_logit_layer = self.get_qa_logit_layer()
    if num_labels is None or cur_qa_logit_layer is None:
        return
    new_qa_logit_layer = self._resize_qa_labels(num_labels)
    self.config.num_qa_labels = num_labels
    self.num_qa_labels = num_labels
    return new...
Build a resized question answering linear layer Module from a provided new linear layer. Increasing the size will add newly initialized weights. Reducing the size will remove weights from the end Args: num_labels (`int`, *optional*): New number of labels in the linear layer weight matrix. Increasing the size will add ...
github-repos
def stats_per_key(self):
    self.raise_error_if_not_open()
    all_stats = {}
    for key, data in self._file.items():
        data = data[()]
        all_stats[key] = stats.DataStats(
            float(np.mean(data)), float(np.var(data)),
            np.min(data), np.max(data), data.size)
    return all_stats
Return statistics calculated for each key in the container. Note: The feature container has to be opened in advance. Returns: dict: A dictionary containing a DataStats object for each key.
codesearchnet
def vflip(img):
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))
    return img.transpose(Image.FLIP_TOP_BOTTOM)
Vertically flip the given PIL Image. Args: img (PIL Image): Image to be flipped. Returns: PIL Image: Vertically flipped image.
juraj-google-style
def tf_optimization(self, states, internals, actions, terminal, reward, next_states=None, next_internals=None):
    arguments = self.optimizer_arguments(
        states=states, internals=internals, actions=actions, terminal=terminal,
        reward=reward, next_states=next_states, next_internals=next_internals)
    return self.optimize...
Creates the TensorFlow operations for performing an optimization update step based on the given input states and actions batch. Args: states: Dict of state tensors. internals: List of prior internal state tensors. actions: Dict of action tensors. terminal: Terminal boolean tensor. reward: Reward tensor. next_states: D...
codesearchnet
def json_set_instructions(recipe, variables):
    if 'script' in recipe:
        if 'instructions' in recipe['script']:
            try:
                recipe['script']['instructions'] = [
                    text_set_fields(instruction, variables)
                    for instruction in recipe['script']['instructions']]
            except KeyError:
                ...
Replaces all fields in instructions with values provided. Checks if recipe['script']['instructions'] exists, then replaces all %(???)s variables with the values provided. Note: %(???)s must match { "field":{ "name":"???" }} in JSON. Args: recipe: (dict) A dictionary representation of the JSON script. variables: (dict) A ...
github-repos
def delete(self, instance):
    self.backend.storage.remove(instance)
    return DeprovisionServiceSpec(False, "done")
Delete the instance Args: instance (AtlasServiceInstance.Instance): an existing instance Returns: DeprovisionServiceSpec: Status
juraj-google-style
def __init__(self, name, aliases=None, description=None, urls=None):
    super(EnumerationDefinition, self).__init__(
        name, aliases=aliases, description=description, urls=urls)
    self.values = []
    self.values_per_alias = {}
    self.values_per_name = {}
    self.values_per_number = {}
Initializes an enumeration data type definition. Args: name (str): name. aliases (Optional[list[str]]): aliases. description (Optional[str]): description. urls (Optional[list[str]]): URLs.
juraj-google-style
def sys_check_for_event(mask: int, k: Optional[Key], m: Optional[Mouse]) -> int:
    return int(lib.TCOD_sys_check_for_event(
        mask, k.key_p if k else ffi.NULL, m.mouse_p if m else ffi.NULL))
Check for and return an event. Args: mask (int): :any:`Event types` to wait for. k (Optional[Key]): A tcod.Key instance which might be updated with an event. Can be None. m (Optional[Mouse]): A tcod.Mouse instance which might be updated with an event. Can be None. .. deprecated:: 9.3 Use the :any:`tcod.event.get` f...
codesearchnet
def search_device_by_id(self, deviceID) -> Device:
    for d in self.devices:
        if d.id == deviceID:
            return d
    return None
Searches for a device by the given id. Args: deviceID(str): the id of the device to search for Returns: the Device object, or None if no matching device was found
juraj-google-style
def _should_catch_error(self, error, errors=()):
    caught_errors = (
        errors or self.session.driver.invalid_element_errors + (ElementNotFound,))
    return isinstance(error, caught_errors)
Returns whether to catch the given error. Args: error (Exception): The error to consider. errors (Tuple[Type[Exception], ...], optional): The exception types that should be caught. Defaults to :class:`ElementNotFound` plus any driver-specific invalid element errors. Returns: bool: Whether to catch the given error.
juraj-google-style
def enable_cpu_offload(self, accelerator_id: Optional[int]=0, **kwargs):
    if is_accelerate_available():
        from accelerate import cpu_offload_with_hook
    else:
        raise ImportError('`enable_model_cpu_offload` requires `accelerate`.')
    gpu_id = kwargs.get('gpu_id', 0)
    if gpu_id != 0:
        warnin...
Offloads all sub-models to CPU using accelerate, reducing memory usage with a low impact on performance. This method moves one whole sub-model at a time to the accelerator when it is used, and the sub-model remains in accelerator until the next sub-model runs. Args: accelerator_id (`int`, *optional*, defaults to 0): a...
github-repos
def time2timestr(time, fmt='hhmmss'):
    if fmt.count(':') == 2:
        if not (fmt.index('h') < fmt.index('m') < fmt.index('s')):
            raise ValueError('Invalid format string. {}'.format(VALID_TIME_FORMATS_TEXT))
        h, m, s = fmt.split(':')
    elif fmt.count(':') == 1:
        if not (fmt.index...
Turns a datetime.time object into a string. The string must have one of the formats from VALID_TIME_FORMATS_TEXT to make it compatible with timestr2time. Args: time (datetime.time) the time to be translated fmt (str) a format string. Returns: (str) that represents a time. Raises: ValueError if the format is not valid.
codesearchnet
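The `time2timestr` snippet above is truncated mid-branch; a stdlib-only sketch of the same idea (split an 'hh:mm:ss'-style format at the colons, use each token's width for zero-padding; the function name and error text here are illustrative, not from the source):

```python
import datetime

def time_to_str(time, fmt='hh:mm:ss'):
    # Only the two-colon branch of the original is sketched here.
    if fmt.count(':') == 2:
        if not (fmt.index('h') < fmt.index('m') < fmt.index('s')):
            raise ValueError('Invalid format string: {}'.format(fmt))
        h_tok, m_tok, s_tok = fmt.split(':')
        # Token width drives zero-padding, e.g. 'hh' -> 2 digits.
        return ':'.join([
            str(time.hour).zfill(len(h_tok)),
            str(time.minute).zfill(len(m_tok)),
            str(time.second).zfill(len(s_tok)),
        ])
    raise ValueError('Only hh:mm:ss-style formats handled in this sketch')

print(time_to_str(datetime.time(9, 5, 7)))  # 09:05:07
```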
def Hash(self):
    if not self.__hash:
        ba = bytearray(binascii.unhexlify(self.GetHashData()))
        hash = Crypto.Hash256(ba)
        self.__hash = UInt256(data=hash)
    return self.__hash
Get the hash of the transaction. Returns: UInt256:
codesearchnet
def _refresh_http(api_request, operation_name):
    path = "operations/{}".format(operation_name)
    api_response = api_request(method="GET", path=path)
    return json_format.ParseDict(api_response, operations_pb2.Operation())
Refresh an operation using a JSON/HTTP client. Args: api_request (Callable): A callable used to make an API request. This should generally be :meth:`google.cloud._http.Connection.api_request`. operation_name (str): The name of the operation. Returns: google.longrunning.operations_pb2.Operation: The operation.
juraj-google-style
def __strip_tags(self, node: yaml.Node) -> None:
    if isinstance(node, yaml.SequenceNode):
        for subnode in node.value:
            self.__strip_tags(subnode)
    elif isinstance(node, yaml.MappingNode):
        node.tag = 'tag:yaml.org,2002:map'
        for key_node, value_node in node.value:
            sel...
Strips tags from mappings in the tree headed by node. This keeps yaml from constructing any objects in this tree. Args: node: Head of the tree to strip
codesearchnet
def plot_helmholtz_free_energy(self, tmin, tmax, ntemp, ylim=None, **kwargs):
    temperatures = np.linspace(tmin, tmax, ntemp)
    if self.structure:
        ylabel = '$\\Delta F$ (kJ/mol)'
    else:
        ylabel = '$\\Delta F$ (kJ/mol-c)'
    fig = self._plot_thermo(self.dos.helmholtz_free_energy, temperatures, yla...
Plots the vibrational contribution to the Helmholtz free energy in a temperature range. Args: tmin: minimum temperature tmax: maximum temperature ntemp: number of steps ylim: tuple specifying the y-axis limits. kwargs: kwargs passed to the matplotlib function 'plot'. Returns: matplotlib figure
codesearchnet
def experience(self, agent_indices, observ, action, reward, unused_done, unused_nextob):
    with tf.name_scope('experience/'):
        return tf.cond(
            self._is_training,
            lambda: self._define_experience(agent_indices, observ, action, reward),
            str)
Process the transition tuple of the current step. When training, add the current transition tuple to the memory and update the streaming statistics for observations and rewards. A summary string is returned if requested at this step. Args: agent_indices: Tensor containing current batch indices. observ: Batch tensor o...
codesearchnet
def _CountClientStatisticByLabel(self, statistic, day_buckets, cursor):
    day_buckets = sorted(day_buckets)
    sum_clauses = []
    ping_cast_clauses = []
    timestamp_buckets = []
    now = rdfvalue.RDFDatetime.Now()
    for day_bucket in day_buckets:
        column_name = "days_active_{}".format(day_bucket)
        ...
Returns client-activity metrics for a given statistic. Args: statistic: The name of the statistic, which should also be a column in the 'clients' table. day_buckets: A set of n-day-active buckets. cursor: MySQL cursor for executing queries.
juraj-google-style
def overlap_and_add(signal, frame_step, name=None):
    with ops.name_scope(name, 'overlap_and_add', [signal, frame_step]):
        signal = ops.convert_to_tensor(signal, name='signal')
        signal.shape.with_rank_at_least(2)
        frame_step = ops.convert_to_tensor(frame_step, name='frame_step')
        frame_ste...
Reconstructs a signal from a framed representation. Adds potentially overlapping frames of a signal with shape `[..., frames, frame_length]`, offsetting subsequent frames by `frame_step`. The resulting tensor has shape `[..., output_size]` where output_size = (frames - 1) * frame_step + frame_length Args: signal: A ...
github-repos
def get_pyof_version(module_fullname):
    ver_module_re = re.compile(r'(pyof\.)(v0x\d+)(\..*)')
    matched = ver_module_re.match(module_fullname)
    if matched:
        version = matched.group(2)
        return version
    return None
Get the module pyof version based on the module fullname. Args: module_fullname (str): The fullname of the module (e.g.: pyof.v0x01.common.header) Returns: str: openflow version. The openflow version, on the format 'v0x0?' if any. Or None if there isn't a version on the fullname.
juraj-google-style
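The regex in `get_pyof_version` can be exercised directly; the same pattern with its second capture group pulls the 'v0x..' segment out of a module fullname:

```python
import re

# Group 2 captures the 'v0x..' version segment of a pyof module fullname.
ver_module_re = re.compile(r'(pyof\.)(v0x\d+)(\..*)')

def pyof_version(module_fullname):
    matched = ver_module_re.match(module_fullname)
    return matched.group(2) if matched else None

print(pyof_version('pyof.v0x01.common.header'))  # v0x01
print(pyof_version('pyof.common.header'))        # None (no version segment)
```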
def prepare_partitions(self) -> t.Iterator[Config]:
    for option in itertools.product(
            *[self.config.selection[key] for key in self.config.partition_keys]):
        yield self._create_partition_config(option)
Iterate over client parameters, partitioning over `partition_keys`. This produces a Cartesian-Cross over the range of keys. For example, if the keys were 'year' and 'month', it would produce an iterable like: ( ('2020', '01'), ('2020', '02'), ('2020', '03'), ...) Returns: An iterator of `Config`s.
github-repos
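The Cartesian cross that `prepare_partitions` performs is plain `itertools.product`; a standalone sketch with illustrative key names (the `selection` values here are not from the source):

```python
import itertools

# Partitioning step: a Cartesian cross over the values of each partition key.
selection = {'year': ['2020', '2021'], 'month': ['01', '02']}
partition_keys = ['year', 'month']

partitions = list(itertools.product(*[selection[key] for key in partition_keys]))
print(partitions)
# [('2020', '01'), ('2020', '02'), ('2021', '01'), ('2021', '02')]
```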
def _create_complete_graph(node_ids):
    g = nx.Graph()
    g.add_nodes_from(node_ids)
    for i, j in combinations(node_ids, 2):
        g.add_edge(i, j)
    return g
Create a complete graph from the list of node ids. Args: node_ids: a list of node ids Returns: An undirected graph (as a networkx.Graph)
juraj-google-style
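A networkx-free sketch of what `_create_complete_graph` builds: a complete graph on n nodes is just every unordered pair of node ids, i.e. n*(n-1)/2 edges (the helper name here is illustrative):

```python
from itertools import combinations

def complete_graph_edges(node_ids):
    # Every unordered pair of distinct node ids is an edge.
    return list(combinations(node_ids, 2))

edges = complete_graph_edges(['a', 'b', 'c', 'd'])
print(len(edges))  # 6 == 4 * 3 / 2
```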
def sort_by_timestamp(self, in_place=True):
    timestamps, values = zip(*sorted(zip(self.timestamps, self.values)))
    if not in_place:
        return MetricContainer(values=values, timestamps=timestamps)
    self.timestamps, self.values = timestamps, values
Sorts the metric values and timestamps in ascending order wrt timestamps. Args: in_place: If True, sort the metric values and timestamps in place.
github-repos
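The core trick in `sort_by_timestamp` is the zip/sort/unzip idiom: pair the two sequences, sort the pairs (tuples sort by their first element, the timestamp), then unzip:

```python
timestamps = [30, 10, 20]
values = ['c', 'a', 'b']

# zip pairs the sequences, sorted orders by timestamp, zip(*...) unzips.
timestamps, values = zip(*sorted(zip(timestamps, values)))
print(timestamps)  # (10, 20, 30)
print(values)      # ('a', 'b', 'c')
```

Note that the unzipped results come back as tuples, not lists, which matches the reassignment in the in-place branch above.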
def _ParseIndexTable(self, parser_mediator, file_system, file_entry, index_table):
    path_segments = file_system.SplitPath(file_entry.path_spec.location)
    data_block_files = {}
    for cache_address in index_table:
        if cache_address.filename not in data_block_files:
            path_segments.pop()
            ...
Parses a Chrome Cache index table. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. file_system (dfvfs.FileSystem): file system. file_entry (dfvfs.FileEntry): file entry. index_table (list[CacheAddress]): the cache addresses which are stored...
codesearchnet
def _create_RSA_private_key(self, bytes):
    try:
        private_key = serialization.load_pem_private_key(
            bytes, password=None, backend=default_backend())
        return private_key
    except Exception:
        private_key = serialization.load_der_private_key(
            bytes, password=None, backend=default_backend())
        ...
Instantiates an RSA key from bytes. Args: bytes (byte string): Bytes of RSA private key. Returns: private_key (cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey): RSA private key created from key bytes.
codesearchnet
def parse_meta(filename, data):
    if '.' not in filename:
        raise MetaParsingException(
            "Can't recognize type of your metadata ('%s')!" % filename)
    suffix = filename.rsplit('.', 1)[1].lower()
    if suffix not in SUPPORTED_FILES:
        raise MetaParsingException("Can't parse file of type '%s'!" % su...
Parse `data` to EPublication. Args: filename (str): Used to choose right parser based at suffix. data (str): Content of the metadata file. Returns: EPublication: object.
codesearchnet
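The suffix dispatch in `parse_meta` can be isolated into a small sketch; `SUPPORTED_FILES`, the helper name, and the use of `ValueError` in place of `MetaParsingException` are all illustrative assumptions:

```python
# Illustrative parser table; the real SUPPORTED_FILES is not shown in the source.
SUPPORTED_FILES = {'json', 'yaml', 'csv'}

def metadata_suffix(filename):
    if '.' not in filename:
        raise ValueError("Can't recognize type of your metadata ('%s')!" % filename)
    # rsplit('.', 1) keeps only the last extension, lowered for lookup.
    suffix = filename.rsplit('.', 1)[1].lower()
    if suffix not in SUPPORTED_FILES:
        raise ValueError("Can't parse file of type '%s'!" % suffix)
    return suffix

print(metadata_suffix('book.META.JSON'))  # json
```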
def ScriptHash(self):
    if self._scriptHash is None:
        self._scriptHash = Crypto.ToScriptHash(self.Script, unhex=False)
    return self._scriptHash
Get the script hash. Returns: UInt160:
codesearchnet
def filepath(self):
    if hasattr(self, 'local_path'):
        return self.local_path
    if self.scheme in ['ftp', 'http', 'https', 'globus']:
        return self.filename
    elif self.scheme in ['file']:
        return self.path
    else:
        raise Exception('Cannot ret...
Return the resolved filepath on the side where it is called from. The appropriate filepath will be returned when called from within an app running remotely as well as regular python on the client side. Args: - self Returns: - filepath (string)
juraj-google-style
def select_action_key(self, next_action_arr, next_q_arr):
    epsilon_greedy_flag = bool(np.random.binomial(n=1, p=self.epsilon_greedy_rate))
    if epsilon_greedy_flag is False:
        key = np.random.randint(low=0, high=next_action_arr.shape[0])
    else:
        key = next_q_arr.argmax(...
Select action by Q(state, action). Args: next_action_arr: `np.ndarray` of actions. next_q_arr: `np.ndarray` of Q-Values. Returns: `np.ndarray` of keys.
juraj-google-style
class EetqConfig(QuantizationConfigMixin):
    def __init__(self, weights: str='int8', modules_to_not_convert: Optional[List]=None, **kwargs):
        self.quant_method = QuantizationMethod.EETQ
        self.weights = weights
        self.modules_to_not_convert = modules_to_not_convert
        self.post_init()
    de...
This is a wrapper class about all possible attributes and features that you can play with a model that has been loaded using `eetq`. Args: weights (`str`, *optional*, defaults to `"int8"`): The target dtype for the weights. Supported value is only "int8" modules_to_not_convert (`list`, *optional*, defaults to `None`): ...
github-repos
def set_cookie(self, key, value, domain=None, path='/', secure=False, httponly=True):
    self._cookies[key] = value
    if domain:
        self._cookies[key]['domain'] = domain
    if path:
        self._cookies[key]['path'] = path
    if secure:
        self._cookies[key]['secure'] = secure
    if httponly:
        s...
Set a cookie. Args: key (:obj:`str`): Cookie name value (:obj:`str`): Cookie value domain (:obj:`str`): Cookie domain path (:obj:`str`): Cookie path secure (:obj:`bool`): True if secure, False otherwise httponly (:obj:`bool`): True if it's an HTTP-only cookie, False otherwise
codesearchnet
def output_selector_schema(config_cls):
    config_type = resolve_config_cls_arg(config_cls)
    check.param_invariant(config_type.is_selector, 'config_cls')
    def _wrap(func):
        def _selector(context, config_value, runtime_value):
            selector_key, selector_value = single_item(config_value)
            ...
A decorator for annotating a function that can take the selected properties of a ``config_value`` and an instance of a custom type and materialize it. Args: config_cls (Selector):
juraj-google-style
def _tf_restore_batch_dims(x, num_nonbatch_dims, prototype):
    assert x.shape.ndims == 1 + num_nonbatch_dims
    new_shape = (
        prototype.shape.as_list()[:-num_nonbatch_dims] + x.shape.as_list()[1:])
    assert None not in new_shape
    if new_shape != x.shape.as_list():
        x = tf.reshape(x, new_shape)
    return x
Reverse op of _tf_flatten_batch_dims. Un-flatten the first dimension of x to match all but the last num_nonbatch_dims dimensions of prototype. Args: x: a tf.Tensor with 1 + num_nonbatch_dims dimensions num_nonbatch_dims: an integer prototype: a tf.Tensor Returns: a tf.Tensor
juraj-google-style
def titles(self, unique=False):
    if unique:
        return tools.uniqued(s.title for s in self._items)
    return [s.title for s in self._items]
Return a list of contained worksheet titles. Args: unique (bool): drop duplicates Returns: list: list of titles/name strings
juraj-google-style
def _parse_version(version):
    parsed_version = parse_version(version)
    return (tuple(int(dot_version)
                  for dot_version in parsed_version.base_version.split('.'))
            + (parsed_version.is_prerelease,))
Parse a version string. Args: version (str): A string representing a version e.g. '1.9rc2' Returns: tuple: major, minor, patch parts cast as integer and whether or not it was a pre-release version.
codesearchnet
def mkp(*args, **kwargs):
    mk = kwargs.pop('mk', False)
    path = os.sep.join(list(args))
    if mk:
        while sep2 in path:
            path = path.replace(sep2, os.sep)
        try:
            os.makedirs(path)
        except FileExistsError:
            pass
    return path
Generate a directory path, and create it if requested. .. code-block:: Python filepath = mkp('base', 'folder', 'file') dirpath = mkp('root', 'path', 'folder', mk=True) Args: \*args: File or directory path segments to be concatenated mk (bool): Make the directory (if it doesn't exist) Returns: path (str): File or di...
juraj-google-style
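A stdlib-only sketch of `mkp`: join path segments and optionally create the directory, tolerating the already-exists case. The `sep2` separator normalization of the original is dropped here since `sep2` is defined elsewhere in that module:

```python
import os
import tempfile

def mkp(*args, mk=False):
    # Join the segments with the platform separator.
    path = os.sep.join(args)
    if mk:
        # exist_ok=True replaces the try/except FileExistsError dance.
        os.makedirs(path, exist_ok=True)
    return path

root = tempfile.mkdtemp()
d = mkp(root, 'a', 'b', mk=True)
print(os.path.isdir(d))  # True
```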
def add_business_days(self, date_tensor, num_days, roll_convention=constants.BusinessDayConvention.NONE):
    control_deps = []
    biz_days, is_bizday = self._to_biz_space(
        dt.convert_to_date_tensor(date_tensor).ordinal())
    if roll_convention == constants.BusinessDayConvention.NONE:
        control_deps.append(tf.de...
Adds given number of business days to given dates. Note that this is different from calling `add_period_and_roll` with PeriodType.DAY. For example, adding 5 business days to Monday gives the next Monday (unless there are holidays on this week or next Monday). Adding 5 days and rolling means landing on Saturday and the...
github-repos
def pivot(self, md5, tag=''):
    ss = self.workbench.generate_sample_set(md5)
    if ss:
        tag = md5 if not tag else tag
        md5 = ss
    if self.workbench.is_sample_set(md5):
        ss = self.workbench.get_sample_set(md5)
        if len...
Pivot on an md5 (md5 can be a single sample or a sample_set) Args: md5: The md5 can be a single sample or a sample_set tag (optional): a tag for the sample (for the prompt) Returns: Nothing, but it sets the active sample/sample_set
juraj-google-style
def _request(self, resource, action, data=None, headers=None):
    url, httpmethod = res_to_url(resource, action)
    return self.ajax(url, httpmethod, data, headers)
Send request. Args: resource: resource action: action data: a string, or an object that can be serialized with json.dumps headers: http headers
juraj-google-style
def is_valid(self, addr, protocol='http', timeout=5):
    start = time.time()
    try:
        r = requests.get(self.test_url[protocol], timeout=timeout,
                         proxies={protocol: 'http:...
    except KeyboardInterrupt:
        raise e...
Check if a proxy is valid Args: addr: A string in the form of 'ip:port' protocol: Either 'http' or 'https', different test urls will be used according to protocol. timeout: A integer indicating the timeout of connecting the test url. Returns: dict: If the proxy is valid, returns {'valid': True, 'response_time': xx} o...
juraj-google-style
def __eq__(self, other):
    try:
        return other and \
            self.id == other.id and \
            self.name == other.name and \
            self.profile_image_url == other.profile_image_url and \
            self.about == other.about and \
            self.website == ot...
Compare two user objects against one another. Args: other (User): another User object against which to compare the current user.
juraj-google-style
def is_compatible_with(self, spec_or_tensor):
    return super(TensorSpec, self).is_compatible_with(spec_or_tensor)
Returns True if spec_or_tensor is compatible with this TensorSpec. Two tensors are considered compatible if they have the same dtype and their shapes are compatible (see `tf.TensorShape.is_compatible_with`). Args: spec_or_tensor: A tf.TensorSpec or a tf.Tensor Returns: True if spec_or_tensor is compatible with self.
github-repos
def getDiskSpace(self, file_path, upload_path='', overwrite=False):
    self.checkAccount()
    url = nurls['checkUpload']
    file_size = os.stat(file_path).st_size
    file_name = os.path.basename(file_path)
    now = datetime.datetime.now().isoformat()
    data = {'userid': ...
getDiskSpace Args: file_path: Full path of the file you want to check for upload upload_path: Ndrive path where you want to upload the file, e.g. /Picture/ Returns: True: Possible to upload a file with the given file_size False: Impossible to upload a file with the given file_size
juraj-google-style
def _parse_validators(valids):
    outvals = []
    for val in valids:
        if isinstance(val, str):
            args = []
        elif len(val) > 1:
            args = val[1:]
            val = val[0]
        else:
            raise ValidationError('You must pass either an n-tuple or a string to define a validato...
Parse a list of validator names or n-tuples, checking for errors. Returns: list((func_name, [args...])): A list of validator function names and a potentially empty list of optional parameters for each function.
codesearchnet
def make_dir(self, path, relative=False):
    if not relative:
        path = self.relpath(path)
    self._make_dir(self.get_client_kwargs(self.ensure_dir_path(
        path, relative=True)))
Make a directory. Args: path (str): Path or URL. relative (bool): Path is relative to current root.
juraj-google-style
def _validate_instantiation_options(self, datafile, skip_json_validation):
    if not skip_json_validation and not validator.is_datafile_valid(datafile):
        raise exceptions.InvalidInputException(
            enums.Errors.INVALID_INPUT_ERROR.format('datafile'))
    if not validator.is_event_dispatcher_valid(self.event_d...
Helper method to validate all instantiation parameters. Args: datafile: JSON string representing the project. skip_json_validation: Boolean representing whether JSON schema validation needs to be skipped or not. Raises: Exception if provided instantiation options are not valid.
codesearchnet
def get_forwarding_information_base(self, filter=''):
    uri = '{}{}'.format(self.data['uri'], self.FORWARDING_INFORMATION_PATH)
    return self._helper.get_collection(uri, filter=filter)
Gets the forwarding information base data for a logical interconnect. A maximum of 100 entries is returned. Optional filtering criteria might be specified. Args: filter (list or str): Filtering criteria may be specified using supported attributes: interconnectUri, macAddress, internalVlan, externalVlan, and supported ...
codesearchnet
def load_pluggable_device_library(library_location):
    if os.path.exists(library_location):
        if os.path.isdir(library_location):
            directory_contents = os.listdir(library_location)
            pluggable_device_libraries = [
                os.path.join(library_location, f)
                for f in directory_contents if _is_shared_ob...
Loads a TensorFlow PluggableDevice plugin. "library_location" can be a path to a specific shared object, or a folder. If it is a folder, all shared objects will be loaded. When the library is loaded, devices/kernels registered in the library via StreamExecutor C API and Kernel/Op Registration C API are made available ...
github-repos
def compile(self, ops):
    def _compile():
        code = []
        for op in ops:
            if isinstance(op, SyscallInvoke):
                code.extend(self.syscall(op))
            elif isinstance(op, LoadRegister):
                code.extend(self.reg_load(op.register, op.value))
            elif isinstance(o...
Translate a list of operations into its assembler source. Arguments: ops(list): A list of shellcode operations. Returns: str: The assembler source code that implements the shellcode.
codesearchnet
def _prepare_summary_table(rows):
    if not rows:
        return []
    key_field = 'job-name'
    if key_field not in rows[0]:
        key_field = 'job-id'
    grouped = collections.defaultdict(
        lambda: collections.defaultdict(lambda: []))
    for row in rows:
        grouped[row.get(key_field, '')][row.get('status', '')] += [ro...
Create a new table that is a summary of the input rows. All rows with the same (job-name or job-id, status) pair are grouped together. Args: rows: the input rows, a list of dictionaries. Returns: A new row set of summary information.
juraj-google-style
def write(self, inputdata):
    if VERBOSE:
        _print_out('\nDummy_serial: Writing to port. Given:' + repr(inputdata) + '\n')
    if sys.version_info[0] > 2:
        if not type(inputdata) == bytes:
            raise TypeError('The input must be type bytes. Given:' + repr(inputdata))
        inputstrin...
Write to a port on dummy_serial. Args: inputdata (string/bytes): data for sending to the port on dummy_serial. Will affect the response for subsequent read operations. Note that for Python2, the inputdata should be a **string**. For Python3 it should be of type **bytes**.
codesearchnet
def check_submission_successful(self, submission_id=None):
    status = self.submission_status(submission_id)
    success = bool(status['concordance']['value'])
    return success
Check if the last submission passes submission criteria. Args: submission_id (str, optional): submission of interest, defaults to the last submission done with the account Return: bool: True if the submission passed all checks, False otherwise. Example: >>> api = NumerAPI(secret_key="..", public_id="..") >>> api.upl...
codesearchnet
def _get_image(structure, site):
    original_site = structure[NearNeighbors._get_original_site(structure, site)]
    image = np.around(np.subtract(site.frac_coords, original_site.frac_coords))
    image = tuple(image.astype(int))
    return image
Private convenience method for get_nn_info, gives lattice image from provided PeriodicSite and Structure. Image is defined as displacement from original site in structure to a given site. i.e. if structure has a site at (-0.1, 1.0, 0.3), then (0.9, 0, 2.3) -> jimage = (1, -1, 2). Note that this method takes O(number o...
codesearchnet
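The arithmetic in `_get_image` is just a rounded fractional-coordinate displacement; a numpy-free sketch (the function name is illustrative) reproducing the docstring's own example:

```python
def lattice_image(site_frac, original_frac):
    # Round each component of the displacement to the nearest integer,
    # mirroring np.around followed by an int cast.
    return tuple(int(round(s - o)) for s, o in zip(site_frac, original_frac))

# Docstring example: original site (-0.1, 1.0, 0.3),
# given site (0.9, 0, 2.3) -> image (1, -1, 2).
print(lattice_image((0.9, 0.0, 2.3), (-0.1, 1.0, 0.3)))  # (1, -1, 2)
```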
def get_config(config_schema, env=None):
    if env is None:
        env = os.environ
    return parser.parse_env(config_schema, env)
Parse config from the environment against a given schema Args: config_schema: A dictionary mapping keys in the environment to envpy Schema objects describing the expected value. env: An optional dictionary used to override the environment rather than getting it from the os. Returns: A dictionary which maps the values...
codesearchnet
def read_video_decord(video_path: str, sample_indices_fn: Optional[Callable]=None, **kwargs):
    requires_backends(read_video_decord, ['decord'])
    from decord import VideoReader, cpu
    vr = VideoReader(uri=video_path, ctx=cpu(0))
    video_fps = vr.get_avg_fps()
    total_num_frames = len(vr)
    duration = total...
Decode a video using the Decord backend. Args: video_path (`str`): Path to the video file. sample_indices_fn (`Callable`, *optional*): A callable function that will return indices at which the video should be sampled. If the video has to be loaded using a different sampling technique than provided by `num_frames` o...
github-repos
def start_automated_run(path, automated_run_id):
    with functions.DBContextManager(path) as session:
        automated_run = session.query(models.AutomatedRun).filter_by(
            id=automated_run_id).first()
        if not automated_run:
            raise exceptions.UserError('Automated run {} '
                                       ...
Starts automated run. This will automatically create base learners until the run finishes or errors out. Args: path (str): Path to Xcessiv notebook automated_run_id (str): Automated Run ID
juraj-google-style
def split(path):
    filesystem = FileSystems.get_filesystem(path)
    return filesystem.split(path)
Splits the given path into two parts. Splits the path into a pair (head, tail) such that tail contains the last component of the path and head contains everything up to that. For file-systems other than the local file-system, head should include the prefix. Args: path: path as a string Returns: a pair of path compon...
github-repos
def forward(self, input, tokens_per_expert):
    return sequential_experts_gemm(input, self.weight, tokens_per_expert.cpu())
Perform grouped matrix multiplication. Args: input (`torch.Tensor`): Input tensor of shape (num_tokens, in_features). tokens_per_expert (`torch.Tensor`): Number of tokens assigned to each expert. Returns: torch.Tensor: Output tensor of shape (num_tokens, out_features).
github-repos
def add_argument(self, arg_name, arg_value): if len(self._employers) > 0: self._logger.log( 'warn', 'Adding an argument after the employers have been created' ) if self._args is None: self._args = {} self._args[arg_nam...
Add an additional argument to be passed to the fitness function via the additional arguments dictionary; this argument/value is not tuned. Args: arg_name (string): name/dictionary key of argument arg_value (any): dictionary value of argument
juraj-google-style
def get_storage_account(access_token, subscription_id, rgname, account_name): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourcegroups/', rgname, '/providers/Microsoft.Storage/storageAccounts/', account_n...
Get the properties for the named storage account. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. account_name (str): Name of the storage account. Returns: HTTP response. JSON body of storage account properties.
juraj-google-style
def set_module_quantized_tensor_to_device(module, tensor_name, device, value=None, quantized_stats=None): if '.' in tensor_name: splits = tensor_name.split('.') for split in splits[:-1]: new_module = getattr(module, split) if new_module is None: raise ValueErr...
A helper function to set a given tensor (parameter or buffer) of a module on a specific device (note that doing `param.to(device)` creates a new tensor not linked to the parameter, which is why we need this function). The function is adapted from the `set_module_tensor_to_device` function from accelerate that is adapted to...
github-repos
def set_position_i(self, ivalue): ivalue_msb = int(ivalue) >> 8 ivalue_lsb = int(ivalue) & 0xff data = [] data.append(0x0B) data.append(self.servoid) data.append(RAM_WRITE_REQ) data.append(POSITION_KI_RAM) data.append(BYTE2) data.append(i...
Set the I gain of the position PID Args: ivalue (int): I value
juraj-google-style
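The packet above splits the 16-bit I gain into a most-significant and least-significant byte before appending both to the payload. A minimal, hypothetical helper illustrating just that packing step (`split_word` is our name, not from the servo library):

```python
def split_word(value):
    # Split a 16-bit value into (msb, lsb), as set_position_i does
    # with ivalue before writing the two bytes to the RAM register.
    msb = (int(value) >> 8) & 0xFF
    lsb = int(value) & 0xFF
    return msb, lsb

print(split_word(0x1234))  # (18, 52), i.e. (0x12, 0x34)
```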
def decode(self, encoded): encoded = super().decode(encoded) if encoded.numel() > 1: raise ValueError( '``decode`` decodes one label at a time, use ``batch_decode`` instead.') return self.itos[encoded.squeeze().item()]
Decodes ``encoded`` label. Args: encoded (torch.Tensor): Encoded label. Returns: object: Label decoded from ``encoded``.
juraj-google-style
def get(self): with self._not_empty: while not self._queue: self._not_empty.wait() item = self._queue.popleft() self._not_full.notify() return item
Remove and return an item from the queue. If the queue is empty, blocks until an item is available. Returns: an item from the queue
github-repos
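The `get` above is one half of the classic two-condition-variable queue: wait on `not_empty` in a loop, then notify `not_full` after removing an item. A self-contained sketch of the full pattern; the class and its `put` method are illustrative, not taken from the original source:

```python
import threading
from collections import deque

class SimpleQueue:
    """Minimal sketch of the condition-variable pattern used by get() above."""

    def __init__(self):
        self._queue = deque()
        lock = threading.Lock()
        # Both conditions share one lock so put/get serialize correctly.
        self._not_empty = threading.Condition(lock)
        self._not_full = threading.Condition(lock)

    def put(self, item):
        with self._not_empty:
            self._queue.append(item)
            self._not_empty.notify()   # wake one blocked get()

    def get(self):
        with self._not_empty:
            while not self._queue:     # loop guards against spurious wakeups
                self._not_empty.wait()
            item = self._queue.popleft()
            self._not_full.notify()    # useful once a capacity bound is added
            return item

q = SimpleQueue()
q.put("a")
print(q.get())  # a
```

The `while not self._queue` loop (rather than `if`) is essential: a woken waiter must re-check the predicate because another consumer may have taken the item first.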
def generate_multi_set_examples(options, test_sets): _prepare_dir(options) multi_gen_state = MultiGenState() options.multi_gen_state = multi_gen_state zip_path = os.path.join(options.output_path, options.zip_to_output) with zipfile.PyZipFile(zip_path, 'w') as archive: multi_gen_state.archive...
Generate examples for test sets. Args: options: Options containing information to generate examples. test_sets: List of the name of test sets to generate examples.
github-repos
def upload_dict(s3_conn, s3_prefix, data_to_sync): bucket_name, prefix = split_s3_path(s3_prefix) bucket = s3_conn.get_bucket(bucket_name) for key, value in data_to_sync.items(): full_name = '{}/{}.json'.format(prefix, key) s3_key = boto.s3.key.Key( bucket=bucket, ...
Syncs a dictionary to an S3 bucket, serializing each value in the dictionary as a JSON file with the key as its name. Args: s3_conn: (boto.s3.connection) an s3 connection s3_prefix: (str) the destination prefix data_to_sync: (dict)
juraj-google-style
def map_concepts_to_indicators( self, n: int = 1, min_temporal_res: Optional[str] = None ): for node in self.nodes(data=True): query_parts = [ "select Indicator from concept_to_indicator_mapping", f"where `Concept` like '{node[0]}'", ...
Map each concept node in the AnalysisGraph instance to one or more tangible quantities, known as 'indicators'. Args: n: Number of matches to keep min_temporal_res: Minimum temporal resolution that the indicators must have data for.
juraj-google-style
def ready_op(self): return self._ready_op
Return the Ready Op used by the supervisor. Returns: An Op or `None`.
github-repos
def GetTokenBalance(self, token, watch_only=0): total = Decimal(0) if watch_only > 0: for addr in self._watch_only: balance = token.GetBalance(self, addr) total += balance else: for contract in self._contracts.values(): ...
Get the balance of the specified token. Args: token (NEP5Token): an instance of type neo.Wallets.NEP5Token to get the balance from. watch_only (int): a value greater than 0 limits the balance to watch-only addresses. Returns: Decimal: total balance for `token`.
juraj-google-style
def predict(self, structure, icsd_vol=False): std_x = np.std([site.specie.X for site in structure]) sub_sites = [] bp_dict = {} for sp in list(structure.composition.keys()): if sp.atomic_radius: sub_sites.extend([site for ...
Given a structure, returns the predicted volume. Args: structure (Structure) : a crystal structure with an unknown volume. icsd_vol (bool) : True if the input structure's volume comes from ICSD. Returns: a float value of the predicted volume.
juraj-google-style
def all_files(path_name, keyword='', ext='', full_path=True, has_date=False, date_fmt=DATE_FMT) -> list: if (not os.path.exists(path=path_name)): return [] path_name = path_name.replace('\\', '/') if (keyword or ext): keyword = (f'*{keyword}*' if keyword else '*') if (not ext): ...
Search all files matching the given criteria. The returned list is sorted by last-modified time. Args: path_name: full path name keyword: keyword to search ext: file extensions, split by ',' full_path: whether to return the full path (default True) has_date: whether the file name contains a date (default False) date_fmt: date format to check for has_da...
codesearchnet
def depth_texture(self, size, data=None, *, samples=0, alignment=4) -> 'Texture': res = Texture.__new__(Texture) res.mglo, res._glo = self.mglo.depth_texture(size, data, samples, alignment) res._size = size res._components = 1 res._samples = samples res._dtype =...
Create a :py:class:`Texture` object. Args: size (tuple): The width and height of the texture. data (bytes): Content of the texture. Keyword Args: samples (int): The number of samples. Value 0 means no multisample format. alignment (int): The byte alignment 1, 2, 4 or 8. Returns: :py:class:`Texture` object
juraj-google-style
def get_content(url, headers={}, decoded=True): logging.debug('get_content: %s' % url) req = request.Request(url, headers=headers) if cookies: cookies.add_cookie_header(req) req.headers.update(req.unredirected_hdrs) response = urlopen_with_retry(req) data = response.read() ...
Gets the content of a URL by sending an HTTP GET request. Args: url: A URL. headers: Request headers used by the client. decoded: Whether to decode the response body using UTF-8 or the charset specified in Content-Type. Returns: The content as a string.
juraj-google-style
async def _run_and_verify(self, examples: List[Example]): async with GRPCClient() as client: await self._get_statuses(client, examples) await self._verify_examples(client, examples, self._origin)
Run beam examples and keep their output. Call the backend to start code processing for the examples. Then receive code output. Args: examples: beam examples that should be run
github-repos
class Constant(Initializer): def __init__(self, value=0.0): self.value = value def __call__(self, shape, dtype=None): dtype = standardize_dtype(dtype) return ops.cast(self.value, dtype=dtype) * ops.ones(shape=shape, dtype=dtype) def get_config(self): return {'value': seria...
Initializer that generates tensors with constant values. Only scalar values are allowed. The constant value provided must be convertible to the dtype requested when calling the initializer. Examples: >>> # Standalone usage: >>> initializer = Constant(10.) >>> values = initializer(shape=(2, 2)) >>> # Usage in a Kera...
github-repos
def get_range_tracker(self, start_position: Union[int, str, ObjectId]=None, stop_position: Union[int, str, ObjectId]=None) -> Union[_ObjectIdRangeTracker, OffsetRangeTracker, LexicographicKeyRangeTracker]: start_position, stop_position = self._replace_none_positions(start_position, stop_position) if isinstance(...
Returns a RangeTracker for a given position range depending on type. Args: start_position: starting position of the range. If 'None' default start position of the source must be used. stop_position: ending position of the range. If 'None' default stop position of the source must be used. Returns: a ``_ObjectIdRangeTr...
github-repos
def route_method(method_name, extra_part=False): def wrapper(callable_obj): if (method_name.lower() not in DEFAULT_ROUTES): raise HandlerHTTPMethodError('Invalid http method in method: {}'.format(method_name)) callable_obj.http_method = method_name.upper() callable_obj.url_extra...
Custom handler routing decorator. Signs a web handler callable with the HTTP method as an attribute. Args: method_name (str): HTTP method name (e.g. GET, POST) extra_part (bool): Indicates whether the wrapped callable name should be a part of the actual endpoint. Returns: A wrapped handler callable. examples: >>> @route_method('...
codesearchnet
def equals(self, rhs): try: return round(rhs-self._float_value, self._places) == 0 except TypeError: return False
Check to see if RHS is almost equal to float_value Args: rhs: the value to compare to float_value Returns: bool
juraj-google-style
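The same round-based comparison can be tried as a standalone function; `almost_equal` and its default precision are our own illustration, not part of the original class:

```python
def almost_equal(a, b, places=7):
    # Same comparison as equals() above: the difference rounded to
    # `places` decimal places must be zero; non-numeric input -> False.
    try:
        return round(a - b, places) == 0
    except TypeError:
        return False

print(almost_equal(1.0000001, 1.0, places=5))  # True
print(almost_equal(1.001, 1.0, places=5))      # False
print(almost_equal("x", 1.0))                  # False
```

Catching `TypeError` lets the comparison silently return `False` for incompatible operands instead of raising, matching the original `equals` behavior.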
def extract_keywords_from_text(index_page, no_items=5): index_page = MLStripper.strip_tags(index_page) tokenized_index = TextBlob(index_page).lower() def to_str(key): if isinstance(key, unicode): return key.encode('utf-8') return key present_keywords = [KEYWORDS_LOWER[key] f...
Try to process text on the `index_page` deduce the keywords and then try to match them on the Aleph's dataset. Function returns maximally `no_items` items, to prevent spamming the user. Args: index_page (str): Content of the page as UTF-8 string no_items (int, default 5): Number of items to return. Returns: list: Li...
codesearchnet
async def runCmdLine(self, line): if self.echoline: self.outp.printf(f'{self.cmdprompt}{line}') ret = None name = line.split(None, 1)[0] cmdo = self.getCmdByName(name) if cmdo is None: self.printf('cmd not found: %s' % (name,)) retu...
Run a single command line. Args: line (str): Line to execute. Examples: Execute the 'woot' command with the 'help' switch: await cli.runCmdLine('woot --help') Returns: object: Arbitrary data from the cmd class.
juraj-google-style
def pickle(self, path): with open(os.path.expanduser(path), 'wb') as pickle: cPickle.Pickler(pickle, cPickle.HIGHEST_PROTOCOL).dump(self)
Write objects to a Python pickle. Pickling is Python's method for serializing/deserializing Python objects. This allows you to save a fully functional JSSObject to disk, and then load it later, without having to retrieve it from the JSS. This method will pickle each item as its current type; so JSSListData objects wil...
codesearchnet
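The `pickle` method above uses Python 2's `cPickle`; in Python 3 the stdlib `pickle` module is the equivalent. A self-contained sketch of the same save pattern plus a matching loader, under the assumption that a highest-protocol dump is all that is needed (`Record`, `save`, and `load` are illustrative names, not from the original library):

```python
import os
import pickle
import tempfile

class Record:
    """Stand-in for any picklable object."""
    def __init__(self, name):
        self.name = name

def save(obj, path):
    # Serialize with the highest protocol, mirroring the method above.
    with open(os.path.expanduser(path), "wb") as fh:
        pickle.Pickler(fh, pickle.HIGHEST_PROTOCOL).dump(obj)

def load(path):
    with open(os.path.expanduser(path), "rb") as fh:
        return pickle.load(fh)

with tempfile.TemporaryDirectory() as tmp:
    p = os.path.join(tmp, "record.pkl")
    save(Record("demo"), p)
    print(load(p).name)  # demo
```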
def channels(self): resp = self._rtm_client.get('v1/current_team.channels') if resp.is_fail(): raise RTMServiceError('Failed to get channels of current team', resp) return resp.data['result']
Gets the channels of the current team Returns: list of Channel Raises: RTMServiceError when the request fails
codesearchnet
def read(self, viewport=None, components=3, *, attachment=0, alignment=1, dtype='f1') -> bytes: return self.mglo.read(viewport, components, attachment, alignment, dtype)
Read the content of the framebuffer. Args: viewport (tuple): The viewport. components (int): The number of components to read. Keyword Args: attachment (int): The color attachment. alignment (int): The byte alignment of the pixels. dtype (str): Data type. Returns: bytes
juraj-google-style
def randomize_weights(model, random_seed=0, buffers_to_skip=None): random.seed(random_seed) buffers = model.buffers buffer_ids = range(1, len(buffers)) if buffers_to_skip is not None: buffer_ids = [idx for idx in buffer_ids if idx not in buffers_to_skip] buffer_types = {} for graph in mo...
Randomize weights in a model. Args: model: The model in which to randomize weights. random_seed: The input to the random number generator (default value is 0). buffers_to_skip: The list of buffer indices to skip. The weights in these buffers are left unmodified.
github-repos
def _CheckIsLink(self, file_entry): if definitions.FILE_ENTRY_TYPE_LINK not in self._file_entry_types: return False return file_entry.IsLink()
Checks the is_link find specification. Args: file_entry (FileEntry): file entry. Returns: bool: True if the file entry matches the find specification, False if not.
juraj-google-style
def _WriteHeader(self, output_writer): header_string = '' if self._title: header_string = ' {0:s} '.format(self._title) header_string = self._HEADER_FORMAT_STRING.format(header_string) output_writer.Write(header_string)
Writes a header. Args: output_writer (OutputWriter): output writer.
juraj-google-style
def InitFromAff4Object(self, aff4_obj): attr_blacklist = [] self.types = [] for aff4_cls in aff4_obj.__class__.__mro__: if not hasattr(aff4_cls, "SchemaCls"): continue type_repr = ApiAff4ObjectType().InitFromAff4Object( aff4_obj, aff4_cls, attr_blacklist) if typ...
Initializes the current instance from an Aff4Object. Iterates the inheritance hierarchy of the given Aff4Object and adds a ApiAff4ObjectType for each class found in the hierarchy. Args: aff4_obj: An Aff4Object as source for the initialization. Returns: A reference to the current instance.
juraj-google-style
def parse_headers(cls, msg): return list(email.parser.Parser().parsestr(msg).items())
Parse HTTP headers. Args: msg (str): HTTP message. Returns: (List[Tuple[str, str]]): List of header tuples.
codesearchnet
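Since `parse_headers` just delegates to the stdlib RFC 822 parser, it is easy to exercise in isolation; this standalone function mirrors the classmethod above:

```python
import email.parser

def parse_headers(msg):
    # Reuse the stdlib email parser, as the classmethod above does:
    # parsestr() builds a Message, items() yields (name, value) pairs.
    return list(email.parser.Parser().parsestr(msg).items())

hdrs = parse_headers("Host: example.com\r\nContent-Type: text/html\r\n\r\n")
print(hdrs)  # [('Host', 'example.com'), ('Content-Type', 'text/html')]
```

This works because HTTP header blocks share the RFC 822 `Name: value` wire format that the email parser was written for.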
def delete_data(self, url, *args, **kwargs): res = self._conn.delete(url, headers=self._prepare_headers(**kwargs)) if res.status_code == 200 or res.status_code == 202: return True else: return False
Deletes data under the provided url and returns the status as a boolean. Args: **url**: address of file to be deleted .. versionadded:: 0.3.2 **additional_headers**: (optional) Additional headers to be used with request Returns: Boolean. True if request was successful. False if not.
juraj-google-style
def vertex_indices_in_segments(self, segments, ret_face_indices=False): import numpy as np import warnings face_indices = np.array([]) vertex_indices = np.array([]) if self.segm is not None: try: segments = [self.segm[name] for name in segmen...
Given a list of segment names, return an array of vertex indices for all the vertices in those segments. Args: segments: a list of segment names ret_face_indices: if `True`, also return face indices
juraj-google-style
def volatility_fn(self): return self._volatility_fn
Python callable calculating the instantaneous volatility. The callable should accept two real `Tensor` arguments of the same dtype and shape `times_shape`. The first argument is the scalar time t, the second argument is the value of Ito process X - `Tensor` of shape `batch_shape + sample_shape + [dim]`, where `batch_s...
github-repos
def Address(self): if (self._address is None): self._address = Crypto.ToAddress(self.ScriptHash) return self._address
Get the wallet address associated with the token. Returns: str: base58 encoded string representing the wallet address.
codesearchnet
def add_buffer(self, buf_header, buf_payload): if ('num_buffers' in self._header): self._header['num_buffers'] += 1 else: self._header['num_buffers'] = 1 self._header_json = None self._buffers.append((buf_header, buf_payload))
Associate a buffer header and payload with this message. Args: buf_header (``JSON``) : a buffer header buf_payload (``JSON`` or bytes) : a buffer payload Returns: None Raises: MessageError
codesearchnet
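The if/else counter in `add_buffer` can be condensed with `dict.get`. A minimal sketch of the same bookkeeping, where the `Message` class is a stripped-down stand-in for the original, not its actual implementation:

```python
class Message:
    """Minimal stand-in showing the buffer bookkeeping of add_buffer above."""

    def __init__(self, header=None):
        self._header = dict(header or {})
        self._header_json = None
        self._buffers = []

    def add_buffer(self, buf_header, buf_payload):
        # Increment (or initialize) the buffer count, then invalidate the
        # cached header JSON so it is re-serialized with the new count.
        self._header["num_buffers"] = self._header.get("num_buffers", 0) + 1
        self._header_json = None
        self._buffers.append((buf_header, buf_payload))

m = Message()
m.add_buffer({"id": "b0"}, b"\x00\x01")
m.add_buffer({"id": "b1"}, b"\x02")
print(m._header["num_buffers"])  # 2
```

Resetting `_header_json` to `None` is the important side effect: any lazily cached serialization of the header must be rebuilt once `num_buffers` changes.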