code (string, lengths 20 to 4.93k) | docstring (string, lengths 33 to 1.27k) | source (3 classes)
def _on_report(self, report, connection_id): self._logger.info('Received report: %s', str(report)) self._trigger_callback('on_report', connection_id, report) return False
Callback function called when a report has been processed. Args: report (IOTileReport): The report object connection_id (int): The connection id related to this report Returns: - True to indicate that IOTileReportParser should also keep a copy of the report or False to indicate it should delete it.
codesearchnet
def stateful_ops(self): self._create_definition_if_needed() return self._stateful_ops
Returns the list of stateful ops in function definition. Returns: A list of (op.name, op.type) pairs.
github-repos
def apply_middleware(*middlewares): def inner(create_store_): def create_wrapper(reducer, enhancer=None): store = create_store_(reducer, enhancer) dispatch = store['dispatch'] middleware_api = { 'get_state': store['get_state'], 'dispat...
creates an enhancer function composed of middleware Args: *middlewares: list of middleware functions to apply Returns: an enhancer for subsequent calls to create_store()
juraj-google-style
def save_spectre_plot(self, filename='spectre.pdf', img_format='pdf', sigma=0.05, step=0.01): (d, plt) = self.get_spectre_plot(sigma, step) plt.savefig(filename, format=img_format)
Save matplotlib plot of the spectre to a file. Args: filename: Filename to write to. img_format: Image format to use. Defaults to pdf. sigma: Full width at half maximum in eV for normal functions. step: bin interval in eV
codesearchnet
def union_of_bboxes(height, width, bboxes, erosion_rate=0.0, to_int=False): (x1, y1) = (width, height) (x2, y2) = (0, 0) for b in bboxes: (w, h) = ((b[2] - b[0]), (b[3] - b[1])) (lim_x1, lim_y1) = ((b[0] + (erosion_rate * w)), (b[1] + (erosion_rate * h))) (lim_x2, lim_y2) = ((b[2] - ...
Calculate union of bounding boxes. Args: height (float): Height of image or space. width (float): Width of image or space. bboxes (list): List-like of bounding boxes. Format is `[x_min, y_min, x_max, y_max]`. erosion_rate (float): How much each bounding box can be shrunk, useful for erosive cropping. Set this in range ...
codesearchnet
def ConvertStringToFilename(name): return re.sub( r"\W", lambda x: "%%%02X" % ord(x.group(0)), name, flags=re.UNICODE).rstrip("/")
Converts a unicode string to a filesystem safe filename. For maximum compatibility we escape all chars which are not alphanumeric (in the unicode sense). Args: name: a unicode string that is part of a subject. Returns: A safe filename with escaped special chars.
juraj-google-style
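The escaping scheme in the row above is self-contained; a runnable re-creation (lower-cased name, for illustration only) behaves like this:

```python
import re

def convert_string_to_filename(name):
    # Escape every non-alphanumeric character (unicode-aware) as %XX,
    # then strip any trailing slashes, mirroring the row above.
    return re.sub(
        r"\W", lambda x: "%%%02X" % ord(x.group(0)),
        name, flags=re.UNICODE).rstrip("/")

print(convert_string_to_filename("foo bar/baz"))  # foo%20bar%2Fbaz
```

Note that the trailing `rstrip("/")` only ever removes literal slashes, and slashes have already been escaped to `%2F` by that point, so it is effectively a no-op for escaped input.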
def add(self, name, value, bitmask=DEFMASK): _add_enum_member(self._eid, name, value, bitmask)
Add an enum member Args: name: Name of the member value: value of the member bitmask: bitmask. Only use if enum is a bitfield.
juraj-google-style
def setMaximum(self, maximum): if (not isinstance(maximum, int)): raise TypeError('Argument is not of type int or long') self._maximum = maximum
Setter for `_maximum`. Args: maximum (int or long): new `_maximum` value
codesearchnet
def Readdir(self, path, fh=None): if self.DataRefreshRequired(path): self._RunAndWaitForVFSFileUpdate(path) return super(GRRFuse, self).Readdir(path, fh=None)
Updates the directory listing from the client. Args: path: The path to the directory to update. Client is inferred from this. fh: A file handler. Not used. Returns: A list of filenames.
juraj-google-style
def load_weights_from_hdf5_group(f, model): if 'keras_version' in f.attrs: original_keras_version = f.attrs['keras_version'] if hasattr(original_keras_version, 'decode'): original_keras_version = original_keras_version.decode('utf8') else: original_keras_version = '1' if ...
Implements topological (order-based) weight loading. Args: f: A pointer to a HDF5 group. model: Model instance. Raises: ValueError: in case of mismatch between provided layers and weights file.
github-repos
def build_subresource_uri(self, resource_id_or_uri=None, subresource_id_or_uri=None, subresource_path=''): if subresource_id_or_uri and "/" in subresource_id_or_uri: return subresource_id_or_uri else: if not resource_id_or_uri: raise exceptions.HPOneViewV...
Helps to build a URI with resource path and its sub resource path. Args: resource_id_or_uri: ID/URI of the main resource. subresource_id_or_uri: ID/URI of the sub resource. subresource_path: Sub resource path to be added to the URI. Returns: The built URI.
juraj-google-style
def GetNTFSFileEntryByPathSpec(self, path_spec): location = getattr(path_spec, 'location', None) mft_attribute = getattr(path_spec, 'mft_attribute', None) mft_entry = getattr(path_spec, 'mft_entry', None) if mft_attribute is not None and mft_entry is not None: fsntfs_file_entry = s...
Retrieves the NTFS file entry for a path specification. Args: path_spec (PathSpec): a path specification. Returns: pyfsntfs.file_entry: NTFS file entry. Raises: PathSpecError: if the path specification is missing location and MFT entry.
juraj-google-style
def list_documents(self, limit=None): limit_str = '' if limit: try: limit_str = 'LIMIT {}'.format(int(limit)) except (TypeError, ValueError): pass query = ('SELECT identifier FROM identifier_index ' + limit_str) for row in self.backend.library.database.connection....
Generates vids of all indexed identifiers. Args: limit (int, optional): If not empty, the maximum number of results to return Generates: str: vid of the document.
codesearchnet
def _get_oxm_field_int(self): if (self.oxm_class == OxmClass.OFPXMC_OPENFLOW_BASIC): return OxmOfbMatchField(self.oxm_field).value elif ((not isinstance(self.oxm_field, int)) or (self.oxm_field > 127)): raise ValueError(f'oxm_field above 127: "{self.oxm_field}".') return self.oxm_field
Return a valid integer value for oxm_field. Used while packing. Returns: int: valid oxm_field value. Raises: ValueError: If :attribute:`oxm_field` is bigger than 7 bits or should be :class:`OxmOfbMatchField` and the enum has no such value.
codesearchnet
def multiprocess_mapping(func, iterable): if os.name == 'nt': return list(map(func, iterable)) try: p = multiprocessing.Pool() return_data = list(p.imap(func, iterable)) p.close() p.join() return return_data except OSError: return list(map(func,...
Multiprocess mapping the given function on the given iterable. This only works in Linux and Mac systems since Windows has no forking capability. On Windows we fall back on single processing. Also, if we reach memory limits we fall back on single cpu processing. Args: func (func): the function to apply iterable (itera...
juraj-google-style
def read_scan(self): def floatList(l): ' return a list of float from a list of string ' return [float(v) for v in l] scan_patt = re.compile('^\\sSummary of the potential surface scan:') optscan_patt = re.compile('^\\sSummary of Optimized Potential Surface Scan') data = {'energies': list...
Read a potential energy surface from a gaussian scan calculation. Returns: A dict: {"energies": [ values ], "coords": {"d1": [ values ], "A2": [ values ], ... }} "energies" are the energies of all points of the potential energy surface. "coords" are the internal coordinates used to compute the potential energy surfa...
codesearchnet
def _mark_maybe_missing_members(self, values): values = list(values) seen = set() while values: v = values.pop(0) if v not in seen: seen.add(v) if isinstance(v, abstract.SimpleValue): v.maybe_missing_members = True for child in v.instan...
Set maybe_missing_members to True on these values and their type params. Args: values: A list of BaseValue objects. On every instance among the values, recursively set maybe_missing_members to True on the instance and its type parameters.
github-repos
def _get_job_metadata(provider, user_id, job_name, script, task_ids, user_project, unique_job_id): create_time = dsub_util.replace_timezone(datetime.datetime.now(), tzlocal()) user_id = (user_id or dsub_util.get_os_user()) job_metadata = provider.prepare_job_metadata(script.name, job_name, user_id, create_t...
Allow provider to extract job-specific metadata from command-line args. Args: provider: job service provider user_id: user submitting the job job_name: name for the job script: the script to run task_ids: a set of the task-ids for all tasks in the job user_project: name of the project to be billed for the request uniq...
codesearchnet
def register_site(self): if self.oxd_id: logger.info('Client is already registered. ID: %s', self.oxd_id) return self.oxd_id params = {'authorization_redirect_uri': self.authorization_redirect_uri, 'oxd_rp_programming_language': 'python'} for op in self.opt_params: if self.config.get...
Function to register the site and generate a unique ID for the site Returns: **string:** The ID of the site (also called client id) if the registration is successful Raises: **OxdServerError:** If the site registration fails.
codesearchnet
def get_morph_files(directory): lsdir = (os.path.join(directory, m) for m in os.listdir(directory)) return list(filter(_is_morphology_file, lsdir))
Get a list of all morphology files in a directory Returns: list with all files with extensions '.swc' , 'h5' or '.asc' (case insensitive)
codesearchnet
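The `_is_morphology_file` helper is not shown in the row; a runnable sketch that assumes it simply checks for the documented extensions ('.swc', '.h5', '.asc', case-insensitive) could look like:

```python
import os

MORPH_EXTENSIONS = {'.swc', '.h5', '.asc'}

def _is_morphology_file(path):
    # Hypothetical stand-in for the helper the row relies on: accept
    # regular files with a known morphology extension (case-insensitive).
    return (os.path.isfile(path) and
            os.path.splitext(path)[1].lower() in MORPH_EXTENSIONS)

def get_morph_files(directory):
    lsdir = (os.path.join(directory, m) for m in os.listdir(directory))
    return list(filter(_is_morphology_file, lsdir))
```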
def log(cls, event=None, actor=None, data=None): from cloud_inquisitor.log import auditlog auditlog(event=event, actor=actor, data=data)
Generate and insert a new event Args: event (str): Action performed actor (str): Actor (user or subsystem) triggering the event data (dict): Any extra data necessary for describing the event Returns: `None`
juraj-google-style
def basis_state(str_state, num): n = int(str_state, 2) if (num >= len(str_state)): state = np.zeros((1 << num), dtype=complex) state[n] = 1 return state else: raise QiskitError('size of bitstring is greater than num.')
Return a basis state ndarray. Args: str_state (string): a string representing the state. num (int): the number of qubits Returns: ndarray: state(2**num), a quantum state with the given basis state. Raises: QiskitError: if the dimensions are wrong
codesearchnet
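The row above depends on Qiskit only for its error type; a dependency-free sketch with `ValueError` substituted for `QiskitError` is:

```python
import numpy as np

def basis_state(str_state, num):
    # The index of the basis state is the bitstring read as a binary integer.
    n = int(str_state, 2)
    if num >= len(str_state):
        state = np.zeros(1 << num, dtype=complex)
        state[n] = 1
        return state
    raise ValueError('size of bitstring is greater than num.')

state = basis_state('01', 2)  # amplitude 1 at index 1 of a 4-dim vector
```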
def from_tuples(year_month_day_tuples, validate=True): years, months, days = ([], [], []) for t in year_month_day_tuples: years.append(t[0]) months.append(t[1]) days.append(t[2]) years = tf.constant(years, dtype=tf.int32) months = tf.constant(months, dtype=tf.int32) days = tf...
Creates DateTensor from a sequence of year-month-day Tuples. Args: year_month_day_tuples: Sequence of (year, month, day) Tuples. Months are 1-based; constants from Months enum can be used instead of ints. Days are also 1-based. validate: Whether to validate the dates. Returns: DateTensor object. #### Example ```pyt...
github-repos
def GetArtifactsForCollection(os_name, artifact_list): artifact_arranger = ArtifactArranger(os_name, artifact_list) artifact_names = artifact_arranger.GetArtifactsInProperOrder() return artifact_names
Wrapper for the ArtifactArranger. Extend the artifact list by dependencies and sort the artifacts to resolve the dependencies. Args: os_name: String specifying the OS name. artifact_list: List of requested artifact names. Returns: A list of artifacts such that if they are collected in the given order their dependenc...
juraj-google-style
def get_variable(self, feature_column, name): if name in self._cols_to_vars_map[feature_column]: return self._cols_to_vars_map[feature_column][name] raise ValueError('Variable does not exist.')
Returns an existing variable. Args: feature_column: A `FeatureColumn` object this variable corresponds to. name: variable name.
github-repos
def get_vms(self, vm_names=None): if not vm_names: return self._vms.copy() missing_vms = [] vms = {} for name in vm_names: try: vms[name] = self._vms[name] except KeyError: missing_vms.append(n...
Returns the vm objects associated with vm_names. If vm_names is None, returns all the vms in the prefix. Args: vm_names (list of str): The names of the requested vms Returns: dict: The requested vm objects, indexed by name Raises: utils.LagoUserException: If a vm name doesn't exist
juraj-google-style
def pack_tangents(tensors): return TangentInfo(*pywrap_tfe.TFE_Py_PackJVPs(tensors))
Packs forward accumulator state into a TangentInfo tuple. Args: tensors: A flat list of Tensors to pack forward accumulator state for. Returns: A tuple of (indices, tangents): indices: A sequence of sequences of two-element tuples. Each forward accumulator is represented as a sequence of tuples with (primal_index, jv...
github-repos
def _SetupDatabase(host=None, port=None, user=None, password=None, database=None, client_key_path=None, client_cert_path=None, ca_cert_path=None): with contextlib.closing( _Con...
Connect to the given MySQL host and create a utf8mb4_unicode_ci database. Args: host: The hostname to connect to. port: The port to connect to. user: The username to connect as. password: The password to connect with. database: The database name to create. client_key_path: The path of the client private key file. clie...
juraj-google-style
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0): mask = input_ids.ne(padding_idx).int() incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask return incremental_indices.long() + padding_idx
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols are ignored. This is modified from fairseq's `utils.make_positions`. Args: input_ids (torch.Tensor): tensor of input token ids. padding_idx (int): id of the padding token. past_key_values_length (int): offset added to the computed positions. Returns: torch.Tensor
github-repos
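The row above is torch code; the same arithmetic (mask out padding, cumulative-count the remaining tokens, shift by `padding_idx`) can be sketched dependency-free in numpy:

```python
import numpy as np

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # numpy re-implementation of the torch row above, for illustration only.
    input_ids = np.asarray(input_ids)
    # 1 where there is a real token, 0 at padding positions.
    mask = (input_ids != padding_idx).astype(np.int64)
    # Running count of real tokens, shifted by the cached-sequence length,
    # then re-zeroed at padding positions.
    incremental_indices = (np.cumsum(mask, axis=1) + past_key_values_length) * mask
    return incremental_indices + padding_idx

# padding_idx=1: pad positions stay at 1, real tokens count up from 2.
print(create_position_ids_from_input_ids([[5, 6, 1, 1]], padding_idx=1))  # [[2 3 1 1]]
```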
def serial_wire_viewer(jlink_serial, device): buf = StringIO.StringIO() jlink = pylink.JLink(log=buf.write, detailed_log=buf.write) jlink.open(serial_no=jlink_serial) jlink.set_tif(pylink.enums.JLinkInterfaces.SWD) jlink.connect(device, verbose=True) jlink.coresight_configure() ...
Implements a Serial Wire Viewer (SWV). A Serial Wire Viewer (SWV) allows us implement real-time logging of output from a connected device over Serial Wire Output (SWO). Args: jlink_serial (str): the J-Link serial number device (str): the target CPU Returns: Always returns ``0``. Raises: JLinkException: on error
juraj-google-style
def depricated_name(newmethod): def decorator(func): @wraps(func) def wrapper(*args, **kwargs): warnings.simplefilter('always', DeprecationWarning) warnings.warn( "Function {} is depricated, please use {} instead.".format(func.__name__, newmethod), ...
Decorator for warning the user of deprecated functions before use. Args: newmethod (str): Name of method to use instead.
juraj-google-style
def CheckCommaSpacing(filename, clean_lines, linenum, error): raw = clean_lines.lines_without_raw_strings line = clean_lines.elided[linenum] if (Search(',[^,\\s]', ReplaceAll('\\boperator\\s*,\\s*\\(', 'F(', line)) and Search(',[^,\\s]', raw[linenum])): error(filename, linenum, 'whitespace/comma', 3...
Checks for horizontal spacing near commas and semicolons. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. error: The function to call with any errors found.
codesearchnet
def match_global_phase(a: np.ndarray, b: np.ndarray) -> Tuple[(np.ndarray, np.ndarray)]: if (a.shape != b.shape): return (a, b) k = max(np.ndindex(*a.shape), key=(lambda t: abs(b[t]))) def dephase(v): r = np.real(v) i = np.imag(v) if (i == 0): return ((- 1) if (r...
Phases the given matrices so that they agree on the phase of one entry. To maximize precision, the position with the largest entry from one of the matrices is used when attempting to compute the phase difference between the two matrices. Args: a: A numpy array. b: Another numpy array. Returns: A tuple (a', b') where...
codesearchnet
def _set_checkpoint_initializer(variable, ckpt_file, tensor_name, slice_spec, name='checkpoint_initializer'): base_type = variable.dtype.base_dtype with ops.device(variable.device), ops.device('/cpu:0'): restore_op = io_ops.restore_v2(ckpt_file, [tensor_name], [slice_spec], [base_type], name=name)[0] ...
Overrides given variable's initialization op. Sets variable initializer to assign op that initializes variable from tensor's value in the checkpoint. Args: variable: `tf.Variable` object. ckpt_file: string, full path of the checkpoint. tensor_name: Name of the tensor to load from the checkpoint. slice_spec: Slice spe...
github-repos
def save(self, items): rows = [] indx = self.indx size = 0 tick = s_common.now() for item in items: byts = s_msgpack.en(item) size += len(byts) lkey = s_common.int64en(indx) indx += 1 rows.append((lkey, by...
Save a series of items to a sequence. Args: items (tuple): The series of items to save into the sequence. Returns: The index of the first item
juraj-google-style
def healthy_services(self, role=None): try: query = self.rr.table(self.table) if role: query = query.get_all(role, index='role') query = query.filter( lambda svc: r.now().sub(svc["last_heartbeat"]) < svc["ttl"] ).order_b...
Look up healthy services in the registry. A service is considered healthy if its 'last_heartbeat' was less than 'ttl' seconds ago Args: role (str, optional): role name Returns: If `role` is supplied, returns list of healthy services for the given role, otherwise returns list of all healthy services. May return an em...
juraj-google-style
def parse_uniprot_txt_file(infile): uniprot_metadata_dict = {} metadata = old_parse_uniprot_txt_file(infile) metadata_keys = list(metadata.keys()) if metadata_keys: metadata_key = metadata_keys[0] else: return uniprot_metadata_dict uniprot_metadata_dict['seq_len'] = len(s...
Parse a raw UniProt metadata file and return a dictionary. Args: infile: Path to metadata file Returns: dict: Metadata dictionary
juraj-google-style
def build_twisted_request(self, method, url, extra_headers={}, body_producer=None, full_url=False): uri = (url if full_url else self._url(url)) raw_headers = self.get_headers() if extra_headers: raw_headers.update(extra_headers) headers = http_headers.Headers() for header in raw_headers: ...
Build a request for twisted Args: method (str): Request method (GET/POST/PUT/DELETE/etc.) If not specified, it will be POST if post_data is not None url (str): Destination URL (full, or relative) Kwargs: extra_headers (dict): Headers (override default connection headers, if any) body_producer (:class:`twisted.web.iwe...
codesearchnet
def assert_keys_exist(self, caller, *keys): assert keys, '*keys parameter must be specified.' for key in keys: self.assert_key_exists(key, caller)
Assert that context contains keys. Args: caller: string. Calling function or module name - this is used to construct error messages. keys: validates that these keys exist in context. Raises: KeyNotInContextError: When a key doesn't exist in context.
codesearchnet
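`KeyNotInContextError` and the context class are not shown in the row; a minimal sketch over a plain `dict`, with `KeyError` standing in for the custom exception, is:

```python
class Context(dict):
    """Minimal dict-based context sketch; KeyError stands in for the
    library's KeyNotInContextError, which is an assumption here."""

    def assert_key_exists(self, key, caller):
        if key not in self:
            raise KeyError(
                f"context['{key}'] doesn't exist. It must exist for {caller}.")

    def assert_keys_exist(self, caller, *keys):
        assert keys, '*keys parameter must be specified.'
        for key in keys:
            self.assert_key_exists(key, caller)

ctx = Context(a=1, b=2)
ctx.assert_keys_exist('demo', 'a', 'b')  # passes silently
```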
def read_parquet(path, engine="auto", columns=None, **kwargs): return DataFrame( query_compiler=BaseFactory.read_parquet( path=path, columns=columns, engine=engine, **kwargs ) )
Load a parquet object from the file path, returning a DataFrame. Args: path: The filepath of the parquet file. We only support local files for now. engine: This argument doesn't do anything for now. kwargs: Pass into parquet's read_pandas function.
juraj-google-style
def get_neighbors_of_site_with_index(struct, n, approach='min_dist', delta=0.1, cutoff=10.0): if (approach == 'min_dist'): return MinimumDistanceNN(tol=delta, cutoff=cutoff).get_nn(struct, n) elif (approach == 'voronoi'): return VoronoiNN(tol=delta, cutoff=cutoff).get_nn(struct, n) elif (app...
Returns the neighbors of a given site using a specific neighbor-finding method. Args: struct (Structure): input structure. n (int): index of site in Structure object for which motif type is to be determined. approach (str): type of neighbor-finding approach, where "min_dist" will use the MinimumDistanceNN class, "voro...
codesearchnet
def validate(self, read_tuple_name): if (reg_lrn.match(read_tuple_name) is None): self.report_error(read_tuple_name=read_tuple_name, error_name='wrong_read_tuple_name_structure', message="'{}' is not matched".format(reg_lrn)) else: parts = read_tuple_name.split('__') if (reg_prefix_part....
Check RNF validity of a read tuple. Args: read_tuple_name (str): Read tuple name to be checked.
codesearchnet
def get_associated_uplink_groups(self): uri = '{}/associatedUplinkGroups'.format(self.data['uri']) return self._helper.do_get(uri)
Gets the uplink sets which are using an Ethernet network. Returns: list: URIs of the associated uplink sets.
codesearchnet
def _delocalize_logging_command(self, logging_path, user_project): logging_prefix = os.path.splitext(logging_path.uri)[0] if logging_path.file_provider == job_model.P_LOCAL: mkdir_cmd = 'mkdir -p "%s"\n' % os.path.dirname(logging_prefix) cp_cmd = 'cp' elif logging_path.file_prov...
Returns a command to delocalize logs. Args: logging_path: location of log files. user_project: name of the project to be billed for the request. Returns: eg. 'gs://bucket/path/myfile' or 'gs://bucket/script-foobar-12'
juraj-google-style
def _gen_indicator_method(self, name, custom_class, value_count): method_name = name.replace(' ', '_').lower() def method_1(value1, xid, **kwargs): indicator_obj = custom_class(value1, xid, **kwargs) return self._indicator(indicator_obj) ...
Dynamically generate custom Indicator methods. Args: name (str): The name of the method. custom_class (object): The class to add. value_count (int): The number of value parameters to support.
juraj-google-style
def _determine_trace_and_create_report(self, graph, ops_in_exec_path, graph_summary_tag): self._check_trace_files() graph_order = tensor_tracer_report.sort_tensors_and_ops(graph) tensor_trace_points = graph.get_collection(_TENSOR_TRACER_COLLECTION) report_handler = tensor_tracer_report.TTReportHandle() ...
Work needs to be done prior to TPU or CPU tracing. Args: graph: tf.graph ops_in_exec_path: Set of operations in the execution path. graph_summary_tag: the summary tag name for the given graph. Returns: An instance of tensor_tracer_report.TensorTraceOrder, containing list of tensors to be traced with their topological ...
github-repos
def slice_inputs(self, indices_dataset, inputs): flat_inputs = nest.flatten(inputs) def dynamic_shape_like(t): shape = list(t.shape) shape[0] = None return tuple(shape) flat_dtypes = [inp.dtype for inp in flat_inputs] contiguous = True if self._shuffle and self._shuffle != '...
Slice inputs into a Dataset of batches. Given a Dataset of batch indices and the unsliced inputs, this step slices the inputs in a parallelized fashion and produces a dataset of input batches. Args: indices_dataset: A Dataset of batched indices inputs: A python data structure that contains the inputs, targets, and po...
github-repos
def convert_variables_to_constants(sess, input_graph_def, output_node_names, variable_names_whitelist=None, variable_names_blacklist=None): ret = convert_variables_to_constants_from_session_graph(session=sess, graph_def=input_graph_def, output_node_names=output_node_names, variable_names_allowlist=variable_names_wh...
Replaces all the variables in a graph with constants of the same values. If you have a trained graph containing Variable ops, it can be convenient to convert them all to Const ops holding the same values. This makes it possible to describe the network fully with a single GraphDef file, and allows the removal of a lot ...
github-repos
def dfa(self, ttab: TransitionTable, init: int = 0) -> int: state = init while True: disp = ttab[state] ch = self.peek() state = disp.get(ch, disp[""])() if state < 0: return state self.offset += 1
Run a DFA and return the final (negative) state. Args: ttab: Transition table (with possible side-effects). init: Initial state. Raises: EndOfInput: If past the end of `self.input`.
juraj-google-style
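The row's `dfa` method relies on `self.peek()` and `self.offset`, which are not shown; a minimal harness (the `peek` behaviour past end-of-input is an assumption here, returning '' instead of raising `EndOfInput`) is:

```python
class Scanner:
    """Minimal harness for the dfa() loop above; peek/offset are assumptions."""

    def __init__(self, text):
        self.input = text
        self.offset = 0

    def peek(self):
        # One-character lookahead; empty string past end of input.
        return self.input[self.offset] if self.offset < len(self.input) else ''

    def dfa(self, ttab, init=0):
        state = init
        while True:
            disp = ttab[state]
            ch = self.peek()
            # Dispatch on the current character, falling back to the "" default;
            # each table entry is a zero-argument callable returning the next state.
            state = disp.get(ch, disp[""])()
            if state < 0:
                return state
            self.offset += 1

# Accept a run of 'a's: state 0 loops on 'a'; anything else ends with -1.
ttab = [{'a': lambda: 0, "": lambda: -1}]
sc = Scanner("aaab")
assert sc.dfa(ttab) == -1 and sc.offset == 3
```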
def waitForEvent(self, event_name, predicate, timeout=None): if timeout is None: timeout = self.default_timeout_sec deadline = time.perf_counter() + timeout while time.perf_counter() <= deadline: single_rpc_timeout = deadline - time.perf_counter() if single_rpc_timeout < 0: ...
Waits for an event of the specific name that satisfies the predicate. This call will block until the expected event has been received or time out. The predicate function defines the condition the event is expected to satisfy. It takes an event and returns True if the condition is satisfied, False otherwise. Note all...
github-repos
def check(self, dsm, simplicity_factor=2, **kwargs): economy_of_mechanism = False message = '' data = dsm.data categories = dsm.categories dsm_size = dsm.size[0] if not categories: categories = ['appmodule'] * dsm_size dependency_nu...
Check economy of mechanism. As first abstraction, number of dependencies between two modules < 2 * the number of modules (dependencies to the framework are NOT considered). Args: dsm (:class:`DesignStructureMatrix`): the DSM to check. simplicity_factor (int): simplicity factor. Returns: bool: True if economic, else ...
juraj-google-style
def configure_profile(msg_type, profile_name, data, auth): with jsonconfig.Config('messages', indent=4) as cfg: write_data(msg_type, profile_name, data, cfg) write_auth(msg_type, profile_name, auth, cfg) print((('[+] Configuration entry for <' + profile_name) + '> created.')) print(('[+] Con...
Create the profile entry. Args: :msg_type: (str) message type to create config entry. :profile_name: (str) name of the profile entry :data: (dict) dict values for the 'settings' :auth: (dict) auth parameters
codesearchnet
def post_process_segmentation(self, outputs: 'MaskFormerForInstanceSegmentationOutput', target_size: Optional[Tuple[int, int]]=None) -> 'torch.Tensor': warnings.warn('`post_process_segmentation` is deprecated and will be removed in v5 of Transformers, please use `post_process_instance_segmentation`', FutureWarning)...
Converts the output of [`MaskFormerForInstanceSegmentationOutput`] into image segmentation predictions. Only supports PyTorch. Args: outputs ([`MaskFormerForInstanceSegmentationOutput`]): The outputs from [`MaskFormerForInstanceSegmentation`]. target_size (`Tuple[int, int]`, *optional*): If set, the `masks_queries_lo...
github-repos
def find_in_coord_list(coord_list, coord, atol=1e-08): if (len(coord_list) == 0): return [] diff = (np.array(coord_list) - np.array(coord)[(None, :)]) return np.where(np.all((np.abs(diff) < atol), axis=1))[0]
Find the indices of matches of a particular coord in a coord_list. Args: coord_list: List of coords to test coord: Specific coordinates atol: Absolute tolerance. Defaults to 1e-8. Accepts both scalar and array. Returns: Indices of matches, e.g., [0, 1, 2, 3]. Empty list if not found.
codesearchnet
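The row above is self-contained numpy; restored to multi-line form with a comment on the broadcasting step:

```python
import numpy as np

def find_in_coord_list(coord_list, coord, atol=1e-08):
    if len(coord_list) == 0:
        return []
    # Broadcast the target coord against every row of the list, then keep
    # the rows where all components match within the absolute tolerance.
    diff = np.array(coord_list) - np.array(coord)[None, :]
    return np.where(np.all(np.abs(diff) < atol, axis=1))[0]
```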
def aerosol_optical_depth(self, value=0.999): if (value is not None): try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float for field `aerosol_optical_depth`'.format(value)) self._aerosol_optical_depth = value
Corresponds to IDD Field `aerosol_optical_depth` Args: value (float): value for IDD Field `aerosol_optical_depth` Unit: thousandths Missing value: 0.999 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def GetSectionByIndex(self, section_index): if (not self._is_parsed): self._Parse() self._is_parsed = True if ((section_index < 0) or (section_index >= len(self._sections))): return None return self._sections[section_index]
Retrieves a specific section based on the index. Args: section_index (int): index of the section. Returns: The section, or None if not available.
codesearchnet
def every_other(x, name=None): with tf.name_scope(name, 'every_other', [x]) as scope: x = tf.convert_to_tensor(x, name='x') return tf.reshape( tf.slice( tf.reshape(x, [-1, 2]), [0, 0], [-1, 1]), [-1], name=scope)
Drops every other value from the tensor and returns a 1D tensor. This is useful if you are running multiple inputs through a model tower before splitting them and you want to line it up with some other data. Args: x: the target tensor. name: the name for this op, defaults to every_other Returns: A tensorflow op.
juraj-google-style
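The row above uses the TF1 API; the same reshape-and-slice trick, sketched dependency-free in numpy:

```python
import numpy as np

def every_other(x):
    # Pair up consecutive values, keep the first of each pair, flatten.
    return np.reshape(np.reshape(x, [-1, 2])[:, :1], [-1])

print(every_other([1, 2, 3, 4, 5, 6]))  # [1 3 5]
```

As in the TF version, the input length must be even for the `[-1, 2]` reshape to succeed.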
def make_seeds(self, count=1): alg = self.algorithm if alg in (a.value for a in random_ops_util.Algorithm): keys = self._make_int64_keys(shape=[count]) zeros = array_ops.zeros_like(keys) return array_ops_stack.stack([keys, zeros]) else: raise ValueError(stateless_random_ops.u...
Generates seeds for stateless random ops. For example: ```python seeds = get_global_generator().make_seeds(count=10) for i in range(10): seed = seeds[:, i] numbers = stateless_random_normal(shape=[2, 3], seed=seed) ... ``` Args: count: the number of seed pairs (note that stateless random ops need a pair of seeds to ...
github-repos
def parallel(processor_list: Sequence[PartProcessor]) -> PartProcessor: if not processor_list: raise ValueError('processor_list is empty') return _ParallelPartProcessor(processor_list)
Create a sequence of part processors to be run in parallel. Args: processor_list: list of part processors. Returns: A processor consisting of the parallel run of all the processors in the list. The execution is sequential from the first processor to the last but parts are processed concurrently overall.
github-repos
def get_metrics_namespace(self) -> str: return 'RunInference'
Returns: A namespace for metrics collected by the RunInference transform.
github-repos
def product_category(request, category_id): PRODUCTS_FORM_PREFIX = 'products' VOUCHERS_FORM_PREFIX = 'vouchers' v = _handle_voucher(request, VOUCHERS_FORM_PREFIX) (voucher_form, voucher_handled) = v category_id = int(category_id) category = inventory.Category.objects.get(pk=category_id) with...
Form for selecting products from an individual product category. Arguments: category_id (castable to int): The id of the category to display. Returns: redirect or render: If the form has been successfully submitted, redirect to ``dashboard``. Otherwise, render ``registrasion/product_category.html`` with data:: { "cat...
codesearchnet
def generate_defect_structure(self, supercell=(1, 1, 1)): defect_structure = self.bulk_structure.copy() defect_structure.make_supercell(supercell) defect_properties = self.site.properties.copy() if ('velocities' in self.bulk_structure.site_properties) and \ ...
Returns Defective Substitution structure, decorated with charge Args: supercell (int, [3x1], or [[]] (3x3)): supercell integer, vector, or scaling matrix
juraj-google-style
def export(self, top=True): out = [] if top: out.append(self._internal_name) out.append(self._to_str(self.holiday_name)) out.append(self._to_str(self.holiday_day)) return ",".join(out)
Exports the object to its string representation. Args: top (bool): if True, appends `internal_name` before the values. All non-list objects should be exported with top=True; list objects embedded as fields inside other objects should be exported with top=False. Returns: str: The object's string representatio...
juraj-google-style
def on_connection_state_change(self, event_type, callback): listeners = self._connection_state_listeners.get(event_type, []) listeners.append(callback) self._connection_state_listeners[event_type] = listeners
Register a callback for a specific connection state change. Register a callback to be triggered when the connection changes to the specified state, signified by a ConnectionEvent. The callback must be a coroutine. Args: event_type (ConnectionEvent): the connection event to listen for callback (coroutine): a coroutin...
codesearchnet
def get_log(self): log_path = self.meta_data['logs_resource'] conn = Qubole.agent() r = conn.get_raw(log_path) return r.text
Fetches log for the command represented by this object Returns: The log as a string
codesearchnet
def reinforce_grid(self): for grid_district in self.mv_grid_districts(): grid_district.mv_grid.reinforce_grid() for lv_load_area in grid_district.lv_load_areas(): if not lv_load_area.is_aggregated: for lv_...
Performs grid reinforcement measures for all MV and LV grids.
juraj-google-style
def setup_data_stream(self, connection_factory: Callable[([tuple], Connection)], data_stream_factory: Callable[([Connection], DataStream)]=DataStream) -> DataStream: (yield from self._control_stream.write_command(Command('TYPE', 'I'))) reply = (yield from self._control_stream.read_reply()) self.raise_if_not...
Create and setup a data stream. This function will set up passive and binary mode and handle connecting to the data connection. Args: connection_factory: A coroutine callback that returns a connection data_stream_factory: A callback that returns a data stream Coroutine. Returns: DataStream
codesearchnet
def resize(self, image: np.ndarray, size: Dict[str, int], resample: PILImageResampling=PILImageResampling.LANCZOS, data_format: Optional[Union[str, ChannelDimension]]=None, input_data_format: Optional[Union[str, ChannelDimension]]=None, **kwargs) -> np.ndarray: if input_data_format is None: input_data_forma...
Resize an image. The longest edge of the image is resized to size["longest_edge"], with the shortest edge resized to keep the input aspect ratio. Can also be used with size["height"] and size["width"]. Args: image (`np.ndarray`): Image to resize. size (`Dict[str, int]`): Size of the output image. resample (`PILImageRes...
github-repos
def double_relaxation_run(cls, vasp_cmd, auto_npar=True, ediffg=(- 0.05), half_kpts_first_relax=False, auto_continue=False): incar_update = {'ISTART': 1} if ediffg: incar_update['EDIFFG'] = ediffg settings_overide_1 = None settings_overide_2 = [{'dict': 'INCAR', 'action': {'_set': incar_update}}...
Returns a list of two jobs corresponding to an AFLOW style double relaxation run. Args: vasp_cmd (str): Command to run vasp as a list of args. For example, if you are using mpirun, it can be something like ["mpirun", "pvasp.5.2.11"] auto_npar (bool): Whether to automatically tune NPAR to be sqrt( number of cores) as r...
codesearchnet
def get_sketch(self, sketch_id): resource_url = '{0:s}/sketches/{1:d}/'.format(self.api_base_url, sketch_id) response = self.session.get(resource_url) response_dict = response.json() try: response_dict['objects'] except KeyError: raise ValueError('Sketch does not exist or you have n...
Get information on the specified sketch. Args: sketch_id (int): ID of sketch Returns: dict: Dictionary of sketch information Raises: ValueError: Sketch is inaccessible
juraj-google-style
def _VerifyHandValues(self, tensor_in_sizes, filter_in_sizes, stride, padding, expected, use_gpu): total_size_1 = 1 total_size_2 = 1 for s in tensor_in_sizes: total_size_1 *= s for s in filter_in_sizes: total_size_2 *= s x1 = [f * 1.0 for f in range(1, total_size_1 + 1)] x2 = [f ...
Verifies the output values of the depthwise convolution function. Args: tensor_in_sizes: Input tensor dimensions in [batch, input_rows, input_cols, input_depth]. filter_in_sizes: Filter tensor dimensions in [filter_rows, filter_cols, input_depth, depth_multiplier]. stride: Stride. padding: Padding type. expected: An a...
github-repos
class TrainState(train_state.TrainState): logits_fn: Callable = struct.field(pytree_node=False) loss_fn: Callable = struct.field(pytree_node=False)
Train state with an Optax optimizer. The two functions below differ depending on whether the task is classification or regression. Args: logits_fn: Applied to last layer to obtain the logits. loss_fn: Function to compute the loss.
github-repos
def filter_genes_and_strains(self, remove_genes_not_in_reference_model=True, remove_strains_with_no_orthology=True, remove_strains_with_no_differences=False, custom_keep_strains=None, custom_keep_genes=None): if (len(self.df_orthology_matrix) == 0): raise RuntimeError('Empty orthology matrix, please calcula...
Filters the analysis by keeping a subset of strains or genes based on certain criteria. Args: remove_genes_not_in_reference_model (bool): Remove genes from reference model not in orthology matrix remove_strains_with_no_orthology (bool): Remove strains which have no orthologous genes found remove_strains_with_no_differ...
codesearchnet
def replace_batch_norm(model): for name, module in model.named_children(): if isinstance(module, nn.BatchNorm2d): new_module = DabDetrFrozenBatchNorm2d(module.num_features) if not module.weight.device == torch.device('meta'): new_module.weight.data.copy_(module.weight...
Recursively replace all `torch.nn.BatchNorm2d` with `DabDetrFrozenBatchNorm2d`. Args: model (torch.nn.Module): input model
github-repos
def get_event(self, event_name, event_history=None): if (event_history is None): event_history = (event_name + '_history') return self._db.rpoplpush(event_name, event_history)
Get an event from the database. Gets an event from the named event list removing the event and adding it to the event history. Args: event_name (str): Event list key. event_history (str, optional): Event history list. Returns: str: string representation of the event object
codesearchnet
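`get_event` above relies on Redis `RPOPLPUSH`, which atomically moves an item from one list to another. A pure-Python sketch of that semantics, with plain lists standing in for Redis keys (no atomicity guarantees, unlike the real command):

```python
def rpoplpush(source, destination):
    """Pop from the right of source and push onto the left of destination.

    Pure-Python illustration of the Redis RPOPLPUSH used by get_event;
    returns None when the source list is empty, as Redis does.
    """
    if not source:
        return None
    event = source.pop()
    destination.insert(0, event)
    return event
```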
def download_models(self, uniprot_acc, outdir='', force_rerun=False): downloaded = [] subset = self.get_models(uniprot_acc) for entry in subset: ident = '{}_{}_{}_{}'.format(uniprot_acc, entry['template'], entry['from'], entry['to']) outfile = op.join(outdir, (ident + '.pdb')) if ssb...
Download all models available for a UniProt accession number. Args: uniprot_acc (str): UniProt ACC/ID outdir (str): Path to output directory, uses working directory if not set force_rerun (bool): Force a redownload of the models if they already exist Returns: list: Paths to the downloaded models
codesearchnet
def set_interface(self, vrf_name, interface, default=False, disable=False): cmds = [('interface %s' % interface)] cmds.append(self.command_builder('vrf forwarding', value=vrf_name, default=default, disable=disable)) return self.configure(cmds)
Adds a VRF to an interface Notes: Requires interface to be in routed mode. Must apply ip address after VRF has been applied. This feature can also be accessed through the interfaces api. Args: vrf_name (str): The VRF name to configure interface (str): The interface to add the VRF to default (bool): Set interface VRF...
codesearchnet
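`set_interface` above delegates to a `command_builder` helper. A hypothetical re-creation of what such a helper might produce for EOS-style config lines (the real pyeapi implementation may differ):

```python
def command_builder(cmd, value=None, default=False, disable=False):
    """Build a single EOS config command string.

    Hypothetical sketch of the command_builder helper assumed by
    set_interface: `default` wins over `disable`, and a value is
    appended when neither flag is set.
    """
    if default:
        return 'default %s' % cmd
    if disable:
        return 'no %s' % cmd
    return '%s %s' % (cmd, value) if value else cmd
```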
def get_sid_string(principal): if (principal is None): principal = 'NULL SID' try: return win32security.ConvertSidToStringSid(principal) except TypeError: principal = get_sid(principal) try: return win32security.ConvertSidToStringSid(principal) except pywintypes.error...
Converts a PySID object to a string SID. Args: principal (str): The principal whose SID to look up. Must be a PySID object. Returns: str: A string SID Usage: .. code-block:: python # Get a PySID object py_sid = salt.utils.win_dacl.get_sid('jsnuffy') # Get the string version of the SID salt.utils.win_dacl.get_sid_str...
codesearchnet
def create_s3_event(app_name, env, region, bucket, triggers): session = boto3.Session(profile_name=env, region_name=region) s3_client = session.client('s3') lambda_alias_arn = get_lambda_alias_arn(app_name, env, region) LOG.debug("Lambda ARN for lambda function %s is %s.", app_name, lambda_alias_...
Create S3 lambda events from triggers Args: app_name (str): name of the lambda function env (str): Environment/Account for lambda function region (str): AWS region of the lambda function triggers (list): List of triggers from the settings
juraj-google-style
def dump_tree(self, statement=None, indent_level=0): out = u'' indent = (u' ' * indent_level) if (statement is None): for root_statement in self.statements: out += self.dump_tree(root_statement, indent_level) else: out += ((indent + str(statement)) + u'\n') if (len(st...
Dump the AST for this parsed file. Args: statement (SensorGraphStatement): the statement to print if this function is called recursively. indent_level (int): The number of spaces to indent this statement. Used for recursively printing blocks of statements. Returns: str: The AST for this parsed sg file as a nested tre...
codesearchnet
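The recursive indent-and-descend shape of `dump_tree` can be shown on plain data. A simplified analogue using `(text, children)` tuples instead of statement objects (names here are illustrative, not the library's API):

```python
def dump_tree(statements, indent_level=0):
    """Render nested (text, children) statements as an indented string.

    Simplified analogue of the AST dump above: each statement is printed
    at the current indent, then its children at indent_level + 4.
    """
    out = ''
    for text, children in statements:
        out += ' ' * indent_level + text + '\n'
        out += dump_tree(children, indent_level + 4)
    return out
```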
def slice(self, start, end): reverse = False if start > end: temp = start start = end end = temp reverse = True seg = self.copy() seg.points = seg.points[start:end+1] if reverse: seg.points = list(reversed(seg...
Creates a copy of the current segment between indexes. If start > end, the order of the points is reversed Args: start (int): Start index end (int): End index Returns: :obj:`Segment`
juraj-google-style
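The index handling in `slice` above (swap when `start > end`, then reverse the result) can be sketched on a plain list, without the Segment object (the function name is hypothetical):

```python
def slice_points(points, start, end):
    """Return points[start:end + 1]; if start > end, swap and reverse.

    Stand-alone sketch of Segment.slice's index normalization: the
    bounds are swapped so the Python slice is valid, and the output is
    reversed to preserve the caller's requested direction.
    """
    reverse = start > end
    if reverse:
        start, end = end, start
    sliced = points[start:end + 1]
    return list(reversed(sliced)) if reverse else sliced
```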
def create_bagit_stream(dir_name, payload_info_list): zip_file = zipstream.ZipFile(mode='w', compression=zipstream.ZIP_DEFLATED) _add_path(dir_name, payload_info_list) (payload_byte_count, payload_file_count) = _add_payload_files(zip_file, payload_info_list) tag_info_list = _add_tag_files(zip_file, dir_...
Create a stream containing a BagIt zip archive. Args: dir_name : str The name of the root directory in the zip file, under which all the files are placed (avoids "zip bombs"). payload_info_list: list List of payload_info_dict, each dict describing a file. - keys: pid, filename, iter, checksum, checksum_algorithm - I...
codesearchnet
def get_images_by_catid_and_aoi(self, catid, aoi_wkt): self.logger.debug('Retrieving IDAHO metadata') url = '%s/search' % self.base_url body = {"filters": ["catalogID = '%s'" % catid], "types": ["IDAHOImage"], "searchAreaWkt": aoi_wkt} ...
Retrieves the IDAHO image records associated with a given catid. Args: catid (str): The source catalog ID from the platform catalog. aoi_wkt (str): The well known text of the area of interest. Returns: results (json): The full catalog-search response for IDAHO images within the catID.
juraj-google-style
def needle_statistics_alignio(infile): alignments = list(AlignIO.parse(infile, "emboss")) if len(alignments) > 1: raise ValueError('Alignment file contains more than one pairwise alignment') alignment = alignments[0] with open(infile) as f: line = f.readline() for i in ...
Reads in a needle alignment file and returns an AlignIO object with annotations Args: infile (str): Alignment file name Returns: AlignIO: annotated AlignIO object
juraj-google-style
def success(channel, stats, name, platform, dp): datapacks = [("Platform", platform, False)] for stat in stats: if stat[0] in ("Duel 1v1", "Doubles 2v2", "Solo Standard 3v3", "Standard 3v3"): stat_name = "__" + stat[0] + "__" stat_value = "**" + stat[1] + "**"...
Creates an embed UI containing the Rocket League stats Args: channel (discord.Channel): The Discord channel to bind the embed to stats (tuple): Tuples of (field, value, percentile) name (str): The name of the player platform (str): The platform to search on, can be 'steam', 'ps', or 'xbox' dp (str): URL to the player's...
juraj-google-style
def _set_device(self, device) -> None: self._set_device_from_string(compat.as_str(_device_string(device)))
Set the device of this operation. Args: device: string or device. The device to set.
github-repos
def create_image_uri(region, framework, instance_type, framework_version, py_version=None, account='520713654638', accelerator_type=None, optimized_families=None): optimized_families = (optimized_families or []) if (py_version and (py_version not in VALID_PY_VERSIONS)): raise ValueError('invalid py_vers...
Return the ECR URI of an image. Args: region (str): AWS region where the image is uploaded. framework (str): framework used by the image. instance_type (str): SageMaker instance type. Used to determine device type (cpu/gpu/family-specific optimized). framework_version (str): The version of the framework. py_version (s...
codesearchnet
def load_variables(defines, config_file): if config_file is not None: with open(config_file, "r") as conf_file: variables = yaml.load(conf_file) else: variables = {} for define in defines: name, equ, value = define.partition('=') if equ != '=': ...
Load all variables from cmdline args and/or a config file. Args: defines (list of str): A list of name=value pairs that define free variables. config_file (str): An optional path to a yaml config file that defines a single dict with name=value variable definitions.
juraj-google-style
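The `name=value` parsing in `load_variables` above uses `str.partition('=')` to split each define exactly once. A self-contained sketch of just that step (the function name and error type here are illustrative):

```python
def parse_defines(defines):
    """Parse a list of 'name=value' strings into a dict.

    Mirrors the str.partition('=') handling in load_variables: an entry
    without an '=' yields an empty separator, which is treated as an
    error here (hypothetical ValueError; the original's handling is
    truncated in the source).
    """
    variables = {}
    for define in defines:
        name, equ, value = define.partition('=')
        if equ != '=':
            raise ValueError('Invalid define, expected name=value: %s' % define)
        variables[name] = value
    return variables
```

`partition` splits on the first `=` only, so values containing `=` survive intact.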
def insert_before(self, value: Union[(RawValue, Value)], raw: bool=False) -> 'ArrayEntry': return ArrayEntry(self.index, self.before, self.after.cons(self.value), self._cook_value(value, raw), self.parinst, self.schema_node, datetime.now())
Insert a new entry before the receiver. Args: value: The value of the new entry. raw: Flag to be set if `value` is raw. Returns: An instance node of the new inserted entry.
codesearchnet
def get_all_dataset_names(configuration=None, **kwargs): dataset = Dataset(configuration=configuration) dataset['id'] = 'all dataset names' return dataset._write_to_hdx('list', kwargs, 'id')
Get all dataset names in HDX Args: configuration (Optional[Configuration]): HDX configuration. Defaults to global configuration. **kwargs: See below limit (int): Number of rows to return. Defaults to all dataset names. offset (int): Offset in the complete result for where the set of returned dataset names should begin...
codesearchnet
def _estimate_data_distribution(c, num_examples_per_class_seen): num_classes = num_examples_per_class_seen.get_shape()[0] num_examples_per_class_seen = math_ops.add(num_examples_per_class_seen, math_ops.reduce_sum(array_ops.one_hot(c, num_classes, dtype=dtypes.int64), 0)) init_prob_estimate = math_ops.trued...
Estimate data distribution as labels are seen. Args: c: The class labels. Type `int32`, shape `[batch_size]`. num_examples_per_class_seen: Type `int64`, shape `[num_classes]`, containing counts. Returns: num_examples_per_class_seen: Updated counts. Type `int64`, shape `[num_classes]`. dist: The updated distribution....
github-repos
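The counting step in `_estimate_data_distribution` above (sum one-hot label vectors into per-class counts, then normalize) can be shown without TensorFlow. A pure-Python sketch; the original's smoothing term (`init_prob_estimate`) is truncated in the source and omitted here:

```python
def update_class_counts(counts, labels):
    """Add one-hot counts for labels and return (counts, distribution).

    Sketch of the per-batch update in _estimate_data_distribution:
    each label increments its class count, and the distribution is the
    normalized count vector (no smoothing, unlike the TF original).
    """
    for c in labels:
        counts[c] += 1
    total = sum(counts)
    dist = [n / total for n in counts]
    return counts, dist
```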
def cancel_id(cls, id): conn = Qubole.agent() data = {'status': 'kill'} return conn.put(cls.element_path(id), data)
Cancels command denoted by this id Args: `id`: command id
codesearchnet
def render_registered(url_id, remote_info): return template(read_index_template(), registered=True, url=remote_info['url'], seeder_data=json.dumps(remote_info), url_id=url_id)
Render template file for the registered user, which has some of the values prefilled. Args: url_id (str): Seeder URL id. remote_info (dict): Information read from Seeder. Returns: str: Template filled with data.
codesearchnet
def forward_event_shape_tensor(self, input_shape, name='forward_event_shape_tensor'): with self._name_scope(name, [input_shape]): input_shape = ops.convert_to_tensor(input_shape, dtype=dtypes.int32, name='input_shape') return self._forward_event_shape_tensor(input_shape)
Shape of a single sample from a single batch as an `int32` 1D `Tensor`. Args: input_shape: `Tensor`, `int32` vector indicating event-portion shape passed into `forward` function. name: name to give to the op Returns: forward_event_shape_tensor: `Tensor`, `int32` vector indicating event-portion shape after applying `f...
github-repos
def suggestions(self, word): suggestions = set(self._misspelling_dict.get(word, [])).union(set(self._misspelling_dict.get(word.lower(), []))) return sorted([same_case(source=word, destination=w) for w in suggestions])
Returns a list of suggestions for a misspelled word. Args: word: The word to check. Returns: List of zero or more suggested replacements for word.
codesearchnet
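`suggestions` above unions the candidates for the word and its lowercase form, then re-cases each via a `same_case` helper. A hypothetical reimplementation of both pieces (the real `same_case` may handle more than initial capitalization):

```python
def same_case(source, destination):
    """Return destination cased like source.

    Hypothetical sketch of the same_case helper assumed by
    suggestions(): only initial capitalization is mirrored here.
    """
    if source and source[0].isupper():
        return destination.capitalize()
    return destination


def suggest(misspelling_dict, word):
    # Union the suggestions for the word and its lowercase form,
    # then re-case each candidate to match the query word.
    candidates = set(misspelling_dict.get(word, [])) | set(
        misspelling_dict.get(word.lower(), []))
    return sorted(same_case(word, w) for w in candidates)
```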
def set_napp(self, user, napp, version=None): self.user = user self.napp = napp self.version = version or 'latest'
Set info about NApp. Args: user (str): NApps Server username. napp (str): NApp name. version (str): NApp version.
juraj-google-style
def CheckEmptyBlockBody(filename, clean_lines, linenum, error): line = clean_lines.elided[linenum] matched = Match(r'\s*(for|while|if)\s*\(', line) if matched: (end_line, end_linenum, end_pos) = CloseExpression( clean_lines, linenum, line.find('(')) if en...
Look for empty loop/conditional body with only a single semicolon. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. error: The function to call with any errors found.
juraj-google-style
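The core pattern `CheckEmptyBlockBody` looks for is a `for`/`while`/`if` whose body is a lone semicolon. A much-simplified single-line version of the check; the real cpplint function tracks the matching close parenthesis across lines via `CloseExpression`:

```python
import re


def has_empty_block_body(line):
    """Return True if a for/while/if statement ends with a lone semicolon.

    Single-line sketch of the cpplint empty-body check: a keyword, a
    parenthesized condition, then nothing but a semicolon.
    """
    return bool(re.match(r'\s*(for|while|if)\s*\(.*\)\s*;\s*$', line))
```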
def up(self) -> 'InstanceNode': ts = max(self.timestamp, self.parinst.timestamp) return self.parinst._copy(self._zip(), ts)
Return an instance node corresponding to the receiver's parent. Raises: NonexistentInstance: If there is no parent.
codesearchnet
def log_transition(self, transition, from_state, instance, *args, **kwargs): logger = logging.getLogger('xworkflows.transitions') try: instance_repr = u(repr(instance), 'ignore') except (UnicodeEncodeError, UnicodeDecodeError): instance_repr = u('<bad repr>') logger.info(u('%s performed ...
Log a transition. Args: transition (Transition): the name of the performed transition from_state (State): the source state instance (object): the modified object Kwargs: Any keyword arguments passed when calling the transition
codesearchnet
def _on_scan(self, info): device_id = info['uuid'] expiration_time = info.get('validity_period', 60) infocopy = deepcopy(info) infocopy['expiration_time'] = (monotonic() + expiration_time) with self._scan_lock: self._scanned_devices[device_id] = infocopy
Callback called when a new device is discovered on this CMDStream Args: info (dict): Information about the scanned device
codesearchnet
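`_on_scan` above converts a relative `validity_period` into an absolute `expiration_time` before caching a copy of the device record. A stand-alone sketch of that bookkeeping, minus the lock (the function name and `now` parameter are additions for testability):

```python
from copy import deepcopy
from time import monotonic


def record_scan(scanned, info, now=None):
    """Store a copy of a scanned device keyed by uuid with an absolute expiry.

    Sketch of the _on_scan bookkeeping: validity_period (default 60 s)
    is added to the current monotonic time and stamped onto a deep copy,
    so later mutation of the caller's dict cannot corrupt the cache.
    """
    now = monotonic() if now is None else now
    entry = deepcopy(info)
    entry['expiration_time'] = now + info.get('validity_period', 60)
    scanned[info['uuid']] = entry
    return entry
```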