Dataset columns: code (string, lengths 20–4.93k) | docstring (string, lengths 33–1.27k) | source (string, 3 classes)
def get_lagged_subsequences(self, sequence: torch.Tensor, subsequences_length: int, shift: int=0) -> torch.Tensor: indices = [lag - shift for lag in self.config.lags_sequence] sequence_length = sequence.shape[1] if max(indices) + subsequences_length > sequence_length: raise ValueError(f'lags cannot go further than history length, found lag {max(indices)} while history length is only {sequence_length}') lagged_values = [] for lag_index in indices: begin_index = -lag_index - subsequences_length end_index = -lag_index if lag_index > 0 else None lagged_values.append(sequence[:, begin_index:end_index, ...]) return torch.stack(lagged_values, dim=-1)
Returns lagged subsequences of a given sequence. Returns a tensor of shape (batch_size, subsequences_length, feature_size, indices_length), containing lagged subsequences. Specifically, lagged[i, j, :, k] = sequence[i, -indices[k]-subsequences_length+j, :]. Args: sequence (`torch.Tensor` of shape `(batch_size, context_length, feature_size)`): The sequence from which lagged subsequences should be extracted. subsequences_length (`int`): Length of the subsequences to be extracted. shift (`int`, *optional*, defaults to 0): Shift the lags by this amount back in the time index.
github-repos
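A minimal standalone sketch of the lag-slicing logic above, on a toy tensor (the lag values and shapes here are made up for illustration and are not tied to any real model config):
import torch
sequence = torch.arange(2 * 10 * 3, dtype=torch.float).reshape(2, 10, 3)  # (batch, time, feature)
lags, subsequences_length = [1, 2], 4
lagged = []
for lag in lags:
    begin = -lag - subsequences_length
    end = -lag if lag > 0 else None
    lagged.append(sequence[:, begin:end, ...])  # one shifted window per lag
print(torch.stack(lagged, dim=-1).shape)  # torch.Size([2, 4, 3, 2]) -> (batch, subsequences_length, feature, num_lags)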
def prune_conv1d_layer(layer: Conv1D, index: torch.LongTensor, dim: int=1) -> Conv1D: index = index.to(layer.weight.device) W = layer.weight.index_select(dim, index).detach().clone() if dim == 0: b = layer.bias.detach().clone() else: b = layer.bias[index].detach().clone() new_size = list(layer.weight.size()) new_size[dim] = len(index) new_layer = Conv1D(new_size[1], new_size[0]).to(layer.weight.device) new_layer.weight.requires_grad = False new_layer.weight.copy_(W.contiguous()) new_layer.weight.requires_grad = True new_layer.bias.requires_grad = False new_layer.bias.copy_(b.contiguous()) new_layer.bias.requires_grad = True return new_layer
Prune a Conv1D layer to keep only entries in index. A Conv1D works as a Linear layer (see e.g. BERT) but the weights are transposed. Used to remove heads. Args: layer ([`~pytorch_utils.Conv1D`]): The layer to prune. index (`torch.LongTensor`): The indices to keep in the layer. dim (`int`, *optional*, defaults to 1): The dimension on which to keep the indices. Returns: [`~pytorch_utils.Conv1D`]: The pruned layer as a new layer with `requires_grad=True`.
github-repos
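A short usage sketch, assuming the `Conv1D` and `prune_conv1d_layer` exported by `transformers.pytorch_utils` (weights stored transposed relative to `nn.Linear`); the layer sizes are illustrative:
import torch
from transformers.pytorch_utils import Conv1D, prune_conv1d_layer
layer = Conv1D(nf=4, nx=8)                      # weight shape (8, 4), bias shape (4,)
keep = torch.LongTensor([0, 2])                 # output units to keep
pruned = prune_conv1d_layer(layer, keep, dim=1)
print(pruned.weight.shape, pruned.bias.shape)   # torch.Size([8, 2]) torch.Size([2])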
def get_setter(proto): _, type_registrations = _REVIVED_TYPE_REGISTRY.get(proto.identifier, (None, None)) if type_registrations is not None: for type_registration in type_registrations: if type_registration.should_load(proto): return type_registration.setter return None
Gets the registered setter function for the SavedUserObject proto. See VersionedTypeRegistration for info about the setter function. Args: proto: SavedUserObject proto Returns: setter function
github-repos
def _get_facet_chempots(self, facet): complist = [self.qhull_entries[i].composition for i in facet] energylist = [self.qhull_entries[i].energy_per_atom for i in facet] m = [[c.get_atomic_fraction(e) for e in self.elements] for c in complist] chempots = np.linalg.solve(m, energylist) return dict(zip(self.elements, chempots))
Calculates the chemical potentials for each element within a facet. Args: facet: Facet of the phase diagram. Returns: { element: chempot } for all elements in the phase diagram.
codesearchnet
def create_branch(profile, name, branch_off): branch_off_sha = get_branch_sha(profile, branch_off) ref = ('heads/' + name) data = refs.create_ref(profile, ref, branch_off_sha) return data
Create a branch. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. name The name of the new branch. branch_off The name of a branch to create the new branch off of. Returns: A dict with data about the new branch.
codesearchnet
def get_meshes_vec(step, var): if step.geom.twod_xz: (xmesh, ymesh) = (step.geom.x_mesh[:, 0, :], step.geom.z_mesh[:, 0, :]) vec1 = step.fields[var + '1'][:, 0, :, 0] vec2 = step.fields[var + '3'][:, 0, :, 0] elif (step.geom.cartesian and step.geom.twod_yz): (xmesh, ymesh) = (step.geom.y_mesh[0, :, :], step.geom.z_mesh[0, :, :]) vec1 = step.fields[var + '2'][0, :, :, 0] vec2 = step.fields[var + '3'][0, :, :, 0] else: (xmesh, ymesh) = (step.geom.x_mesh[0, :, :], step.geom.y_mesh[0, :, :]) pmesh = step.geom.p_mesh[0, :, :] vec_phi = step.fields[var + '2'][0, :, :, 0] vec_r = step.fields[var + '3'][0, :, :, 0] vec1 = ((vec_r * np.cos(pmesh)) - (vec_phi * np.sin(pmesh))) vec2 = ((vec_phi * np.cos(pmesh)) + (vec_r * np.sin(pmesh))) return (xmesh, ymesh, vec1, vec2)
Return vector field components along with coordinates meshes. Only works properly in 2D geometry. Args: step (:class:`~stagpy.stagyydata._Step`): a step of a StagyyData instance. var (str): vector field name. Returns: tuple of :class:`numpy.array`: xmesh, ymesh, fldx, fldy 2D arrays containing respectively the x position, y position, x component and y component of the requested vector field.
codesearchnet
def parse_args(arglist=None): climan = CLIManager(conf, **SUB_CMDS) create_complete_files(climan, CONFIG_DIR, 'stagpy', 'stagpy-git', zsh_sourceable=True) cmd_args, all_subs = climan.parse_args(arglist) sub_cmd = cmd_args.loam_sub_name if sub_cmd is None: return cmd_args.func if sub_cmd != 'config': commands.report_parsing_problems(PARSING_OUT) if conf.common.set: set_conf_str(conf, conf.common.set) if conf.common.config: commands.config_pp(all_subs) load_mplstyle() try: _steps_to_slices() except AttributeError: pass return cmd_args.func
Parse cmd line arguments. Update :attr:`stagpy.conf` accordingly. Args: arglist (list of str): the list of cmd line arguments. If set to None, the arguments are taken from :attr:`sys.argv`. Returns: function: the function implementing the sub command to be executed.
juraj-google-style
def query_properties_with_values(self, query, include_defaults=True): themed_keys = set() result = dict() if include_defaults: keys = self.properties() else: keys = set(self._property_values.keys()) | set(self._unstable_default_values.keys()) if self.themed_values(): themed_keys = set(self.themed_values().keys()) keys |= themed_keys for key in keys: descriptor = self.lookup(key) if not query(descriptor): continue value = descriptor.serializable_value(self) if not include_defaults and key not in themed_keys: if isinstance(value, PropertyValueContainer) and key in self._unstable_default_values: continue result[key] = value return result
Query the properties values of |HasProps| instances with a predicate. Args: query (callable) : A callable that accepts property descriptors and returns True or False include_defaults (bool, optional) : Whether to include properties that have not been explicitly set by a user (default: True) Returns: dict : mapping of property names and values for matching properties
juraj-google-style
def add(self, text, checked=False, sort=None): node = ListItem(parent_id=self.id, parent_server_id=self.server_id) node.checked = checked node.text = text if (sort is not None): node.sort = sort self.append(node, True) self.touch(True) return node
Add a new item to the list. Args: text (str): The text. checked (bool): Whether this item is checked. sort (int): Item id for sorting.
codesearchnet
def _get_full_name(self): full_name_parts = [self._get_class(), self._get_name()] return '#'.join(full_name_parts)
Gets the qualified name of the test method corresponding to the instrumentation block. Returns: A string containing the fully qualified name of the instrumentation test method. If parts are missing, it degrades gracefully.
github-repos
def parse_metadata(lines): meta = defaultdict(list) for line in lines: line = line.rstrip() if line.startswith("!"): if "_table_begin" in line or "_table_end" in line: continue key, value = __parse_entry(line) meta[key].append(value) return dict(meta)
Parse list of lines with metadata information from SOFT file. Args: lines (:obj:`Iterable`): Iterator over the lines. Returns: :obj:`dict`: Metadata from SOFT file.
juraj-google-style
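A usage sketch on a few invented SOFT-style header lines; it assumes the unshown helper `__parse_entry` splits a line like '!Key = value' into ('Key', 'value'):
lines = [
    "!Series_title = Example series",
    "!Series_platform_id = GPL570",
    "!Series_platform_id = GPL96",
    "!sample_table_begin",   # skipped by the *_table_begin guard
]
meta = parse_metadata(lines)
# expected: {'Series_title': ['Example series'], 'Series_platform_id': ['GPL570', 'GPL96']}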
def run(self, dag): for node in dag.op_nodes(self.gate): if (not node.op.definition): continue rule = node.op.definition decomposition = DAGCircuit() decomposition.add_qreg(rule[0][1][0][0]) if rule[0][2]: decomposition.add_creg(rule[0][2][0][0]) for inst in rule: decomposition.apply_operation_back(*inst) dag.substitute_node_with_dag(node, decomposition) return dag
Expand a given gate into its decomposition. Args: dag(DAGCircuit): input dag Returns: DAGCircuit: output dag where gate was expanded.
codesearchnet
def get_tensor_num_entries(self, tensor_name, partial_layout=None, mesh_dimension_to_size=None): shape = self.get_tensor_shape(tensor_name) num_entries = 1 for dim in shape.dims: num_entries = num_entries * dim.value if not partial_layout: return num_entries for mtf_dimension_name in self.get_tensor_mtf_dimension_names(tensor_name): if mtf_dimension_name not in partial_layout: continue mesh_dimension_name = partial_layout[mtf_dimension_name] mesh_dimension_size = mesh_dimension_to_size[mesh_dimension_name] num_entries = int(math.ceil(num_entries / mesh_dimension_size)) return num_entries
The number of entries in a tensor. If partial_layout is specified, then mesh_dimension_to_size must also be. In this case, the number of entries on a single device is returned. Args: tensor_name: a string, name of a tensor in the graph. partial_layout: an optional {string: string}, from MTF dimension name to mesh dimension name. mesh_dimension_to_size: an optional {string: int}, from mesh dimension name to size. Returns: an integer
juraj-google-style
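A worked sketch of the per-device arithmetic described above (the dimension and mesh names are invented for illustration):
import math
# full tensor: shape [batch=8, d_model=512] -> 4096 entries
num_entries = 8 * 512
# partial_layout = {'batch': 'mesh_x'}, mesh_dimension_to_size = {'mesh_x': 4}
num_entries = int(math.ceil(num_entries / 4))   # entries held by a single device
print(num_entries)                              # 1024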
def _assert_float_dtype(dtype): if not dtype.is_floating: raise ValueError(f'Argument `dtype` is expected to be floating point. Received: {dtype}.') return dtype
Validate and return floating point type based on `dtype`. `dtype` must be a floating point type. Args: dtype: The data type to validate. Returns: Validated type. Raises: ValueError: if `dtype` is not a floating point type.
github-repos
def get_data_location(self, catalog_id): try: record = self.get(catalog_id) except: return None if 'Landsat8' in record['type'] and 'LandsatAcquisition' in record['type']: bucket = record['properties']['bucketName'] prefix = record['properties']['bucketPrefix'] return 's3://{}/{}'.format(bucket, prefix) if 'DigitalGlobeAcquisition' in record['type']: o = Ordering() res = o.location([catalog_id]) return res['acquisitions'][0]['location'] return None
Find and return the S3 data location given a catalog_id. Args: catalog_id: The catalog ID Returns: A string containing the s3 location of the data associated with a catalog ID. Returns None if the catalog ID is not found, or if there is no data yet associated with it.
juraj-google-style
def group_systems(self, group_name, systems): api_group_id = None headers = {'Content-Type': 'application/json'} group_path = (self.api_url + '/v1/groups') group_get_path = (group_path + ('?display_name=%s' % quote(group_name))) logger.debug('GET group: %s', group_get_path) net_logger.info('GET %s', group_get_path) get_group = self.session.get(group_get_path) logger.debug('GET group status: %s', get_group.status_code) if (get_group.status_code == 200): api_group_id = get_group.json()['id'] if (get_group.status_code == 404): logger.debug('POST group') data = json.dumps({'display_name': group_name}) net_logger.info('POST %s', group_path) post_group = self.session.post(group_path, headers=headers, data=data) logger.debug('POST group status: %s', post_group.status_code) logger.debug('POST Group: %s', post_group.json()) self.handle_fail_rcs(post_group) api_group_id = post_group.json()['id'] logger.debug('PUT group') data = json.dumps(systems) net_logger.info('PUT %s', (group_path + ('/%s/systems' % api_group_id))) put_group = self.session.put((group_path + ('/%s/systems' % api_group_id)), headers=headers, data=data) logger.debug('PUT group status: %d', put_group.status_code) logger.debug('PUT Group: %s', put_group.json())
Adds an array of systems to specified group Args: group_name: Display name of group systems: Array of {'machine_id': machine_id}
codesearchnet
def _pull_response(self, namespace, req_type, **params): self._validate_namespace(namespace) context_id = params['EnumerationContext'] try: context_data = self.enumeration_contexts[context_id] except KeyError: raise CIMError(CIM_ERR_INVALID_ENUMERATION_CONTEXT, _format('EnumerationContext {0!A} not found in mock server enumeration contexts.', context_id)) if (context_data['pull_type'] != req_type): raise CIMError(CIM_ERR_INVALID_ENUMERATION_CONTEXT, _format('Invalid pull operations {0!A} does not match expected {1!A} for EnumerationContext {2!A}', context_data['pull_type'], req_type, context_id)) objs_list = context_data['data'] max_obj_cnt = params['MaxObjectCount'] if (not max_obj_cnt): max_obj_cnt = _DEFAULT_MAX_OBJECT_COUNT if (len(objs_list) <= max_obj_cnt): eos = u'TRUE' rtn_objs_list = objs_list del self.enumeration_contexts[context_id] context_id = '' else: eos = u'FALSE' rtn_objs_list = objs_list[0:max_obj_cnt] del objs_list[0:max_obj_cnt] return self._make_pull_imethod_resp(rtn_objs_list, eos, context_id)
Common method for all of the Pull methods. Since all of the pull methods operate independently of the type of data, this single function serves as common code. This method validates the namespace, gets data on the enumeration sequence from the enumeration_contexts table, validates the pull type, and returns the required number of objects. This method assumes the same context_id throughout the sequence. Raises: CIMError: CIM_ERR_INVALID_ENUMERATION_CONTEXT
codesearchnet
def find_duplicate_items(items, k=2): import utool as ut duplicate_map = ut.ddict(list) for count, item in enumerate(items): duplicate_map[item].append(count) singleton_keys = [] for key in six.iterkeys(duplicate_map): if len(duplicate_map[key]) == 1: singleton_keys.append(key) for key in singleton_keys: del duplicate_map[key] duplicate_map = dict(duplicate_map) return duplicate_map
r""" Args: items (list): Returns: dict: duplicate_map of indexes CommandLine: python -m utool.util_list --test-find_duplicate_items Example: >>> # DISABLE_DOCTEST >>> from utool.util_list import * # NOQA >>> items = [0, 1, 2, 3, 3, 0, 12, 2, 9] >>> duplicate_map = find_duplicate_items(items) >>> result = str(duplicate_map) >>> print(result)
juraj-google-style
def destroy_connection(self, connection): log.debug('Destroying connection at <{0}>'.format(hex(id(connection)))) self._decontextualise_connection(connection) connection.unbind()
Destroys a connection. Removes the connection from the appcontext, and unbinds it. Args: connection (ldap3.Connection): The connection to destroy
codesearchnet
def Send(self, message): if (not isinstance(message, common_pb2.Message)): raise ValueError('Send requires a fleetspeak.Message') if (message.destination.service_name == 'system'): raise ValueError('Only predefined messages can have destination.service_name == "system"') return self._SendImpl(message)
Send a message through Fleetspeak. Args: message: A message protocol buffer. Returns: Size of the message in bytes. Raises: ValueError: If message is not a common_pb2.Message.
codesearchnet
def print_network_spec(mlmodel_spec, interface_only=False): inputs, outputs, layers_info = summarize_neural_network_spec(mlmodel_spec) print('Inputs:') for i in inputs: name, description = i print(' {} {}'.format(name, description)) print('Outputs:') for o in outputs: name, description = o print(' {} {}'.format(name, description)) if layers_info is None: print('\n(This MLModel is not a neural network model or does not contain any layers)') if layers_info and not interface_only: print('\nLayers:') for idx, l in enumerate(layers_info): layer_type, name, in_blobs, out_blobs, params_info = l print('[{}] ({}) {}'.format(idx, layer_type, name)) print(' Input blobs: {}'.format(in_blobs)) print(' Output blobs: {}'.format(out_blobs)) if len(params_info) > 0: print(' Parameters: ') for param in params_info: print(' {} = {}'.format(param[0], param[1])) print('\n')
Print the network information summary. Args: mlmodel_spec : the mlmodel spec interface_only : Shows only the input and output of the network
juraj-google-style
def orient_graph(self, df_data, graph, nb_runs=6, printout=None, **kwargs): if (type(graph) == nx.DiGraph): edges = [a for a in list(graph.edges()) if ((a[1], a[0]) in list(graph.edges()))] oriented_edges = [a for a in list(graph.edges()) if ((a[1], a[0]) not in list(graph.edges()))] for a in edges: if ((a[1], a[0]) in list(graph.edges())): edges.remove(a) output = nx.DiGraph() for i in oriented_edges: output.add_edge(*i) elif (type(graph) == nx.Graph): edges = list(graph.edges()) output = nx.DiGraph() else: raise TypeError('Data type not understood.') res = [] for (idx, (a, b)) in enumerate(edges): weight = self.predict_proba(df_data[a].values.reshape(((- 1), 1)), df_data[b].values.reshape(((- 1), 1)), idx=idx, nb_runs=nb_runs, **kwargs) if (weight > 0): output.add_edge(a, b, weight=weight) else: output.add_edge(b, a, weight=abs(weight)) if (printout is not None): res.append([((str(a) + '-') + str(b)), weight]) DataFrame(res, columns=['SampleID', 'Predictions']).to_csv(printout, index=False) for node in list(df_data.columns.values): if (node not in output.nodes()): output.add_node(node) return output
Orient an undirected graph using the pairwise method defined by the subclass. The pairwise method is run on every undirected edge. Args: df_data (pandas.DataFrame): Data graph (networkx.Graph): Graph to orient nb_runs (int): number of times to rerun for each pair (bootstrap) printout (str): (optional) Path to file where to save temporary results Returns: networkx.DiGraph: a directed graph, which might contain cycles .. warning: Requirement: Names of the nodes in the graph must correspond to the names of the variables in df_data
codesearchnet
def from_args(cls: Type[ConfigT], args: Namespace) -> ConfigT: parsed_args = cls.parse_args(args) return cls(args, host=args.host, port=args.port, debug=args.debug, reject_insecure_auth=not args.insecure_login, cert_file=args.cert, key_file=args.key, **parsed_args)
Build and return a new :class:`IMAPConfig` using command-line arguments. Args: args: The arguments parsed from the command-line.
juraj-google-style
def sampling_query(sql, context, fields=None, count=5, sampling=None, udfs=None, data_sources=None): return Query(_sampling.Sampling.sampling_query(sql, fields, count, sampling), context=context, udfs=udfs, data_sources=data_sources)
Returns a sampling Query for the SQL object. Args: sql: the SQL statement (string) or Query object to sample. context: a Context object providing project_id and credentials. fields: an optional list of field names to retrieve. count: an optional count of rows to retrieve which is used if a specific sampling is not specified. sampling: an optional sampling strategy to apply to the table. udfs: array of UDFs referenced in the SQL. data_sources: dictionary of federated (external) tables referenced in the SQL. Returns: A Query object for sampling the table.
codesearchnet
def filter_by_conditional_statement(self, statement): _filt_values, _filt_datetimes = self._filter_by_statement(statement) if self._enumeration is None: self._get_mutable_enumeration() col_obj = self._enumeration['mutable'][self._collection_type] collection = col_obj(self.header.duplicate(), _filt_values, _filt_datetimes) collection._validated_a_period = self._validated_a_period return collection
Filter the Data Collection based on a conditional statement. Args: statement: A conditional statement as a string (e.g. a > 25 and a%5 == 0). The variable should always be named as 'a' (without quotations). Return: A new Data Collection containing only the filtered data
juraj-google-style
class API: def __init__(self, config, api): self.config = config self.api = api['api'] self.version = api['version'] self.auth = api['auth'] self.uri = api.get('uri') self.key = api.get('key') self.labels = api.get('labels') self.function_stack = list(filter(None, api.get('function', '').split('.'))) self.function_kwargs = API.__clean__(api.get('kwargs', {})) self.iterate = api.get('iterate', False) self.limit = api.get('limit') self.headers = api.get('headers', {}) self.function = None self.job = None self.response = None def __str__(self): return '%s.%s.%s' % (self.api, self.version, '.'.join(self.function_stack)) def __getattr__(self, function_name): self.function_stack.append(function_name) def function_call(**kwargs): self.function_kwargs = API.__clean__(kwargs) return self return function_call @staticmethod def __clean__(struct: Union[dict, list]) -> Union[dict, list]: if isinstance(struct, dict): for key, value in struct.items(): if isinstance(value, bytes): struct[key] = base64.standard_b64encode(value).decode('ascii') elif isinstance(value, date): struct[key] = str(value) else: API.__clean__(value) elif isinstance(struct, list): for index, value in enumerate(struct): if isinstance(value, bytes): struct[index] = base64.standard_b64encode(value).decode('ascii') elif isinstance(value, date): struct[index] = str(value) else: API.__clean__(value) return struct def call(self, function_chain): for function_name in function_chain.split('.'): self.function_stack.append(function_name) return self def execute(self, run=True, iterate=False, limit=None): self.function = get_service(config=self.config, api=self.api, version=self.version, auth=self.auth, headers=self.headers, key=self.key, labels=self.labels, uri_file=self.uri) for f_n in self.function_stack: self.function = getattr(self.function if isinstance(self.function, Resource) else self.function(), f_n) self.job = self.function(**self.function_kwargs) if run: self.response = API_Retry(self.job) if iterate or self.iterate: return API_Iterator(self.function, self.function_kwargs, self.response, limit or self.limit) else: return self.response else: return self.job def upload(self, retries=5, wait=61): job = self.execute(run=False) response = None while response is None: error = None try: print('Uploading file...') status, response = job.next_chunk() if 'id' in response: print("Object id '%s' was successfully uploaded." % response['id']) else: exit('The upload failed with an unexpected response: %s' % response) except HttpError as e: if retries > 0 and e.resp.status in RETRIABLE_STATUS_CODES: error = 'A retriable HTTP error %d occurred:\n%s' % (e.resp.status, e.content.decode()) else: raise except RETRIABLE_EXCEPTIONS as e: if retries > 0: error = 'A retriable error occurred: %s' % e else: raise if error is not None: print(error) retries -= 1 wait = wait * 2 print('Sleeping %d seconds and then retrying...' % wait) time.sleep(wait)
A wrapper around Google API with built-in helpers for StarThinker. The wrapper mimics function calls, storing them in a stack, until it encounters execute(). Then it uses the stored stack and arguments to call the actual API. This allows handlers on execute such as API_Retry and API_Iterator. See module level description for wrapped changes to Google API. The class is designed to be a connector to JSON, hence the configuration is a JSON object. api = { "api":"doubleclickbidmanager", "version":"v1.1", "auth":"user", "iterate":False } api = API(config, api).placements().list(profile_id=1234, archived=False).execute() Args: config: (json) see example above, configures all authentication parameters api: (json) see example above, configures all API parameters Returns: If nextpageToken in result or iterate is True: return iterator of API response Otherwise: returns API response
github-repos
def spliceext(filepath, s): (root, ext) = os.path.splitext(safepath(filepath)) return ((root + s) + ext)
Add s into filepath before the extension Args: filepath (str, path): file path s (str): string to splice Returns: str
codesearchnet
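Usage sketch (assuming `safepath` simply normalizes the path string):
print(spliceext("report.pdf", "_v2"))          # report_v2.pdf
print(spliceext("/tmp/data.tar.gz", ".bak"))   # /tmp/data.tar.bak.gz (only the last extension is split off)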
def __init__(self, json_data=None, **kwargs): if isinstance(json_data, OhPickle): return if isinstance(json_data, basestring): json_data = json.loads(json_data) if json_data is not None: kwargs = type(self).json_to_initkwargs(json_data, kwargs) super(JsonRecordList, self).__init__(**kwargs)
Build a new JsonRecord sub-class. Args: ``json_data=``\ *LIST|other* JSON data (string or already ``json.loads``'d) ``**kwargs`` Other initializer attributes, for lists with extra attributes (eg, paging information)
juraj-google-style
def __init__(self, url_formatter, mapsources): super().__init__(url_formatter) self.map_folders = { root: { "folders": folders, "maps": maps } for root, folders, maps in walk_mapsources(mapsources) } self.add_maps(parent=self.kml_doc)
Create a KML master document. Args: mapsources (list of MapSource):
juraj-google-style
def _from_to_as_term(self, frm, to): from_year = '' to_year = '' def year_or_empty(prefix, year, suffix): try: return prefix + str(int(year)) + suffix except (ValueError, TypeError): return '' if frm: from_year = year_or_empty('', frm, ' ') if to: to_year = year_or_empty(' ', to, '') if bool(from_year) or bool(to_year): return '[{}TO{}]'.format(from_year, to_year) else: return None
Turns from and to into the query format. Args: frm (str): from year to (str): to year Returns: FTS query str with years range.
juraj-google-style
def _generate_splits(self, m, r): new_rects = [] if r.left > m.left: new_rects.append(Rectangle(m.left, m.bottom, r.left-m.left, m.height)) if r.right < m.right: new_rects.append(Rectangle(r.right, m.bottom, m.right-r.right, m.height)) if r.top < m.top: new_rects.append(Rectangle(m.left, r.top, m.width, m.top-r.top)) if r.bottom > m.bottom: new_rects.append(Rectangle(m.left, m.bottom, m.width, r.bottom-m.bottom)) return new_rects
When a rectangle is placed inside a maximal rectangle, it stops being one and up to 4 new maximal rectangles may appear depending on the placement. _generate_splits calculates them. Arguments: m (Rectangle): max_rect rectangle r (Rectangle): rectangle placed Returns: list : list containing new maximal rectangles or an empty list
juraj-google-style
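A hedged usage sketch; `Rectangle` is assumed to take (x, y, width, height) and expose left/right/top/bottom, as the method above implies, and `packer` stands for whatever object defines `_generate_splits` (e.g. a MaxRects packer instance):
maximal = Rectangle(0, 0, 10, 10)   # free region
placed = Rectangle(2, 2, 4, 4)      # rectangle just placed inside it
splits = packer._generate_splits(maximal, placed)
# up to four new maximal rectangles: to the left of, right of, above and below `placed`
for s in splits:
    print(s)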
def enable_preset_args(include_all_preset_kwargs: bool=False, preset_name: str='global') -> Callable[[types.FunctionType], types.FunctionType]: def decorator(func): sig = inspect.signature(func) positional_arg_names = [p.name for p in sig.parameters.values() if p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD] arg_defaults = {} has_preset_value = False has_varkw = False for p in sig.parameters.values(): if p.kind == inspect.Parameter.VAR_KEYWORD: has_varkw = True continue if p.kind == inspect.Parameter.VAR_POSITIONAL: continue if p.default == inspect.Parameter.empty: continue if isinstance(p.default, PresetArgValue): has_preset_value = True arg_defaults[p.name] = p.default if has_preset_value: @functools.wraps(func) def _func(*args, **kwargs): presets = utils.thread_local_peek(_TLS_KEY_PRESET_KWARGS, None) preset_kwargs = presets.get_preset(preset_name) if presets else {} args, kwargs = PresetArgValue.resolve_args(args, kwargs, positional_arg_names, arg_defaults, preset_kwargs, include_all_preset_kwargs=include_all_preset_kwargs and has_varkw) return func(*args, **kwargs) return _func return func return decorator
Decorator for functions that may use preset argument values. Usage:: @pg.typing.enable_preset_args def foo(x, y=pg.typing.PresetArgValue(default=1)): return x + y with pg.typing.preset_args(y=2): print(foo(x=1)) # 3: y=2 print(foo(x=1)) # 2: y=1 Args: include_all_preset_kwargs: Whether to include all preset kwargs (even those not marked as `PresetArgValue`) when calling the function. preset_name: The name of the preset to specify kwargs. Returns: A decorated function that could consume the preset argument values.
github-repos
def _ParseIdentifierMappingsTable(self, parser_mediator, esedb_table): identifier_mappings = {} for esedb_record in esedb_table.records: if parser_mediator.abort: break (identifier, mapped_value) = self._ParseIdentifierMappingRecord(parser_mediator, esedb_table.name, esedb_record) if ((identifier is None) or (mapped_value is None)): continue if (identifier in identifier_mappings): parser_mediator.ProduceExtractionWarning('identifier: {0:d} already exists in mappings.'.format(identifier)) continue identifier_mappings[identifier] = mapped_value return identifier_mappings
Extracts identifier mappings from the SruDbIdMapTable table. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. esedb_table (pyesedb.table): table. Returns: dict[int, str]: mapping of numeric identifiers to their string representation.
codesearchnet
def plot_seebeck_temp(self, doping='all', output='average'): import matplotlib.pyplot as plt if (output == 'average'): sbk = self._bz.get_seebeck(output='average') elif (output == 'eigs'): sbk = self._bz.get_seebeck(output='eigs') plt.figure(figsize=(22, 14)) tlist = sorted(sbk['n'].keys()) doping = (self._bz.doping['n'] if (doping == 'all') else doping) for (i, dt) in enumerate(['n', 'p']): plt.subplot((121 + i)) for dop in doping: d = self._bz.doping[dt].index(dop) sbk_temp = [] for temp in tlist: sbk_temp.append(sbk[dt][temp][d]) if (output == 'average'): plt.plot(tlist, sbk_temp, marker='s', label=(str(dop) + ' $cm^{-3}$')) elif (output == 'eigs'): for xyz in range(3): plt.plot(tlist, zip(*sbk_temp)[xyz], marker='s', label=(((str(xyz) + ' ') + str(dop)) + ' $cm^{-3}$')) plt.title((dt + '-type'), fontsize=20) if (i == 0): plt.ylabel('Seebeck \n coefficient ($\\mu$V/K)', fontsize=30.0) plt.xlabel('Temperature (K)', fontsize=30.0) p = ('lower right' if (i == 0) else '') plt.legend(loc=p, fontsize=15) plt.grid() plt.xticks(fontsize=25) plt.yticks(fontsize=25) plt.tight_layout() return plt
Plot the Seebeck coefficient as a function of temperature for different doping levels. Args: doping: the default 'all' plots all the doping levels in the analyzer. Specify a list of doping levels if you want to plot only some. output: with 'average' you get an average of the three directions with 'eigs' you get all the three directions. Returns: a matplotlib object
codesearchnet
def feature_path(self, gff_path): if not gff_path: self.feature_dir = None self.feature_file = None else: if not op.exists(gff_path): raise OSError('{}: file does not exist!'.format(gff_path)) if not op.dirname(gff_path): self.feature_dir = '.' else: self.feature_dir = op.dirname(gff_path) self.feature_file = op.basename(gff_path)
Load a GFF file with information on a single sequence and store features in the ``features`` attribute Args: gff_path: Path to GFF file.
juraj-google-style
def __init__(self, resolver_context): super(TSKPartitionFile, self).__init__(resolver_context) self._file_system = None
Initializes a file-like object. Args: resolver_context (Context): resolver context.
juraj-google-style
def run(self, input_dir, output_dir, epsilon): print('Running attack ', self.name) cmd = [self.docker_binary(), 'run', '-v', '{0}:/input_images'.format(input_dir), '-v', '{0}:/output_images'.format(output_dir), '-v', '{0}:/code'.format(self.directory), '-w', '/code', self.container, './' + self.entry_point, '/input_images', '/output_images', str(epsilon)] print(' '.join(cmd)) subprocess.call(cmd)
Runs attack inside Docker. Args: input_dir: directory with input (dataset). output_dir: directory where output (adversarial images) should be written. epsilon: maximum allowed size of adversarial perturbation, should be in range [0, 255].
juraj-google-style
def __init__(self, inputs, mesh=None, name=None): if mesh is None: if not inputs: raise ValueError("mesh must be specified if no inputs") mesh = inputs[0].mesh self._inputs = inputs self._outputs = [] self._mesh = mesh self._splittable_dims, self._unsplittable_dims = ( self._initialize_all_dimensions_as_splittable()) assert name is not None self._name = mesh.graph.unique_name(name) mesh.graph.operations.append(self)
Initializer. Args: inputs: a list of Tensor mesh: an optional Mesh (if unspecified, will be inferred from first input) name: a string, which will get uniquified (in TensorFlow style) Raises: ValueError: mesh was not provided and there were no inputs to infer from.
juraj-google-style
def frame_counts(self,subsets=None): mergeon = self.cdf.frame_columns+['region_label'] if subsets is None: cnts = self.groupby(mergeon+['phenotype_label']).count()[['cell_index']].\ rename(columns={'cell_index':'count'}) mr = self.measured_regions mr['_key'] = 1 mp = pd.DataFrame({'phenotype_label':self.measured_phenotypes}) mp['_key'] = 1 mr = mr.merge(mp,on='_key').drop(columns='_key') cnts = mr.merge(cnts,on=mergeon+['phenotype_label'],how='left').fillna(0) else: if isinstance(subsets,SL): subsets=[subsets] cnts = [] labels = set([s.label for s in subsets]) for x in subsets: if x.label is None: raise ValueError("Subsets must be named") if len(labels) != len(subsets): raise ValueError("Subsets must be uniquely named.") seen_labels = [] for sl in subsets: if sl.label in seen_labels: raise ValueError("cannot use the same label twice in the subsets list") seen_labels.append(sl.label) df = self.cdf.subset(sl) df = df.groupby(mergeon).count()[['cell_index']].\ rename(columns={'cell_index':'count'}).reset_index() df = self.measured_regions.merge(df,on=mergeon,how='left').fillna(0) df['phenotype_label'] = sl.label cnts.append(df) cnts = pd.concat(cnts) cnts = cnts[mergeon+['region_area_pixels','phenotype_label','count']] cnts['region_area_mm2'] = cnts.apply(lambda x: (x['region_area_pixels']/1000000)*(self.microns_per_pixel*self.microns_per_pixel),1) cnts['density_mm2'] = cnts.apply(lambda x: np.nan if x['region_area_mm2'] == 0 else x['count']/x['region_area_mm2'],1) cnts.loc[cnts['region_area_pixels']<self.minimum_region_size_pixels,['count','density_mm2']] = np.nan return cnts
Frame counts is the core of all the counting operations. It counts on a per-frame/per-region basis. Args: subsets (list): a list of Subset Objects. if not specified, the phenotypes are used. Returns: pandas.DataFrame: A dataframe of count data
juraj-google-style
def CheckDefaultLambdaCaptures(filename, clean_lines, linenum, error): line = clean_lines.elided[linenum] match = Match(r'^(.*)\[\s*(?:=|&[^\w])', line) if match: line, _, pos = CloseExpression(clean_lines, linenum, len(match.group(1))) if pos >= 0 and Match(r'^\s*[{(]', line[pos:]): error(filename, linenum, 'build/c++11', 4, 'Default lambda captures are an unapproved C++ feature.')
Check that default lambda captures are not used. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. error: The function to call with any errors found.
juraj-google-style
def find_all(self, model_class, params={}): url = '{host}/{namespace}/{model}{params}'.format(host=self._host, namespace=self._namespace, model=self._translate_name(model_class.__name__), params=self._build_param_string(params)) data = self._get_json(url)['data'] fresh_models = [] for item in data: fresh_model = model_class(item['attributes']) fresh_model.id = item['id'] fresh_model.validate() fresh_models.append(fresh_model) if (self._cache is not None): self._cache.set_record(model_class.__name__, fresh_model.id, fresh_model) return fresh_models
Return a list of models from the API and cache the result. Args: model_class (:class:`cinder_data.model.CinderModel`): A subclass of :class:`cinder_data.model.CinderModel` of your chosen model. params (dict, optional): Query parameters appended to the request URL. Returns: list: A list of instances of your model_class or an empty list.
codesearchnet
def range_index_map(batch_shape, num_segments, name='range_index_map'): device = num_segments.device if torch.is_tensor(num_segments) else 'cpu' batch_shape = torch.as_tensor(batch_shape, dtype=torch.long, device=device) assert len(batch_shape.size()) == 1 num_segments = torch.as_tensor(num_segments, device=device) assert len(num_segments.size()) == 0 indices = torch.arange(start=0, end=num_segments, device=num_segments.device) new_tensor = torch.cat([torch.ones_like(batch_shape, dtype=torch.long, device=num_segments.device), num_segments.unsqueeze(dim=0)], dim=0) new_shape = [int(x) for x in new_tensor.tolist()] indices = indices.view(new_shape) multiples = torch.cat([batch_shape, torch.as_tensor([1], device=device)], dim=0) indices = indices.repeat(multiples.tolist()) return IndexMap(indices=indices, num_segments=num_segments, batch_dims=list(batch_shape.size())[0])
Constructs an index map equal to range(num_segments). Args: batch_shape (`torch.Size`): Batch shape num_segments (`int`): Number of segments name (`str`, *optional*, defaults to 'range_index_map'): Name for the operation. Currently not used Returns: (`IndexMap`): IndexMap of shape batch_shape with elements equal to range(num_segments).
github-repos
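A shape-only usage sketch (the surrounding TAPAS `IndexMap` class is assumed to be importable from the same module):
import torch
index_map = range_index_map(batch_shape=torch.Size([3]), num_segments=4)
print(index_map.indices.shape)   # torch.Size([3, 4]); each row is 0, 1, 2, 3
print(index_map.batch_dims)      # 1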
def calculate_hashes(self): hashers = [] if (not self.mardata.signatures): return [] for s in self.mardata.signatures.sigs: h = make_hasher(s.algorithm_id) hashers.append((s.algorithm_id, h)) for block in get_signature_data(self.fileobj, self.mardata.signatures.filesize): [h.update(block) for (_, h) in hashers] return [(algo_id, h.finalize()) for (algo_id, h) in hashers]
Return hashes of the contents of this MAR file. The hashes depend on the algorithms defined in the MAR file's signature block. Returns: A list of (algorithm_id, hash) tuples
codesearchnet
def _create_partition_config(option: t.Tuple, config: Config) -> Config: copy = cp.deepcopy(config.selection) out = cp.deepcopy(config) for idx, key in enumerate(config.partition_keys): copy[key] = [option[idx]] if 'hdate' in copy: copy['hdate'] = [generate_hdate(copy['date'][0], v) for v in copy['hdate']] out.selection = copy return out
Create a config for a single partition option. Output a config dictionary, overriding the range of values for each key with the partition instance in 'selection'. Continuing the example from prepare_partitions, the selection section would be: { 'foo': ..., 'year': ['2020'], 'month': ['01'], ... } { 'foo': ..., 'year': ['2020'], 'month': ['02'], ... } { 'foo': ..., 'year': ['2020'], 'month': ['03'], ... } Args: option: A single item in the range of partition_keys. config: The download config, including the parameters and selection sections. Returns: A configuration with that selects a single download partition.
github-repos
def from_nested_row_lengths(cls, flat_values, nested_row_lengths, name=None, validate=True): if not isinstance(validate, bool): raise TypeError(f'Argument `validate` must have type bool. Received {validate}.') if isinstance(nested_row_lengths, tensor_lib.Tensor): raise TypeError(f'Argument `nested_row_lengths` must be a list of Tensors. Received {nested_row_lengths}.') with ops.name_scope(name, 'RaggedFromNestedRowlengths', [flat_values] + list(nested_row_lengths)): result = flat_values for lengths in reversed(nested_row_lengths): result = cls.from_row_lengths(result, lengths, validate=validate) return result
Creates a `RaggedTensor` from a nested list of `row_lengths` tensors. Equivalent to: ```python result = flat_values for row_lengths in reversed(nested_row_lengths): result = from_row_lengths(result, row_lengths) ``` Args: flat_values: A potentially ragged tensor. nested_row_lengths: A list of 1-D integer tensors. The `i`th tensor is used as the `row_lengths` for the `i`th ragged dimension. name: A name prefix for the RaggedTensor (optional). validate: If true, then use assertions to check that the arguments form a valid `RaggedTensor`. Note: these assertions incur a runtime cost, since they must be checked for each tensor value. Returns: A `RaggedTensor` (or `flat_values` if `nested_row_lengths` is empty).
github-repos
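Usage sketch via the public TensorFlow API (values chosen arbitrarily):
import tensorflow as tf
rt = tf.RaggedTensor.from_nested_row_lengths(
    flat_values=[1, 2, 3, 4, 5],
    nested_row_lengths=[[2, 1], [2, 0, 3]])
print(rt)   # <tf.RaggedTensor [[[1, 2], []], [[3, 4, 5]]]>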
def record(self, auth, resource, entries, options={}, defer=False): return self._call('record', auth, [resource, entries, options], defer)
Records a list of historical entries to the resource specified. Note: This API is deprecated, use recordbatch instead. Calls a function that builds a request that writes a list of historical entries to the specified resource. Args: auth: Takes the device cik resource: Takes the dataport alias or rid. entries: A list of entries to write to the resource. options: Currently unused.
juraj-google-style
def logs_urlpatterns(admin_view=lambda x: x): return [ url(r'^$', admin_view(LogsMenu.as_view()), name='logs'), url(r'^status_codes$', admin_view(LogsStatusCodes.as_view()), name='logs_status_codes'), url(r'^status_codes_by_date$', admin_view(LogsStatusCodesByDate.as_view()), name='logs_status_codes_by_date'), url(r'^most_visited_pages$', admin_view(LogsMostVisitedPages.as_view()), name='logs_most_visited_pages') ]
Return the URL patterns for the logs views. Args: admin_view (callable): admin_view method from an AdminSite instance. Returns: list: the URL patterns for the logs views.
juraj-google-style
def get_user_groups(name, sid=False): if (name == 'SYSTEM'): groups = [name] else: groups = win32net.NetUserGetLocalGroups(None, name) if (not sid): return groups ret_groups = set() for group in groups: ret_groups.add(get_sid_from_name(group)) return ret_groups
Get the groups to which a user belongs Args: name (str): The user name to query sid (bool): True will return a list of SIDs, False will return a list of group names Returns: list: A list of group names or sids
codesearchnet
def autocov(x): acorr = autocorr(x) varx = ((np.var(x, ddof=1) * (len(x) - 1)) / len(x)) acov = (acorr * varx) return acov
Compute autocovariance estimates for every lag for the input array. Args: x (array-like): An array containing MCMC samples. Returns: np.ndarray: An array of the same size as the input array.
codesearchnet
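Usage sketch; `autocov` depends on the module's own `autocorr`, and the chain below is synthetic rather than a real MCMC trace:
import numpy as np
np.random.seed(0)
samples = np.random.normal(size=1000)   # stand-in for an MCMC chain
acov = autocov(samples)
print(acov[0])   # lag-0 autocovariance, roughly the (biased) sample variance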
def lint(cls, document, is_saved, flags=''): if (not is_saved): return cls.last_diags[document.path] path = document.path if sys.platform.startswith('win'): path = path.replace('\\', '/') (out, _err) = py_run('{} -f json {}'.format(path, flags), return_std=True) json_str = out.getvalue() if (not json_str.strip()): cls.last_diags[document.path] = [] return [] diagnostics = [] for diag in json.loads(json_str): line = (diag['line'] - 1) col = diag['column'] end_col = (len(document.lines[line]) if document.lines else 0) err_range = {'start': {'line': line, 'character': col}, 'end': {'line': line, 'character': end_col}} if (diag['type'] == 'convention'): severity = lsp.DiagnosticSeverity.Information elif (diag['type'] == 'error'): severity = lsp.DiagnosticSeverity.Error elif (diag['type'] == 'fatal'): severity = lsp.DiagnosticSeverity.Error elif (diag['type'] == 'refactor'): severity = lsp.DiagnosticSeverity.Hint elif (diag['type'] == 'warning'): severity = lsp.DiagnosticSeverity.Warning diagnostics.append({'source': 'pylint', 'range': err_range, 'message': '[{}] {}'.format(diag['symbol'], diag['message']), 'severity': severity, 'code': diag['message-id']}) cls.last_diags[document.path] = diagnostics return diagnostics
Plugin interface to pyls linter. Args: document: The document to be linted. is_saved: Whether or not the file has been saved to disk. flags: Additional flags to pass to pylint. Not exposed to pyls_lint, but used for testing. Returns: A list of dicts with the following format: { 'source': 'pylint', 'range': { 'start': { 'line': start_line, 'character': start_column, }, 'end': { 'line': end_line, 'character': end_column, }, } 'message': msg, 'severity': lsp.DiagnosticSeverity.*, }
codesearchnet
def validate_source_dir(script, directory): if directory: if not os.path.isfile(os.path.join(directory, script)): raise ValueError('No file named "{}" was found in directory "{}".'.format(script, directory)) return True
Validate that the source directory exists and it contains the user script Args: script (str): Script filename. directory (str): Directory containing the source file. Raises: ValueError: If ``directory`` does not exist, is not a directory, or does not contain ``script``.
juraj-google-style
def __convertIp6PrefixStringToIp6Address(self, strIp6Prefix): prefix1 = strIp6Prefix.rstrip('L') prefix2 = prefix1.lstrip("0x") hexPrefix = str(prefix2).ljust(16,'0') hexIter = iter(hexPrefix) finalMac = ':'.join(a + b + c + d for a,b,c,d in zip(hexIter, hexIter,hexIter,hexIter)) prefix = str(finalMac) strIp6Prefix = prefix[:20] return strIp6Prefix +':'
convert an IPv6 prefix string to IPv6 colon-hex notation for example: 2001000000000000 -> 2001:: Args: strIp6Prefix: IPv6 address string Returns: IPv6 address in colon-hex format
juraj-google-style
def nb_fit(data, P_init=None, R_init=None, epsilon=1e-8, max_iters=100): means = data.mean(1) variances = data.var(1) if (means > variances).any(): raise ValueError("For NB fit, means must be less than variances") genes, cells = data.shape P = 1.0 - means/variances R = means*(1-P)/P for i in range(genes): result = minimize(nb_ll_row, [P[i], R[i]], args=(data[i,:],), bounds = [(0, 1), (epsilon, None)]) params = result.x P[i] = params[0] R[i] = params[1] return P,R
Fits the NB distribution to data using method of moments. Args: data (array): genes x cells P_init (array, optional): NB success prob param - genes x 1 R_init (array, optional): NB stopping param - genes x 1 Returns: P, R - fit to data
juraj-google-style
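A worked sketch of the method-of-moments initialisation used inside nb_fit, mirroring its own formulas on one made-up gene (the parameterisation follows that code, not a textbook convention):
import numpy as np
counts = np.array([0, 1, 3, 2, 5, 4, 1, 0, 6, 2], dtype=float)
m, v = counts.mean(), counts.var()   # requires v > m for the fit to make sense
P = 1.0 - m / v
R = m * (1 - P) / P
print(P, R)   # 0.375 4.0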
def yield_batch(iterable, batch_size, num_tensors=1): tensors = [[] for i in range(num_tensors)] for item in iterable: if item is None: break for i in range(num_tensors): tmp = str(item[i]) if type(item[i]) is bytearray else item[i] tensors[i].append(tmp) if len(tensors[0]) >= batch_size: yield tensors tensors = [[] for i in range(num_tensors)] if len(tensors[0]) > 0: yield tensors
Generator that yields batches of a DataFrame iterator. Args: :iterable: Spark partition iterator. :batch_size: number of items to retrieve per invocation. :num_tensors: number of tensors (columns) expected in each item. Returns: An array of ``num_tensors`` arrays, each of length `batch_size`
juraj-google-style
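Usage sketch on a plain Python iterable of (feature, label) pairs, outside Spark:
items = [([i, i + 1], i % 2) for i in range(5)]
for batch in yield_batch(iter(items), batch_size=2, num_tensors=2):
    print(batch)
# [[[0, 1], [1, 2]], [0, 1]]
# [[[2, 3], [3, 4]], [0, 1]]
# [[[4, 5]], [0]]        <- final partial batch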
def count(self, axis=0, level=None, numeric_only=False): axis = self._get_axis_number(axis) if axis is not None else 0 return self._reduce_dimension( self._query_compiler.count( axis=axis, level=level, numeric_only=numeric_only ) )
Get the count of non-null objects in the DataFrame. Arguments: axis: 0 or 'index' for row-wise, 1 or 'columns' for column-wise. level: If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a DataFrame. numeric_only: Include only float, int, boolean data Returns: The count, in a Series (or DataFrame if level is specified).
juraj-google-style
def get_metadata(self, key) -> str: return (self.metadata[key] if (key in self.metadata) else None)
Get the value of a metadata. Returns None if metadata does not exist. Args: key (str): name of the metadata Returns: str: the value of the metadata (or None)
codesearchnet
def normalize_log_line_timestamp(log_line_timestamp): return sanitize_filename(log_line_timestamp)
Replace special characters in log line timestamp with normal characters. .. deprecated:: 1.10 This method is obsolete with the more general `sanitize_filename` method and is only kept for backwards compatibility. In a future update, this method may be removed. Args: log_line_timestamp: A string in the log line timestamp format. Obtained with get_log_line_timestamp. Returns: A string representing the same time as input timestamp, but without special characters.
github-repos
def GetHashers(cls, hasher_names): hashers = [] for (hasher_name, hasher_class) in iter(cls._hasher_classes.items()): if (hasher_name in hasher_names): hashers.append(hasher_class()) return hashers
Retrieves instances for all the specified hashers. Args: hasher_names (list[str]): names of the hashers to retrieve. Returns: list[BaseHasher]: hashers.
codesearchnet
def operator(name=None, operators=None, aliases=None, kind=None): def delegator(assertion, subject, expected, *args, **kw): return assertion.test(subject, expected, *args, **kw) def decorator(fn): operator = Operator(fn=fn, aliases=aliases, kind=kind) _name = (name if isinstance(name, six.string_types) else fn.__name__) operator.operators = (_name,) _operators = operators if isinstance(_operators, list): _operators = tuple(_operators) if isinstance(_operators, tuple): operator.operators += _operators Engine.register(operator) return functools.partial(delegator, operator) return (decorator(name) if inspect.isfunction(name) else decorator)
Registers a new operator function in the test engine. Arguments: name (str|function): operator name, or the function to decorate when used without arguments. operators (list|tuple, optional): additional operator names to register. aliases (list|tuple, optional): attribute aliases for the operator. kind (str, optional): operator kind. Returns: function
codesearchnet
def addAllowMAC(self, xEUI): print '%s call addAllowMAC' % self.port print xEUI if isinstance(xEUI, str): macAddr = xEUI else: macAddr = self.__convertLongToString(xEUI) try: if self._addressfilterMode != 'whitelist': if self.__setAddressfilterMode('Whitelist'): self._addressfilterMode = 'whitelist' cmd = WPANCTL_CMD + 'insert MAC:Whitelist:Entries %s' % macAddr ret = self.__sendCommand(cmd)[0] != 'Fail' self._addressfilterSet.add(macAddr) print 'current whitelist entries:' for addr in self._addressfilterSet: print addr return ret except Exception, e: ModuleHelper.WriteIntoDebugLogger('addAllowMAC() Error: ' + str(e))
add a given extended address to the whitelist addressfilter Args: xEUI: a given extended address in hex format Returns: True: successful to add a given extended address to the whitelist entry False: fail to add a given extended address to the whitelist entry
juraj-google-style
def build_example(label, param_dict_real, zip_path_label): np.random.seed(RANDOM_SEED) report = {'tflite_converter': report_lib.NOTRUN, 'tf': report_lib.FAILED} report['tf_log'] = '' report['tflite_converter_log'] = '' tf.compat.v1.reset_default_graph() with tf.Graph().as_default(): with tf.device('/cpu:0'): try: inputs, outputs = make_graph(param_dict_real) inputs = [x for x in inputs if x is not None] except (tf.errors.UnimplementedError, tf.errors.InvalidArgumentError, ValueError): report['tf_log'] += traceback.format_exc() return (None, report) sess = tf.compat.v1.Session() try: baseline_inputs, baseline_outputs = make_test_inputs(param_dict_real, sess, inputs, outputs) baseline_inputs = [x for x in baseline_inputs if x is not None] input_names = [_normalize_input_name(x.name) for x in inputs] output_names = [_normalize_output_name(x.name) for x in outputs] baseline_input_map = dict(zip(input_names, baseline_inputs)) baseline_output_map = dict(zip(output_names, baseline_outputs)) except (tf.errors.UnimplementedError, tf.errors.InvalidArgumentError, ValueError): report['tf_log'] += traceback.format_exc() return (None, report) report['tflite_converter'] = report_lib.FAILED report['tf'] = report_lib.SUCCESS input_names, tensor_info_inputs = _get_tensor_info(inputs, 'input_', _normalize_input_name) output_tensors, tensor_info_outputs = _get_tensor_info(outputs, 'output_', _normalize_output_name) input_tensors = [(name, t.shape, t.dtype) for name, t in zip(input_names, inputs)] inference_signature = tf.compat.v1.saved_model.signature_def_utils.build_signature_def(inputs=tensor_info_inputs, outputs=tensor_info_outputs, method_name='op_test') saved_model_dir = tempfile.mkdtemp('op_test') saved_model_tags = [tf.saved_model.SERVING] signature_key = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(saved_model_dir) builder.add_meta_graph_and_variables(sess, saved_model_tags, signature_def_map={signature_key: inference_signature}, strip_default_attrs=True) builder.save(as_text=False) graph_def = freeze_graph(sess, tf.compat.v1.global_variables() + inputs + outputs) if use_frozen_graph else sess.graph_def if 'split_tflite_lstm_inputs' in param_dict_real: extra_convert_options.split_tflite_lstm_inputs = param_dict_real['split_tflite_lstm_inputs'] tflite_model_binary, converter_log = options.tflite_convert_function(options, saved_model_dir, input_tensors, output_tensors, extra_convert_options=extra_convert_options, test_params=param_dict_real) report['tflite_converter'] = report_lib.SUCCESS if tflite_model_binary is not None else report_lib.FAILED report['tflite_converter_log'] = converter_log if options.save_graphdefs: zipinfo = zipfile.ZipInfo(zip_path_label + '.pbtxt') archive.writestr(zipinfo, text_format.MessageToString(graph_def), zipfile.ZIP_DEFLATED) if tflite_model_binary: if options.make_edgetpu_tests: baseline_input_map, baseline_output_map = generate_inputs_outputs(tflite_model_binary, min_value=0, max_value=255) zipinfo = zipfile.ZipInfo(zip_path_label + '.bin') if sys.byteorder == 'big': tflite_model_binary = flatbuffer_utils.byte_swap_tflite_buffer(tflite_model_binary, 'big', 'little') archive.writestr(zipinfo, tflite_model_binary, zipfile.ZIP_DEFLATED) example = {'inputs': baseline_input_map, 'outputs': baseline_output_map} example_fp = io.StringIO() write_examples(example_fp, [example]) zipinfo = zipfile.ZipInfo(zip_path_label + '.inputs') archive.writestr(zipinfo, example_fp.getvalue(), zipfile.ZIP_DEFLATED) example_fp2 = io.StringIO() write_test_cases(example_fp2, zip_path_label + '.bin', [example]) zipinfo = zipfile.ZipInfo(zip_path_label + '_tests.txt') archive.writestr(zipinfo, example_fp2.getvalue(), zipfile.ZIP_DEFLATED) zip_manifest_label = zip_path_label + ' ' + label if zip_path_label == label: zip_manifest_label = zip_path_label zip_manifest.append(zip_manifest_label + '\n') return (tflite_model_binary, report)
Build the model with parameter values set in param_dict_real. Args: label: Label of the model param_dict_real: Parameter dictionary (arguments to the factories make_graph and make_test_inputs) zip_path_label: Filename in the zip Returns: (tflite_model_binary, report) where tflite_model_binary is the serialized flatbuffer as a string and report is a dictionary with keys `tflite_converter_log` (log of conversion), `tf_log` (log of tf conversion), `converter` (a string of success status of the conversion), `tf` (a string success status of the conversion).
github-repos
def tokenize(self, text: TextInput, **kwargs) -> list[str]: split_special_tokens = kwargs.pop('split_special_tokens', self.split_special_tokens) text, kwargs = self.prepare_for_tokenization(text, **kwargs) if kwargs: logger.warning(f'Keyword arguments {kwargs} not recognized.') if hasattr(self, 'do_lower_case') and self.do_lower_case: escaped_special_toks = [re.escape(s_tok) for s_tok in self.all_special_tokens] escaped_special_toks += [re.escape(s_tok.content) for s_tok in self._added_tokens_decoder.values() if not s_tok.special and s_tok.normalized] pattern = '(' + '|'.join(escaped_special_toks) + ')|' + '(.+?)' text = re.sub(pattern, lambda m: m.groups()[0] or m.groups()[1].lower(), text) if split_special_tokens: no_split_token = [] tokens = [text] else: no_split_token = self._added_tokens_encoder.keys() tokens = self.tokens_trie.split(text) for i, token in enumerate(tokens): if token in no_split_token: tok_extended = self._added_tokens_decoder.get(self._added_tokens_encoder[token], None) left = tokens[i - 1] if i > 0 else None right = tokens[i + 1] if i < len(tokens) - 1 else None if isinstance(tok_extended, AddedToken): if tok_extended.rstrip and right: tokens[i + 1] = right.lstrip() if tok_extended.lstrip and left: tokens[i - 1] = left.rstrip() if tok_extended.single_word and left and (left[-1] != ' '): tokens[i - 1] += token tokens[i] = '' elif tok_extended.single_word and right and (right[0] != ' '): tokens[i + 1] = token + tokens[i + 1] tokens[i] = '' else: raise ValueError(f'{tok_extended} cannot be tokenized because it was not properly added to the tokenizer. This means that it is not an `AddedToken` but a {type(tok_extended)}') tokenized_text = [] for token in tokens: if not token: continue if token in no_split_token: tokenized_text.append(token) else: tokenized_text.extend(self._tokenize(token)) return tokenized_text
Converts a string into a sequence of tokens, using the tokenizer. Split in words for word-based vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces). Takes care of added tokens. Args: text (`str`): The sequence to be encoded. **kwargs (additional keyword arguments): Passed along to the model-specific `prepare_for_tokenization` preprocessing method. Returns: `List[str]`: The list of tokens.
github-repos
def new_reviewer(self, name, anomalous=None): n = self._reviewer_cls( self, name=name, credibility=self.credibility, anomalous=anomalous) self.graph.add_node(n) self.reviewers.append(n) return n
Create a new reviewer. Args: name: name of the new reviewer. anomalous: initial anomalous score. (default: None) Returns: A new reviewer instance.
juraj-google-style
def box_area(boxes: Tensor) -> Tensor: boxes = _upcast(boxes) return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
Computes the area of a set of bounding boxes, which are specified by its (x1, y1, x2, y2) coordinates. Args: boxes (`torch.FloatTensor` of shape `(number_of_boxes, 4)`): Boxes for which the area will be computed. They are expected to be in (x1, y1, x2, y2) format with `0 <= x1 < x2` and `0 <= y1 < y2`. Returns: `torch.FloatTensor`: a tensor containing the area for each box.
github-repos
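A small worked example of the same area computation in plain torch (values are made up):

import torch

boxes = torch.tensor([[0.0, 0.0, 2.0, 3.0],
                      [1.0, 1.0, 4.0, 5.0]])
# width * height per box, matching box_area above
areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
print(areas)  # tensor([ 6., 12.])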
def __init__(self, model, generation_config: GenerationConfig, manual_eviction: bool=False, max_queue_size=0, streaming: bool=True): self.model = model self.generation_config = generation_config self.input_queue = queue.Queue(maxsize=max_queue_size) self.output_queue = queue.Queue() self.stop_event = threading.Event() self.streaming = streaming self.log_prob_generation = getattr(generation_config, 'log_prob_generation', False) self._generation_thread = None self._request_counter = 0 self._request_lock = threading.Lock() self.model.generation_config.top_p = None self.do_sample = getattr(generation_config, 'do_sample', True) self.logit_processor = self.model._get_logits_processor(self.model.generation_config) self.use_cuda_graph = getattr(generation_config, 'use_cuda_graph', True) self.profile = getattr(generation_config, 'profile', False) self.manual_eviction = manual_eviction self.batch_processor: Optional[ContinuousBatchProcessor] = None
Initialize the continuous batching manager. Args: model: The language model for generation generation_config: Configuration for generation parameters manual_eviction: Whether KV-cache eviction is handled manually by the caller max_queue_size: Maximum size of the request queue (0 = unlimited) streaming: Whether to stream tokens as they are generated
github-repos
def _IsMetadataFile(self, file_entry): if (file_entry.type_indicator == dfvfs_definitions.TYPE_INDICATOR_TSK and file_entry.path_spec.location in self._METADATA_FILE_LOCATIONS_TSK): return True return False
Determines if the file entry is a metadata file. Args: file_entry (dfvfs.FileEntry): a file entry object. Returns: bool: True if the file entry is a metadata file.
juraj-google-style
def list_storage_accounts_sub(access_token, subscription_id): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/providers/Microsoft.Storage/storageAccounts', '?api-version=', STORAGE_API]) return do_get(endpoint, access_token)
List the storage accounts in the specified subscription. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. Returns: HTTP response. JSON body list of storage accounts.
codesearchnet
def __str__(self): return self.str_internal()
Generates a useful string for this object. Compactly displays interesting fields. In particular, pickled fields are not displayed. Note that we collapse the fields of the contained Worker* object into this object, since there is a 1-1 mapping between Operation and operation_specs.Worker*. Returns: Compact string representing this object.
github-repos
def _broadcast(value, target): return tf.broadcast_to( tf.convert_to_tensor(value=value, dtype=target.dtype), distribution_util.prefer_static_shape(target)[:-1])
Broadcast a value to match the batching dimensions of a target. If necessary the value is converted into a tensor. Both value and target should be of the same dtype. Args: value: A value to broadcast. target: A `Tensor` of shape [b1, ..., bn, d]. Returns: A `Tensor` of shape [b1, ..., bn] and same dtype as the target.
juraj-google-style
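A rough equivalent of the broadcast above in plain TensorFlow, with `tf.shape` standing in for `distribution_util.prefer_static_shape` (an assumption, since that helper is not shown here):

import tensorflow as tf

target = tf.zeros([2, 3, 4])  # batch dims [2, 3], event dim 4
value = tf.convert_to_tensor(0.5, dtype=target.dtype)
# Broadcast the scalar to the batching dims only (all but the last axis).
broadcast = tf.broadcast_to(value, tf.shape(target)[:-1])
print(broadcast.shape)  # (2, 3)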
def get_section_header(self, section): self._ensure_section_headers_loaded() if type(section) is int: return self._section_headers_by_index[section] else: return self._section_headers_by_name[section]
Get a specific section header by index or name. Args: section(int or str): The index or name of the section header to return. Returns: :class:`~ELF.SectionHeader`: The section header. Raises: KeyError: The requested section header does not exist.
juraj-google-style
def write(self, output_buffer, kmip_version=enums.KMIPVersion.KMIP_1_0): local_buffer = utils.BytearrayStream() if self._object_type: self._object_type.write(local_buffer, kmip_version=kmip_version) else: raise exceptions.InvalidField('The DeriveKey request payload is missing the object type field.') if self._unique_identifiers: for unique_identifier in self._unique_identifiers: unique_identifier.write(local_buffer, kmip_version=kmip_version) else: raise exceptions.InvalidField('The DeriveKey request payload is missing the unique identifiers field.') if self._derivation_method: self._derivation_method.write(local_buffer, kmip_version=kmip_version) else: raise exceptions.InvalidField('The DeriveKey request payload is missing the derivation method field.') if self._derivation_parameters: self._derivation_parameters.write(local_buffer, kmip_version=kmip_version) else: raise exceptions.InvalidField('The DeriveKey request payload is missing the derivation parameters field.') if (kmip_version < enums.KMIPVersion.KMIP_2_0): if self._template_attribute: self._template_attribute.write(local_buffer, kmip_version=kmip_version) else: raise exceptions.InvalidField('The DeriveKey request payload is missing the template attribute field.') elif self._template_attribute: attrs = objects.convert_template_attribute_to_attributes(self._template_attribute) attrs.write(local_buffer, kmip_version=kmip_version) else: raise exceptions.InvalidField('The DeriveKey request payload is missing the template attribute field.') self.length = local_buffer.length() super(DeriveKeyRequestPayload, self).write(output_buffer, kmip_version=kmip_version) output_buffer.write(local_buffer.buffer)
Write the data encoding the DeriveKey request payload to a stream. Args: output_buffer (stream): A data stream in which to encode object data, supporting a write method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0. Raises: InvalidField: Raised if the object type, unique identifiers, derivation method, derivation parameters, or template attribute field is not defined.
codesearchnet
def edge(self, tail_name, head_name, label=None, _attributes=None, **attrs): tail_name = self._quote_edge(tail_name) head_name = self._quote_edge(head_name) attr_list = self._attr_list(label, attrs, _attributes) line = self._edge % (tail_name, head_name, attr_list) self.body.append(line)
Create an edge between two nodes. Args: tail_name: Start node identifier. head_name: End node identifier. label: Caption to be displayed near the edge. attrs: Any additional edge attributes (must be strings).
juraj-google-style
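A short usage sketch, assuming this is the `edge` method of the `graphviz` package's `Digraph`/`Graph` classes; the node names and attributes below are illustrative:

from graphviz import Digraph

g = Digraph()
g.node("a")
g.node("b")
# Extra keyword attributes are passed through as DOT edge attributes.
g.edge("a", "b", label="a to b", color="blue")
print(g.source)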
def f(): return constant_op.constant(1)
First sentence. Second sentence. Returns: Something.
github-repos
def set_precision(predictions, labels, weights_fn=common_layers.weights_nonzero): with tf.variable_scope("set_precision", values=[predictions, labels]): labels = tf.squeeze(labels, [2, 3]) weights = weights_fn(labels) labels = tf.one_hot(labels, predictions.shape[-1]) labels = tf.reduce_max(labels, axis=1) labels = tf.cast(labels, tf.bool) return tf.to_float(tf.equal(labels, predictions)), weights
Precision of set predictions. Args: predictions : A Tensor of scores of shape [batch, nlabels]. labels: A Tensor of int32s giving true set elements, of shape [batch, seq_length]. weights_fn: A function to weight the elements. Returns: hits: A Tensor of shape [batch, nlabels]. weights: A Tensor of shape [batch, nlabels].
juraj-google-style
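The core trick above is turning a padded sequence of label ids into a multi-hot set indicator: one-hot encode, then take the max over the sequence axis. A small NumPy illustration with made-up values (in the real function, padding labels are handled by the weights_fn):

import numpy as np

labels = np.array([[2, 5, 2, 0]])            # [batch, seq_length], 0 = padding
nlabels = 6
one_hot = np.eye(nlabels)[labels]            # [batch, seq_length, nlabels]
multi_hot = one_hot.max(axis=1).astype(bool) # which labels appear in the set
print(multi_hot.astype(int))                 # [[1 0 1 0 0 1]]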
def kaiser_sinc_filter1d(cutoff, half_width, kernel_size):
    is_even = kernel_size % 2 == 0
    half_size = kernel_size // 2
    delta_f = 4 * half_width
    attenuation = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95
    if attenuation > 50.0:
        beta = 0.1102 * (attenuation - 8.7)
    elif attenuation >= 21.0:
        beta = 0.5842 * (attenuation - 21) ** 0.4 + 0.07886 * (attenuation - 21.0)
    else:
        beta = 0.0
    kaiser_window = torch.kaiser_window(kernel_size, beta=beta, periodic=False, dtype=torch.float32)
    if is_even:
        time_indices = torch.arange(-half_size, half_size) + 0.5
    else:
        time_indices = torch.arange(kernel_size) - half_size
    if cutoff == 0:
        return torch.zeros((1, 1, kernel_size), dtype=torch.float32)
    sinc_filter = torch.sinc(2 * cutoff * time_indices)
    normalized_filter = 2 * cutoff * kaiser_window * sinc_filter
    normalized_filter /= normalized_filter.sum()
    return normalized_filter.view(1, 1, kernel_size)
Generates a 1D Kaiser-windowed sinc filter. Args: cutoff (float): Normalized cutoff frequency (0 to 0.5). half_width (float): Transition bandwidth. kernel_size (int): Number of filter taps. Returns: torch.Tensor: A tensor of shape (1, 1, kernel_size) representing the filter.
github-repos
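A quick check of the function above (argument values are illustrative; assumes `math`, `torch`, and the function itself are in scope). Because the taps are normalized, the filter should sum to one:

import torch

filt = kaiser_sinc_filter1d(cutoff=0.25, half_width=0.15, kernel_size=12)
print(filt.shape)         # torch.Size([1, 1, 12])
print(filt.sum().item())  # ~1.0: unit DC gain after normalization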
def transform(self, X, y=None): word_ids = [self._word_vocab.doc2id(doc) for doc in X] word_ids = pad_sequences(word_ids, padding='post') if self._use_char: char_ids = [[self._char_vocab.doc2id(w) for w in doc] for doc in X] char_ids = pad_nested_sequences(char_ids) features = [word_ids, char_ids] else: features = word_ids if y is not None: y = [self._label_vocab.doc2id(doc) for doc in y] y = pad_sequences(y, padding='post') y = to_categorical(y, self.label_size).astype(int) y = y if len(y.shape) == 3 else np.expand_dims(y, axis=0) return features, y else: return features
Transform documents to document ids. Uses the vocabulary learned by fit. Args: X : iterable an iterable which yields either str, unicode or file objects. y : iterable, label strings. Returns: features: document id matrix. y: label id matrix.
juraj-google-style
def __init__(self, caption, content, enabled=True): self._caption = caption self._content = content self._enabled = enabled
Menu constructor. TODO(cais): Nested menu is currently not supported. Support it. Args: caption: (str) caption of the menu item. content: Content of the menu item. For a menu item that triggers a command, for example, content is the command string. enabled: (bool) whether this menu item is enabled.
github-repos
def period_neighborhood_probability(self, radius, smoothing, threshold, stride, start_time, end_time):
    neighbor_x = self.x[::stride, ::stride]
    neighbor_y = self.y[::stride, ::stride]
    neighbor_kd_tree = cKDTree(np.vstack((neighbor_x.ravel(), neighbor_y.ravel())).T)
    neighbor_prob = np.zeros((self.data.shape[0], neighbor_x.shape[0], neighbor_x.shape[1]))
    print('Forecast Hours: {0}-{1}'.format(start_time, end_time))
    for m in range(len(self.members)):
        period_max = self.data[m, start_time:end_time, :, :].max(axis=0)
        valid_i, valid_j = np.where(period_max >= threshold)
        print(self.members[m], len(valid_i))
        if len(valid_i) > 0:
            var_kd_tree = cKDTree(np.vstack((self.x[valid_i, valid_j], self.y[valid_i, valid_j])).T)
            exceed_points = np.unique(np.concatenate(var_kd_tree.query_ball_tree(neighbor_kd_tree, radius))).astype(int)
            exceed_i, exceed_j = np.unravel_index(exceed_points, neighbor_x.shape)
            neighbor_prob[m][exceed_i, exceed_j] = 1
            if smoothing > 0:
                neighbor_prob[m] = gaussian_filter(neighbor_prob[m], smoothing, mode='constant')
    return neighbor_prob
Calculate the neighborhood probability over the full period of the forecast Args: radius: circular radius from each point in km smoothing: width of Gaussian smoother in km threshold: intensity of exceedance stride: number of grid points to skip for reduced neighborhood grid start_time: first forecast hour of the period end_time: last forecast hour of the period Returns: (neighborhood probabilities)
codesearchnet
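The neighborhood step above relies on cKDTree.query_ball_tree to mark every reduced-grid point within `radius` of an exceedance. A self-contained sketch of that step with synthetic points (all values illustrative):

import numpy as np
from scipy.spatial import cKDTree

pts = np.random.rand(100, 2) * 100.0       # grid point coordinates in km
grid_tree = cKDTree(pts)
exceed_tree = cKDTree(pts[:5])             # points where the threshold was exceeded
# For each exceedance, collect indices of all grid points within 10 km.
hit = np.unique(np.concatenate(exceed_tree.query_ball_tree(grid_tree, r=10.0))).astype(int)
prob = np.zeros(len(pts))
prob[hit] = 1.0                            # binary neighborhood exceedance field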
def generic_object_comparison(lhs, rhs, lhs_path, rhs_path, max_depth): if id(lhs) == id(rhs): return 0 if type(lhs) != type(rhs): return compare(str(type(lhs)), str(type(rhs))) if type(lhs) in [int, float, bool, str, bool, bytes, bytearray]: return compare(lhs, rhs) if isinstance(lhs, enum.Enum): return compare(lhs.name, rhs.name) max_depth -= 1 if max_depth < 0: return 0 if id(lhs) in lhs_path or id(rhs) in rhs_path: return 0 lhs_path.append(id(lhs)) rhs_path.append(id(rhs)) result = _generic_object_comparison_recursive_path(lhs, rhs, lhs_path, rhs_path, max_depth) lhs_path.pop() rhs_path.pop() return result
Identifies which object goes first in an (almost) total order of objects. Args: lhs: An arbitrary Python object or built-in type. rhs: An arbitrary Python object or built-in type. lhs_path: Traversal path from the root lhs object up to, but not including, lhs. The original contents of lhs_path are restored before the function returns. rhs_path: Same as lhs_path except for the rhs. max_depth: Maximum recursion depth. Returns: -1, 0, or 1 depending on whether lhs or rhs goes first in the total order. 0 if max_depth is exhausted. 0 if lhs is in lhs_path or rhs is in rhs_path (there is a cycle).
github-repos
def publish(self, subject, msg, reply=None): if msg is None: msg = '' if reply is None: command = 'PUB %s %d' % (subject, len(msg)) else: command = 'PUB %s %s %d' % (subject, reply, len(msg)) self._send(command) self._send(msg)
Publish publishes the data argument to the given subject. Args: subject (string): a string with the subject msg (string): payload string reply (string): subject used in the reply
juraj-google-style
def remove_alias(type_): if isinstance(type_, cpptypes.type_t): type_ref = type_ elif isinstance(type_, typedef.typedef_t): type_ref = type_.decl_type else: return type_ if type_ref.cache.remove_alias: return type_ref.cache.remove_alias no_alias = __remove_alias(type_ref.clone()) type_ref.cache.remove_alias = no_alias return no_alias
Returns `type_t` without typedef Args: type_ (type_t | declaration_t): type or declaration Returns: type_t: the type associated to the inputted declaration
codesearchnet
def _check_id(entity, entity_type): if entity is None: raise ParseError('{} ID missing'.format(entity_type)) elif not isinstance(entity, string_types): msg = '{} ID must be a string, id was {}.'.format(entity_type, entity) if isinstance(entity, bool): msg += (' You may have accidentally used an ID value that YAML' ' interprets as a boolean, such as "yes", "no", "on",' ' "off", "true" or "false". To use this ID, you have to' ' quote it with single or double quotes') raise ParseError(msg) elif len(entity) == 0: raise ParseError('{} ID must not be empty'.format(entity_type))
Check whether the ID is valid. First check if the ID is missing, then check if it is a string, and finally check if the string is empty. Each failed check raises a ParseError with a corresponding message. Args: entity: a string type object to be checked. entity_type: a string that shows the type of entity being checked, usually 'Compound' or 'Reaction'.
juraj-google-style
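The YAML-boolean pitfall mentioned in the error message is easy to reproduce with PyYAML (a YAML 1.1 loader):

import yaml

print(yaml.safe_load("id: yes"))    # {'id': True}  -- "yes" parses as a boolean
print(yaml.safe_load("id: 'yes'"))  # {'id': 'yes'} -- quoting keeps it a string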
def pauli_from_char(ch, n=0): ch = ch.upper() if (ch == 'I'): return I if (ch == 'X'): return X(n) if (ch == 'Y'): return Y(n) if (ch == 'Z'): return Z(n) raise ValueError('ch shall be X, Y, Z or I')
Make Pauli matrix from an character. Args: ch (str): "X" or "Y" or "Z" or "I". n (int, optional): Make Pauli matrix as n-th qubits. Returns: If ch is "X" => X, "Y" => Y, "Z" => Z, "I" => I Raises: ValueError: When ch is not "X", "Y", "Z" nor "I".
codesearchnet
def match_variables(self, pattern, return_type='name'): pattern = re.compile(pattern) vars_ = [v for v in self.variables.values() if pattern.search(v.name)] return vars_ if return_type.startswith('var') \ else [v.name for v in vars_]
Return columns whose names match the provided regex pattern. Args: pattern (str): A regex pattern to match all variable names against. return_type (str): What to return. Must be one of: 'name': Returns a list of names of matching variables. 'variable': Returns a list of Variable objects whose names match.
juraj-google-style
def _create_controller_info_record(self, controller_module_name): module = self._controller_modules[controller_module_name] controller_info = None try: controller_info = module.get_info(copy.copy(self._controller_objects[controller_module_name])) except AttributeError: logging.warning('No optional debug info found for controller %s. To provide it, implement `get_info`.', controller_module_name) try: yaml.dump(controller_info) except TypeError: logging.warning('The info of controller %s in class "%s" is not YAML serializable! Coercing it to string.', controller_module_name, self._class_name) controller_info = str(controller_info) return records.ControllerInfoRecord(self._class_name, module.MOBLY_CONTROLLER_CONFIG_NAME, controller_info)
Creates controller info record for a particular controller type. Info is retrieved from all the controller objects spawned from the specified module, using the controller module's `get_info` function. Args: controller_module_name: string, the name of the controller module to retrieve info from. Returns: A records.ControllerInfoRecord object.
codesearchnet
def returnListOfConfigurationValues(util): VALUES = {} configPath = os.path.join(getConfigPath()["appPath"], "general.cfg") if not os.path.exists(configPath): defaultConfigPath = os.path.join(getConfigPath()["appPathDefaults"], "general.cfg") try: with open(defaultConfigPath) as iF: cont = iF.read() with open(configPath, "w") as oF: oF.write(cont) except Exception as e: raise errors.DefaultConfigurationFileNotFoundError(configPath, defaultConfigPath); config = ConfigParser.ConfigParser() config.read(configPath) LISTS = ["tlds", "domains", "platforms", "extension", "exclude_platforms", "exclude_domains"] for section in config.sections(): incomplete = False if section.lower() == util.lower(): for (param, value) in config.items(section): if value == '': if param in LISTS: value = [] else: value = "" elif param in LISTS: value = value.split(' ') elif param == "threads": try: value = int(value) except Exception as err: raise errors.ConfigurationParameterNotValidError(configPath, section, param, value) elif param == "debug": try: if int(value) == 0: value = False else: value = True except Exception as err: print("Something happened when processing this debug option. Resetting to default.") defaultConfigPath = os.path.join(getConfigPath()["appPathDefaults"], "general.cfg") try: with open(defaultConfigPath) as iF: cont = iF.read() with open(configPath, "w") as oF: oF.write(cont) except Exception as e: raise errors.DefaultConfigurationFileNotFoundError(configPath, defaultConfigPath); VALUES[param] = value break return VALUES
Method that recovers the configuration information about each program TODO: Grab the default file from the package data instead of storing it in the main folder. Args: ----- util: Any of the utils that are contained in the framework: domainfy, entify, mailfy, phonefy, searchfy, usufy. Returns: -------- A dictionary containing the default configuration.
juraj-google-style
def get_message(self, metadata=False, asctime=True): msg = self.msg if is_string(self.msg) else str(self.msg) if self.args: try: msg = msg % self.args except: msg += str(self.args) if asctime: msg = "[" + self.asctime + "] " + msg if metadata: msg += "\nCalled by %s at %s:%s\n" % (self.func_name, self.pathname, self.lineno) return msg
Return the message after merging any user-supplied arguments with the message. Args: metadata: True if function and module name should be added. asctime: True if time string should be added.
juraj-google-style
def segmentation_to_mask(polys, height, width): polys = [p.flatten().tolist() for p in polys] assert len(polys) > 0, "Polygons are empty!" import pycocotools.mask as cocomask rles = cocomask.frPyObjects(polys, height, width) rle = cocomask.merge(rles) return cocomask.decode(rle)
Convert polygons to binary masks. Args: polys: a list of nx2 float array. Each array contains many (x, y) coordinates. height: height of the output mask. width: width of the output mask. Returns: a binary matrix of (height, width)
juraj-google-style
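A small end-to-end sketch of the same pycocotools calls, assuming pycocotools is installed; the polygon is made up:

import numpy as np
import pycocotools.mask as cocomask

# One square polygon, already flattened to [x1, y1, x2, y2, ...]
poly = [[10, 10, 40, 10, 40, 40, 10, 40]]
rles = cocomask.frPyObjects(poly, 50, 50)
mask = cocomask.decode(cocomask.merge(rles))
print(mask.shape, mask.sum())  # (50, 50) and roughly a 30x30 block of pixels set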
def _build_document_scrapers(cls, session: AppSession): html_parser = session.factory['HTMLParser'] element_walker = session.factory.new('ElementWalker') scrapers = [session.factory.new('HTMLScraper', html_parser, element_walker, followed_tags=session.args.follow_tags, ignored_tags=session.args.ignore_tags, only_relative=session.args.relative, robots=session.args.robots, encoding_override=session.args.remote_encoding)] if ('css' in session.args.link_extractors): css_scraper = session.factory.new('CSSScraper', encoding_override=session.args.remote_encoding) scrapers.append(css_scraper) element_walker.css_scraper = css_scraper if ('javascript' in session.args.link_extractors): javascript_scraper = session.factory.new('JavaScriptScraper', encoding_override=session.args.remote_encoding) scrapers.append(javascript_scraper) element_walker.javascript_scraper = javascript_scraper if session.args.sitemaps: scrapers.append(session.factory.new('SitemapScraper', html_parser, encoding_override=session.args.remote_encoding)) return scrapers
Create the document scrapers. Returns: A list of document scrapers
codesearchnet
def _set_state(self, shard_state, tstate, task_directive): if (task_directive in (self._TASK_DIRECTIVE.RETRY_TASK, self._TASK_DIRECTIVE.DROP_TASK)): return task_directive if (task_directive == self._TASK_DIRECTIVE.ABORT_SHARD): shard_state.set_for_abort() return task_directive if (task_directive == self._TASK_DIRECTIVE.PROCEED_TASK): shard_state.advance_for_next_slice() tstate.advance_for_next_slice() return task_directive if (task_directive == self._TASK_DIRECTIVE.RECOVER_SLICE): tstate.advance_for_next_slice(recovery_slice=True) shard_state.advance_for_next_slice(recovery_slice=True) return task_directive if (task_directive == self._TASK_DIRECTIVE.RETRY_SLICE): task_directive = self._attempt_slice_retry(shard_state, tstate) if (task_directive == self._TASK_DIRECTIVE.RETRY_SHARD): task_directive = self._attempt_shard_retry(shard_state, tstate) if (task_directive == self._TASK_DIRECTIVE.FAIL_TASK): shard_state.set_for_failure() return task_directive
Set shard_state and tstate based on task_directive. Args: shard_state: model.ShardState for current shard. tstate: model.TransientShardState for current shard. task_directive: self._TASK_DIRECTIVE for current shard. Returns: A _TASK_DIRECTIVE enum. PROCEED_TASK if task should proceed normally. RETRY_SHARD if shard should be retried. RETRY_SLICE if slice should be retried. FAIL_TASK if shard should fail. RECOVER_SLICE if slice should be recovered. ABORT_SHARD if shard should be aborted. RETRY_TASK if task should be retried. DROP_TASK if task should be dropped.
codesearchnet
def add_multiple_to_queue(self, items, container=None): if container is not None: container_uri = container.resources[0].uri container_metadata = to_didl_string(container) else: container_uri = '' container_metadata = '' chunk_size = 16 item_list = list(items) for index in range(0, len(item_list), chunk_size): chunk = item_list[index:index + chunk_size] uris = ' '.join([item.resources[0].uri for item in chunk]) uri_metadata = ' '.join([to_didl_string(item) for item in chunk]) self.avTransport.AddMultipleURIsToQueue([ ('InstanceID', 0), ('UpdateID', 0), ('NumberOfURIs', len(chunk)), ('EnqueuedURIs', uris), ('EnqueuedURIsMetaData', uri_metadata), ('ContainerURI', container_uri), ('ContainerMetaData', container_metadata), ('DesiredFirstTrackNumberEnqueued', 0), ('EnqueueAsNext', 0) ])
Add a sequence of items to the queue. Args: items (list): A sequence of items to the be added to the queue container (DidlObject, optional): A container object which includes the items.
juraj-google-style
def create_from(cls, backend):
    backend_config = backend.configuration()
    try:
        backend_default = backend.defaults()
    except ModelValidationError:
        from collections import namedtuple
        BackendDefault = namedtuple('BackendDefault', ('qubit_freq_est', 'meas_freq_est'))
        backend_default = BackendDefault(
            qubit_freq_est=backend_config.defaults['qubit_freq_est'],
            meas_freq_est=backend_config.defaults['meas_freq_est']
        )
    n_qubits = backend_config.n_qubits
    n_registers = backend_config.n_registers
    n_uchannels = backend_config.n_uchannels
    if n_uchannels > 0 and n_uchannels != n_qubits:
        raise PulseError("This version assumes no U-channels or a number of U-channels equal to the number of qubits.")
    qubit_lo_freqs = backend_default.qubit_freq_est
    qubit_lo_ranges = backend_config.qubit_lo_range
    meas_lo_freqs = backend_default.meas_freq_est
    meas_lo_ranges = backend_config.meas_lo_range
    drives = [
        DriveChannel(i, qubit_lo_freqs[i], tuple(qubit_lo_ranges[i]))
        for i in range(n_qubits)
    ]
    measures = [
        MeasureChannel(i, meas_lo_freqs[i], tuple(meas_lo_ranges[i]))
        for i in range(n_qubits)
    ]
    acquires = [AcquireChannel(i) for i in range(n_qubits)]
    controls = [ControlChannel(i) for i in range(n_uchannels)]
    qubits = []
    for i in range(n_qubits):
        qubit = Qubit(i, drive_channels=[drives[i]],
                      control_channels=None if n_uchannels == 0 else controls[i],
                      measure_channels=[measures[i]],
                      acquire_channels=[acquires[i]])
        qubits.append(qubit)
    registers = [RegisterSlot(i) for i in range(n_registers)]
    mem_slots = [MemorySlot(i) for i in range(len(qubits))]
    return DeviceSpecification(qubits, registers, mem_slots)
Create device specification with values in backend configuration. Args: backend (Backend): backend whose configuration and defaults are used. Returns: DeviceSpecification: created device specification Raises: PulseError: when an invalid backend is specified
juraj-google-style
def ParseDom(self, dom, feed): shape_num = 0 for node in dom.getElementsByTagName('Placemark'): p = self.ParsePlacemark(node) if p.IsPoint(): (lon, lat) = p.coordinates[0] m = self.stopNameRe.search(p.name) feed.AddStop(lat, lon, m.group(1)) elif p.IsLine(): self.ConvertPlacemarkToShape(p, feed)
Parses the given kml dom tree and updates the Google transit feed object. Args: dom - kml dom tree feed - an instance of Schedule class to be updated
juraj-google-style
def reqAccountUpdatesMulti( self, account: str = '', modelCode: str = ''): self._run(self.reqAccountUpdatesMultiAsync(account, modelCode))
It is recommended to use :meth:`.accountValues` instead. Request account values of multiple accounts and keep updated. This method is blocking. Args: account: If specified, filter for this account name. modelCode: If specified, filter for this account model.
juraj-google-style
def to_soft(self, path_or_handle, as_gzip=False): if isinstance(path_or_handle, str): if as_gzip: with gzip.open(path_or_handle, 'wt') as outfile: outfile.write(self._get_object_as_soft()) else: with open(path_or_handle, 'w') as outfile: outfile.write(self._get_object_as_soft()) else: path_or_handle.write(self._get_object_as_soft())
Save the object in a SOFT format. Args: path_or_handle (:obj:`str` or :obj:`file`): Path or handle to output file as_gzip (:obj:`bool`): Save as gzip
codesearchnet
def cut_setting(self, cut): cut_settings = {'full' : 0b00000001, 'half' : 0b00000010, 'chain': 0b00000100, 'special': 0b00001000 } if cut in cut_settings: self.send(chr(27)+'iC'+chr(cut_settings[cut])) else: raise RuntimeError('Invalid cut type.')
Set cut setting for printer. Args: cut: The type of cut setting we want. Choices are 'full', 'half', 'chain', and 'special'. Returns: None Raises: RuntimeError: Invalid cut type.
juraj-google-style
def recode_curesim_reads(curesim_fastq_fo, rnf_fastq_fo, fai_fo, genome_id, number_of_read_tuples=(10 ** 9), recode_random=False):
    # CuReSim read name format:
    # @<contig>_<start>_<reverse_flag>_<random_flag>_<nb_ins>_<nb_del>_<nb_subst>_<read_id>
    curesim_pattern = re.compile('@(.*)_([0-9]+)_([0-9]+)_([0-9]+)_([0-9]+)_([0-9]+)_([0-9]+)_([0-9]+)')
    max_seq_len = 0
    fai_index = rnftools.utils.FaIdx(fai_fo=fai_fo)
    read_tuple_id_width = len(format(number_of_read_tuples, 'x'))
    fq_creator = rnftools.rnfformat.FqCreator(fastq_fo=rnf_fastq_fo, read_tuple_id_width=read_tuple_id_width, genome_id_width=2, chr_id_width=fai_index.chr_id_width, coor_width=fai_index.coor_width, info_reads_in_tuple=True, info_simulator='curesim')
    read_tuple_id = 0
    i = 0
    for line in curesim_fastq_fo:
        if (i % 4) == 0:
            m = curesim_pattern.search(line)
            if m is None:
                rnftools.utils.error("Read '{}' was not generated by CuReSim.".format(line[1:]), program='RNFtools', subprogram='MIShmash', exception=ValueError)
            contig_name = m.group(1)
            start_pos = int(m.group(2))
            direction = 'R' if int(m.group(3)) else 'F'
            random = bool(m.group(4))
            ins_nb = int(m.group(5))
            del_nb = int(m.group(6))
            subst_nb = int(m.group(7))
            rd_id = int(m.group(8))
            end_pos = start_pos - 1 - ins_nb + del_nb
            chr_id = 0
            random = contig_name[:4] == 'rand'
        elif (i % 4) == 1:
            bases = line.strip()
            end_pos += len(bases)
            if recode_random:
                left = 0
                right = 0
            else:
                left = start_pos + 1
                right = end_pos
            segment = rnftools.rnfformat.Segment(genome_id=genome_id, chr_id=chr_id, direction=direction, left=left, right=right)
        elif (i % 4) == 2:
            pass
        elif (i % 4) == 3:
            qualities = line.strip()
            if random == recode_random:
                fq_creator.add_read(read_tuple_id=read_tuple_id, bases=bases, qualities=qualities, segments=[segment])
                read_tuple_id += 1
        i += 1
    fq_creator.flush_read_tuple()
Recode CuReSim output FASTQ file to the RNF-compatible output FASTQ file. Args: curesim_fastq_fo (file object): File object of CuReSim FASTQ file. rnf_fastq_fo (file object): File object of RNF FASTQ. fai_fo (file object): File object for FAI file of the reference genome. genome_id (int): RNF genome ID to be used. number_of_read_tuples (int): Expected number of read tuples (to estimate number of digits in RNF). recode_random (bool): Recode random reads. Raises: ValueError
codesearchnet
def _txn_is_in_valid_batch(self, txn_id): batch = self._batches_by_txn_id[txn_id] return all( self._txn_results[sig].is_valid for sig in set(self._txn_results).intersection( (txn.header_signature for txn in batch.transactions)))
Returns whether the transaction is in a valid batch. Args: txn_id (str): The transaction header signature. Returns: (bool): True if the txn's batch is valid, False otherwise.
juraj-google-style
def l2_distance_sq(t1, t2, name=None): with tf.name_scope(name, 'l2_distance_sq', [t1, t2]) as scope: t1 = tf.convert_to_tensor(t1, name='t1') t2 = tf.convert_to_tensor(t2, name='t2') return length_squared(tf.subtract(t1, t2), name=scope)
Square of l2 distance between t1 and t2. Args: t1: A tensor. t2: A tensor that is the same size as t1. name: Optional name for this op. Returns: The l2 distance between t1 and t2.
juraj-google-style
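The same quantity computed directly in eager TensorFlow, as a sanity check of what the op returns (values are illustrative):

import tensorflow as tf

t1 = tf.constant([1.0, 2.0, 3.0])
t2 = tf.constant([1.0, 0.0, 3.0])
# Squared L2 distance: sum of squared element-wise differences.
print(tf.reduce_sum(tf.square(t1 - t2)).numpy())  # 4.0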
def _allocate_channel(self): try: channel = (yield self.channel()) except pika.exceptions.NoFreeChannels: raise NoFreeChannels() _std_log.debug('Created AMQP channel id %d', channel.channel_number) if self._confirms: (yield channel.confirm_delivery()) defer.returnValue(channel)
Allocate a new AMQP channel. Returns: defer.Deferred: A Deferred that fires with the newly allocated channel. Raises: NoFreeChannels: If this connection has reached its maximum number of channels.
codesearchnet