Columns:
- code: string (lengths 20 to 4.93k)
- docstring: string (lengths 33 to 1.27k)
- source: string (3 classes)
def delete(self):
    clone = copy.deepcopy(self)
    return [(item.delete() and item) for item in clone]
Deletes all objects that match the queryset. Note: Unlike RDBMS systems, this method makes individual save calls to the backend DB store, so it exists as a convenience utility rather than a performance enhancement. Returns: List of deleted objects, or None if *confirm* is not set. Example: >>> Person.ob...
codesearchnet
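A minimal standalone sketch of the same copy-then-delete pattern, using a hypothetical `Item` class in place of the real queryset members:

```python
import copy

class Item:
    """Hypothetical stand-in for a queryset member."""
    def __init__(self, key):
        self.key = key
        self.deleted = False

    def delete(self):
        self.deleted = True
        return True  # backend call assumed to report success

def delete_all(queryset):
    # Mirror of the pattern above: deep-copy, delete each copy,
    # return the deleted copies (delete() truthy keeps the item).
    clone = copy.deepcopy(queryset)
    return [(item.delete() and item) for item in clone]
```

Because the deep copy is deleted, the caller's original objects are left untouched.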
def plot_compare(self, other_plotter): data_orig = self.bs_plot_data() data = other_plotter.bs_plot_data() if len(data_orig['distances']) != len(data['distances']): raise ValueError('The two objects are not compatible.') plt = self.get_plot() band_linewidt...
Plot two band structures for comparison: one in red, the other in blue. The two band structures need to be defined on the same symmetry lines! The distance between symmetry lines is that of the band structure used to build the PhononBSPlotter. Args: another PhononBSPlotter object defined along the same symmetry...
juraj-google-style
def _verifyClusterSpecEquality(self, cluster_spec, expected_proto): self.assertProtoEquals(expected_proto, cluster_spec.as_cluster_def()) self.assertProtoEquals(expected_proto, server_lib.ClusterSpec(cluster_spec).as_cluster_def()) self.assertProtoEquals(expected_proto, server_lib.ClusterSpec(cluster_spec.a...
Verifies that the ClusterSpec generates the correct proto. We are testing this four different ways to ensure that the ClusterSpec returned by the TPUClusterResolver behaves identically to a normal ClusterSpec when passed into the generic ClusterSpec libraries. Args: cluster_spec: ClusterSpec returned by the TPUCluste...
github-repos
def construct_policy(app='coreforrest', env='dev', group='forrest', region='us-east-1', pipeline_settings=None): LOG.info('Create custom IAM Policy for %s.', app) services = pipeline_settings.get('services', {}) LOG.debug('Found requested services: %s', services) services = auto_service(pipeline_...
Assemble IAM Policy for _app_. Args: app (str): Name of Spinnaker Application. env (str): Environment/Account in AWS. group (str): Application group/namespace. region (str): AWS region. pipeline_settings (dict): Settings from *pipeline.json*. Returns: json: Custom IAM Policy for _app_. None: When no *services* have bee...
juraj-google-style
def _RDFClass(cls, table): rdf_cls_name = "OsqueryTable{}".format(hash(table.query)) try: return cls._rdf_cls_cache[rdf_cls_name] except KeyError: pass rdf_cls = compatibility.MakeType(rdf_cls_name, (rdf_structs.RDFProtoStruct,), {}) rdf_cls.Ad...
Creates a dynamic RDF proto struct class for given osquery table. The fields of the proto will correspond to the columns of the table. Args: table: An osquery table for which the class is about to be generated. Returns: A class object corresponding to the given table.
juraj-google-style
def QA_fetch_user(user_cookie, db=DATABASE):
    # use the db parameter rather than the module-level DATABASE
    collection = db.account
    return [res for res in collection.find({'user_cookie': user_cookie}, {"_id": 0})]
Get the user. Arguments: user_cookie (str): the unique cookie_id for a user. Keyword Arguments: db: database for the query. Returns: list --- [ACCOUNT]
juraj-google-style
def load_chkpt_vars(model_path):
    model_path = get_checkpoint_path(model_path)
    reader = tfv1.train.NewCheckpointReader(model_path)
    var_names = reader.get_variable_to_shape_map().keys()
    result = {}
    for n in var_names:
        result[n] = reader.get_tensor(n)
    return result
Load all variables from a checkpoint to a dict. Args: model_path(str): path to a checkpoint. Returns: dict: a name:value dict
juraj-google-style
def unstack(df, level=-1, reset_index=True):
    df = df.unstack(level=level)
    if reset_index:
        df = df.reset_index()
        df.columns = df.columns.map(_join_names)
    return df
pd.DataFrame.unstack adapter. Call the `df.unstack` method using the indicated level and afterwards join the column names using an underscore. Args: df (pandas.DataFrame): DataFrame to unstack. level (str, int or list): Level(s) of index to unstack, can pass level name reset_index (bool): Whether to reset the index a...
codesearchnet
def Mean(self):
    old_p = 0
    total = 0.0
    for x, new_p in zip(self.xs, self.ps):
        p = new_p - old_p
        total += p * x
        old_p = new_p
    return total
Computes the mean of a CDF. Returns: float mean
codesearchnet
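The accumulation above can be exercised outside the class; here `xs`/`ps` are assumed to be sorted values and their cumulative probabilities:

```python
def cdf_mean(xs, ps):
    # xs: sorted values; ps: cumulative probabilities (last entry 1.0)
    old_p = 0.0
    total = 0.0
    for x, new_p in zip(xs, ps):
        total += (new_p - old_p) * x  # probability mass at x, times x
        old_p = new_p
    return total

# P(X=1)=0.25, P(X=2)=0.25, P(X=3)=0.5 -> mean 2.25
print(cdf_mean([1, 2, 3], [0.25, 0.5, 1.0]))
```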
def speed_info(self):
    speed_info = structs.JLinkSpeedInfo()
    self._dll.JLINKARM_GetSpeedInfo(ctypes.byref(speed_info))
    return speed_info
Retrieves information about supported target interface speeds. Args: self (JLink): the ``JLink`` instance Returns: The ``JLinkSpeedInfo`` instance describing the supported target interface speeds.
juraj-google-style
def value_of( self, value: Union[sympy.Basic, float, str] ) -> Union[sympy.Basic, float]: if isinstance(value, str): return self.param_dict.get(value, sympy.Symbol(value)) if isinstance(value, sympy.Basic): if sys.version_info.major < 3: ...
Attempt to resolve a Symbol or name or float to its assigned value. If unable to resolve a sympy.Symbol, returns it unchanged. If unable to resolve a name, returns a sympy.Symbol with that name. Args: value: The sympy.Symbol or name or float to try to resolve into just a float. Returns: The value of the parameter as...
juraj-google-style
def add_features(self, features, append=True, merge='outer', duplicates='ignore', min_studies=0.0, threshold=0.001): if ((not append) or (not hasattr(self, 'feature_table'))): self.feature_table = FeatureTable(self) self.feature_table.add_features(features, merge=merge, duplicates=duplicates, min_studie...
Construct a new FeatureTable from file. Args: features: Feature data to add. Can be: (a) A text file containing the feature data, where each row is a study in the database, with features in columns. The first column must contain the IDs of the studies to match up with the image data. (b) A pandas DataFrame, where stud...
codesearchnet
def get_flat_tensor_specs(element_spec):
    return list(itertools.chain.from_iterable(
        spec._flat_tensor_specs for spec in nest.flatten(element_spec)))
Returns a list of `tf.TypeSpec`s for the element tensor representation. Args: element_spec: A nested structure of `tf.TypeSpec` objects representing the element type specification. Returns: A list of `tf.TypeSpec`s for the element tensor representation.
github-repos
def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) -> List[int]:
    if token_ids_1 is None:
        return self.bos_token_id + token_ids_0 + self.eos_token_id
    return self.bos_token_id + token_ids_0 + token_ids_1 + self.eos_token_id
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. The special tokens depend on calling set_lang. An NLLB sequence has the following format, where `X` represents the sequence: - `input_ids` (for encoder) `X [eos, src_lang_code]` - `de...
github-repos
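A standalone sketch of the wrapping logic above; the `BOS`/`EOS` id lists are placeholder values, not NLLB's real special-token ids:

```python
BOS = [0]  # placeholder bos_token_id list
EOS = [2]  # placeholder eos_token_id list

def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None):
    # Single sequence: BOS + X + EOS; pair: BOS + A + B + EOS
    if token_ids_1 is None:
        return BOS + token_ids_0 + EOS
    return BOS + token_ids_0 + token_ids_1 + EOS
```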
def __init__(self, value=HashingAlgorithmEnum.SHA_256):
    super(HashingAlgorithm, self).__init__(
        enums.HashingAlgorithm, value, Tags.HASHING_ALGORITHM)
Construct a HashingAlgorithm object. Args: value (HashingAlgorithm): A HashingAlgorithm enumeration value, (e.g., HashingAlgorithm.MD5). Optional, defaults to HashingAlgorithm.SHA_256.
juraj-google-style
async def _create_remote_user(self, **payload):
    read_action = get_crud_action(method='create', model='user')
    user_data = await self.event_broker.ask(
        action_type=read_action,
        payload=payload
    )
    return json.loads(user_data)
This method creates a service record in the remote user service with the given email. Args: uid (str): the user identifier to create Returns: (dict): a summary of the user that was created
juraj-google-style
def _init_metadata_service(self, version): metadata_cfg = self._load_config_section(CONFIG_METADATA_SECTION) self._token_metadata = metadata_cfg[CONFIG_TOKEN] proto = metadata_cfg[CONFIG_PROTOCOL] host = metadata_cfg[CONFIG_HOST] self._metadata = MetadataService(host, version) self._metadata.bas...
Method to initialize the Metadata Service from the config data Args: version (string): Version of Boss API to use. Returns: None Raises: (KeyError): if given invalid version.
codesearchnet
class SeamlessM4TProcessor(ProcessorMixin): feature_extractor_class = 'SeamlessM4TFeatureExtractor' tokenizer_class = ('SeamlessM4TTokenizer', 'SeamlessM4TTokenizerFast') def __init__(self, feature_extractor, tokenizer): super().__init__(feature_extractor, tokenizer) def __call__(self, text=No...
Constructs a SeamlessM4T processor which wraps a SeamlessM4T feature extractor and a SeamlessM4T tokenizer into a single processor. [`SeamlessM4TProcessor`] offers all the functionalities of [`SeamlessM4TFeatureExtractor`] and [`SeamlessM4TTokenizerFast`]. See the [`~SeamlessM4TProcessor.__call__`] and [`~SeamlessM4TP...
github-repos
def _update_task(self, task): self.task = task self.task.data.update(self.task_data) self.task_type = task.task_spec.__class__.__name__ self.spec = task.task_spec self.task_name = task.get_name() self.activity = getattr(self.spec, 'service_class', '') sel...
Assigns current task step to self.task then updates the task's data with self.task_data Args: task: Task object.
juraj-google-style
def update_context(self, context, app=None): if ((app is None) and (self._context is _CONTEXT_MISSING) and (not in_app_context())): raise RuntimeError('Attempted to update component context without a bound app context or eager app set! Please pass the related app you want to update the context for!') if...
Replace the component's context with a new one. Args: context (dict): The new context to set this component's context to. Keyword Args: app (flask.Flask, optional): The app to update this context for. If not provided, the result of ``Component.app`` will be used.
codesearchnet
def _term(self, term):
    term = str(term)
    if term:
        self.__query['q'] += term
    return self
Add a term to the query. Arguments: term (str): The term to add. Returns: SearchHelper: Self
codesearchnet
def subtract(inputs, **kwargs):
    return Subtract(**kwargs)(inputs)
Functional interface to the `Subtract` layer. Args: inputs: A list of input tensors (exactly 2). **kwargs: Standard layer keyword arguments. Returns: A tensor, the difference of the inputs. Examples: ```python import keras input1 = keras.layers.Input(shape=(16,)) x1 = keras.layers.Dense(8, activation='relu')(input...
github-repos
def _update_explicit_bucket_count(a_float, dist): buckets = dist.explicitBuckets if (buckets is None): raise ValueError((_BAD_UNSET_BUCKETS % u'explicit buckets')) bucket_counts = dist.bucketCounts bounds = buckets.bounds if (len(bucket_counts) < (len(bounds) + 1)): raise ValueError(...
Adds `a_float` to `dist`, updating its explicit buckets. Args: a_float (float): a new value dist (:class:`endpoints_management.gen.servicecontrol_v1_messages.Distribution`): the Distribution being updated Raises: ValueError: if `dist` does not already have explict buckets defined ValueError: if there are not enough b...
codesearchnet
def eval(self, feed_dict=None, session=None):
    return _eval_using_default_session(self, feed_dict, self.graph, session)
Evaluates this tensor in a `Session`. Note: If you are not using `compat.v1` libraries, you should not need this, (or `feed_dict` or `Session`). In eager execution (or within `tf.function`) you do not need to call `eval`. Calling this method will execute all preceding operations that produce the inputs needed for th...
github-repos
def clear_events(self, event_name):
    self.lock.acquire()
    try:
        q = self.get_event_q(event_name)
        q.queue.clear()
    except queue.Empty:
        return
    finally:
        self.lock.release()
Clear all events of a particular name. Args: event_name: Name of the events to be popped.
juraj-google-style
def List(self, request, global_params=None):
    config = self.GetMethodConfig('List')
    return self._RunMethod(config, request, global_params=global_params)
Lists previously requested builds. Previously requested builds may still be in-progress, or may have finished successfully or unsuccessfully. Args: request: (CloudbuildProjectsBuildsListRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (ListBuildsResponse) The res...
github-repos
def get_statistics(self, id_or_uri, port_name=''):
    uri = self._client.build_uri(id_or_uri) + "/statistics"
    if port_name:
        uri = uri + "/" + port_name
    return self._client.get(uri)
Gets the statistics from an interconnect. Args: id_or_uri: Can be either the interconnect id or the interconnect uri. port_name (str): A specific port name of an interconnect. Returns: dict: The statistics for the interconnect that matches id.
juraj-google-style
def CheckKeyCompatibility(cls, key_path): key_path_upper = key_path.upper() for key_path_prefix in cls._COMPATIBLE_REGISTRY_KEY_PATH_PREFIXES: if key_path_upper.startswith(key_path_prefix): return True logger.warning('Key path: "{0:s}" is currently not supported'.format( key_path...
Checks if a Windows Registry key path is supported by dfWinReg. Args: key_path (str): path of the Windows Registry key. Returns: bool: True if key is compatible or False if not.
juraj-google-style
def previous_weekday(date):
    weekday = date.weekday()
    if weekday == 0:
        n_days = 3
    elif weekday == 6:
        n_days = 2
    else:
        n_days = 1
    return date - datetime.timedelta(days=n_days)
Returns the last weekday before date. Args: date (datetime or datetime.date) Returns: (datetime or datetime.date)
codesearchnet
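Self-contained, the helper and a usage example look like (pure stdlib; Monday steps back three days, Sunday two, any other day one):

```python
import datetime

def previous_weekday(date):
    weekday = date.weekday()
    if weekday == 0:      # Monday -> back 3 days to Friday
        n_days = 3
    elif weekday == 6:    # Sunday -> back 2 days to Friday
        n_days = 2
    else:
        n_days = 1
    return date - datetime.timedelta(days=n_days)

# Monday 2024-01-08 -> Friday 2024-01-05
print(previous_weekday(datetime.date(2024, 1, 8)))
```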
def add_filter(ds, patterns): if (not plugins.is_datasource(ds)): raise Exception('Filters are applicable only to datasources.') delegate = dr.get_delegate(ds) if delegate.raw: raise Exception("Filters aren't applicable to raw datasources.") if (not delegate.filterable): raise Ex...
Add a filter or list of filters to a datasource. A filter is a simple string, and it matches if it is contained anywhere within a line. Args: ds (@datasource component): The datasource to filter patterns (str, [str]): A string, list of strings, or set of strings to add to the datasource's filters.
codesearchnet
def put(self, closure, tag=None): closure.tag = tag if tag is not None: with self._queue_lock: self._tagged_queue[tag].put(closure, block=False) self._closures_queued_condition.notify_all() else: with self._put_wait_lock, self._queue_lock: self._queue_free...
Put a closure into the queue for later execution. If `mark_failed` was called before `put`, the error from the first invocation of `mark_failed` will be raised. Args: closure: The `Closure` to put into the queue. tag: if not None, put into a queue with the given tag.
github-repos
def pymmh3_hash128_x86(key: Union[bytes, bytearray], seed: int) -> int: def fmix(h): h ^= h >> 16 h = (h * 0x85ebca6b) & 0xFFFFFFFF h ^= h >> 13 h = (h * 0xc2b2ae35) & 0xFFFFFFFF h ^= h >> 16 return h length = len(key) nblocks = int(length / 16) h1...
Implements 128-bit murmur3 hash for x86, as per ``pymmh3``, with some bugfixes. Args: key: data to hash seed: seed Returns: integer hash
juraj-google-style
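The `fmix` finalizer embedded in the snippet above runs standalone; it avalanches the bits of a 32-bit word:

```python
def fmix(h):
    # murmur3 32-bit finalizer: mix the bits of h so that small
    # input differences spread across the whole word
    h ^= h >> 16
    h = (h * 0x85ebca6b) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xc2b2ae35) & 0xFFFFFFFF
    h ^= h >> 16
    return h
```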
def GetCampaigns(self, client_customer_id): self.client.SetClientCustomerId(client_customer_id) max_tries = 3 today = time.strftime('%Y%m%d', time.localtime()) for i in xrange(1, max_tries + 1): try: selector = { 'fields': ['Id', 'Name', 'Status', 'BudgetId'...
Returns a client account's Campaigns that haven't been removed. Args: client_customer_id: str Client Customer Id used to retrieve Campaigns. Returns: list List of Campaign data objects.
juraj-google-style
def convert_reduce_sum(params, w_name, scope_name, inputs, layers, weights, names): print('Converting reduce_sum ...') keepdims = params['keepdims'] > 0 axis = params['axes'] def target_layer(x, keepdims=keepdims, axis=axis): import keras.backend as K return K.sum(x, keepdims=keep...
Convert reduce_sum layer. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
juraj-google-style
def create_pipeline(gcp_project_id, region, pipeline_name, pipeline_root, csv_file, module_file, beam_runner, metadata_file): example_gen = tfx.components.CsvExampleGen(input_base=csv_file) statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs['examples']) schema_gen = tfx.components.Sc...
Create the TFX pipeline. Args: gcp_project_id (str): ID for the google cloud project to deploy the pipeline to. region (str): Region in which to deploy the pipeline. pipeline_name (str): Name for the Beam pipeline pipeline_root (str): Path to artifact repository where TFX stores a pipeline’s artifacts. csv_file (str):...
github-repos
def bool(name, execute_bool=True, default=None): def wrapped(func): @functools.wraps(func) def _decorator(*args, **kwargs): if core.isset(name) and core.bool(name) == execute_bool: return func(*args, **kwargs) elif default is not None and default == execu...
Only execute the function if the boolean variable is set. Args: name: The name of the environment variable execute_bool: The boolean value to execute the function on default: The default value if the environment variable is not set (respects `execute_bool`) Returns: The function return value or `None` if the function...
juraj-google-style
def get_subscription_from_cli(name=None): home = os.path.expanduser('~') azure_profile_path = home + os.sep + '.azure' + os.sep + 'azureProfile.json' if os.path.isfile(azure_profile_path) is False: print('Error from get_subscription_from_cli(): Cannot find ' + azure_profile_path) ...
Get the default, or named, subscription id from CLI's local cache. Args: name (str): Optional subscription name. If this is set, the subscription id of the named subscription is returned from the CLI cache if present. If not set, the subscription id of the default subscription is returned. Returns: Azure subscription...
juraj-google-style
def intersects(self, rect, edges=False): if (self.bottom > rect.top or \ self.top < rect.bottom or \ self.left > rect.right or \ self.right < rect.left): return False if not edges: if (self.bottom == rect.top o...
Detect intersections between this rectangle and rect. Args: rect (Rectangle): Rectangle to test for intersections. edges (bool): Accept edge touching rectangles as intersects or not Returns: bool: True if the rectangles intersect, False otherwise
juraj-google-style
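The code above is truncated; a tuple-based sketch of the same axis-aligned test, with `(left, bottom, right, top)` as an assumed layout standing in for the Rectangle attributes:

```python
def intersects(a, b, edges=False):
    # a, b: (left, bottom, right, top) tuples -- assumed stand-in
    # for the Rectangle objects in the snippet above
    left, bottom, right, top = range(4)
    if (a[bottom] > b[top] or a[top] < b[bottom] or
            a[left] > b[right] or a[right] < b[left]):
        return False
    if not edges:
        # touching exactly along an edge does not count
        if (a[bottom] == b[top] or a[top] == b[bottom] or
                a[left] == b[right] or a[right] == b[left]):
            return False
    return True
```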
def add_field_with_label(self, key, label_description, field): self.inputs[key] = field label = Label(label_description) label.style['margin'] = '0px 5px' label.style['min-width'] = '30%' container = HBox() container.style.update({'justify-content':'space-between...
Adds a field to the dialog together with a descriptive label and a unique identifier. Note: You can access to the fields content calling the function GenericDialog.get_field(key). Args: key (str): The unique identifier for the field. label_description (str): The string content of the description label. field (Widget)...
juraj-google-style
def _isValidQuery(self, query, mode='phonefy'): try: validator = self.modes[mode].get('query_validator') if validator: try: compiledRegexp = re.compile('^{expr}$'.format(expr=validator)) return compiledRegexp.match(query) except AttributeError ...
Method to verify whether a given query is processable by the platform. The system looks for forbidden characters in the self.Forbidden list. Args: query: The query to be launched. mode: To be chosen amongst mailfy, phonefy, usufy, searchfy. Returns: True | False
codesearchnet
def get_gene_info(ensembl_ids=None, hgnc_symbols=None): uniq_ensembl_ids = set((ensembl_id for ensembl_id in (ensembl_ids or []))) uniq_hgnc_symbols = set((hgnc_symbol for hgnc_symbol in (hgnc_symbols or []))) genes = [] gene_data = [] if uniq_ensembl_ids: for ensembl_id in uniq_ensembl_ids:...
Return the genes info based on the transcripts found Args: ensembl_ids (Optional[list]): list of Ensembl gene ids hgnc_symbols (Optional[list]): list of HGNC gene symbols Returns: iterable: an iterable with `Gene` objects
codesearchnet
def get_min_instability(self, min_voltage=None, max_voltage=None): data = [] for pair in self._select_in_voltage_range(min_voltage, max_voltage): if pair.decomp_e_charge is not None: data.append(pair.decomp_e_charge) if pair.decomp_e_discharge is not None...
The minimum instability along a path for a specific voltage range. Args: min_voltage: The minimum allowable voltage. max_voltage: The maximum allowable voltage. Returns: Minimum decomposition energy of all compounds along the insertion path (a subset of the path can be chosen by the optional arguments)
juraj-google-style
def get_group(self, name, user_name=None):
    return self.service.get_group(
        name, user_name, self.url_prefix, self.auth,
        self.session, self.session_send_opts)
Get owner of group and the resources it's attached to. Args: name (string): Name of group to query. user_name (optional[string]): Supply None if not interested in determining if user is a member of the given group. Returns: (dict): Keys include 'owner', 'name', 'resources'. Raises: requests.HTTPError on failure.
juraj-google-style
def load_pyfile(self, path):
    with open(path) as config_file:
        contents = config_file.read()
        try:
            exec(compile(contents, path, 'exec'), self)
        except Exception as e:
            raise MalformedConfig(path, six.text_type(e))
Load python file as config. Args: path (string): path to the python file
codesearchnet
def tf_step(self, x, iteration, deltas, improvement, last_improvement, estimated_improvement): x, next_iteration, deltas, improvement, last_improvement, estimated_improvement = super(LineSearch, self).tf_step( x, iteration, deltas, improvement, last_improvement, estimated_improvement ...
Iteration loop body of the line search algorithm. Args: x: Current solution estimate $x_t$. iteration: Current iteration counter $t$. deltas: Current difference $x_t - x'$. improvement: Current improvement $(f(x_t) - f(x')) / v'$. last_improvement: Last improvement $(f(x_{t-1}) - f(x')) / v'$. estimated_improvement: C...
juraj-google-style
def decode_list(self, ids):
    decoded_ids = []
    for id_ in ids:
        if 0 <= id_ < self._num_reserved_ids:
            decoded_ids.append(RESERVED_TOKENS[int(id_)])
        else:
            decoded_ids.append(id_ - self._num_reserved_ids)
    return [str(d) for d in decoded_ids]
Transform a sequence of int ids into their string versions. This method supports transforming individual input/output ids to their string versions so that sequence to/from text conversions can be visualized in a human readable format. Args: ids: list of integers to be converted. Returns: strs: list of human-readab...
juraj-google-style
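A standalone sketch of the mapping above, with a hypothetical reserved vocabulary (the real `RESERVED_TOKENS` differs):

```python
RESERVED_TOKENS = ['<pad>', '<EOS>']  # hypothetical reserved vocabulary

def decode_list(ids, num_reserved_ids=len(RESERVED_TOKENS)):
    decoded = []
    for id_ in ids:
        if 0 <= id_ < num_reserved_ids:
            decoded.append(RESERVED_TOKENS[int(id_)])
        else:
            decoded.append(id_ - num_reserved_ids)  # shift past reserved range
    return [str(d) for d in decoded]
```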
def generate_exact(self, model, vcpu_num, host_cpu): nested = {'Intel': 'vmx', 'AMD': 'svm'} cpu = ET.Element('cpu', match='exact') ET.SubElement(cpu, 'model').text = model cpu.append(self.generate_topology(vcpu_num)) vendor = host_cpu.findtext('vendor') if not...
Generate exact CPU model with nested virtualization CPU feature. Args: model(str): libvirt supported CPU model vcpu_num(int): number of virtual cpus host_cpu(lxml.etree.Element): the host CPU model Returns: lxml.etree.Element: CPU XML node
juraj-google-style
def explain(self, entry): d = self.get_explanation_dict(entry) print("The uncorrected value of the energy of %s is %f eV" % (entry.composition, d["uncorrected_energy"])) print("The following corrections / screening are applied for %s:\n" % d["compatibility"])...
Prints an explanation of the corrections that are being applied for a given compatibility scheme. Inspired by the "explain" methods in many database methodologies. Args: entry: A ComputedEntry.
juraj-google-style
def print_fhir_to_json_string_for_analytics(fhir_proto: message.Message) -> str:
    printer = _json_printer.JsonPrinter.compact_printer_for_analytics(_PRIMITIVE_HANDLER)
    return printer.print(fhir_proto)
Returns an Analytic FHIR JSON representation with no spaces or newlines. Args: fhir_proto: The proto to serialize into a JSON string. Returns: An Analytic FHIR JSON representation with no spaces or newlines.
github-repos
def get_stability_criteria(self, s, n):
    n = get_uvec(n)
    stress = s * np.outer(n, n)
    sym_wallace = self.get_symmetric_wallace_tensor(stress)
    return np.linalg.det(sym_wallace.voigt)
Gets the stability criteria from the symmetric Wallace tensor from an input vector and stress value. Args: s (float): Stress value at which to evaluate the stability criteria n (3x1 array-like): direction of the applied stress
juraj-google-style
def get_videos_for_ids(edx_video_ids, sort_field=None, sort_dir=SortDirection.asc):
    videos, __ = _get_videos_for_filter(
        {'edx_video_id__in': edx_video_ids}, sort_field, sort_dir)
    return videos
Returns an iterator of videos that match the given list of ids. Args: edx_video_ids (list) sort_field (VideoSortField) sort_dir (SortDirection) Returns: A generator expression that contains the videos found, sorted by the given field and direction, with ties broken by edx_video_id to ensure a total order
codesearchnet
def Delete(self, request, global_params=None):
    config = self.GetMethodConfig('Delete')
    return self._RunMethod(config, request, global_params=global_params)
Deletes the routine specified by routineId from the dataset. Args: request: (BigqueryRoutinesDeleteRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (BigqueryRoutinesDeleteResponse) The response message.
github-repos
def serialize_keras_object(obj): if obj is None: return obj if isinstance(obj, PLAIN_TYPES): return obj if isinstance(obj, (list, tuple)): config_arr = [serialize_keras_object(x) for x in obj] return tuple(config_arr) if isinstance(obj, tuple) else config_arr if isinstanc...
Retrieve the config dict by serializing the Keras object. `serialize_keras_object()` serializes a Keras object to a python dictionary that represents the object, and is a reciprocal function of `deserialize_keras_object()`. See `deserialize_keras_object()` for more information about the config format. Args: obj: the ...
github-repos
def main(raw_args=None): if (raw_args is None): raw_args = sys.argv[1:] parser = build_parser() args = parser.parse_args(raw_args) if ((args.firmware_image is None) and (args.gdb is None)): print('You must specify either a firmware image or attach a debugger with --gdb <PORT>') r...
Run the iotile-emulate script. Args: raw_args (list): Optional list of commmand line arguments. If not passed these are pulled from sys.argv.
codesearchnet
def _quadratic_sum_cost(self, state: _STATE) -> float:
    cost = 0.0
    total_len = float(len(self._c))
    seqs, _ = state
    for seq in seqs:
        cost += (len(seq) / total_len) ** 2
    return -cost
Cost function that sums squares of lengths of sequences. Args: state: Search state, not mutated. Returns: Cost which is minus the normalized quadratic sum of each linear sequence section in the state. This promotes single, long linear sequence solutions and converges to number -1. The solution with a lowest cost cons...
codesearchnet
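Stripped of the class, the cost function can be checked directly: one sequence covering everything scores the minimum, -1.0, while splitting the same elements scores higher:

```python
def quadratic_sum_cost(seqs, total_len):
    # minus the normalized sum of squared sequence lengths;
    # longer contiguous sequences are rewarded
    cost = 0.0
    for seq in seqs:
        cost += (len(seq) / total_len) ** 2
    return -cost
```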
def get_python_version():
    ver = str(sys.version_info)
    # [\d]+ (not a single [\d]) for major/minor so versions like 3.10 parse
    mmm = re.search('.*major=([\\d]+), minor=([\\d]+), micro=([\\d]+),.*', ver)
    return mmm.group(1) + '.' + mmm.group(2) + '.' + mmm.group(3)
Retrieves default Python version. Returns: String that is the version of default Python. e.g. '2.7.4'
github-repos
def inverse_removing(self, words_to_remove): mask = np.ones(self.as_np.shape[0], dtype='bool') mask[self.__get_idxs(words_to_remove)] = False if (not self.bow): return ''.join([(self.as_list[i] if mask[i] else 'UNKWORDZ') for i in range(mask.shape[0])]) return ''.join([self.as_list[v] for v in m...
Returns a string after removing the appropriate words. If self.bow is false, replaces word with UNKWORDZ instead of removing it. Args: words_to_remove: list of ids (ints) to remove Returns: original raw string with appropriate words removed.
codesearchnet
def top_rated(self, **kwargs):
    path = self._get_path('top_rated')
    response = self._GET(path, kwargs)
    self._set_attrs_to_values(response)
    return response
Get the list of top rated movies. By default, this list will only include movies that have 10 or more votes. This list refreshes every day. Args: page: (optional) Minimum value of 1. Expected value is an integer. language: (optional) ISO 639-1 code. Returns: A dict representation of the JSON returned from the API.
codesearchnet
def add_exac_info(genes, alias_genes, exac_lines): LOG.info("Add exac pli scores") for exac_gene in parse_exac_genes(exac_lines): hgnc_symbol = exac_gene['hgnc_symbol'].upper() pli_score = exac_gene['pli_score'] for hgnc_id in get_correct_ids(hgnc_symbol, alias_genes): ...
Add information from the ExAC genes. Currently we only add the pLI score on the gene level. The ExAC resource only uses HGNC symbols to identify genes, so we need our alias mapping. Args: genes(dict): Dictionary with all genes alias_genes(dict): Genes mapped to all aliases exac_lines(iterable): Iterable with raw ExAC inf...
juraj-google-style
def data(self, index, role=Qt.DisplayRole): if (not index.isValid()): return None col = index.column() columnName = self._dataFrame.columns[index.row()] columnDtype = self._dataFrame[columnName].dtype if ((role == Qt.DisplayRole) or (role == Qt.EditRole)): if (col == 0): ...
Retrieve the data stored in the model at the given `index`. Args: index (QtCore.QModelIndex): The model index, which points at a data object. role (Qt.ItemDataRole, optional): Defaults to `Qt.DisplayRole`. You have to use different roles to retrieve different data for an `index`. Accepted roles are `Qt.DisplayRole`, `...
codesearchnet
def save(hdf5_filename, array):
    hdf5_filename = os.path.expanduser(hdf5_filename)
    try:
        h = h5py.File(hdf5_filename, "w")
        h.create_dataset('CUTOUT', data=array)
        h.close()
    except Exception as e:
        raise ValueError("Could not save HDF5 file {0}.".format(hdf5_filename))
    return hdf5_filename
Export a numpy array to a HDF5 file. Arguments: hdf5_filename (str): A filename to which to save the HDF5 data array (numpy.ndarray): The numpy array to save to HDF5 Returns: String. The expanded filename that now holds the HDF5 data
juraj-google-style
def exp(cls, x: 'TensorFluent') -> 'TensorFluent':
    return cls._unary_op(x, tf.exp, tf.float32)
Returns a TensorFluent for the exp function. Args: x: The input fluent. Returns: A TensorFluent wrapping the exp function.
codesearchnet
def on_run_start(self, request): self._is_run_start = True self._update_run_calls_state(request.run_call_count, request.fetches, request.feed_dict, is_callable_runner=request.is_callable_runner) if self._active_tensor_filter: return self._active_tensor_filter_run_start_response self._exit_if_req...
Overrides on-run-start callback. Args: request: An instance of `OnRunStartRequest`. Returns: An instance of `OnRunStartResponse`.
github-repos
def decrypt_block(self, cipherText): if not self.initialized: raise TypeError("CamCrypt object has not been initialized") if len(cipherText) != BLOCK_SIZE: raise ValueError("cipherText must be %d bytes long (received %d bytes)" % (BLOCK_SIZE, len(cipherText))) plain =...
Decrypt a 16-byte block of data. NOTE: This function was formerly called `decrypt`, but was changed when support for decrypting arbitrary-length strings was added. Args: cipherText (str): 16-byte data. Returns: 16-byte str. Raises: TypeError if CamCrypt object has not been initialized. ValueError if `cipherText` is...
juraj-google-style
def fstat(self, file_des):
    file_object = self.filesystem.get_open_file(file_des).get_object()
    return file_object.stat_result.copy()
Return the os.stat-like tuple for the FakeFile object of file_des. Args: file_des: The file descriptor of filesystem object to retrieve. Returns: The FakeStatResult object corresponding to entry_path. Raises: OSError: if the filesystem object doesn't exist.
juraj-google-style
def __init__(self, xcli, product_name, product_version): self.xcli = xcli self.product_name = product_name self.product_version = product_version self.server_name = getfqdn() self.platform = get_platform_details() if not self.product_name: ra...
Initialize an EventsManager. Args: xcli (XCLIClient): xcli client to send the event product_name (string): the sending product's name product_version (string): the sending product's version Raises: ValueError: if product_name or product_version is missing
juraj-google-style
def _expected_exercise_fn(design, calibration_indices, continuation_value, exercise_value): mask = exercise_value > 0 design_t = tf.transpose(design, [0, 2, 1]) masked = tf.where(tf.expand_dims(tf.transpose(mask), axis=-1), design_t, tf.zeros_like(design_t)) if calibration_indices is None: subma...
Returns the expected continuation value for each path. Args: design: A real `Tensor` of shape `[batch_size, basis_size, num_samples]`. calibration_indices: A rank 1 integer `Tensor` denoting indices of samples used for regression. continuation_value: A `Tensor` of shape `[num_samples, batch_size]` and of the same dtyp...
github-repos
def _ReadOperatingSystemArtifactValues(self, operating_system_values): if not operating_system_values: raise errors.MalformedPresetError('Missing operating system values.') family = operating_system_values.get('family', None) product = operating_system_values.get('product', None) version = o...
Reads an operating system artifact from a dictionary. Args: operating_system_values (dict[str, object]): operating system values. Returns: OperatingSystemArtifact: an operating system artifact attribute container. Raises: MalformedPresetError: if the operating system values are not set or their format is incorrect.
juraj-google-style
def load_parent_implems(self, parent_implems): for trname, attr, implem in parent_implems.get_custom_implementations(): self.implementations[trname] = implem.copy() self.transitions_at[trname] = attr self.custom_implems.add(trname)
Import previously defined implementations. Args: parent_implems (ImplementationList): List of implementations defined in a parent class.
juraj-google-style
def _validate_at_hash(claims, access_token, algorithm): if (('at_hash' not in claims) and (not access_token)): return elif (('at_hash' in claims) and (not access_token)): msg = 'No access_token provided to compare against at_hash claim.' raise JWTClaimsError(msg) elif (access_token a...
Validates that the 'at_hash' parameter included in the claims matches with the access_token returned alongside the id token as part of the authorization_code flow. Args: claims (dict): The claims dictionary to validate. access_token (str): The access token returned by the OpenID Provider. algorithm (str): The algorith...
codesearchnet
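The at_hash comparison in the entry above relies on a value derived from the access token. A rough stdlib sketch of the OpenID Connect derivation (hash the token, keep the left half of the digest, base64url-encode it without padding); the function name and the SHA-256 default are illustrative assumptions, not this library's API:

```python
import base64
import hashlib

def compute_at_hash(access_token, hash_func=hashlib.sha256):
    # OIDC at_hash: hash the access token with the ID token's algorithm,
    # keep the left-most half of the digest, base64url-encode, strip padding.
    digest = hash_func(access_token.encode('ascii')).digest()
    left_half = digest[:len(digest) // 2]
    return base64.urlsafe_b64encode(left_half).rstrip(b'=').decode('ascii')
```

With SHA-256 the encoded half-digest is always 22 characters, which makes a cheap sanity check before comparing against the claim.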
def _load_credentials_file(credentials_file): try: credentials_file.seek(0) data = json.load(credentials_file) except Exception: logger.warning('Credentials file could not be loaded, will ignore and overwrite.') return {} if (data.get('file_version') != 2): logger.war...
Load credentials from the given file handle. The file is expected to be in this format: { "file_version": 2, "credentials": { "key": "base64 encoded json representation of credentials." } } This function will warn and return empty credentials instead of raising exceptions. Args: credentials_file: An open file handl...
codesearchnet
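The defensive pattern in the loader above (warn and fall back to empty credentials rather than raise) can be sketched with stdlib pieces only; names are illustrative and the logging call is dropped for brevity:

```python
import io  # imported for the in-memory file handles used below
import json

def load_credentials_file(fh):
    # Any parse failure or unknown file_version yields empty credentials
    # instead of an exception, so a corrupt file is simply overwritten later.
    try:
        fh.seek(0)
        data = json.load(fh)
    except Exception:
        return {}
    if data.get('file_version') != 2:
        return {}
    return data.get('credentials', {})
```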
def _normalize_array(array, domain=(0, 1)): array = np.array(array) array = np.squeeze(array) assert len(array.shape) <= 3 assert np.issubdtype(array.dtype, np.number) assert not np.isnan(array).any() low, high = np.min(array), np.max(array) if domain is None: message = "No domain specified,...
Given an arbitrary rank-3 NumPy array, produce one representing an image. This ensures the resulting array has a dtype of uint8 and a domain of 0-255. Args: array: NumPy array representing the image domain: expected range of values in array, defaults to (0, 1), if explicitly set to None will use the array's own range...
juraj-google-style
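The core of the normalization above (squeeze, clip to the stated domain, rescale to a uint8 0-255 image) can be isolated as follows; this is a simplified stand-in for illustration, not the full function, and the name is made up:

```python
import numpy as np

def to_uint8_image(array, domain=(0, 1)):
    # Squeeze singleton dims, clip into the expected domain,
    # then rescale linearly onto the 0-255 uint8 range.
    array = np.squeeze(np.asarray(array, dtype=float))
    low, high = domain
    array = np.clip(array, low, high)
    scaled = (array - low) / (high - low)
    return (scaled * 255).astype(np.uint8)
```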
def _sanitize_slices(slices, intended_shape, deficient_shape): sanitized_slices = [] idx = 0 for slc in slices: if slc is Ellipsis: if idx < 0: raise ValueError('Found multiple `...` in slices {}'.format(slices)) num_remaining_non_newaxis_slices = sum((s is no...
Restricts slices to avoid overflowing size-1 (broadcast) dimensions. Args: slices: iterable of slices received by `__getitem__`. intended_shape: int `Tensor` shape for which the slices were intended. deficient_shape: int `Tensor` shape to which the slices will be applied. Must have the same rank as `intended_shape`. R...
github-repos
def enter(self, layer, inputs, build_graph, training, saving=None): state = {'layer': layer, 'inputs': inputs, 'build_graph': build_graph, 'training': training, 'saving': saving} return CallContextManager(self, state)
Push a Layer and its inputs and state onto the current call context. Args: layer: The `Layer` whose `call` is currently active. inputs: The inputs to the currently active `Layer`. build_graph: Whether currently inside a Graph or FuncGraph. training: Whether currently executing in training or inference mode. saving: Wh...
github-repos
def search_groups(self, group): group_url = "%s/%s/%s" % (self.url, "group", group) response = self.jss.get(group_url) return LDAPGroupsResults(self.jss, response)
Search for LDAP groups. Args: group: Group to search for. It is not entirely clear how the JSS determines the results: are regexes allowed, or globbing? Returns: LDAPGroupsResults object. Raises: JSSGetError if no results are found.
juraj-google-style
def convert_padding(padding, expected_length=4): explicit_paddings = [] if padding == 'EXPLICIT': raise ValueError("'EXPLICIT' is not a valid value for `padding`. To use explicit padding, `padding` must be a list.") if isinstance(padding, (list, tuple)): for i, dim_paddings in enumerate(padd...
Converts Python padding to C++ padding for ops which take EXPLICIT padding. Args: padding: the `padding` argument for a Python op which supports EXPLICIT padding. expected_length: Expected number of entries in the padding list when explicit padding is used. Returns: (padding, explicit_paddings) pair, which should be ...
github-repos
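The list-to-EXPLICIT flattening this helper performs can be approximated in a few lines; the return shape and names below are assumptions for illustration, and validation of entry counts is omitted:

```python
def convert_padding(padding):
    # String paddings ('SAME'/'VALID') pass through with no explicit list;
    # a list of per-dimension [before, after] pairs is flattened for the op.
    if isinstance(padding, (list, tuple)):
        explicit_paddings = []
        for dim_paddings in padding:
            explicit_paddings.extend(dim_paddings)
        return 'EXPLICIT', explicit_paddings
    return padding, []
```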
def extract(self, text: str) -> List[Extraction]: doc = self._tokenizer.tokenize_to_spacy_doc(text) self._load_matcher() matches = [x for x in self._matcher(doc) if (x[1] != x[2])] pos_filtered_matches = [] neg_filtered_matches = [] for (idx, start, end) in matches: span_doc = self._toke...
Extract from text Args: text (str): input str to be extracted. Returns: List[Extraction]: the list of extraction or the empty list if there are no matches.
codesearchnet
def bearing(self, format='numeric'): bearings = [] for segment in self: if (len(segment) < 2): bearings.append([]) else: bearings.append(segment.bearing(format)) return bearings
Calculate bearing between locations in segments. Args: format (str): Format of the bearing string to return Returns: list of list of float: Groups of bearings between points in segments
codesearchnet
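Per-pair bearings like those in the segment method above typically come from the initial great-circle bearing formula. A self-contained sketch, assuming degrees in, degrees out, normalized to 0-360; not this library's actual implementation:

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    # Initial great-circle bearing from point 1 to point 2, in degrees.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360
```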
def GetEventTagByIdentifier(self, storage_file, event_identifier): if (not self._index): self._Build(storage_file) lookup_key = event_identifier.CopyToString() event_tag_identifier = self._index.get(lookup_key, None) if (not event_tag_identifier): return None return storage_file.GetE...
Retrieves the most recently updated event tag for an event. Args: storage_file (BaseStorageFile): storage file. event_identifier (AttributeContainerIdentifier): event attribute container identifier. Returns: EventTag: event tag or None if the event has no event tag.
codesearchnet
def dialog_open(self, *, dialog: dict, trigger_id: str, **kwargs) -> SlackResponse: kwargs.update({'dialog': dialog, 'trigger_id': trigger_id}) return self.api_call('dialog.open', json=kwargs)
Open a dialog with a user. Args: dialog (dict): A dictionary of dialog arguments. { "callback_id": "46eh782b0", "title": "Request something", "submit_label": "Request", "state": "Max", "elements": [ { "type": "text", "label": "Origin", "name": "loc_origin" }, { "type": "text", "label": "Destination", "name": "loc_dest...
codesearchnet
def _get_executor_init(self, workers): raise NotImplementedError
Gets the Pool initializer for multiprocessing. Args: workers: Number of workers. Returns: Function, a function to initialize the pool.
github-repos
def get_type_name_in_language(cls, type_name, sub_type, language): if language in cls.type_methods_cache: m = cls.type_methods_cache[language] if not m: return type_name return m(type_name) found, method = load_language_plugins(language, 'get...
Get the type for the given language Args: type_name (str): the type to convert language (str): the language to use Returns: a type name in the given language Example: get_type_name_in_language("Varchar", "python") >>> str
juraj-google-style
def sort_edge(edges): return sorted(edges, key=(lambda x: (x.L, x.R)))
Sort iterable of edges first by left node indices then right. Args: edges(list[Edge]): List of edges to be sorted. Returns: list[Edge]: Sorted list by left and right node indices.
codesearchnet
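With a minimal Edge stand-in (the real Edge type presumably exposes L and R attributes), the two-level sort above is just a tuple key:

```python
from collections import namedtuple

# Hypothetical Edge type for the demo; only L and R are assumed.
Edge = namedtuple('Edge', ['L', 'R'])

def sort_edge(edges):
    # Primary key: left node index; tie-break: right node index.
    return sorted(edges, key=lambda e: (e.L, e.R))
```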
def laid_out_pcoord(self, mesh_axis): divisor = list_product(self.shape.to_integer_list[mesh_axis + 1:]) modulus = self.shape[mesh_axis].size def my_fn(pnum): return ((pnum // divisor) % modulus) return self.slicewise(my_fn, self.laid_out_pnum())
Returns a LaidOutTensor containing the processor coordinate. Args: mesh_axis: int. Returns: LaidOutTensor where each slice is an integer scalar.
juraj-google-style
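The divisor/modulus arithmetic above maps a flat processor number to its coordinate along one mesh axis under a row-major layout. A plain-Python sketch with an explicit mesh shape list; the names are assumed:

```python
def pcoord(pnum, mesh_shape, mesh_axis):
    # divisor = product of mesh sizes after the chosen axis (row-major),
    # so (pnum // divisor) % size recovers the coordinate on that axis.
    divisor = 1
    for size in mesh_shape[mesh_axis + 1:]:
        divisor *= size
    return (pnum // divisor) % mesh_shape[mesh_axis]
```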
def fetch(self, invoice_id, data={}, **kwargs): return super(Invoice, self).fetch(invoice_id, data, **kwargs)
Fetch Invoice for given Id Args: invoice_id : Id for which invoice object has to be retrieved Returns: Invoice dict for given invoice Id
juraj-google-style
def autocorrelation(ts, normalized=False, unbiased=False): ts = np.squeeze(ts) if (ts.ndim <= 1): if normalized: ts = ((ts - ts.mean()) / ts.std()) N = ts.shape[0] ar = np.asarray(ts) acf = np.correlate(ar, ar, mode='full') outlen = ((acf.shape[0] + 1) / 2) ...
Returns the discrete, linear convolution of a time series with itself, optionally using unbiased normalization. N.B. Autocorrelation estimates are necessarily inaccurate for longer lags, as there are fewer pairs of points to convolve separated by that lag. Therefore it is best to throw out the results except for shorter lags...
codesearchnet
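The np.correlate core of the function above, trimmed to zero and positive lags, can be sketched for the 1-D case; the normalization here follows the common z-score convention and may differ in detail from the original:

```python
import numpy as np

def autocorrelation_1d(ts, normalized=False):
    ts = np.asarray(ts, dtype=float)
    if normalized:
        ts = (ts - ts.mean()) / ts.std()
    acf = np.correlate(ts, ts, mode='full')
    acf = acf[acf.size // 2:]  # keep lag >= 0 only
    if normalized:
        acf = acf / len(ts)    # lag-0 value becomes 1.0
    return acf
```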
def switch_to_line_in(self, source=None): if source: uid = source.uid else: uid = self.uid self.avTransport.SetAVTransportURI([('InstanceID', 0), ('CurrentURI', 'x-rincon-stream:{0}'.format(uid)), ('CurrentURIMetaData', '')])
Switch the speaker's input to line-in. Args: source (SoCo): The speaker whose line-in should be played. Default is line-in from the speaker itself.
codesearchnet
def topics(self): cluster = self._client.cluster if (self._client._metadata_refresh_in_progress and self._client._topics): future = cluster.request_update() self._client.poll(future=future) stash = cluster.need_all_topic_metadata cluster.need_all_topic_metadata = True future = cluste...
Get all topics the user is authorized to view. Returns: set: topics
codesearchnet
def deepnn(x): with tf.name_scope('reshape'): x_image = tf.reshape(x, [(- 1), 28, 28, 1]) with tf.name_scope('conv1'): W_conv1 = weight_variable([5, 5, 1, 32]) b_conv1 = bias_variable([32]) h_conv1 = tf.nn.relu((conv2d(x_image, W_conv1) + b_conv1)) with tf.name_scope('pool1')...
deepnn builds the graph for a deep net for classifying digits. Args: x: an input tensor with the dimensions (N_examples, 784), where 784 is the number of pixels in a standard MNIST image. Returns: A tuple (y, keep_prob). y is a tensor of shape (N_examples, 10), with values equal to the logits of classifying the digit...
codesearchnet
def _get_countdown_for_next_slice(self, spec): countdown = 0 if (self._processing_limit(spec) != (- 1)): countdown = max(int((parameters.config._SLICE_DURATION_SEC - (self._time() - self._start_time))), 0) return countdown
Get countdown for next slice's task. When user sets processing rate, we set countdown to delay task execution. Args: spec: model.MapreduceSpec Returns: The countdown in seconds, as an int.
codesearchnet
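The countdown rule above (pad each slice out to a fixed duration, floored at zero) is easy to isolate; the signature below is invented for the sketch:

```python
def countdown_for_next_slice(slice_duration_sec, start_time, now, limited=True):
    # Without a processing limit there is no throttling; otherwise delay the
    # next task by whatever is left of the slice's duration window.
    if not limited:
        return 0
    return max(int(slice_duration_sec - (now - start_time)), 0)
```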
def find_many(self, url, type, resource): return [type(item) for item in RestClient.get(url)[resource]]
Get a list of resources Args: url (string): URL to invoke type (class): Class type resource (string): The REST Resource Returns: list of object: List of resource instances
juraj-google-style
def job_history(backend): year = widgets.Output(layout=widgets.Layout(display='flex-inline', align_items='center', min_height='400px')) month = widgets.Output(layout=widgets.Layout(display='flex-inline', align_items='center', min_height='400px')) week = widgets.Output(layout=widgets.Layout(display='flex-inl...
Widget for displaying job history Args: backend (IBMQbackend): The backend. Returns: Tab: A tab widget for history images.
codesearchnet
def get_msms_df(model, pdb_id, outfile=None, outdir=None, outext='_msms.df', force_rerun=False): outfile = ssbio.utils.outfile_maker(inname=pdb_id, outname=outfile, outdir=outdir, outext=outext) if ssbio.utils.force_rerun(flag=force_rerun, outfile=outfile): try: ...
Run MSMS (using Biopython) on a Biopython Structure Model. Depths are in Angstroms. 1 A = 10^-10 m = 0.1 nm. Returns a dictionary of:: { chain_id:{ resnum1_id: (res_depth, ca_depth), resnum2_id: (res_depth, ca_depth) } } Args: model: Biopython Structure Model Returns: Pandas DataFrame: ResidueDepth property_dict,...
juraj-google-style
def run(self, module, post_check): try: _cwd = os.getcwd() _sys_path = list(sys.path) _sys_argv = list(sys.argv) sys.path.insert(0, os.path.dirname(self._path)) sys.argv = [os.path.basename(self._path)] +...
Execute the configured source code in a module and run any post checks. Args: module (Module) : a module to execute the configured code in. post_check(callable) : a function that can raise an exception if expected post-conditions are not met after code execution.
juraj-google-style
def _ScanVolume(self, scan_context, scan_node, base_path_specs): if ((not scan_node) or (not scan_node.path_spec)): raise errors.ScannerError('Invalid or missing scan node.') if scan_context.IsLockedScanNode(scan_node.path_spec): self._ScanEncryptedVolume(scan_context, scan_node) if scan...
Scans a volume scan node for volume and file systems. Args: scan_context (SourceScannerContext): source scanner context. scan_node (SourceScanNode): volume scan node. base_path_specs (list[PathSpec]): file system base path specifications. Raises: ScannerError: if the format of or within the source is not supported or...
codesearchnet
def insert(self, **fields): if (self.conflict_target or self.conflict_action): compiler = self._build_insert_compiler([fields]) rows = compiler.execute_sql(return_id=True) pk_field_name = self.model._meta.pk.name return rows[0][pk_field_name] return super().create(**fields).pk
Creates a new record in the database. This allows specifying custom conflict behavior using .on_conflict(). If no special behavior was specified, this uses the normal Django create(..) Arguments: fields: The fields of the row to create. Returns: The primary key of the record that was created.
codesearchnet
def _request(self, method, url, body): if ((method != 'POST') and (method != 'PUT')): body = None s = Session() LOGGER.debug('Method: {0}, Url: {1}, Body: {2}.'.format(method, url, body)) req = Request(method, url, json=body) prepped = s.prepare_request(req) res = s.send(prepped, timeout...
Internal method to send request to the remote server. Args: method(str): HTTP method (GET/POST/PUT/DELETE/HEAD). url(str): The request url. body(dict): The JSON object to be sent. Returns: A dict representing the JSON body from the server response. Raises: ConnectionError: A network problem occurred (e.g. DNS failure, refused conn...
codesearchnet
def normalize_attr_values(a: Any) -> np.ndarray: scalar = False if np.isscalar(a): a = np.array([a]) scalar = True arr = normalize_attr_array(a) if np.issubdtype(arr.dtype, np.integer) or np.issubdtype(arr.dtype, np.floating): pass elif np.issubdtype(arr.dtype, np.character) or np.issubdtype(arr.dtype, n...
Take all kinds of input values and validate/normalize them. Args: a List, tuple, np.matrix, np.ndarray or sparse matrix Elements can be strings, numbers or bools Returns a_normalized An np.ndarray with elements conforming to one of the valid Loom attribute types Remarks: This method should be used to prepare the ...
juraj-google-style
def bartlett(x): if any_symbolic_tensors((x,)): return Bartlett().symbolic_call(x) return backend.numpy.bartlett(x)
Bartlett window function. The Bartlett window is a triangular window that rises then falls linearly. Args: x: Scalar or 1D Tensor. Window length. Returns: A 1D tensor containing the Bartlett window values. Example: >>> x = keras.ops.convert_to_tensor(5) >>> keras.ops.bartlett(x) array([0. , 0.5, 1. , 0.5, 0. ], dtyp...
github-repos
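The triangular shape in the docstring's example is easy to verify by constructing the window directly; this sketch mirrors np.bartlett rather than the Keras backend call:

```python
import numpy as np

def bartlett_window(m):
    # Rise linearly to 1.0 at the midpoint, then fall symmetrically.
    # Assumes m >= 2 (the midpoint divisor is zero for m == 1).
    n = np.arange(m)
    half = (m - 1) / 2.0
    return 1.0 - np.abs(n - half) / half
```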
def _make_projcet_list(path): from collections import OrderedDict from matplotlib.colors import LinearSegmentedColormap from matplotlib.colors import rgb2hex as r2h from numpy import linspace proj = [] projects = OrderedDict() file_list = os.listdir(path) for files in file_list...
Returns a dictionary in which each project is a key and the tasks are stored as a list within that dictionary element. Args: path (str): The path to the folder containing the *.json files. Returns: projects (list of dict): A dictionary in which each project is a key containing a list of its tasks.
juraj-google-style