code (string, lengths 20 to 4.93k)
docstring (string, lengths 33 to 1.27k)
source (string, 3 classes)
def _encode_value(self, value): if isinstance(value, (int, float, str, bool, datetime)): return value elif isinstance(value, list): return [self._encode_value(item) for item in value] elif isinstance(value, dict): result = {} for (key, item) in value.items(): resu...
Encodes the value such that it can be stored into MongoDB. Any primitive types are stored directly into MongoDB, while non-primitive types are pickled and stored as GridFS objects. The id pointing to a GridFS object replaces the original value. Args: value (object): The object that should be encoded for storing in Mo...
codesearchnet
def NewFromJSON(data): s = Shake( id=data.get('id', None), name=data.get('name', None), url=data.get('url', None), thumbnail_url=data.get('thumbnail_url', None), description=data.get('description', None), type=data.get('type', None...
Create a new Shake instance from a JSON dict. Args: data (dict): JSON dictionary representing a Shake. Returns: A Shake instance.
juraj-google-style
def reopen_encoded(fileobj, mode='r', fallback_encoding=None): encoding = determine_encoding(fileobj.name, fallback_encoding) fileobj.close() return open(fileobj.name, mode, encoding=encoding)
Makes sure that a file was opened with some valid encoding. Arguments: fileobj (file): The file-object. mode (str, optional): The mode in which to re-open the file. fallback_encoding (str, optional): The encoding in which to re-open the file if it does not specify an encoding itself. Returns: file: The re-opened file...
juraj-google-style
def cifar_generator(cifar_version, tmp_dir, training, how_many, start_from=0): if cifar_version == "cifar10": url = _CIFAR10_URL train_files = _CIFAR10_TRAIN_FILES test_files = _CIFAR10_TEST_FILES prefix = _CIFAR10_PREFIX image_size = _CIFAR10_IMAGE_SIZE label_key = "labels" elif cifar_ve...
Image generator for CIFAR-10 and 100. Args: cifar_version: string; one of "cifar10" or "cifar100" tmp_dir: path to temporary storage directory. training: a Boolean; if true, we use the train set, otherwise the test set. how_many: how many images and labels to generate. start_from: from which image to start. Returns: ...
juraj-google-style
def _create_RSA_private_key(self, bytes): try: private_key = serialization.load_pem_private_key( bytes, password=None, backend=default_backend() ) return private_key except Excep...
Instantiates an RSA key from bytes. Args: bytes (byte string): Bytes of RSA private key. Returns: private_key (cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey): RSA private key created from key bytes.
juraj-google-style
def diff_charsToLines(self, diffs, lineArray): for i in range(len(diffs)): text = [] for char in diffs[i][1]: text.append(lineArray[ord(char)]) diffs[i] = (diffs[i][0], "".join(text))
Rehydrate the text in a diff from a string of line hashes to real lines of text. Args: diffs: Array of diff tuples. lineArray: Array of unique strings.
juraj-google-style
def get_gutter_client(alias='default', cache=CLIENT_CACHE, **kwargs): from gutter.client.models import Manager if (not alias): return Manager(**kwargs) elif (alias not in cache): cache[alias] = Manager(**kwargs) return cache[alias]
Creates gutter clients and memoizes them in a registry for future quick access. Args: alias (str or None): Name of the client. Used for caching. If name is falsy then do not use the cache. cache (dict): cache to store gutter managers in. **kwargs: kwargs to be passed to the Manager class. Returns (Manager): A gutter clie...
codesearchnet
def __instancecheck__(cls, other): try: return bool( isinstance(other, cls.__type__) and cls(other) ) except ValueError: return False
Determine if an instance is of the sliced type and within bounds. Args: other: The instance to test. Returns: True if the object is both of the same type as sliced by the created class as well as within the bounds defined by the class.
juraj-google-style
async def verify_parent_task(chain, link): worker_type = get_worker_type(link.task) if (worker_type not in chain.context.config['valid_decision_worker_types']): raise CoTError('{} is not a valid decision workerType!'.format(worker_type)) if (chain is not link): path = link.get_artifact_full_...
Verify the parent task Link. Action task verification is currently in the same verification function as decision tasks, because sometimes we'll have an action task masquerading as a decision task, e.g. in templatized actions for release graphs. To make sure our guess of decision or action task isn't fatal, we call thi...
codesearchnet
def apply_strain(self, strain): s = (1 + np.array(strain)) * np.eye(3) self.lattice = Lattice(np.dot(self._lattice.matrix.T, s).T)
Apply a strain to the lattice. Args: strain (float or list): Amount of strain to apply. Can be a float, or a sequence of 3 numbers. E.g., 0.01 means all lattice vectors are increased by 1%. This is equivalent to calling modify_lattice with a lattice with lattice parameters that are 1% larger.
juraj-google-style
def register_scenario(self, scenario_name, handler): if (scenario_name in self._known_scenarios): raise ArgumentError('Attempted to add the same scenario name twice', scenario_name=scenario_name, previous_handler=self._known_scenarios[scenario_name]) self._known_scenarios[scenario_name] = handler
Register a scenario handler for this object. Scenario handlers are callable functions with no positional arguments that can be called by name with the load_scenario function and should prepare the emulated object into a known state. The purpose of a scenario is to make it easy to get a device into a specific state fo...
codesearchnet
def GetMessages(self, formatter_mediator, event): if self.DATA_TYPE != event.data_type: raise errors.WrongFormatter('Unsupported data type: {0:s}.'.format( event.data_type)) event_values = event.CopyToDict() return self._FormatMessages( self.FORMAT_STRING, self.FORMAT_STRING_SH...
Determines the formatted message strings for an event object. Args: formatter_mediator (FormatterMediator): mediates the interactions between formatters and other components, such as storage and Windows EventLog resources. event (EventObject): event. Returns: tuple(str, str): formatted message string and short messag...
juraj-google-style
def confirm_iam_role(self, account): try: iam = self.session.client('iam') rolearn = iam.get_role(RoleName=self.role_name)['Role']['Arn'] return rolearn except ClientError as e: if (e.response['Error']['Code'] == 'NoSuchEntity'): self.create_iam_role(account) ...
Return the ARN of the IAM Role on the provided account as a string. Returns an `IAMRole` object from boto3 Args: account (:obj:`Account`): Account where to locate the role Returns: :obj:`IAMRole`
codesearchnet
def _update_object(object_key: str, event: Event): events_list_key = _keys.events_list(object_key) events_data_key = _keys.events_data(object_key) event_dict = deepcopy(event.config) event_dict.pop('id') DB.append_to_list(events_list_key, event.id, pipeline=True) DB.set_hash_value(events_da...
Update the events list and events data for the object. - Adds the event Id to the list of events for the object. - Adds the event data to the hash of object event data keyed by event id. Args: object_key (str): Key of the object being updated. event (Event): Event object
juraj-google-style
def region(self, start=0, end=None): if end is None: end = len(self.sequence) return '>{}\n{}'.format(self.id, self.sequence[start:end])
Returns a region of ``Sequence.sequence``, in FASTA format. If called without kwargs, the entire sequence will be returned. Args: start (int): Start position of the region to be returned. Default is 0. end (int): End position of the region to be returned. Negative values will function as they do when slicing string...
juraj-google-style
def cast_type(self, var, cast_type=None): if cast_type is None: cast_type = self.valid_values try: if cast_type == int: return int(var) elif cast_type == float: return float(var) elif type == str: ...
Cast the value into the given type. If the type is not provided it is set to self.valid_values. Args: var: variable to be cast cast_type: target type Returns: the variable var cast into type cast_type
juraj-google-style
def start_automated_run(path, automated_run_id): with functions.DBContextManager(path) as session: automated_run = session.query(models.AutomatedRun).filter_by(id=automated_run_id).first() if (not automated_run): raise exceptions.UserError('Automated run {} does not exist'.format(automat...
Starts automated run. This will automatically create base learners until the run finishes or errors out. Args: path (str): Path to Xcessiv notebook automated_run_id (str): Automated Run ID
codesearchnet
def ias53(msg): d = hex2bin(data(msg)) if d[12] == '0': return None ias = bin2int(d[13:23]) return ias
Indicated airspeed, BDS 5,3 message Args: msg (String): 28 bytes hexadecimal message Returns: int: indicated airspeed in knots
juraj-google-style
def _non_batched_matmul(lhs, rhs, lhs_contraction, rhs_contraction): return math_ops.tensordot(lhs, rhs, axes=(list(lhs_contraction), list(rhs_contraction)))
Compute the non-batched matrix multiplication. If it is the general non-batched/single-batched matrix multiplication, use the highly optimized kernel `tf.tensordot` to handle it. Args: lhs: an array (the left-hand side matrix/vector to be multiplied) rhs: an array (the right-hand side matrix/vector to be multiplied) ...
github-repos
def update(self, data): updated = self.set_property('state', data['state']) updated |= self.set_property('notes', sorted(data['notes'] or [])) updated |= self.set_property('last_notice', data['last_notice']) if updated: self.set_property('last_change', date...
Updates the object information based on live data, if there were any changes made. Any changes will be automatically applied to the object, but will not be automatically persisted. You must manually call `db.session.add(instance)` on the object. Args: data (:obj:): AWS API Resource object fetched from AWS API Returns...
juraj-google-style
def NewOutputModule(cls, name, output_mediator): output_class = cls.GetOutputClass(name) return output_class(output_mediator)
Creates a new output module object for the specified output format. Args: name (str): name of the output module. output_mediator (OutputMediator): output mediator. Returns: OutputModule: output module. Raises: KeyError: if there is no output class found with the supplied name. ValueError: if name is not a string.
juraj-google-style
def createSimpleResourceMap(ore_pid, scimeta_pid, sciobj_pid_list): ore = ResourceMap() ore.initialize(ore_pid) ore.addMetadataDocument(scimeta_pid) ore.addDataDocuments(sciobj_pid_list, scimeta_pid) return ore
Create a simple OAI-ORE Resource Map with one Science Metadata document and any number of Science Data objects. This creates a document that establishes an association between a Science Metadata object and any number of Science Data objects. The Science Metadata object contains information that is indexed by DataONE, ...
codesearchnet
def get_path_attribute(obj, path): path = path.replace('original.', '').replace('current_user.', '') attr_parts = path.split('.') res = obj try: for part in attr_parts: try: res = getattr(res, part) except AttributeError: res = getattr(res....
Given a path like `related_record.related_record2.id`, this method will be able to pull the value of ID from that object, returning None if it doesn't exist. Args: obj (fleaker.db.Model): The object to attempt to pull the value from path (str): The path to follow to pull the value from Returns: (int|str|None): The va...
codesearchnet
def _ParseDistributedTrackingIdentifier( self, parser_mediator, uuid_object, origin): if uuid_object.version == 1: event_data = windows_events.WindowsDistributedLinkTrackingEventData( uuid_object, origin) date_time = dfdatetime_uuid_time.UUIDTime(timestamp=uuid_object.time) ev...
Extracts data from a Distributed Tracking identifier. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. uuid_object (uuid.UUID): UUID of the Distributed Tracking identifier. origin (str): origin of the event (event source). Returns: str: UUI...
juraj-google-style
def set_peer_link(self, value=None, default=False, disable=False): return self._configure_mlag('peer-link', value, default, disable)
Configures the mlag peer-link value Args: value (str): The value to configure the peer-link default (bool): Configures the peer-link using the default keyword disable (bool): Negates the peer-link using the no keyword Returns: bool: Returns True if the commands complete successfully
codesearchnet
def prepare_data_index(self, silence_percentage, unknown_percentage, wanted_words, validation_percentage, testing_percentage): random.seed(RANDOM_SEED) wanted_words_index = {} for index, wanted_word in enumerate(wanted_words): wanted_words_index[wanted_word] = index + 2 self.data_index = {'valid...
Prepares a list of the samples organized by set and label. The training loop needs a list of all the available data, organized by which partition it should belong to, and with ground truth labels attached. This function analyzes the folders below the `data_dir`, figures out the right labels for each file based on the ...
github-repos
def create_image_streamer_client(self): image_streamer = ImageStreamerClient(self.__image_streamer_ip, self.__connection.get_session_id(), self.__connection._apiVersion, self.__connection._sslBundle) return image_streamer
Create the Image Streamer API Client. Returns: ImageStreamerClient:
codesearchnet
def zip_columns(columns): weld_obj = WeldObject(encoder_, decoder_) column_vars = [] for column in columns: col_var = weld_obj.update(column) if isinstance(column, WeldObject): col_var = column.obj_id weld_obj.dependencies[col_var] = column column_vars.ap...
Zip together multiple columns. Args: columns (WeldObject / Numpy.ndarray): list of columns Returns: A WeldObject representing this computation
juraj-google-style
def CalculateHashes(self, base_path_specs, output_writer): for base_path_spec in base_path_specs: file_system = resolver.Resolver.OpenFileSystem(base_path_spec) file_entry = resolver.Resolver.OpenFileEntry(base_path_spec) if (file_entry is None): logging.warning('Unable to open b...
Recursively calculates hashes starting with the base path specifications. Args: base_path_specs (list[dfvfs.PathSpec]): source path specifications. output_writer (StdoutWriter): output writer.
codesearchnet
def recipe_dcm_to_bigquery(config, auth_read, auth_write, account, report_id, report_name, dataset, table, is_incremental_load): dcm(config, {'auth': auth_read, 'report': {'account': account, 'report_id': report_id, 'name': report_name}, 'out': {'bigquery': {'auth': auth_write, 'dataset': dataset, 'table': table, '...
Move existing CM report into a BigQuery table. Args: auth_read (authentication) - Credentials used for reading data. auth_write (authentication) - Credentials used for writing data. account (integer) - CM network id. report_id (integer) - CM report id, empty if using name . report_name (string) - CM report name, empty...
github-repos
def to_numpy(self, dtype=None, copy=False): return self._default_to_pandas("to_numpy", dtype=dtype, copy=copy)
Convert the DataFrame to a NumPy array. Args: dtype: The dtype to pass to numpy.asarray() copy: Whether to ensure that the returned value is a not a view on another array. Returns: A numpy array.
juraj-google-style
def create_default_views(self, create_datastore_views=False): package = deepcopy(self.data) if self.resources: package['resources'] = self._convert_hdxobjects(self.resources) data = {'package': package, 'create_datastore_views': create_datastore_views} self...
Create default resource views for all resources in dataset Args: create_datastore_views (bool): Whether to try to create resource views that point to the datastore Returns: None
juraj-google-style
def reorder_resource_views(self, resource_views): if not isinstance(resource_views, list): raise HDXError('ResourceViews should be a list!') ids = list() for resource_view in resource_views: if isinstance(resource_view, str): resource_vie...
Order resource views in resource. Args: resource_views (List[Union[ResourceView,Dict,str]]): A list of either resource view ids or resource views metadata from ResourceView objects or dictionaries Returns: None
juraj-google-style
def incr(self, counter_name, delta=1): self._state.counters_map.increment(counter_name, delta)
Changes counter by delta. Args: counter_name: the name of the counter to change. str. delta: int.
codesearchnet
def in_sorted(values, value): index = bisect.bisect_left(values, value) if index >= len(values): return False return values[index] == value
Checks if a value is in a sorted list. Uses the :mod:`bisect` builtin to find the insertion point for ``value``. Args: values (List[int]): Integers sorted in ascending order. value (int): Value to check if contained in ``values``. Returns: bool: Indicating if the value is contained.
juraj-google-style
def configure_bigchaindb(command): @functools.wraps(command) def configure(args): config_from_cmdline = None try: if (args.log_level is not None): config_from_cmdline = {'log': {'level_console': args.log_level, 'level_logfile': args.log_level}, 'server': {'loglevel':...
Decorator to be used by command line functions, such that the configuration of bigchaindb is performed before the execution of the command. Args: command: The command to decorate. Returns: The command wrapper function.
codesearchnet
def _contains_op_with_name_and_attribute(self, nodes: Iterable[node_def_pb2.NodeDef], op_name: str, attr_name: str, attr_val: _AttrValType, node_name: str='') -> bool: def match_node_name(name): if not node_name: return True compiled_regex = re.compile(node_name) match = re.full...
Determine whether there is a node whose operation name matches `op_name`. If `attr_name` is given, additionally check if the `attr_val` matches with the attribute value of the op. Args: nodes: Iterable of NodeDefs. op_name: Name of the op to match. attr_name: Name of the attribute of the op to match. attr_val: Value ...
github-repos
def unparse(node, indentation=None, include_encoding_marker=True): del indentation if not isinstance(node, (list, tuple)): node = (node,) codes = [] if include_encoding_marker: codes.append(' for n in node: if isinstance(n, gast.AST): ast_n = gast.gast_to_ast(n) ...
Returns the source code of the given AST. Args: node: The code to compile, as an AST object. indentation: Unused, deprecated. The returned code will always be indented at 4 spaces. include_encoding_marker: Bool, whether to include a comment on the first line to explicitly specify UTF-8 encoding. Returns: code: The sourc...
github-repos
def has_implicit_access_to_dashboard(user, obj): request = get_request_or_stub() decoded_jwt = get_decoded_jwt_from_request(request) return request_user_has_implicit_access_via_jwt(decoded_jwt, ENTERPRISE_DASHBOARD_ADMIN_ROLE)
Check whether the request user has implicit access to the `ENTERPRISE_DASHBOARD_ADMIN_ROLE` feature role. Returns: boolean: whether the request user has access or not
codesearchnet
def max_steps_per_epoch(): return _MAX_STEPS_PER_EPOCH
Get the maximum number of steps for any call to fit/evaluate/predict. Retrieves the limit on the number of steps set by `keras.config.set_max_steps_per_epoch` or the `KERAS_MAX_STEPS_PER_EPOCH` environment variable. Returns: The integer limit on the number of steps or `None`. If `None`, no limit is applied...
github-repos
def to_qasm(self, header: Optional[str] = None, precision: int = 10, qubit_order: ops.QubitOrderOrList = ops.QubitOrder.DEFAULT, ) -> str: return str(self._to_qasm_output(header, precision, qubit_order))
Returns QASM equivalent to the circuit. Args: header: A multi-line string that is placed in a comment at the top of the QASM. Defaults to a cirq version specifier. precision: Number of digits to use when representing numbers. qubit_order: Determines how qubits are ordered in the QASM register.
juraj-google-style
def _decode_filename(base_filename, problem_name, decode_hp): if (decode_hp.shards > 1): base_filename = _add_shard_to_filename(base_filename, decode_hp) if ('beam{beam}.alpha{alpha}.decodes'.format(beam=str(decode_hp.beam_size), alpha=str(decode_hp.alpha)) in base_filename): return base_filenam...
Generates decode filename. Args: base_filename: A string, base of the decode filename. problem_name: A string, name of the problem. decode_hp: HParams for decoding. Returns: A string, produced decode filename.
codesearchnet
def cipher(self): if (self.offset is False): self.offset = randrange(5, 25) logging.info('Random offset selected: {0}'.format(self.offset)) logging.debug('Offset set: {0}'.format(self.offset)) ciphered_message_list = list(self.message) for (i, letter) in enumerate(ciphered_message_list):...
Applies the Caesar shift cipher. Based on the attributes of the object, applies the Caesar shift cipher to the message attribute. Accepts positive and negative integers as offsets. Required attributes: message offset Returns: String with cipher applied.
codesearchnet
def solve(A, b): A = asarray(A, float) b = asarray(b, float) if A.shape[0] == 1: with errstate(divide="ignore"): A_ = array([[1.0 / A[0, 0]]]) if not isfinite(A_[0, 0]): raise LinAlgError("Division error.") return dot(A_, b) elif A.shape[0] == 2: ...
r"""Solve for the linear equations :math:`\mathrm A \mathbf x = \mathbf b`. Args: A (array_like): Coefficient matrix. b (array_like): Ordinate values. Returns: :class:`numpy.ndarray`: Solution ``x``.
juraj-google-style
class TFTopPLogitsWarper(TFLogitsWarper): def __init__(self, top_p: float, filter_value: float=-float('Inf'), min_tokens_to_keep: int=1): if not isinstance(top_p, float) or (top_p < 0 or top_p > 1.0): raise ValueError(f'`top_p` has to be a float > 0 and < 1, but is {top_p}') if not isin...
[`TFLogitsWarper`] that performs top-p, i.e. restricting to top tokens summing to <= prob_cut_off. Args: top_p (`float`): If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or higher are kept for generation. filter_value (`float`, *optional*, defaults to -inf): All f...
github-repos
def get_function_from_signature(self, function_signature): return next((f for f in self.functions if f.full_name == function_signature), None)
Return a function from a signature Args: function_signature (str): signature of the function (without return statement) Returns: Function
juraj-google-style
def has_values(o): try: next(o) return True except StopIteration: return False
Converts an iterator to a boolean. Destroys the iterator, but returns True if at least one value is present. Args: * o: An iterator instance. Returns: * True if at least one value is present, False if none.
github-repos
def _to_components(self, value): raise NotImplementedError('%s._to_components()' % type(self).__name__)
Encodes `value` as a nested structure of `Tensor` or `CompositeTensor`. Args: value: A value compatible with this `TypeSpec`. (Caller is responsible for ensuring compatibility.) Returns: A nested structure of `tf.Tensor` or `tf.CompositeTensor` compatible with `self._component_specs`, which can be used to reconstruc...
github-repos
def store(self, store=None, usage='both', mech=None, overwrite=False, set_default=False): if (store is None): if (rcred_rfc5588 is None): raise NotImplementedError('Your GSSAPI implementation does not have support for RFC 5588') return rcred_rfc5588.store_cred(self, usage, mech, overwrit...
Store these credentials into the given store This method stores the current credentials into the specified credentials store. If the default store is used, support for :rfc:`5588` is required. Otherwise, support for the credentials store extension is required. :requires-ext:`rfc5588` or :requires-ext:`cred_store` ...
codesearchnet
def get_name_servers(self, id_or_uri): uri = (self._client.build_uri(id_or_uri) + '/nameServers') return self._client.get(uri)
Gets the name servers for an interconnect. Args: id_or_uri: Can be either the interconnect id or the interconnect uri. Returns: dict: the name servers for an interconnect.
codesearchnet
def user_lists(self, username, member_type='USER'): return self.client.service.getUserLists(username, member_type, self.proxy_id)
Look up all the lists that the user is a member of. Args: username (str): The MIT username of the user member_type(str): The type of user, "USER" or "STRING" Returns: list of strings: names of the lists that this user is a member of
codesearchnet
def dumps(messages): serialized_messages = [] try: for message in messages: message_dict = message._dump() serialized_messages.append(message_dict) except AttributeError: _log.error("Improper object for messages serialization.") raise TypeError("Message h...
Serialize messages to a JSON formatted str Args: messages (list): The list of messages to serialize. Each message in the list is a subclass of Message. Returns: str: Serialized messages. Raises: TypeError: If at least one message is not an instance of the Message class or a subclass.
juraj-google-style
def decode(value, strip=False): if value is None: return None if isinstance(value, bytes) and not isinstance(value, unicode): value = value.decode("utf-8") if strip: return unicode(value).strip() return unicode(value)
Python 2/3 friendly decoding of output. Args: value (str | unicode | bytes | None): The value to decode. strip (bool): If True, `strip()` the returned string. (Default value = False) Returns: str: Decoded value, if applicable.
juraj-google-style
def set_hostname(hostname): with salt.utils.winapi.Com(): conn = wmi.WMI() comp = conn.Win32_ComputerSystem()[0] return comp.Rename(Name=hostname)
Set the hostname of the windows minion, requires a restart before this will be updated. .. versionadded:: 2016.3.0 Args: hostname (str): The hostname to set Returns: bool: ``True`` if successful, otherwise ``False`` CLI Example: .. code-block:: bash salt 'minion-id' system.set_hostname newhostname
juraj-google-style
def train_model(samples_path: str, labels_path: str, model_state_output_path: str): samples = pandas.read_csv(samples_path) labels = pandas.read_csv(labels_path) xgb = xgboost.XGBClassifier(max_depth=3) xgb.fit(samples, labels) xgb.save_model(model_state_output_path) return xgb
Function to train the XGBoost model. Args: samples_path: path to csv file containing the training data labels_path: path to csv file containing the labels for the training data model_state_output_path: Path to store the trained model
github-repos
def get_function_args_defaults(f): signature = get_signature(f) parameter = inspect.Parameter _SUPPORTED_ARG_TYPES = [parameter.POSITIONAL_ONLY, parameter.POSITIONAL_OR_KEYWORD] args = [name for name, p in signature.parameters.items() if p.kind in _SUPPORTED_ARG_TYPES] defaults = [p.default for p in...
Returns the function arguments of a given function. Returns: (args: List[str], defaults: List[Any]). The first list names the arguments of the method and the second one has the values of the default arguments. This is similar to ``inspect.getfullargspec()``'s results, except it doesn't include bound arguments and may ...
github-repos
def trainer_results(trainer, mean=0, std=1, title='', show=True, save=True): return plot_network_results(network=trainer.module, ds=trainer.ds, mean=mean, std=std, title=title, show=show, save=save)
Plot the performance of the Network and SupervisedDataSet in a pybrain Trainer. DataSet target and output values are denormalized before plotting with: output * std + mean, which inverts the normalization (output - mean) / std. Args: trainer (Trainer): a pybrain Trainer instance containing a valid Network and DataSe...
codesearchnet
def run_experiment(hparams): estimator = train_and_maybe_evaluate(hparams) schema = taxi.read_schema(hparams.schema_file) tf_transform_output = tft.TFTransformOutput(hparams.tf_transform_dir) eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR) receiver_fn = lambda: model.eval_input_rec...
Train the model then export it for tf.model_analysis evaluation. Args: hparams: Holds hyperparameters used to train the model as name/value pairs.
github-repos
def loadnetcdf(filename, copy=True): filename = str(Path(filename).expanduser()) if copy: dataarray = xr.open_dataarray(filename).copy() else: dataarray = xr.open_dataarray(filename, chunks={}) if dataarray.name is None: dataarray.name = filename.rstrip('.nc') for key...
Load a dataarray from a NetCDF file. Args: filename (str): Filename (*.nc). copy (bool): If True, dataarray is copied in memory. Default is True. Returns: dataarray (xarray.DataArray): Loaded dataarray.
juraj-google-style
def load_data_split(proc_data_dir): ds_train = Dataset.load(path.join(proc_data_dir, 'train.bin')) ds_val = Dataset.load(path.join(proc_data_dir, 'val.bin')) ds_test = Dataset.load(path.join(proc_data_dir, 'test.bin')) return (ds_train, ds_val, ds_test)
Loads a split dataset Args: proc_data_dir: Directory with the split and processed data Returns: (Training Data, Validation Data, Test Data)
codesearchnet
def _prefix_from_ip_int(self, ip_int): prefixlen = self._max_prefixlen while prefixlen: if ip_int & 1: break ip_int >>= 1 prefixlen -= 1 if ip_int == (1 << prefixlen) - 1: return prefixlen else: raise N...
Return prefix length from a bitwise netmask. Args: ip_int: An integer, the netmask in expanded bitwise format. Returns: An integer, the prefix length. Raises: NetmaskValueError: If the input is not a valid netmask.
juraj-google-style
def set_cookie(self, key, value, domain=None, path='/', secure=False, httponly=True): self._cookies[key] = value if domain: self._cookies[key]['domain'] = domain if path: self._cookies[key]['path'] = path if secure: self._co...
Set a cookie. Args: key (:obj:`str`): Cookie name value (:obj:`str`): Cookie value domain (:obj:`str`): Cookie domain path (:obj:`str`): Cookie path secure (:obj:`bool`): True if secure, False otherwise httponly (:obj:`bool`): True if it's an HTTP-only cookie, False otherwise
juraj-google-style
def set_json(self, obj, status=HttpStatusCodes.HTTP_200): obj = json.dumps(obj, sort_keys=True, default=(lambda x: str(x))) self.set_status(status) self.set_header(HttpResponseHeaders.CONTENT_TYPE, 'application/json') self.set_content(obj)
Helper method to set a JSON response. Args: obj (:obj:`object`): JSON serializable object status (:obj:`str`, optional): Status code of the response
codesearchnet
def get_2d_local_memory_v2(x, query_shape, memory_flange): (_, height, width, depth_x) = common_layers.shape_list(x) paddings = [[0, 0], [memory_flange[0], memory_flange[0]], [memory_flange[1], memory_flange[1]], [0, 0]] padded_x = tf.pad(x, paddings) padded_x.set_shape([None, (height + (2 * memory_flan...
Gathers memory blocks around query blocks. The flange is half of the query; only works if memory flanges are half of query sizes. Args: x: a [batch, height, width, depth] tensor query_shape: 2-d integer list of query shape memory_flange: 2-d integer list of memory flanges Returns: x: A [batch, num_h_blocks, num_w_blocks,...
codesearchnet
def files_delete(self, *, id: str, **kwargs) -> SlackResponse: kwargs.update({'id': id}) return self.api_call('files.delete', json=kwargs)
Deletes a file. Args: id (str): The file id. e.g. 'F1234467890'
codesearchnet
def list(self, name=None, all=False, filters=None): resp = self.client.api.images(name=name, all=all, filters=filters) return [self.get(r['Id']) for r in resp]
List images on the server. Args: name (str): Only show images belonging to the repository ``name`` all (bool): Show intermediate image layers. By default, these are filtered out. filters (dict): Filters to be processed on the image list. Available filters: - ``dangling`` (bool) - ``label`` (str): format either ``key``...
codesearchnet
def _SetFieldType(self, field_proto, field_desc, package, scope): if field_proto.type_name: desc = self._GetTypeFromScope(package, field_proto.type_name, scope) else: desc = None if not field_proto.HasField('type'): if isinstance(desc, descriptor.Descriptor): field_proto.type...
Sets the field's type, cpp_type, message_type and enum_type. Args: field_proto: Data about the field in proto format. field_desc: The descriptor to modify. package: The package the field's container is in. scope: Enclosing scope of available types.
juraj-google-style
def _create_all_weights(self, var_list): _ = self.iterations self._create_hypers() self._create_slots(var_list)
Creates all weights, including iterations, hyperparameters and slot vars. This will add newly created variables to `optimizer.weights`. New variables are only created when this method is called the first time, or when called with different variables in the var_list. Args: var_list: list or tuple of `Variable` object...
github-repos
def serialize(data): return rapidjson.dumps(data, skipkeys=False, ensure_ascii=False, sort_keys=True)
Serialize a dict into a JSON formatted string. This function enforces rules like the separator and order of keys. This ensures that all dicts are serialized in the same way. This is especially important for hashing data. We need to make sure that everyone serializes their data in the same way so that we do not have ha...
codesearchnet
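The determinism argument above can be sketched with the standard-library `json` module (the original uses `rapidjson`; the explicit `separators` here are an assumption added to keep the output compact and byte-stable):

```python
import json

def serialize(data):
    # Fixed key order and compact separators make the output byte-stable,
    # which matters when the serialized string is later hashed.
    return json.dumps(data, skipkeys=False, ensure_ascii=False,
                      sort_keys=True, separators=(',', ':'))

# Two dicts with the same contents serialize identically regardless of
# insertion order.
a = serialize({'b': 1, 'a': 2})
b = serialize({'a': 2, 'b': 1})
```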
def _process_image_id(self): try: image_info = self.image_id.strip().split(':') self.image_publisher = image_info[0] self.image_offer = image_info[1] self.image_sku = image_info[2] self.image_version = image_info[3] except Exception: self.image_publisher = None
Split image id into component values. Example: SUSE:SLES:12-SP3:2018.01.04 Publisher:Offer:Sku:Version Raises: If image_id is not a valid format.
codesearchnet
def _init_index(root_dir, schema, index_name): index_dir = os.path.join(root_dir, index_name) try: if (not os.path.exists(index_dir)): os.makedirs(index_dir) return (create_in(index_dir, schema), index_dir) else: return (open_dir(index_dir), index_dir) exc...
Creates a new index or opens an existing one. Args: root_dir (str): root directory in which to find or create the index. schema (whoosh.fields.Schema): schema of the index to create or open. index_name (str): name of the index. Returns: tuple ((whoosh.index.FileIndex, str)): first element is the index, second is the index directory.
codesearchnet
def register_extension(self, ext_in, ext_out, force=False): if not force and (ext_in in self.__extensions.keys()): self.log_warning("Extension %s already exist, ignore redefinition." % ext_in) return self.__extensions[ext_in] = ext_out
Add/register a file extension. Args: ext_in (str): Extension of input files. ext_out (str): Extension of corresponding output files. force (bool): If ``force`` is set to ``True``, simply overwrite existing extensions, otherwise do nothing. If the ``logger`` is set, log a warning about the duplicate extension if ``for...
juraj-google-style
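The register/ignore-duplicates pattern above can be shown without the surrounding class; this standalone registry (a sketch, not the original API) returns a flag instead of logging so the behavior is easy to check:

```python
class ExtensionRegistry:
    """Minimal sketch of the register_extension pattern: map input file
    extensions to output extensions, ignoring duplicate registrations
    unless force=True."""

    def __init__(self):
        self._extensions = {}

    def register(self, ext_in, ext_out, force=False):
        if not force and ext_in in self._extensions:
            return False  # duplicate: ignored, existing mapping kept
        self._extensions[ext_in] = ext_out
        return True

reg = ExtensionRegistry()
first = reg.register('.md', '.html')            # new mapping
dup = reg.register('.md', '.xml')               # ignored
forced = reg.register('.md', '.xml', force=True)  # overwrites
```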
def MessageToJson(message, including_default_value_fields=False): js = _MessageToJsonObject(message, including_default_value_fields) return json.dumps(js, indent=2)
Converts protobuf message to JSON format. Args: message: The protocol buffers message instance to serialize. including_default_value_fields: If True, singular primitive fields, repeated fields, and map fields will always be serialized. If False, only serialize non-empty fields. Singular message fields and oneof fiel...
juraj-google-style
def get_all_thread_ids(self): json = self._get_json(self._url.thread_list()) return [thread['no'] for page in json for thread in page['threads']]
Return the ID of every thread on this board. Returns: list of ints: List of IDs of every thread on this board.
codesearchnet
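The double comprehension in `get_all_thread_ids` flattens a pages-of-threads payload in one pass; here is the same expression against a hypothetical JSON payload shaped like the thread-list endpoint:

```python
# Hypothetical payload: a list of pages, each holding a list of threads.
pages = [
    {'page': 1, 'threads': [{'no': 100}, {'no': 101}]},
    {'page': 2, 'threads': [{'no': 102}]},
]

# Outer loop over pages, inner loop over each page's threads --
# identical in structure to the comprehension in get_all_thread_ids.
thread_ids = [thread['no'] for page in pages for thread in page['threads']]
```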
def handle_upnp_error(self, xml_error): xml_err...
Dissect a UPnP error and raise an appropriate exception. Args: xml_error (str): a unicode string containing the body of the UPnP/SOAP Fault response. Raises an exception containing the error code.
juraj-google-style
def _FormatField(self, field): if self._FIELD_DELIMITER and isinstance(field, py2to3.STRING_TYPES): return field.replace(self._FIELD_DELIMITER, ' ') return field
Formats a field. Args: field (str): field value. Returns: str: formatted field value.
juraj-google-style
def get_number_of_image_patches(self, height: int, width: int, images_kwargs=None): do_image_splitting = images_kwargs.get('do_image_splitting', None) or self.do_image_splitting max_image_size = images_kwargs.get('max_image_size', None) or self.max_image_size size = images_kwargs.get('size', None) or self.s...
A utility that returns number of image patches for a given image size. Args: height (`int`): Height of the input image. width (`int`): Width of the input image. images_kwargs (`dict`, *optional*): Any kwargs to override defaults of the image processor. Returns: `int`: Number of patches per image.
github-repos
def rtruediv(self, other, axis="columns", level=None, fill_value=None): return self._binary_op( "rtruediv", other, axis=axis, level=level, fill_value=fill_value )
Reverse-divide this DataFrame by another DataFrame/Series/scalar (computes other / self). Args: other: The object to use to apply the div against this. axis: The axis to div over. level: The Multilevel index level to apply div over. fill_value: The value to fill NaNs with. Returns: A new DataFrame with the rtruediv applied.
juraj-google-style
class TFIdeficsVisionEncoder(tf.keras.layers.Layer): def __init__(self, config: IdeficsVisionConfig, **kwargs): super().__init__(**kwargs) self.config = config self.layers = [TFIdeficsVisionEncoderLayer(config, name=f'layers.{i}') for i in range(config.num_hidden_layers)] self.gradi...
Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a [`TFIdeficsVisionEncoderLayer`]. Args: config: IdeficsVisionConfig
github-repos
def fit_transform(self, tables=None, transformer_dict=None, transformer_list=None, missing=None): if (missing is None): missing = self.missing else: self.missing = missing warnings.warn(DEPRECATION_MESSAGE.format('fit_transform'), DeprecationWarning) transformed = {} if (tables i...
Create, apply and store the specified transformers for the given tables. Args: tables(dict): Mapping of table names to `tuple` where each tuple is of the form (`pandas.DataFrame`, `dict`). The `DataFrame` contains the table data and the `dict` the corresponding meta information. If not specified, the tables will be ...
codesearchnet
def FindMessageTypeByName(self, full_name): full_name = _NormalizeFullyQualifiedName(full_name) if (full_name not in self._descriptors): self.FindFileContainingSymbol(full_name) return self._descriptors[full_name]
Loads the named descriptor from the pool. Args: full_name: The full name of the descriptor to load. Returns: The descriptor for the named type.
codesearchnet
def Trim(lst, limit): limit = max(0, limit) clipping = lst[limit:] del lst[limit:] return clipping
Trims a given list so that it is not longer than given limit. Args: lst: A list to trim. limit: A maximum number of elements in the list after trimming. Returns: A suffix of the input list that was trimmed.
codesearchnet
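`Trim` both mutates the list and returns the removed suffix; this standalone copy (renamed `trim` so it is clearly a sketch, not the original) makes both effects visible:

```python
def trim(lst, limit):
    # Clamp negative limits to zero, truncate the list in place,
    # and return the removed tail.
    limit = max(0, limit)
    clipping = lst[limit:]
    del lst[limit:]
    return clipping

items = [1, 2, 3, 4, 5]
tail = trim(items, 3)   # items is shortened, tail holds the rest
```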
def notify(self, notices): tmpl_html = get_template('required_tags_notice.html') tmpl_text = get_template('required_tags_notice.txt') for (recipient, data) in list(notices.items()): body_html = tmpl_html.render(data=data) body_text = tmpl_text.render(data=data) send_notification(subs...
Send notifications to the recipients provided Args: notices (:obj:`dict` of `str`: `list`): A dictionary mapping notification messages to the recipient. Returns: `None`
codesearchnet
def completely_parse_reader(parser: Parser[(Input, Output)], reader: Reader[Input]) -> Result[Output]: result = (parser << eof).consume(reader) if isinstance(result, Continue): return Success(result.value) else: used = set() unique_expected = [] for expected_lambda in result....
Consume reader and return Success only on complete consumption. This is a helper function for ``parse`` methods, which return ``Success`` when the input is completely consumed and ``Failure`` with an appropriate message otherwise. Args: parser: The parser doing the consuming reader: The input being consumed Returns:...
codesearchnet
def supports_card_actions(channel_id: str, button_cnt: int = 100) -> bool: max_actions = { Channels.facebook: 3, Channels.skype: 3, Channels.ms_teams: 3, Channels.line: 99, Channels.slack: 100, Channels.emulator: 100, ...
Determine if a number of Card Actions are supported by a Channel. Args: channel_id (str): The Channel to check if the Card Actions are supported in. button_cnt (int, optional): Defaults to 100. The number of Card Actions to check for the Channel. Returns: bool: True if the Channel supports the button_cnt total Card A...
juraj-google-style
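The lookup in `supports_card_actions` is a cap table keyed by channel; a minimal sketch, with a hypothetical subset of the caps (real limits belong to each chat platform and may differ):

```python
# Hypothetical per-channel caps echoing the table in supports_card_actions.
MAX_ACTIONS = {'facebook': 3, 'skype': 3, 'line': 99, 'slack': 100}

def supports_card_actions(channel_id, button_cnt=100):
    # Unknown channels are treated as supporting no card actions.
    return button_cnt <= MAX_ACTIONS.get(channel_id, 0)

ok = supports_card_actions('facebook', 3)
too_many = supports_card_actions('facebook', 4)
```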
def download_listing(self, file: Optional[IO], duration_timeout: Optional[float]=None) -> ListingResponse: if (self._session_state != SessionState.directory_request_sent): raise RuntimeError('File request not sent') self._session_state = SessionState.file_request_sent (yield from self.download(file=...
Read file listings. Args: file: A file object or asyncio stream. duration_timeout: Maximum time in seconds within which the entire file must be read. Returns: A Response populated with the file listings. Be sure to call :meth:`start_file_listing` first. Coroutine.
codesearchnet
def filter_queryset(self, request, term, queryset=None, **dependent_fields): if (queryset is None): queryset = self.get_queryset() search_fields = self.get_search_fields() select = Q() term = term.replace('\t', ' ') term = term.replace('\n', ' ') for t in [t for t in term.split(' ') if (...
Return QuerySet filtered by search_fields matching the passed term. Args: request (django.http.request.HttpRequest): The request is being passed from the JSON view and can be used to dynamically alter the response queryset. term (str): Search term queryset (django.db.models.query.QuerySet): QuerySet to select choices ...
codesearchnet
def mlir_convert(options, saved_model_dir, input_tensors, output_tensors, **kwargs): test_params = kwargs.get('test_params', {}) extra_convert_options = kwargs.get('extra_convert_options', zip_test_utils.ExtraConvertOptions()) tflite_model = None log = '' signature_key = signature_constants.DEFAULT_...
Convert a saved model into a tflite model with MLIR-based conversion. Args: options: A lite.testing.generate_examples_lib.Options instance. saved_model_dir: Path to the saved model. input_tensors: List of input tensor tuples `(name, shape, type)`. output_tensors: List of output tensors (names). **kwargs: Extra paramet...
github-repos
async def verify_scriptworker_task(chain, obj): errors = [] if (obj.worker_impl != 'scriptworker'): errors.append('{} {} must be run from scriptworker!'.format(obj.name, obj.task_id)) raise_on_errors(errors)
Verify the signing trust object. Currently the only check is to make sure it was run on a scriptworker. Args: chain (ChainOfTrust): the chain we're operating on obj (ChainOfTrust or LinkOfTrust): the trust object for the signing task.
codesearchnet
def html_page_for_render_items(bundle, docs_json, render_items, title, template=None, template_variables={}): if (title is None): title = DEFAULT_TITLE (bokeh_js, bokeh_css) = bundle json_id = make_id() json = escape(serialize_json(docs_json), quote=False) json = wrap_in_script_tag(json, 'ap...
Render an HTML page from a template and Bokeh render items. Args: bundle (tuple): a tuple containing (bokehjs, bokehcss) docs_json (JSON-like): Serialized Bokeh Document render_items (RenderItems): Specific items to render from the document and where title (str or None): A title for the HTML page. If None, DEFAULT_TI...
codesearchnet
def image_format(value): if (value.image.format.upper() not in constants.ALLOWED_IMAGE_FORMATS): raise ValidationError(MESSAGE_INVALID_IMAGE_FORMAT)
Confirms that the uploaded image is of supported format. Args: value (File): The file with an `image` property containing the image Raises: django.forms.ValidationError
codesearchnet
def GetString(self): string_list = [] string_list.append('Report generated from: {0:s}'.format(self.plugin_name)) time_compiled = getattr(self, 'time_compiled', 0) if time_compiled: time_compiled = timelib.Timestamp.CopyToIsoFormat(time_compiled) string_list.append('Generated on: {0:s}'....
Retrieves a string representation of the report. Returns: str: string representation of the report.
codesearchnet
def top_1(x, reduced_dim, dtype=tf.int32, name=None): reduced_dim = convert_to_dimension(reduced_dim) with tf.name_scope(name, default_name="top_1"): max_val = reduce_max(x, reduced_dim=reduced_dim) is_max = to_float(equal(x, max_val)) pos = mtf_range(x.mesh, reduced_dim, tf.float32) ret = reduce...
Argmax and Max. Args: x: a Tensor reduced_dim: a Dimension in x.shape.dims dtype: a tf.dtype (for the output) name: an optional string Returns: indices: a Tensor with given dtype values: optional Tensor equal to mtf.reduce_max(x, reduced_dim=reduced_dim)
juraj-google-style
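`top_1` returns both the argmax indices and the max values along a dimension; a plain-Python sketch of the same (index, value) pairing over a flat list, with none of the mesh-tensorflow machinery:

```python
def top_1(values):
    # Return (index, value) of the maximum element, mirroring the
    # indices/values pair the mesh-tensorflow top_1 produces along
    # the reduced dimension.
    max_val = max(values)
    return values.index(max_val), max_val

idx, val = top_1([0.1, 0.9, 0.4])
```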
def create_package(name, data, package_cls=None): from rez.package_maker__ import PackageMaker maker = PackageMaker(name, data, package_cls=package_cls) return maker.get_package()
Create a package given package data. Args: name (str): Package name. data (dict): Package data. Must conform to `package_maker.package_schema`. Returns: `Package` object.
juraj-google-style
def save(tiff_filename, numpy_data): tiff_filename = os.path.expanduser(tiff_filename) if (type(numpy_data) is str): fp = open(tiff_filename, 'wb') fp.write(numpy_data) fp.close() return tiff_filename try: img = tiff.imsave(tiff_filename, numpy_data) except Exceptio...
Export a numpy array to a TIFF file. Arguments: tiff_filename: A filename to which to save the TIFF data numpy_data: The numpy array to save to TIFF Returns: String. The expanded filename that now holds the TIFF data
codesearchnet
def open_port(upnp, internal_port, external_start_port=None): if external_start_port is None: external_start_port = internal_port if upnp is None: return False def register(internal, external): mapping = upnp.getspecificportmapping(external, 'UDP') if mapping ...
Open a port for the raiden service (listening at `internal_port`) through UPnP. Args: internal_port (int): the target port of the raiden service external_start_port (int): query for an external port starting here (default: internal_port) Returns: external_ip_address, external_port (tuple(str, int)): if successful or N...
juraj-google-style
def _escaped_token_to_subtoken_ids(self, escaped_token): return [self._subtoken_string_to_id[subtoken] for subtoken in self._escaped_token_to_subtoken_strings(escaped_token)]
Converts an escaped token string to a list of subtoken IDs. Args: escaped_token: An escaped token as a unicode string. Returns: A list of subtoken IDs as integers.
codesearchnet
def _error_messages(self, driver_id): assert isinstance(driver_id, ray.DriverID) message = self.redis_client.execute_command( "RAY.TABLE_LOOKUP", ray.gcs_utils.TablePrefix.ERROR_INFO, "", driver_id.binary()) if message is None: return [] ...
Get the error messages for a specific driver. Args: driver_id: The ID of the driver to get the errors for. Returns: A list of the error messages for this driver.
juraj-google-style
def __init__(self, code=None, contract_properties=0, name=None, version=None, author=None, email=None, description=None): self.Code = code self.ContractProperties = contract_properties self.Name = name self.CodeVersion = version self.Author = author ...
Create an instance. Args: code (neo.Core.FunctionCode): contract_properties (neo.SmartContract.ContractParameterType): contract type. name (bytes): version (bytes): author (bytes): email (bytes): description (bytes):
juraj-google-style
def _GetKeysDefaultEmpty(self, top_level, keys, depth=1): keys = set(keys) match = {} if (depth == 1): for key in keys: value = top_level.get(key, None) if (value is not None): match[key] = value else: for (_, parsed_key, parsed_value) in plist_int...
Retrieves plist keys, defaulting to empty values. Args: top_level (plistlib._InternalDict): top level plist object. keys (set[str]): names of keys that should be returned. depth (int): depth within the plist, where 1 is top level. Returns: dict[str, str]: values of the requested keys.
codesearchnet
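The depth-1 branch of `_GetKeysDefaultEmpty` is a filtered key lookup: only requested keys that are actually present make it into the result. A standalone sketch of that branch:

```python
def get_keys_default_empty(top_level, keys):
    # Depth-1 behaviour of _GetKeysDefaultEmpty: return only the
    # requested keys whose values exist in the top-level dict.
    match = {}
    for key in set(keys):
        value = top_level.get(key)
        if value is not None:
            match[key] = value
    return match

plist = {'Name': 'demo', 'Version': '1.0'}
found = get_keys_default_empty(plist, {'Name', 'Missing'})
```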