def __init__(self, chain):
    self.queue = []
    self._fsm = JTAGStateMachine()
    self._chain = chain
Create a new CommandQueue to manage, compile, and run Primitives. Args: chain: A JTAGScanChain instance that this queue will be associated with.
juraj-google-style
def _last_path_token(builder: expressions.Builder) -> str:
    if isinstance(builder.node, _evaluation.RootMessageNode):
        return ''
    return builder.node.to_path_token()
Returns `builder`'s last path token less the resource type. For example: * "Foo" returns "" (empty string) * "Foo.bar" returns "bar" * "Foo.bar.bats" returns "bats" Args: builder: The `builder` whose relative path to return.
github-repos
def dna_transformation(prev_image, dna_input, dna_kernel_size, relu_shift):
    prev_image_pad = tf.pad(prev_image, [[0, 0], [2, 2], [2, 2], [0, 0]])
    image_height = int(prev_image.get_shape()[1])
    image_width = int(prev_image.get_shape()[2])
    inputs = []
    for xkern in range(dna_kernel_size):
        for ykern in range(dna_kernel_size):
            inputs.append(
                tf.expand_dims(
                    tf.slice(prev_image_pad, [0, xkern, ykern, 0],
                             [-1, image_height, image_width, -1]), [3]))
    inputs = tf.concat(axis=3, values=inputs)
    kernel = tf.nn.relu(dna_input - relu_shift) + relu_shift
    kernel = tf.expand_dims(
        kernel / tf.reduce_sum(kernel, [3], keep_dims=True), [4])
    return tf.reduce_sum(kernel * inputs, [3], keep_dims=False)
Apply dynamic neural advection to previous image. Args: prev_image: previous image to be transformed. dna_input: hidden layer to be used for computing DNA transformation. dna_kernel_size: dna kernel size. relu_shift: shift for ReLU function. Returns: Image transformed by the predicted DNA kernels.
juraj-google-style
def serialize_quantity(o):
    return dict(
        _type='astropy.units.Quantity',
        value=o.value,
        unit=o.unit.to_string())
Serializes an :obj:`astropy.units.Quantity`, for JSONification. Args: o (:obj:`astropy.units.Quantity`): :obj:`Quantity` to be serialized. Returns: A dictionary that can be passed to :obj:`json.dumps`.
juraj-google-style
def normalize_json(template):
    obj = parse_cloudformation_template(template)
    json_str = json.dumps(
        obj, sort_keys=True, indent=4, default=str, separators=(',', ': '))
    result = []
    lines = json_str.split('\n')
    for line in lines:
        result.append(line + '\n')
    return result
Normalize our template for diffing. Args: template(str): string representing the template Returns: list: lines of the normalized JSON representation of the template
codesearchnet
def _from_yaml_v0(cls, job):
    job_metadata = {}
    for key in ['job-id', 'job-name', 'create-time']:
        job_metadata[key] = job.get(key)
    job_metadata['create-time'] = dsub_util.replace_timezone(
        datetime.datetime.strptime(job['create-time'], '%Y-%m-%d %H:%M:%S.%f'),
        tzlocal())
    job_resources = Resources()
    params = {}
    labels = job.get('labels', {})
    if 'dsub-version' in labels:
        job_metadata['dsub-version'] = labels['dsub-version']
        del labels['dsub-version']
    params['labels'] = cls._label_params_from_dict(labels)
    params['envs'] = cls._env_params_from_dict(job.get('envs', {}))
    params['inputs'] = cls._input_file_params_from_dict(job.get('inputs', {}), False)
    params['outputs'] = cls._output_file_params_from_dict(job.get('outputs', {}), False)
    if job.get('task-id') is None:
        job_params = params
        task_metadata = {'task-id': None}
        task_params = {}
    else:
        job_params = {}
        task_metadata = {'task-id': str(job.get('task-id'))}
        task_params = params
    task_resources = Resources(logging_path=job.get('logging'))
    task_descriptors = [
        TaskDescriptor.get_complete_descriptor(
            task_metadata, task_params, task_resources)
    ]
    return JobDescriptor.get_complete_descriptor(
        job_metadata, job_params, job_resources, task_descriptors)
Populate a JobDescriptor from the local provider's original meta.yaml. The local job provider had the first incarnation of a YAML file for each task. That idea was extended here in the JobDescriptor and the local provider adopted the JobDescriptor.to_yaml() call to write its meta.yaml. The JobDescriptor.from_yaml() detects if it receives a local provider's "v0" meta.yaml and calls this function. Args: job: an object produced from decoding meta.yaml. Returns: A JobDescriptor populated as best we can from the old meta.yaml.
codesearchnet
def _iterate_through_class(self, class_dict):
    output_dict = {}
    for key in class_dict:
        val = class_dict[key]
        try:
            val = val.__dict__
        except AttributeError:
            pass
        if type(val) is dict:
            val = self._iterate_through_class(val)
        if type(val) is list:
            temp_val = []
            for val_i in val:
                try:
                    val_i = val_i.__dict__
                except AttributeError:
                    pass
                if type(val_i) is dict:
                    val_i = self._iterate_through_class(val_i)
                temp_val.append(val_i)
            val = temp_val
        output_dict[key] = val
    return output_dict
Recursive function for output dictionary creation. Function will check each value in a dictionary to see if it is a class, list, or dictionary object. The idea is to turn all class objects into dictionaries. If it is a class object it will pass its ``class.__dict__`` recursively through this function again. If it is a dictionary, it will pass the dictionary recursively through this function again. If the object is a list, it will iterate through entries checking for class or dictionary objects and pass them recursively through this function. This uses the knowledge of the list structures in the code. Args: class_dict (obj): Dictionary to iteratively check. Returns: Dictionary with all class objects turned into dictionaries.
codesearchnet
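A dependency-free sketch of the same recursive class-to-dict idea, for illustration only (the function and the `Point`/`Shape` classes here are hypothetical, not from the original codebase):

```python
# Recursively replace any object that has a __dict__ with a plain dict,
# descending into nested dicts and lists along the way.
def to_dict(obj):
    obj = getattr(obj, '__dict__', obj)
    if isinstance(obj, dict):
        return {k: to_dict(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [to_dict(v) for v in obj]
    return obj

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Shape:
    def __init__(self):
        self.points = [Point(0, 0), Point(1, 2)]
        self.name = 'line'

print(to_dict(Shape()))
# {'points': [{'x': 0, 'y': 0}, {'x': 1, 'y': 2}], 'name': 'line'}
```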
def cdna_transformation(prev_image, cdna_input, num_masks, color_channels,
                        dna_kernel_size, relu_shift):
    batch_size = tf.shape(cdna_input)[0]
    height = int(prev_image.get_shape()[1])
    width = int(prev_image.get_shape()[2])
    cdna_kerns = tfl.dense(
        cdna_input, dna_kernel_size * dna_kernel_size * num_masks,
        name='cdna_params', activation=None)
    cdna_kerns = tf.reshape(
        cdna_kerns, [batch_size, dna_kernel_size, dna_kernel_size, 1, num_masks])
    cdna_kerns = tf.nn.relu(cdna_kerns - relu_shift) + relu_shift
    norm_factor = tf.reduce_sum(cdna_kerns, [1, 2, 3], keep_dims=True)
    cdna_kerns /= norm_factor
    cdna_kerns = tf.transpose(cdna_kerns, [1, 2, 0, 4, 3])
    cdna_kerns = tf.reshape(
        cdna_kerns, [dna_kernel_size, dna_kernel_size, batch_size, num_masks])
    prev_image = tf.transpose(prev_image, [3, 1, 2, 0])
    transformed = tf.nn.depthwise_conv2d(prev_image, cdna_kerns, [1, 1, 1, 1], 'SAME')
    transformed = tf.reshape(
        transformed, [color_channels, height, width, batch_size, num_masks])
    transformed = tf.transpose(transformed, [3, 1, 2, 0, 4])
    transformed = tf.unstack(transformed, axis=-1)
    return transformed
Apply convolutional dynamic neural advection to previous image. Args: prev_image: previous image to be transformed. cdna_input: hidden layer to be used for computing CDNA kernels. num_masks: number of masks and hence the number of CDNA transformations. color_channels: the number of color channels in the images. dna_kernel_size: dna kernel size. relu_shift: shift for ReLU function. Returns: List of images transformed by the predicted CDNA kernels.
codesearchnet
class _ImageEmbeddingHandler(_EmbeddingHandler):
    def _validate_column_data(self, batch):
        if isinstance(batch[0], (int, str, float, bool)):
            raise TypeError(
                f'Embeddings can only be generated on dict[str, Image]. '
                f'Got dict[str, {type(batch[0])}] instead.')

    def get_metrics_namespace(self) -> str:
        return self._underlying.get_metrics_namespace() or 'BeamML_ImageEmbeddingHandler'
A ModelHandler intended to work on list[dict[str, Image]] inputs. The inputs to the model handler are expected to be a list of dicts. For example, if the original model is used with RunInference to take a PCollection[E] to a PCollection[P], this ModelHandler would take a PCollection[dict[str, E]] to a PCollection[dict[str, P]]. _ImageEmbeddingHandler will accept an EmbeddingsManager instance, which contains the details of the model to be loaded and the inference_fn to be used. The purpose of _ImageEmbeddingHandler is to generate embeddings for image inputs using the EmbeddingsManager instance. If the input is not an Image representation column, a RuntimeError will be raised. This is an internal class and offers no backwards compatibility guarantees. Args: embeddings_manager: An EmbeddingsManager instance.
github-repos
def plugin_test_validation(self, handler):
    methods = {name: func for name, func in inspect.getmembers(handler, callable)}
    if 'test' not in methods:
        print('Failure for plugin: %s' % handler.__name__)
        print('Validation Error: The file must have a top level test() method')
        return None
    return methods['test']
Plugin validation. Every workbench plugin must have a top level test() method. Args: handler: The loaded plugin. Returns: None if the test fails or the test function.
juraj-google-style
def end_entry(self):
    if self.in_progress is None:
        return Error.NO_ERROR
    if self.in_progress.data_space() == 2:
        return Error.INPUT_BUFFER_WRONG_SIZE
    for entry in self.entries:
        if (entry.target == self.in_progress.target and
                entry.var_id == self.in_progress.var_id):
            entry.valid = False
    self.entries.append(self.in_progress)
    self.data_index += self.in_progress.data_space() - 2
    self.in_progress = None
    return Error.NO_ERROR
Finish a previously started config database entry. This commits the currently in progress entry. The expected flow is that start_entry() is called followed by 1 or more calls to add_data() followed by a single call to end_entry(). Returns: int: An error code
codesearchnet
def add(self, private_key):
    if not isinstance(private_key, PaillierPrivateKey):
        raise TypeError("private_key should be of type PaillierPrivateKey, "
                        "not %s" % type(private_key))
    self.__keyring[private_key.public_key] = private_key
Add a key to the keyring. Args: private_key (PaillierPrivateKey): a key to add to this keyring.
juraj-google-style
def load_fasta_file_as_dict_of_seqrecords(filename):
    results = {}
    records = load_fasta_file(filename)
    for r in records:
        results[r.id] = r
    return results
Load a FASTA file and return the sequences as a dict of {ID: SeqRecord} Args: filename (str): Path to the FASTA file to load Returns: dict: Dictionary of IDs to their SeqRecords
juraj-google-style
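The same ID-to-record mapping can be sketched without Biopython by parsing FASTA text directly; this is a minimal, hypothetical stand-in (mapping IDs to plain sequence strings rather than SeqRecords):

```python
# Parse FASTA-formatted text into {ID: sequence} pairs.
# A FASTA record ID is the header token up to the first whitespace.
def fasta_to_dict(text):
    records = {}
    seq_id = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('>'):
            seq_id = line[1:].split()[0]
            records[seq_id] = ''
        elif seq_id is not None:
            records[seq_id] += line
    return records

print(fasta_to_dict(">seq1 demo\nATGC\nGGTA\n>seq2\nTTTT\n"))
# {'seq1': 'ATGCGGTA', 'seq2': 'TTTT'}
```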
def __method_descriptor(self, service, method_info, operation_id,
                        protorpc_method_info, security_definitions):
    descriptor = {}
    request_message_type = (resource_container.ResourceContainer
                            .get_request_message(protorpc_method_info.remote))
    request_kind = self.__get_request_kind(method_info)
    remote_method = protorpc_method_info.remote
    path = method_info.get_path(service.api_info)
    descriptor['parameters'] = self.__request_message_descriptor(
        request_kind, request_message_type,
        method_info.method_id(service.api_info), path)
    descriptor['responses'] = self.__response_message_descriptor(
        remote_method.response_type(), method_info.method_id(service.api_info))
    descriptor['operationId'] = operation_id
    api_key_required = method_info.is_api_key_required(service.api_info)
    if method_info.audiences is not None:
        descriptor['security'] = self.__security_descriptor(
            method_info.audiences, security_definitions,
            api_key_required=api_key_required)
    elif service.api_info.audiences is not None or api_key_required:
        descriptor['security'] = self.__security_descriptor(
            service.api_info.audiences, security_definitions,
            api_key_required=api_key_required)
    if method_info.metric_costs:
        descriptor['x-google-quota'] = self.__x_google_quota_descriptor(
            method_info.metric_costs)
    return descriptor
Describes a method. Args: service: endpoints.Service, Implementation of the API as a service. method_info: _MethodInfo, Configuration for the method. operation_id: string, Operation ID of the method protorpc_method_info: protorpc.remote._RemoteMethodInfo, ProtoRPC description of the method. security_definitions: list of dicts, security definitions for the API. Returns: Dictionary describing the method.
juraj-google-style
def fetch_time_output(marker, format_s, ins):
    from parse import parse

    timings = [x for x in ins if marker in x]
    res = [parse(format_s, t) for t in timings]
    return [_f for _f in res if _f]
Fetch the output of /usr/bin/time from a list of lines. Args: marker: The marker that delimits the time output format_s: The format string used to parse the timings ins: A list of lines in which we look for the output. Returns: A list of timing tuples
juraj-google-style
def post(self, json=None):
    return self._call('post', url=self.endpoint, json=json)
Send a POST request and return the JSON decoded result. Args: json (dict, optional): Object to encode and send in request. Returns: mixed: JSON decoded response data.
juraj-google-style
def get_variants(self, chromosome=None, start=None, end=None):
    query = {}
    if chromosome:
        query['chrom'] = chromosome
    if start:
        query['start'] = {'$lte': end}
        query['end'] = {'$gte': start}
    LOG.info('Find all variants {}'.format(query))
    return self.db.variant.find(query).sort([('start', ASCENDING)])
Return all variants in the database If no region is specified all variants will be returned. Args: chromosome(str) start(int) end(int) Returns: variants(Iterable(Variant))
codesearchnet
def generate_hpo_gene_list(self, *hpo_terms):
    genes = {}
    for term in hpo_terms:
        hpo_obj = self.hpo_term(term)
        if hpo_obj:
            for hgnc_id in hpo_obj['genes']:
                if hgnc_id in genes:
                    genes[hgnc_id] += 1
                else:
                    genes[hgnc_id] = 1
        else:
            LOG.warning("Term %s could not be found", term)
    sorted_genes = sorted(genes.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_genes
Generate a sorted list with namedtuples of hpogenes Each namedtuple of the list looks like (hgnc_id, count) Args: hpo_terms(iterable(str)) Returns: hpo_genes(list(HpoGene))
juraj-google-style
def get_username(self, userid):
    username = self.user_map.get(userid)
    if not username:
        users = self.get_users()
        if users:
            members = {
                m['id']: m['name']
                for m in users.get('members', [{}])
                if m.get('id') and m.get('name')
            }
            if members:
                self.user_map.update(members)
        username = self.user_map.get(userid, userid)
    return username
Perform a lookup of users to resolve a userid to a username Args: userid (string): Slack userid to lookup. Returns: string: Human-friendly name of the user
juraj-google-style
def classifier_factory(clf):
    required_methods = ['fit', 'score', 'predict']
    for method in required_methods:
        if not hasattr(clf, method):
            raise TypeError('"{}" is not in clf. Did you pass a '
                            'classifier instance?'.format(method))
    optional_methods = ['predict_proba']
    for method in optional_methods:
        if not hasattr(clf, method):
            warnings.warn('{} not in clf. Some plots may '
                          'not be possible to generate.'.format(method))
    additional_methods = {
        'plot_learning_curve': plot_learning_curve,
        'plot_confusion_matrix': plot_confusion_matrix_with_cv,
        'plot_roc_curve': plot_roc_curve_with_cv,
        'plot_ks_statistic': plot_ks_statistic_with_cv,
        'plot_precision_recall_curve': plot_precision_recall_curve_with_cv,
        'plot_feature_importances': plot_feature_importances,
    }
    for key, fn in six.iteritems(additional_methods):
        if hasattr(clf, key):
            warnings.warn('"{}" method already in clf. '
                          'Overriding anyway. This may '
                          'result in unintended behavior.'.format(key))
        setattr(clf, key, types.MethodType(fn, clf))
    return clf
Embeds scikit-plot instance methods in an sklearn classifier. Args: clf: Scikit-learn classifier instance Returns: The same scikit-learn classifier instance passed in **clf** with embedded scikit-plot instance methods. Raises: TypeError: If **clf** does not contain the instance methods necessary for scikit-plot instance methods.
juraj-google-style
def exe_cmd(*cmds):
    cmd = ' '.join(cmds)
    proc = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
    (out, err) = proc.communicate()
    if not err:
        return out
    return err
Executes commands in a new shell. Directing stderr to PIPE. This is fastboot's own exe_cmd because of its peculiar way of writing non-error info to stderr. Args: cmds: A sequence of commands and arguments. Returns: The output of the command run. Raises: Exception: An error occurred during the command execution.
juraj-google-style
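A self-contained usage sketch of the function above (reproduced here with its `subprocess` imports so it runs standalone); the point is that stderr, not an exception, signals failure, because fastboot writes informational output there:

```python
from subprocess import Popen, PIPE

def exe_cmd(*cmds):
    # Join the arguments into one shell command; return stdout unless
    # anything was written to stderr, in which case return stderr instead.
    cmd = ' '.join(cmds)
    proc = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
    out, err = proc.communicate()
    if not err:
        return out
    return err

print(exe_cmd('echo', 'hello'))
# b'hello\n'
```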
def clear(self, size=-1, *, offset=0, chunk=None) -> None:
    self.mglo.clear(size, offset, chunk)
Clear the content. Args: size (int): The size. Value ``-1`` means all. Keyword Args: offset (int): The offset. chunk (bytes): The chunk to use repeatedly.
codesearchnet
def make_encoder(base_depth, activation, latent_size, code_size):
    conv = functools.partial(
        tf.keras.layers.Conv2D, padding="SAME", activation=activation)
    encoder_net = tf.keras.Sequential([
        conv(base_depth, 5, 1),
        conv(base_depth, 5, 2),
        conv(2 * base_depth, 5, 1),
        conv(2 * base_depth, 5, 2),
        conv(4 * latent_size, 7, padding="VALID"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(latent_size * code_size, activation=None),
        tf.keras.layers.Reshape([latent_size, code_size]),
    ])

    def encoder(images):
        images = 2 * tf.cast(images, dtype=tf.float32) - 1
        codes = encoder_net(images)
        return codes

    return encoder
Creates the encoder function. Args: base_depth: Layer base depth in encoder net. activation: Activation function in hidden layers. latent_size: The number of latent variables in the code. code_size: The dimensionality of each latent variable. Returns: encoder: A `callable` mapping a `Tensor` of images to a `Tensor` of shape `[..., latent_size, code_size]`.
juraj-google-style
def num_embedding_devices_per_chip(self):
    return self.tpu_hardware_feature_proto.num_embedding_devices_per_chip
Number of embedding accelerator devices per chip. Returns: Number of embedding devices per chip.
github-repos
def __init__(self, projection=None, orientation0=(0, 0, -1), **kwargs):
    kwargs['orientation0'] = orientation0
    super(Camera, self).__init__(**kwargs)
    self.projection = PerspectiveProjection() if not projection else projection
    self.reset_uniforms()
Returns a camera object Args: projection (obj): the projection type for the camera. It can either be an instance of OrthoProjection or PerspectiveProjection orientation0 (tuple): Returns: Camera instance
juraj-google-style
def set_sflow(self, name, value=None, default=False, disable=False):
    if value not in [True, False, None]:
        raise ValueError
    commands = ['interface %s' % name]
    commands.append(self.command_builder('sflow enable', value=value,
                                         default=default, disable=disable))
    return self.configure(commands)
Configures the sFlow state on the interface Args: name (string): The interface identifier. It must be a full interface name (ie Ethernet, not Et) value (boolean): True if sFlow should be enabled otherwise False default (boolean): Specifies the default value for sFlow disable (boolean): Specifies to disable sFlow Returns: True if the operation succeeds otherwise False is returned
juraj-google-style
def _ParseOriginalFilename(self, file_object, format_version):
    file_offset = file_object.tell()
    if format_version == 1:
        data_type_map = self._GetDataTypeMap('recycle_bin_metadata_utf16le_string')
    else:
        data_type_map = self._GetDataTypeMap(
            'recycle_bin_metadata_utf16le_string_with_size')
    try:
        original_filename, _ = self._ReadStructureFromFileObject(
            file_object, file_offset, data_type_map)
    except (ValueError, errors.ParseError) as exception:
        raise errors.ParseError(
            'Unable to parse original filename with error: {0!s}'.format(exception))
    if format_version == 1:
        return original_filename.rstrip('\x00')
    return original_filename.string.rstrip('\x00')
Parses the original filename. Args: file_object (FileIO): file-like object. format_version (int): format version. Returns: str: original filename. Raises: ParseError: if the original filename cannot be read.
codesearchnet
def _assert_weights_created(self):
    if self.dynamic:
        return
    if ('build' in self.__class__.__dict__ and self.__class__ != Model
            and not self.built):
        raise ValueError('Weights for model %s have not yet been created. '
                         'Weights are created when the Model is first called '
                         'on inputs or `build()` is called with an '
                         '`input_shape`.' % self.name)
Asserts that all the weights for the model have been created. For a non-dynamic model, the weights must already be created after the layer has been called. For a dynamic model, the exact list of weights can never be known for certain since it may change at any time during execution. We run this check right before accessing weights or getting the Numpy value for the current weights. Otherwise, if the layer has never been called, the user would just get an empty list, which is misleading. Raises: ValueError: if the weights of the network has not yet been created.
github-repos
def output(self, _filename):
    txt = ''
    for contract in self.slither.contracts_derived:
        txt += '\n{}:\n'.format(contract.name)
        table = PrettyTable(['Name', 'Type'])
        for variable in contract.state_variables:
            if not variable.is_constant:
                table.add_row([variable.name, str(variable.type)])
        txt += str(table) + '\n'
    self.info(txt)
_filename is not used Args: _filename(string)
juraj-google-style
def find_nearest_color_hexstr(hexdigits, color_table=None, method='euclid'):
    triplet = []
    try:
        if len(hexdigits) == 3:
            for digit in hexdigits:
                digit = int(digit, 16)
                triplet.append(digit * 16 + digit)
        elif len(hexdigits) == 6:
            triplet.extend(int(hexdigits[i:i + 2], 16) for i in (0, 2, 4))
        else:
            raise ValueError('wrong length: %r' % hexdigits)
    except ValueError:
        return None
    return find_nearest_color_index(*triplet, color_table=color_table, method=method)
Given a three- or six-character hex digit string, return the nearest color index. Arguments: hexdigits: a three- or six-digit hex string, e.g. 'b0b', '123456' Returns: int, None: index, or None on error.
codesearchnet
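The hex-expansion step above can be isolated into a small standalone sketch (the helper name `hex_to_triplet` is illustrative): a three-digit string expands each nibble by repeating it (0xb becomes 0xbb), while a six-digit string splits into byte pairs.

```python
# Convert a 3- or 6-character hex color string into an (r, g, b) triplet.
def hex_to_triplet(hexdigits):
    if len(hexdigits) == 3:
        # 'b' -> 0xbb == 11 * 16 + 11 == 187
        return tuple(int(d, 16) * 16 + int(d, 16) for d in hexdigits)
    if len(hexdigits) == 6:
        # Split into two-character byte pairs at offsets 0, 2, 4.
        return tuple(int(hexdigits[i:i + 2], 16) for i in (0, 2, 4))
    raise ValueError('wrong length: %r' % hexdigits)

print(hex_to_triplet('b0b'))     # (187, 0, 187)
print(hex_to_triplet('123456'))  # (18, 52, 86)
```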
def is_github_task(task):
    return any((
        task.get('schedulerId') == 'taskcluster-github',
        task.get('extra', {}).get('tasks_for', '').startswith('github-'),
        is_github_url(task.get('metadata', {}).get('source', '')),
    ))
Determine if a task is related to GitHub. This function currently looks into the ``schedulerId``, ``extra.tasks_for``, and ``metadata.source``. Args: task (dict): the task definition to check. Returns: bool: True if a piece of data refers to GitHub
juraj-google-style
def inputs(self) -> Mapping[str, Mapping[int, str]]:
    raise NotImplementedError()
Mapping containing the axis definition of the input tensors to provide to the model Returns: For each input: its name associated to the axes symbolic name and the axis position within the tensor
github-repos
def set_pattern_additional_cycles(self, patternnumber, value):
    _checkPatternNumber(patternnumber)
    minimalmodbus._checkInt(value, minvalue=0, maxvalue=99,
                            description='number of additional cycles')
    address = _calculateRegisterAddress('cycles', patternnumber)
    self.write_register(address, value, 0)
Set the number of additional cycles for a given pattern. Args: * patternnumber (integer): 0-7 * value (integer): 0-99
juraj-google-style
def generate_files(generator, output_filenames, max_cases=None, cycle_every_n=1):
    if outputs_exist(output_filenames):
        tf.logging.info('Skipping generator because outputs files exists at {}'
                        .format(output_filenames))
        return
    tmp_filenames = [fname + '.incomplete' for fname in output_filenames]
    num_shards = len(output_filenames)
    if num_shards > 0:
        if '-train' in output_filenames[0]:
            tag = 'train'
        elif '-dev' in output_filenames[0]:
            tag = 'eval'
        else:
            tag = 'other'
    writers = [tf.python_io.TFRecordWriter(fname) for fname in tmp_filenames]
    counter, shard = 0, 0
    for case in generator:
        if case is None:
            continue
        if counter % 100000 == 0:
            tf.logging.info('Generating case %d.' % counter)
        counter += 1
        if max_cases and counter > max_cases:
            break
        example = to_example(case)
        writers[shard].write(example.SerializeToString())
        if counter % cycle_every_n == 0:
            shard = (shard + 1) % num_shards
    for writer in writers:
        writer.close()
    for tmp_name, final_name in zip(tmp_filenames, output_filenames):
        tf.gfile.Rename(tmp_name, final_name)
    if num_shards > 0:
        if tag == 'train':
            mlperf_log.transformer_print(
                key=mlperf_log.PREPROC_NUM_TRAIN_EXAMPLES, value=counter)
        elif tag == 'eval':
            mlperf_log.transformer_print(
                key=mlperf_log.PREPROC_NUM_EVAL_EXAMPLES, value=counter)
    tf.logging.info('Generated %s Examples', counter)
Generate cases from a generator and save as TFRecord files. Generated cases are transformed to tf.Example protos and saved as TFRecords in sharded files named output_dir/output_name-00..N-of-00..M=num_shards. Args: generator: a generator yielding (string -> int/float/str list) dictionaries. output_filenames: List of output file paths. max_cases: maximum number of cases to get from the generator; if None (default), we use the generator until StopIteration is raised. cycle_every_n: how many cases from the generator to take before switching to the next shard; by default set to 1, switch every case.
codesearchnet
from functools import reduce
from operator import xor

def calc_checksum(sentence):
    if sentence.startswith('$'):
        sentence = sentence[1:]
    sentence = sentence.split('*')[0]
    return reduce(xor, map(ord, sentence))
Calculate a NMEA 0183 checksum for the given sentence. NMEA checksums are a simple XOR of all the characters in the sentence between the leading "$" symbol, and the "*" checksum separator. Args: sentence (str): NMEA 0183 formatted sentence
codesearchnet
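A quick worked example of the XOR checksum above, self-contained with its imports (the sentence `$ABC*00` is a contrived input, not a real NMEA sentence): `'A' ^ 'B' ^ 'C'` is `0x41 ^ 0x42 ^ 0x43`, which equals `0x40`.

```python
from functools import reduce
from operator import xor

def calc_checksum(sentence):
    # XOR every character between the leading '$' and the '*' separator.
    if sentence.startswith('$'):
        sentence = sentence[1:]
    sentence = sentence.split('*')[0]
    return reduce(xor, map(ord, sentence))

print('%02X' % calc_checksum('$ABC*00'))
# 40
```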
def create_gpu_capa_map(match_list, generate_csv=False,
                        filename='compute_capability'):
    gpu_capa = collections.OrderedDict()
    include = False
    gpu = ''
    cnt = 0
    mismatch_cnt = 0
    for match in match_list:
        if 'Products' in match:
            if not include:
                include = True
            continue
        elif 'www' in match:
            include = False
            break
        if include:
            if gpu:
                if gpu in gpu_capa:
                    gpu_capa[gpu].append(match)
                else:
                    gpu_capa[gpu] = [match]
                gpu = ''
                cnt += 1
                if len(list(gpu_capa.keys())) < cnt:
                    mismatch_cnt += 1
                    cnt = len(list(gpu_capa.keys()))
            else:
                gpu = match
    if generate_csv:
        f_name = filename + '.csv'
        write_csv_from_dict(f_name, gpu_capa)
    return gpu_capa
Generates a map between GPU types and corresponding compute capability. This method is used for retrieving CUDA compute capability from the web only. Args: match_list: List of all CUDA compute capability detected from the webpage. generate_csv: Boolean for creating csv file to store results. filename: String that is the name of the csv file (without `.csv` ending). Returns: OrderedDict mapping each GPU type to the CUDA compute capabilities from `match_list`, in their incoming order.
github-repos
def pop(self):
    return self._queue.popleft()
Removes and returns the oldest value from the data window (FIFO). Returns: The oldest value from the window.
github-repos
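The FIFO data-window idea above can be sketched with a bounded `collections.deque` (the `DataWindow` class here is an illustrative stand-in, not the original implementation): appends evict the oldest value once the window is full, and `pop()` removes from the old end.

```python
from collections import deque

class DataWindow:
    # A fixed-size FIFO window: maxlen makes append() drop the oldest value.
    def __init__(self, size):
        self._queue = deque(maxlen=size)

    def append(self, value):
        self._queue.append(value)

    def pop(self):
        # Removes and returns the oldest value still in the window.
        return self._queue.popleft()

w = DataWindow(size=3)
for v in [1, 2, 3, 4]:  # appending 4 evicts 1
    w.append(v)
print(w.pop())
# 2
```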
def get_qa_logit_layer(self) -> nn.Module:
    if hasattr(self, 'answer_head'):
        return self.answer_head.logit_fc[-1]
Returns the linear layer that produces question answering logits. Returns: `nn.Module`: A torch module mapping the question answering prediction hidden states or `None` if LXMERT does not have a visual answering head.
github-repos
def bulk_load_docs(es, docs):
    chunk_size = 200
    try:
        results = elasticsearch.helpers.bulk(es, docs, chunk_size=chunk_size)
        log.debug(f"Elasticsearch documents loaded: {results[0]}")
        if len(results[1]) > 0:
            log.error("Bulk load errors {}".format(results))
    except elasticsearch.ElasticsearchException as e:
        log.error("Indexing error: {}\n".format(e))
Bulk load docs Args: es: elasticsearch handle docs: Iterator of doc objects - includes index_name
juraj-google-style
def get_parameter_bounds(self, include_frozen=False):
    if include_frozen:
        return self.parameter_bounds
    return list(p for p, f in zip(self.parameter_bounds, self.unfrozen_mask) if f)
Get a list of the parameter bounds Args: include_frozen (Optional[bool]): Should the frozen parameters be included in the returned value? (default: ``False``)
codesearchnet
def wait_for_plug_update(self, plug_name, remote_state, timeout_s):
    plug = self._plugs_by_name.get(plug_name)
    if plug is None:
        raise InvalidPlugError('Cannot wait on unknown plug "%s".' % plug_name)
    if not isinstance(plug, FrontendAwareBasePlug):
        raise InvalidPlugError('Cannot wait on a plug %s that is not a '
                               'subclass of FrontendAwareBasePlug.' % plug_name)
    state, update_event = plug.asdict_with_event()
    if state != remote_state:
        return state
    if update_event.wait(timeout_s):
        return plug._asdict()
Wait for a change in the state of a frontend-aware plug. Args: plug_name: Plug name, e.g. 'openhtf.plugs.user_input.UserInput'. remote_state: The last observed state. timeout_s: Number of seconds to wait for an update. Returns: An updated state, or None if the timeout runs out. Raises: InvalidPlugError: The plug can't be waited on either because it's not in use or it's not a frontend-aware plug.
codesearchnet
def generate_index(fn, cols=None, names=None, sep=" "):
    assert cols is not None, "'cols' was not set"
    assert names is not None, "'names' was not set"
    assert len(cols) == len(names)
    bgzip, open_func = get_open_func(fn, return_fmt=True)
    data = pd.read_csv(fn, sep=sep, engine="c", usecols=cols, names=names,
                       compression="gzip" if bgzip else None)
    f = open_func(fn, "rb")
    data["seek"] = np.fromiter(_seek_generator(f), dtype=np.uint)[:-1]
    f.close()
    write_index(get_index_fn(fn), data)
    return data
Build a index for the given file. Args: fn (str): the name of the file. cols (list): a list containing column to keep (as int). names (list): the name corresponding to the column to keep (as str). sep (str): the field separator. Returns: pandas.DataFrame: the index.
juraj-google-style
def get_axis_value_discrete(self, axis):
    if self.type != EventType.POINTER_AXIS:
        raise AttributeError(_wrong_meth.format(self.type))
    return self._libinput.libinput_event_pointer_get_axis_value_discrete(
        self._handle, axis)
Return the axis value in discrete steps for a given axis event. How a value translates into a discrete step depends on the source. If the source is :attr:`~libinput.constant.PointerAxisSource.WHEEL`, the discrete value correspond to the number of physical mouse wheel clicks. If the source is :attr:`~libinput.constant.PointerAxisSource.CONTINUOUS` or :attr:`~libinput.constant.PointerAxisSource.FINGER`, the discrete value is always 0. Args: axis (~libinput.constant.PointerAxis): The axis whose value to get. Returns: float: The discrete value for the given event. Raises: AttributeError
codesearchnet
def _DeserializeResponse(self, payload):
    status_line, payload = payload.split('\n', 1)
    _, status, _ = status_line.split(' ', 2)
    parser = email_parser.Parser()
    msg = parser.parsestr(payload)
    info = dict(msg)
    info['status'] = status
    content = msg.get_payload()
    return http_wrapper.Response(info, content, self.__batch_url)
Convert string into Response and content. Args: payload: Header and body string to be deserialized. Returns: A Response object
juraj-google-style
def ReadLine(self, file_object):
    line, _, self.lines = self.lines.partition('\n')
    if not line:
        self.ReadLines(file_object)
        line, _, self.lines = self.lines.partition('\n')
    return line
Reads a line. Args: file_object (dfvfs.FileIO): file-like object. Returns: str: line read from the lines buffer.
codesearchnet
def is_scheduled(configuration, task=None):
    task = task or {}
    days = configuration.recipe.get('setup', {}).get('day', [])
    hours = [
        int(h) for h in task.get(
            'hour', configuration.recipe.get('setup', {}).get('hour', []))
    ]
    if days == [] or configuration.date.strftime('%a') in days:
        if hours == [] or configuration.hour in hours:
            return True
    return False
Wrapper for day_hour_scheduled that returns True if current time zone safe hour is in recipe schedule. Used as a helper for any cron job running projects. Keeping this logic in project helps avoid time zone detection issues and scheduling discrepancies between machines. Args: * configuration: (Configuration) Holds the recipe JSON and the time zone safe date and hour. * task: ( dictionary / JSON ) The specific task being considered for execution. Returns: - True if task is scheduled to run current hour, else False.
github-repos
def make_test_function(self):
    if self.test_function is not None:
        return self.test_function

    def step_function(model, iterator):

        def run_step(data):
            outputs = model.test_step(data)
            with ops.control_dependencies(_minimum_control_deps(outputs)):
                model._test_counter.assign_add(1)
            return outputs

        data = next(iterator)
        outputs = model.distribute_strategy.run(run_step, args=(data,))
        outputs = reduce_per_replica(outputs, self.distribute_strategy,
                                     reduction='first')
        return outputs

    if self._steps_per_execution.numpy().item() == 1:

        def test_function(iterator):
            return step_function(self, iterator)
    else:

        def test_function(iterator):
            for _ in math_ops.range(self._steps_per_execution):
                outputs = step_function(self, iterator)
            return outputs

    if not self.run_eagerly:
        test_function = def_function.function(
            test_function, experimental_relax_shapes=True)
    self.test_function = test_function
    if self._cluster_coordinator:
        self.test_function = lambda iterator: self._cluster_coordinator.schedule(
            test_function, args=(iterator,))
    return self.test_function
Creates a function that executes one step of evaluation. This method can be overridden to support custom evaluation logic. This method is called by `Model.evaluate` and `Model.test_on_batch`. Typically, this method directly controls `tf.function` and `tf.distribute.Strategy` settings, and delegates the actual evaluation logic to `Model.test_step`. This function is cached the first time `Model.evaluate` or `Model.test_on_batch` is called. The cache is cleared whenever `Model.compile` is called. Returns: Function. The function created by this method should accept a `tf.data.Iterator`, and return a `dict` containing values that will be passed to `tf.keras.Callbacks.on_test_batch_end`.
github-repos
def __init__(self, key, items): self._key = key sequence = list(items) super(Grouping, self).__init__(sequence)
Create a Grouping with a given key and a collection of members. Args: key: The key corresponding to this Grouping items: An iterable collection of the members of the group.
juraj-google-style
def to_dms(angle, style='dms'): sign = (1 if (angle >= 0) else (- 1)) angle = (abs(angle) * 3600) (minutes, seconds) = divmod(angle, 60) (degrees, minutes) = divmod(minutes, 60) if (style == 'dms'): return tuple(((sign * abs(i)) for i in (int(degrees), int(minutes), seconds))) elif (style == 'dm'): return tuple(((sign * abs(i)) for i in (int(degrees), (minutes + (seconds / 60))))) else: raise ValueError(('Unknown style type %r' % style))
Convert decimal angle to degrees, minutes and possibly seconds. Args: angle (float): Angle to convert style (str): Return fractional or whole minutes values Returns: tuple of int: Angle converted to degrees, minutes and possibly seconds Raises: ValueError: Unknown value for ``style``
codesearchnet
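The divmod cascade is the whole trick in `to_dms`: scale to seconds, peel off minutes, then degrees. A simplified standalone sketch covering only the 'dms' style:

```python
def to_dms(angle):
    # Scale the decimal angle to seconds, then split with divmod twice.
    sign = 1 if angle >= 0 else -1
    minutes, seconds = divmod(abs(angle) * 3600, 60)
    degrees, minutes = divmod(minutes, 60)
    return (sign * int(degrees), sign * int(minutes), sign * seconds)

print(to_dms(0.5))  # (0, 30, 0.0)
```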
def showbox(self, force_rerun=False): log.debug('{}: running box maker...'.format(self.id)) if not self.sphsel_path: raise ValueError('Please run sphere_selector_using_residues') boxfile = op.join(self.dock_dir, "{}_box.pdb".format(self.id)) boxscript = op.join(self.dock_dir, "{}_box.in".format(self.id)) if ssbio.utils.force_rerun(flag=force_rerun, outfile=boxfile): with open(boxscript, "w") as f: f.write("Y\n") f.write("0\n") f.write("{}\n".format(op.basename(self.sphsel_path))) f.write("1\n") f.write("{}".format(op.basename(boxfile))) cmd = "showbox < {}".format(boxscript) os.chdir(self.dock_dir) os.system(cmd) if ssbio.utils.is_non_zero_file(boxfile): self.box_path = boxfile log.debug('{}: successful box creation'.format(self.box_path)) else: log.critical('{}: showbox failed to run on selected spheres file'.format(self.sphsel_path))
Create the dummy PDB box around the selected spheres. Args: force_rerun (bool): If method should be rerun even if output file exists
juraj-google-style
def process_result_value(self, value, dialect): masks = list() if value: for e in enums.CryptographicUsageMask: if (e.value & value): masks.append(e) return masks
Returns a new list of enums.CryptographicUsageMask Enums. This converts the integer value into the list of enums. Args: value(int): The integer value stored in the database that is used to create the list of enums.CryptographicUsageMask Enums. dialect(string): SQL dialect
codesearchnet
def _set_mode(self, discover_mode, connect_mode): payload = struct.pack('<BB', discover_mode, connect_mode) response = self._send_command(6, 1, payload) (result,) = unpack('<H', response.payload) if (result != 0): return (False, {'reason': 'Error code from BLED112 setting mode', 'code': result}) return (True, None)
Set the mode of the BLED112, used to enable and disable advertising. To enable advertising, use 4, 2. To disable advertising use 0, 0. Args: discover_mode (int): The discoverability mode, 0 for off, 4 for on (user data) connect_mode (int): The connectability mode, 0 for off, 2 for undirected connectable
codesearchnet
def EnqueueBreakpointUpdate(self, breakpoint): with self._transmission_thread_startup_lock: if self._transmission_thread is None: self._transmission_thread = threading.Thread( target=self._TransmissionThreadProc) self._transmission_thread.name = 'Cloud Debugger transmission thread' self._transmission_thread.daemon = True self._transmission_thread.start() self._transmission_queue.append((breakpoint, 0)) self._new_updates.set()
Asynchronously updates the specified breakpoint on the backend. This function returns immediately. The worker thread is actually doing all the work. The worker thread is responsible for retrying the transmission in case of transient errors. Args: breakpoint: breakpoint in either final or non-final state.
juraj-google-style
def create_primes(threshold): if (threshold == 2): return [2] elif (threshold < 2): return [] numbers = list(range(3, (threshold + 1), 2)) root_of_threshold = (threshold ** 0.5) half = int((((threshold + 1) / 2) - 1)) idx = 0 counter = 3 while (counter <= root_of_threshold): if numbers[idx]: idy = int((((counter * counter) - 3) / 2)) numbers[idy] = 0 while (idy < half): numbers[idy] = 0 idy += counter idx += 1 counter = ((2 * idx) + 3) return ([2] + [number for number in numbers if number])
Generate prime values using sieve of Eratosthenes method. Args: threshold (int): The upper bound for the size of the prime values. Returns (List[int]): All primes from 2 and up to ``threshold``.
codesearchnet
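For comparison, a compact boolean-array Sieve of Eratosthenes (a sketch; the original instead stores the odd candidates themselves and zeroes out composites in place):

```python
def sieve(n):
    # Mark composites in a flag array; surviving indices are prime.
    if n < 2:
        return []
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            for m in range(p * p, n + 1, p):  # start at p*p: smaller multiples already marked
                flags[m] = False
    return [i for i, ok in enumerate(flags) if ok]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```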
def convert_item_to_command_line_arg(self, action, key, value): args = [] if (action is None): command_line_key = self.get_command_line_key_for_unknown_config_file_setting(key) else: command_line_key = action.option_strings[(- 1)] if ((action is not None) and isinstance(action, ACTION_TYPES_THAT_DONT_NEED_A_VALUE)): if (value.lower() in ('true', 'yes', '1')): args.append(command_line_key) elif (value.lower() in ('false', 'no', '0')): pass else: self.error(("Unexpected value for %s: '%s'. Expecting 'true', 'false', 'yes', 'no', '1' or '0'" % (key, value))) elif isinstance(value, list): if ((action is None) or isinstance(action, argparse._AppendAction)): for list_elem in value: args.append(command_line_key) args.append(str(list_elem)) elif ((isinstance(action, argparse._StoreAction) and (action.nargs in ('+', '*'))) or (isinstance(action.nargs, int) and (action.nargs > 1))): args.append(command_line_key) for list_elem in value: args.append(str(list_elem)) else: self.error(("%s can't be set to a list '%s' unless its action type is changed to 'append' or nargs is set to '*', '+', or > 1" % (key, value))) elif isinstance(value, str): args.append(command_line_key) args.append(value) else: raise ValueError(('Unexpected value type %s for value: %s' % (type(value), value))) return args
Converts a config file or env var key + value to a list of commandline args to append to the commandline. Args: action: The argparse Action object for this setting, or None if this config file setting doesn't correspond to any defined configargparse arg. key: string (config file key or env var name) value: parsed value of type string or list
codesearchnet
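The true/false spellings accepted for flag-like actions above can be factored into a small helper (the `parse_bool_flag` name is hypothetical):

```python
TRUE_VALUES = ("true", "yes", "1")
FALSE_VALUES = ("false", "no", "0")

def parse_bool_flag(value):
    # Mirrors the spellings accepted in the config-to-arg conversion above.
    v = value.lower()
    if v in TRUE_VALUES:
        return True
    if v in FALSE_VALUES:
        return False
    raise ValueError("Expected a true/false spelling, got %r" % value)
```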
def build_ring_all_reduce(input_tensors, num_workers, num_subchunks, gpu_perm, red_op, un_op=None): if len(input_tensors) < 2: raise ValueError('input_tensors must be length 2 or longer') input_tensors, shape = _flatten_tensors(input_tensors) devices = [t.device for t in input_tensors] pred_by_s_d, rank_by_s_d = _ring_permutations(num_workers, num_subchunks, gpu_perm) chunks_by_dev, pad_len = _build_ring_gather(input_tensors, devices, num_subchunks, pred_by_s_d, rank_by_s_d, red_op) if un_op: chunks_by_dev = _apply_unary_to_chunks(un_op, chunks_by_dev) output_tensors = _build_ring_scatter(pred_by_s_d, rank_by_s_d, chunks_by_dev) if pad_len > 0: output_tensors = _strip_padding(output_tensors, pad_len) if len(shape) != 1: output_tensors = _reshape_tensors(output_tensors, shape) return output_tensors
Construct a subgraph performing a ring-style all-reduce of input_tensors. Args: input_tensors: a list of `tf.Tensor` objects, which must all have the same shape and type. num_workers: number of worker tasks spanned by input_tensors. num_subchunks: number of subchunks each device should process in one tick. gpu_perm: a list of ints giving a ring-wise rank ordering of GPUs at each worker. All workers must have the same number of GPUs with the same rank ordering. If NVLINK is available, this should be a ring order supported by NVLINK edges. red_op: a binary operator for elementwise reduction. un_op: an optional unary operator to apply to fully reduced values. Raises: ValueError: empty input_tensors or they don't all have same size. Returns: a list of `tf.Tensor` identical sum-reductions of input_tensors.
github-repos
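The ring algorithm above can be illustrated without TensorFlow. A pure-Python, single-process simulation of the same two phases — reduce-scatter followed by all-gather — using addition as `red_op` (a sketch, assuming the element count divides evenly by the device count):

```python
def ring_allreduce(tensors):
    # tensors: one equal-length list of numbers per "device".
    n = len(tensors)
    size = len(tensors[0])
    assert size % n == 0, "sketch assumes size divisible by device count"
    c = size // n
    buf = [list(t) for t in tensors]

    def chunk(d, k):
        return buf[d][k * c:(k + 1) * c]

    def set_chunk(d, k, vals):
        buf[d][k * c:(k + 1) * c] = vals

    # Reduce-scatter: after n-1 steps, device d owns the full sum of chunk (d+1) % n.
    for s in range(n - 1):
        sends = [(d, (d - s) % n, chunk(d, (d - s) % n)) for d in range(n)]
        for d, k, vals in sends:
            dst = (d + 1) % n
            set_chunk(dst, k, [a + b for a, b in zip(chunk(dst, k), vals)])

    # All-gather: circulate the fully reduced chunks around the ring.
    for s in range(n - 1):
        sends = [(d, (d + 1 - s) % n, chunk(d, (d + 1 - s) % n)) for d in range(n)]
        for d, k, vals in sends:
            set_chunk((d + 1) % n, k, vals)
    return buf

print(ring_allreduce([[1, 2], [3, 4]]))  # [[4, 6], [4, 6]]
```

Each step a device forwards exactly the chunk it accumulated the step before, which is what makes the schedule bandwidth-optimal.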
def close(self, file_des): file_handle = self.filesystem.get_open_file(file_des) file_handle.close()
Close a file descriptor. Args: file_des: An integer file descriptor for the file object requested. Raises: OSError: bad file descriptor. TypeError: if file descriptor is not an integer.
codesearchnet
def delete_case(self, case_id=None, institute_id=None, display_name=None): query = {} if case_id: query['_id'] = case_id LOG.info('Deleting case %s', case_id) else: if (not (institute_id and display_name)): raise ValueError('Have to provide both institute_id and display_name') LOG.info('Deleting case %s institute %s', display_name, institute_id) query['owner'] = institute_id query['display_name'] = display_name result = self.case_collection.delete_one(query) return result
Delete a single case from database Args: institute_id(str) case_id(str) Returns: case_obj(dict): The case that was deleted
codesearchnet
def WriteEventMACBGroup(self, event_macb_group): output_values = self._GetOutputValues(event_macb_group[0]) timestamp_descriptions = [ event.timestamp_desc for event in event_macb_group] output_values[3] = ( self._output_mediator.GetMACBRepresentationFromDescriptions( timestamp_descriptions)) output_values[6] = '; '.join(timestamp_descriptions) self._WriteOutputValues(output_values)
Writes an event MACB group to the output. Args: event_macb_group (list[EventObject]): event MACB group.
juraj-google-style
def set_napp(self, user, napp, version=None): self.user = user self.napp = napp self.version = (version or 'latest')
Set info about NApp. Args: user (str): NApps Server username. napp (str): NApp name. version (str): NApp version.
codesearchnet
def __init__(self, filenames, record_bytes, header_bytes=None, footer_bytes=None, buffer_size=None, compression_type=None, name=None): self._filenames = filenames self._record_bytes = ops.convert_to_tensor(record_bytes, dtype=dtypes.int64, name='record_bytes') self._header_bytes = convert.optional_param_to_tensor('header_bytes', header_bytes) self._footer_bytes = convert.optional_param_to_tensor('footer_bytes', footer_bytes) self._buffer_size = convert.optional_param_to_tensor('buffer_size', buffer_size, _DEFAULT_READER_BUFFER_SIZE_BYTES) self._compression_type = convert.optional_param_to_tensor('compression_type', compression_type, argument_default='', argument_dtype=dtypes.string) self._name = name variant_tensor = gen_dataset_ops.fixed_length_record_dataset_v2(self._filenames, self._header_bytes, self._record_bytes, self._footer_bytes, self._buffer_size, self._compression_type, metadata=self._metadata.SerializeToString()) super(_FixedLengthRecordDataset, self).__init__(variant_tensor)
Creates a `FixedLengthRecordDataset`. Args: filenames: A `tf.string` tensor containing one or more filenames. record_bytes: A `tf.int64` scalar representing the number of bytes in each record. header_bytes: (Optional.) A `tf.int64` scalar representing the number of bytes to skip at the start of a file. footer_bytes: (Optional.) A `tf.int64` scalar representing the number of bytes to ignore at the end of a file. buffer_size: (Optional.) A `tf.int64` scalar representing the number of bytes to buffer when reading. compression_type: (Optional.) A `tf.string` scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. name: (Optional.) A name for the tf.data operation.
github-repos
def _assert_at_most_n_true(predicates, n, msg): preds_c = array_ops_stack.stack(predicates, name='preds_c') num_true_conditions = math_ops.reduce_sum(math_ops.cast(preds_c, dtypes.int32), name='num_true_conds') condition = math_ops.less_equal(num_true_conditions, constant_op.constant(n, name='n_true_conds')) preds_names = ', '.join((getattr(p, 'name', '?') for p in predicates)) error_msg = ['%s: more than %d conditions (%s) evaluated as True:' % (msg, n, preds_names), preds_c] return control_flow_assert.Assert(condition, data=error_msg, summarize=len(predicates))
Returns an Assert op that checks that at most n predicates are True. Args: predicates: list of bool scalar tensors. n: maximum number of true predicates allowed. msg: Error message.
github-repos
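The graph ops above (stack, cast-and-sum, less_equal) compute a plain count-and-compare; an eager pure-Python equivalent:

```python
def at_most_n_true(predicates, n):
    # Count true predicates and compare against the allowed maximum.
    return sum(int(bool(p)) for p in predicates) <= n
```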
def GetEntries(self, parser_mediator, match=None, **unused_kwargs): version = match.get('LastAttemptSystemVersion', 'N/A') pending = match.get('LastUpdatesAvailable', None) event_data = plist_event.PlistTimeEventData() event_data.desc = 'Last MacOS {0:s} full update.'.format(version) event_data.key = '' event_data.root = '/' datetime_value = match.get('LastFullSuccessfulDate', None) if datetime_value: event = time_events.PythonDatetimeEvent( datetime_value, definitions.TIME_DESCRIPTION_WRITTEN) parser_mediator.ProduceEventWithEventData(event, event_data) datetime_value = match.get('LastSuccessfulDate', None) if datetime_value and pending: software = [] for update in match.get('RecommendedUpdates', []): identifier = update.get('Identifier', '<IDENTIFIER>') product_key = update.get('Product Key', '<PRODUCT_KEY>') software.append('{0:s}({1:s})'.format(identifier, product_key)) if not software: return software = ','.join(software) event_data.desc = ( 'Last Mac OS {0!s} partially update, pending {1!s}: ' '{2:s}.').format(version, pending, software) event = time_events.PythonDatetimeEvent( datetime_value, definitions.TIME_DESCRIPTION_WRITTEN) parser_mediator.ProduceEventWithEventData(event, event_data)
Extracts relevant MacOS update entries. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. match (Optional[dict[str: object]]): keys extracted from PLIST_KEYS.
juraj-google-style
def __init__(self, data_file, vocab_data_file): def reading_function(file_name): for root in self.ROOTS: file_path = os.path.join(root, file_name) if os.path.exists(file_path): break file_path = None assert file_path is not None, ("Couldn't locate %s in %r" % (file_name, self.ROOTS)) with open(file_path, mode="rb") as fp: return list(fp.read().decode().replace("\n", self.CHAR_EOS)) self._vocab_dict = {} self._inv_vocab_dict = {} token_list = reading_function(vocab_data_file) self.vocab_size = 0 for token in self.DEFAULT_START_TOKENS + token_list: if token not in self._vocab_dict: self._vocab_dict[token] = self.vocab_size self._inv_vocab_dict[self.vocab_size] = token self.vocab_size += 1 raw_data = reading_function(data_file) self.flat_data = np.array(self.tokenize(raw_data), dtype=np.int32) self.num_tokens = self.flat_data.shape[0]
Creates a TokenDataSource instance. Args: data_file: file object containing text data to be tokenized. vocab_data_file: file object containing text data used to initialize the vocabulary.
juraj-google-style
def add_edge_bias(x, filter_size): x_shape = common_layers.shape_list(x) if ((filter_size[0] == 1) and (filter_size[1] == 1)): return x a = (filter_size[0] - 1) 
Pad x and concatenates an edge bias across the depth of x. The edge bias can be thought of as a binary feature which is unity when the filter is being convolved over an edge and zero otherwise. Args: x: Input tensor, shape (NHWC) filter_size: filter_size to determine padding. Returns: x_pad: Input tensor, shape (NHW(c+1))
codesearchnet
def Add(self, other): if len(self.data) != len(other.data): raise RuntimeError("Can only add series of identical lengths.") for i in range(len(self.data)): if self.data[i][1] != other.data[i][1]: raise RuntimeError("Timestamp mismatch.") if self.data[i][0] is None and other.data[i][0] is None: continue self.data[i][0] = (self.data[i][0] or 0) + (other.data[i][0] or 0)
Add other to self pointwise. Requires that both self and other are of the same length, and contain identical timestamps. Typically this means that Normalize has been called on both with identical time parameters. Args: other: The sequence to add to self. Raises: RuntimeError: other does not contain the same timestamps as self.
juraj-google-style
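The pointwise merge in `Add` can be sketched functionally (hypothetical `add_series` helper, using `[value, timestamp]` pairs as in the original `self.data`; `None` means a missing sample):

```python
def add_series(a, b):
    # a, b: lists of [value, timestamp] pairs with aligned timestamps.
    if len(a) != len(b):
        raise RuntimeError("Can only add series of identical lengths.")
    out = []
    for (va, ta), (vb, tb) in zip(a, b):
        if ta != tb:
            raise RuntimeError("Timestamp mismatch.")
        if va is None and vb is None:
            out.append([None, ta])  # both missing stays missing
        else:
            out.append([(va or 0) + (vb or 0), ta])  # missing treated as 0
    return out
```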
def MakeCdfFromPmf(pmf, name=None): if name is None: name = pmf.name return MakeCdfFromItems(pmf.Items(), name)
Makes a CDF from a Pmf object. Args: pmf: Pmf.Pmf object name: string name for the data. Returns: Cdf object
codesearchnet
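The construction behind `MakeCdfFromItems` can be sketched directly: sort the PMF items by value and take a running sum of the probabilities (hypothetical `make_cdf` helper):

```python
def make_cdf(pmf_items):
    # Cumulative sum of probabilities over sorted values.
    total = 0.0
    pairs = []
    for x, p in sorted(pmf_items):
        total += p
        pairs.append((x, total))
    return pairs

print(make_cdf([(1, 0.25), (2, 0.25), (3, 0.5)]))  # [(1, 0.25), (2, 0.5), (3, 1.0)]
```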
def save(self, items): rows = [] indx = self.indx size = 0 tick = s_common.now() for item in items: byts = s_msgpack.en(item) size += len(byts) lkey = s_common.int64en(indx) indx += 1 rows.append((lkey, byts)) self.slab.putmulti(rows, append=True, db=self.db) took = (s_common.now() - tick) origindx = self.indx self.indx = indx return origindx
Save a series of items to a sequence. Args: items (tuple): The series of items to save into the sequence. Returns: The index of the first item
codesearchnet
def emit(self, record): if (record.levelno < logging.getLevelName(self.min_level)): return evt = LogEvent() evt.level = record.levelname evt.levelno = record.levelno evt.timestamp = datetime.fromtimestamp(record.created) evt.message = record.message evt.filename = record.filename evt.lineno = record.lineno evt.module = record.module evt.funcname = record.funcName evt.pathname = record.pathname evt.process_id = record.process if (record.levelno >= 40): evt.stacktrace = traceback.format_exc() try: db.session.add(evt) db.session.commit() except Exception: db.session.rollback()
Persist a record into the database Args: record (`logging.Record`): The logging.Record object to store Returns: `None`
codesearchnet
def get_structure_from_prev_run(vasprun, outcar=None, sym_prec=0.1, international_monoclinic=True): structure = vasprun.final_structure site_properties = {} if vasprun.is_spin: if (outcar and outcar.magnetization): site_properties.update({'magmom': [i['tot'] for i in outcar.magnetization]}) else: site_properties.update({'magmom': vasprun.parameters['MAGMOM']}) if vasprun.parameters.get('LDAU', False): for k in ('LDAUU', 'LDAUJ', 'LDAUL'): vals = vasprun.incar[k] m = {} l = [] s = 0 for site in structure: if (site.specie.symbol not in m): m[site.specie.symbol] = vals[s] s += 1 l.append(m[site.specie.symbol]) if (len(l) == len(structure)): site_properties.update({k.lower(): l}) else: raise ValueError('length of list {} not the same asstructure'.format(l)) structure = structure.copy(site_properties=site_properties) if sym_prec: sym_finder = SpacegroupAnalyzer(structure, symprec=sym_prec) new_structure = sym_finder.get_primitive_standard_structure(international_monoclinic=international_monoclinic) vpa_old = (structure.volume / structure.num_sites) vpa_new = (new_structure.volume / new_structure.num_sites) if ((abs((vpa_old - vpa_new)) / vpa_old) > 0.02): raise ValueError('Standardizing cell failed! VPA old: {}, VPA new: {}'.format(vpa_old, vpa_new)) sm = StructureMatcher() if (not sm.fit(structure, new_structure)): raise ValueError("Standardizing cell failed! Old structure doesn't match new.") structure = new_structure return structure
Process structure from previous run. Args: vasprun (Vasprun): Vasprun that contains the final structure from previous run. outcar (Outcar): Outcar that contains the magnetization info from previous run. sym_prec (float): Tolerance for symmetry finding for standardization. If no standardization is desired, set to 0 or False. international_monoclinic (bool): Whether to use international convention (vs Curtarolo) for monoclinic. Defaults to True. Returns: Returns the magmom-decorated structure that can be passed to get Vasp input files, e.g. get_kpoints.
codesearchnet
def is_in_path(program): if (sys.version_info.major == 2): path = os.getenv('PATH') if (os.name == 'nt'): path = path.split(';') else: path = path.split(':') else: path = os.get_exec_path() for i in path: if os.path.isdir(i): if (program in os.listdir(i)): return True return False
Check if a program is in the system ``PATH``. Checks if a given program is in the user's ``PATH`` or not. Args: program (str): The program to try to find in ``PATH``. Returns: bool: Is the program in ``PATH``?
codesearchnet
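The version-specific PATH splitting above can be avoided with `os.pathsep`, which already accounts for the platform; a portable sketch (and note the standard library's `shutil.which` covers the same need, additionally checking the executable bit):

```python
import os
import shutil

def is_in_path(program):
    # Split PATH on os.pathsep instead of branching on os.name.
    for d in os.getenv("PATH", "").split(os.pathsep):
        try:
            if os.path.isdir(d) and program in os.listdir(d):
                return True
        except OSError:
            continue  # unreadable PATH entries are skipped
    return False

print(shutil.which("definitely-not-a-real-program-xyz"))  # None
```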
def _compute_edge_nodes(nodes, degree): (dimension, _) = np.shape(nodes) nodes1 = np.empty((dimension, (degree + 1)), order='F') nodes2 = np.empty((dimension, (degree + 1)), order='F') nodes3 = np.empty((dimension, (degree + 1)), order='F') curr2 = degree curr3 = (- 1) for i in six.moves.xrange((degree + 1)): nodes1[:, i] = nodes[:, i] nodes2[:, i] = nodes[:, curr2] nodes3[:, i] = nodes[:, curr3] curr2 += (degree - i) curr3 -= (i + 2) return (nodes1, nodes2, nodes3)
Compute the nodes of each edges of a surface. .. note:: There is also a Fortran implementation of this function, which will be used if it can be built. Args: nodes (numpy.ndarray): Control point nodes that define the surface. degree (int): The degree of the surface define by ``nodes``. Returns: Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray]: The nodes in the edges of the surface.
codesearchnet
def item_status(self, **kwargs): path = self._get_id_path('item_status') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Check to see if a movie id is already added to a list. Args: movie_id: The id of the movie. Returns: A dict respresentation of the JSON returned from the API.
juraj-google-style
def __init__(self, channel): self.AuthEnable = channel.unary_unary( '/etcdserverpb.Auth/AuthEnable', request_serializer=rpc__pb2.AuthEnableRequest.SerializeToString, response_deserializer=rpc__pb2.AuthEnableResponse.FromString, ) self.AuthDisable = channel.unary_unary( '/etcdserverpb.Auth/AuthDisable', request_serializer=rpc__pb2.AuthDisableRequest.SerializeToString, response_deserializer=rpc__pb2.AuthDisableResponse.FromString, ) self.Authenticate = channel.unary_unary( '/etcdserverpb.Auth/Authenticate', request_serializer=rpc__pb2.AuthenticateRequest.SerializeToString, response_deserializer=rpc__pb2.AuthenticateResponse.FromString, ) self.UserAdd = channel.unary_unary( '/etcdserverpb.Auth/UserAdd', request_serializer=rpc__pb2.AuthUserAddRequest.SerializeToString, response_deserializer=rpc__pb2.AuthUserAddResponse.FromString, ) self.UserGet = channel.unary_unary( '/etcdserverpb.Auth/UserGet', request_serializer=rpc__pb2.AuthUserGetRequest.SerializeToString, response_deserializer=rpc__pb2.AuthUserGetResponse.FromString, ) self.UserList = channel.unary_unary( '/etcdserverpb.Auth/UserList', request_serializer=rpc__pb2.AuthUserListRequest.SerializeToString, response_deserializer=rpc__pb2.AuthUserListResponse.FromString, ) self.UserDelete = channel.unary_unary( '/etcdserverpb.Auth/UserDelete', request_serializer=rpc__pb2.AuthUserDeleteRequest.SerializeToString, response_deserializer=rpc__pb2.AuthUserDeleteResponse.FromString, ) self.UserChangePassword = channel.unary_unary( '/etcdserverpb.Auth/UserChangePassword', request_serializer=rpc__pb2.AuthUserChangePasswordRequest.SerializeToString, response_deserializer=rpc__pb2.AuthUserChangePasswordResponse.FromString, ) self.UserGrantRole = channel.unary_unary( '/etcdserverpb.Auth/UserGrantRole', request_serializer=rpc__pb2.AuthUserGrantRoleRequest.SerializeToString, response_deserializer=rpc__pb2.AuthUserGrantRoleResponse.FromString, ) self.UserRevokeRole = channel.unary_unary( '/etcdserverpb.Auth/UserRevokeRole', 
request_serializer=rpc__pb2.AuthUserRevokeRoleRequest.SerializeToString, response_deserializer=rpc__pb2.AuthUserRevokeRoleResponse.FromString, ) self.RoleAdd = channel.unary_unary( '/etcdserverpb.Auth/RoleAdd', request_serializer=rpc__pb2.AuthRoleAddRequest.SerializeToString, response_deserializer=rpc__pb2.AuthRoleAddResponse.FromString, ) self.RoleGet = channel.unary_unary( '/etcdserverpb.Auth/RoleGet', request_serializer=rpc__pb2.AuthRoleGetRequest.SerializeToString, response_deserializer=rpc__pb2.AuthRoleGetResponse.FromString, ) self.RoleList = channel.unary_unary( '/etcdserverpb.Auth/RoleList', request_serializer=rpc__pb2.AuthRoleListRequest.SerializeToString, response_deserializer=rpc__pb2.AuthRoleListResponse.FromString, ) self.RoleDelete = channel.unary_unary( '/etcdserverpb.Auth/RoleDelete', request_serializer=rpc__pb2.AuthRoleDeleteRequest.SerializeToString, response_deserializer=rpc__pb2.AuthRoleDeleteResponse.FromString, ) self.RoleGrantPermission = channel.unary_unary( '/etcdserverpb.Auth/RoleGrantPermission', request_serializer=rpc__pb2.AuthRoleGrantPermissionRequest.SerializeToString, response_deserializer=rpc__pb2.AuthRoleGrantPermissionResponse.FromString, ) self.RoleRevokePermission = channel.unary_unary( '/etcdserverpb.Auth/RoleRevokePermission', request_serializer=rpc__pb2.AuthRoleRevokePermissionRequest.SerializeToString, response_deserializer=rpc__pb2.AuthRoleRevokePermissionResponse.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def make_train_function(self): if self.train_function is not None: return self.train_function def step_function(model, iterator): def run_step(data): outputs = model.train_step(data) with ops.control_dependencies(_minimum_control_deps(outputs)): model._train_counter.assign_add(1) return outputs data = next(iterator) outputs = model.distribute_strategy.run(run_step, args=(data,)) outputs = reduce_per_replica(outputs, self.distribute_strategy, reduction='first') write_scalar_summaries(outputs, step=model._train_counter) return outputs if self._steps_per_execution.numpy().item() == 1: def train_function(iterator): return step_function(self, iterator) else: def train_function(iterator): for _ in math_ops.range(self._steps_per_execution): outputs = step_function(self, iterator) return outputs if not self.run_eagerly: train_function = def_function.function(train_function, experimental_relax_shapes=True) self.train_tf_function = train_function self.train_function = train_function if self._cluster_coordinator: self.train_function = lambda iterator: self._cluster_coordinator.schedule(train_function, args=(iterator,)) return self.train_function
Creates a function that executes one step of training. This method can be overridden to support custom training logic. This method is called by `Model.fit` and `Model.train_on_batch`. Typically, this method directly controls `tf.function` and `tf.distribute.Strategy` settings, and delegates the actual training logic to `Model.train_step`. This function is cached the first time `Model.fit` or `Model.train_on_batch` is called. The cache is cleared whenever `Model.compile` is called. Returns: Function. The function created by this method should accept a `tf.data.Iterator`, and return a `dict` containing values that will be passed to `tf.keras.Callbacks.on_train_batch_end`, such as `{'loss': 0.2, 'accuracy': 0.7}`.
github-repos
def sendResponse(self, message, UUID, routing_key): self.sendMessage( exchange=self.output_exchange, routing_key=routing_key, message=message, UUID=UUID )
Send `message` to ``self.output_exchange`` with the given ``routing_key`` and ``self.content_type``, in ``delivery_mode=2``. Args: message (str): message which will be sent UUID: unique identification of message routing_key (str): which routing key to use to send message back
juraj-google-style
def start_trial(self, trial, checkpoint=None): self._commit_resources(trial.resources) try: self._start_trial(trial, checkpoint) except Exception as e: logger.exception('Error starting runner for Trial %s', str(trial)) error_msg = traceback.format_exc() time.sleep(2) self._stop_trial(trial, error=True, error_msg=error_msg) if isinstance(e, AbortTrialExecution): return try: trial.clear_checkpoint() logger.info('Trying to start runner for Trial %s without checkpoint.', str(trial)) self._start_trial(trial) except Exception: logger.exception('Error starting runner for Trial %s, aborting!', str(trial)) error_msg = traceback.format_exc() self._stop_trial(trial, error=True, error_msg=error_msg)
Starts the trial. Will not return resources if trial repeatedly fails on start. Args: trial (Trial): Trial to be started. checkpoint (Checkpoint): A Python object or path storing the state of trial.
codesearchnet
def _select_forward_and_backward_functions(self, args, possible_gradient_type, executing_eagerly): if executing_eagerly: input_tangents = forwardprop_util.pack_tangents(args) else: input_tangents = forwardprop_util.TangentInfo() need_gradients_for_jvps = record.should_record_backprop(input_tangents.tangents) cache_key = (need_gradients_for_jvps, input_tangents.indices) if possible_gradient_type == gradients_util.POSSIBLE_GRADIENT_TYPES_FIRST_ORDER: if input_tangents.indices or executing_eagerly: functions = self._first_order_tape_functions.get(cache_key, None) if functions is None: functions = _FirstOrderTapeGradientFunctions(self._func_graph, self._attrs, self._garbage_collector, forwardprop_input_indices=input_tangents.indices, delayed_rewrite_functions=self._delayed_rewrite_functions, need_gradients_for_jvps=need_gradients_for_jvps) self._first_order_tape_functions[cache_key] = functions return _ForwardBackwardCall(functions, args, input_tangents.tangents, tape_watching=True) else: return _ForwardBackwardCall(self._delayed_rewrite_functions, args, input_tangents.tangents, tape_watching=True) elif possible_gradient_type == gradients_util.POSSIBLE_GRADIENT_TYPES_HIGHER_ORDER: functions = self._higher_order_tape_functions.get(cache_key, None) if functions is None: functions = _HigherOrderTapeGradientFunctions(self._func_graph, self._attrs, self._garbage_collector, forwardprop_input_indices=input_tangents.indices, delayed_rewrite_functions=self._delayed_rewrite_functions, need_gradients_for_jvps=need_gradients_for_jvps) self._higher_order_tape_functions[cache_key] = functions return _ForwardBackwardCall(functions, args, input_tangents.tangents, tape_watching=True) return _ForwardBackwardCall(self._delayed_rewrite_functions, args, input_tangents.tangents, tape_watching=False)
Selects forward and backward functions based on the calling context. The forward function computes the "real" function outputs, `self._outputs`, and any extra values needed by the corresponding backward function. Args: args: A flat list of Tensors with all of the inputs to the forward function (including user-specified and captured inputs). possible_gradient_type: One of gradients_util.POSSIBLE_GRADIENT_TYPES_*. executing_eagerly: Boolean, the value of context.executing_eagerly(). Returns: An object with a `forward` method returning a tuple of (forward_function : AtomicFunction, augmented_arguments : List), and a corresponding `record` method which takes outputs from the forward function and records the operation. forward_function should be called with augmented_arguments.
github-repos
def MessageToRepr(msg, multiline=False, **kwargs): indent = kwargs.get('indent', 0) def IndentKwargs(kwargs): kwargs = dict(kwargs) kwargs['indent'] = (kwargs.get('indent', 0) + 4) return kwargs if isinstance(msg, list): s = '[' for item in msg: if multiline: s += ('\n' + (' ' * (indent + 4))) s += (MessageToRepr(item, multiline=multiline, **IndentKwargs(kwargs)) + ',') if multiline: s += ('\n' + (' ' * indent)) s += ']' return s if isinstance(msg, messages.Message): s = (type(msg).__name__ + '(') if (not kwargs.get('no_modules')): s = ((msg.__module__ + '.') + s) names = sorted([field.name for field in msg.all_fields()]) for name in names: field = msg.field_by_name(name) if multiline: s += ('\n' + (' ' * (indent + 4))) value = getattr(msg, field.name) s += (((field.name + '=') + MessageToRepr(value, multiline=multiline, **IndentKwargs(kwargs))) + ',') if multiline: s += ('\n' + (' ' * indent)) s += ')' return s if isinstance(msg, six.string_types): if (kwargs.get('shortstrings') and (len(msg) > 100)): msg = msg[:100] if isinstance(msg, datetime.datetime): class SpecialTZInfo(datetime.tzinfo): def __init__(self, offset): super(SpecialTZInfo, self).__init__() self.offset = offset def __repr__(self): s = (('TimeZoneOffset(' + repr(self.offset)) + ')') if (not kwargs.get('no_modules')): s = ('apitools.base.protorpclite.util.' + s) return s msg = datetime.datetime(msg.year, msg.month, msg.day, msg.hour, msg.minute, msg.second, msg.microsecond, SpecialTZInfo(msg.tzinfo.utcoffset(0))) return repr(msg)
Return a repr-style string for a protorpc message. protorpc.Message.__repr__ does not return anything that could be considered python code. Adding this function lets us print a protorpc message in such a way that it could be pasted into code later, and used to compare against other things. Args: msg: protorpc.Message, the message to be repr'd. multiline: bool, True if the returned string should have each field assignment on its own line. **kwargs: {str:str}, Additional flags for how to format the string. Known **kwargs: shortstrings: bool, True if all string values should be truncated at 100 characters, since when mocking the contents typically don't matter except for IDs, and IDs are usually less than 100 characters. no_modules: bool, True if the long module name should not be printed with each type. Returns: str, A string of valid python (assuming the right imports have been made) that recreates the message passed into this function.
codesearchnet
def _make_spec_file(self): if issubclass(BdistRPMCommand, object): spec_file = super(BdistRPMCommand, self)._make_spec_file() else: spec_file = bdist_rpm._make_spec_file(self) if (sys.version_info[0] < 3): python_package = 'python' else: python_package = 'python3' description = [] summary = '' in_description = False python_spec_file = [] for line in spec_file: if line.startswith('Summary: '): summary = line elif line.startswith('BuildRequires: '): line = 'BuildRequires: {0:s}-setuptools'.format(python_package) elif line.startswith('Requires: '): if (python_package == 'python3'): line = line.replace('python', 'python3') elif line.startswith('%description'): in_description = True elif line.startswith('%files'): line = '%files -f INSTALLED_FILES -n {0:s}-%{{name}}'.format(python_package) elif line.startswith('%prep'): in_description = False python_spec_file.append('%package -n {0:s}-%{{name}}'.format(python_package)) python_spec_file.append('{0:s}'.format(summary)) python_spec_file.append('') python_spec_file.append('%description -n {0:s}-%{{name}}'.format(python_package)) python_spec_file.extend(description) elif in_description: if ((not description) and (not line)): continue description.append(line) python_spec_file.append(line) return python_spec_file
Generates the text of an RPM spec file. Returns: A list of strings containing the lines of text.
codesearchnet
def observation_spec(self): obs_spec = named_array.NamedDict({'action_result': (0,), 'alerts': (0,), 'available_actions': (0,), 'build_queue': (0, len(UnitLayer)), 'cargo': (0, len(UnitLayer)), 'cargo_slots_available': (1,), 'control_groups': (10, 2), 'game_loop': (1,), 'last_actions': (0,), 'multi_select': (0, len(UnitLayer)), 'player': (len(Player),), 'score_cumulative': (len(ScoreCumulative),), 'score_by_category': (len(ScoreByCategory), len(ScoreCategories)), 'score_by_vital': (len(ScoreByVital), len(ScoreVitals)), 'single_select': (0, len(UnitLayer))}) aif = self._agent_interface_format if aif.feature_dimensions: obs_spec['feature_screen'] = (len(SCREEN_FEATURES), aif.feature_dimensions.screen.y, aif.feature_dimensions.screen.x) obs_spec['feature_minimap'] = (len(MINIMAP_FEATURES), aif.feature_dimensions.minimap.y, aif.feature_dimensions.minimap.x) if aif.rgb_dimensions: obs_spec['rgb_screen'] = (aif.rgb_dimensions.screen.y, aif.rgb_dimensions.screen.x, 3) obs_spec['rgb_minimap'] = (aif.rgb_dimensions.minimap.y, aif.rgb_dimensions.minimap.x, 3) if aif.use_feature_units: obs_spec['feature_units'] = (0, len(FeatureUnit)) if aif.use_raw_units: obs_spec['raw_units'] = (0, len(FeatureUnit)) if aif.use_unit_counts: obs_spec['unit_counts'] = (0, len(UnitCounts)) if aif.use_camera_position: obs_spec['camera_position'] = (2,) return obs_spec
The observation spec for the SC2 environment. It's worth noting that the image-like observations are in y,x/row,column order which is different than the actions which are in x,y order. This is due to conflicting conventions, and to facilitate printing of the images. Returns: The dict of observation names to their tensor shapes. Shapes with a 0 can vary in length, for example the number of valid actions depends on which units you have selected.
codesearchnet
def plot_probabilities_histogram(Y_p, title=None):
    if Y_p.ndim > 1:
        msg = (
            f"Arg Y_p should be a 1-dimensional np.ndarray, not of shape "
            f"{Y_p.shape}."
        )
        raise ValueError(msg)
    plt.hist(Y_p, bins=20)
    plt.xlim((0, 1.025))
    plt.xlabel("Probability")
    plt.ylabel("# of predictions")
    if isinstance(title, str):
        plt.title(title)
    plt.show()
Plot a histogram from a numpy array of probabilities

    Args:
        Y_p: An [n] np.ndarray of probabilities (floats in [0,1]); arrays with
            more than one dimension raise a ValueError
juraj-google-style
def get_accounts_for_service(cls, service_type): return [ a for a in cls.get_accounts().values() if a.service_type == service_type ]
Get a list of accounts for a given music service. Args: service_type (str): The service_type to use. Returns: list: A list of `Account` instances.
juraj-google-style
def discovery(self, logfile=None, tracefile=None): self._enable_logging(logfile=logfile, tracefile=tracefile) self.log("'discovery' method is deprecated. Please 'connect' with force_discovery=True.") self.log("Device discovery process started") self.connect(logfile=logfile, force_discovery=True, tracefile=tracefile) self.disconnect()
Discover the device details.

        This method discovers several device attributes.

        Args:
            logfile (file): Optional file descriptor for session logging. The file
                must be open for write. The session is logged only if
                ``log_session=True`` was passed to the constructor. If the
                parameter is not passed then the default *session.log* file is
                created in `log_dir`.
            tracefile (file): Optional file descriptor for session tracing. The
                file must be open for write.
juraj-google-style
def stage_tc_create_tag(self, tag, resource): tag_resource = resource.tags(self.tcex.safetag(tag)) tag_resource.http_method = 'POST' t_response = tag_resource.request() if (t_response.get('status') != 'Success'): self.log.warning('[tcex] Failed adding tag "{}" ({}).'.format(tag, t_response.get('response').text))
Add a tag to a resource. Args: tag (str): The tag to be added to the resource. resource (obj): An instance of tcex resource class.
codesearchnet
def get_legacy_output_classes(dataset_or_iterator): return nest.map_structure(lambda component_spec: component_spec._to_legacy_output_classes(), get_structure(dataset_or_iterator))
Returns the output classes for elements of the input dataset / iterator. Args: dataset_or_iterator: A `tf.data.Dataset` or `tf.data.Iterator`. Returns: A (nested) structure of Python `type` objects matching the structure of the dataset / iterator elements and specifying the class of the individual components. @compatibility(TF2) This is a legacy API for inspecting the type signature of dataset elements. In TF 2, you should use the `tf.data.Dataset.element_spec` attribute instead. @end_compatibility
github-repos
def register_binary_elementwise_assert_api(func): _BINARY_ELEMENTWISE_ASSERT_APIS.append(func) for args, handler in _ELEMENTWISE_API_HANDLERS.items(): if len(args) == 3 and args[2] is _ASSERT_API_TAG: _add_dispatch_for_binary_elementwise_api(func, args[0], args[1], handler) return func
Decorator that registers a TensorFlow op as a binary elementwise assert API. Different from `dispatch_for_binary_elementwise_apis`, this decorator is used for assert apis, such as assert_equal, assert_none_equal, etc, which return None in eager mode and an op in graph mode. Args: func: The function that implements the binary elementwise assert API. Returns: `func`
github-repos
def global_env_valid(env): if env not in EFConfig.ACCOUNT_SCOPED_ENVS: raise ValueError("Invalid global env: {}; global envs are: {}".format(env, EFConfig.ACCOUNT_SCOPED_ENVS)) return True
Given an env, determine if it's a valid "global" or "mgmt" env as listed in EFConfig Args: env: the env to check Returns: True if the env is a valid global env in EFConfig Raises: ValueError with message if the env is not valid
juraj-google-style
def _bit_list_to_bytes(bit_list): num_bits = len(bit_list) byte_vals = bytearray() for start in six.moves.xrange(0, num_bits, 8): curr_bits = bit_list[start:start + 8] char_val = sum( val * digit for val, digit in six.moves.zip(_POW2, curr_bits)) byte_vals.append(char_val) return bytes(byte_vals)
Converts an iterable of 1s and 0s to bytes. Combines the list 8 at a time, treating each group of 8 bits as a single byte. Args: bit_list (Sequence): Sequence of 1s and 0s. Returns: bytes: The decoded bytes.
juraj-google-style
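A quick illustration of the bit-packing above, rewritten without the `six` dependency; `_POW2` is assumed to be the MSB-first place values `(128, 64, 32, 16, 8, 4, 2, 1)`:

```python
_POW2 = (128, 64, 32, 16, 8, 4, 2, 1)  # MSB-first bit place values


def bit_list_to_bytes(bit_list):
    """Pack an iterable of 1s and 0s into bytes, 8 bits per byte."""
    byte_vals = bytearray()
    for start in range(0, len(bit_list), 8):
        curr_bits = bit_list[start:start + 8]
        # zip() truncates a trailing partial group to the bits provided.
        byte_vals.append(sum(val * digit for val, digit in zip(_POW2, curr_bits)))
    return bytes(byte_vals)
```

For example, the bits `0,1,0,0,0,0,0,1` pack to `0b01000001`, i.e. the byte for ASCII `A`.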
def get_energy_buckingham(structure, gulp_cmd='gulp', keywords=('optimise', 'conp', 'qok'), valence_dict=None): gio = GulpIO() gc = GulpCaller(gulp_cmd) gin = gio.buckingham_input(structure, keywords, valence_dict=valence_dict) gout = gc.run(gin) return gio.get_energy(gout)
Compute the energy of a structure using Buckingham potential. Args: structure: pymatgen.core.structure.Structure gulp_cmd: GULP command if not in standard place keywords: GULP first line keywords valence_dict: {El: valence}. Needed if the structure is not charge neutral.
codesearchnet
def get_dataset(self, dsid, dsinfo): data = self[dsinfo.get('file_key', dsid.name)] data.attrs.update(dsinfo) data.attrs["platform_name"] = self['/attr/satellite_name'] data.attrs["sensor"] = self['/attr/instrument_name'] return data
Get dataset. Args: dsid: Dataset ID dsinfo: Dataset information Returns: Dask DataArray: Data
juraj-google-style
def is_evenly_distributed_thresholds(thresholds): num_thresholds = len(thresholds) if num_thresholds < 3: return False even_thresholds = np.arange(num_thresholds, dtype=np.float32) / (num_thresholds - 1) return np.allclose(thresholds, even_thresholds, atol=backend.epsilon())
Check if the thresholds list is evenly distributed. We could leverage evenly distributed thresholds to use less memory when calculating metrics like AUC where each individual threshold needs to be evaluated. Args: thresholds: A python list or tuple, or 1D numpy array whose value is ranged in [0, 1]. Returns: boolean, whether the values in the inputs are evenly distributed.
github-repos
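The evenness test reduces to comparing each threshold against `i / (num_thresholds - 1)`; a dependency-free sketch of the same check, using a fixed tolerance in place of `backend.epsilon()`:

```python
def is_evenly_distributed(thresholds, atol=1e-7):
    """True if `thresholds` has >= 3 entries evenly spaced on [0, 1]."""
    n = len(thresholds)
    if n < 3:
        return False
    # Evenly spaced thresholds are exactly i / (n - 1) for i = 0..n-1.
    return all(abs(t - i / (n - 1)) <= atol for i, t in enumerate(thresholds))
```

So `[0.0, 0.5, 1.0]` passes while `[0.0, 0.4, 1.0]` does not, and fewer than three thresholds always fail.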
def __init__(self, config, start=True): if config.dispatcher_address is None: raise ValueError('Must specify a `dispatcher_address` in the `config` passed to `WorkerServer`.') if isinstance(config, service_config_pb2.WorkerConfig): config_proto = config else: config_proto = service_config_pb2.WorkerConfig(dispatcher_address=config.dispatcher_address, worker_address=config.worker_address, port=config.port, protocol=config.protocol, heartbeat_interval_ms=config.heartbeat_interval_ms, dispatcher_timeout_ms=config.dispatcher_timeout_ms, data_transfer_protocol=config.data_transfer_protocol, data_transfer_address=config.data_transfer_address) self._server = _pywrap_server_lib.TF_DATA_NewWorkerServer(config_proto.SerializeToString()) if start: self._server.start()
Creates a new worker server. Args: config: A `tf.data.experimental.service.WorkerConfig` configuration. start: (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True.
github-repos
def from_symmop(cls, symmop, time_reversal): magsymmop = cls(symmop.affine_matrix, time_reversal, symmop.tol) return magsymmop
Initialize a MagSymmOp from a SymmOp and time reversal operator. Args: symmop (SymmOp): SymmOp time_reversal (int): Time reversal operator, +1 or -1. Returns: MagSymmOp object
juraj-google-style
def _ring_2d(m, n):
  if m == 1:
    return [(0, i) for i in range(n)]
  if n == 1:
    return [(i, 0) for i in range(m)]
  if m % 2 != 0:
    tf.logging.warning("Odd dimension")
    return [(i % m, i // m) for i in range(n * m)]
  ret = [(0, 0)]
  for i in range(m // 2):
    for j in range(1, n):
      ret.append((2 * i, j))
    for j in range(n - 1, 0, -1):
      ret.append((2 * i + 1, j))
  for i in range(m - 1, 0, -1):
    ret.append((i, 0))
  return ret
Ring-order of a mxn mesh. Args: m: an integer n: an integer Returns: a list of mxn pairs
juraj-google-style
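A dependency-free sketch of the same snake traversal (the `tf.logging` call dropped), which visits each mesh coordinate exactly once and ends at a neighbor of the start, forming a ring:

```python
def ring_2d(m, n):
    """Ring-order traversal of an m x n mesh (pure-Python sketch)."""
    if m == 1:
        return [(0, i) for i in range(n)]
    if n == 1:
        return [(i, 0) for i in range(m)]
    if m % 2 != 0:  # odd first dimension: fall back to column-major order
        return [(i % m, i // m) for i in range(n * m)]
    ret = [(0, 0)]
    for i in range(m // 2):
        # Snake right along row 2i, then left along row 2i + 1.
        for j in range(1, n):
            ret.append((2 * i, j))
        for j in range(n - 1, 0, -1):
            ret.append((2 * i + 1, j))
    # Close the ring back up column 0.
    for i in range(m - 1, 0, -1):
        ret.append((i, 0))
    return ret
```

For a 2x3 mesh this yields `[(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]`, where every consecutive pair of coordinates is adjacent.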
def _check_for_fail_message(self, transport, exc_info, timeout): try: transport.read_message(timeout) except usb_exceptions.CommonUsbError: if (sys.exc_info()[0] is usb_exceptions.AdbRemoteError): raise raise_with_traceback(exc_info[0](exc_info[1]), traceback=exc_info[2])
Check for a 'FAIL' message from transport. This method always raises: if 'FAIL' was read, it raises an AdbRemoteError with the message; otherwise it raises based on exc_info, which should be a tuple as per sys.exc_info(). Args: transport: Transport from which to read for a 'FAIL' message. exc_info: Exception info to raise if no 'FAIL' is read. timeout: Timeout to use for the read operation. Raises: AdbRemoteError: If a 'FAIL' is read; otherwise raises exc_info.
codesearchnet
def create_dummy_class(klass, dependency): assert not building_rtfd() class _DummyMetaClass(type): def __getattr__(_, __): raise AttributeError("Cannot import '{}', therefore '{}' is not available".format(dependency, klass)) @six.add_metaclass(_DummyMetaClass) class _Dummy(object): def __init__(self, *args, **kwargs): raise ImportError("Cannot import '{}', therefore '{}' is not available".format(dependency, klass)) return _Dummy
When a dependency of a class is not available, create a dummy class which throws ImportError when used. Args: klass (str): name of the class. dependency (str): name of the dependency. Returns: class: a class object
juraj-google-style
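The same pattern works without `six` on Python 3 via the `metaclass=` keyword; a sketch with illustrative names (`make_dummy_class` is not the library's API):

```python
def make_dummy_class(klass, dependency):
    """Build a placeholder class that raises whenever it is used."""
    message = "Cannot import '{}', therefore '{}' is not available".format(
        dependency, klass)

    class _DummyMeta(type):
        # Fires on attribute access against the class itself
        # (e.g. Dummy.some_method), mirroring the six version.
        def __getattr__(cls, name):
            raise AttributeError(message)

    class _Dummy(metaclass=_DummyMeta):
        def __init__(self, *args, **kwargs):
            raise ImportError(message)

    return _Dummy
```

Instantiating the returned class raises ImportError, and touching any class attribute raises AttributeError, both naming the missing dependency.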
def _filter_out_metaclasses(bases, ctx): non_meta = [] meta = None for base in bases: with_metaclass = False for b in base.data: if isinstance(b, metaclass.WithMetaclassInstance): with_metaclass = True if not meta: meta = b.cls.to_variable(ctx.root_node) non_meta.extend(b.bases) if not with_metaclass: non_meta.append(base) return (meta, non_meta)
Process the temporary classes created by six.with_metaclass. six.with_metaclass constructs an anonymous class holding a metaclass and a list of base classes; if we find instances in `bases`, store the first metaclass we find and remove all metaclasses from `bases`. Args: bases: The list of base classes for the class being constructed. ctx: The current context. Returns: A tuple of (metaclass, base classes)
github-repos
def get_min_muO2(self, min_voltage=None, max_voltage=None):
        data = []
        for pair in self._select_in_voltage_range(min_voltage, max_voltage):
            if pair.muO2_discharge is not None:
                data.extend([d['chempot'] for d in pair.muO2_discharge])
            if pair.muO2_charge is not None:
                data.extend([d['chempot'] for d in pair.muO2_charge])
        return min(data) if len(data) > 0 else None
Minimum critical oxygen chemical potential along path. Args: min_voltage: The minimum allowable voltage for a given step max_voltage: The maximum allowable voltage allowable for a given step Returns: Minimum critical oxygen chemical of all compounds along the insertion path (a subset of the path can be chosen by the optional arguments).
juraj-google-style
def grad_dot(dy, x1, x2): if (len(numpy.shape(x1)) == 1): dy = numpy.atleast_2d(dy) elif (len(numpy.shape(x2)) == 1): dy = numpy.transpose(numpy.atleast_2d(dy)) x2 = numpy.transpose(numpy.atleast_2d(x2)) x2_t = numpy.transpose(numpy.atleast_2d(numpy.sum(x2, axis=tuple(numpy.arange((numpy.ndim(x2) - 2)))))) dy_x2 = numpy.sum(dy, axis=tuple(((- numpy.arange((numpy.ndim(x2) - 2))) - 2))) return numpy.reshape(numpy.dot(dy_x2, x2_t), numpy.shape(x1))
Gradient of NumPy dot product w.r.t. the left hand side. Args: dy: The gradient with respect to the output. x1: The left hand side of the `numpy.dot` function. x2: The right hand side of the `numpy.dot` function. Returns: The gradient with respect to `x1` i.e. `x2.dot(dy.T)` with all the broadcasting involved.
codesearchnet
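For the simplest case, two 1-D vectors with a scalar output, the rule this code implements reduces to `dL/dx1 = dy * x2`; a dependency-free finite-difference spot check:

```python
def dot(a, b):
    """Dot product of two equal-length 1-D sequences."""
    return sum(x * y for x, y in zip(a, b))


def grad_dot_lhs(dy, x2):
    """Gradient of dot(x1, x2) w.r.t. x1 for 1-D inputs: dy * x2."""
    return [dy * v for v in x2]


# Finite-difference check at x1 = [1, 2], x2 = [3, 4], dy = 1.
x1, x2, eps = [1.0, 2.0], [3.0, 4.0], 1e-6
analytic = grad_dot_lhs(1.0, x2)
for k in range(2):
    bumped = list(x1)
    bumped[k] += eps
    numeric = (dot(bumped, x2) - dot(x1, x2)) / eps
    assert abs(numeric - analytic[k]) < 1e-4
```

The analytic gradient at this point is simply `x2` itself, `[3.0, 4.0]`, which the numerical perturbation confirms component by component.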