def vgg_layer(inputs, nout, kernel_size=3, activation=tf.nn.leaky_relu, padding="SAME", is_training=True, has_batchnorm=False, scope=None): with tf.variable_scope(scope): net = tfl.conv2d(inputs, nout, kernel_size=ke...
A layer of VGG network with batch norm. Args: inputs: image tensor nout: number of output channels kernel_size: size of the kernel activation: activation function padding: padding of the image is_training: whether it is training mode or not has_batchnorm: whether batchnorm is applied or n...
def _get_isolated(self, hostport): assert hostport, "hostport is required" if hostport not in self._peers: peer = self.peer_class( tchannel=self.tchannel, hostport=hostport, ) self._peers[peer.hostport] = peer return self._peers...
Get a Peer for the given destination for a request. A new Peer is added and returned if one does not already exist for the given host-port. Otherwise, the existing Peer is returned. **NOTE** new peers will not be added to the peer heap.
def _process_list(self, list_line):
    res = list_line.split(' ', 8)
    if res[0].startswith('-'):
        self.state['file_list'].append(res[-1])
    if res[0].startswith('d'):
        self.state['dir_list'].append(res[-1])
Processes a line of 'ls -l' output, and updates state accordingly. :param list_line: Line to process
def allow_migrate(self, db, model):
    if db == DUAS_DB_ROUTE_PREFIX:
        return model._meta.app_label == 'duashttp'
    elif model._meta.app_label == 'duashttp':
        return False
    return None
Make sure the auth app only appears in the 'duashttp' database.
def get_network_adapter_object_type(adapter_object): if isinstance(adapter_object, vim.vm.device.VirtualVmxnet2): return 'vmxnet2' if isinstance(adapter_object, vim.vm.device.VirtualVmxnet3): return 'vmxnet3' if isinstance(adapter_object, vim.vm.device.VirtualVmxnet): return 'vmxnet'...
Returns the network adapter type. adapter_object The adapter object from which to obtain the network adapter type.
def safe_join(*paths):
    try:
        return join(*paths)
    except UnicodeDecodeError:
        npaths = ()
        for path in paths:
            npaths += (unicoder(path),)
        return join(*npaths)
Join paths in a Unicode-safe way
def get_process_tag(program, ccd, version='p'):
    return "%s_%s%s" % (program, str(version), str(ccd).zfill(2))
Make a process tag with a suffix indicating which CCD it's for. @param program: Name of the process that a tag is built for. @param ccd: the CCD number that this process ran on. @param version: The version of the exposure (s, p, o) that the process ran on. @return: The string that represents the processi...
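The tag format above can be exercised directly; this sketch copies the one-line function body from the snippet and feeds it illustrative inputs (the program names are made up):

```python
def get_process_tag(program, ccd, version='p'):
    # "<program>_<version><ccd zero-padded to two digits>"
    return "%s_%s%s" % (program, str(version), str(ccd).zfill(2))

print(get_process_tag('mkpsf', 3))          # mkpsf_p03
print(get_process_tag('combine', 36, 's'))  # combine_s36
```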
def block_header_verify( block_data, prev_hash, block_hash ): serialized_header = block_header_to_hex( block_data, prev_hash ) candidate_hash_bin_reversed = hashing.bin_double_sha256(binascii.unhexlify(serialized_header)) candidate_hash = binascii.hexlify( candidate_hash_bin_reversed[::-1] ) return bloc...
Verify whether or not bitcoind's block header matches the hash we expect.
def normalized(self):
    return Rect(pos=(min(self.left, self.right), min(self.top, self.bottom)),
                size=(abs(self.width), abs(self.height)))
Return a Rect covering the same area, but with height and width guaranteed to be positive.
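A minimal, hypothetical `Rect` (the `pos`/`size` constructor and the `left`/`top`/`width`/`height` attributes are assumed from the snippet, not taken from the source library) shows the normalization in action:

```python
class Rect:
    """Hypothetical minimal Rect: pos is (left, top), size is (width, height)."""
    def __init__(self, pos, size):
        self.left, self.top = pos
        self.width, self.height = size
        self.right = self.left + self.width
        self.bottom = self.top + self.height

    def normalized(self):
        # Same area, but width and height guaranteed non-negative.
        return Rect(pos=(min(self.left, self.right), min(self.top, self.bottom)),
                    size=(abs(self.width), abs(self.height)))

r = Rect(pos=(10, 10), size=(-4, -2)).normalized()
print(r.left, r.top, r.width, r.height)  # 6 8 4 2
```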
def serializer(_type):
    def inner(func):
        name = dr.get_name(_type)
        if name in SERIALIZERS:
            msg = "%s already has a serializer registered: %s"
            raise Exception(msg % (name, dr.get_name(SERIALIZERS[name])))
        SERIALIZERS[name] = func
        return func
    return inner
Decorator for serializers. A serializer should accept two parameters: An object and a path which is a directory on the filesystem where supplementary data can be stored. This is most often useful for datasources. It should return a dictionary version of the original object that contains only elements t...
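The registration pattern behind this decorator can be sketched without the `dr` helper module; here `_type.__name__` stands in for `dr.get_name` and the `serialize_dict` example is illustrative, not from the source:

```python
# Global registry mapping a type name to its serializer function.
SERIALIZERS = {}

def serializer(_type):
    def inner(func):
        name = _type.__name__  # stand-in for dr.get_name(_type)
        if name in SERIALIZERS:
            raise Exception("%s already has a serializer registered" % name)
        SERIALIZERS[name] = func
        return func
    return inner

@serializer(dict)
def serialize_dict(obj, path):
    # Per the docstring: takes an object and a scratch directory path,
    # returns a dict representation of the object.
    return {"type": "dict", "object": obj}

print(SERIALIZERS["dict"] is serialize_dict)  # True
```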
def save_load(jid, clear_load, minions=None):
    for returner_ in __opts__[CONFIG_KEY]:
        _mminion().returners['{0}.save_load'.format(returner_)](jid, clear_load)
Write load to all returners in multi_returner
def _compile_constant_expression(self, expr: Expression, scope: Dict[str, TensorFluent], batch_size: Optional[int] = None, noise: Optional[List[tf.Tensor]] = None) -> Tenso...
Compile a constant expression `expr` into a TensorFluent in the given `scope` with optional batch size. Args: expr (:obj:`rddl2tf.expr.Expression`): A RDDL constant expression. scope (Dict[str, :obj:`rddl2tf.fluent.TensorFluent`]): A fluent scope. batch_size (Optiona...
def fetchAllUsers(self): data = {"viewer": self._uid} j = self._post( self.req_url.ALL_USERS, query=data, fix_request=True, as_json=True ) if j.get("payload") is None: raise FBchatException("Missing payload while fetching users: {}".format(j)) users = [] ...
Gets all users the client is currently chatting with :return: :class:`models.User` objects :rtype: list :raises: FBchatException if request failed
def init_from_class_batches(self, class_batches, num_shards=None): shards_for_submissions = {} shard_idx = 0 for idx, (batch_id, batch_val) in enumerate(iteritems(class_batches)): work_id = DEFENSE_WORK_ID_PATTERN.format(idx) submission_id = batch_val['submission_id'] shard_id = None ...
Initializes work pieces from classification batches. Args: class_batches: dict with classification batches, could be obtained as ClassificationBatches.data num_shards: number of shards to split data into, if None then no sharding is done.
def open(cls, filename, band_names=None, lazy_load=True, mutable=False, **kwargs): if mutable: geo_raster = MutableGeoRaster(filename=filename, band_names=band_names, **kwargs) else: geo_raster = cls(filename=filename, band_names=band_names, **kwargs) if not lazy_load: ...
Read a georaster from a file. :param filename: url :param band_names: list of strings, or string. if None - will try to read from image, otherwise - these will be ['0', ..] :param lazy_load: if True - do not load anything :return: GeoRaster2
def present_weather_codes(self, value=None): if value is not None: try: value = int(value) except ValueError: raise ValueError( 'value {} need to be of type int ' 'for field `present_weather_codes`'.format(value)) ...
Corresponds to IDD Field `present_weather_codes` Args: value (int): value for IDD Field `present_weather_codes` if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value...
def to_allele_counts(self, max_allele=None, dtype='u1'): if max_allele is None: max_allele = self.max() alleles = list(range(max_allele + 1)) outshape = self.shape[:-1] + (len(alleles),) out = np.zeros(outshape, dtype=dtype) for allele in alleles: allele_m...
Transform genotype calls into allele counts per call. Parameters ---------- max_allele : int, optional Highest allele index. Provide this value to speed up computation. dtype : dtype, optional Output dtype. Returns ------- out : ndarray, ...
def is_for_driver_task(self):
    return all(
        len(x) == 0
        for x in [self.module_name, self.class_name, self.function_name])
See whether this function descriptor is for a driver or not. Returns: True if this function descriptor is for driver tasks.
def disconnect(self, message=""): try: del self.connected except AttributeError: return try: self.socket.shutdown(socket.SHUT_WR) self.socket.close() except socket.error: pass del self.socket self.reactor._handle...
Hang up the connection and close the object. Arguments: message -- Quit message.
def get_registered_services(self):
    if self._state == Bundle.UNINSTALLED:
        raise BundleException(
            "Can't call 'get_registered_services' on an "
            "uninstalled bundle"
        )
    return self.__framework._registry.get_bundle_registered_services(self)
Returns this bundle's ServiceReference list for all services it has registered or an empty list The list is valid at the time of the call to this method, however, as the Framework is a very dynamic environment, services can be modified or unregistered at any time. :return: An a...
def minimum_pitch(self):
    pitch = self.pitch
    minimal_pitch = []
    for p in pitch:
        minimal_pitch.append(min(p))
    return min(minimal_pitch)
Returns the minimal pitch between two neighboring nodes of the mesh in each direction. :return: Minimal pitch in each direction.
def build_function(name, args=None, defaults=None, doc=None): args, defaults = args or [], defaults or [] func = nodes.FunctionDef(name, doc) func.args = argsnode = nodes.Arguments() argsnode.args = [] for arg in args: argsnode.args.append(nodes.Name()) argsnode.args[-1].name = arg ...
create and initialize an astroid FunctionDef node
def touch(self, conn, key, exptime): assert self._validate_key(key) _cmd = b' '.join([b'touch', key, str(exptime).encode('utf-8')]) cmd = _cmd + b'\r\n' resp = yield from self._execute_simple_command(conn, cmd) if resp not in (const.TOUCHED, const.NOT_FOUND): raise Cl...
The command is used to update the expiration time of an existing item without fetching it. :param key: ``bytes``, is the key to update expiration time :param exptime: ``int``, is expiration time. This replaces the existing expiration time. :return: ``bool``, True in case of succ...
def fix_positions(self): shift_x = 0 for m in self.__reactants: max_x = self.__fix_positions(m, shift_x, 0) shift_x = max_x + 1 arrow_min = shift_x if self.__reagents: for m in self.__reagents: max_x = self.__fix_positions(m, shift_x, 1...
fix coordinates of molecules in reaction
def handleFailure(self, test, err):
    want_failure = self._handle_test_error_or_failure(test, err)
    if not want_failure and id(test) in self._tests_that_reran:
        self._nose_result.addFailure(test, err)
    return want_failure or None
Baseclass override. Called when a test fails. If the test isn't going to be rerun again, then report the failure to the nose test result. :param test: The test that has raised an error :type test: :class:`nose.case.Test` :param err: Informati...
def to_dotfile(self):
    domain = self.get_domain()
    filename = "%s.dot" % (self.__class__.__name__)
    nx.write_dot(domain, filename)
    return filename
Writes a DOT graphviz file of the domain structure, and returns the filename
def do_zsh_complete(cli, prog_name): commandline = os.environ['COMMANDLINE'] args = split_args(commandline)[1:] if args and not commandline.endswith(' '): incomplete = args[-1] args = args[:-1] else: incomplete = '' def escape(s): return s.replace('"', '""').replace("...
Do the zsh completion Parameters ---------- cli : click.Command The main click Command of the program prog_name : str The program name on the command line Returns ------- bool True if the completion was successful, False otherwise
def main(): test_targets = ( [ARCH_I386, MACH_I386_I386_INTEL_SYNTAX, ENDIAN_MONO, "\x55\x89\xe5\xE8\xB8\xFF\xFF\xFF", 0x1000], [ARCH_I386, MACH_X86_64_INTEL_SYNTAX, ENDIAN_MONO, "\x55\x48\x89\xe5\xE8\xA3\xFF\xFF\xFF", 0x1000], [ARCH_ARM, MACH_ARM_2, ENDIAN_LITTLE, "\x04\xe0\x2d\xe5\xED\...
Test case for simple opcode disassembly.
def throw_if_parsable(resp): e = None try: e = parse_response(resp) except: LOG.debug(utils.stringify_expt()) if e is not None: raise e if resp.status_code == 404: raise NoSuchObject('No such object.') else: text = resp.text if six.PY3 else resp.content ...
Try to parse the content of the response and raise an exception if necessary.
def determine_result(self, returncode, returnsignal, output, isTimeout): splitout = "\n".join(output) if 'SMACK found no errors' in splitout: return result.RESULT_TRUE_PROP errmsg = re.search(r'SMACK found an error(:\s+([^\.]+))?\.', splitout) if errmsg: errtype = err...
Returns a BenchExec result status based on the output of SMACK
def get_log_entry_ids_by_log(self, log_id):
    id_list = []
    for log_entry in self.get_log_entries_by_log(log_id):
        id_list.append(log_entry.get_id())
    return IdList(id_list)
Gets the list of ``LogEntry`` ``Ids`` associated with a ``Log``. arg: log_id (osid.id.Id): ``Id`` of a ``Log`` return: (osid.id.IdList) - list of related logEntry ``Ids`` raise: NotFound - ``log_id`` is not found raise: NullArgument - ``log_id`` is ``null`` raise: Operati...
def plot_joint_sfs_folded_scaled(*args, **kwargs): imshow_kwargs = kwargs.get('imshow_kwargs', dict()) imshow_kwargs.setdefault('norm', None) kwargs['imshow_kwargs'] = imshow_kwargs ax = plot_joint_sfs_folded(*args, **kwargs) ax.set_xlabel('minor allele count (population 1)') ax.set_ylabel('mino...
Plot a scaled folded joint site frequency spectrum. Parameters ---------- s : array_like, int, shape (n_chromosomes_pop1/2, n_chromosomes_pop2/2) Joint site frequency spectrum. ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. imshow_kwargs : ...
def from_two_bytes(bytes):
    lsb, msb = bytes
    try:
        return msb << 7 | lsb
    except TypeError:
        try:
            lsb = ord(lsb)
        except TypeError:
            pass
        try:
            msb = ord(msb)
        except TypeError:
            pass
        return msb << 7 | lsb
Return an integer from two 7 bit bytes.
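The 7-bit packing above (the layout used by MIDI-style protocols, where each byte carries 7 payload bits) can be exercised with a minimal sketch that keeps only the integer path:

```python
def from_two_bytes(pair):
    # Combine two 7-bit bytes, least-significant byte first:
    # result = msb * 128 + lsb.
    lsb, msb = pair
    return msb << 7 | lsb

print(from_two_bytes((0x7F, 0x01)))  # 255
print(from_two_bytes((0, 0x40)))     # 8192
```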
def _get_json(value):
    if hasattr(value, 'replace'):
        value = value.replace('\n', ' ')
    try:
        return json.loads(value)
    except json.JSONDecodeError:
        if hasattr(value, 'replace'):
            value = value.replace('"', '\\"')
        return json.loads('"{}"'.format(value))
Convert the given value to a JSON object.
def normpath(path):
    expanded = os.path.expanduser(os.path.expandvars(path))
    return os.path.normcase(os.path.normpath(expanded))
Norm given system path with all available norm or expand functions in os.path.
def perplexity(test_data, predictions, topics, vocabulary): test_data = _check_input(test_data) assert isinstance(predictions, _SArray), \ "Predictions must be an SArray of vector type." assert predictions.dtype == _array.array, \ "Predictions must be probabilities. Try using m.predict() wit...
Compute the perplexity of a set of test documents given a set of predicted topics. Let theta be the matrix of document-topic probabilities, where theta_ik = p(topic k | document i). Let Phi be the matrix of term-topic probabilities, where phi_jk = p(word j | topic k). Then for each word in each do...
def extend_settings(self, data_id, files, secrets): process = Data.objects.get(pk=data_id).process if process.requirements.get('resources', {}).get('secrets', False): raise PermissionDenied( "Process which requires access to secrets cannot be run using the local executor" ...
Prevent processes requiring access to secrets from being run.
def decimal128_to_decimal(b):
    "decimal128 bytes to Decimal"
    v = decimal128_to_sign_digits_exponent(b)
    if isinstance(v, Decimal):
        return v
    sign, digits, exponent = v
    return Decimal((sign, Decimal(digits).as_tuple()[1], exponent))
decimal128 bytes to Decimal
def replace_suffixes_4(self, word): length = len(word) replacements = {'ational': 'ate', 'tional': 'tion', 'alize': 'al', 'icate': 'ic', 'iciti': 'ic', 'ical': 'ic', 'ful': '', 'ness': ''} for suffix in replacements.keys(): if word.ends...
Perform replacements on even more common suffixes.
def _get(self, url, params=None, headers=None):
    url = self.clean_url(url)
    response = requests.get(url, params=params, verify=self.verify,
                            timeout=self.timeout, headers=headers)
    return response
Wraps a GET request with a url check
def add(self, snapshot, distributions, component='main', storage=""):
    for dist in distributions:
        self.publish(dist, storage=storage).add(snapshot, component)
Add mirror or repo to publish
def train_step_single(self, Xi, yi, **fit_params): self.module_.train() self.optimizer_.zero_grad() y_pred = self.infer(Xi, **fit_params) loss = self.get_loss(y_pred, yi, X=Xi, training=True) loss.backward() self.notify( 'on_grad_computed', named_p...
Compute y_pred, loss value, and update net's gradients. The module is set to be in train mode (e.g. dropout is applied). Parameters ---------- Xi : input data A batch of the input data. yi : target data A batch of the target data. **fit_par...
def get_es(**overrides):
    defaults = {
        'urls': settings.ES_URLS,
        'timeout': getattr(settings, 'ES_TIMEOUT', 5)
    }
    defaults.update(overrides)
    return base_get_es(**defaults)
Return an elasticsearch Elasticsearch object using settings from ``settings.py``. :arg overrides: Allows you to override defaults to create the ElasticSearch object. You can override any of the arguments listed in :py:func:`elasticutils.get_es`. For example, if you wanted to create an Elasti...
def etag(self, href):
    if self and self._etag is None:
        self._etag = LoadElement(href, only_etag=True)
    return self._etag
ETag can be None if a subset of element json is using this container, such as the case with Routing.
def simplified_rayliegh_vel(self): thicks = np.array([l.thickness for l in self]) depths_mid = np.array([l.depth_mid for l in self]) shear_vels = np.array([l.shear_vel for l in self]) mode_incr = depths_mid * thicks / shear_vels ** 2 shape = np.r_[np.cumsum(mode_incr[::-1])[::-1]...
Simplified Rayleigh velocity of the site. This follows the simplifications proposed by Urzua et al. (2017) Returns ------- rayleigh_vel : float Equivalent shear-wave velocity.
def normalize_curves_eb(curves): non_zero_curves = [(losses, poes) for losses, poes in curves if losses[-1] > 0] if not non_zero_curves: return curves[0][0], numpy.array([poes for _losses, poes in curves]) else: max_losses = [losses[-1] for losses, _poes in non_zero_cu...
A more sophisticated version of normalize_curves, used in the event based calculator. :param curves: a list of pairs (losses, poes) :returns: first losses, all_poes
def _get(url, headers={}, params=None): param_string = _foursquare_urlencode(params) for i in xrange(NUM_REQUEST_RETRIES): try: try: response = requests.get(url, headers=headers, params=param_string, verify=VERIFY_SSL) return _process_response(response) ...
Tries to GET data from an endpoint using retries
def get_crt_common_name(certificate_path=OLD_CERTIFICATE_PATH): try: certificate_file = open(certificate_path) crt = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, certificate_file.read()) return crt.get_subject().commonName exc...
Get CN from certificate
def wnelmd(point, window):
    assert isinstance(window, stypes.SpiceCell)
    assert window.dtype == 1
    point = ctypes.c_double(point)
    return bool(libspice.wnelmd_c(point, ctypes.byref(window)))
Determine whether a point is an element of a double precision window. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/wnelmd_c.html :param point: Input point. :type point: float :param window: Input window :type window: spiceypy.utils.support_types.SpiceCell :return: returns True...
def func(self, p):
    self._set_stochastics(p)
    try:
        return -1. * self.logp
    except ZeroProbability:
        return Inf
The function that gets passed to the optimizers.
def open(self, bus):
    self.fd = os.open("/dev/i2c-{}".format(bus), os.O_RDWR)
    self.funcs = self._get_funcs()
Open a given i2c bus. :param bus: i2c bus number (e.g. 0 or 1) :type bus: int
def slot_name_from_member_name(member_name):

    def replace_char(match):
        c = match.group(0)
        return '_' if c in '-.' else "u{:04d}".format(ord(c))

    slot_name = re.sub('[^a-z0-9_]', replace_char, member_name.lower())
    return slot_name[0:63]
Translate member name to valid PostgreSQL slot name. PostgreSQL replication slot names must be valid PostgreSQL names. This function maps the wider space of member names to valid PostgreSQL names. Names are lowercased, dashes and periods common in hostnames are replaced with underscores, other characters a...
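The mapping rules above are self-contained enough to run as-is; this sketch copies the function (adding only the `re` import) and shows the hostname-style case from the docstring with a made-up member name:

```python
import re

def slot_name_from_member_name(member_name):
    # Lowercase; dashes and periods become underscores; any other invalid
    # character becomes a "u"-prefixed ordinal; result capped at 63 chars
    # (PostgreSQL's identifier length limit).
    def replace_char(match):
        c = match.group(0)
        return '_' if c in '-.' else "u{:04d}".format(ord(c))

    slot_name = re.sub('[^a-z0-9_]', replace_char, member_name.lower())
    return slot_name[0:63]

print(slot_name_from_member_name('pg-node1.example.com'))  # pg_node1_example_com
```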
def fetch_pillar(self): log.debug('Pillar cache getting external pillar with ext: %s', self.ext) fresh_pillar = Pillar(self.opts, self.grains, self.minion_id, self.saltenv, ext=self.ex...
In the event of a cache miss, we need to incur the overhead of caching a new pillar.
def reify_arrays(arrays, dims, copy=True):
    arrays = ({k: AttrDict(**a) for k, a in arrays.iteritems()}
              if copy else arrays)
    for n, a in arrays.iteritems():
        a.shape = tuple(dims[v].extent_size if isinstance(v, str) else v
                        for v in a.shape)
    return arrays
Reify arrays, given the supplied dimensions. If copy is True, returns a copy of arrays else performs this inplace.
def _load_feed(path: str, view: View, config: nx.DiGraph) -> Feed: config_ = remove_node_attributes(config, ["converters", "transformations"]) feed_ = Feed(path, view={}, config=config_) for filename, column_filters in view.items(): config_ = reroot_graph(config_, filename) view_ = {filename...
Multi-file feed filtering
def last_revision(self, mod: YangIdentifier) -> ModuleId:
    revs = [mn for mn in self.modules if mn[0] == mod]
    if not revs:
        raise ModuleNotRegistered(mod)
    return sorted(revs, key=lambda x: x[1])[-1]
Return the last revision of a module that's part of the data model. Args: mod: Name of a module or submodule. Raises: ModuleNotRegistered: If the module `mod` is not present in the data model.
def get_grp_name(self, code):
    nt_code = self.code2nt.get(code.strip(), None)
    if nt_code is not None:
        return nt_code.group, nt_code.name
    return "", ""
Return group and name for an evidence code.
def _do_download( self, transport, file_obj, download_url, headers, start=None, end=None ): if self.chunk_size is None: download = Download( download_url, stream=file_obj, headers=headers, start=start, end=end ) download.consume(transport) ...
Perform a download without any error handling. This is intended to be called by :meth:`download_to_file` so it can be wrapped with error handling / remapping. :type transport: :class:`~google.auth.transport.requests.AuthorizedSession` :param transport: The transport (with c...
def _add_post_data(self, request: Request): if self._item_session.url_record.post_data: data = wpull.string.to_bytes(self._item_session.url_record.post_data) else: data = wpull.string.to_bytes( self._processor.fetch_params.post_data ) request.m...
Add data to the payload.
def make_gffutils_db(gtf, db):
    import gffutils
    out_db = gffutils.create_db(gtf, db, keep_order=True,
                                infer_gene_extent=False)
    return out_db
Make database for gffutils. Parameters ---------- gtf : str Path to Gencode gtf file. db : str Path to save database to. Returns ------- out_db : gffutils.FeatureDB gffutils feature database.
def season_game_logs(season): max_year = int(datetime.now().year) - 1 if season > max_year or season < 1871: raise ValueError('Season must be between 1871 and {}'.format(max_year)) file_name = 'GL{}.TXT'.format(season) z = get_zip_file(gamelog_url.format(season)) data = pd.read_csv(z.open(fi...
Pull Retrosheet game logs for a given season
def dns(): if salt.utils.platform.is_windows() or 'proxyminion' in __opts__: return {} resolv = salt.utils.dns.parse_resolv() for key in ('nameservers', 'ip4_nameservers', 'ip6_nameservers', 'sortlist'): if key in resolv: resolv[key] = [six.text_type(i) for i in r...
Parse the resolver configuration file .. versionadded:: 2016.3.0
def buckingham_input(self, structure, keywords, library=None, uc=True, valence_dict=None): gin = self.keyword_line(*keywords) gin += self.structure_lines(structure, symm_flg=not uc) if not library: gin += self.buckingham_potential(structure, valence_dict) ...
Gets a GULP input for an oxide structure and buckingham potential from library. Args: structure: pymatgen.core.structure.Structure keywords: GULP first line keywords. library (Default=None): File containing the species and potential. uc (Default=True): Un...
def add_prioritized(self, command_obj, priority):
    if priority not in self.__priorities.keys():
        self.__priorities[priority] = []
    self.__priorities[priority].append(command_obj)
Add command with the specified priority :param command_obj: command to add :param priority: command priority :return: None
def get_resource_ids_by_bin(self, bin_id):
    id_list = []
    for resource in self.get_resources_by_bin(bin_id):
        id_list.append(resource.get_id())
    return IdList(id_list)
Gets the list of ``Resource`` ``Ids`` associated with a ``Bin``. arg: bin_id (osid.id.Id): ``Id`` of a ``Bin`` return: (osid.id.IdList) - list of related resource ``Ids`` raise: NotFound - ``bin_id`` is not found raise: NullArgument - ``bin_id`` is ``null`` raise: Operati...
def validate_ip(s):
    if _DOTTED_QUAD_RE.match(s):
        quads = s.split('.')
        for q in quads:
            if int(q) > 255:
                return False
        return True
    return False
Validate a dotted-quad ip address. The string is considered a valid dotted-quad address if it consists of one to four octets (0-255) separated by periods (.). >>> validate_ip('127.0.0.1') True >>> validate_ip('127.0') True >>> validate_ip('127.0.0.256') False >>> validate_ip(LOCAL...
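`_DOTTED_QUAD_RE` is not shown in the snippet; a plausible definition (an assumption, matching one to four dot-separated digit groups as the docstring describes) makes the function self-contained:

```python
import re

# Hypothetical stand-in for _DOTTED_QUAD_RE: one to four groups of
# 1-3 digits separated by periods. The 0-255 range check is done in code.
_DOTTED_QUAD_RE = re.compile(r'^(\d{1,3})(\.\d{1,3}){0,3}$')

def validate_ip(s):
    if _DOTTED_QUAD_RE.match(s):
        return all(int(q) <= 255 for q in s.split('.'))
    return False

print(validate_ip('127.0.0.1'))    # True
print(validate_ip('127.0'))        # True
print(validate_ip('127.0.0.256'))  # False
```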
def build_label(self, ident, cls):
    ident_w_label = ident + ':' + cls.__label__
    self._ast['match'].append('({0})'.format(ident_w_label))
    self._ast['return'] = ident
    self._ast['result_class'] = cls
    return ident
match nodes by a label
def thumbnails_for_file(relative_source_path, root=None, basedir=None, subdir=None, prefix=None): if root is None: root = settings.MEDIA_ROOT if prefix is None: prefix = settings.THUMBNAIL_PREFIX if subdir is None: subdir = settings.THUMBNAIL_SUBDIR if bas...
Return a list of dictionaries, one for each thumbnail belonging to the source image. The following list explains each key of the dictionary: `filename` -- absolute thumbnail path `x` and `y` -- the size of the thumbnail `options` -- list of options for this thumbnail `quality` -- ...
def get_privacy_options(user): privacy_options = {} for ptype in user.permissions: for field in user.permissions[ptype]: if ptype == "self": privacy_options["{}-{}".format(field, ptype)] = user.permissions[ptype][field] else: privacy_options[field]...
Get a user's privacy options to pass as an initial value to a PrivacyOptionsForm.
def all_operations(self) -> Iterator[ops.Operation]:
    return (op for moment in self for op in moment.operations)
Iterates over the operations applied by this circuit. Operations from earlier moments will be iterated over first. Operations within a moment are iterated in the order they were given to the moment's constructor.
def _create_vxr(self, f, recStart, recEnd, currentVDR, priorVXR, vvrOffset): vxroffset = self._write_vxr(f) self._use_vxrentry(f, vxroffset, recStart, recEnd, vvrOffset) if (priorVXR == 0): self._update_offset_value(f, currentVDR+28, 8, vxroffset) else: self._upda...
Create a VXR AND use a VXR Parameters: f : file The open CDF file recStart : int The start record of this block recEnd : int The ending record of this block currentVDR : int The byte location of the ...
def _make_session():
    sess = requests.Session()
    sess.mount('http://', requests.adapters.HTTPAdapter(max_retries=False))
    sess.mount('https://', requests.adapters.HTTPAdapter(max_retries=False))
    return sess
Create session object. :rtype: requests.Session
def rcategorical(p, size=None):
    out = flib.rcat(p, np.random.random(size=size))
    if sum(out.shape) == 1:
        return out.squeeze()
    else:
        return out
Categorical random variates.
def read_metadata(self, key):
    if getattr(getattr(self.group, 'meta', None), key, None) is not None:
        return self.parent.select(self._get_metadata_path(key))
    return None
return the meta data array for this key
def cmd_tracker_calpress(self, args):
    connection = self.find_connection()
    if not connection:
        print("No antenna tracker found")
        return
    connection.calibrate_pressure()
calibrate barometer on tracker
def normalize(expr): children = [] for child in expr.children: branch = normalize(child) if branch is None: continue if type(branch) is type(expr): children.extend(branch.children) else: children.append(branch) if len(children) == 0: ...
Pass through n-ary expressions, and eliminate empty branches. Variadic and binary expressions recursively visit all their children. If all children are eliminated then the parent expression is also eliminated: (& [removed] [removed]) => [removed] If only one child is left, it is promoted to repl...
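The pruning rules in this docstring (eliminate empty branches, flatten same-type children, promote a lone child) can be sketched with a minimal hypothetical expression type; `Expr`, `And`, `Or`, and `Leaf` are illustrative names, not from the source:

```python
class Expr:
    def __init__(self, *children):
        self.children = list(children)
    def __repr__(self):
        return '{}({})'.format(type(self).__name__,
                               ', '.join(map(repr, self.children)))

class And(Expr): pass
class Or(Expr): pass

class Leaf(Expr):
    def __init__(self, name):
        self.name = name
        self.children = []
    def __repr__(self):
        return self.name

def normalize(expr):
    if isinstance(expr, Leaf):
        return expr
    children = []
    for child in expr.children:
        branch = normalize(child)
        if branch is None:
            continue                          # empty branch eliminated
        if type(branch) is type(expr):
            children.extend(branch.children)  # flatten same-type n-ary node
        else:
            children.append(branch)
    if len(children) == 0:
        return None          # parent with no surviving children is eliminated
    if len(children) == 1:
        return children[0]   # lone child is promoted in place of the parent
    expr.children = children
    return expr

print(normalize(And(Or(), Leaf('a'))))                       # a
print(normalize(And(And(Leaf('a'), Leaf('b')), Leaf('c'))))  # And(a, b, c)
```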
def resolve(self): values = {} for target_name in self.target_names: if self.context.is_build_needed(self.parent, target_name): self.context.build_task(target_name) if len(self.keyword_chain) == 0: values[target_name] = self.context.tasks[target_na...
Builds all targets of this dependency and returns the result of self.function on the resulting values
def start_output (self): super(CSVLogger, self).start_output() row = [] if self.has_part("intro"): self.write_intro() self.flush() else: self.write(u"") self.queue = StringIO() self.writer = csv.writer(self.queue, dialect=self.dialect, ...
Write checking start info as csv comment.
def perform_patch(cls, operations, obj, state=None): if state is None: state = {} for operation in operations: if not cls._process_patch_operation(operation, obj=obj, state=state): log.info( "%s patching has been stopped because of unknown oper...
Performs all necessary operations by calling class methods with corresponding names.
def serialize_to_list(self, name, datas): items = datas.get('items', None) splitter = datas.get('splitter', self._DEFAULT_SPLITTER) if items is None: msg = ("List reference '{}' lacks of required 'items' variable " "or is empty") raise SerializerError(m...
Serialize given datas to a list structure. List structure is very simple and only require a variable ``--items`` which is a string of values separated with an empty space. Every other properties are ignored. Arguments: name (string): Name only used inside possible exception...
def get_version_from_scm(path=None):
    if is_git(path):
        return 'git', get_git_version(path)
    elif is_svn(path):
        return 'svn', get_svn_version(path)
    return None, None
Get the current version string of this package using SCM tool. Parameters ---------- path : None or string, optional The SCM checkout path (default is current directory) Returns ------- version : string The version string for this package
def make_pkh_output(value, pubkey, witness=False):
    return _make_output(
        value=utils.i2le_padded(value, 8),
        output_script=make_pkh_output_script(pubkey, witness))
int, bytearray -> TxOut
def list_formatter(handler, item, value):
    return u', '.join(str(v) for v in value)
Format list.
def context(self):
    "Internal property that returns the stylus compiler"
    if self._context is None:
        with io.open(path.join(path.abspath(path.dirname(__file__)),
                               "compiler.js")) as compiler_file:
            compiler_source = compiler_file.read()
        self._context = self.backend.compile(compiler_source)
    return self._context
Internal property that returns the stylus compiler
def create(self, user, obj, **kwargs):
    follow = Follow(user=user)
    follow.target = obj
    follow.save()
    return follow
Create a new follow link between a user and an object of a registered model type.
def _nbOperations(n):
    if n < 2:
        return 0
    else:
        n0 = (n + 2) // 3
        n02 = n0 + n // 3
        return 3 * (n02) + n0 + _nbOperations(n02)
Exact number of atomic operations in _radixPass.
def message_convert_rx(message_rx): is_extended_id = bool(message_rx.flags & IS_ID_TYPE) is_remote_frame = bool(message_rx.flags & IS_REMOTE_FRAME) is_error_frame = bool(message_rx.flags & IS_ERROR_FRAME) return Message(timestamp=message_rx.timestamp, is_remote_frame=is_remote_frame, ...
convert the message from the CANAL type to pythoncan type
def bit_clone(bits):
    new = BitSet(bits.size)
    new.ior(bits)
    return new
Clone a bitset
def _GetPluginData(self): return_dict = {} return_dict['Versions'] = [ ('plaso engine', plaso.__version__), ('python', sys.version)] hashers_information = hashers_manager.HashersManager.GetHashersInformation() parsers_information = parsers_manager.ParsersManager.GetParsersInformation() ...
Retrieves the version and various plugin information. Returns: dict[str, list[str]]: available parsers and plugins.
def read_next_line(self): next_line = self.file.readline() if not next_line or next_line[-1:] != '\n': self.file = None else: next_line = next_line[:-1] expanded = next_line.expandtabs() edit = urwid.Edit("", expanded, allow_tab=True) edit.set_edit...
Read another line from the file.
def addr_info(addr): if isinstance(addr, basestring): return socket.AF_UNIX if not isinstance(addr, collections.Sequence): raise ValueError("address is not a tuple") if len(addr) < 2: raise ValueError("cannot understand address") if not (0 <= addr[1] < 65536): raise Value...
Interprets an address in standard tuple format to determine if it is valid, and, if so, which socket family it is. Returns the socket family.
def run_calculation(self, atoms=None, properties=['energy'], system_changes=all_changes): self.calc.calculate(self, atoms, properties, system_changes) self.write_input(self.atoms, properties, system_changes) if self.command is None: raise RuntimeError('Ple...
Internal calculation executor. We cannot use FileIOCalculator directly since we need to support remote execution. This calculator is different from others. It prepares the directory, launches the remote process and raises the exception to signal that we need to come back for re...
def _get_credentials(vcap_services, service_name=None): service_name = service_name or os.environ.get('STREAMING_ANALYTICS_SERVICE_NAME', None) services = vcap_services['streaming-analytics'] creds = None for service in services: if service['name'] == service_name: creds = service['c...
Retrieves the credentials of the VCAP Service of the specified `service_name`. If `service_name` is not specified, it takes the information from STREAMING_ANALYTICS_SERVICE_NAME environment variable. Args: vcap_services (dict): A dict representation of the VCAP Services information. servic...
def _add_arguments(self): self._parser.add_argument( '-v', '--version', action='store_true', help="show program's version number and exit") self._parser.add_argument( '-a', '--alias', nargs='?', const=get_alias(), help='...
Adds arguments to parser.
def pipe(self, command, timeout=None, cwd=None): if not timeout: timeout = self.timeout if not self.was_run: self.run(block=False, cwd=cwd) data = self.out if timeout: c = Command(command, timeout) else: c = Command(command) ...
Runs the current command and passes its output to the next given process.
def safe_json_response(method): def _safe_document(document): assert isinstance(document, dict), 'Error: provided document is not of DICT type: {0}' \ .format(document.__class__.__name__) for key, value in document.items(): if isinstance(value, dict): document...
Makes sure the response's document has all leaf fields converted to strings.
def DEFINE_multi_enum_class(
        name, default, enum_class, help, flag_values=_flagvalues.FLAGS,
        module_name=None, **args):
    DEFINE_flag(
        _flag.MultiEnumClassFlag(name, default, help, enum_class),
        flag_values, module_name, **args)
Registers a flag whose value can be a list of enum members. Use the flag on the command line multiple times to place multiple enum values into the list. Args: name: str, the flag name. default: Union[Iterable[Enum], Iterable[Text], Enum, Text, None], the default value of the flag; see `D...
def copychildren(self, newdoc=None, idsuffix=""):
    if idsuffix is True:
        idsuffix = ".copy." + "%08x" % random.getrandbits(32)
    for c in self:
        if isinstance(c, AbstractElement):
            yield c.copy(newdoc, idsuffix)
Generator creating a deep copy of the children of this element. Invokes :meth:`copy` on all children, parameters are the same.
def _learn( permanences, rng, activeCells, activeInput, growthCandidateInput, sampleSize, initialPermanence, permanenceIncrement, permanenceDecrement, connectedPermanence): permanences.incrementNonZerosOnOuter( activeCells, activeInput, permanenceIncrement) ...
For each active cell, reinforce active synapses, punish inactive synapses, and grow new synapses to a subset of the active input bits that the cell isn't already connected to. Parameters: ---------------------------- @param permanences (SparseMatrix) Matrix of permanences, with cells a...
def MeterOffset((lat1, lon1), (lat2, lon2)):
    "Return offset in meters of second arg from first."
    dx = EarthDistance((lat1, lon1), (lat1, lon2))
    dy = EarthDistance((lat1, lon1), (lat2, lon1))
    if lat1 < lat2:
        dy *= -1
    if lon1 < lon2:
        dx *= -1
    return (dx, dy)
Return offset in meters of second arg from first.