Columns: code (string, lengths 51 to 2.38k) and docstring (string, lengths 4 to 15.2k)
def vgg_layer(inputs, nout, kernel_size=3, activation=tf.nn.leaky_relu,
              padding="SAME", is_training=True, has_batchnorm=False,
              scope=None):
    with tf.variable_scope(scope):
        net = tfl.conv2d(inputs, nout, kernel_size=kernel_size,
                         padding=padding, activation=None, name="conv")
        if has_batchnorm:
            net = tfl.batch_normalization(net, training=is_training, name="bn")
        net = activation(net)
        return net

A layer of VGG network with batch norm.

Args:
    inputs: image tensor
    nout: number of output channels
    kernel_size: size of the kernel
    activation: activation function
    padding: padding of the image
    is_training: whether it is training mode or not
    has_batchnorm: whether batchnorm is applied or not
    scope: variable scope of the op

Returns:
    net: output of layer
def _get_isolated(self, hostport):
    assert hostport, "hostport is required"
    if hostport not in self._peers:
        peer = self.peer_class(
            tchannel=self.tchannel,
            hostport=hostport,
        )
        self._peers[peer.hostport] = peer
    return self._peers[hostport]

Get a Peer for the given destination for a request.

A new Peer is added and returned if one does not already exist for the
given host-port. Otherwise, the existing Peer is returned.

**NOTE** new peers will not be added to the peer heap.
def _process_list(self, list_line):
    res = list_line.split(' ', 8)
    if res[0].startswith('-'):
        self.state['file_list'].append(res[-1])
    if res[0].startswith('d'):
        self.state['dir_list'].append(res[-1])

Processes a line of 'ls -l' output, and updates state accordingly.

:param list_line: Line to process
def allow_migrate(self, db, model):
    if db == DUAS_DB_ROUTE_PREFIX:
        return model._meta.app_label == 'duashttp'
    elif model._meta.app_label == 'duashttp':
        return False
    return None
Make sure the auth app only appears in the 'duashttp' database.
def get_network_adapter_object_type(adapter_object):
    if isinstance(adapter_object, vim.vm.device.VirtualVmxnet2):
        return 'vmxnet2'
    if isinstance(adapter_object, vim.vm.device.VirtualVmxnet3):
        return 'vmxnet3'
    if isinstance(adapter_object, vim.vm.device.VirtualVmxnet):
        return 'vmxnet'
    if isinstance(adapter_object, vim.vm.device.VirtualE1000e):
        return 'e1000e'
    if isinstance(adapter_object, vim.vm.device.VirtualE1000):
        return 'e1000'
    raise ValueError('An unknown network adapter object type.')

Returns the network adapter type.

adapter_object
    The adapter object from which to obtain the network adapter type.
def safe_join(*paths):
    try:
        return join(*paths)
    except UnicodeDecodeError:
        npaths = ()
        for path in paths:
            npaths += (unicoder(path),)
        return join(*npaths)
Join paths in a Unicode-safe way
def get_process_tag(program, ccd, version='p'):
    return "%s_%s%s" % (program, str(version), str(ccd).zfill(2))

Make a process tag with a suffix indicating which CCD it's for.

@param program: Name of the process that a tag is built for.
@param ccd: the CCD number that this process ran on.
@param version: The version of the exposure (s, p, o) that the process ran on.
@return: The string that represents the processing tag.
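The tag format can be checked in isolation, since the function is pure Python. A quick sketch (the function body is reproduced verbatim; the program names and CCD numbers are made up for illustration):

```python
def get_process_tag(program, ccd, version='p'):
    return "%s_%s%s" % (program, str(version), str(ccd).zfill(2))

# The CCD number is zero-padded to two digits, so tags sort lexicographically.
print(get_process_tag("megapipe", 7))        # megapipe_p07
print(get_process_tag("megapipe", 36, "s"))  # megapipe_s36
```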
def block_header_verify(block_data, prev_hash, block_hash):
    serialized_header = block_header_to_hex(block_data, prev_hash)
    candidate_hash_bin_reversed = hashing.bin_double_sha256(
        binascii.unhexlify(serialized_header))
    candidate_hash = binascii.hexlify(candidate_hash_bin_reversed[::-1])
    return block_hash == candidate_hash
Verify whether or not bitcoind's block header matches the hash we expect.
def normalized(self):
    return Rect(pos=(min(self.left, self.right), min(self.top, self.bottom)),
                size=(abs(self.width), abs(self.height)))
Return a Rect covering the same area, but with height and width guaranteed to be positive.
def serializer(_type):
    def inner(func):
        name = dr.get_name(_type)
        if name in SERIALIZERS:
            msg = "%s already has a serializer registered: %s"
            raise Exception(msg % (name, dr.get_name(SERIALIZERS[name])))
        SERIALIZERS[name] = func
        return func
    return inner

Decorator for serializers.

A serializer should accept two parameters: an object and a path, which is a
directory on the filesystem where supplementary data can be stored. This is
most often useful for datasources. It should return a dictionary version of
the original object that contains only elements that can be serialized to
JSON.
def save_load(jid, clear_load, minions=None):
    for returner_ in __opts__[CONFIG_KEY]:
        _mminion().returners['{0}.save_load'.format(returner_)](jid, clear_load)
Write load to all returners in multi_returner
def _compile_constant_expression(
        self,
        expr: Expression,
        scope: Dict[str, TensorFluent],
        batch_size: Optional[int] = None,
        noise: Optional[List[tf.Tensor]] = None) -> TensorFluent:
    etype = expr.etype
    args = expr.args
    dtype = utils.python_type_to_dtype(etype[1])
    fluent = TensorFluent.constant(args, dtype=dtype)
    return fluent

Compile a constant expression `expr` into a TensorFluent in the given
`scope` with optional batch size.

Args:
    expr (:obj:`rddl2tf.expr.Expression`): A RDDL constant expression.
    scope (Dict[str, :obj:`rddl2tf.fluent.TensorFluent`]): A fluent scope.
    batch_size (Optional[size]): The batch size.

Returns:
    :obj:`rddl2tf.fluent.TensorFluent`: The compiled expression as a
    TensorFluent.
def fetchAllUsers(self):
    data = {"viewer": self._uid}
    j = self._post(
        self.req_url.ALL_USERS, query=data, fix_request=True, as_json=True
    )
    if j.get("payload") is None:
        raise FBchatException("Missing payload while fetching users: {}".format(j))
    users = []
    for data in j["payload"].values():
        if data["type"] in ["user", "friend"]:
            if data["id"] in ["0", 0]:
                continue
            users.append(User._from_all_fetch(data))
    return users

Gets all users the client is currently chatting with

:return: :class:`models.User` objects
:rtype: list
:raises: FBchatException if request failed
def init_from_class_batches(self, class_batches, num_shards=None):
    shards_for_submissions = {}
    shard_idx = 0
    for idx, (batch_id, batch_val) in enumerate(iteritems(class_batches)):
        work_id = DEFENSE_WORK_ID_PATTERN.format(idx)
        submission_id = batch_val['submission_id']
        shard_id = None
        if num_shards:
            shard_id = shards_for_submissions.get(submission_id)
            if shard_id is None:
                shard_id = shard_idx % num_shards
                shards_for_submissions[submission_id] = shard_id
                shard_idx += 1
        self.work[work_id] = {
            'claimed_worker_id': None,
            'claimed_worker_start_time': None,
            'is_completed': False,
            'error': None,
            'elapsed_time': None,
            'submission_id': submission_id,
            'shard_id': shard_id,
            'output_classification_batch_id': batch_id,
        }

Initializes work pieces from classification batches.

Args:
    class_batches: dict with classification batches, could be obtained
        as ClassificationBatches.data
    num_shards: number of shards to split data into,
        if None then no sharding is done.
def open(cls, filename, band_names=None, lazy_load=True, mutable=False, **kwargs):
    if mutable:
        geo_raster = MutableGeoRaster(filename=filename,
                                      band_names=band_names, **kwargs)
    else:
        geo_raster = cls(filename=filename, band_names=band_names, **kwargs)
    if not lazy_load:
        geo_raster._populate_from_rasterio_object(read_image=True)
    return geo_raster

Read a georaster from a file.

:param filename: url
:param band_names: list of strings, or string.
    if None - will try to read from image, otherwise - these will be ['0', ..]
:param lazy_load: if True - do not load anything
:return: GeoRaster2
def present_weather_codes(self, value=None):
    if value is not None:
        try:
            value = int(value)
        except ValueError:
            raise ValueError(
                'value {} need to be of type int '
                'for field `present_weather_codes`'.format(value))
    self._present_weather_codes = value

Corresponds to IDD Field `present_weather_codes`

Args:
    value (int): value for IDD Field `present_weather_codes`
        if `value` is None it will not be checked against the
        specification and is assumed to be a missing value

Raises:
    ValueError: if `value` is not a valid value
def to_allele_counts(self, max_allele=None, dtype='u1'):
    if max_allele is None:
        max_allele = self.max()
    alleles = list(range(max_allele + 1))
    outshape = self.shape[:-1] + (len(alleles),)
    out = np.zeros(outshape, dtype=dtype)
    for allele in alleles:
        allele_match = self.values == allele
        if self.mask is not None:
            allele_match &= ~self.mask[..., np.newaxis]
        np.sum(allele_match, axis=-1, out=out[..., allele])
    if self.ndim == 2:
        out = GenotypeAlleleCountsVector(out)
    elif self.ndim == 3:
        out = GenotypeAlleleCountsArray(out)
    return out

Transform genotype calls into allele counts per call.

Parameters
----------
max_allele : int, optional
    Highest allele index. Provide this value to speed up computation.
dtype : dtype, optional
    Output dtype.

Returns
-------
out : ndarray, uint8, shape (n_variants, n_samples, len(alleles))
    Array of allele counts per call.

Examples
--------
>>> import allel
>>> g = allel.GenotypeArray([[[0, 0], [0, 1]],
...                          [[0, 2], [1, 1]],
...                          [[2, 2], [-1, -1]]])
>>> g.to_allele_counts()
<GenotypeAlleleCountsArray shape=(3, 2, 3) dtype=uint8>
2:0:0 1:1:0
1:0:1 0:2:0
0:0:2 0:0:0
>>> v = g[:, 0]
>>> v
<GenotypeVector shape=(3, 2) dtype=int64>
0/0 0/2 2/2
>>> v.to_allele_counts()
<GenotypeAlleleCountsVector shape=(3, 3) dtype=uint8>
2:0:0 1:0:1 0:0:2
def is_for_driver_task(self):
    return all(
        len(x) == 0
        for x in [self.module_name, self.class_name, self.function_name])

See whether this function descriptor is for a driver or not.

Returns:
    True if this function descriptor is for driver tasks.
def disconnect(self, message=""):
    try:
        del self.connected
    except AttributeError:
        return
    try:
        self.socket.shutdown(socket.SHUT_WR)
        self.socket.close()
    except socket.error:
        pass
    del self.socket
    self.reactor._handle_event(
        self, Event("dcc_disconnect", self.peeraddress, "", [message]))
    self.reactor._remove_connection(self)

Hang up the connection and close the object.

Arguments:

    message -- Quit message.
def get_registered_services(self):
    if self._state == Bundle.UNINSTALLED:
        raise BundleException(
            "Can't call 'get_registered_services' on an "
            "uninstalled bundle"
        )
    return self.__framework._registry.get_bundle_registered_services(self)

Returns this bundle's ServiceReference list for all services it has
registered or an empty list

The list is valid at the time of the call to this method, however, as the
Framework is a very dynamic environment, services can be modified or
unregistered at any time.

:return: An array of ServiceReference objects
:raise BundleException: If the bundle has been uninstalled
def minimum_pitch(self):
    pitch = self.pitch
    minimal_pitch = []
    for p in pitch:
        minimal_pitch.append(min(p))
    return min(minimal_pitch)

Returns the minimal pitch between two neighboring nodes of the mesh in
each direction.

:return: Minimal pitch in each direction.
def build_function(name, args=None, defaults=None, doc=None):
    args, defaults = args or [], defaults or []
    func = nodes.FunctionDef(name, doc)
    func.args = argsnode = nodes.Arguments()
    argsnode.args = []
    for arg in args:
        argsnode.args.append(nodes.Name())
        argsnode.args[-1].name = arg
        argsnode.args[-1].parent = argsnode
    argsnode.defaults = []
    for default in defaults:
        argsnode.defaults.append(nodes.const_factory(default))
        argsnode.defaults[-1].parent = argsnode
    argsnode.kwarg = None
    argsnode.vararg = None
    argsnode.parent = func
    if args:
        register_arguments(func)
    return func
create and initialize an astroid FunctionDef node
def touch(self, conn, key, exptime):
    assert self._validate_key(key)
    _cmd = b' '.join([b'touch', key, str(exptime).encode('utf-8')])
    cmd = _cmd + b'\r\n'
    resp = yield from self._execute_simple_command(conn, cmd)
    if resp not in (const.TOUCHED, const.NOT_FOUND):
        raise ClientException('Memcached touch failed', resp)
    return resp == const.TOUCHED

The command is used to update the expiration time of an existing item
without fetching it.

:param key: ``bytes``, is the key to update expiration time
:param exptime: ``int``, is expiration time. This replaces the existing
    expiration time.
:return: ``bool``, True in case of success.
def fix_positions(self):
    shift_x = 0
    for m in self.__reactants:
        max_x = self.__fix_positions(m, shift_x, 0)
        shift_x = max_x + 1
    arrow_min = shift_x
    if self.__reagents:
        for m in self.__reagents:
            max_x = self.__fix_positions(m, shift_x, 1.5)
            shift_x = max_x + 1
    else:
        shift_x += 3
    arrow_max = shift_x - 1
    for m in self.__products:
        max_x = self.__fix_positions(m, shift_x, 0)
        shift_x = max_x + 1
    self._arrow = (arrow_min, arrow_max)
    self.flush_cache()
fix coordinates of molecules in reaction
def handleFailure(self, test, err):
    want_failure = self._handle_test_error_or_failure(test, err)
    if not want_failure and id(test) in self._tests_that_reran:
        self._nose_result.addFailure(test, err)
    return want_failure or None

Baseclass override. Called when a test fails.

If the test isn't going to be rerun again, then report the failure to the
nose test result.

:param test: The test that has raised an error
:type test: :class:`nose.case.Test`
:param err: Information about the test failure (from sys.exc_info())
:type err: `tuple` of `class`, :class:`Exception`, `traceback`
:return: True, if the test will be rerun; False, if nose should handle it.
:rtype: `bool`
def to_dotfile(self):
    domain = self.get_domain()
    filename = "%s.dot" % (self.__class__.__name__)
    nx.write_dot(domain, filename)
    return filename
Writes a DOT graphviz file of the domain structure, and returns the filename
def do_zsh_complete(cli, prog_name):
    commandline = os.environ['COMMANDLINE']
    args = split_args(commandline)[1:]
    if args and not commandline.endswith(' '):
        incomplete = args[-1]
        args = args[:-1]
    else:
        incomplete = ''

    def escape(s):
        return (s.replace('"', '""')
                 .replace("'", "''")
                 .replace('$', '\\$')
                 .replace('`', '\\`'))

    res = []
    for item, help in get_choices(cli, prog_name, args, incomplete):
        if help:
            res.append(r'"%s"\:"%s"' % (escape(item), escape(help)))
        else:
            res.append('"%s"' % escape(item))
    if res:
        echo("_arguments '*: :((%s))'" % '\n'.join(res))
    else:
        echo("_files")
    return True

Do the zsh completion

Parameters
----------
cli : click.Command
    The main click Command of the program
prog_name : str
    The program name on the command line

Returns
-------
bool
    True if the completion was successful, False otherwise
def main():
    test_targets = (
        [ARCH_I386, MACH_I386_I386_INTEL_SYNTAX, ENDIAN_MONO,
         "\x55\x89\xe5\xE8\xB8\xFF\xFF\xFF", 0x1000],
        [ARCH_I386, MACH_X86_64_INTEL_SYNTAX, ENDIAN_MONO,
         "\x55\x48\x89\xe5\xE8\xA3\xFF\xFF\xFF", 0x1000],
        [ARCH_ARM, MACH_ARM_2, ENDIAN_LITTLE,
         "\x04\xe0\x2d\xe5\xED\xFF\xFF\xEB", 0x1000],
        [ARCH_MIPS, MACH_MIPSISA32, ENDIAN_BIG,
         "\x0C\x10\x00\x97\x00\x00\x00\x00", 0x1000],
        [ARCH_POWERPC, MACH_PPC, ENDIAN_BIG,
         "\x94\x21\xFF\xE8\x7C\x08\x02\xA6", 0x1000],
    )

    for target_arch, target_mach, target_endian, binary, address in test_targets:
        opcodes = Opcodes(target_arch, target_mach, target_endian)
        print "\n[+] Architecture %s - Machine %d" % \
            (opcodes.architecture_name, opcodes.machine)
        print "[+] Disassembly:"
        for vma, size, disasm in opcodes.disassemble(binary, address):
            print "0x%X (size=%d)\t %s" % (vma, size, disasm)
Test case for simple opcode disassembly.
def throw_if_parsable(resp):
    e = None
    try:
        e = parse_response(resp)
    except:
        LOG.debug(utils.stringify_expt())
    if e is not None:
        raise e
    if resp.status_code == 404:
        raise NoSuchObject('No such object.')
    else:
        text = resp.text if six.PY3 else resp.content
        if text:
            raise ODPSError(text, code=str(resp.status_code))
        else:
            raise ODPSError(str(resp.status_code))

Try to parse the content of the response and raise an exception if
necessary.
def determine_result(self, returncode, returnsignal, output, isTimeout):
    splitout = "\n".join(output)
    if 'SMACK found no errors' in splitout:
        return result.RESULT_TRUE_PROP
    errmsg = re.search(r'SMACK found an error(:\s+([^\.]+))?\.', splitout)
    if errmsg:
        errtype = errmsg.group(2)
        if errtype:
            if 'invalid pointer dereference' == errtype:
                return result.RESULT_FALSE_DEREF
            elif 'invalid memory deallocation' == errtype:
                return result.RESULT_FALSE_FREE
            elif 'memory leak' == errtype:
                return result.RESULT_FALSE_MEMTRACK
            elif 'memory cleanup' == errtype:
                return result.RESULT_FALSE_MEMCLEANUP
            elif 'integer overflow' == errtype:
                return result.RESULT_FALSE_OVERFLOW
        else:
            return result.RESULT_FALSE_REACH
    return result.RESULT_UNKNOWN
Returns a BenchExec result status based on the output of SMACK
def get_log_entry_ids_by_log(self, log_id):
    id_list = []
    for log_entry in self.get_log_entries_by_log(log_id):
        id_list.append(log_entry.get_id())
    return IdList(id_list)

Gets the list of ``LogEntry`` ``Ids`` associated with a ``Log``.

arg:    log_id (osid.id.Id): ``Id`` of a ``Log``
return: (osid.id.IdList) - list of related logEntry ``Ids``
raise:  NotFound - ``log_id`` is not found
raise:  NullArgument - ``log_id`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
def plot_joint_sfs_folded_scaled(*args, **kwargs):
    imshow_kwargs = kwargs.get('imshow_kwargs', dict())
    imshow_kwargs.setdefault('norm', None)
    kwargs['imshow_kwargs'] = imshow_kwargs
    ax = plot_joint_sfs_folded(*args, **kwargs)
    ax.set_xlabel('minor allele count (population 1)')
    ax.set_ylabel('minor allele count (population 2)')
    return ax

Plot a scaled folded joint site frequency spectrum.

Parameters
----------
s : array_like, int, shape (n_chromosomes_pop1/2, n_chromosomes_pop2/2)
    Joint site frequency spectrum.
ax : axes, optional
    Axes on which to draw. If not provided, a new figure will be created.
imshow_kwargs : dict-like
    Additional keyword arguments, passed through to ax.imshow().

Returns
-------
ax : axes
    The axes on which the plot was drawn.
def from_two_bytes(bytes):
    lsb, msb = bytes
    try:
        return msb << 7 | lsb
    except TypeError:
        try:
            lsb = ord(lsb)
        except TypeError:
            pass
        try:
            msb = ord(msb)
        except TypeError:
            pass
        return msb << 7 | lsb
Return an integer from two 7 bit bytes.
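The 7-bit packing is easy to check standalone. A minimal sketch, simplified to the integer-input path (the Python 2 `ord` fallback is dropped), with a hypothetical inverse added for illustration:

```python
def from_two_bytes(two_bytes):
    # Combine two 7-bit bytes (LSB first) into one integer.
    lsb, msb = two_bytes
    return msb << 7 | lsb

# Hypothetical inverse, for illustration only: split a 14-bit value
# into two 7-bit bytes, LSB first.
def to_two_bytes(value):
    return (value & 0x7F, (value >> 7) & 0x7F)

print(from_two_bytes((0x34, 0x12)))  # 2356, i.e. 0x12 << 7 | 0x34
print(to_two_bytes(2356))            # (52, 18)
```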
def _get_json(value):
    if hasattr(value, 'replace'):
        value = value.replace('\n', ' ')
    try:
        return json.loads(value)
    except json.JSONDecodeError:
        if hasattr(value, 'replace'):
            value = value.replace('"', '\\"')
        return json.loads('"{}"'.format(value))
Convert the given value to a JSON object.
def normpath(path):
    expanded = os.path.expanduser(os.path.expandvars(path))
    return os.path.normcase(os.path.normpath(expanded))
Norm given system path with all available norm or expand functions in os.path.
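A minimal sketch of the same chain of os.path calls, showing how environment variables, redundant separators, and `.` components collapse in one pass (the `$PROJ` variable is made up; the exact output is platform dependent since `normcase` lowercases on Windows):

```python
import os

def normpath(path):
    # Expand ~ and $VARS first, then normalize separators and case.
    expanded = os.path.expanduser(os.path.expandvars(path))
    return os.path.normcase(os.path.normpath(expanded))

os.environ["PROJ"] = "/srv/data"
print(normpath("$PROJ//logs/./today"))  # /srv/data/logs/today on POSIX
```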
def perplexity(test_data, predictions, topics, vocabulary):
    test_data = _check_input(test_data)
    assert isinstance(predictions, _SArray), \
        "Predictions must be an SArray of vector type."
    assert predictions.dtype == _array.array, \
        "Predictions must be probabilities. Try using m.predict() with " + \
        "output_type='probability'."
    opts = {'test_data': test_data,
            'predictions': predictions,
            'topics': topics,
            'vocabulary': vocabulary}
    response = _turicreate.extensions._text.topicmodel_get_perplexity(opts)
    return response['perplexity']

Compute the perplexity of a set of test documents given a set of predicted
topics.

Let theta be the matrix of document-topic probabilities, where
theta_ik = p(topic k | document i). Let Phi be the matrix of term-topic
probabilities, where phi_jk = p(word j | topic k).

Then for each word in each document, we compute for a given word w and
document d

.. math::
    p(word | \theta[doc_id,:], \phi[word_id,:]) =
        \sum_k \theta[doc_id, k] * \phi[word_id, k]

We compute loglikelihood to be:

.. math::
    l(D) = \sum_{i \in D} \sum_{j in D_i} count_{i,j} *
        log Pr(word_{i,j} | \theta, \phi)

and perplexity to be

.. math::
    \exp \{ - l(D) / \sum_i \sum_j count_{i,j} \}

Parameters
----------
test_data : SArray of type dict or SFrame with a single column of type dict
    Documents in bag-of-words format.

predictions : SArray
    An SArray of vector type, where each vector contains estimates of the
    probability that this document belongs to each of the topics. This
    must have the same size as test_data; otherwise an exception occurs.
    This can be the output of
    :py:func:`~turicreate.topic_model.TopicModel.predict`, for example.

topics : SFrame
    An SFrame containing two columns: 'vocabulary' and
    'topic_probabilities'. The value returned by m['topics'] is a valid
    input for this argument, where m is a trained
    :py:class:`~turicreate.topic_model.TopicModel`.

vocabulary : SArray
    An SArray of words to use. All words in test_data that are not in
    this vocabulary will be ignored.

Notes
-----
For more details, see equations 13-16 of [PattersonTeh2013].

References
----------
.. [PERP] `Wikipedia - perplexity
   <http://en.wikipedia.org/wiki/Perplexity>`_
.. [PattersonTeh2013] Patterson, Teh. `"Stochastic Gradient Riemannian
   Langevin Dynamics on the Probability Simplex"
   <http://www.stats.ox.ac.uk/~teh/research/compstats/PatTeh2013a.pdf>`_
   NIPS, 2013.

Examples
--------
>>> from turicreate import topic_model
>>> train_data, test_data = turicreate.text_analytics.random_split(docs)
>>> m = topic_model.create(train_data)
>>> pred = m.predict(train_data)
>>> topics = m['topics']
>>> p = topic_model.perplexity(test_data, pred,
...                            topics['topic_probabilities'],
...                            topics['vocabulary'])
>>> p
1720.7  # lower values are better
def extend_settings(self, data_id, files, secrets):
    process = Data.objects.get(pk=data_id).process
    if process.requirements.get('resources', {}).get('secrets', False):
        raise PermissionDenied(
            "Process which requires access to secrets cannot be run using "
            "the local executor"
        )
    return super().extend_settings(data_id, files, secrets)
Prevent processes requiring access to secrets from being run.
def decimal128_to_decimal(b):
    "decimal128 bytes to Decimal"
    v = decimal128_to_sign_digits_exponent(b)
    if isinstance(v, Decimal):
        return v
    sign, digits, exponent = v
    return Decimal((sign, Decimal(digits).as_tuple()[1], exponent))
decimal128 bytes to Decimal
def replace_suffixes_4(self, word):
    length = len(word)
    replacements = {'ational': 'ate', 'tional': 'tion', 'alize': 'al',
                    'icate': 'ic', 'iciti': 'ic', 'ical': 'ic',
                    'ful': '', 'ness': ''}
    for suffix in replacements.keys():
        if word.endswith(suffix):
            suffix_length = len(suffix)
            if self.r1 <= (length - suffix_length):
                word = word[:-suffix_length] + replacements[suffix]
    if word.endswith('ative'):
        if self.r1 <= (length - 5) and self.r2 <= (length - 5):
            word = word[:-5]
    return word
Perform replacements on even more common suffixes.
def _get(self, url, params=None, headers=None):
    url = self.clean_url(url)
    response = requests.get(url, params=params, verify=self.verify,
                            timeout=self.timeout, headers=headers)
    return response
Wraps a GET request with a url check
def add(self, snapshot, distributions, component='main', storage=""):
    for dist in distributions:
        self.publish(dist, storage=storage).add(snapshot, component)
Add mirror or repo to publish
def train_step_single(self, Xi, yi, **fit_params):
    self.module_.train()
    self.optimizer_.zero_grad()
    y_pred = self.infer(Xi, **fit_params)
    loss = self.get_loss(y_pred, yi, X=Xi, training=True)
    loss.backward()
    self.notify(
        'on_grad_computed',
        named_parameters=TeeGenerator(self.module_.named_parameters()),
        X=Xi,
        y=yi
    )
    return {
        'loss': loss,
        'y_pred': y_pred,
    }

Compute y_pred, loss value, and update net's gradients.

The module is set to be in train mode (e.g. dropout is applied).

Parameters
----------
Xi : input data
    A batch of the input data.
yi : target data
    A batch of the target data.
**fit_params : dict
    Additional parameters passed to the ``forward`` method of the module
    and to the ``self.train_split`` call.
def get_es(**overrides):
    defaults = {
        'urls': settings.ES_URLS,
        'timeout': getattr(settings, 'ES_TIMEOUT', 5)
    }
    defaults.update(overrides)
    return base_get_es(**defaults)

Return an elasticsearch Elasticsearch object using settings from
``settings.py``.

:arg overrides: Allows you to override defaults to create the
    ElasticSearch object. You can override any of the arguments listed in
    :py:func:`elasticutils.get_es`.

For example, if you wanted to create an ElasticSearch with a longer
timeout to a different cluster, you'd do:

>>> from elasticutils.contrib.django import get_es
>>> es = get_es(urls=['http://some_other_cluster:9200'], timeout=30)
def etag(self, href):
    if self and self._etag is None:
        self._etag = LoadElement(href, only_etag=True)
    return self._etag
ETag can be None if a subset of element json is using this container, such as the case with Routing.
def simplified_rayliegh_vel(self):
    thicks = np.array([l.thickness for l in self])
    depths_mid = np.array([l.depth_mid for l in self])
    shear_vels = np.array([l.shear_vel for l in self])

    mode_incr = depths_mid * thicks / shear_vels ** 2
    shape = np.r_[np.cumsum(mode_incr[::-1])[::-1], 0]
    freq_fund = np.sqrt(
        4 * np.sum(thicks * depths_mid ** 2 / shear_vels ** 2) /
        np.sum(thicks *
               np.sum(np.c_[shape, np.roll(shape, -1)], axis=1)[:-1] ** 2))
    period_fun = 2 * np.pi / freq_fund
    rayleigh_vel = 4 * thicks.sum() / period_fun
    return rayleigh_vel

Simplified Rayleigh velocity of the site.

This follows the simplifications proposed by Urzua et al. (2017)

Returns
-------
rayleigh_vel : float
    Equivalent shear-wave velocity.
def normalize_curves_eb(curves):
    non_zero_curves = [(losses, poes)
                       for losses, poes in curves if losses[-1] > 0]
    if not non_zero_curves:
        return curves[0][0], numpy.array([poes for _losses, poes in curves])
    else:
        max_losses = [losses[-1] for losses, _poes in non_zero_curves]
        reference_curve = non_zero_curves[numpy.argmax(max_losses)]
        loss_ratios = reference_curve[0]
        curves_poes = [
            interpolate.interp1d(losses, poes, bounds_error=False,
                                 fill_value=0)(loss_ratios)
            for losses, poes in curves]
        for cp in curves_poes:
            if numpy.isnan(cp[0]):
                cp[0] = 0
        return loss_ratios, numpy.array(curves_poes)

A more sophisticated version of normalize_curves, used in the event based
calculator.

:param curves: a list of pairs (losses, poes)
:returns: first losses, all_poes
def _get(url, headers={}, params=None):
    param_string = _foursquare_urlencode(params)
    for i in xrange(NUM_REQUEST_RETRIES):
        try:
            try:
                response = requests.get(url, headers=headers,
                                        params=param_string,
                                        verify=VERIFY_SSL)
                return _process_response(response)
            except requests.exceptions.RequestException as e:
                _log_and_raise_exception('Error connecting with foursquare API', e)
        except FoursquareException as e:
            if e.__class__ in [InvalidAuth, ParamError, EndpointError,
                               NotAuthorized, Deprecated]:
                raise
            if (i + 1) == NUM_REQUEST_RETRIES:
                raise
        time.sleep(1)
Tries to GET data from an endpoint using retries
def get_crt_common_name(certificate_path=OLD_CERTIFICATE_PATH):
    try:
        certificate_file = open(certificate_path)
        crt = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM,
                                              certificate_file.read())
        return crt.get_subject().commonName
    except IOError:
        return None
Get CN from certificate
def wnelmd(point, window):
    assert isinstance(window, stypes.SpiceCell)
    assert window.dtype == 1
    point = ctypes.c_double(point)
    return bool(libspice.wnelmd_c(point, ctypes.byref(window)))

Determine whether a point is an element of a double precision window.

http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/wnelmd_c.html

:param point: Input point.
:type point: float
:param window: Input window
:type window: spiceypy.utils.support_types.SpiceCell
:return: returns True if point is an element of window.
:rtype: bool
def func(self, p):
    self._set_stochastics(p)
    try:
        return -1. * self.logp
    except ZeroProbability:
        return Inf
The function that gets passed to the optimizers.
def open(self, bus):
    self.fd = os.open("/dev/i2c-{}".format(bus), os.O_RDWR)
    self.funcs = self._get_funcs()

Open a given i2c bus.

:param bus: i2c bus number (e.g. 0 or 1)
:type bus: int
def slot_name_from_member_name(member_name):
    def replace_char(match):
        c = match.group(0)
        return '_' if c in '-.' else "u{:04d}".format(ord(c))

    slot_name = re.sub('[^a-z0-9_]', replace_char, member_name.lower())
    return slot_name[0:63]

Translate member name to valid PostgreSQL slot name.

PostgreSQL replication slot names must be valid PostgreSQL names. This
function maps the wider space of member names to valid PostgreSQL names.
Names are lowercased, dashes and periods common in hostnames are replaced
with underscores, other characters are encoded as their unicode codepoint.
Name is truncated to 64 characters. Multiple different member names may
map to a single slot name.
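The mapping is pure string manipulation, so it can be exercised standalone. A sketch with the function reproduced verbatim (the member names below are made up for illustration):

```python
import re

def slot_name_from_member_name(member_name):
    def replace_char(match):
        c = match.group(0)
        # Dashes and periods become underscores; any other character
        # outside [a-z0-9_] is encoded as its unicode codepoint.
        return '_' if c in '-.' else "u{:04d}".format(ord(c))

    slot_name = re.sub('[^a-z0-9_]', replace_char, member_name.lower())
    return slot_name[0:63]

print(slot_name_from_member_name("pg-node1.example.com"))  # pg_node1_example_com
print(slot_name_from_member_name("Node#2"))                # nodeu00352
```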
def fetch_pillar(self):
    log.debug('Pillar cache getting external pillar with ext: %s', self.ext)
    fresh_pillar = Pillar(self.opts,
                          self.grains,
                          self.minion_id,
                          self.saltenv,
                          ext=self.ext,
                          functions=self.functions,
                          pillarenv=self.pillarenv)
    return fresh_pillar.compile_pillar()
In the event of a cache miss, we need to incur the overhead of caching a new pillar.
def reify_arrays(arrays, dims, copy=True):
    arrays = ({k: AttrDict(**a) for k, a in arrays.iteritems()}
              if copy else arrays)
    for n, a in arrays.iteritems():
        a.shape = tuple(dims[v].extent_size if isinstance(v, str) else v
                        for v in a.shape)
    return arrays
Reify arrays, given the supplied dimensions. If copy is True, returns a copy of arrays else performs this inplace.
def _load_feed(path: str, view: View, config: nx.DiGraph) -> Feed:
    config_ = remove_node_attributes(config, ["converters", "transformations"])
    feed_ = Feed(path, view={}, config=config_)
    for filename, column_filters in view.items():
        config_ = reroot_graph(config_, filename)
        view_ = {filename: column_filters}
        feed_ = Feed(feed_, view=view_, config=config_)
    return Feed(feed_, config=config)
Multi-file feed filtering
def last_revision(self, mod: YangIdentifier) -> ModuleId:
    revs = [mn for mn in self.modules if mn[0] == mod]
    if not revs:
        raise ModuleNotRegistered(mod)
    return sorted(revs, key=lambda x: x[1])[-1]

Return the last revision of a module that's part of the data model.

Args:
    mod: Name of a module or submodule.

Raises:
    ModuleNotRegistered: If the module `mod` is not present in the data
        model.
def get_grp_name(self, code):
    nt_code = self.code2nt.get(code.strip(), None)
    if nt_code is not None:
        return nt_code.group, nt_code.name
    return "", ""
Return group and name for an evidence code.
def _do_download(self, transport, file_obj, download_url, headers,
                 start=None, end=None):
    if self.chunk_size is None:
        download = Download(
            download_url, stream=file_obj, headers=headers,
            start=start, end=end
        )
        download.consume(transport)
    else:
        download = ChunkedDownload(
            download_url,
            self.chunk_size,
            file_obj,
            headers=headers,
            start=start if start else 0,
            end=end,
        )
        while not download.finished:
            download.consume_next_chunk(transport)

Perform a download without any error handling.

This is intended to be called by :meth:`download_to_file` so it can be
wrapped with error handling / remapping.

:type transport: :class:`~google.auth.transport.requests.AuthorizedSession`
:param transport: The transport (with credentials) that will make
    authenticated requests.

:type file_obj: file
:param file_obj: A file handle to which to write the blob's data.

:type download_url: str
:param download_url: The URL where the media can be accessed.

:type headers: dict
:param headers: Optional headers to be sent with the request(s).

:type start: int
:param start: Optional, the first byte in a range to be downloaded.

:type end: int
:param end: Optional, The last byte in a range to be downloaded.
def _add_post_data(self, request: Request):
    if self._item_session.url_record.post_data:
        data = wpull.string.to_bytes(self._item_session.url_record.post_data)
    else:
        data = wpull.string.to_bytes(
            self._processor.fetch_params.post_data
        )

    request.method = 'POST'
    request.fields['Content-Type'] = 'application/x-www-form-urlencoded'
    request.fields['Content-Length'] = str(len(data))

    _logger.debug('Posting with data {0}.', data)

    if not request.body:
        request.body = Body(io.BytesIO())

    with wpull.util.reset_file_offset(request.body):
        request.body.write(data)
Add data to the payload.
def make_gffutils_db(gtf, db):
    import gffutils
    out_db = gffutils.create_db(gtf, db, keep_order=True,
                                infer_gene_extent=False)
    return out_db

Make database for gffutils.

Parameters
----------
gtf : str
    Path to Gencode gtf file.
db : str
    Path to save database to.

Returns
-------
out_db : gffutils.FeatureDB
    gffutils feature database.
def season_game_logs(season):
    max_year = int(datetime.now().year) - 1
    if season > max_year or season < 1871:
        raise ValueError('Season must be between 1871 and {}'.format(max_year))
    file_name = 'GL{}.TXT'.format(season)
    z = get_zip_file(gamelog_url.format(season))
    data = pd.read_csv(z.open(file_name), header=None, sep=',', quotechar='"')
    data.columns = gamelog_columns
    return data
Pull Retrosheet game logs for a given season
def dns(): if salt.utils.platform.is_windows() or 'proxyminion' in __opts__: return {} resolv = salt.utils.dns.parse_resolv() for key in ('nameservers', 'ip4_nameservers', 'ip6_nameservers', 'sortlist'): if key in resolv: resolv[key] = [six.text_type(i) for i in resolv[key]] return {'dns': resolv} if resolv else {}
Parse the resolver configuration file .. versionadded:: 2016.3.0
def buckingham_input(self, structure, keywords, library=None, uc=True, valence_dict=None): gin = self.keyword_line(*keywords) gin += self.structure_lines(structure, symm_flg=not uc) if not library: gin += self.buckingham_potential(structure, valence_dict) else: gin += self.library_line(library) return gin
Gets a GULP input for an oxide structure and a Buckingham potential from a library. Args: structure: pymatgen.core.structure.Structure keywords: GULP first line keywords. library (Default=None): File containing the species and potential. uc (Default=True): Unit Cell Flag. valence_dict: {El: valence}
def add_prioritized(self, command_obj, priority): if priority not in self.__priorities.keys(): self.__priorities[priority] = [] self.__priorities[priority].append(command_obj)
Add command with the specified priority :param command_obj: command to add :param priority: command priority :return: None
def get_resource_ids_by_bin(self, bin_id): id_list = [] for resource in self.get_resources_by_bin(bin_id): id_list.append(resource.get_id()) return IdList(id_list)
Gets the list of ``Resource`` ``Ids`` associated with a ``Bin``. arg: bin_id (osid.id.Id): ``Id`` of a ``Bin`` return: (osid.id.IdList) - list of related resource ``Ids`` raise: NotFound - ``bin_id`` is not found raise: NullArgument - ``bin_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*
def validate_ip(s): if _DOTTED_QUAD_RE.match(s): quads = s.split('.') for q in quads: if int(q) > 255: return False return True return False
Validate a dotted-quad ip address. The string is considered a valid dotted-quad address if it consists of one to four octets (0-255) separated by periods (.). >>> validate_ip('127.0.0.1') True >>> validate_ip('127.0') True >>> validate_ip('127.0.0.256') False >>> validate_ip(LOCALHOST) True >>> validate_ip(None) #doctest: +IGNORE_EXCEPTION_DETAIL Traceback (most recent call last): ... TypeError: expected string or buffer :param s: String to validate as a dotted-quad ip address. :type s: str :returns: ``True`` if a valid dotted-quad ip address, ``False`` otherwise. :raises: TypeError
def build_label(self, ident, cls): ident_w_label = ident + ':' + cls.__label__ self._ast['match'].append('({0})'.format(ident_w_label)) self._ast['return'] = ident self._ast['result_class'] = cls return ident
Match nodes by a label
def thumbnails_for_file(relative_source_path, root=None, basedir=None, subdir=None, prefix=None): if root is None: root = settings.MEDIA_ROOT if prefix is None: prefix = settings.THUMBNAIL_PREFIX if subdir is None: subdir = settings.THUMBNAIL_SUBDIR if basedir is None: basedir = settings.THUMBNAIL_BASEDIR source_dir, filename = os.path.split(relative_source_path) thumbs_path = os.path.join(root, basedir, source_dir, subdir) if not os.path.isdir(thumbs_path): return [] files = all_thumbnails(thumbs_path, recursive=False, prefix=prefix, subdir='') return files.get(filename, [])
Return a list of dictionaries, one for each thumbnail belonging to the source image. The following list explains each key of the dictionary: `filename` -- absolute thumbnail path `x` and `y` -- the size of the thumbnail `options` -- list of options for this thumbnail `quality` -- quality setting for this thumbnail
def get_privacy_options(user): privacy_options = {} for ptype in user.permissions: for field in user.permissions[ptype]: if ptype == "self": privacy_options["{}-{}".format(field, ptype)] = user.permissions[ptype][field] else: privacy_options[field] = user.permissions[ptype][field] return privacy_options
Get a user's privacy options to pass as an initial value to a PrivacyOptionsForm.
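A self-contained sketch of how `get_privacy_options` flattens a permissions mapping. The `SimpleNamespace` user and the permission keys here are hypothetical stand-ins for the real user model, not the app's actual schema:

```python
from types import SimpleNamespace

def get_privacy_options(user):
    # "self" permissions get a "<field>-self" key; all others keep the field name.
    privacy_options = {}
    for ptype in user.permissions:
        for field in user.permissions[ptype]:
            if ptype == "self":
                privacy_options["{}-{}".format(field, ptype)] = user.permissions[ptype][field]
            else:
                privacy_options[field] = user.permissions[ptype][field]
    return privacy_options

# Hypothetical user object; the real model is whatever the surrounding app provides.
user = SimpleNamespace(permissions={
    "self": {"show_address": True},
    "parent": {"show_schedule": False},
})
print(get_privacy_options(user))
# -> {'show_address-self': True, 'show_schedule': False}
```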
def all_operations(self) -> Iterator[ops.Operation]: return (op for moment in self for op in moment.operations)
Iterates over the operations applied by this circuit. Operations from earlier moments will be iterated over first. Operations within a moment are iterated in the order they were given to the moment's constructor.
def _create_vxr(self, f, recStart, recEnd, currentVDR, priorVXR, vvrOffset): vxroffset = self._write_vxr(f) self._use_vxrentry(f, vxroffset, recStart, recEnd, vvrOffset) if (priorVXR == 0): self._update_offset_value(f, currentVDR+28, 8, vxroffset) else: self._update_offset_value(f, priorVXR+12, 8, vxroffset) self._update_offset_value(f, currentVDR+36, 8, vxroffset) return vxroffset
Create a VXR AND use a VXR Parameters: f : file The open CDF file recStart : int The start record of this block recEnd : int The ending record of this block currentVDR : int The byte location of the variables VDR priorVXR : int The byte location of the previous VXR vvrOffset : int The byte location of the VVR Returns: vxroffset : int The byte location of the created vxr
def _make_session(): sess = requests.Session() sess.mount('http://', requests.adapters.HTTPAdapter(max_retries=False)) sess.mount('https://', requests.adapters.HTTPAdapter(max_retries=False)) return sess
Create session object. :rtype: requests.Session
def rcategorical(p, size=None): out = flib.rcat(p, np.random.random(size=size)) if sum(out.shape) == 1: return out.squeeze() else: return out
Categorical random variates.
def read_metadata(self, key): if getattr(getattr(self.group, 'meta', None), key, None) is not None: return self.parent.select(self._get_metadata_path(key)) return None
Return the metadata array for this key
def cmd_tracker_calpress(self, args): connection = self.find_connection() if not connection: print("No antenna tracker found") return connection.calibrate_pressure()
calibrate barometer on tracker
def normalize(expr): children = [] for child in expr.children: branch = normalize(child) if branch is None: continue if type(branch) is type(expr): children.extend(branch.children) else: children.append(branch) if len(children) == 0: return None if len(children) == 1: return children[0] return type(expr)(*children, start=children[0].start, end=children[-1].end)
Pass through n-ary expressions, and eliminate empty branches. Variadic and binary expressions recursively visit all their children. If all children are eliminated then the parent expression is also eliminated: (& [removed] [removed]) => [removed] If only one child is left, it is promoted to replace the parent node: (& True) => True
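A self-contained sketch of the flattening/elimination pass above. The `Expr`/`And`/`Leaf` classes are hypothetical stand-ins for the real parse-tree nodes, and the `Leaf` base case is an assumption (the real parser presumably handles terminal nodes separately); only the attributes the function relies on (`children`, `start`, `end`) are modeled:

```python
class Expr:
    def __init__(self, *children, start=0, end=0):
        self.children = list(children)
        self.start = start
        self.end = end

class And(Expr):
    pass

class Leaf(Expr):
    # Hypothetical terminal node; assumed to be returned unchanged.
    def __init__(self, value):
        super().__init__()
        self.value = value

def normalize(expr):
    if isinstance(expr, Leaf):          # assumed base case, see lead-in
        return expr
    children = []
    for child in expr.children:
        branch = normalize(child)
        if branch is None:              # eliminate empty branches
            continue
        if type(branch) is type(expr):  # flatten same-type nested expressions
            children.extend(branch.children)
        else:
            children.append(branch)
    if len(children) == 0:              # all branches eliminated
        return None
    if len(children) == 1:              # promote the single remaining child
        return children[0]
    return type(expr)(*children, start=children[0].start, end=children[-1].end)

# (& (& a b) c) flattens to (& a b c)
a, b, c = Leaf('a'), Leaf('b'), Leaf('c')
flat = normalize(And(And(a, b), c))
print([leaf.value for leaf in flat.children])  # -> ['a', 'b', 'c']
# An expression whose branches were all eliminated disappears entirely
print(normalize(And(And())))  # -> None
```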
def resolve(self): values = {} for target_name in self.target_names: if self.context.is_build_needed(self.parent, target_name): self.context.build_task(target_name) if len(self.keyword_chain) == 0: values[target_name] = self.context.tasks[target_name].value else: values[target_name] = reduce( lambda task, name: getattr(task, name), self.keyword_chain, self.context.tasks[target_name].task) return self.function(**values)
Builds all targets of this dependency and returns the result of self.function on the resulting values
def start_output (self): super(CSVLogger, self).start_output() row = [] if self.has_part("intro"): self.write_intro() self.flush() else: self.write(u"") self.queue = StringIO() self.writer = csv.writer(self.queue, dialect=self.dialect, delimiter=self.separator, lineterminator=self.linesep, quotechar=self.quotechar) for s in Columns: if self.has_part(s): row.append(s) if row: self.writerow(row)
Write checking start info as csv comment.
def perform_patch(cls, operations, obj, state=None): if state is None: state = {} for operation in operations: if not cls._process_patch_operation(operation, obj=obj, state=state): log.info( "%s patching has been stopped because of unknown operation %s", obj.__class__.__name__, operation ) raise ValidationError( "Failed to update %s details. Operation %s could not succeed." % ( obj.__class__.__name__, operation ) ) return True
Performs all necessary operations by calling class methods with corresponding names.
def serialize_to_list(self, name, datas):
    items = datas.get('items', None)
    splitter = datas.get('splitter', self._DEFAULT_SPLITTER)
    if items is None:
        msg = ("List reference '{}' lacks the required 'items' variable "
               "or is empty")
        raise SerializerError(msg.format(name))
    else:
        items = self.value_splitter(name, 'items', items, mode=splitter)
    return items
Serialize given datas to a list structure. List structure is very simple and only requires a variable ``--items`` which is a string of values separated with an empty space. All other properties are ignored. Arguments: name (string): Name only used inside possible exception message. datas (dict): Datas to serialize. Returns: list: List of serialized reference datas.
def get_version_from_scm(path=None): if is_git(path): return 'git', get_git_version(path) elif is_svn(path): return 'svn', get_svn_version(path) return None, None
Get the current version string of this package using SCM tool. Parameters ---------- path : None or string, optional The SCM checkout path (default is current directory) Returns ------- version : string The version string for this package
def make_pkh_output(value, pubkey, witness=False): return _make_output( value=utils.i2le_padded(value, 8), output_script=make_pkh_output_script(pubkey, witness))
int, bytearray -> TxOut
def list_formatter(handler, item, value): return u', '.join(str(v) for v in value)
Format list.
def context(self): "Internal property that returns the stylus compiler" if self._context is None: with io.open(path.join(path.abspath(path.dirname(__file__)), "compiler.js")) as compiler_file: compiler_source = compiler_file.read() self._context = self.backend.compile(compiler_source) return self._context
Internal property that returns the stylus compiler
def create(self, user, obj, **kwargs): follow = Follow(user=user) follow.target = obj follow.save() return follow
Create a new follow link between a user and an object of a registered model type.
def _nbOperations(n): if n < 2: return 0 else: n0 = (n + 2) // 3 n02 = n0 + n // 3 return 3 * (n02) + n0 + _nbOperations(n02)
Exact number of atomic operations in _radixPass.
def message_convert_rx(message_rx): is_extended_id = bool(message_rx.flags & IS_ID_TYPE) is_remote_frame = bool(message_rx.flags & IS_REMOTE_FRAME) is_error_frame = bool(message_rx.flags & IS_ERROR_FRAME) return Message(timestamp=message_rx.timestamp, is_remote_frame=is_remote_frame, is_extended_id=is_extended_id, is_error_frame=is_error_frame, arbitration_id=message_rx.id, dlc=message_rx.sizeData, data=message_rx.data[:message_rx.sizeData])
Convert a message from the CANAL type to the python-can Message type
def bit_clone( bits ): new = BitSet( bits.size ) new.ior( bits ) return new
Clone a bitset
def _GetPluginData(self): return_dict = {} return_dict['Versions'] = [ ('plaso engine', plaso.__version__), ('python', sys.version)] hashers_information = hashers_manager.HashersManager.GetHashersInformation() parsers_information = parsers_manager.ParsersManager.GetParsersInformation() plugins_information = ( parsers_manager.ParsersManager.GetParserPluginsInformation()) presets_information = parsers_manager.ParsersManager.GetPresetsInformation() return_dict['Hashers'] = hashers_information return_dict['Parsers'] = parsers_information return_dict['Parser Plugins'] = plugins_information return_dict['Parser Presets'] = presets_information return return_dict
Retrieves the version and various plugin information. Returns: dict[str, list[str]]: available parsers and plugins.
def read_next_line(self): next_line = self.file.readline() if not next_line or next_line[-1:] != '\n': self.file = None else: next_line = next_line[:-1] expanded = next_line.expandtabs() edit = urwid.Edit("", expanded, allow_tab=True) edit.set_edit_pos(0) edit.original_text = next_line self.lines.append(edit) return next_line
Read another line from the file.
def addr_info(addr): if isinstance(addr, basestring): return socket.AF_UNIX if not isinstance(addr, collections.Sequence): raise ValueError("address is not a tuple") if len(addr) < 2: raise ValueError("cannot understand address") if not (0 <= addr[1] < 65536): raise ValueError("cannot understand port number") ipaddr = addr[0] if not ipaddr: if len(addr) != 2: raise ValueError("cannot understand address") return socket.AF_INET if netaddr.valid_ipv6(ipaddr): if len(addr) > 4: raise ValueError("cannot understand address") return socket.AF_INET6 elif netaddr.valid_ipv4(ipaddr): if len(addr) != 2: raise ValueError("cannot understand address") return socket.AF_INET raise ValueError("cannot understand address")
Interprets an address in standard tuple format to determine if it is valid, and, if so, which socket family it is. Returns the socket family.
def run_calculation(self, atoms=None, properties=['energy'],
                    system_changes=all_changes):
    self.calc.calculate(self, atoms, properties, system_changes)
    self.write_input(self.atoms, properties, system_changes)
    if self.command is None:
        raise RuntimeError('Please configure Remote calculator!')
    olddir = os.getcwd()
    errorcode = 0
    try:
        os.chdir(self.directory)
        output = subprocess.check_output(self.command, shell=True)
        self.jobid = output.split()[0]
        self.submited = True
    except subprocess.CalledProcessError as e:
        errorcode = e.returncode
    finally:
        os.chdir(olddir)
    if errorcode:
        raise RuntimeError('%s returned an error: %d' % (self.name, errorcode))
    self.read_results()
Internal calculation executor. We cannot use FileIOCalculator directly since we need to support remote execution. This calculator is different from others. It prepares the directory, launches the remote process and raises the exception to signal that we need to come back for results when the job is finished.
def _get_credentials(vcap_services, service_name=None): service_name = service_name or os.environ.get('STREAMING_ANALYTICS_SERVICE_NAME', None) services = vcap_services['streaming-analytics'] creds = None for service in services: if service['name'] == service_name: creds = service['credentials'] break if creds is None: raise ValueError("Streaming Analytics service " + str(service_name) + " was not found in VCAP_SERVICES") return creds
Retrieves the credentials of the VCAP Service of the specified `service_name`. If `service_name` is not specified, it takes the information from the STREAMING_ANALYTICS_SERVICE_NAME environment variable. Args: vcap_services (dict): A dict representation of the VCAP Services information. service_name (str): One of the service names stored in `vcap_services` Returns: dict: A dict representation of the credentials. Raises: ValueError: Cannot find `service_name` in `vcap_services`
def _add_arguments(self): self._parser.add_argument( '-v', '--version', action='store_true', help="show program's version number and exit") self._parser.add_argument( '-a', '--alias', nargs='?', const=get_alias(), help='[custom-alias-name] prints alias for current shell') self._parser.add_argument( '-l', '--shell-logger', action='store', help='log shell output to the file') self._parser.add_argument( '--enable-experimental-instant-mode', action='store_true', help='enable experimental instant mode, use on your own risk') self._parser.add_argument( '-h', '--help', action='store_true', help='show this help message and exit') self._add_conflicting_arguments() self._parser.add_argument( '-d', '--debug', action='store_true', help='enable debug output') self._parser.add_argument( '--force-command', action='store', help=SUPPRESS) self._parser.add_argument( 'command', nargs='*', help='command that should be fixed')
Adds arguments to parser.
def pipe(self, command, timeout=None, cwd=None): if not timeout: timeout = self.timeout if not self.was_run: self.run(block=False, cwd=cwd) data = self.out if timeout: c = Command(command, timeout) else: c = Command(command) c.run(block=False, cwd=cwd) if data: c.send(data) c.block() return c
Runs the current command and passes its output to the next given process.
def safe_json_response(method): def _safe_document(document): assert isinstance(document, dict), 'Error: provided document is not of DICT type: {0}' \ .format(document.__class__.__name__) for key, value in document.items(): if isinstance(value, dict): document[key] = {k: str(v) for k, v in value.items()} elif isinstance(value, list): document[key] = [str(v) for v in value] else: document[key] = str(document[key]) return document @functools.wraps(method) def _wrapper(self, *args, **kwargs): try: document = method(self, *args, **kwargs) return _safe_document(document) except Exception as e: return self.reply_server_error(e) return _wrapper
Makes sure the response document has all leaf fields converted to strings
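The leaf-stringification step of the decorator above can be exercised standalone. The inner `_safe_document` helper is reproduced here outside the decorator so its behavior is easy to see: top-level scalars become `str`, and one level of nested dicts/lists has each leaf converted to `str`:

```python
def safe_document(document):
    # Standalone copy of the decorator's inner helper; mutates and returns
    # the same dict, converting every leaf value one level deep to str.
    assert isinstance(document, dict), \
        'Error: provided document is not of DICT type: {0}'.format(
            document.__class__.__name__)
    for key, value in document.items():
        if isinstance(value, dict):
            document[key] = {k: str(v) for k, v in value.items()}
        elif isinstance(value, list):
            document[key] = [str(v) for v in value]
        else:
            document[key] = str(document[key])
    return document

doc = safe_document({'count': 3, 'tags': [1, 2], 'meta': {'ok': True}})
print(doc)  # -> {'count': '3', 'tags': ['1', '2'], 'meta': {'ok': 'True'}}
```

Note the conversion is only one level deep: a dict nested inside a list, or two levels of dicts, would not have its leaves converted.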
def DEFINE_multi_enum_class( name, default, enum_class, help, flag_values=_flagvalues.FLAGS, module_name=None, **args): DEFINE_flag( _flag.MultiEnumClassFlag(name, default, help, enum_class), flag_values, module_name, **args)
Registers a flag whose value can be a list of enum members. Use the flag on the command line multiple times to place multiple enum values into the list. Args: name: str, the flag name. default: Union[Iterable[Enum], Iterable[Text], Enum, Text, None], the default value of the flag; see `DEFINE_multi`; only differences are documented here. If the value is a single Enum, it is treated as a single-item list of that Enum value. If it is an iterable, text values within the iterable will be converted to the equivalent Enum objects. enum_class: class, the Enum class with all the possible values for the flag. help: str, the help message. flag_values: FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name: A string, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. **args: Dictionary with extra keyword args that are passed to the Flag __init__.
def copychildren(self, newdoc=None, idsuffix=""): if idsuffix is True: idsuffix = ".copy." + "%08x" % random.getrandbits(32) for c in self: if isinstance(c, AbstractElement): yield c.copy(newdoc,idsuffix)
Generator creating a deep copy of the children of this element. Invokes :meth:`copy` on all children; parameters are the same.
def _learn( permanences, rng, activeCells, activeInput, growthCandidateInput, sampleSize, initialPermanence, permanenceIncrement, permanenceDecrement, connectedPermanence): permanences.incrementNonZerosOnOuter( activeCells, activeInput, permanenceIncrement) permanences.incrementNonZerosOnRowsExcludingCols( activeCells, activeInput, -permanenceDecrement) permanences.clipRowsBelowAndAbove( activeCells, 0.0, 1.0) if sampleSize == -1: permanences.setZerosOnOuter( activeCells, activeInput, initialPermanence) else: existingSynapseCounts = permanences.nNonZerosPerRowOnCols( activeCells, activeInput) maxNewByCell = numpy.empty(len(activeCells), dtype="int32") numpy.subtract(sampleSize, existingSynapseCounts, out=maxNewByCell) permanences.setRandomZerosOnOuter( activeCells, growthCandidateInput, maxNewByCell, initialPermanence, rng)
For each active cell, reinforce active synapses, punish inactive synapses, and grow new synapses to a subset of the active input bits that the cell isn't already connected to. Parameters: ---------------------------- @param permanences (SparseMatrix) Matrix of permanences, with cells as rows and inputs as columns @param rng (Random) Random number generator @param activeCells (sorted sequence) Sorted list of the cells that are learning @param activeInput (sorted sequence) Sorted list of active bits in the input @param growthCandidateInput (sorted sequence) Sorted list of active bits in the input that the activeCells may grow new synapses to For remaining parameters, see the __init__ docstring.
def MeterOffset(point1, point2):
    "Return offset in meters of second arg from first."
    # Tuple parameters unpacked in the body for Python 3 compatibility
    (lat1, lon1), (lat2, lon2) = point1, point2
    dx = EarthDistance((lat1, lon1), (lat1, lon2))
    dy = EarthDistance((lat1, lon1), (lat2, lon1))
    if lat1 < lat2:
        dy *= -1
    if lon1 < lon2:
        dx *= -1
    return (dx, dy)
Return offset in meters of second arg from first.
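A runnable sketch of the offset logic above. `EarthDistance` lives elsewhere in the module; the equirectangular approximation below is a stand-in assumption, not the module's actual implementation:

```python
import math

EARTH_RADIUS_M = 6371000.0

def EarthDistance(p1, p2):
    # Stand-in: equirectangular approximation, fine for small separations.
    (lat1, lon1), (lat2, lon2) = p1, p2
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return EARTH_RADIUS_M * math.hypot(x, y)

def MeterOffset(point1, point2):
    "Return offset in meters of second arg from first."
    (lat1, lon1), (lat2, lon2) = point1, point2
    dx = EarthDistance((lat1, lon1), (lat1, lon2))
    dy = EarthDistance((lat1, lon1), (lat2, lon1))
    if lat1 < lat2:
        dy *= -1
    if lon1 < lon2:
        dx *= -1
    return (dx, dy)

# Per the sign convention above, a point north-east of the reference
# yields negative dx and dy; south-west yields positive values.
dx, dy = MeterOffset((50.0, 8.0), (50.001, 8.001))
print(dx < 0 and dy < 0)  # -> True
```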