def mbar_objective_and_gradient(u_kn, N_k, f_k):
u_kn, N_k, f_k = validate_inputs(u_kn, N_k, f_k)
log_denominator_n = logsumexp(f_k - u_kn.T, b=N_k, axis=1)
log_numerator_k = logsumexp(-log_denominator_n - u_kn, axis=1)
grad = -1 * N_k * (1.0 - np.exp(f_k + log_numerator_k))
obj = math.fsum(log_denominator_n) - N_k.dot(f_k)
return obj, grad | Calculates both objective function and gradient for MBAR.
Parameters
----------
u_kn : np.ndarray, shape=(n_states, n_samples), dtype='float'
The reduced potential energies, i.e. -log unnormalized probabilities
N_k : np.ndarray, shape=(n_states), dtype='int'
The number of samples in each state
f_k : np.ndarray, shape=(n_states), dtype='float'
The reduced free energies of each state
Returns
-------
obj : float
Objective function
grad : np.ndarray, dtype=float, shape=(n_states)
Gradient of objective function
Notes
-----
This objective function is essentially a doubly-summed partition function and is
quite sensitive to precision loss from both overflow and underflow. For optimal
results, u_kn can be preconditioned by subtracting out an `n`-dependent
vector.
For better precision, the objective function uses math.fsum for the
outermost sum and logsumexp for the inner sum.
The gradient is equation C6 in the JCP MBAR paper; the objective
function is its integral. |
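The precision note above rests on the standard log-sum-exp shift. A minimal stdlib-only sketch of the trick (not the actual pymbar/scipy implementation) shows why the inner sum survives inputs that would overflow a naive `exp`:

```python
import math

def logsumexp(values):
    # Shift by the maximum so exp() never sees a huge argument;
    # the shift is added back outside the log, so the result is exact.
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

# math.exp(1000.0) overflows, but the shifted form gives 1000 + log(2)
stable = logsumexp([1000.0, 1000.0])
```

The same shift is what "preconditioning u_kn by subtracting an `n`-dependent vector" accomplishes at the level of the whole objective.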
def path(self, *names):
path = [self]
for name in names:
path.append(path[-1][name, ])
return path[1:] | Look up and return the complete path of an atom.
For example, atoms.path('moov', 'udta', 'meta') will return a
list of three atoms, corresponding to the moov, udta, and meta
atoms. |
def status(self, agreement_id):
condition_ids = self._keeper.agreement_manager.get_agreement(agreement_id).condition_ids
result = {"agreementId": agreement_id}
conditions = dict()
for i in condition_ids:
conditions[self._keeper.get_condition_name_by_address(
self._keeper.condition_manager.get_condition(
i).type_ref)] = self._keeper.condition_manager.get_condition_state(i)
result["conditions"] = conditions
return result | Get the status of a service agreement.
:param agreement_id: id of the agreement, hex str
:return: dict with condition status of each of the agreement's conditions or None if the
agreement is invalid. |
def get_ip_address(ifname):
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
return socket.inet_ntoa(fcntl.ioctl(
s.fileno(),
0x8915,
struct.pack('256s', ifname[:15])
)[20:24]) | Hack to get IP address from the interface |
def make_iterable(value):
if sys.version_info <= (3, 0):
if isinstance(value, unicode):
value = str(value)
if isinstance(value, str) or isinstance(value, dict):
value = [value]
if not isinstance(value, collections.Iterable):
raise TypeError('value must be an iterable object')
return value | Converts the supplied value to a list object
This function will inspect the supplied value and return an
iterable in the form of a list.
Args:
value (object): A valid Python object
Returns:
An iterable object of type list |
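Note that on Python 3.10+ the `collections.Iterable` alias used above was removed in favor of `collections.abc.Iterable`. A Python-3-only sketch of the same behavior (an illustration, not the library's actual code):

```python
from collections.abc import Iterable

def make_iterable(value):
    # Strings and dicts are technically iterable, but callers here
    # want them treated as single scalar values
    if isinstance(value, (str, dict)):
        return [value]
    if not isinstance(value, Iterable):
        raise TypeError('value must be an iterable object')
    return value

single = make_iterable('host1')   # ['host1']
passthrough = make_iterable(['a', 'b'])  # unchanged
```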
def hash_tags(text, hashes):
def sub(match):
hashed = hash_text(match.group(0), 'tag')
hashes[hashed] = match.group(0)
return hashed
return re_tag.sub(sub, text) | Hashes any non-block tags.
Only the tags themselves are hashed -- the contents surrounded
by tags are not touched. Indeed, there is no notion of "contained"
text for non-block tags.
Inline tags that are to be hashed are not white-listed, which
allows users to define their own tags. These user-defined tags
will also be preserved in their original form until the controller
(see link.py) is applied to them. |
def _load_secret(self, creds_file):
try:
with open(creds_file) as fp:
creds = json.load(fp)
return creds
except Exception as e:
sys.stderr.write("Error loading oauth secret from local file called '{0}'\n".format(creds_file))
sys.stderr.write("\tThere should be a local OAuth credentials file \n")
sys.stderr.write("\twhich has contents like this:\n")
sys.stderr.write(
)
sys.stderr.write("\n")
raise e | read the oauth secrets and account ID from a credentials configuration file |
def prepare(self, params):
jsonparams = json.dumps(params)
payload = base64.b64encode(jsonparams.encode())
signature = hmac.new(self.secret_key.encode(), payload,
hashlib.sha384).hexdigest()
return {'X-GEMINI-APIKEY': self.api_key,
'X-GEMINI-PAYLOAD': payload,
'X-GEMINI-SIGNATURE': signature} | Prepare, return the required HTTP headers.
Base 64 encode the parameters, sign it with the secret key,
create the HTTP headers, return the whole payload.
Arguments:
params -- a dictionary of parameters |
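The encode-then-sign pattern in `prepare` can be verified in isolation. A self-contained sketch (the key and parameter values are made up for illustration):

```python
import base64
import hashlib
import hmac
import json

def sign_request(params, api_key, secret_key):
    # Base64-encode the JSON body, then HMAC-SHA384 the encoded payload
    payload = base64.b64encode(json.dumps(params).encode())
    signature = hmac.new(secret_key.encode(), payload,
                         hashlib.sha384).hexdigest()
    return {'X-GEMINI-APIKEY': api_key,
            'X-GEMINI-PAYLOAD': payload,
            'X-GEMINI-SIGNATURE': signature}

headers = sign_request({'request': '/v1/order/status'}, 'key', 'secret')
```

The signature covers the base64 payload, not the raw JSON, so the server can recompute it from the header alone.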
def default_settings(params):
def _default_settings(fn, command):
for k, w in params.items():
settings.setdefault(k, w)
return fn(command)
    return decorator(_default_settings) | Adds default values to settings if they are not already present.
Usage:
@default_settings({'apt': '/usr/bin/apt'})
def match(command):
print(settings.apt) |
def fric(FlowRate, Diam, Nu, PipeRough):
ut.check_range([PipeRough, "0-1", "Pipe roughness"])
if re_pipe(FlowRate, Diam, Nu) >= RE_TRANSITION_PIPE:
f = (0.25 / (np.log10(PipeRough / (3.7 * Diam)
+ 5.74 / re_pipe(FlowRate, Diam, Nu) ** 0.9
)
) ** 2
)
else:
f = 64 / re_pipe(FlowRate, Diam, Nu)
return f | Return the friction factor for pipe flow.
This equation applies to both laminar and turbulent flows. |
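The laminar/turbulent split above can be exercised without the surrounding helpers. A sketch assuming a transition Reynolds number of 2100 (the constant name and value are assumptions, as is computing `f` from a precomputed Reynolds number rather than flow rate):

```python
import math

RE_TRANSITION_PIPE = 2100  # assumed transition Reynolds number

def friction_factor(reynolds, roughness, diam):
    # Swamee-Jain approximation for turbulent flow, 64/Re for laminar
    if reynolds >= RE_TRANSITION_PIPE:
        return 0.25 / math.log10(roughness / (3.7 * diam)
                                 + 5.74 / reynolds ** 0.9) ** 2
    return 64 / reynolds

f_laminar = friction_factor(1000, 0.0001, 0.1)   # 64/1000 = 0.064
f_turbulent = friction_factor(100000, 0.0001, 0.1)
```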
def plot_target(target, ax):
    ax.scatter(target[0], target[1], target[2], c="red", s=80) | Add the target to the plot. |
def _MakeMethodDescriptor(self, method_proto, service_name, package, scope,
index):
full_name = '.'.join((service_name, method_proto.name))
input_type = self._GetTypeFromScope(
package, method_proto.input_type, scope)
output_type = self._GetTypeFromScope(
package, method_proto.output_type, scope)
return descriptor.MethodDescriptor(name=method_proto.name,
full_name=full_name,
index=index,
containing_service=None,
input_type=input_type,
output_type=output_type,
options=_OptionsOrNone(method_proto)) | Creates a method descriptor from a MethodDescriptorProto.
Args:
method_proto: The proto describing the method.
service_name: The name of the containing service.
package: Optional package name to look up for types.
scope: Scope containing available types.
index: Index of the method in the service.
Returns:
An initialized MethodDescriptor object. |
def clear_candidates(self, clear_env=True):
async def slave_task(addr):
r_manager = await self.env.connect(addr)
return await r_manager.clear_candidates()
self._candidates = []
if clear_env:
if self._single_env:
self.env.clear_candidates()
else:
mgrs = self.get_managers()
run(create_tasks(slave_task, mgrs)) | Clear the current candidates.
:param bool clear_env:
If ``True``, clears also environment's (or its underlying slave
environments') candidates. |
def page_not_found(request, template_name='404.html'):
response = render_in_page(request, template_name)
if response:
return response
template = Template(
'<h1>Not Found</h1>'
'<p>The requested URL {{ request_path }} was not found on this server.</p>')
body = template.render(RequestContext(
request, {'request_path': request.path}))
return http.HttpResponseNotFound(body, content_type=CONTENT_TYPE) | Default 404 handler.
Templates: :template:`404.html`
Context:
request_path
The path of the requested URL (e.g., '/app/pages/bad_page/') |
def landsat_c1_toa_cloud_mask(input_img, snow_flag=False, cirrus_flag=False,
cloud_confidence=2, shadow_confidence=3,
snow_confidence=3, cirrus_confidence=3):
qa_img = input_img.select(['BQA'])
cloud_mask = qa_img.rightShift(4).bitwiseAnd(1).neq(0)\
.And(qa_img.rightShift(5).bitwiseAnd(3).gte(cloud_confidence))\
.Or(qa_img.rightShift(7).bitwiseAnd(3).gte(shadow_confidence))
if snow_flag:
cloud_mask = cloud_mask.Or(
qa_img.rightShift(9).bitwiseAnd(3).gte(snow_confidence))
if cirrus_flag:
cloud_mask = cloud_mask.Or(
qa_img.rightShift(11).bitwiseAnd(3).gte(cirrus_confidence))
return cloud_mask.Not() | Extract cloud mask from the Landsat Collection 1 TOA BQA band
Parameters
----------
input_img : ee.Image
Image from a Landsat Collection 1 TOA collection with a BQA band
(e.g. LANDSAT/LE07/C01/T1_TOA).
snow_flag : bool
If true, mask snow pixels (the default is False).
cirrus_flag : bool
If true, mask cirrus pixels (the default is False).
Note, cirrus bits are only set for Landsat 8 (OLI) images.
cloud_confidence : int
Minimum cloud confidence value (the default is 2).
shadow_confidence : int
Minimum cloud shadow confidence value (the default is 3).
snow_confidence : int
Minimum snow confidence value (the default is 3). Only used if
snow_flag is True.
cirrus_confidence : int
Minimum cirrus confidence value (the default is 3). Only used if
cirrus_flag is True.
Returns
-------
ee.Image
Notes
-----
Output image is structured to be applied directly with updateMask()
i.e. 0 is cloud, 1 is cloud free
Assuming Cloud must be set to check Cloud Confidence
Bits
0: Designated Fill
1: Terrain Occlusion (OLI) / Dropped Pixel (TM, ETM+)
2-3: Radiometric Saturation
4: Cloud
5-6: Cloud Confidence
7-8: Cloud Shadow Confidence
9-10: Snow/Ice Confidence
11-12: Cirrus Confidence (Landsat 8 only)
Confidence values
00: "Not Determined", algorithm did not determine the status of this
condition
01: "No", algorithm has low to no confidence that this condition exists
(0-33 percent confidence)
10: "Maybe", algorithm has medium confidence that this condition exists
(34-66 percent confidence)
11: "Yes", algorithm has high confidence that this condition exists
(67-100 percent confidence)
References
----------
https://landsat.usgs.gov/collectionqualityband |
def check_read_permission(self, user_id, do_raise=True):
if _is_admin(user_id):
return True
if int(self.created_by) == int(user_id):
return True
for owner in self.owners:
if int(owner.user_id) == int(user_id):
if owner.view == 'Y':
break
else:
if do_raise is True:
raise PermissionError("Permission denied. User %s does not have read"
" access on network %s" %
(user_id, self.id))
else:
return False
return True | Check whether this user can read this network |
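The owner loop above relies on Python's `for`/`else`: the `else` branch runs only when the loop completes without hitting `break`. A stripped-down sketch of the same access check, with plain dicts standing in for the ORM owner objects:

```python
def has_read_access(user_id, owners):
    # `else` fires only if no owner granted access (the loop never broke)
    for owner in owners:
        if owner['user_id'] == user_id and owner['view'] == 'Y':
            break
    else:
        return False
    return True

owners = [{'user_id': 1, 'view': 'Y'}, {'user_id': 2, 'view': 'N'}]
granted = has_read_access(1, owners)   # True
denied = has_read_access(2, owners)    # False: matched, but view != 'Y'
```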
def get_leads(self, *guids, **options):
original_options = options
options = self.camelcase_search_options(options.copy())
params = {}
for i in xrange(len(guids)):
params['guids[%s]'%i] = guids[i]
for k in options.keys():
if k in SEARCH_OPTIONS:
params[k] = options[k]
del options[k]
leads = self._call('list/', params, **options)
self.log.info("retrieved %s leads through API ( %soptions=%s )" %
(len(leads), guids and 'guids=%s, '%guids or '', original_options))
return leads | Supports all the search parameters in the API as well as python underscored variants |
def geturl(self):
if self.retries is not None and len(self.retries.history):
return self.retries.history[-1].redirect_location
else:
return self._request_url | Returns the URL that was the source of this response.
If the request that generated this response redirected, this method
will return the final redirect location. |
def clone_and_merge_sub(self, key):
new_comp = copy.deepcopy(self)
new_comp.components = None
new_comp.comp_key = key
return new_comp | Clones self and merges clone with sub-component specific information
Parameters
----------
key : str
Key specifying which sub-component
Returns `ModelComponentInfo` object |
def out_degree(self, nbunch=None, t=None):
if nbunch in self:
return next(self.out_degree_iter(nbunch, t))[1]
else:
return dict(self.out_degree_iter(nbunch, t)) | Return the out degree of a node or nodes at time t.
The node degree is the number of interactions outgoing from that node in a given time frame.
Parameters
----------
nbunch : iterable container, optional (default=all nodes)
A container of nodes. The container will be iterated
through once.
t : snapshot id (default=None)
If None will be returned the degree of nodes on the flattened graph.
Returns
-------
nd : dictionary, or number
A dictionary with nodes as keys and degree as values or
a number if a single node is specified.
Examples
--------
>>> G = dn.DynDiGraph()
>>> G.add_interactions(0,1, t=0)
>>> G.add_interactions(1,2, t=0)
>>> G.add_interactions(2,3, t=0)
>>> G.out_degree(0, t=0)
1
>>> G.out_degree([0,1], t=1)
{0: 0, 1: 0}
>>> list(G.out_degree([0,1], t=0).values())
[1, 2] |
def stripArgs(args, blacklist):
blacklist = [b.lower() for b in blacklist]
return list([arg for arg in args if arg.lower() not in blacklist]) | Removes any arguments in the supplied list that are contained in the specified blacklist |
def role_required(role_name=None):
def _role_required(http_method_handler):
@wraps(http_method_handler)
def secure_http_method_handler(self, *args, **kwargs):
if role_name is None:
_message = "Role name must be provided"
authorization_error = prestans.exception.AuthorizationError(_message)
authorization_error.request = self.request
raise authorization_error
if not self.__provider_config__.authentication:
_message = "Service available to authenticated users only, no auth context provider set in handler"
authentication_error = prestans.exception.AuthenticationError(_message)
authentication_error.request = self.request
raise authentication_error
if not self.__provider_config__.authentication.current_user_has_role(role_name):
authorization_error = prestans.exception.AuthorizationError(role_name)
authorization_error.request = self.request
raise authorization_error
http_method_handler(self, *args, **kwargs)
return wraps(http_method_handler)(secure_http_method_handler)
return _role_required | Authenticates a HTTP method handler based on a provided role
With a little help from Peter Cole's Blog
http://mrcoles.com/blog/3-decorator-examples-and-awesome-python/ |
def run_hook(hook_name, project_dir, context):
script = find_hook(hook_name)
if script is None:
logger.debug('No {} hook found'.format(hook_name))
return
logger.debug('Running hook {}'.format(hook_name))
run_script_with_context(script, project_dir, context) | Try to find and execute a hook from the specified project directory.
:param hook_name: The hook to execute.
:param project_dir: The directory to execute the script from.
:param context: Cookiecutter project context. |
def bqsr_table(data):
in_file = dd.get_align_bam(data)
out_file = "%s-recal-table.txt" % utils.splitext_plus(in_file)[0]
if not utils.file_uptodate(out_file, in_file):
with file_transaction(data, out_file) as tx_out_file:
assoc_files = dd.get_variation_resources(data)
known = "-k %s" % (assoc_files.get("dbsnp")) if "dbsnp" in assoc_files else ""
license = license_export(data)
cores = dd.get_num_cores(data)
ref_file = dd.get_ref_file(data)
cmd = ("{license}sentieon driver -t {cores} -r {ref_file} "
"-i {in_file} --algo QualCal {known} {tx_out_file}")
do.run(cmd.format(**locals()), "Sentieon QualCal generate table")
return out_file | Generate recalibration tables as inputs to BQSR. |
def get_user(self):
query =
data = self.raw_query(query, authorization=True)['data']['user']
utils.replace(data, "insertedAt", utils.parse_datetime_string)
utils.replace(data, "availableUsd", utils.parse_float_string)
utils.replace(data, "availableNmr", utils.parse_float_string)
return data | Get all information about you!
Returns:
dict: user information including the following fields:
* assignedEthAddress (`str`)
* availableNmr (`decimal.Decimal`)
* availableUsd (`decimal.Decimal`)
* banned (`bool`)
* email (`str`)
* id (`str`)
* insertedAt (`datetime`)
* mfaEnabled (`bool`)
* status (`str`)
* username (`str`)
* country (`str`)
* phoneNumber (`str`)
* apiTokens (`list`) each with the following fields:
* name (`str`)
* public_id (`str`)
* scopes (`list of str`)
Example:
>>> api = NumerAPI(secret_key="..", public_id="..")
>>> api.get_user()
{'apiTokens': [
{'name': 'tokenname',
'public_id': 'BLABLA',
'scopes': ['upload_submission', 'stake', ..]
}, ..],
'assignedEthAddress': '0x0000000000000000000000000001',
'availableNmr': Decimal('99.01'),
'availableUsd': Decimal('9.47'),
'banned': False,
'email': 'username@example.com',
'phoneNumber': '0123456',
'country': 'US',
'id': '1234-ABC..',
'insertedAt': datetime.datetime(2018, 1, 1, 2, 16, 48),
'mfaEnabled': False,
'status': 'VERIFIED',
'username': 'cool username'
} |
def get_igraph_from_adjacency(adjacency, directed=None):
import igraph as ig
sources, targets = adjacency.nonzero()
weights = adjacency[sources, targets]
if isinstance(weights, np.matrix):
weights = weights.A1
g = ig.Graph(directed=directed)
g.add_vertices(adjacency.shape[0])
g.add_edges(list(zip(sources, targets)))
    try:
        g.es['weight'] = weights
    except Exception:
        # setting edge weights can fail (e.g. when there are no edges);
        # continue without weights rather than aborting
        pass
if g.vcount() != adjacency.shape[0]:
logg.warn('The constructed graph has only {} nodes. '
'Your adjacency matrix contained redundant nodes.'
.format(g.vcount()))
return g | Get igraph graph from adjacency matrix. |
def _resize(self):
lines = self.text.split('\n')
xsize, ysize = 0, 0
for line in lines:
size = self.textctrl.GetTextExtent(line)
xsize = max(xsize, size[0])
ysize = ysize + size[1]
xsize = int(xsize*1.2)
self.textctrl.SetSize((xsize, ysize))
self.textctrl.SetMinSize((xsize, ysize)) | calculate and set text size, handling multi-line |
def fetch_meta_by_name(name, filter_context=None, exact_match=True):
result = SMCRequest(
params={'filter': name,
'filter_context': filter_context,
'exact_match': exact_match}).read()
if not result.json:
result.json = []
return result | Find the element based on name and optional filters. By default, the
name provided uses the standard filter query. Additional filters can
be used based on supported collections in the SMC API.
:method: GET
:param str name: element name, can use * as wildcard
:param str filter_context: further filter request, i.e. 'host', 'group',
'single_fw', 'network_elements', 'services',
'services_and_applications'
:param bool exact_match: Do an exact match by name, note this still can
return multiple entries
:rtype: SMCResult |
def asynchronous(self, fun, low, user='UNKNOWN', pub=None):
async_pub = pub if pub is not None else self._gen_async_pub()
proc = salt.utils.process.SignalHandlingMultiprocessingProcess(
target=self._proc_function,
args=(fun, low, user, async_pub['tag'], async_pub['jid']))
with salt.utils.process.default_signals(signal.SIGINT, signal.SIGTERM):
proc.start()
proc.join()
return async_pub | Execute the function in a multiprocess and return the event tag to use
to watch for the return |
def reject(self, delivery_tag, requeue=False):
args = Writer()
args.write_longlong(delivery_tag).\
write_bit(requeue)
self.send_frame(MethodFrame(self.channel_id, 60, 90, args)) | Reject a message. |
def uninstall(
ctx,
state,
all_dev=False,
all=False,
**kwargs
):
from ..core import do_uninstall
retcode = do_uninstall(
packages=state.installstate.packages,
editable_packages=state.installstate.editables,
three=state.three,
python=state.python,
system=state.system,
lock=not state.installstate.skip_lock,
all_dev=all_dev,
all=all,
keep_outdated=state.installstate.keep_outdated,
pypi_mirror=state.pypi_mirror,
ctx=ctx
)
if retcode:
sys.exit(retcode) | Un-installs a provided package and removes it from Pipfile. |
def new(filename: str, *, file_attrs: Optional[Dict[str, str]] = None) -> LoomConnection:
if filename.startswith("~/"):
filename = os.path.expanduser(filename)
if file_attrs is None:
file_attrs = {}
f = h5py.File(name=filename, mode='w')
f.create_group('/layers')
f.create_group('/row_attrs')
f.create_group('/col_attrs')
f.create_group('/row_graphs')
f.create_group('/col_graphs')
f.flush()
f.close()
ds = connect(filename, validate=False)
for vals in file_attrs:
ds.attrs[vals] = file_attrs[vals]
currentTime = time.localtime(time.time())
ds.attrs['CreationDate'] = timestamp()
ds.attrs["LOOM_SPEC_VERSION"] = loompy.loom_spec_version
return ds | Create an empty Loom file, and return it as a context manager. |
def copy_layer_keywords(layer_keywords):
copy_keywords = {}
for key, value in list(layer_keywords.items()):
if isinstance(value, QUrl):
copy_keywords[key] = value.toString()
elif isinstance(value, datetime):
copy_keywords[key] = value.date().isoformat()
elif isinstance(value, QDate):
copy_keywords[key] = value.toString(Qt.ISODate)
elif isinstance(value, QDateTime):
copy_keywords[key] = value.toString(Qt.ISODate)
elif isinstance(value, date):
copy_keywords[key] = value.isoformat()
else:
copy_keywords[key] = deepcopy(value)
return copy_keywords | Helper to make a deep copy of a layer keywords.
:param layer_keywords: A dictionary of layer's keywords.
:type layer_keywords: dict
:returns: A deep copy of layer keywords.
:rtype: dict |
def get_analyses(self):
analyses = self.context.getAnalyses(full_objects=True)
return filter(self.is_analysis_attachment_allowed, analyses) | Returns a list of analyses from the AR |
def decrypt_block(self, cipherText):
if not self.initialized:
raise TypeError("CamCrypt object has not been initialized")
if len(cipherText) != BLOCK_SIZE:
raise ValueError("cipherText must be %d bytes long (received %d bytes)" %
(BLOCK_SIZE, len(cipherText)))
plain = ctypes.create_string_buffer(BLOCK_SIZE)
self.decblock(self.bitlen, cipherText, self.keytable, plain)
return plain.raw | Decrypt a 16-byte block of data.
NOTE: This function was formerly called `decrypt`, but was changed when
support for decrypting arbitrary-length strings was added.
Args:
cipherText (str): 16-byte data.
Returns:
16-byte str.
Raises:
TypeError if CamCrypt object has not been initialized.
ValueError if `cipherText` is not BLOCK_SIZE (i.e. 16) bytes. |
def enable_repositories(self, repositories):
for r in repositories:
if r['type'] != 'rhsm_channel':
continue
if r['name'] not in self.rhsm_channels:
self.rhsm_channels.append(r['name'])
if self.rhsm_active:
subscription_cmd = "subscription-manager repos '--disable=*' --enable=" + ' --enable='.join(
self.rhsm_channels)
self.run(subscription_cmd)
repo_files = [r for r in repositories if r['type'] == 'yum_repo']
for repo_file in repo_files:
self.create_file(repo_file['dest'], repo_file['content'])
packages = [r['name'] for r in repositories if r['type'] == 'package']
if packages:
self.yum_install(packages) | Enable a list of RHSM repositories.
:param repositories: a list of dicts in this format:
[{'type': 'rhsm_channel', 'name': 'rhel-7-server-rpms'}] |
def _handle_dumps(self, handler, **kwargs):
return handler.dumps(self.__class__, to_dict(self), **kwargs) | Dumps caller, used by partial method for dynamic handler assignments.
:param object handler: The dump handler
:return: The dumped string
:rtype: str |
def until(self, method, message=''):
screen = None
stacktrace = None
end_time = time.time() + self._timeout
while True:
try:
value = method(self._driver)
if value:
return value
except self._ignored_exceptions as exc:
screen = getattr(exc, 'screen', None)
stacktrace = getattr(exc, 'stacktrace', None)
time.sleep(self._poll)
if time.time() > end_time:
break
raise TimeoutException(message, screen, stacktrace) | Calls the method provided with the driver as an argument until the \
return value does not evaluate to ``False``.
:param method: callable(WebDriver)
:param message: optional message for :exc:`TimeoutException`
:returns: the result of the last call to `method`
:raises: :exc:`selenium.common.exceptions.TimeoutException` if timeout occurs |
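The same poll-until-truthy loop is easy to reproduce outside Selenium. A generic sketch (the timeout and poll values are arbitrary, and a plain `TimeoutError` replaces the Selenium exception):

```python
import time

def until(condition, timeout=1.0, poll=0.01, ignored=(Exception,)):
    # Poll until the condition returns a truthy value or the deadline passes;
    # exceptions in the ignored tuple are swallowed and the loop retries
    end_time = time.time() + timeout
    while True:
        try:
            value = condition()
            if value:
                return value
        except ignored:
            pass
        time.sleep(poll)
        if time.time() > end_time:
            break
    raise TimeoutError('condition was not met in time')

calls = {'n': 0}
def ready():
    calls['n'] += 1
    return calls['n'] >= 3

result = until(ready)  # succeeds on the third poll
```

Note the deadline is checked after the sleep, so the condition is always tried at least once, matching the original's structure.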
def read_local_manifest(self):
manifest = file_or_default(self.get_full_file_path(self.manifest_file), {
'format_version' : 2,
'root' : '/',
'have_revision' : 'root',
'files' : {}}, json.loads)
if 'format_version' not in manifest or manifest['format_version'] < 2:
raise SystemExit('Please update the client manifest format')
return manifest | Read the file manifest, or create a new one if there isn't one already |
def platform_path(path):
try:
if path == '':
raise ValueError('path cannot be the empty string')
path1 = truepath_relative(path)
if sys.platform.startswith('win32'):
path2 = expand_win32_shortname(path1)
else:
path2 = path1
except Exception as ex:
util_dbg.printex(ex, keys=['path', 'path1', 'path2'])
raise
    return path2 |
Returns platform specific path for pyinstaller usage
Args:
path (str):
Returns:
str: path2
CommandLine:
python -m utool.util_path --test-platform_path
Example:
>>> # ENABLE_DOCTEST
>>> # FIXME: find examples of the weird paths this fixes (mostly on win32 i think)
>>> from utool.util_path import * # NOQA
>>> import utool as ut
>>> path = 'some/odd/../weird/path'
>>> path2 = platform_path(path)
>>> result = str(path2)
>>> if ut.WIN32:
... ut.assert_eq(path2, r'some\weird\path')
... else:
... ut.assert_eq(path2, r'some/weird/path')
Example:
>>> # ENABLE_DOCTEST
>>> from utool.util_path import * # NOQA
>>> import utool as ut # NOQA
>>> if ut.WIN32:
... path = 'C:/PROGRA~2'
... path2 = platform_path(path)
... assert path2 == u'..\\..\\..\\..\\Program Files (x86)' |
def setPrefix(self, p, u=None):
self.prefix = p
if p is not None and u is not None:
self.addPrefix(p, u)
return self | Set the element namespace prefix.
@param p: A new prefix for the element.
@type p: basestring
@param u: A namespace URI to be mapped to the prefix.
@type u: basestring
@return: self
@rtype: L{Element} |
def dir2(obj):
attrs = set()
if not hasattr(obj, '__bases__'):
if not hasattr(obj, '__class__'):
return sorted(get_attrs(obj))
klass = obj.__class__
attrs.update(get_attrs(klass))
else:
klass = obj
for cls in klass.__bases__:
attrs.update(get_attrs(cls))
attrs.update(dir2(cls))
attrs.update(get_attrs(obj))
return list(attrs) | Default dir implementation.
Inspired by gist: katyukha/dirmixin.py
https://gist.github.com/katyukha/c6e5e2b829e247c9b009 |
def list_templates():
templates = [f for f in glob.glob(os.path.join(template_path, '*.yaml'))]
return templates | Returns a list of all templates. |
def replace(self, year=None, week=None):
return self.__class__(self.year if year is None else year,
self.week if week is None else week) | Return a Week with either the year or week attribute value replaced |
def get_version_manifest(name, data=None, required=False):
manifest_dir = _get_manifest_dir(data, name)
manifest_vs = _get_versions_manifest(manifest_dir) or []
for x in manifest_vs:
if x["program"] == name:
v = x.get("version", "")
if v:
return v
if required:
raise ValueError("Did not find %s in install manifest. Could not check version." % name)
return "" | Retrieve a version from the currently installed manifest. |
def get_convert_dist(
dist_units_in: str, dist_units_out: str
) -> Callable[[float], float]:
di, do = dist_units_in, dist_units_out
DU = cs.DIST_UNITS
if not (di in DU and do in DU):
raise ValueError(f"Distance units must lie in {DU}")
d = {
"ft": {"ft": 1, "m": 0.3048, "mi": 1 / 5280, "km": 0.0003048},
"m": {"ft": 1 / 0.3048, "m": 1, "mi": 1 / 1609.344, "km": 1 / 1000},
"mi": {"ft": 5280, "m": 1609.344, "mi": 1, "km": 1.609344},
"km": {"ft": 1 / 0.0003048, "m": 1000, "mi": 1 / 1.609344, "km": 1},
}
return lambda x: d[di][do] * x | Return a function of the form
distance in the units ``dist_units_in`` ->
distance in the units ``dist_units_out``
Only supports distance units in :const:`constants.DIST_UNITS`. |
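The conversion table is self-checking: every factor should be the reciprocal of its mirror entry. A sketch of the same idea without the `constants` module (the table values are copied from the code above):

```python
CONVERT = {
    "ft": {"ft": 1, "m": 0.3048, "mi": 1 / 5280, "km": 0.0003048},
    "m": {"ft": 1 / 0.3048, "m": 1, "mi": 1 / 1609.344, "km": 1 / 1000},
    "mi": {"ft": 5280, "m": 1609.344, "mi": 1, "km": 1.609344},
    "km": {"ft": 1 / 0.0003048, "m": 1000, "mi": 1 / 1.609344, "km": 1},
}

def get_convert_dist(dist_units_in, dist_units_out):
    # Returns a closure that scales by the table factor
    if dist_units_in not in CONVERT or dist_units_out not in CONVERT:
        raise ValueError(f"Distance units must lie in {set(CONVERT)}")
    return lambda x: CONVERT[dist_units_in][dist_units_out] * x

mi_to_km = get_convert_dist("mi", "km")
one_mile_in_km = mi_to_km(1)  # 1.609344
```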
def assemble_points(graph, assemblies, multicolor, verbose=False, verbose_destination=None):
if verbose:
print(">>Assembling for multicolor", [e.name for e in multicolor.multicolors.elements()],
file=verbose_destination)
for assembly in assemblies:
v1, v2, (before, after, ex_data) = assembly
iv1 = get_irregular_vertex(get_irregular_edge_by_vertex(graph, vertex=v1))
iv2 = get_irregular_vertex(get_irregular_edge_by_vertex(graph, vertex=v2))
kbreak = KBreak(start_edges=[(v1, iv1), (v2, iv2)],
result_edges=[(v1, v2), (iv1, iv2)],
multicolor=multicolor)
if verbose:
print("(", v1.name, ",", iv1.name, ")x(", v2.name, ",", iv2.name, ")", " score=", before - after, sep="",
file=verbose_destination)
        graph.apply_kbreak(kbreak=kbreak, merge=True) | Performs the actual assembly, given
	a graph to operate on,
	a list of assembly points,
	and a multicolor to assemble with |
def _parse_methods(cls, list_string):
if list_string is None:
return APIServer.DEFAULT_METHODS
json_list = list_string.replace("'", '"')
return json.loads(json_list) | Return HTTP method list. Use json for security reasons. |
def breslauer_corrections(seq, pars_error):
deltas_corr = [0, 0]
contains_gc = 'G' in str(seq) or 'C' in str(seq)
only_at = str(seq).count('A') + str(seq).count('T') == len(seq)
symmetric = seq == seq.reverse_complement()
    terminal_t = (str(seq)[0] == 'T') + (str(seq)[-1] == 'T')  # count of terminal T's
for i, delta in enumerate(['delta_h', 'delta_s']):
if contains_gc:
deltas_corr[i] += pars_error[delta]['anyGC']
if only_at:
deltas_corr[i] += pars_error[delta]['onlyAT']
if symmetric:
deltas_corr[i] += pars_error[delta]['symmetry']
if terminal_t and delta == 'delta_h':
deltas_corr[i] += pars_error[delta]['terminalT'] * terminal_t
return deltas_corr | Sum corrections for Breslauer '84 method.
:param seq: sequence for which to calculate corrections.
:type seq: str
:param pars_error: dictionary of error corrections
:type pars_error: dict
:returns: Corrected delta_H and delta_S parameters
:rtype: list of floats |
def get_title(self, entry):
title = _('%(title)s (%(word_count)i words)') % \
{'title': entry.title, 'word_count': entry.word_count}
reaction_count = int(entry.comment_count +
entry.pingback_count +
entry.trackback_count)
if reaction_count:
return ungettext_lazy(
'%(title)s (%(reactions)i reaction)',
'%(title)s (%(reactions)i reactions)', reaction_count) % \
{'title': title,
'reactions': reaction_count}
return title | Return the title with word count and number of comments. |
def safe_shake(self, x, fun, fmax):
self.lock[:] = False
def extra_equation(xx):
f, g = fun(xx, do_gradient=True)
return (f-fmax)/abs(fmax), g/abs(fmax)
self.equations.append((-1,extra_equation))
x, shake_counter, constraint_couter = self.free_shake(x)
del self.equations[-1]
return x, shake_counter, constraint_couter | Brings unknowns to the constraints, without increasing fun above fmax.
Arguments:
| ``x`` -- The unknowns.
| ``fun`` -- The function being minimized.
| ``fmax`` -- The highest allowed value of the function being
minimized.
The function ``fun`` takes a mandatory argument ``x`` and an optional
argument ``do_gradient``:
| ``x`` -- the arguments of the function to be tested
| ``do_gradient`` -- when False, only the function value is
returned. when True, a 2-tuple with the
function value and the gradient are returned
[default=False] |
def _hasViewChangeQuorum(self):
num_of_ready_nodes = len(self._view_change_done)
diff = self.quorum - num_of_ready_nodes
if diff > 0:
logger.info('{} needs {} ViewChangeDone messages'.format(self, diff))
return False
logger.info("{} got view change quorum ({} >= {})".
format(self.name, num_of_ready_nodes, self.quorum))
return True | Checks whether n-f nodes completed view change and whether one
of them is the next primary |
def multiple_packaged_versions(package_name):
dist_files = os.listdir('dist')
versions = set()
for filename in dist_files:
version = funcy.re_find(r'{}-(.+).tar.gz'.format(package_name), filename)
if version:
versions.add(version)
return len(versions) > 1 | Look through built package directory and see if there are multiple versions there |
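The version-scraping regex can be exercised with plain filenames. A sketch using stdlib `re` instead of `funcy` (escaping the package name, which the original skips, guards against dots in the name being treated as wildcards):

```python
import re

def packaged_versions(package_name, filenames):
    # Capture everything between "<name>-" and ".tar.gz"
    pattern = re.compile(r'^{}-(.+)\.tar\.gz$'.format(re.escape(package_name)))
    versions = set()
    for filename in filenames:
        match = pattern.match(filename)
        if match:
            versions.add(match.group(1))
    return versions

files = ['pkg-1.0.0.tar.gz', 'pkg-1.1.0.tar.gz', 'pkg-1.0.0-py3-none-any.whl']
found = packaged_versions('pkg', files)  # wheels are ignored
```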
def get_vault_query_session(self, proxy):
if not self.supports_vault_query():
raise errors.Unimplemented()
return sessions.VaultQuerySession(proxy=proxy, runtime=self._runtime) | Gets the OsidSession associated with the vault query service.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.authorization.VaultQuerySession) - a
``VaultQuerySession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_vault_query() is false``
*compliance: optional -- This method must be implemented if
``supports_vault_query()`` is true.* |
def url_unquote_plus(s, charset='utf-8', errors='replace'):
if isinstance(s, unicode):
s = s.encode(charset)
return _decode_unicode(_unquote_plus(s), charset, errors) | URL decode a single string with the given decoding and decode
a "+" to whitespace.
By default encoding errors are replaced. If you want a different behavior
you can set `errors` to ``'ignore'`` or ``'strict'``. In strict mode a
`HTTPUnicodeError` is raised.
:param s: the string to unquote.
:param charset: the charset to be used.
:param errors: the error handling for the charset decoding. |
def set_language(self, editor, language):
LOGGER.debug("> Setting '{0}' language to '{1}' editor.".format(language.name, editor))
return editor.set_language(language) | Sets given language to given Model editor.
:param editor: Editor to set language to.
:type editor: Editor
:param language: Language to set.
:type language: Language
:return: Method success.
:rtype: bool |
def new_logger(name):
log = get_task_logger(name)
handler = logstash.LogstashHandler(
config.logstash.host, config.logstash.port)
log.addHandler(handler)
create_logdir(config.logdir)
handler = TimedRotatingFileHandler(
'%s.json' % join(config.logdir, name),
when='midnight',
utc=True,
)
handler.setFormatter(JSONFormatter())
log.addHandler(handler)
return TaskCtxAdapter(log, {}) | Return new logger which will log both to logstash and to file in JSON
format.
Log files are stored in <logdir>/name.json |
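The `new_logger` row above installs a `JSONFormatter` on a rotating file handler but does not show it. A minimal standard-library sketch of such a formatter (the exact field set is an assumption, not the original class):

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            'name': record.name,
            'level': record.levelname,
            'message': record.getMessage(),
        })

log = logging.getLogger('demo')
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
log.addHandler(handler)
log.warning('hello')  # emits {"name": "demo", "level": "WARNING", "message": "hello"}
```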
def print_prompt_values(values, message=None, sub_attr=None):
if message:
prompt_message(message)
for index, entry in enumerate(values):
if sub_attr:
line = '{:2d}: {}'.format(index, utf8(getattr(entry, sub_attr)))
else:
line = '{:2d}: {}'.format(index, utf8(entry))
with indent(3):
print_message(line) | Prints prompt title and choices with a bit of formatting. |
def get_temp_filename (content):
fd, filename = fileutil.get_temp_file(mode='wb', suffix='.doc',
prefix='lc_')
try:
fd.write(content)
finally:
fd.close()
return filename | Get temporary filename for content to parse. |
def queryTypesDescriptions(self, types):
types = list(types)
if types:
types_descs = self.describeSObjects(types)
else:
types_descs = []
return dict(zip(types, types_descs)) | Given a list of types, construct a dictionary such that
each key is a type, and each value is the corresponding sObject
for that type. |
async def sort(self, request, reverse=False):
return sorted(
self.collection, key=lambda o: getattr(o, self.columns_sort, 0), reverse=reverse) | Sort collection. |
def is_email(potential_email_address):
context, mail = parseaddr(potential_email_address)
first_condition = len(context) == 0 and len(mail) != 0
dot_after_at = ('@' in potential_email_address and
'.' in potential_email_address.split('@')[1])
return first_condition and dot_after_at | Check if potential_email_address is a valid e-mail address.
Please note that this function has no false-negatives but many
false-positives. So if it returns that the input is not a valid
e-mail adress, it certainly isn't. If it returns True, it might still be
invalid. For example, the domain could not be registered.
Parameters
----------
potential_email_address : str
Returns
-------
is_email : bool
Examples
--------
>>> is_email('')
False
>>> is_email('info@martin-thoma.de')
True
>>> is_email('info@math.martin-thoma.de')
True
>>> is_email('Martin Thoma <info@martin-thoma.de>')
False
>>> is_email('info@martin-thoma')
False |
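The `is_email` heuristic above is self-contained apart from `parseaddr`; this standalone sketch mirrors its logic (`parseaddr` rejects display-name forms, and a dot is required after the `@`):

```python
from email.utils import parseaddr

def is_email(potential_email_address):
    # Reject addresses with a display name ("Name <a@b>") and require a
    # dot in the domain part; many false positives remain by design.
    context, mail = parseaddr(potential_email_address)
    no_display_name = len(context) == 0 and len(mail) != 0
    dot_after_at = ('@' in potential_email_address and
                    '.' in potential_email_address.split('@')[1])
    return no_display_name and dot_after_at

print(is_email('info@martin-thoma.de'))                 # True
print(is_email('Martin Thoma <info@martin-thoma.de>'))  # False
print(is_email('info@martin-thoma'))                    # False
```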
def connection(self, commit=False):
if commit:
self._need_commit = True
if self._db:
yield self._db
else:
try:
with self._get_db() as db:
self._db = db
db.create_function("REGEXP", 2, sql_regexp_func)
db.create_function("PROGRAM_NAME", 1,
sql_program_name_func)
db.create_function("PATHDIST", 2, sql_pathdist_func)
yield self._db
if self._need_commit:
db.commit()
finally:
self._db = None
self._need_commit = False | Context manager to keep around DB connection.
:rtype: sqlite3.Connection
SOMEDAY: Get rid of this function. Keeping connection around as
an argument to the method using this context manager is
probably better as it is more explicit.
Also, holding "global state" as instance attribute is bad for
supporting threaded search, which is required for more fluent
percol integration. |
def _ts_parse(ts):
dt = datetime.strptime(ts[:19],"%Y-%m-%dT%H:%M:%S")
if ts[19] == '+':
dt -= timedelta(hours=int(ts[20:22]),minutes=int(ts[23:]))
elif ts[19] == '-':
dt += timedelta(hours=int(ts[20:22]),minutes=int(ts[23:]))
return dt.replace(tzinfo=pytz.UTC) | Parse alert timestamp, return UTC datetime object to maintain Python 2 compatibility. |
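A self-contained variant of the parser above, using `datetime.timezone.utc` instead of `pytz` (it handles only the `±HH:MM` offset form shown; a trailing `Z` is not covered):

```python
from datetime import datetime, timedelta, timezone

def ts_parse(ts):
    # Parse "YYYY-MM-DDTHH:MM:SS[±HH:MM]" and normalize the offset to UTC.
    dt = datetime.strptime(ts[:19], "%Y-%m-%dT%H:%M:%S")
    if len(ts) > 19:
        if ts[19] == '+':
            dt -= timedelta(hours=int(ts[20:22]), minutes=int(ts[23:25]))
        elif ts[19] == '-':
            dt += timedelta(hours=int(ts[20:22]), minutes=int(ts[23:25]))
    return dt.replace(tzinfo=timezone.utc)

print(ts_parse("2020-01-01T12:00:00+02:00"))  # 2020-01-01 10:00:00+00:00
```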
def server(port):
args = ['python', 'manage.py', 'runserver']
if port:
args.append(port)
run.main(args) | Start the Django dev server. |
def _get_subnet_explicit_route_table(subnet_id, vpc_id, conn=None, region=None, key=None, keyid=None, profile=None):
if not conn:
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
if conn:
vpc_route_tables = conn.get_all_route_tables(filters={'vpc_id': vpc_id})
for vpc_route_table in vpc_route_tables:
for rt_association in vpc_route_table.associations:
if rt_association.subnet_id == subnet_id and not rt_association.main:
return rt_association.id
return None | helper function to find subnet explicit route table associations
.. versionadded:: 2016.11.0 |
def endswith(self, search_str):
for entry in reversed(list(open(self._jrnl_file, 'r'))[-5:]):
if search_str in entry:
return True
return False | Check whether the provided string exists in Journal file.
Only checks the last 5 lines of the journal file. This method is
usually used when tracking a journal from an active Revit session.
Args:
search_str (str): string to search for
Returns:
bool: if True the search string is found |
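The tail check above reads the whole journal into a list to slice the last five lines; `collections.deque` with `maxlen` gives the same result while holding only five lines in memory. A standalone sketch (the helper name is illustrative):

```python
import os
import tempfile
from collections import deque

def tail_contains(path, search_str, n=5):
    # deque(maxlen=n) streams the file but retains only the last n lines.
    with open(path) as f:
        last_lines = deque(f, maxlen=n)
    return any(search_str in line for line in last_lines)

with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('\n'.join('line %d' % i for i in range(10)))
    path = f.name

in_tail = tail_contains(path, 'line 9')      # within the last 5 lines
before_tail = tail_contains(path, 'line 0')  # outside the last 5 lines
os.remove(path)
print(in_tail, before_tail)  # True False
</```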
def load_recipe(self, recipe):
self.recipe = recipe
for module_description in recipe['modules']:
module_name = module_description['name']
module = self.config.get_module(module_name)(self)
self._module_pool[module_name] = module | Populates the internal module pool with modules declared in a recipe.
Args:
recipe: Dict, recipe declaring modules to load. |
def load_config(self, config_file_name):
with open(config_file_name) as f:
commands = f.read().splitlines()
for command in commands:
if not command.startswith(';'):
try:
self.send_command(command)
except XenaCommandException as e:
self.logger.warning(str(e)) | Load configuration file from xpc file.
:param config_file_name: full path to the configuration file. |
def register_array(self, name, shape, dtype, **kwargs):
if name in self._arrays:
raise ValueError(('Array %s is already registered '
'on this cube object.') % name)
A = self._arrays[name] = AttrDict(name=name,
dtype=dtype, shape=shape,
**kwargs)
return A | Register an array with this cube.
.. code-block:: python
cube.register_array("model_vis", ("ntime", "nbl", "nchan", 4), np.complex128)
Parameters
----------
name : str
Array name
shape : A tuple containing either Dimension names or ints
Array shape schema
dtype :
Array data type |
def business_days(start, stop):
dates=rrule.rruleset()
dates.rrule(rrule.rrule(rrule.DAILY, dtstart=start, until=stop))
dates.exrule(rrule.rrule(rrule.DAILY, byweekday=(rrule.SA, rrule.SU), dtstart=start))
return dates.count() | Return business days between two inclusive dates - ignoring public holidays.
Note that start must be less than stop or else 0 is returned.
@param start: Start date
@param stop: Stop date
@return int |
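The same count can be sketched without `dateutil` by walking the inclusive range and keeping weekdays only (public holidays are, as the docstring notes, not considered):

```python
from datetime import date, timedelta

def business_days(start, stop):
    # Count Mon-Fri between the two inclusive dates; 0 when start > stop.
    if start > stop:
        return 0
    total = (stop - start).days + 1
    return sum(1 for i in range(total)
               if (start + timedelta(days=i)).weekday() < 5)

print(business_days(date(2024, 1, 1), date(2024, 1, 7)))  # Mon..Sun -> 5
```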
def get_pipeline(node: Node) -> RenderingPipeline:
pipeline = _get_registered_pipeline(node)
if pipeline is None:
msg = _get_pipeline_registration_error_message(node)
raise RenderingError(msg)
return pipeline | Gets rendering pipeline for passed node |
def _checker_mixer(slice1,
slice2,
checker_size=None):
checkers = _get_checkers(slice1.shape, checker_size)
if slice1.shape != slice2.shape or slice2.shape != checkers.shape:
raise ValueError('size mismatch between cropped slices and checkers!!!')
mixed = slice1.copy()
mixed[checkers > 0] = slice2[checkers > 0]
return mixed | Mixes the two slices in alternating areas specified by checkers |
def url_encode(url):
if isinstance(url, text_type):
url = url.encode('utf8')
return quote(url, ':/%?&=') | Convert special characters using %xx escape.
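A standard-library equivalent of the row above; the `safe` set keeps URL delimiters unescaped while spaces and other special characters are percent-encoded:

```python
from urllib.parse import quote

def url_encode(url):
    # Escape special characters but leave URL delimiters intact.
    return quote(url, safe=':/%?&=')

print(url_encode('https://example.com/a b?q=x y'))
# https://example.com/a%20b?q=x%20y
```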
:param url: str
:return: str - encoded url |
def to_text(self):
if self.text == '':
return '::%s' % self.uri
return '::%s [%s]' % (self.text, self.uri) | Render as plain text. |
def format(self):
if hasattr(self.image, '_getexif'):
self.rotate_exif()
crop_box = self.crop_to_ratio()
self.resize()
return self.image, crop_box | Crop and resize the supplied image. Return the image and the crop_box used.
If the input format is JPEG and in EXIF there is information about rotation, use it and rotate resulting image. |
def combine_kwargs(**kwargs):
combined_kwargs = []
for kw, arg in kwargs.items():
if isinstance(arg, dict):
for k, v in arg.items():
for tup in flatten_kwarg(k, v):
combined_kwargs.append(('{}{}'.format(kw, tup[0]), tup[1]))
elif is_multivalued(arg):
for i in arg:
for tup in flatten_kwarg('', i):
combined_kwargs.append(('{}{}'.format(kw, tup[0]), tup[1]))
else:
combined_kwargs.append((text_type(kw), arg))
return combined_kwargs | Flatten a series of keyword arguments from complex combinations of
dictionaries and lists into a list of tuples representing
properly-formatted parameters to pass to the Requester object.
:param kwargs: A dictionary containing keyword arguments to be
flattened into properly-formatted parameters.
:type kwargs: dict
:returns: A list of tuples that represent flattened kwargs. The
first element is a string representing the key. The second
element is the value.
:rtype: `list` of `tuple` |
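A simplified, non-recursive sketch of the flattening described above (the `key[subkey]` / `key[]` naming convention is assumed; the original also recurses through `flatten_kwarg`, which is not shown here):

```python
def combine_kwargs(**kwargs):
    # Dicts become key[subkey] pairs; sequences become repeated key[] pairs.
    combined = []
    for key, value in kwargs.items():
        if isinstance(value, dict):
            for k, v in value.items():
                combined.append(('{}[{}]'.format(key, k), v))
        elif isinstance(value, (list, tuple)):
            for item in value:
                combined.append(('{}[]'.format(key), item))
        else:
            combined.append((key, value))
    return combined

print(combine_kwargs(course={'name': 'Math'}, scopes=['read', 'write'], page=2))
# [('course[name]', 'Math'), ('scopes[]', 'read'), ('scopes[]', 'write'), ('page', 2)]
```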
def _get_subject_public_key(cert):
public_key = cert.get_pubkey()
cryptographic_key = public_key.to_cryptography_key()
subject_public_key = cryptographic_key.public_bytes(Encoding.DER,
PublicFormat.PKCS1)
return subject_public_key | Returns the SubjectPublicKey asn.1 field of the SubjectPublicKeyInfo
field of the server's certificate. This is used in the server
verification steps to thwart MitM attacks.
:param cert: X509 certificate from pyOpenSSL .get_peer_certificate()
:return: byte string of the asn.1 DER encoded SubjectPublicKey field |
def limit(self, value):
self._query = self._query.limit(value)
return self | Allows for limiting number of results returned for query. Useful
for pagination. |
def list_private_repos(profile='github'):
repos = []
for repo in _get_repos(profile):
if repo.private is True:
repos.append(repo.name)
return repos | List private repositories within the organization. Dependent upon the access
rights of the profile token.
.. versionadded:: 2016.11.0
profile
The name of the profile configuration to use. Defaults to ``github``.
CLI Example:
.. code-block:: bash
salt myminion github.list_private_repos
salt myminion github.list_private_repos profile='my-github-profile' |
def map(self, f, preservesPartitioning=False):
return (
self
.mapPartitions(lambda p: (f(e) for e in p), preservesPartitioning)
.transform(lambda rdd:
rdd.setName('{}:{}'.format(rdd.prev.name(), f)))
) | Apply function f
:param f: mapping function
:rtype: DStream
Example:
>>> import pysparkling
>>> sc = pysparkling.Context()
>>> ssc = pysparkling.streaming.StreamingContext(sc, 0.1)
>>> (
... ssc
... .queueStream([[4], [2], [7]])
... .map(lambda e: e + 1)
... .foreachRDD(lambda rdd: print(rdd.collect()))
... )
>>> ssc.start()
>>> ssc.awaitTermination(0.35)
[5]
[3]
[8] |
def with_name(cls, name, id_user=0, **extra_data):
return cls(name=name, id_user=id_user, **extra_data) | Instantiate a WorkflowEngine given a name or UUID.
:param name: name of workflow to run.
:type name: str
:param id_user: id of user to associate with workflow
:type id_user: int
:param module_name: label used to query groups of workflows.
:type module_name: str |
def cv_error(self, cv=True, skip_endpoints=True):
resids = self.cv_residuals(cv)
if skip_endpoints:
resids = resids[1:-1]
return np.mean(abs(resids)) | Return the mean absolute cross-validation residual for the input data
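The computation above reduces to a mean absolute residual with the endpoints optionally dropped; a dependency-free sketch:

```python
def cv_error(resids, skip_endpoints=True):
    # Mean absolute cross-validation residual, optionally dropping endpoints.
    if skip_endpoints:
        resids = resids[1:-1]
    return sum(abs(r) for r in resids) / len(resids)

print(cv_error([0.5, -1.0, 2.0, -0.5]))  # mean(|-1.0|, |2.0|) = 1.5
```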
def decodes(self, s: str) -> BioCCollection:
tree = etree.parse(io.BytesIO(bytes(s, encoding='UTF-8')))
collection = self.__parse_collection(tree.getroot())
collection.encoding = tree.docinfo.encoding
collection.standalone = tree.docinfo.standalone
collection.version = tree.docinfo.xml_version
return collection | Deserialize ``s`` to a BioC collection object.
Args:
s: a "str" instance containing a BioC collection
Returns:
an object of BioCollection |
def create_guest_screen_info(self, display, status, primary, change_origin, origin_x, origin_y, width, height, bits_per_pixel):
if not isinstance(display, baseinteger):
raise TypeError("display can only be an instance of type baseinteger")
if not isinstance(status, GuestMonitorStatus):
raise TypeError("status can only be an instance of type GuestMonitorStatus")
if not isinstance(primary, bool):
raise TypeError("primary can only be an instance of type bool")
if not isinstance(change_origin, bool):
raise TypeError("change_origin can only be an instance of type bool")
if not isinstance(origin_x, baseinteger):
raise TypeError("origin_x can only be an instance of type baseinteger")
if not isinstance(origin_y, baseinteger):
raise TypeError("origin_y can only be an instance of type baseinteger")
if not isinstance(width, baseinteger):
raise TypeError("width can only be an instance of type baseinteger")
if not isinstance(height, baseinteger):
raise TypeError("height can only be an instance of type baseinteger")
if not isinstance(bits_per_pixel, baseinteger):
raise TypeError("bits_per_pixel can only be an instance of type baseinteger")
guest_screen_info = self._call("createGuestScreenInfo",
in_p=[display, status, primary, change_origin, origin_x, origin_y, width, height, bits_per_pixel])
guest_screen_info = IGuestScreenInfo(guest_screen_info)
return guest_screen_info | Make a IGuestScreenInfo object with the provided parameters.
in display of type int
The number of the guest display.
in status of type :class:`GuestMonitorStatus`
@c True, if this guest screen is enabled,
@c False otherwise.
in primary of type bool
Whether this guest monitor must be primary.
in change_origin of type bool
@c True, if the origin of the guest screen should be changed,
@c False otherwise.
in origin_x of type int
The X origin of the guest screen.
in origin_y of type int
The Y origin of the guest screen.
in width of type int
The width of the guest screen.
in height of type int
The height of the guest screen.
in bits_per_pixel of type int
The number of bits per pixel of the guest screen.
return guest_screen_info of type :class:`IGuestScreenInfo`
The created object. |
def customer_gateway_exists(customer_gateway_id=None, customer_gateway_name=None,
region=None, key=None, keyid=None, profile=None):
return resource_exists('customer_gateway', name=customer_gateway_name,
resource_id=customer_gateway_id,
region=region, key=key, keyid=keyid, profile=profile) | Given a customer gateway ID, check if the customer gateway ID exists.
Returns True if the customer gateway ID exists; Returns False otherwise.
CLI Example:
.. code-block:: bash
salt myminion boto_vpc.customer_gateway_exists cgw-b6a247df
salt myminion boto_vpc.customer_gateway_exists customer_gateway_name=mycgw |
def subdomain(self, hostname):
hostname = hostname.split(":")[0]
for domain in getDomainNames(self.siteStore):
if hostname.endswith("." + domain):
username = hostname[:-len(domain) - 1]
if username != "www":
return username, domain
return None | Determine of which known domain the given hostname is a subdomain.
@return: A two-tuple giving the subdomain part and the domain part or
C{None} if the domain is not a subdomain of any known domain. |
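A standalone version of the lookup above, with the known domains passed in rather than read from a site store (names are illustrative):

```python
def subdomain(hostname, known_domains):
    # Strip any port, then test the hostname against each known domain.
    hostname = hostname.split(":")[0]
    for domain in known_domains:
        if hostname.endswith("." + domain):
            username = hostname[:-len(domain) - 1]
            if username != "www":
                return username, domain
    return None

print(subdomain("alice.example.com:8080", ["example.com"]))  # ('alice', 'example.com')
print(subdomain("www.example.com", ["example.com"]))         # None
```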
def intervention(self, commit, conf):
if not conf.harpoon.interactive or conf.harpoon.no_intervention:
yield
return
hp.write_to(conf.harpoon.stdout, "!!!!\n")
hp.write_to(conf.harpoon.stdout, "It would appear building the image failed\n")
hp.write_to(conf.harpoon.stdout, "Do you want to run {0} where the build failed to help debug why it failed?\n".format(conf.resolved_shell))
conf.harpoon.stdout.flush()
answer = input("[y]: ")
if answer and not answer.lower().startswith("y"):
yield
return
with self.commit_and_run(commit, conf, command=conf.resolved_shell):
yield | Ask the user if they want to commit this container and run sh in it |
async def _put_chunk(
cls, session: aiohttp.ClientSession,
upload_uri: str, buf: bytes):
headers = {
'Content-Type': 'application/octet-stream',
'Content-Length': '%s' % len(buf),
}
credentials = cls._handler.session.credentials
if credentials is not None:
utils.sign(upload_uri, headers, credentials)
async with await session.put(
upload_uri, data=buf, headers=headers) as response:
if response.status != 200:
content = await response.read()
request = {
"body": buf,
"headers": headers,
"method": "PUT",
"uri": upload_uri,
}
raise CallError(request, response, content, None) | Upload one chunk to `upload_uri`. |
def _ensure_tuple_or_list(arg_name, tuple_or_list):
if not isinstance(tuple_or_list, (tuple, list)):
raise TypeError(
"Expected %s to be a tuple or list. "
"Received %r" % (arg_name, tuple_or_list)
)
return list(tuple_or_list) | Ensures an input is a tuple or list.
This effectively reduces the iterable types allowed to a very short
whitelist: list and tuple.
:type arg_name: str
:param arg_name: Name of argument to use in error message.
:type tuple_or_list: sequence of str
:param tuple_or_list: Sequence to be verified.
:rtype: list of str
:returns: The ``tuple_or_list`` passed in cast to a ``list``.
:raises TypeError: if the ``tuple_or_list`` is not a tuple or list. |
def multi_rpush(self, queue, values, bulk_size=0, transaction=False):
if hasattr(values, '__iter__'):
pipe = self.pipeline(transaction=transaction)
pipe.multi()
self._multi_rpush_pipeline(pipe, queue, values, bulk_size)
pipe.execute()
else:
raise ValueError('Expected an iterable') | Pushes multiple elements to a list
If bulk_size is set it will execute the pipeline every bulk_size elements
This operation will be atomic if transaction=True is passed |
def scale_subplots(subplots=None, xlim='auto', ylim='auto'):
auto_axis = ''
if xlim == 'auto':
auto_axis += 'x'
if ylim == 'auto':
auto_axis += 'y'
autoscale_subplots(subplots, auto_axis)
for loc, ax in numpy.ndenumerate(subplots):
if 'x' not in auto_axis:
ax.set_xlim(xlim)
if 'y' not in auto_axis:
ax.set_ylim(ylim) | Set the x and y axis limits for a collection of subplots.
Parameters
-----------
subplots : ndarray or list of matplotlib.axes.Axes
xlim : None | 'auto' | (xmin, xmax)
'auto' : sets the limits according to the most
extreme values of data encountered.
ylim : None | 'auto' | (ymin, ymax) |
def add(self, src):
if not audio.get_type(src):
raise TypeError('The type of this file is not supported.')
return super().add(src) | store an audio file to storage dir
:param src: audio file path
:return: checksum value |
def newline(self, *args, **kwargs):
levelOverride = kwargs.get('level') or self._lastlevel
self._log(levelOverride, '', 'newline', args, kwargs) | Prints an empty line to the log. Uses the level of the last message
printed unless specified otherwise with the level= kwarg. |
def get_by_hostname(self, hostname):
resources = self._client.get_all()
resources_filtered = [x for x in resources if x['hostname'] == hostname]
if resources_filtered:
return resources_filtered[0]
else:
return None | Retrieve a storage system by its hostname.
Works only in API500 onwards.
Args:
hostname: Storage system hostname.
Returns:
dict |
def weld_arrays_to_vec_of_struct(arrays, weld_types):
weld_obj = create_empty_weld_object()
obj_ids = [get_weld_obj_id(weld_obj, array) for array in arrays]
arrays = 'zip({})'.format(', '.join(obj_ids)) if len(obj_ids) > 1 else '{}'.format(obj_ids[0])
input_types = struct_of('{e}', weld_types) if len(obj_ids) > 1 else '{}'.format(weld_types[0])
res_types = struct_of('{e}', weld_types)
to_merge = 'e' if len(obj_ids) > 1 else '{e}'
weld_template =
weld_obj.weld_code = weld_template.format(arrays=arrays,
input_types=input_types,
res_types=res_types,
to_merge=to_merge)
return weld_obj | Create a vector of structs from multiple vectors.
Parameters
----------
arrays : list of (numpy.ndarray or WeldObject)
Arrays to put in a struct.
weld_types : list of WeldType
The Weld types of the arrays in the same order.
Returns
-------
WeldObject
Representation of this computation. |
def _split_stock_code(self, code):
stock_str = str(code)
split_loc = stock_str.find(".")
if 0 <= split_loc < len(
stock_str) - 1 and stock_str[0:split_loc] in MKT_MAP:
market_str = stock_str[0:split_loc]
partial_stock_str = stock_str[split_loc + 1:]
return RET_OK, (market_str, partial_stock_str)
else:
error_str = ERROR_STR_PREFIX + "format of %s is wrong. (US.AAPL, HK.00700, SZ.000001)" % stock_str
return RET_ERROR, error_str | Do not use the built-in split function in Python:
it cannot handle some stock strings correctly.
For instance, in US..DJI the dot . itself is part of the original code. |
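The docstring's point can be demonstrated directly: `str.split('.')` would mangle `US..DJI`, so the code locates only the first dot with `find()`. A standalone sketch with an assumed `MKT_MAP`:

```python
MKT_MAP = {'US': 1, 'HK': 2, 'SZ': 3, 'SH': 4}  # assumed market map

def split_stock_code(code):
    # str.split('.') would break 'US..DJI', where the second dot is part
    # of the ticker, so split only at the first dot.
    s = str(code)
    i = s.find('.')
    if 0 <= i < len(s) - 1 and s[:i] in MKT_MAP:
        return s[:i], s[i + 1:]
    raise ValueError('format of %s is wrong (US.AAPL, HK.00700, SZ.000001)' % s)

print(split_stock_code('US..DJI'))   # ('US', '.DJI')
print(split_stock_code('HK.00700'))  # ('HK', '00700')
```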
def getPolicyValue(self):
self._cur.execute("SELECT action FROM policy")
r = self._cur.fetchall()
policy = [x[0] for x in r]
self._cur.execute("SELECT value FROM V")
r = self._cur.fetchall()
value = [x[0] for x in r]
return policy, value | Get the policy and value vectors. |
def columnSimilarities(self, threshold=0.0):
java_sims_mat = self._java_matrix_wrapper.call("columnSimilarities", float(threshold))
return CoordinateMatrix(java_sims_mat) | Compute similarities between columns of this matrix.
The threshold parameter is a trade-off knob between estimate
quality and computational cost.
The default threshold setting of 0 guarantees deterministically
correct results, but uses the brute-force approach of computing
normalized dot products.
Setting the threshold to positive values uses a sampling
approach and incurs strictly less computational cost than the
brute-force approach. However the similarities computed will
be estimates.
The sampling guarantees relative-error correctness for those
pairs of columns that have similarity greater than the given
similarity threshold.
To describe the guarantee, we set some notation:
* Let A be the smallest in magnitude non-zero element of
this matrix.
* Let B be the largest in magnitude non-zero element of
this matrix.
* Let L be the maximum number of non-zeros per row.
For example, for {0,1} matrices: A=B=1.
Another example, for the Netflix matrix: A=1, B=5
For those column pairs that are above the threshold, the
computed similarity is correct to within 20% relative error
with probability at least 1 - (0.981)^10/B^
The shuffle size is bounded by the *smaller* of the following
two expressions:
* O(n log(n) L / (threshold * A))
* O(m L^2^)
The latter is the cost of the brute-force approach, so for
non-zero thresholds, the cost is always cheaper than the
brute-force approach.
:param: threshold: Set to 0 for deterministic guaranteed
correctness. Similarities above this
threshold are estimated with the cost vs
estimate quality trade-off described above.
:return: An n x n sparse upper-triangular CoordinateMatrix of
cosine similarities between columns of this matrix.
>>> rows = sc.parallelize([[1, 2], [1, 5]])
>>> mat = RowMatrix(rows)
>>> sims = mat.columnSimilarities()
>>> sims.entries.first().value
0.91914503... |
def validate_metadata(self, handler):
if self.meta == 'category':
new_metadata = self.metadata
cur_metadata = handler.read_metadata(self.cname)
if (new_metadata is not None and cur_metadata is not None and
not array_equivalent(new_metadata, cur_metadata)):
raise ValueError("cannot append a categorical with "
"different categories to the existing") | validate that kind=category does not change the categories |