def memoize(func):
    cache = {}

    def memoizer():
        if 0 not in cache:
            cache[0] = func()
        return cache[0]
    return functools.wraps(func)(memoizer)
Cache forever.
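A minimal, self-contained usage sketch of this zero-argument memoizer (the decorator is reproduced here so the example runs on its own):

```python
import functools

def memoize(func):
    # Cache the single result of a zero-argument function forever.
    cache = {}

    def memoizer():
        if 0 not in cache:
            cache[0] = func()
        return cache[0]
    return functools.wraps(func)(memoizer)

calls = []

@memoize
def expensive():
    calls.append(1)
    return 42

print(expensive(), expensive(), len(calls))  # 42 42 1
```

The body runs once; every later call returns the cached value.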
def _split_match_steps_into_match_traversals(match_steps):
    output = []
    current_list = None
    for step in match_steps:
        if isinstance(step.root_block, QueryRoot):
            if current_list is not None:
                output.append(current_list)
            current_list = [step]
        else:
            current_list.append(step)
    if current_list is None:
        raise AssertionError(u'current_list was unexpectedly None: {}'.format(match_steps))
    output.append(current_list)
    return output
Split a list of MatchSteps into multiple lists, each denoting a single MATCH traversal.
async def hmset(self, name, mapping):
    if not mapping:
        raise DataError("'hmset' with 'mapping' of length 0")
    items = []
    for pair in iteritems(mapping):
        items.extend(pair)
    return await self.execute_command('HMSET', name, *items)
Set key to value within hash ``name`` for each corresponding key and value from the ``mapping`` dict.
def get_access_token(self):
    if (self.token is None) or (datetime.utcnow() > self.reuse_token_until):
        headers = {'Ocp-Apim-Subscription-Key': self.client_secret}
        response = requests.post(self.base_url, headers=headers)
        response.raise_for_status()
        self.token = response.content
        self.reuse_token_until = datetime.utcnow() + timedelta(minutes=5)
    return self.token.decode('utf-8')
Returns an access token for the specified subscription. This method uses a cache to limit the number of requests to the token service. A fresh token can be re-used during its lifetime of 10 minutes. After a successful request to the token service, this method caches the access token. Subsequent invocations of the method return the cached token for the next 5 minutes. After 5 minutes, a new token is fetched from the token service and the cache is updated.
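The caching pattern above (refetch only after the reuse window expires) can be sketched in isolation; the HTTP call is replaced by a hypothetical `fetch` callable so the sketch is self-contained and testable:

```python
from datetime import datetime, timedelta

class TokenCache:
    # Sketch of the cache pattern: refetch only when the reuse window expires.
    def __init__(self, fetch):
        self._fetch = fetch              # callable returning a fresh token (stub)
        self._token = None
        self._reuse_until = datetime.min

    def get(self, now=None):
        now = now or datetime.utcnow()
        if self._token is None or now > self._reuse_until:
            self._token = self._fetch()
            self._reuse_until = now + timedelta(minutes=5)
        return self._token

fetches = []
cache = TokenCache(lambda: fetches.append(1) or "tok-%d" % len(fetches))
t0 = datetime(2024, 1, 1, 12, 0)
print(cache.get(t0), cache.get(t0 + timedelta(minutes=4)))  # tok-1 tok-1
print(cache.get(t0 + timedelta(minutes=6)))                 # tok-2
```

Only two fetches occur: the call inside the 5-minute window reuses the cached token.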
def str(self, local=False, ifempty=None):
    ts = self.get(local)
    if not ts:
        return ifempty
    return ts.strftime('%Y-%m-%d %H:%M:%S')
Returns the string representation of the datetime, or ``ifempty`` if unset.
def single(self):
    nodes = super(OneOrMore, self).all()
    if nodes:
        return nodes[0]
    raise CardinalityViolation(self, 'none')
Fetch one of the related nodes

:return: Node
def getLogger(name=None, **kwargs):
    adapter = _LOGGERS.get(name)
    if not adapter:
        adapter = KeywordArgumentAdapter(logging.getLogger(name), kwargs)
        _LOGGERS[name] = adapter
    return adapter
Build a logger with the given name.

:param name: The name for the logger. This is usually the module name, ``__name__``.
:type name: string
def check_notebooks_for_errors(notebooks_directory):
    print("Checking notebooks in directory {} for errors".format(notebooks_directory))
    failed_notebooks_count = 0
    for file in os.listdir(notebooks_directory):
        if file.endswith(".ipynb"):
            print("Checking notebook " + file)
            full_file_path = os.path.join(notebooks_directory, file)
            output, errors = run_notebook(full_file_path)
            if errors is not None and len(errors) > 0:
                failed_notebooks_count += 1
                print("Errors in notebook " + file)
                print(errors)
    if failed_notebooks_count == 0:
        print("No errors found in notebooks under " + notebooks_directory)
Evaluates all notebooks in the given directory and prints errors, if any
def utc_datetime_and_leap_second(self):
    year, month, day, hour, minute, second = self._utc_tuple(_half_millisecond)
    second, fraction = divmod(second, 1.0)
    second = second.astype(int)
    leap_second = second // 60
    second -= leap_second
    milli = (fraction * 1000).astype(int) * 1000
    if self.shape:
        utcs = [utc] * self.shape[0]
        argsets = zip(year, month, day, hour, minute, second, milli, utcs)
        dt = array([datetime(*args) for args in argsets])
    else:
        dt = datetime(year, month, day, hour, minute, second, milli, utc)
    return dt, leap_second
Convert to a Python ``datetime`` in UTC, plus a leap second value.

Convert this time to a `datetime`_ object and a leap second::

    dt, leap_second = t.utc_datetime_and_leap_second()

If the third-party `pytz`_ package is available, then its ``utc``
timezone will be used as the timezone of the return value. Otherwise,
Skyfield uses its own ``utc`` timezone.

The leap second value is provided because a Python ``datetime`` can only
number seconds ``0`` through ``59``, but leap seconds have a designation
of at least ``60``. The leap second return value will normally be ``0``,
but will instead be ``1`` if the date and time are a UTC leap second.
Add the leap second value to the ``second`` field of the ``datetime``
to learn the real name of the second.

If this time is an array, then an array of ``datetime`` objects and an
array of leap second integers is returned, instead of a single value each.
def get_quoted_foreign_columns(self, platform):
    columns = []
    for column in self._foreign_column_names.values():
        columns.append(column.get_quoted_name(platform))
    return columns
Returns the quoted representation of the referenced table column names
the foreign key constraint is associated with.

But only if they were defined with one or the referenced table column name
is a keyword reserved by the platform.
Otherwise the plain unquoted value as inserted is returned.

:param platform: The platform to use for quotation.
:type platform: Platform

:rtype: list
def dict_clip(a_dict, include_keys_lst=()):
    # Avoid a mutable default argument; keep only the listed keys.
    return {key: value for key, value in a_dict.items() if key in include_keys_lst}
Returns a new dict with keys not in include_keys_lst clipped off
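A quick self-contained usage sketch (reproduced here with the parameter-name typo fixed and a safe immutable default):

```python
def dict_clip(a_dict, include_keys_lst=()):
    # Keep only the entries whose key is in include_keys_lst.
    return {k: v for k, v in a_dict.items() if k in include_keys_lst}

d = {'a': 1, 'b': 2, 'c': 3}
print(dict_clip(d, ['a', 'c']))  # {'a': 1, 'c': 3}
print(dict_clip(d))              # {}
```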
def apply_trend_constraint(self, limit, dt, **kwargs):
    if 'RV monitoring' not in self.constraints:
        self.constraints.append('RV monitoring')
    for pop in self.poplist:
        if not hasattr(pop, 'dRV'):
            continue
        pop.apply_trend_constraint(limit, dt, **kwargs)
    self.trend_limit = limit
    self.trend_dt = dt
Applies constraint corresponding to RV trend non-detection to each population.

See :func:`stars.StarPopulation.apply_trend_constraint`;
all arguments are passed to that function for each population.
def warning(cls, name, message, *args):
    cls.getLogger(name).warning(message, *args)
Convenience function to log a message at the WARNING level.

:param name: The name of the logger instance in the VSG namespace (VSG.<name>)
:param message: A message format string.
:param args: The arguments that are merged into msg using the string formatting operator.

.. note:: The native logger's `kwargs` are not used in this function.
def register_event(self, event_name, event_level, message):
    self.events[event_name] = (event_level, message)
Registers an event so that it can be logged later.
def assert_conditions(self):
    self.assert_condition_md5()
    etag = self.clean_etag(self.call_method('get_etag'))
    self.response.last_modified = self.call_method('get_last_modified')
    self.assert_condition_etag()
    self.assert_condition_last_modified()
Handles various HTTP conditions and raises HTTP exceptions to abort the request.

- Content-MD5 request header must match the MD5 hash of the full input
  (:func:`assert_condition_md5`).
- If-Match and If-None-Match etags are checked against the ETag of this
  resource (:func:`assert_condition_etag`).
- If-Modified-Since and If-Unmodified-Since are checked against the
  modification date of this resource (:func:`assert_condition_last_modified`).

.. todo:: Return a 501 exception when any Content-* headers have been set
   in the request. (See :rfc:`2616`, section 9.6)
def read(self, size=-1):
    if size < -1:
        raise Exception('You shouldnt be doing this')
    if size == -1:
        t = self.current_segment.remaining_len(self.current_position)
        if not t:
            return None
        old_new_pos = self.current_position
        self.current_position = self.current_segment.end_address
        return self.current_segment.data[old_new_pos - self.current_segment.start_address:]

    t = self.current_position + size
    if not self.current_segment.inrange(t):
        raise Exception('Would read over segment boundaries!')
    old_new_pos = self.current_position
    self.current_position = t
    return self.current_segment.data[
        old_new_pos - self.current_segment.start_address:
        t - self.current_segment.start_address]
Returns data bytes of size `size` from the current segment. If size is -1,
it returns all the remaining data bytes from the memory segment.
def _no_proxy(method):
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        notproxied = _oga(self, "__notproxied__")
        _osa(self, "__notproxied__", True)
        try:
            return method(self, *args, **kwargs)
        finally:
            _osa(self, "__notproxied__", notproxied)
    return wrapper
Returns a wrapped version of `method`, such that proxying is turned off during the method call.
def calc_path_and_create_folders(folder, import_path):
    file_path = abspath(path_join(
        folder,
        import_path[:import_path.rfind(".")].replace(".", folder_seperator) + ".py"))
    mkdir_p(dirname(file_path))
    return file_path
Calculate the path and create the needed folders.
def get_model(is_netfree=False, without_reset=False, seeds=None, effective=False):
    try:
        if seeds is not None or is_netfree:
            m = ecell4_base.core.NetfreeModel()
        else:
            m = ecell4_base.core.NetworkModel()

        for sp in SPECIES_ATTRIBUTES:
            m.add_species_attribute(sp)
        for rr in REACTION_RULES:
            m.add_reaction_rule(rr)

        if not without_reset:
            reset_model()

        if seeds is not None:
            return m.expand(seeds)

        if isinstance(m, ecell4_base.core.NetfreeModel):
            m.set_effective(effective)
    except Exception as e:
        reset_model()
        raise e
    return m
Generate a model with parameters in the global scope, ``SPECIES_ATTRIBUTES``
and ``REACTION_RULES``.

Parameters
----------
is_netfree : bool, optional
    Return ``NetfreeModel`` if True, and ``NetworkModel`` otherwise.
    Default is False.
without_reset : bool, optional
    Do not reset the global variables after the generation if True.
    Default is False.
seeds : list, optional
    A list of seed ``Species`` for expanding the model. If this is not
    None, generate a ``NetfreeModel`` once, and return a ``NetworkModel``,
    which is an expanded form of that with the given seeds.
    Default is None.
effective : bool, optional
    See ``NetfreeModel.effective`` and ``NetfreeModel.set_effective``.
    Only meaningful with option ``is_netfree=True``. Default is False.

Returns
-------
model : NetworkModel, NetfreeModel
def from_list(cls, database, key, data, clear=False):
    arr = cls(database, key)
    if clear:
        arr.clear()
    arr.extend(data)
    return arr
Create and populate an Array object from a list of data.
def read_datafiles(files, dtype, column):
    # Initialize all return values so they are defined even if the
    # corresponding file type is not present in the inputs.
    cov = None
    mag = None
    pha = []
    pha_fpi = []
    for filename, filetype in zip(files, dtype):
        if filetype == 'cov':
            cov = load_cov(filename)
        elif filetype == 'mag':
            mag = load_rho(filename, column)
        elif filetype == 'pha':
            pha = load_rho(filename, 2)
        elif filetype == 'pha_fpi':
            pha_fpi = load_rho(filename, 2)
    return cov, mag, pha, pha_fpi
Load the datafiles and return cov, mag, phase and fpi phase values.
def report(data):
    work_dir = dd.get_work_dir(data[0][0])
    out_dir = op.join(work_dir, "report")
    safe_makedir(out_dir)
    summary_file = op.join(out_dir, "summary.csv")
    with file_transaction(summary_file) as out_tx:
        with open(out_tx, 'w') as out_handle:
            out_handle.write("sample_id,%s\n" % _guess_header(data[0][0]))
            for sample in data:
                info = sample[0]
                group = _guess_group(info)
                files = info["seqbuster"] if "seqbuster" in info else "None"
                out_handle.write(",".join([dd.get_sample_name(info), group]) + "\n")
    _modify_report(work_dir, out_dir)
    return summary_file
Create an Rmd report for small RNA-seq analysis
def cancel_signature_request(self, signature_request_id):
    request = self._get_request()
    request.post(url=self.SIGNATURE_REQUEST_CANCEL_URL + signature_request_id,
                 get_json=False)
Cancels a SignatureRequest

After canceling, no one will be able to sign or access the SignatureRequest
or its documents. Only the requester can cancel and only before everyone
has signed.

Args:
    signature_request_id (str): The id of the signature request to cancel

Returns:
    None
def do_move_to(self, element, decl, pseudo):
    target = serialize(decl.value).strip()
    step = self.state[self.state['current_step']]
    elem = self.current_target().tree
    actions = step['actions']
    for pos, action in enumerate(reversed(actions)):
        if action[0] == 'move' and action[1] == elem:
            target_index = -pos - 1
            actions[target_index:] = actions[target_index + 1:]
            break
    _, valstep = self.lookup('pending', target)
    if not valstep:
        step['pending'][target] = [('move', elem)]
    else:
        self.state[valstep]['pending'][target].append(('move', elem))
Implement move-to declaration.
def is_obtuse(p1, v, p2):
    p1x = p1[:, 1]
    p1y = p1[:, 0]
    p2x = p2[:, 1]
    p2y = p2[:, 0]
    vx = v[:, 1]
    vy = v[:, 0]
    Dx = vx - p2x
    Dy = vy - p2y
    Dvp1x = p1x - vx
    Dvp1y = p1y - vy
    return Dvp1x * Dx + Dvp1y * Dy > 0
Determine whether the angle p1 - v - p2 is obtuse

p1 - N x 2 array of coordinates of first point on edge
v  - N x 2 array of vertex coordinates
p2 - N x 2 array of coordinates of second point on edge

returns vector of booleans
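The test is a dot product: the angle at v is obtuse exactly when (p1 - v) . (p2 - v) < 0, i.e. (p1 - v) . (v - p2) > 0 as computed above. A compact vectorized sketch (the per-column unpacking in the original doesn't affect the dot product, so full-array arithmetic gives the same result):

```python
import numpy as np

def is_obtuse(p1, v, p2):
    # Obtuse at v iff cos(angle) < 0, i.e. (p1 - v) . (v - p2) > 0.
    return np.sum((p1 - v) * (v - p2), axis=1) > 0

p1 = np.array([[0.0, 1.0]])
v = np.array([[0.0, 0.0]])
p2 = np.array([[1.0, -1.0]])   # angle at v is 135 degrees -> obtuse
print(is_obtuse(p1, v, p2))    # [ True]
```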
def discover_handler_classes(handlers_package):
    if handlers_package is None:
        return
    sys.path.insert(0, os.getcwd())
    package = import_module(handlers_package)
    if hasattr(package, '__path__'):
        for _, modname, _ in pkgutil.iter_modules(package.__path__):
            import_module('{package}.{module}'.format(package=package.__name__,
                                                      module=modname))
    return registered_handlers
Looks for handler classes within handler path module.

Currently it's not looking deep into nested modules.

:param handlers_package: module path to handlers
:type handlers_package: string
:return: list of handler classes
def track_change(self, instance, resolution_level=0):
    tobj = self.objects[id(instance)]
    tobj.set_resolution_level(resolution_level)
Change tracking options for the already tracked object 'instance'. If instance is not tracked, a KeyError will be raised.
def default_logging(grab_log=None, network_log=None, level=logging.DEBUG,
                    mode='a', propagate_network_logger=False):
    logging.basicConfig(level=level)

    network_logger = logging.getLogger('grab.network')
    network_logger.propagate = propagate_network_logger
    if network_log:
        hdl = logging.FileHandler(network_log, mode)
        network_logger.addHandler(hdl)
        network_logger.setLevel(level)

    grab_logger = logging.getLogger('grab')
    if grab_log:
        hdl = logging.FileHandler(grab_log, mode)
        grab_logger.addHandler(hdl)
        grab_logger.setLevel(level)
Customize logging output to display all log messages except grab network logs. Redirect grab network logs into a file.
def get_current_url(request, ignore_params=None):
    if ignore_params is None:
        ignore_params = set()
    protocol = u'https' if request.is_secure() else u'http'
    service_url = u"%s://%s%s" % (protocol, request.get_host(), request.path)
    if request.GET:
        params = copy_params(request.GET, ignore_params)
        if params:
            service_url += u"?%s" % urlencode(params)
    return service_url
Given a django request, return the current http url, possibly ignoring some GET parameters.

:param django.http.HttpRequest request: The current request object.
:param set ignore_params: An optional set of GET parameters to ignore
:return: The URL of the current page, possibly omitting some parameters from
    ``ignore_params`` in the querystring.
:rtype: unicode
def shell_sqlalchemy(session: SqlalchemySession, backend: ShellBackend):
    namespace = {'session': session}
    namespace.update(backend.get_namespace())
    embed(user_ns=namespace, header=backend.header)
This command includes the SQLAlchemy DB session in the shell namespace
def _FlushCache(cls, format_categories):
    if definitions.FORMAT_CATEGORY_ARCHIVE in format_categories:
        cls._archive_remainder_list = None
        cls._archive_scanner = None
        cls._archive_store = None

    if definitions.FORMAT_CATEGORY_COMPRESSED_STREAM in format_categories:
        cls._compressed_stream_remainder_list = None
        cls._compressed_stream_scanner = None
        cls._compressed_stream_store = None

    if definitions.FORMAT_CATEGORY_FILE_SYSTEM in format_categories:
        cls._file_system_remainder_list = None
        cls._file_system_scanner = None
        cls._file_system_store = None

    if definitions.FORMAT_CATEGORY_STORAGE_MEDIA_IMAGE in format_categories:
        cls._storage_media_image_remainder_list = None
        cls._storage_media_image_scanner = None
        cls._storage_media_image_store = None

    if definitions.FORMAT_CATEGORY_VOLUME_SYSTEM in format_categories:
        cls._volume_system_remainder_list = None
        cls._volume_system_scanner = None
        cls._volume_system_store = None
Flushes the cached objects for the specified format categories.

Args:
    format_categories (set[str]): format categories.
def capture(self, event_type, data=None, date=None, time_spent=None,
            extra=None, stack=None, tags=None, sample_rate=None, **kwargs):
    if not self.is_enabled():
        return

    exc_info = kwargs.get('exc_info')
    if exc_info is not None:
        if self.skip_error_for_logging(exc_info):
            return
        elif not self.should_capture(exc_info):
            self.logger.info(
                'Not capturing exception due to filters: %s', exc_info[0],
                exc_info=sys.exc_info())
            return
        self.record_exception_seen(exc_info)

    data = self.build_msg(
        event_type, data, date, time_spent, extra, stack, tags=tags, **kwargs)

    if sample_rate is None:
        sample_rate = self.sample_rate

    if self._random.random() < sample_rate:
        self.send(**data)

    self._local_state.last_event_id = data['event_id']
    return data['event_id']
Captures and processes an event and pipes it off to SentryClient.send.

To use structured data (interfaces) with capture:

>>> capture('raven.events.Message', message='foo', data={
>>>     'request': {
>>>         'url': '...',
>>>         'data': {},
>>>         'query_string': '...',
>>>         'method': 'POST',
>>>     },
>>>     'logger': 'logger.name',
>>> }, extra={
>>>     'key': 'value',
>>> })

The finalized ``data`` structure contains the following (some optional)
builtin values:

>>> {
>>>     # the culprit and version information
>>>     'culprit': 'full.module.name',  # or /arbitrary/path
>>>
>>>     # all detectable installed modules
>>>     'modules': {
>>>         'full.module.name': 'version string',
>>>     },
>>>
>>>     # arbitrary data provided by user
>>>     'extra': {
>>>         'key': 'value',
>>>     }
>>> }

:param event_type: the module path to the Event class. Builtins can use
    shorthand class notation and exclude the full module path.
:param data: the data base, useful for specifying structured data
    interfaces. Any key which contains a '.' will be assumed to be a data
    interface.
:param date: the datetime of this event
:param time_spent: an integer value representing the duration of the event
    (in milliseconds)
:param extra: a dictionary of additional standard metadata
:param stack: a stacktrace for the event
:param tags: dict of extra tags
:param sample_rate: a float in the range [0, 1] to sample this message
:return: a 32-length string identifying this event
def prevmonday(num):
    today = get_today()
    lastmonday = today - timedelta(days=today.weekday(), weeks=num)
    return lastmonday
Return the date of the Monday "num" weeks ago
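The arithmetic works because subtracting ``today.weekday()`` days lands on the current week's Monday, and ``weeks=num`` steps back further. A self-contained sketch (``get_today`` is assumed to return ``date.today()``; an explicit ``today`` parameter is added here purely for testability):

```python
from datetime import date, timedelta

def prevmonday(num, today=None):
    # weekday() is 0 for Monday, so subtracting it lands on this
    # week's Monday; weeks=num then steps back num further weeks.
    today = today or date.today()
    return today - timedelta(days=today.weekday(), weeks=num)

print(prevmonday(0, date(2024, 1, 10)))  # 2024-01-08 (that week's Monday)
print(prevmonday(1, date(2024, 1, 10)))  # 2024-01-01
```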
def _load_physical_network_mappings(self, phys_net_vswitch_mappings):
    for mapping in phys_net_vswitch_mappings:
        parts = mapping.split(':')
        if len(parts) != 2:
            LOG.debug('Invalid physical network mapping: %s', mapping)
        else:
            pattern = re.escape(parts[0].strip()).replace('\\*', '.*')
            pattern = pattern + '$'
            vswitch = parts[1].strip()
            self._physical_network_mappings[pattern] = vswitch
Load all the information regarding the physical network.
def _add_current_quay_tag(repo, container_tags):
    if ':' in repo:
        return repo, container_tags
    try:
        latest_tag = container_tags[repo]
    except KeyError:
        repo_id = repo[repo.find('/') + 1:]
        tags = requests.request(
            "GET", "https://quay.io/api/v1/repository/" + repo_id).json()["tags"]
        latest_tag = None
        latest_modified = None
        for tag, info in tags.items():
            if latest_tag:
                if (dateutil.parser.parse(info['last_modified']) >
                        dateutil.parser.parse(latest_modified) and tag != 'latest'):
                    latest_modified = info['last_modified']
                    latest_tag = tag
            else:
                latest_modified = info['last_modified']
                latest_tag = tag
        container_tags[repo] = str(latest_tag)
    latest_pull = repo + ':' + str(latest_tag)
    return latest_pull, container_tags
Look up the current quay tag for the repository, adding it to the repo string. Enables generation of CWL explicitly tied to revisions.
def monitor_deletion():
    monitors = {}

    def set_deleted(x):
        def _(weakref):
            del monitors[x]
        return _

    def monitor(item, name):
        monitors[name] = ref(item, set_deleted(name))

    def is_alive(name):
        return monitors.get(name, None) is not None

    return monitor, is_alive
Function for checking for correct deletion of weakref-able objects.

Example usage::

    monitor, is_alive = monitor_deletion()
    obj = set()
    monitor(obj, "obj")
    assert is_alive("obj")      # True because there is a ref to `obj`
    del obj
    assert not is_alive("obj")  # True because `obj` is deleted
def splitkeyurl(url):
    key = url[-22:]
    urlid = url[-34:-24]
    service = url[:-43]
    return service, urlid, key
Splits a Send url into key, urlid and 'prefix' for the Send server.

Should handle any hostname, but will break on key & id length changes.
def guess_file_name_stream_type_header(args):
    ftype = None
    fheader = None
    if isinstance(args, (tuple, list)):
        if len(args) == 2:
            fname, fstream = args
        elif len(args) == 3:
            fname, fstream, ftype = args
        else:
            fname, fstream, ftype, fheader = args
    else:
        fname, fstream = guess_filename_stream(args)
        ftype = guess_content_type(fname)

    if isinstance(fstream, (str, bytes, bytearray)):
        fdata = fstream
    else:
        fdata = fstream.read()
    return fname, fdata, ftype, fheader
Guess filename, file stream, file type, file header from args.

:param args: may be a string (filepath), a 2-tuple (filename, fileobj),
    a 3-tuple (filename, fileobj, contentype) or a 4-tuple
    (filename, fileobj, contentype, custom_headers).
:return: filename, file stream, file type, file header
def sortJobs(jobTypes, options):
    longforms = {"med": "median",
                 "ave": "average",
                 "min": "min",
                 "total": "total",
                 "max": "max"}
    sortField = longforms[options.sortField]
    if options.sortCategory in ("time", "clock", "wait", "memory"):
        return sorted(
            jobTypes,
            key=lambda tag: getattr(tag, "%s_%s" % (sortField, options.sortCategory)),
            reverse=options.sortReverse)
    elif options.sortCategory == "alpha":
        return sorted(jobTypes, key=lambda tag: tag.name,
                      reverse=options.sortReverse)
    elif options.sortCategory == "count":
        return sorted(jobTypes, key=lambda tag: tag.total_number,
                      reverse=options.sortReverse)
Return the jobTypes, sorted.
def has_perm(self, user_obj, perm, obj=None):
    if not is_authenticated(user_obj):
        return False
    change_permission = self.get_full_permission_string('change')
    delete_permission = self.get_full_permission_string('delete')
    if obj is None:
        if self.any_permission:
            return True
        if self.change_permission and perm == change_permission:
            return True
        if self.delete_permission and perm == delete_permission:
            return True
        return False
    elif user_obj.is_active:
        if obj == user_obj:
            if self.any_permission:
                return True
            if self.change_permission and perm == change_permission:
                return True
            if self.delete_permission and perm == delete_permission:
                return True
    return False
Check if user has permission of himself

If the user_obj is not authenticated, it returns ``False``.

If no object is specified, it returns ``True`` when the corresponding
permission was specified to ``True`` (changed from v0.7.0).
This behavior is based on the django system.
https://code.djangoproject.com/wiki/RowLevelPermissions

If an object is specified, it will return ``True`` if the object is the
user. So users can change or delete themselves (you can change this
behavior by setting the ``any_permission``, ``change_permission`` or
``delete_permission`` attributes of this instance).

Parameters
----------
user_obj : django user model instance
    A django user model instance which is checked
perm : string
    `app_label.codename` formatted permission string
obj : None or django model instance
    None or django model instance for object permission

Returns
-------
boolean
    Whether the specified user has the specified permission
    (of the specified object).
def choose(formatter, value, name, option, format):
    if not option:
        return
    words = format.split('|')
    num_words = len(words)
    if num_words < 2:
        return
    choices = option.split('|')
    num_choices = len(choices)
    if num_words not in (num_choices, num_choices + 1):
        n = num_choices
        raise ValueError('specify %d or %d choices' % (n, n + 1))
    choice = get_choice(value)
    try:
        index = choices.index(choice)
    except ValueError:
        if num_words == num_choices:
            raise ValueError('no default choice supplied')
        index = -1
    return formatter.format(words[index], value)
Adds simple logic to format strings.

Spec: `{:c[hoose](choice1|choice2|...):word1|word2|...[|default]}`

Example::

    >>> smart.format(u'{num:choose(1|2|3):one|two|three|other}', num=1)
    u'one'
    >>> smart.format(u'{num:choose(1|2|3):one|two|three|other}', num=4)
    u'other'
def authorize(ctx, public_key, append):
    wva = get_wva(ctx)
    http_client = wva.get_http_client()
    authorized_keys_uri = "/files/userfs/WEB/python/.ssh/authorized_keys"
    authorized_key_contents = public_key
    if append:
        try:
            existing_contents = http_client.get(authorized_keys_uri)
            authorized_key_contents = "{}\n{}".format(existing_contents, public_key)
        except WVAHttpNotFoundError:
            pass
    http_client.put(authorized_keys_uri, authorized_key_contents)
    print("Public key written to authorized_keys for python user.")
    print("You should now be able to ssh to the device by doing the following:")
    print("")
    print("    $ ssh python@{}".format(get_root_ctx(ctx).hostname))
Enable ssh login as the Python user for the current user

This command will create an authorized_keys file on the target device
containing the current user's public key. This will allow ssh to the WVA
from this machine.
def output_snapshot_profile(gandi, profile, output_keys, justify=13):
    schedules = 'schedules' in output_keys
    if schedules:
        output_keys.remove('schedules')

    output_generic(gandi, profile, output_keys, justify)

    if schedules:
        schedule_keys = ['name', 'kept_version']
        for schedule in profile['schedules']:
            gandi.separator_line()
            output_generic(gandi, schedule, schedule_keys, justify)
Helper to output a snapshot_profile.
def _check_key(self, key):
    if not len(key) == 2:
        raise TypeError('invalid key: %r' % key)
    elif key[1] not in TYPES:
        raise TypeError('invalid datatype: %s' % key[1])
Ensures well-formedness of a key.
def iter_events(self, public=False, number=-1, etag=None):
    path = ['events']
    if public:
        path.append('public')
    url = self._build_url(*path, base_url=self._api)
    return self._iter(int(number), url, Event, etag=etag)
Iterate over events performed by this user.

:param bool public: (optional), only list public events for the
    authenticated user
:param int number: (optional), number of events to return. Default: -1
    returns all available events.
:param str etag: (optional), ETag from a previous request to the same
    endpoint
:returns: list of :class:`Event <github3.events.Event>`\ s
def wait_until(what, times=-1):
    while times:
        logger.info('Waiting times left %d', times)
        try:
            if what() is True:
                return True
        except Exception:
            logger.exception('Wait failed')
        else:
            logger.warning('Trial[%d] failed', times)
        times -= 1
        time.sleep(1)
    return False
Wait until `what` returns True

Args:
    what (Callable[[], bool]): called again and again until it returns True
    times (int): Maximum times of trials before giving up

Returns:
    True if success, False if times threshold reached
def init(textCNN, vocab, model_mode, context, lr):
    textCNN.initialize(mx.init.Xavier(), ctx=context, force_reinit=True)
    if model_mode != 'rand':
        textCNN.embedding.weight.set_data(vocab.embedding.idx_to_vec)
    if model_mode == 'multichannel':
        textCNN.embedding_extend.weight.set_data(vocab.embedding.idx_to_vec)
    if model_mode == 'static' or model_mode == 'multichannel':
        textCNN.embedding.collect_params().setattr('grad_req', 'null')
    trainer = gluon.Trainer(textCNN.collect_params(), 'adam', {'learning_rate': lr})
    return textCNN, trainer
Initialize parameters.
def item(self):
    return Item(url=self._contentURL,
                securityHandler=self._securityHandler,
                proxy_url=self._proxy_url,
                proxy_port=self._proxy_port,
                initalize=True)
Returns the Item object for this item.
def pending_assignment(self):
    return {
        self.partitions[pid].name: [self.brokers[bid].id for bid in self.replicas[pid]]
        for pid in set(self.pending_partitions)
    }
Return the pending partition assignment that this state represents.
def reset(self):
    self.alive.value = False
    qsize = 0
    try:
        while True:
            self.queue.get(timeout=0.1)
            qsize += 1
    except QEmptyExcept:
        pass
    print("Queue size on reset: {}".format(qsize))
    for i, p in enumerate(self.proc):
        p.join()
    self.proc.clear()
Resets the generator by stopping all processes
def _make_property_from_dict(self, property_def: Dict) -> Property:
    property_hash = hash_dump(property_def)
    edge_property_model = self.object_cache_property.get(property_hash)
    if edge_property_model is None:
        edge_property_model = self.get_property_by_hash(property_hash)
        if not edge_property_model:
            property_def['sha512'] = property_hash
            edge_property_model = Property(**property_def)
        self.object_cache_property[property_hash] = edge_property_model
    return edge_property_model
Build an edge property from a dictionary.
def del_node(self, name, index=None):
    if isinstance(name, Node):
        name = name.get_name()

    if name in self.obj_dict['nodes']:
        if index is not None and index < len(self.obj_dict['nodes'][name]):
            del self.obj_dict['nodes'][name][index]
            return True
        else:
            del self.obj_dict['nodes'][name]
            return True
    return False
Delete a node from the graph.

Given a node's name, all node(s) with that same name will be deleted
if 'index' is not specified or set to None.
If there are several nodes with that same name and 'index' is given,
only the node in that position will be deleted.

'index' should be an integer specifying the position of the node to delete.
If index is larger than the number of nodes with that name, no action is taken.

If nodes are deleted it returns True. If no action is taken it returns False.
def server_args(self, parsed_args):
    args = {
        arg: value for arg, value in vars(parsed_args).items()
        if not arg.startswith('_') and value is not None
    }
    args.update(vars(self))
    return args
Return keyword args for Server class.
def monitor(self, message, *args, **kws):
    if self.isEnabledFor(MON):
        self._log(MON, message, args, **kws)
Define a monitoring logger that will be added to Logger

:param self: The logging object
:param message: The logging message
:param args: Positional arguments
:param kws: Keyword arguments
:return:
def create_ecdsap256_key_pair():
    pub = ECDSAP256PublicKey()
    priv = ECDSAP256PrivateKey()
    rc = _lib.xtt_crypto_create_ecdsap256_key_pair(pub.native, priv.native)
    if rc == RC.SUCCESS:
        return (pub, priv)
    else:
        raise error_from_code(rc)
Create a new ECDSAP256 key pair.

:returns: a tuple of the public and private keys
def _distance_squared(self, p2: "Point2") -> Union[int, float]:
    return (self[0] - p2[0]) ** 2 + (self[1] - p2[1]) ** 2
Returns the squared distance, skipping the square root: squared distances
keep the same relative ordering as true distances, which speeds up sorting.
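Because squaring is monotone for non-negative values, sorting by squared distance gives the same order as sorting by true distance. A sketch with plain (x, y) tuples standing in for the original Point2 class:

```python
import math

def distance_squared(p1, p2):
    # No sqrt: monotone in true distance, so ordering is preserved.
    return (p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2

points = [(3, 4), (1, 1), (0, 2)]
origin = (0, 0)
by_sq = sorted(points, key=lambda p: distance_squared(origin, p))
by_true = sorted(points, key=lambda p: math.dist(origin, p))
print(by_sq == by_true)  # True: same order without any sqrt calls
```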
def offsets(self, group=None):
    if not group:
        return {
            'fetch': self.offsets('fetch'),
            'commit': self.offsets('commit'),
            'task_done': self.offsets('task_done'),
            'highwater': self.offsets('highwater'),
        }
    else:
        return dict(deepcopy(getattr(self._offsets, group)))
Get internal consumer offset values

Keyword Arguments:
    group: Either "fetch", "commit", "task_done", or "highwater".
        If no group specified, returns all groups.

Returns:
    A copy of internal offsets struct
def ufo_create_background_layer_for_all_glyphs(ufo_font):
    if "public.background" in ufo_font.layers:
        background = ufo_font.layers["public.background"]
    else:
        background = ufo_font.newLayer("public.background")
    for glyph in ufo_font:
        if glyph.name not in background:
            background.newGlyph(glyph.name)
Create a background layer for all glyphs in ufo_font, if not present, to reduce roundtrip differences.
def get(self, key, filepath):
    if not filepath:
        raise RuntimeError("Configuration file not given")
    if not self.__check_config_key(key):
        raise RuntimeError("%s parameter does not exist" % key)
    if not os.path.isfile(filepath):
        raise RuntimeError("%s config file does not exist" % filepath)

    section, option = key.split('.')

    config = configparser.SafeConfigParser()
    config.read(filepath)
    try:
        option = config.get(section, option)
        self.display('config.tmpl', key=key, option=option)
    except (configparser.NoSectionError, configparser.NoOptionError):
        pass
    return CMD_SUCCESS
Get configuration parameter. Reads 'key' configuration parameter from the configuration file given in 'filepath'. Configuration parameter in 'key' must follow the schema <section>.<option> . :param key: key to get :param filepath: configuration file
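A self-contained sketch of the `<section>.<option>` lookup scheme, reading from a string via `ConfigParser.read_string` instead of a file on disk (`get_option` and the sample config are hypothetical, not part of the original class):

```python
import configparser

def get_option(key, config_text):
    # Keys follow the <section>.<option> scheme, e.g. "db.host".
    section, option = key.split('.')
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    try:
        return parser.get(section, option)
    except (configparser.NoSectionError, configparser.NoOptionError):
        # Mirror the original's behavior of silently skipping missing keys.
        return None

cfg = "[db]\nhost = localhost\n"
assert get_option('db.host', cfg) == 'localhost'
assert get_option('db.port', cfg) is None
```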
def compare(ctx, commands): mp_pool = multiprocessing.Pool(multiprocessing.cpu_count() * 2) for ip in ctx.obj['hosts']: mp_pool.apply_async(wrap.open_connection, args=(ip, ctx.obj['conn']['username'], ctx.obj['conn']['password'], wrap.compare, [commands], ctx.obj['out'], ctx.obj['conn']['connect_timeout'], ctx.obj['conn']['session_timeout'], ctx.obj['conn']['port']), callback=write_out) mp_pool.close() mp_pool.join()
Run 'show | compare' for set commands. @param ctx: The click context parameter, for receiving the object dictionary | being manipulated by previous functions. Needed by any | function with the @click.pass_context decorator. @type ctx: click.Context @param commands: The Junos set commands that will be put into a candidate | configuration and used to create the 'show | compare' | against the running configuration. Much like the commands | parameter for the commit() function, this can be one of | three things: a string containing a single command, a | string containing a comma separated list of commands, or | a string containing a filepath location for a file with | commands on each line. @type commands: str @returns: None. Functions part of click relating to the command group | 'main' do not return anything. Click handles passing context | between the functions and maintaining command order and chaining.
def gill_king(mat, eps=1e-16): if not scipy.sparse.issparse(mat): mat = numpy.asfarray(mat) assert numpy.allclose(mat, mat.T) size = mat.shape[0] mat_diag = mat.diagonal() gamma = abs(mat_diag).max() off_diag = abs(mat - numpy.diag(mat_diag)).max() delta = eps*max(gamma + off_diag, 1) beta = numpy.sqrt(max(gamma, off_diag/size, eps)) lowtri = _gill_king(mat, beta, delta) return lowtri
Gill-King algorithm for modified cholesky decomposition. Args: mat (numpy.ndarray): Must be a non-singular and symmetric matrix. If sparse, the result will also be sparse. eps (float): Error tolerance used in algorithm. Returns: (numpy.ndarray): Lower triangular Cholesky factor. Examples: >>> mat = [[4, 2, 1], [2, 6, 3], [1, 3, -.004]] >>> lowtri = gill_king(mat) >>> print(numpy.around(lowtri, 4)) [[2. 0. 0. ] [1. 2.2361 0. ] [0.5 1.118 1.2264]] >>> print(numpy.around(numpy.dot(lowtri, lowtri.T), 4)) [[4. 2. 1. ] [2. 6. 3. ] [1. 3. 3.004]]
def info_section(*tokens: Token, **kwargs: Any) -> None: process_tokens_kwargs = kwargs.copy() process_tokens_kwargs["color"] = False no_color = _process_tokens(tokens, **process_tokens_kwargs) info(*tokens, **kwargs) info("-" * len(no_color), end="\n\n")
Print an underlined section name
def generate_blob(self, container_name, blob_name, permission=None, expiry=None, start=None, id=None, ip=None, protocol=None, cache_control=None, content_disposition=None, content_encoding=None, content_language=None, content_type=None): resource_path = container_name + '/' + blob_name sas = _SharedAccessHelper() sas.add_base(permission, expiry, start, ip, protocol) sas.add_id(id) sas.add_resource('b') sas.add_override_response_headers(cache_control, content_disposition, content_encoding, content_language, content_type) sas.add_resource_signature(self.account_name, self.account_key, 'blob', resource_path) return sas.get_token()
Generates a shared access signature for the blob. Use the returned signature with the sas_token parameter of any BlobService. :param str container_name: Name of container. :param str blob_name: Name of blob. :param BlobPermissions permission: The permissions associated with the shared access signature. The user is restricted to operations allowed by the permissions. Permissions must be ordered read, write, delete, list. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy. :param expiry: The time at which the shared access signature becomes invalid. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC. :type expiry: date or str :param start: The time at which the shared access signature becomes valid. If omitted, start time for this call is assumed to be the time when the storage service receives the request. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC. :type start: date or str :param str id: A unique value up to 64 characters in length that correlates to a stored access policy. To create a stored access policy, use set_blob_service_properties. :param str ip: Specifies an IP address or a range of IP addresses from which to accept requests. If the IP address from which the request originates does not match the IP address or address range specified on the SAS token, the request is not authenticated. For example, specifying sip=168.1.5.65 or sip=168.1.5.60-168.1.5.70 on the SAS restricts the request to those IP addresses. :param str protocol: Specifies the protocol permitted for a request made. The default value is https,http. See :class:`~azure.storage.models.Protocol` for possible values. :param str cache_control: Response header value for Cache-Control when resource is accessed using this shared access signature. :param str content_disposition: Response header value for Content-Disposition when resource is accessed using this shared access signature. :param str content_encoding: Response header value for Content-Encoding when resource is accessed using this shared access signature. :param str content_language: Response header value for Content-Language when resource is accessed using this shared access signature. :param str content_type: Response header value for Content-Type when resource is accessed using this shared access signature.
def _get_link(element: Element) -> Optional[str]: link = get_text(element, 'link') if link is not None: return link guid = get_child(element, 'guid') if guid is not None and guid.attrib.get('isPermaLink') == 'true': return get_text(element, 'guid') return None
Attempt to retrieve item link. Use the GUID as a fallback if it is a permalink.
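A self-contained sketch of the same link/guid fallback, using the stdlib ElementTree API directly instead of the module's own get_text/get_child helpers (the sample item is made up):

```python
import xml.etree.ElementTree as ET

def get_link(item):
    # Prefer <link>; fall back to <guid> only when it is a permalink.
    link = item.findtext('link')
    if link is not None:
        return link
    guid = item.find('guid')
    if guid is not None and guid.attrib.get('isPermaLink') == 'true':
        return guid.text
    return None

item = ET.fromstring(
    '<item><guid isPermaLink="true">https://example.com/post/1</guid></item>')
assert get_link(item) == 'https://example.com/post/1'
```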
def modify_cache_parameter_group(name, region=None, key=None, keyid=None, profile=None, **args): args = dict([(k, v) for k, v in args.items() if not k.startswith('_')]) try: Params = args['ParameterNameValues'] except KeyError: raise SaltInvocationError('Invalid `ParameterNameValues` structure passed.') while Params: args.update({'ParameterNameValues': Params[:20]}) Params = Params[20:] if not _modify_resource(name, name_param='CacheParameterGroupName', desc='cache parameter group', res_type='cache_parameter_group', region=region, key=key, keyid=keyid, profile=profile, **args): return False return True
Update a cache parameter group in place. Note that due to a design limitation in AWS, this function is not atomic -- a maximum of 20 params may be modified in one underlying boto call. This means that if more than 20 params need to be changed, the update is performed in blocks of 20, which in turn means that if a later sub-call fails after an earlier one has succeeded, the overall update will be left partially applied. CacheParameterGroupName The name of the cache parameter group to modify. ParameterNameValues A [list] of {dicts}, each composed of a parameter name and a value, for the parameter update. At least one parameter/value pair is required. .. code-block:: yaml ParameterNameValues: - ParameterName: timeout # Amazon requires ALL VALUES to be strings... ParameterValue: "30" - ParameterName: appendonly # The YAML parser will turn a bare `yes` into a bool, which Amazon will then throw on... ParameterValue: "yes" Example: .. code-block:: bash salt myminion boto3_elasticache.modify_cache_parameter_group \ CacheParameterGroupName=myParamGroup \ ParameterNameValues='[ { ParameterName: timeout, ParameterValue: "30" }, { ParameterName: appendonly, ParameterValue: "yes" } ]'
def _assert_transition(self, event): state = self.domain.state()[0] if event not in STATES_MAP[state]: raise RuntimeError("State transition %s not allowed" % event)
Asserts the state transition validity.
def _unpack_bitmap(bitmap, xenum): unpacked = set() for enval in xenum: if enval.value & bitmap == enval.value: unpacked.add(enval) return unpacked
Given an integer bitmap and an enumerated type, build a set that includes zero or more enumerated type values corresponding to the bitmap.
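The bitmap-unpacking idea in runnable form; the `Perm` enum here is a made-up example, not from the original codebase:

```python
from enum import IntEnum

class Perm(IntEnum):
    READ = 1
    WRITE = 2
    EXEC = 4

def unpack_bitmap(bitmap, xenum):
    # Keep every enum value whose bits are all set in the bitmap.
    return {enval for enval in xenum if enval.value & bitmap == enval.value}

assert unpack_bitmap(0b101, Perm) == {Perm.READ, Perm.EXEC}
```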
def equivalent_vertices(self): level1 = {} for i, row in enumerate(self.vertex_fingerprints): key = row.tobytes() l = level1.get(key) if l is None: l = set([i]) level1[key] = l else: l.add(i) level2 = {} for key, vertices in level1.items(): for vertex in vertices: level2[vertex] = vertices return level2
A dictionary with symmetrically equivalent vertices.
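A pure-Python sketch of the two-level grouping (the original keys rows by numpy's `tobytes()`; plain bytes objects stand in here). Every vertex ends up mapping to the shared set of all vertices with an identical fingerprint:

```python
def equivalent_vertices(fingerprints):
    # First level: group vertex indices by identical fingerprint.
    groups = {}
    for i, row in enumerate(fingerprints):
        groups.setdefault(bytes(row), set()).add(i)
    # Second level: map every vertex to its (shared) equivalence set.
    return {v: vs for vs in groups.values() for v in vs}

equiv = equivalent_vertices([b'a', b'b', b'a'])
assert equiv[0] == equiv[2] == {0, 2}
assert equiv[1] == {1}
```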
def _repr_html_(self, **kwargs): if self._parent is None: self.add_to(Figure()) out = self._parent._repr_html_(**kwargs) self._parent = None else: out = self._parent._repr_html_(**kwargs) return out
Displays the HTML Map in a Jupyter notebook.
def help(self, print_output=True): help_text = self._rpc('help') if print_output: print(help_text) else: return help_text
Calls the help RPC, which returns the list of RPC calls available. This RPC should normally be used in an interactive console environment where the output should be printed instead of returned. Otherwise, newlines will be escaped, which will make the output difficult to read. Args: print_output: A bool for whether the output should be printed. Returns: A str containing the help output if print_output is False; otherwise None.
def load_preset(self): if 'preset' in self.settings.preview: with open(os.path.join(os.path.dirname(__file__), 'presets.yaml')) as f: presets = yaml.safe_load(f) if self.settings.preview['preset'] in presets: self.preset = presets[self.settings.preview['preset']] return self.preset
Loads preset if it is specified in the .frigg.yml
def acquire(self): while self.size() == 0 or self.size() < self._minsize: _conn = yield from self._create_new_conn() if _conn is None: break self._pool.put_nowait(_conn) conn = None while not conn: _conn = yield from self._pool.get() if _conn.reader.at_eof() or _conn.reader.exception(): self._do_close(_conn) conn = yield from self._create_new_conn() else: conn = _conn self._in_use.add(conn) return conn
Acquire a connection from the pool, or spawn a new one if the pool maxsize permits. :return: an open connection object
def _gen_condition(cls, initial, new_public_keys): try: threshold = len(new_public_keys) except TypeError: threshold = None if isinstance(new_public_keys, list) and len(new_public_keys) > 1: ffill = ThresholdSha256(threshold=threshold) reduce(cls._gen_condition, new_public_keys, ffill) elif isinstance(new_public_keys, list) and len(new_public_keys) <= 1: raise ValueError('Sublist cannot contain single owner') else: try: new_public_keys = new_public_keys.pop() except AttributeError: pass if isinstance(new_public_keys, Fulfillment): ffill = new_public_keys else: ffill = Ed25519Sha256( public_key=base58.b58decode(new_public_keys)) initial.add_subfulfillment(ffill) return initial
Generates ThresholdSha256 conditions from a list of new owners. Note: This method is intended only to be used with a reduce function. For a description on how to use this method, see :meth:`~.Output.generate`. Args: initial (:class:`cryptoconditions.ThresholdSha256`): A Condition representing the overall root. new_public_keys (:obj:`list` of :obj:`str`|str): A list of new owners or a single new owner. Returns: :class:`cryptoconditions.ThresholdSha256`:
def swarm_init(advertise_addr=str, listen_addr=int, force_new_cluster=bool): try: salt_return = {} __context__['client'].swarm.init(advertise_addr, listen_addr, force_new_cluster) output = 'Docker swarm has been initialized on {0} ' \ 'and the worker/manager Join token is below'.format(__context__['server_name']) salt_return.update({'Comment': output, 'Tokens': swarm_tokens()}) except TypeError: salt_return = {} salt_return.update({'Error': 'Please make sure you are passing advertise_addr, ' 'listen_addr and force_new_cluster correctly.'}) return salt_return
Initialize Docker on Minion as a Swarm Manager advertise_addr The ip of the manager listen_addr Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form 192.168.1.1:4567, or an interface followed by a port number, like eth0:4567 force_new_cluster Force a new cluster if True is passed CLI Example: .. code-block:: bash salt '*' swarm.swarm_init advertise_addr='192.168.50.10' listen_addr='0.0.0.0' force_new_cluster=False
def aloha_to_etree(html_source): xml = _tidy2xhtml5(html_source) for i, transform in enumerate(ALOHA2HTML_TRANSFORM_PIPELINE): xml = transform(xml) return xml
Converts HTML5 from Aloha editor output to an lxml etree.
def get_name(model_id): name = _names.get(model_id) if name is None: name = 'id = %s (no name)' % str(model_id) return name
Get the name for a model. :returns str: The model's name. If the id has no associated name, then "id = {ID} (no name)" is returned.
def splitpath(path): c = [] head, tail = os.path.split(path) while tail: c.insert(0, tail) head, tail = os.path.split(head) return c
Split a path into its components.
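Usage sketch: repeated `os.path.split` peels off the last component until nothing is left. Note that for absolute paths the filesystem root itself is dropped:

```python
import os

def splitpath(path):
    parts = []
    head, tail = os.path.split(path)
    while tail:
        parts.insert(0, tail)
        head, tail = os.path.split(head)
    return parts

# Relative paths round-trip cleanly; the root of an absolute path is dropped.
assert splitpath(os.path.join('usr', 'local', 'bin')) == ['usr', 'local', 'bin']
```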
def rel_path(filename): return os.path.join(os.getcwd(), os.path.dirname(__file__), filename)
Get the path to ``filename`` relative to this module's directory.
def collect(self, order_ref): try: out = self.client.service.Collect(order_ref) except Error as e: raise get_error_class(e, "Could not complete Collect call.") return self._dictify(out)
Collect the progress status of the order with the specified order reference. :param order_ref: The UUID string specifying which order to collect status from. :type order_ref: str :return: The CollectResponse parsed to a dictionary. :rtype: dict :raises BankIDError: raises a subclass of this error when error has been returned from server.
def _update_rr_ce_entry(self, rec): if rec.rock_ridge is not None and rec.rock_ridge.dr_entries.ce_record is not None: celen = rec.rock_ridge.dr_entries.ce_record.len_cont_area added_block, block, offset = self.pvd.add_rr_ce_entry(celen) rec.rock_ridge.update_ce_block(block) rec.rock_ridge.dr_entries.ce_record.update_offset(offset) if added_block: return self.pvd.logical_block_size() return 0
An internal method to update the Rock Ridge CE entry for the given record. Parameters: rec - The record to update the Rock Ridge CE entry for (if it exists). Returns: The number of additional bytes needed for this Rock Ridge CE entry.
def pipeline(steps, initial=None): def apply(result, step): return step(result) return reduce(apply, steps, initial)
Chain results from a list of functions. Inverted reduce. :param (function) steps: List of function callbacks :param initial: Starting value for pipeline.
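The inverted-reduce pipeline in runnable form; the lambda steps below are illustrative only:

```python
from functools import reduce

def pipeline(steps, initial=None):
    # Thread the running result through each step in order.
    return reduce(lambda result, step: step(result), steps, initial)

# (3 + 1) * 2 -> 8, then stringified.
assert pipeline([lambda x: x + 1, lambda x: x * 2, str], 3) == '8'
assert pipeline([], 'seed') == 'seed'
```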
def resume_instance(self, paused_info): if not paused_info.get("instance_id"): log.info("Instance to resume has no instance id.") return gce = self._connect() try: request = gce.instances().start(project=self._project_id, instance=paused_info["instance_id"], zone=self._zone) operation = self._execute_request(request) response = self._wait_until_done(operation) self._check_response(response) return except HttpError as e: log.error("Error restarting instance: `%s", e) raise InstanceError("Error restarting instance `%s`", e)
Restarts a paused instance, retaining disk and config. :param dict paused_info: dict holding the 'instance_id' of the instance to resume, as returned when the instance was paused. :raises: `InstanceError` if instance cannot be resumed. :return: None
def start_worker(self): if not self.include_rq: return None worker = Worker(queues=self.queues, connection=self.connection) worker_pid_path = current_app.config.get( "{}_WORKER_PID".format(self.config_prefix), 'rl_worker.pid' ) try: worker_pid_file = open(worker_pid_path, 'r') worker_pid = int(worker_pid_file.read()) print("Worker already started with PID=%d" % worker_pid) worker_pid_file.close() return worker_pid except (IOError, TypeError): self.worker_process = Process(target=worker_wrapper, kwargs={ 'worker_instance': worker, 'pid_path': worker_pid_path }) self.worker_process.start() worker_pid_file = open(worker_pid_path, 'w') worker_pid_file.write("%d" % self.worker_process.pid) worker_pid_file.close() print("Start a worker process with PID=%d" % self.worker_process.pid) return self.worker_process.pid
Trigger new process as a RQ worker.
def push_pq(self, tokens): logger.debug("Pushing PQ data: %s" % tokens) bus = self.case.buses[tokens["bus_no"] - 1] bus.p_demand = tokens["p"] bus.q_demand = tokens["q"]
Parses PQ load data from the tokens, finds the corresponding Bus, and sets its active and reactive power demand.
def generate(self, x, **kwargs): assert self.parse_params(**kwargs) labels, _nb_classes = self.get_or_guess_labels(x, kwargs) return fgm( x, self.model.get_logits(x), y=labels, eps=self.eps, ord=self.ord, clip_min=self.clip_min, clip_max=self.clip_max, targeted=(self.y_target is not None), sanity_checks=self.sanity_checks)
Returns the graph for Fast Gradient Method adversarial examples. :param x: The model's symbolic inputs. :param kwargs: See `parse_params`
def dependencies(self, deps_dict): try: import pygraphviz as pgv except ImportError: graph_easy, comma = "", "" if (self.image == "ascii" and not os.path.isfile("/usr/bin/graph-easy")): comma = "," graph_easy = " graph-easy" print("Require 'pygraphviz{0}{1}': Install with 'slpkg -s sbo " "pygraphviz{1}'".format(comma, graph_easy)) raise SystemExit() if self.image != "ascii": self.check_file() try: G = pgv.AGraph(deps_dict) G.layout(prog="fdp") if self.image == "ascii": G.write("{0}.dot".format(self.image)) self.graph_easy() G.draw(self.image) except IOError: raise SystemExit() if os.path.isfile(self.image): print("Graph image file '{0}' created".format(self.image)) raise SystemExit()
Generate a graph image file with the dependencies map tree.
def save_spectre_plot(self, filename="spectre.pdf", img_format="pdf", sigma=0.05, step=0.01): d, plt = self.get_spectre_plot(sigma, step) plt.savefig(filename, format=img_format)
Save matplotlib plot of the spectre to a file. Args: filename: Filename to write to. img_format: Image format to use. Defaults to pdf. sigma: Full width at half maximum in eV for normal functions. step: bin interval in eV
def download_results(self, savedir=None, raw=True, calib=False, index=None): obsids = self.obsids if index is None else [self.obsids[index]] for obsid in obsids: pm = io.PathManager(obsid.img_id, savedir=savedir) pm.basepath.mkdir(exist_ok=True) to_download = [] if raw is True: to_download.extend(obsid.raw_urls) if calib is True: to_download.extend(obsid.calib_urls) for url in to_download: basename = Path(url).name print("Downloading", basename) store_path = str(pm.basepath / basename) try: urlretrieve(url, store_path) except Exception as e: urlretrieve(url.replace("https", "http"), store_path) return str(pm.basepath)
Download the previously found and stored Opus obsids. Parameters ========== savedir: str or pathlib.Path, optional If the database root folder as defined by the config.ini should not be used, provide a different savedir here. It will be handed to PathManager. raw: bool, optional Download the raw data products. Default: True. calib: bool, optional Download the calibrated data products. Default: False. index: int, optional If given, only download the obsid at this index of the stored list instead of all of them.
def update_floatingip(floatingip_id, port=None, profile=None): conn = _auth(profile) return conn.update_floatingip(floatingip_id, port)
Updates a floatingIP CLI Example: .. code-block:: bash salt '*' neutron.update_floatingip floatingip-id port-name :param floatingip_id: ID of floatingIP :param port: ID or name of port, to associate floatingip to `None` or do not specify to disassociate the floatingip (Optional) :param profile: Profile to build on (Optional) :return: Value of updated floating IP information
def chdir(path: str) -> Iterator[None]: curdir = os.getcwd() os.chdir(path) try: yield finally: os.chdir(curdir)
Context manager for changing dir and restoring previous workdir after exit.
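A runnable sketch of the context manager, exercised against a temporary directory (`realpath` is compared because `TemporaryDirectory` may hand back a symlinked path on some platforms):

```python
import os
from contextlib import contextmanager
from tempfile import TemporaryDirectory
from typing import Iterator

@contextmanager
def chdir(path: str) -> Iterator[None]:
    # Remember the current directory, switch, and always switch back.
    curdir = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(curdir)

before = os.getcwd()
with TemporaryDirectory() as tmp:
    with chdir(tmp):
        assert os.getcwd() == os.path.realpath(tmp)
assert os.getcwd() == before
```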
def set_control_output(self, name: str, value: float, *, options: dict=None) -> None: self.__instrument.set_control_output(name, value, options)
Set the value of a control asynchronously. :param name: The name of the control (string). :param value: The control value (float). :param options: A dict of custom options to pass to the instrument for setting the value. Options are: value_type: local, delta, output. output is default. confirm, confirm_tolerance_factor, confirm_timeout: confirm value gets set. inform: True to keep dependent control outputs constant by adjusting their internal values. False is default. Default value of confirm is False. Default confirm_tolerance_factor is 1.0. A value of 1.0 is the nominal tolerance for that control. Passing a higher tolerance factor (for example 1.5) will increase the permitted error margin and passing lower tolerance factor (for example 0.5) will decrease the permitted error margin and consequently make a timeout more likely. The tolerance factor value 0.0 is a special value which removes all checking and only waits for any change at all and then returns. Default confirm_timeout is 16.0 (seconds). Raises exception if control with name doesn't exist. Raises TimeoutException if confirm is True and timeout occurs. .. versionadded:: 1.0 Scriptable: Yes
def calc_qma_v1(self): der = self.parameters.derived.fastaccess flu = self.sequences.fluxes.fastaccess log = self.sequences.logs.fastaccess for idx in range(der.nmb): flu.qma[idx] = 0. for jdx in range(der.ma_order[idx]): flu.qma[idx] += der.ma_coefs[idx, jdx] * log.login[idx, jdx]
Calculate the discharge responses of the different MA processes. Required derived parameters: |Nmb| |MA_Order| |MA_Coefs| Required log sequence: |LogIn| Calculated flux sequence: |QMA| Examples: Assume there are three response functions, involving one, two and three MA coefficients respectively: >>> from hydpy.models.arma import * >>> parameterstep() >>> derived.nmb(3) >>> derived.ma_order.shape = 3 >>> derived.ma_order = 1, 2, 3 >>> derived.ma_coefs.shape = (3, 3) >>> logs.login.shape = (3, 3) >>> fluxes.qma.shape = 3 The coefficients of the different MA processes are stored in separate rows of the 2-dimensional parameter `ma_coefs`: >>> derived.ma_coefs = ((1.0, nan, nan), ... (0.8, 0.2, nan), ... (0.5, 0.3, 0.2)) The "memory values" of the different MA processes are defined as follows (one row for each process). The current values are stored in first column, the values of the last time step in the second column, and so on: >>> logs.login = ((1.0, nan, nan), ... (2.0, 3.0, nan), ... (4.0, 5.0, 6.0)) Applying method |calc_qma_v1| is equivalent to calculating the inner product of the different rows of both matrices: >>> model.calc_qma_v1() >>> fluxes.qma qma(1.0, 2.2, 4.7)
def pout(*args, **kwargs): if should_msg(kwargs.get("groups", ["normal"])): args = indent_text(*args, **kwargs) sys.stderr.write("".join(args)) sys.stderr.write("\n")
Print to stderr, maintaining indent level.
def email_report(job, univ_options): fromadd = "results@protect.cgl.genomics.ucsc.edu" msg = MIMEMultipart() msg['From'] = fromadd if univ_options['mail_to'] is None: return else: msg['To'] = univ_options['mail_to'] msg['Subject'] = "Protect run for sample %s completed successfully." % univ_options['patient'] body = "Protect run for sample %s completed successfully." % univ_options['patient'] msg.attach(MIMEText(body, 'plain')) text = msg.as_string() try: server = smtplib.SMTP('localhost') except socket.error as e: if e.errno == 111: print('No mail utils on this machine') else: print('Unexpected error while attempting to send an email.') print('Could not send email report') except Exception: print('Could not send email report') else: server.sendmail(fromadd, msg['To'], text) server.quit()
Send an email to the user when the run finishes. :param dict univ_options: Dict of universal options used by almost all tools
def get_position_p(self): data = [] data.append(0x09) data.append(self.servoid) data.append(RAM_READ_REQ) data.append(POSITION_KP_RAM) data.append(BYTE2) send_data(data) rxdata = [] try: rxdata = SERPORT.read(13) return (ord(rxdata[10])*256)+(ord(rxdata[9])&0xff) except HerkulexError: raise HerkulexError("could not communicate with motors")
Get the P value of the current PID for position
def _gamma_difference_hrf(tr, oversampling=50, time_length=32., onset=0., delay=6, undershoot=16., dispersion=1., u_dispersion=1., ratio=0.167): from scipy.stats import gamma dt = tr / oversampling time_stamps = np.linspace(0, time_length, np.rint(float(time_length) / dt).astype(np.int)) time_stamps -= onset hrf = gamma.pdf(time_stamps, delay / dispersion, dt / dispersion) -\ ratio * gamma.pdf( time_stamps, undershoot / u_dispersion, dt / u_dispersion) hrf /= hrf.sum() return hrf
Compute an hrf as the difference of two gamma functions Parameters ---------- tr : float scan repeat time, in seconds oversampling : int, optional (default=50) temporal oversampling factor time_length : float, optional (default=32) hrf kernel length, in seconds onset: float onset time of the hrf delay: float, optional delay parameter of the hrf (in s.) undershoot: float, optional undershoot parameter of the hrf (in s.) dispersion : float, optional dispersion parameter for the first gamma function u_dispersion : float, optional dispersion parameter for the second gamma function ratio : float, optional ratio of the two gamma components Returns ------- hrf : array of shape(length / tr * oversampling, dtype=float) hrf sampling on the oversampled time grid
def get_feedback(self, block = True, timeout = None): if self._feedback_greenlet is None: self._feedback_greenlet = gevent.spawn(self._feedback_loop) return self._feedback_queue.get(block = block, timeout = timeout)
Gets the next feedback message. Each feedback message is a 2-tuple of (timestamp, device_token).
def delete_cookie(self, cookie_name=None): if cookie_name is None: cookie_name = self.default_value['name'] return self.create_cookie("", "", cookie_name=cookie_name, kill=True)
Create a cookie that will immediately expire when it hits the other side. :param cookie_name: Name of the cookie :return: A tuple to be added to headers
def randomize(self, period=None): if period is not None: self.period = period perm = list(range(self.period)) perm_right = self.period - 1 for i in list(perm): j = self.randint_function(0, perm_right) perm[i], perm[j] = perm[j], perm[i] self.permutation = tuple(perm) * 2
Randomize the permutation table used by the noise functions. This makes them generate a different noise pattern for the same inputs.
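A standalone sketch of the table randomization (`randomize` here is a free function rather than a method, and defaults to `random.randint`). Note the loop draws j over the full range on every step, matching the original's naive shuffle rather than a strict Fisher-Yates:

```python
import random

def randomize(period, randint_function=random.randint):
    # Shuffle [0, period), then double the table so indices can wrap
    # past `period` without an extra modulo in the noise lookup.
    perm = list(range(period))
    right = period - 1
    for i in range(period):
        j = randint_function(0, right)
        perm[i], perm[j] = perm[j], perm[i]
    return tuple(perm) * 2

table = randomize(8)
assert sorted(table[:8]) == list(range(8))  # still a permutation
assert table[:8] == table[8:]               # doubled for wraparound
```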
def rewrite_autodoc(app, what, name, obj, options, lines): try: lines[:] = parse_cartouche_text(lines) except CartoucheSyntaxError as syntax_error: args = syntax_error.args arg0 = args[0] if args else '' arg0 += " in docstring for {what} {name} :".format(what=what, name=name) arg0 += "\n=== BEGIN DOCSTRING ===\n{lines}\n=== END DOCSTRING ===\n".format(lines='\n'.join(lines)) syntax_error.args = (arg0,) + args[1:] raise
Convert lines from Cartouche to Sphinx format. The function to be called by the Sphinx autodoc extension when autodoc has read and processed a docstring. This function modifies its ``lines`` argument *in place*, converting Cartouche syntax input into Sphinx reStructuredText output. Args: app: The Sphinx application object. what: The type of object which the docstring belongs to. One of 'module', 'class', 'exception', 'function', 'method', 'attribute' name: The fully qualified name of the object. obj: The object itself. options: The options given to the directive. An object with attributes ``inherited_members``, ``undoc_members``, ``show_inheritance`` and ``noindex`` that are ``True`` if the flag option of the same name was given to the auto directive. lines: The lines of the docstring. Will be modified *in place*. Raises: CartoucheSyntaxError: If the docstring is malformed.