def _get_vm_device_status(self, device='FLOPPY'):
    valid_devices = {'FLOPPY': 'floppy', 'CDROM': 'cd'}
    if device not in valid_devices:
        raise exception.IloInvalidInputError(
            "Invalid device. Valid devices: FLOPPY or CDROM.")
    manager, uri = self._get_ilo_details()
    try:
        vmedia_uri = manager['links']['VirtualMedia']['href']
    except KeyError:
        msg = '"VirtualMedia" section in Manager/links does not exist'
        raise exception.IloCommandNotSupportedError(msg)
    for status, hds, vmed, memberuri in self._get_collection(vmedia_uri):
        status, headers, response = self._rest_get(memberuri)
        if status != 200:
            msg = self._get_extended_error(response)
            raise exception.IloError(msg)
        if (valid_devices[device] in
                [item.lower() for item in response['MediaTypes']]):
            vm_device_uri = response['links']['self']['href']
            return response, vm_device_uri
    msg = ('Virtualmedia device "' + device + '" is not'
           ' found on this system.')
    raise exception.IloError(msg)

Returns the given virtual media device status and device URI.

:param device: virtual media device to be queried
:returns: JSON-format virtual media device status and its URI
:raises: IloError, on an error from iLO.
:raises: IloCommandNotSupportedError, if the command is not supported
    on the server.
def list(self, limit=None, offset=None):
    uri = "/%s%s" % (self.uri_base, self._get_pagination_qs(limit, offset))
    return self._list(uri)

Gets a list of all domains, or optionally a page of domains.
def toggle_attr(self, attr):
    selection = self.grid.selection
    if selection:
        value = self.get_new_selection_attr_state(selection, attr)
    else:
        value = self.get_new_cell_attr_state(self.grid.actions.cursor, attr)
    self.set_attr(attr, value)

Toggles the attribute attr for the current selection.
def normalize_name(s):
    # NFKD-decompose, then drop any bytes that cannot be encoded as
    # ASCII (e.g. combining accent marks).
    s = unicodedata.normalize('NFKD', s).encode('ascii', 'ignore')
    # Strip the b'...' wrapper from the bytes repr (Python 3); decoding
    # the ASCII bytes would be the cleaner equivalent.
    s = str(s)[2:-1]
    return s

Remove foreign accents and characters to normalize the string.
Prevents encoding errors.

:param str s: String
:return str: Normalized string
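The entry above depends only on the standard library, so its behavior can be sketched standalone. This variant decodes the ASCII bytes instead of slicing the `b'...'` repr, which gives the same result for pure-ASCII output:

```python
import unicodedata

def normalize_name(s):
    # NFKD splits accented characters into base letter + combining mark;
    # encoding to ASCII with errors='ignore' then drops the marks.
    b = unicodedata.normalize('NFKD', s).encode('ascii', 'ignore')
    return b.decode('ascii')

print(normalize_name('Crème Brûlée'))  # -> Creme Brulee
```

Note that characters with no ASCII decomposition (e.g. CJK) are dropped entirely rather than transliterated.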
def add_to_team(self, **kw):
    group = self.context.participant_policy.title()
    data = kw.copy()
    if "groups" in data:
        data["groups"].add(group)
    else:
        data["groups"] = set([group])
    super(PloneIntranetWorkspace, self).add_to_team(**data)

We override this method to add our additional participation policy
groups, as detailed in available_groups above.
def removetmp():
    for path in _tmp_paths:
        if os.path.exists(path):
            try:
                os.remove(path)
            except PermissionError:
                pass

Remove the temporary files created by gettemp.
def send_message_tracked(self, msg):
    msg.type_ = aioxmpp.MessageType.GROUPCHAT
    msg.to = self._mucjid
    msg.xep0045_muc_user = muc_xso.UserExt()
    msg.autoset_id()
    tracking_svc = self.service.dependencies[
        aioxmpp.tracking.BasicTrackingService
    ]
    tracker = aioxmpp.tracking.MessageTracker()
    id_key = msg.id_
    body_key = _extract_one_pair(msg.body)
    self._tracking_by_id[id_key] = tracker
    self._tracking_metadata[tracker] = (
        id_key,
        body_key,
    )
    self._tracking_by_body.setdefault(
        body_key,
        []
    ).append(tracker)
    tracker.on_closed.connect(functools.partial(
        self._tracker_closed,
        tracker,
    ))
    token = tracking_svc.send_tracked(msg, tracker)
    self.on_message(
        msg,
        self._this_occupant,
        aioxmpp.im.dispatcher.MessageSource.STREAM,
        tracker=tracker,
    )
    return token, tracker
Send a message to the MUC with tracking.

:param msg: The message to send.
:type msg: :class:`aioxmpp.Message`

.. warning::

    Please read :ref:`api-tracking-memory`. This is especially relevant
    for MUCs because tracking is not guaranteed to work due to how
    :xep:`45` is written. It will work in many cases, probably in all
    cases you test during development, but it may fail to work for some
    individual messages and it may fail to work consistently for some
    services. See the implementation details below for reasons.

The message is tracked and is considered
:attr:`~.MessageState.DELIVERED_TO_RECIPIENT` when it is reflected back
to us by the MUC service. The reflected message is then available in
the :attr:`~.MessageTracker.response` attribute.

.. note::

    Two things:

    1. The MUC service may change the contents of the message. An
       example of this is the Prosody developer MUC which replaces
       messages with more than a few lines with a pastebin link.

    2. Reflected messages which are caught by tracking are not emitted
       through :meth:`on_message`.

There is no need to set the address attributes or the type of the
message correctly; those will be overridden by this method to conform
to the requirements of a message to the MUC. Other attributes are left
untouched (except that :meth:`~.StanzaBase.autoset_id` is called) and
can be used as desired for the message.

.. warning::

    Using :meth:`send_message_tracked` before :meth:`on_join` has
    emitted will cause the `member` object in the resulting
    :meth:`on_message` event to be :data:`None` (the message will be
    delivered just fine).

    Using :meth:`send_message_tracked` before history replay is over
    will cause the :meth:`on_message` event to be emitted during
    history replay, even though everyone else in the MUC will -- of
    course -- only see the message after the history.

    :meth:`send_message` is not affected by these quirks.

.. seealso::

    :meth:`.AbstractConversation.send_message_tracked`
        for the full interface specification.

**Implementation details:** Currently, we try to detect reflected
messages using two different criteria. First, if we see a message with
the same message ID (note that message IDs contain 120 bits of entropy)
as the message we sent, we consider it as the reflection. As some MUC
services re-write the message ID in the reflection, as a fallback, we
also consider messages which originate from the correct sender and have
the correct body a reflection.

Obviously, this fails consistently in MUCs which re-write the body and
re-write the ID, and randomly if the MUC always re-writes the ID but
only sometimes the body.
def format_from_extension(self, extension):
    formats = [name for name, format in self._formats.items()
               if format.get('file_extension', None) == extension]
    if len(formats) == 0:
        return None
    elif len(formats) > 1:
        # More than one registered format may claim the same extension;
        # the caller must disambiguate explicitly. (The original
        # ``== 2`` check silently missed three or more matches.)
        raise RuntimeError("Several formats are registered with "
                           "that extension; please specify the format "
                           "explicitly.")
    else:
        return formats[0]

Find a format from its extension.
def find_price_by_category(package, price_category):
    for item in package['items']:
        price_id = _find_price_id(item['prices'], price_category)
        if price_id:
            return price_id
    raise ValueError(
        "Could not find price with the category, %s" % price_category)

Find the price in the given package that has the specified category.

:param package: The AsAService, Enterprise, or Performance product package
:param price_category: The price category code to search for
:return: Returns the price for the given category, or raises a
    ValueError if not found
def add_monitor(self, pattern, callback, limit=80):
    self.buffer.add_monitor(pattern, partial(callback, self), limit)

Calls the given function whenever the given pattern matches the
incoming data.

.. HINT::

    If you want to catch all incoming data regardless of a pattern,
    use the Protocol.data_received_event event instead.

Arguments passed to the callback are the protocol instance, the index
of the match, and the match object of the regular expression.

:type pattern: str|re.RegexObject|list(str|re.RegexObject)
:param pattern: One or more regular expressions.
:type callback: callable
:param callback: The function that is called.
:type limit: int
:param limit: The maximum size of the tail of the buffer that is
    searched, in number of bytes.
def close(self, virtual_account_id, data=None, **kwargs):
    # Avoid the mutable-default-argument pitfall and copy, so the
    # caller's dict is not modified.
    data = dict(data or {})
    url = "{}/{}".format(self.base_url, virtual_account_id)
    data['status'] = 'closed'
    return self.patch_url(url, data, **kwargs)

Close the Virtual Account with the given id.

Args:
    virtual_account_id: Id of the Virtual Account to be closed.
async def description(self):
    resp = await self._call_web(f'nation={self.id}')
    return html.unescape(
        re.search(
            '<div class="nationsummary">(.+?)<p class="nationranktext">',
            resp.text,
            flags=re.DOTALL
        )
        .group(1)
        .replace('\n', '')
        .replace('</p>', '')
        .replace('<p>', '\n\n')
        .strip()
    )

Nation's full description, as seen on its in-game page.

Returns
-------
an awaitable of str
def to_string(self):
    return '%s %s %s %s %s' % (
        self.trait, self.start_position, self.peak_start_position,
        self.peak_stop_position, self.stop_position)

Return the string as it should be presented in a MapChart input file.
def _authenticated_call_geocoder(self, url, timeout=DEFAULT_SENTINEL):
    if self.token is None or int(time()) > self.token_expiry:
        self._refresh_authentication_token()
    request = Request(
        "&".join((url, urlencode({"token": self.token}))),
        headers={"Referer": self.referer}
    )
    return self._base_call_geocoder(request, timeout=timeout)

Wrap self._call_geocoder, handling tokens.
def available_for_protocol(self, protocol):
    if self.protocol == ALL or protocol == ALL:
        return True
    return protocol in ensure_sequence(self.protocol)

Check if the current function can be executed from a request defining
the given protocol.
def get_external_ip(self):
    random.shuffle(self.server_list)
    # Try up to three randomly chosen servers.
    for server in self.server_list[:3]:
        myip = self.fetch(server)
        if myip != '':
            return myip
    return ''

This function gets your IP from a random server.
def copy(self):
    return Eq(
        self._lhs, self._rhs, tag=self._tag, _prev_lhs=self._prev_lhs,
        _prev_rhs=self._prev_rhs, _prev_tags=self._prev_tags)

Return a copy of the equation.
def _token_counts_from_generator(generator, max_chars, reserved_tokens):
    reserved_tokens = list(reserved_tokens) + [_UNDERSCORE_REPLACEMENT]
    tokenizer = text_encoder.Tokenizer(
        alphanum_only=False, reserved_tokens=reserved_tokens)
    num_chars = 0
    token_counts = collections.defaultdict(int)
    for s in generator:
        s = tf.compat.as_text(s)
        if max_chars and (num_chars + len(s)) >= max_chars:
            s = s[:(max_chars - num_chars)]
        tokens = tokenizer.tokenize(s)
        tokens = _prepare_tokens_for_encode(tokens)
        for t in tokens:
            token_counts[t] += 1
        if max_chars:
            num_chars += len(s)
            if num_chars > max_chars:
                break
    return token_counts

Builds token counts from generator.
def load(self, rel_path=None):
    for k, v in self.layer.iteritems():
        self.add(k, v['module'], v.get('package'))
        filename = v.get('filename')
        path = v.get('path')
        if filename:
            warnings.warn(DeprecationWarning(SIMFILE_LOAD_WARNING))
            if not path:
                path = rel_path
            else:
                path = os.path.join(rel_path, path)
            filename = os.path.join(path, filename)
            self.open(k, filename)

Add sim_src to layer.
def theta_str(theta, taustr=TAUSTR, fmtstr='{coeff:,.1f}{taustr}'):
    coeff = theta / TAU
    theta_str = fmtstr.format(coeff=coeff, taustr=taustr)
    return theta_str

Format theta so it is interpretable in base 10.

Args:
    theta (float): angle in radians
    taustr (str): default 2pi

Returns:
    str: theta_str - the angle in tau units

Example1:
    >>> # ENABLE_DOCTEST
    >>> from utool.util_str import *  # NOQA
    >>> theta = 3.1415
    >>> result = theta_str(theta)
    >>> print(result)
    0.5*2pi

Example2:
    >>> # ENABLE_DOCTEST
    >>> from utool.util_str import *  # NOQA
    >>> theta = 6.9932
    >>> taustr = 'tau'
    >>> result = theta_str(theta, taustr)
    >>> print(result)
    1.1tau
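The doctests above only run with utool's module constants in scope. A self-contained sketch follows; the values of `TAU` and `TAUSTR` are assumptions chosen to reproduce the doctest output:

```python
import math

# Assumed constants: in utool these are presumably defined next to theta_str.
TAU = 2 * math.pi
TAUSTR = '*2pi'

def theta_str(theta, taustr=TAUSTR, fmtstr='{coeff:,.1f}{taustr}'):
    # Express the angle as a multiple of tau, rounded to one decimal.
    coeff = theta / TAU
    return fmtstr.format(coeff=coeff, taustr=taustr)

print(theta_str(3.1415))         # -> 0.5*2pi
print(theta_str(6.9932, 'tau'))  # -> 1.1tau
```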
def align(self, out_path=None):
    if out_path is None:
        out_path = self.prefix_path + '.aln'
    sh.muscle38("-in", self.path, "-out", out_path)
    return AlignedFASTA(out_path)

We align the sequences in the fasta file with muscle.
async def getItemCmdr(cell, outp=None, **opts):
    cmdr = await s_cli.Cli.anit(cell, outp=outp)
    typename = await cell.getCellType()
    for ctor in cmdsbycell.get(typename, ()):
        cmdr.addCmdClass(ctor)
    return cmdr

Construct and return a cmdr for the given remote cell.

Example:

    cmdr = await getItemCmdr(foo)
def date_add(start, days):
    sc = SparkContext._active_spark_context
    return Column(sc._jvm.functions.date_add(_to_java_column(start), days))

Returns the date that is `days` days after `start`.

>>> df = spark.createDataFrame([('2015-04-08',)], ['dt'])
>>> df.select(date_add(df.dt, 1).alias('next_date')).collect()
[Row(next_date=datetime.date(2015, 4, 9))]
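The Spark function delegates to the JVM, so the doctest needs a running SparkSession. The date arithmetic itself can be reproduced with the standard library; `date_add_py` is a hypothetical stand-in, not part of pyspark:

```python
from datetime import date, timedelta

def date_add_py(start, days):
    # Stdlib equivalent of the arithmetic Spark's date_add performs
    # on the JVM side.
    return start + timedelta(days=days)

print(date_add_py(date(2015, 4, 8), 1))  # -> 2015-04-09
```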
def order_delete(backend, kitchen, order_id):
    use_kitchen = Backend.get_kitchen_name_soft(kitchen)
    print use_kitchen
    if use_kitchen is None and order_id is None:
        raise click.ClickException(
            'You must specify either a kitchen or an order_id '
            'or be in a kitchen directory')
    if order_id is not None:
        click.secho('%s - Delete an Order using id %s'
                    % (get_datetime(), order_id), fg='green')
        check_and_print(
            DKCloudCommandRunner.delete_one_order(backend.dki, order_id))
    else:
        click.secho('%s - Delete all orders in Kitchen %s'
                    % (get_datetime(), use_kitchen), fg='green')
        check_and_print(
            DKCloudCommandRunner.delete_all_order(backend.dki, use_kitchen))

Delete one order or all orders in a kitchen.
async def storPropSet(self, buid, prop, valu):
    assert self.buidcache.disabled
    indx = prop.type.indx(valu)
    if indx is not None and len(indx) > MAX_INDEX_LEN:
        mesg = 'index bytes are too large'
        raise s_exc.BadIndxValu(mesg=mesg, prop=prop, valu=valu)
    # 46 is ord('.') and 35 is ord('#'): a leading '.' or '#' marks a
    # universal property.
    univ = prop.utf8name[0] in (46, 35)
    bpkey = buid + prop.utf8name
    self._storPropSetCommon(buid, prop.utf8name, bpkey, prop.pref, univ,
                            valu, indx)

Migration-only function.
def where(self, exact=False, **kwargs):
    for field_name in kwargs:
        if isinstance(kwargs[field_name], list):
            self.where_in(field_name, kwargs[field_name], exact)
        else:
            self.where_equals(field_name, kwargs[field_name], exact)
    return self

Get all the documents that are equal to the values within kwargs.

@param bool exact: If True, get an exact match of the query.
@param kwargs: the keys of the kwargs will be the field names in the
    index you want to query. The values will be the field values you
    want to query (if kwargs[field_name] is a list it will behave as
    the where_in method).
def shuffle(self, times=1):
    for _ in xrange(times):
        random.shuffle(self.cards)

Shuffles the Stack.

.. note::

    Shuffling large numbers of cards (100,000+) may take a while.

:arg int times: The number of times to shuffle.
def get_by_index(self, index):
    try:
        return self[index]
    except KeyError:
        for v in self.get_volumes():
            if v.index == str(index):
                return v
        raise KeyError(index)

Returns a Volume or Disk by its index.
def stop(self, force=False):
    if self._initialized:
        self.send(C1218TerminateRequest())
        data = self.recv()
        if data == b'\x00' or force:
            self._initialized = False
            self._toggle_bit = False
            return True
    return False

Send a terminate request.

:param bool force: ignore the remote device's response
def queue_poll(self, sleep_t=0.5):
    connection_alive = True
    while self.running:
        if self.ws:
            def logger_and_close(msg):
                self.log.error('Websocket exception', exc_info=True)
                if not self.running:
                    connection_alive = False
                else:
                    if not self.number_try_connection:
                        self.teardown()
                        self._display_ws_warning()
            with catch(websocket.WebSocketException, logger_and_close):
                result = self.ws.recv()
                self.queue.put(result)
        if connection_alive:
            time.sleep(sleep_t)

Put new messages on the queue as they arrive. Blocking in a thread.
Value of sleep is low to improve responsiveness.
def cluster_reduce(idx, snr, window_size):
    ind = findchirp_cluster_over_window(idx, snr, window_size)
    return idx.take(ind), snr.take(ind)

Reduce the events by clustering over a window.

Parameters
----------
indices : Array
    The list of indices of the SNR values
snr : Array
    The list of SNR values
window_size : int
    The size of the window in integer samples.

Returns
-------
indices : Array
    The list of indices of the SNR values
snr : Array
    The list of SNR values
def play(state):
    filename = None
    if state == SoundService.State.welcome:
        filename = "pad_glow_welcome1.wav"
    elif state == SoundService.State.goodbye:
        filename = "pad_glow_power_off.wav"
    elif state == SoundService.State.hotword_detected:
        filename = "pad_soft_on.wav"
    elif state == SoundService.State.asr_text_captured:
        filename = "pad_soft_off.wav"
    elif state == SoundService.State.error:
        filename = "music_marimba_error_chord_2x.wav"
    if filename is not None:
        AudioPlayer.play_async("{}/{}".format(ABS_SOUND_DIR, filename))

Play the sound for a given state.

:param state: a State value.
def _ixs(self, i, axis=0):
    label = self.index[i]
    if isinstance(label, Index):
        return self.take(i, axis=axis)
    else:
        return self._get_val_at(i)

Return the i-th value or values in the SparseSeries by location.

Parameters
----------
i : int, slice, or sequence of integers

Returns
-------
value : scalar (int) or Series (slice, sequence)
def _closest_centroid(self, x):
    closest_centroid = 0
    # Start from infinity. Note the original used 10^9, which in Python
    # is bitwise XOR and evaluates to 3, not one billion -- any sample
    # farther than 3 from every centroid would wrongly return index 0.
    distance = float('inf')
    for i in range(self.n_clusters):
        current_distance = linalg.norm(x - self.centroids[i])
        if current_distance < distance:
            closest_centroid = i
            distance = current_distance
    return closest_centroid

Returns the index of the closest centroid to the sample.
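The nearest-centroid scan can be sketched standalone with the standard library (`math.dist` replaces `linalg.norm` for plain tuples); this is a minimal illustration, not the class method itself:

```python
import math

def closest_centroid(x, centroids):
    # Index of the centroid with the smallest Euclidean distance to x.
    # Initialize with +infinity -- 10^9 would be XOR (== 3) in Python.
    best_i, best_d = 0, math.inf
    for i, c in enumerate(centroids):
        d = math.dist(x, c)
        if d < best_d:
            best_i, best_d = i, d
    return best_i

print(closest_centroid((0.0, 0.0),
                       [(5.0, 5.0), (1.0, 0.0), (10.0, 10.0)]))  # -> 1
```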
def sample(self, nsamples, nburn=0, nthin=1,
           save_hidden_state_trajectory=False, call_back=None):
    # Burn-in phase: run the sampler without recording.
    for iteration in range(nburn):
        logger().info("Burn-in %8d / %8d" % (iteration, nburn))
        self._update()
    # Production phase: record one model every `nthin` updates.
    models = list()
    for iteration in range(nsamples):
        logger().info("Iteration %8d / %8d" % (iteration, nsamples))
        for thin in range(nthin):
            self._update()
        model_copy = copy.deepcopy(self.model)
        if not save_hidden_state_trajectory:
            model_copy.hidden_state_trajectory = None
        models.append(model_copy)
        if call_back is not None:
            call_back()
    return models

Sample from the BHMM posterior.

Parameters
----------
nsamples : int
    The number of samples to generate.
nburn : int, optional, default=0
    The number of samples to discard to burn-in, following which
    `nsamples` will be generated.
nthin : int, optional, default=1
    The number of Gibbs sampling updates used to generate each
    returned sample.
save_hidden_state_trajectory : bool, optional, default=False
    If True, the hidden state trajectory for each sample will be
    saved as well.
call_back : function, optional, default=None
    A callback function with no arguments, which if given is called
    after each computed sample. This is useful for implementing
    progress bars.

Returns
-------
models : list of bhmm.HMM
    The sampled HMM models from the Bayesian posterior.

Examples
--------
>>> from bhmm import testsystems
>>> [model, observations, states, sampled_model] = testsystems.generate_random_bhmm(ntrajectories=5, length=1000)
>>> nburn = 5  # run the sampler a bit before recording samples
>>> nsamples = 10  # generate 10 samples
>>> nthin = 2  # discard one sample in between each recorded sample
>>> samples = sampled_model.sample(nsamples, nburn=nburn, nthin=nthin)
def setupnode(overwrite=False):
    if not port_is_open():
        if not skip_disable_root():
            disable_root()
        port_changed = change_ssh_port()
    if server_state('setupnode-incomplete'):
        env.overwrite = True
    else:
        set_server_state('setupnode-incomplete')
    upload_ssh_key()
    restrict_ssh()
    add_repositories()
    upgrade_packages()
    setup_ufw()
    uninstall_packages()
    install_packages()
    upload_etc()
    post_install_package()
    setup_ufw_rules()
    set_timezone()
    set_server_state('setupnode-incomplete', delete=True)
    for s in webserver_list():
        stop_webserver(s)
        start_webserver(s)

Install a baseline host. Can be run multiple times.
def get_pretty_string(self, stat, verbose):
    pretty_output = _PrettyOutputToStr()
    self.generate_pretty_output(stat=stat, verbose=verbose,
                                output_function=pretty_output.save_output)
    return pretty_output.result

Pretty string representation of the results.

:param stat: bool
:param verbose: bool
:return: str
def find(self, compile_failure_log, target):
    not_found_classnames = [
        err.classname
        for err in self.compile_error_extractor.extract(compile_failure_log)
    ]
    return self._select_target_candidates_for_class(
        not_found_classnames, target)

Find missing deps on a best-effort basis from target's transitive
dependencies.

Returns a (class2deps, no_dep_found) tuple. `class2deps` maps each
classname to the deps that contain the class. `no_dep_found` lists the
classnames for which no dep could be found.
def _get_installations(self):
    response = None
    for base_url in urls.BASE_URLS:
        urls.BASE_URL = base_url
        try:
            response = requests.get(
                urls.get_installations(self._username),
                headers={
                    'Cookie': 'vid={}'.format(self._vid),
                    'Accept': 'application/json,'
                              'text/javascript, */*; q=0.01',
                })
            if 2 == response.status_code // 100:
                break
            elif 503 == response.status_code:
                continue
            else:
                raise ResponseError(response.status_code, response.text)
        except requests.exceptions.RequestException as ex:
            raise RequestError(ex)
    _validate_response(response)
    self.installations = json.loads(response.text)

Get information about installations.
async def stop(self):
    if self._rpc_task is not None:
        self._rpc_task.cancel()
        try:
            await self._rpc_task
        except asyncio.CancelledError:
            pass
        self._rpc_task = None

Stop the rpc queue from inside the event loop.
def write_output(output, text=True, output_path=None):
    if output_path is None and text is False:
        print("ERROR: You must specify an output file using -o/--output "
              "for binary output formats")
        sys.exit(1)
    if output_path is not None:
        if text:
            outfile = open(output_path, "w", encoding="utf-8")
        else:
            outfile = open(output_path, "wb")
    else:
        outfile = sys.stdout
    try:
        if text and isinstance(output, bytes):
            output = output.decode('utf-8')
        outfile.write(output)
    finally:
        if outfile is not sys.stdout:
            outfile.close()

Write binary or text output to a file or stdout.
def print_permissions(permissions):
    table = formatting.Table(['keyName', 'Description'])
    for perm in permissions:
        table.add_row([perm['keyName'], perm['name']])
    return table

Prints out a user's permissions.
def from_template(cls, data, template):
    name = DEFAULT_NAME
    if isinstance(template, str):
        name = template
        table_info = TEMPLATES[name]
    else:
        table_info = template
    if 'name' in table_info:
        name = table_info['name']
    dt = table_info['dtype']
    loc = table_info['h5loc']
    split = table_info['split_h5']
    h5singleton = table_info['h5singleton']
    return cls(
        data,
        h5loc=loc,
        dtype=dt,
        split_h5=split,
        name=name,
        h5singleton=h5singleton
    )

Create a table from a predefined datatype.

See the ``templates_avail`` property for available names.

Parameters
----------
data
    Data in a format that the ``__init__`` understands.
template : str or dict
    Name of the dtype template to use from ``kp.dataclasses_templates``
    or a ``dict`` containing the required attributes (see the other
    templates for reference).
def logout(config):
    state = read(config.configfile)
    if state.get("BUGZILLA"):
        remove(config.configfile, "BUGZILLA")
        success_out("Forgotten")
    else:
        error_out("No stored Bugzilla credentials")

Remove and forget your Bugzilla credentials.
def Add(self, file_desc_proto):
    proto_name = file_desc_proto.name
    if proto_name not in self._file_desc_protos_by_file:
        self._file_desc_protos_by_file[proto_name] = file_desc_proto
    elif self._file_desc_protos_by_file[proto_name] != file_desc_proto:
        raise DescriptorDatabaseConflictingDefinitionError(
            '%s already added, but with different descriptor.' % proto_name)
    package = file_desc_proto.package
    for message in file_desc_proto.message_type:
        self._file_desc_protos_by_symbol.update(
            (name, file_desc_proto)
            for name in _ExtractSymbols(message, package))
    for enum in file_desc_proto.enum_type:
        self._file_desc_protos_by_symbol[
            '.'.join((package, enum.name))] = file_desc_proto
    for extension in file_desc_proto.extension:
        self._file_desc_protos_by_symbol[
            '.'.join((package, extension.name))] = file_desc_proto
    for service in file_desc_proto.service:
        self._file_desc_protos_by_symbol[
            '.'.join((package, service.name))] = file_desc_proto

Adds the FileDescriptorProto and its types to this database.

Args:
    file_desc_proto: The FileDescriptorProto to add.

Raises:
    DescriptorDatabaseConflictingDefinitionError: if an attempt is made
        to add a proto with the same name but a different definition
        than an existing proto in the database.
def duration(self):
    durs = []
    for track in self._segments:
        durs.append(sum([comp.duration() for comp in track]))
    return max(durs)

The duration of this stimulus.

:returns: float -- duration in seconds
def get_subject_guide_for_section(section):
    return get_subject_guide_for_section_params(
        section.term.year, section.term.quarter, section.curriculum_abbr,
        section.course_number, section.section_id)

Returns a SubjectGuide model for the passed SWS section model.
def snippet(code, locations, sep=' | ', colmark=('-', '^'), context=5):
    if not locations:
        return []
    lines = code.split('\n')
    offset = int(len(lines) / 10) + 1
    linenofmt = '%{}d'.format(offset)
    s = []
    for loc in locations:
        line = max(0, loc.get('line', 1) - 1)
        column = max(0, loc.get('column', 1) - 1)
        start_line = max(0, line - context)
        for i, ln in enumerate(lines[start_line:line + 1], start_line):
            s.append('{}{}{}'.format(linenofmt % i, sep, ln))
        s.append('{}{}{}'.format(' ' * (offset + len(sep)),
                                 colmark[0] * column, colmark[1]))
    return s

Given a code string and a list of locations, convert to snippet lines.

The return value will include the line number, a separator (``sep``),
then the line contents. At most ``context`` lines are shown before each
location line. After each location line, the column is marked using
``colmark``: the first character is repeated up to the column, the
second character is used only once.

:return: list of lines of sources or column markups.
:rtype: list
def find(cls, *args, **kwargs):
    return list(cls.collection.find(*args, **kwargs))

Returns all document dicts that pass the filter.
def analyze(self):
    class MockBindings(dict):
        def __contains__(self, key):
            self[key] = None
            return True
    bindings = MockBindings()
    used = {}
    ancestor = self.ancestor
    if isinstance(ancestor, ParameterizedThing):
        ancestor = ancestor.resolve(bindings, used)
    filters = self.filters
    if filters is not None:
        filters = filters.resolve(bindings, used)
    return sorted(used)

Return a list giving the parameters required by a query.
def namespaced_view_name(view_name, metric_prefix):
    metric_prefix = metric_prefix or "custom.googleapis.com/opencensus"
    # os.path.join uses backslashes on Windows; normalize to '/'.
    return os.path.join(metric_prefix, view_name).replace('\\', '/')

Create the string to be used as the metric type.
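Because the metric type is a URL-style path, `posixpath.join` produces the same result on every platform without the backslash cleanup; this is an equivalent standalone sketch, not the upstream function:

```python
import posixpath

def namespaced_view_name(view_name, metric_prefix):
    # posixpath always joins with '/', even on Windows.
    metric_prefix = metric_prefix or "custom.googleapis.com/opencensus"
    return posixpath.join(metric_prefix, view_name)

print(namespaced_view_name("my_view", None))
# -> custom.googleapis.com/opencensus/my_view
```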
def _get_ref_lengths(self):
    sam_reader = pysam.Samfile(self.bam, "rb")
    return dict(zip(sam_reader.references, sam_reader.lengths))

Gets the length of each reference sequence from the header of the bam.

Returns: dict name => length
def get_assessment(self, assessment):
    response = self.http.get('/Assessment/' + str(assessment))
    assessment = Schemas.Assessment(assessment=response)
    return assessment

Get an Assessment by id.
def lldp(interface='', **kwargs):
    proxy_output = salt.utils.napalm.call(
        napalm_device,
        'get_lldp_neighbors_detail',
        **{}
    )
    if not proxy_output.get('result'):
        return proxy_output
    lldp_neighbors = proxy_output.get('out')
    if interface:
        lldp_neighbors = {interface: lldp_neighbors.get(interface)}
    proxy_output.update({
        'out': lldp_neighbors
    })
    return proxy_output

Returns a detailed view of the LLDP neighbors.

:param interface: interface name to filter on
:return: A dictionary with the LLDP neighbors. The keys are the
    interfaces with LLDP activated on.

CLI Example:

.. code-block:: bash

    salt '*' net.lldp
    salt '*' net.lldp interface='TenGigE0/0/0/8'

Example output:

.. code-block:: python

    {
        'TenGigE0/0/0/8': [
            {
                'parent_interface': 'Bundle-Ether8',
                'interface_description': 'TenGigE0/0/0/8',
                'remote_chassis_id': '8c60.4f69.e96c',
                'remote_system_name': 'switch',
                'remote_port': 'Eth2/2/1',
                'remote_port_description': 'Ethernet2/2/1',
                'remote_system_description': 'Cisco Nexus Operating System (NX-OS) Software 7.1(0)N1(1a) TAC support: http://www.cisco.com/tac Copyright (c) 2002-2015, Cisco Systems, Inc. All rights reserved.',
                'remote_system_capab': 'B, R',
                'remote_system_enable_capab': 'B'
            }
        ]
    }
def from_json(value, native_datetimes=True):
    hook = BasicJsonDecoder(native_datetimes=native_datetimes)
    result = json.loads(value, object_hook=hook)
    if native_datetimes and isinstance(result, string_types):
        return get_date_or_string(result)
    return result

Deserializes the given value from JSON.

:param value: the value to deserialize
:type value: str
:param native_datetimes: whether or not strings that look like
    dates/times should be automatically cast to the native objects, or
    left as strings; if not specified, defaults to ``True``
:type native_datetimes: bool
def arrow(ctx, apollo_instance, verbose, log_level):
    set_logging_level(log_level)
    try:
        ctx.gi = get_apollo_instance(apollo_instance)
    except TypeError:
        pass
    ctx.verbose = verbose

Command line wrappers around Apollo functions. While this sounds
unexciting, with arrow and jq you can easily build powerful command
line scripts.
def parse_set(string):
    string = string.strip()
    if string:
        return set(string.split(","))
    else:
        return set()

Parse a set from a comma-separated string.
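The parser strips only the string as a whole, not the individual items, so `"a, b"` yields an element with a leading space. A standalone copy makes this visible:

```python
def parse_set(string):
    # Only the whole string is stripped; items keep surrounding spaces.
    string = string.strip()
    return set(string.split(",")) if string else set()

print(parse_set("a,b,c") == {"a", "b", "c"})  # -> True
print(parse_set("   ") == set())              # -> True
print(parse_set("a, b"))                      # note the ' b' element
```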
def rotation_from_axes(x_axis, y_axis, z_axis):
    # Stack the three axis vectors as the columns of a 3x3 matrix.
    return np.hstack((x_axis[:, np.newaxis],
                      y_axis[:, np.newaxis],
                      z_axis[:, np.newaxis]))

Convert specification of axis in target frame to a rotation matrix
from source to target frame.

Parameters
----------
x_axis : :obj:`numpy.ndarray` of float
    A normalized 3-vector for the target frame's x-axis.
y_axis : :obj:`numpy.ndarray` of float
    A normalized 3-vector for the target frame's y-axis.
z_axis : :obj:`numpy.ndarray` of float
    A normalized 3-vector for the target frame's z-axis.

Returns
-------
:obj:`numpy.ndarray` of float
    A 3x3 rotation matrix that transforms from a source frame to the
    given target frame.
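A quick sanity check of the column-stacking: feeding in the standard basis vectors should produce the identity rotation (a sketch assuming only numpy):

```python
import numpy as np

def rotation_from_axes(x_axis, y_axis, z_axis):
    # Each target-frame axis becomes one column of the rotation matrix.
    return np.hstack((x_axis[:, np.newaxis],
                      y_axis[:, np.newaxis],
                      z_axis[:, np.newaxis]))

R = rotation_from_axes(np.array([1.0, 0.0, 0.0]),
                       np.array([0.0, 1.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]))
print(R.shape)  # -> (3, 3)
```

For orthonormal input axes the result is a proper rotation (orthogonal, determinant +1); the function itself does not verify that.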
def getWindow(title, exact=False):
    titles = getWindows()
    hwnd = titles.get(title, None)
    if not hwnd and not exact:
        for k, v in titles.items():
            if title in k:
                hwnd = v
                break
    if hwnd:
        return Window(hwnd)
    else:
        return None

Return a Window object if 'title' or a part of it is found in the
titles of visible windows, else return None. Only the first window
found is returned.

Args:
    title: unicode string
    exact (bool): True to search only for an exact match
def find_synonymous_field(field, model=DEFAULT_MODEL, app=DEFAULT_APP,
                          score_cutoff=50, root_preference=1.02):
    fields = util.listify(field) + list(synonyms(field))
    model = get_model(model, app)
    available_field_names = model._meta.get_all_field_names()
    best_match, best_ratio = None, None
    for i, field_name in enumerate(fields):
        match = fuzzy.extractOne(str(field_name), available_field_names)
        if match and match[1] >= score_cutoff:
            if not best_match or match[1] > (root_preference * best_ratio):
                best_match, best_ratio = match
    return best_match

Use a dictionary of synonyms and fuzzy string matching to find a
similarly named field.

Returns:
    A single model field name (string)

Examples:
    >>> find_synonymous_field('date', model='WikiItem')
    'end_date_time'
    >>> find_synonymous_field('date', model='WikiItem')
    'date_time'
    >>> find_synonymous_field('time', model='WikiItem')
    'date_time'
def get_lmv2_response(domain, username, password, server_challenge,
                      client_challenge):
    ntlmv2_hash = PasswordAuthentication.ntowfv2(
        domain, username, password.encode('utf-16le'))
    # HMAC-MD5 over server challenge || client challenge, keyed with
    # the NTLMv2 hash.
    hmac_context = hmac.HMAC(ntlmv2_hash, hashes.MD5(),
                             backend=default_backend())
    hmac_context.update(server_challenge)
    hmac_context.update(client_challenge)
    lmv2_hash = hmac_context.finalize()
    session_key = hmac.HMAC(ntlmv2_hash, hashes.MD5(),
                            backend=default_backend())
    session_key.update(lmv2_hash)
    return lmv2_hash + client_challenge, session_key.finalize()

Computes an appropriate LMv2 response based on the supplied arguments.
The algorithm is based on jCIFS. The response is 24 bytes: the 16-byte
hash concatenated with the 8-byte client challenge.
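The same HMAC-MD5 chaining can be shown with the standard library's `hmac` module, taking the NTLMv2 hash as a raw 16-byte key (dummy bytes below, since `ntowfv2` belongs to the surrounding class). This is an illustrative sketch, not the library's implementation:

```python
import hashlib
import hmac

def lmv2_response(ntlmv2_hash, server_challenge, client_challenge):
    # MAC over server challenge || client challenge, keyed with the
    # NTLMv2 hash; the 24-byte response is the 16-byte MAC followed by
    # the 8-byte client challenge.
    mac = hmac.new(ntlmv2_hash, server_challenge + client_challenge,
                   hashlib.md5).digest()
    # The session base key is HMAC-MD5 of the MAC under the same key.
    session_key = hmac.new(ntlmv2_hash, mac, hashlib.md5).digest()
    return mac + client_challenge, session_key

resp, key = lmv2_response(b'\x11' * 16, b'\x22' * 8, b'\x33' * 8)
print(len(resp), len(key))  # -> 24 16
```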
def getBlocksTags(self):
    myBlocks = self.blocks
    return [
        (myBlocks[i], i) for i in range(len(myBlocks))
        if issubclass(myBlocks[i].__class__, AdvancedTag)
    ]

getBlocksTags - Returns a list of tuples referencing the blocks which
are direct children of this node, where the block is an AdvancedTag.
The tuples are (block, blockIdx), where "blockIdx" is the index of
self.blocks wherein the tag resides.

@return list< tuple(block, blockIdx) > - A list of tuples of child
    blocks which are tags, and their index in the self.blocks list
def list_elasticache(region, filter_by_kwargs):
    conn = boto.elasticache.connect_to_region(region)
    req = conn.describe_cache_clusters()
    data = req["DescribeCacheClustersResponse"][
        "DescribeCacheClustersResult"]["CacheClusters"]
    if filter_by_kwargs:
        clusters = [x['CacheClusterId'] for x in data
                    if x[filter_by_kwargs.keys()[0]]
                    == filter_by_kwargs.values()[0]]
    else:
        clusters = [x['CacheClusterId'] for x in data]
    return clusters

List all ElastiCache Clusters.
def remove(self, name, **params): log = self._getparam('log', self._discard, **params) if name not in self.names: log.error("Attempt to remove %r which was never added", name) raise Exception("Command %r has never been added" % (name,)) del self.names[name] rebuild = False for path in list(self.modules): if name in self.modules[path]: self.modules[path].remove(name) if len(self.modules[path]) == 0: del self.modules[path] rebuild = True if rebuild: self._build(name, **params)
Delete a command from the watched list. This involves removing the command from the inverted watch list, then possibly rebuilding the event set if any modules no longer need watching.
def writeFace(self, val, what='f'): val = [v + 1 for v in val] if self._hasValues and self._hasNormals: val = ' '.join(['%i/%i/%i' % (v, v, v) for v in val]) elif self._hasNormals: val = ' '.join(['%i//%i' % (v, v) for v in val]) elif self._hasValues: val = ' '.join(['%i/%i' % (v, v) for v in val]) else: val = ' '.join(['%i' % v for v in val]) self.writeLine('%s %s' % (what, val))
Write the face info to the next line.
def _on_client_volume_changed(self, data):
    client = self._clients.get(data.get('id'))
    if client is not None:
        client.update_volume(data)
Handle client volume change.
def namedb_create(path, genesis_block): global BLOCKSTACK_DB_SCRIPT if os.path.exists( path ): raise Exception("Database '%s' already exists" % path) lines = [l + ";" for l in BLOCKSTACK_DB_SCRIPT.split(";")] con = sqlite3.connect( path, isolation_level=None, timeout=2**30 ) for line in lines: db_query_execute(con, line, ()) con.row_factory = namedb_row_factory namedb_create_token_genesis(con, genesis_block['rows'], genesis_block['history']) return con
Create a sqlite3 db at the given path. Create all the tables and indexes we need.
def map_helper(data): as_list = [] length = 2 for field, value in data.items(): as_list.append(Container(field=bytes(field, ENCODING), value=bytes(value, ENCODING))) length += len(field) + len(value) + 4 return (Container( num=len(as_list), map=as_list ), length)
Build a map message.
def abstract(class_): if not inspect.isclass(class_): raise TypeError("@abstract can only be applied to classes") abc_meta = None class_meta = type(class_) if class_meta not in (_ABCMetaclass, _ABCObjectMetaclass): if class_meta is type: abc_meta = _ABCMetaclass elif class_meta is ObjectMetaclass: abc_meta = _ABCObjectMetaclass else: raise ValueError( "@abstract cannot be applied to classes with custom metaclass") class_.__abstract__ = True return metaclass(abc_meta)(class_) if abc_meta else class_
Mark the class as _abstract_ base class, forbidding its instantiation. .. note:: Unlike other modifiers, ``@abstract`` can be applied to all Python classes, not just subclasses of :class:`Object`. .. versionadded:: 0.0.3
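A simplified stand-in for the metaclass-based `@abstract` above can be written by guarding `__new__` directly. `abstract_sketch` is a hypothetical name for illustration; it forbids instantiating the decorated class itself while leaving subclasses instantiable:

```python
def abstract_sketch(cls):
    """Mark cls as abstract: direct instantiation raises TypeError,
    but subclasses instantiate normally."""
    cls.__abstract__ = True
    orig_new = cls.__new__

    def guarded_new(klass, *args, **kwargs):
        if klass is cls:
            raise TypeError(
                "abstract class %s cannot be instantiated" % cls.__name__)
        if orig_new is object.__new__:
            return object.__new__(klass)
        return orig_new(klass, *args, **kwargs)

    cls.__new__ = staticmethod(guarded_new)
    return cls

@abstract_sketch
class Base:
    pass

class Child(Base):
    pass

Child()  # subclasses work fine
try:
    Base()
except TypeError:
    pass  # direct instantiation is rejected
```

The real implementation above instead delegates to ABC-style metaclasses, which also cooperates with `abstractmethod` declarations.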
def generate_pages(self): for page in self.pages: self.generate_page(page.slug, template='page.html.jinja', page=page)
Generate HTML out of the pages added to the blog.
def parse_duration(duration, start=None, end=None): if not start and not end: return parse_simple_duration(duration) if start: return parse_duration_with_start(start, duration) if end: return parse_duration_with_end(duration, end)
Attempt to parse an ISO8601 formatted duration. Accepts a ``duration`` and optionally a start or end ``datetime``. ``duration`` must be an ISO8601 formatted string. Returns a ``datetime.timedelta`` object.
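A minimal sketch of the "simple duration" branch (no start/end date) can be built from a regular expression and `timedelta`. It deliberately omits years and months, which cannot be resolved to a fixed length without a start or end date; the function and regex names are assumptions for illustration:

```python
import re
from datetime import timedelta

_DURATION_RE = re.compile(
    r'^P(?:(?P<days>\d+)D)?'
    r'(?:T(?:(?P<hours>\d+)H)?(?:(?P<minutes>\d+)M)?(?:(?P<seconds>\d+)S)?)?$')

def parse_simple_duration_sketch(text: str) -> timedelta:
    """Parse a day/time-only ISO 8601 duration into a timedelta."""
    match = _DURATION_RE.match(text)
    if not match or text == 'P':
        raise ValueError('invalid ISO 8601 duration: %r' % text)
    # Keep only the components that actually appeared in the string
    parts = {k: int(v) for k, v in match.groupdict().items() if v}
    return timedelta(**parts)

assert parse_simple_duration_sketch('P1DT2H30M') == timedelta(days=1, hours=2, minutes=30)
assert parse_simple_duration_sketch('PT45S') == timedelta(seconds=45)
```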
def _input_as_lines(self, data):
    filename = self._input_filename = \
        FilePath(self.getTmpFilename(self.TmpDir))
    data_file = open(filename, 'w')
    data_to_file = '\n'.join([str(d).strip('\n') for d in data])
    data_file.write(data_to_file)
    data_file.close()
    return filename
Write a seq of lines to a temp file and return the filename string

data: a sequence to be written to a file, each element of the sequence
will compose a line in the file

* Note: the result will be the filename as a FilePath object
  (which is a string subclass).

* Note: '\n' will be stripped off the end of each sequence element
  before writing to a file in order to avoid multiple new lines
  accidentally being written to the file
def __up_cmp(self, obj1, obj2): if obj1.update_order > obj2.update_order: return 1 elif obj1.update_order < obj2.update_order: return -1 else: return 0
Defines how our updatable objects should be sorted
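`__up_cmp` is a Python-2-style three-way comparator. Under Python 3, `list.sort` takes a key function instead; a comparator like this can still be used via `functools.cmp_to_key`, or replaced outright with an attribute key. A minimal sketch (the `Updatable` class is invented for the example):

```python
from functools import cmp_to_key

class Updatable:
    def __init__(self, update_order):
        self.update_order = update_order

def up_cmp(obj1, obj2):
    # Same three-way comparison as __up_cmp above, written portably.
    return ((obj1.update_order > obj2.update_order)
            - (obj1.update_order < obj2.update_order))

items = [Updatable(3), Updatable(1), Updatable(2)]
items.sort(key=cmp_to_key(up_cmp))
assert [i.update_order for i in items] == [1, 2, 3]

# Simpler still, when only one attribute decides the order:
items.sort(key=lambda i: i.update_order)
```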
def images(self, type): images = [] res = yield from self.http_query("GET", "/{}/images".format(type), timeout=None) images = res.json try: if type in ["qemu", "dynamips", "iou"]: for local_image in list_images(type): if local_image['filename'] not in [i['filename'] for i in images]: images.append(local_image) images = sorted(images, key=itemgetter('filename')) else: images = sorted(images, key=itemgetter('image')) except OSError as e: raise ComputeError("Can't list images: {}".format(str(e))) return images
Return the list of images available for this type on the controller and on the compute node.
def delcal(mspath): wantremove = 'MODEL_DATA CORRECTED_DATA'.split() tb = util.tools.table() tb.open(b(mspath), nomodify=False) cols = frozenset(tb.colnames()) toremove = [b(c) for c in wantremove if c in cols] if len(toremove): tb.removecols(toremove) tb.close() if six.PY2: return toremove else: return [c.decode('utf8') for c in toremove]
Delete the ``MODEL_DATA`` and ``CORRECTED_DATA`` columns from a measurement set. mspath (str) The path to the MS to modify Example:: from pwkit.environments.casa import tasks tasks.delcal('dataset.ms')
def filter_accept_reftrack(self, reftrack): if reftrack.status() in self._forbidden_status: return False if reftrack.get_typ() in self._forbidden_types: return False if reftrack.uptodate() in self._forbidden_uptodate: return False if reftrack.alien() in self._forbidden_alien: return False return True
Return True, if the filter accepts the given reftrack :param reftrack: the reftrack to filter :type reftrack: :class:`jukeboxcore.reftrack.Reftrack` :returns: True, if the filter accepts the reftrack :rtype: :class:`bool` :raises: None
def input(input_id, name, value_class=NumberValue): def _init(): return value_class( name, input_id=input_id, is_input=True, index=-1 ) def _decorator(cls): setattr(cls, input_id, _init()) return cls return _decorator
Add input to controller
def get_subgraphs(graph=None): graph = graph or DEPENDENCIES keys = set(graph) frontier = set() seen = set() while keys: frontier.add(keys.pop()) while frontier: component = frontier.pop() seen.add(component) frontier |= set([d for d in get_dependencies(component) if d in graph]) frontier |= set([d for d in get_dependents(component) if d in graph]) frontier -= seen yield dict((s, get_dependencies(s)) for s in seen) keys -= seen seen.clear()
Given a graph of possibly disconnected components, generate all graphs of connected components. graph is a dictionary of dependencies. Keys are components, and values are sets of components on which they depend.
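The same idea can be sketched without the module-level `get_dependencies`/`get_dependents` helpers by first materializing an undirected adjacency view and then flood-filling each component. Function and variable names here are assumptions, not the library's API:

```python
def connected_subgraphs(graph):
    """Yield each connected component of a dependency dict as a sub-dict."""
    # An edge joins a node to each dependency that is also in the graph,
    # in both directions, so components ignore edge orientation.
    neighbors = {k: set() for k in graph}
    for node, deps in graph.items():
        for dep in deps:
            if dep in neighbors:
                neighbors[node].add(dep)
                neighbors[dep].add(node)
    keys = set(graph)
    while keys:
        seen, frontier = set(), {keys.pop()}
        while frontier:
            node = frontier.pop()
            seen.add(node)
            frontier |= neighbors[node] - seen
        keys -= seen
        yield {node: graph[node] for node in seen}

graph = {'a': {'b'}, 'b': set(), 'c': {'d'}, 'd': set()}
components = sorted(connected_subgraphs(graph), key=lambda g: min(g))
assert components == [{'a': {'b'}, 'b': set()}, {'c': {'d'}, 'd': set()}]
```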
def split_and_load(arrs, ctx): assert isinstance(arrs, (list, tuple)) loaded_arrs = [mx.gluon.utils.split_and_load(arr, ctx, even_split=False) for arr in arrs] return zip(*loaded_arrs)
Split arrays and load the resulting slices onto a list of contexts.
def encrypt_document(self, document_id, content, threshold=0): return self._secret_store_client(self._account).publish_document( remove_0x_prefix(document_id), content, threshold )
Encrypt string data using the DID as a secret store id. If secret store is enabled, return the result of the secret store encryption; otherwise no encryption is performed.

:param document_id: hex str id of document to use for encryption session
:param content: str to be encrypted
:param threshold: int
:return: None -- if encryption failed
         hex str -- the encrypted document
def igetattr(self, attrname, context=None): if attrname == "start": yield self._wrap_attribute(self.lower) elif attrname == "stop": yield self._wrap_attribute(self.upper) elif attrname == "step": yield self._wrap_attribute(self.step) else: yield from self.getattr(attrname, context=context)
Infer the possible values of the given attribute on the slice. :param attrname: The name of the attribute to infer. :type attrname: str :returns: The inferred possible values. :rtype: iterable(NodeNG)
def context_import(zap_helper, file_path): with zap_error_handler(): result = zap_helper.zap.context.import_context(file_path) if not result.isdigit(): raise ZAPError('Importing context from file failed: {}'.format(result)) console.info('Imported context from {}'.format(file_path))
Import a saved context file.
def _get_path_for_op_id(self, id: str) -> Optional[str]: for path_key, path_value in self._get_spec()['paths'].items(): for method in self.METHODS: if method in path_value: if self.OPERATION_ID_KEY in path_value[method]: if path_value[method][self.OPERATION_ID_KEY] == id: return path_key return None
Searches the spec for a path matching the operation id. Args: id: operation id Returns: path to the endpoint, or None if not found
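The lookup reduces to a nested scan of the spec's `paths` mapping. A standalone sketch (the function name, `METHODS` tuple, and toy spec below are illustrative assumptions):

```python
METHODS = ('get', 'put', 'post', 'delete', 'patch')

def path_for_operation_id(spec: dict, op_id: str):
    """Return the path whose operation carries the given operationId,
    or None if no operation matches."""
    for path, operations in spec.get('paths', {}).items():
        for method in METHODS:
            operation = operations.get(method)
            if operation and operation.get('operationId') == op_id:
                return path
    return None

spec = {'paths': {'/pets': {'get': {'operationId': 'listPets'},
                            'post': {'operationId': 'createPet'}}}}
assert path_for_operation_id(spec, 'createPet') == '/pets'
assert path_for_operation_id(spec, 'missing') is None
```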
def plot_seebeck_mu(self, temp=600, output='eig', xlim=None): import matplotlib.pyplot as plt plt.figure(figsize=(9, 7)) seebeck = self._bz.get_seebeck(output=output, doping_levels=False)[ temp] plt.plot(self._bz.mu_steps, seebeck, linewidth=3.0) self._plot_bg_limits() self._plot_doping(temp) if output == 'eig': plt.legend(['S$_1$', 'S$_2$', 'S$_3$']) if xlim is None: plt.xlim(-0.5, self._bz.gap + 0.5) else: plt.xlim(xlim[0], xlim[1]) plt.ylabel("Seebeck \n coefficient ($\\mu$V/K)", fontsize=30.0) plt.xlabel("E-E$_f$ (eV)", fontsize=30) plt.xticks(fontsize=25) plt.yticks(fontsize=25) plt.tight_layout() return plt
Plot the Seebeck coefficient as a function of the Fermi level

Args:
    temp: the temperature
    output: 'eig' to plot the three eigenvalues of the Seebeck tensor (default)
    xlim: a list of min and max fermi energy; by default (0, and band gap)

Returns:
    a matplotlib object
def skip_child(self, child, ancestry): if child.any(): return True for x in ancestry: if x.choice(): return True return False
get whether or not to skip the specified child
async def run_with_interrupt(task, *events, loop=None): loop = loop or asyncio.get_event_loop() task = asyncio.ensure_future(task, loop=loop) event_tasks = [loop.create_task(event.wait()) for event in events] done, pending = await asyncio.wait([task] + event_tasks, loop=loop, return_when=asyncio.FIRST_COMPLETED) for f in pending: f.cancel() for f in done: f.exception() if task in done: return task.result() else: return None
Awaits a task while allowing it to be interrupted by one or more `asyncio.Event`s. If the task finishes without the events becoming set, the result of the task will be returned. If an event becomes set, the task will be cancelled and ``None`` will be returned.

:param task: Task to run

:param events: One or more `asyncio.Event`s which, if set, will interrupt `task` and cause it to be cancelled.

:param loop: Optional event loop to use other than the default.
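The interrupt pattern can be exercised end to end with a toy worker. `run_with_interrupt_sketch` below is a simplified version of the function above (no explicit `loop` parameter, which newer asyncio versions discourage anyway); the worker and event names are invented for the demo:

```python
import asyncio

async def run_with_interrupt_sketch(coro, *events):
    """Run coro; return its result, or None if any event fires first."""
    task = asyncio.ensure_future(coro)
    event_tasks = [asyncio.ensure_future(e.wait()) for e in events]
    done, pending = await asyncio.wait([task] + event_tasks,
                                       return_when=asyncio.FIRST_COMPLETED)
    for f in pending:
        f.cancel()
    for f in done:
        f.exception()  # retrieve, to avoid "exception never retrieved"
    return task.result() if task in done else None

async def main():
    stop = asyncio.Event()

    async def worker():
        await asyncio.sleep(0.01)
        return 42

    # No interrupt: the task runs to completion and its result comes back.
    assert await run_with_interrupt_sketch(worker(), stop) == 42

    # Interrupt: an already-set event cancels the task and yields None.
    stop.set()
    assert await run_with_interrupt_sketch(worker(), stop) is None

asyncio.run(main())
```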
def add(self, connection):
    if id(connection) in self.connections:
        raise ValueError('Connection already exists in pool')
    if len(self.connections) == self.max_size:
        LOGGER.warning('Race condition found when adding new connection')
        try:
            connection.close()
        except (psycopg2.Error, psycopg2.Warning) as error:
            LOGGER.error('Error closing the connection that cannot be used: %s', error)
        raise PoolFullError(self)
    with self._lock:
        self.connections[id(connection)] = Connection(connection)
    LOGGER.debug('Pool %s added connection %s', self.id, id(connection))
Add a new connection to the pool :param connection: The connection to add to the pool :type connection: psycopg2.extensions.connection :raises: PoolFullError
def sync(filename, connection=None): c = connection or connect() rev = c.ls(filename) if rev: rev[0].sync()
Syncs a file :param filename: File to check out :type filename: str :param connection: Connection object to use :type connection: :py:class:`Connection`
def apply_karhunen_loeve_scaling(self): cnames = copy.deepcopy(self.jco.col_names) self.__jco *= self.fehalf self.__jco.col_names = cnames self.__parcov = self.parcov.identity
apply Karhunen-Loève scaling to the jacobian matrix.

Note
----
This scaling is not necessary for analyses using Schur's
complement, but can be very important for error variance
analyses.  This operation effectively transfers prior knowledge
specified in the parcov to the jacobian and resets parcov to the
identity matrix.
def maybe_open(infile, mode='r'):
    if isinstance(infile, basestring):
        handle = open(infile, mode)
        do_close = True
    else:
        handle = infile
        do_close = False
    try:
        yield handle
    finally:
        # Close the handle even if the caller's block raises, but only
        # if we opened it ourselves.
        if do_close:
            handle.close()
Take a file name or a handle, and return a handle. Simplifies creating functions that automagically accept either a file name or an already opened file handle.
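The generator above is presumably meant to be wrapped with `contextlib.contextmanager`; a self-contained Python 3 sketch of the same idea, with a usage demo, looks like this (function name is an illustrative variant):

```python
from contextlib import contextmanager
import io
import os
import tempfile

@contextmanager
def maybe_open_cm(infile, mode='r'):
    """Accept a path or an open handle; only close what we opened."""
    if isinstance(infile, str):
        handle, do_close = open(infile, mode), True
    else:
        handle, do_close = infile, False
    try:
        yield handle
    finally:
        if do_close:
            handle.close()

# Works the same with a path...
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as f:
    f.write('hello')
with maybe_open_cm(path) as f:
    assert f.read() == 'hello'
os.remove(path)

# ...or with an already open handle, which is left open afterwards.
buf = io.StringIO('hi')
with maybe_open_cm(buf) as f:
    assert f.read() == 'hi'
assert not buf.closed
```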
def loop_template_list(loop_positions, instance, instance_type, default_template, registry): templates = [] local_loop_position = loop_positions[1] global_loop_position = loop_positions[0] instance_string = slugify(str(instance)) for key in ['%s-%s' % (instance_type, instance_string), instance_string, instance_type, 'default']: try: templates.append(registry[key][global_loop_position]) except KeyError: pass templates.append( append_position(default_template, global_loop_position, '-')) templates.append( append_position(default_template, local_loop_position, '_')) templates.append(default_template) return templates
Build a list of templates from a position within a loop and a registry of templates.
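The fallback idea (most specific candidate first, plain default last) can be sketched without the registry. The naming scheme below is an assumption chosen for the example, not the exact scheme the function above produces:

```python
def template_candidates(instance_type, instance_slug, position,
                        default='entry.html'):
    """Return template names to try, from most to least specific."""
    stem, ext = default.rsplit('.', 1)
    return [
        # instance-specific, then type-specific, then positional default
        '%s-%s_%d.%s' % (instance_type, instance_slug, position, ext),
        '%s_%d.%s' % (instance_type, position, ext),
        '%s-%d.%s' % (stem, position, ext),
        default,
    ]

candidates = template_candidates('entry', 'hello-world', 2)
assert candidates[0] == 'entry-hello-world_2.html'
assert candidates[-1] == 'entry.html'  # always falls back to the default
```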
def remove_unsupported_kwargs(module_or_fn, all_kwargs_dict): if all_kwargs_dict is None: all_kwargs_dict = {} if not isinstance(all_kwargs_dict, dict): raise ValueError("all_kwargs_dict must be a dict with string keys.") return { kwarg: value for kwarg, value in all_kwargs_dict.items() if supports_kwargs(module_or_fn, kwarg) != NOT_SUPPORTED }
Removes any kwargs not supported by `module_or_fn` from `all_kwargs_dict`.

A new dict is returned with shallow copies of keys & values from
`all_kwargs_dict`, as long as the key is accepted by module_or_fn. The
returned dict can then be used to connect `module_or_fn` (along with some
other inputs, i.e. non-keyword arguments, in general).

`snt.supports_kwargs` is used to tell whether a given kwarg is supported. Note
that this method may give false negatives, which would lead to extraneous
removals in the result of this function. Please read the docstring for
`snt.supports_kwargs` for details, and manually inspect the results from this
function if in doubt.

Args:
  module_or_fn: some callable which can be interrogated by
    `snt.supports_kwargs`. Generally a Sonnet module or a method (wrapped in
    `@reuse_variables`) of a Sonnet module.
  all_kwargs_dict: a dict containing strings as keys, or None.

Raises:
  ValueError: if `all_kwargs_dict` is not a dict.

Returns:
  A dict containing some subset of the keys and values in `all_kwargs_dict`.
  This subset may be empty. If `all_kwargs_dict` is None, this will be an
  empty dict.
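A library-free approximation of this filtering can be built on `inspect.signature`: keep a kwarg if it appears in the callable's parameters, or keep everything if the callable takes `**kwargs`. This is a stand-in for `snt.supports_kwargs`, not its actual logic, and the example function is invented:

```python
import inspect

def filter_supported_kwargs(fn, kwargs):
    """Drop any kwargs that fn's signature does not accept."""
    params = inspect.signature(fn).parameters
    # A **kwargs parameter means everything is nominally accepted.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}

def connect(inputs, is_training=False):
    return inputs, is_training

kwargs = {'is_training': True, 'dropout_rate': 0.5}
assert filter_supported_kwargs(connect, kwargs) == {'is_training': True}
```

Note the docstring's caveat applies here too: signature inspection can misjudge callables that forward kwargs internally, so inspect the result when in doubt.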
def ensure_str(text): u if isinstance(text, unicode): try: return text.encode(pyreadline_codepage, u"replace") except (LookupError, TypeError): return text.encode(u"ascii", u"replace") return text
u"""Convert unicode to str using pyreadline_codepage
def _is_shadowed(self, reaction_id, database): for other_database in self._databases: if other_database == database: break if other_database.has_reaction(reaction_id): return True return False
Whether reaction in database is shadowed by another database
def process_multinest_run(file_root, base_dir, **kwargs): dead = np.loadtxt(os.path.join(base_dir, file_root) + '-dead-birth.txt') live = np.loadtxt(os.path.join(base_dir, file_root) + '-phys_live-birth.txt') dead = dead[:, :-2] live = live[:, :-1] assert dead[:, -2].max() < live[:, -2].min(), ( 'final live points should have greater logls than any dead point!', dead, live) ns_run = process_samples_array(np.vstack((dead, live)), **kwargs) assert np.all(ns_run['thread_min_max'][:, 0] == -np.inf), ( 'As MultiNest does not currently perform dynamic nested sampling, all ' 'threads should start by sampling the whole prior.') ns_run['output'] = {} ns_run['output']['file_root'] = file_root ns_run['output']['base_dir'] = base_dir return ns_run
Loads data from a MultiNest run into the nestcheck dictionary format for
analysis.

N.B. producing the required output file containing information about the
iso-likelihood contours within which points were sampled (where they were
"born") requires MultiNest version 3.11 or later.

Parameters
----------
file_root: str
    Root name for output files. When running MultiNest, this is determined
    by the nest_root parameter.
base_dir: str
    Directory containing output files. When running MultiNest, this is
    determined by the nest_root parameter.
kwargs: dict, optional
    Passed to ns_run_utils.check_ns_run (via process_samples_array)

Returns
-------
ns_run: dict
    Nested sampling run dict (see the module docstring for more details).
def prepare(self, method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None): self.prepare_method(method) self.prepare_url(url, params) self.prepare_headers(headers) self.prepare_cookies(cookies) self.prepare_body(data, files, json) self.prepare_auth(auth, url) self.prepare_hooks(hooks)
Prepares the entire request with the given parameters.
def get_modname_from_modpath(module_fpath): modsubdir_list = get_module_subdir_list(module_fpath) modname = '.'.join(modsubdir_list) modname = modname.replace('.__init__', '').strip() modname = modname.replace('.__main__', '').strip() return modname
returns importable name from file path get_modname_from_modpath Args: module_fpath (str): module filepath Returns: str: modname Example: >>> # ENABLE_DOCTEST >>> from utool.util_path import * # NOQA >>> import utool as ut >>> module_fpath = ut.util_path.__file__ >>> modname = ut.get_modname_from_modpath(module_fpath) >>> result = modname >>> print(result) utool.util_path
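The package-walk itself is straightforward: climb parent directories while each contains an `__init__.py`, then join the collected parts. A self-contained sketch without the utool helpers (names and the throwaway package layout are assumptions for the demo):

```python
import os
import tempfile

def modname_from_modpath(module_fpath):
    """Derive a dotted module name by walking up package directories."""
    path = os.path.abspath(module_fpath)
    parts = [os.path.splitext(os.path.basename(path))[0]]
    directory = os.path.dirname(path)
    # Every ancestor with an __init__.py is part of the package path.
    while os.path.exists(os.path.join(directory, '__init__.py')):
        parts.insert(0, os.path.basename(directory))
        directory = os.path.dirname(directory)
    # A package's __init__/__main__ is addressed by the package name itself.
    if parts[-1] in ('__init__', '__main__'):
        parts.pop()
    return '.'.join(parts)

# Demo on a throwaway layout: mypkg/__init__.py + mypkg/util.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'mypkg')
os.mkdir(pkg)
for name in ('__init__.py', 'util.py'):
    open(os.path.join(pkg, name), 'w').close()
assert modname_from_modpath(os.path.join(pkg, 'util.py')) == 'mypkg.util'
assert modname_from_modpath(os.path.join(pkg, '__init__.py')) == 'mypkg'
```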
def set_power(self, state): packet = bytearray(16) packet[0] = 2 if self.check_nightlight(): packet[4] = 3 if state else 2 else: packet[4] = 1 if state else 0 self.send_packet(0x6a, packet)
Sets the power state of the smart plug.
def _check_for_dyn_timed_auto_backup(self):
    current_time = time.time()
    self.timer_request_lock.acquire()
    try:
        if self._timer_request_time is None:
            return
        if self.timed_temp_storage_interval < current_time - self._timer_request_time:
            self.check_for_auto_backup(force=True)
        else:
            duration_to_wait = self.timed_temp_storage_interval - (current_time - self._timer_request_time)
            hard_limit_duration_to_wait = self.force_temp_storage_interval - (current_time - self.last_backup_time)
            hard_limit_active = hard_limit_duration_to_wait < duration_to_wait
            if hard_limit_active:
                self.set_timed_thread(hard_limit_duration_to_wait, self.check_for_auto_backup, True)
            else:
                self.set_timed_thread(duration_to_wait, self._check_for_dyn_timed_auto_backup)
    finally:
        # Release on every exit path, including the early return above.
        self.timer_request_lock.release()
The method implements the timed storage feature. It re-initiates a new timed thread if the state machine has not already been stored to backup (which could be caused by the force_temp_storage_interval), or forces the storing of the state machine if there is no new request for a timed backup. New timed backup requests are intrinsically represented by self._timer_request_time and initiated by the check_for_auto_backup method. The feature uses only one thread per ModificationHistoryModel and a lock to be thread safe.
def create_thread(cls, session, conversation, thread, imported=False): return super(Conversations, cls).create( session, thread, endpoint_override='/conversations/%s.json' % conversation.id, imported=imported, )
Create a conversation thread. Please note that threads cannot be added to conversations with 100 threads (or more), if attempted the API will respond with HTTP 412. Args: conversation (helpscout.models.Conversation): The conversation that the thread is being added to. session (requests.sessions.Session): Authenticated session. thread (helpscout.models.Thread): The thread to be created. imported (bool, optional): The ``imported`` request parameter enables conversations to be created for historical purposes (i.e. if moving from a different platform, you can import your history). When ``imported`` is set to ``True``, no outgoing emails or notifications will be generated. Returns: helpscout.models.Conversation: Conversation including newly created thread.