def stop(self, *args, **kwargs):
    warnings.warn(
        "The 'stop()' method is deprecated, please use 'remove()' instead",
        DeprecationWarning,
    )
    return self.remove(*args, **kwargs)
Deprecated function to |remove| an existing handler.

Warnings
--------
.. deprecated:: 0.2.2
    ``stop()`` will be removed in Loguru 1.0.0, it is replaced by
    ``remove()`` which is a less confusing name.
def update_index(entries):
    context = GLOBAL_TEMPLATE_CONTEXT.copy()
    context['entries'] = entries
    context['last_build'] = datetime.datetime.now().strftime(
        "%Y-%m-%dT%H:%M:%SZ")
    for template, out_name in (('entry_index.html', 'index.html'),
                               ('atom.xml', 'atom.xml')):
        _render(context, template, os.path.join(CONFIG['output_to'], out_name))
Find the last 10 entries in the database and create the main page.
Each entry has a doc_id, so we only get the last 10 doc_ids.
This method also updates the ATOM feed.
def filter_pages(pages, pagenum, pagename):
    if pagenum:
        try:
            pages = [list(pages)[pagenum - 1]]
        except IndexError:
            raise IndexError('Invalid page number: %d' % pagenum)
    if pagename:
        pages = [page for page in pages if page.name == pagename]
        if pages == []:
            raise IndexError('Page not found: pagename=%s' % pagename)
    return pages
Choose pages by pagenum and pagename.
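A minimal standalone sketch of how this selection logic behaves, using a hypothetical Page stand-in (the real page objects only need a ``name`` attribute here):

```python
from types import SimpleNamespace

# Self-contained reimplementation of the selection logic for illustration.
def filter_pages(pages, pagenum, pagename):
    if pagenum:
        try:
            pages = [list(pages)[pagenum - 1]]
        except IndexError:
            raise IndexError('Invalid page number: %d' % pagenum)
    if pagename:
        pages = [page for page in pages if page.name == pagename]
        if not pages:
            raise IndexError('Page not found: pagename=%s' % pagename)
    return pages

pages = [SimpleNamespace(name='home'), SimpleNamespace(name='about')]
print(filter_pages(pages, None, 'about')[0].name)  # about
print(filter_pages(pages, 1, None)[0].name)  # home
```

Note that an out-of-range pagenum and an unknown pagename both surface as IndexError, so callers can handle "no such page" uniformly.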
def convert_dedent(self):
    if self.indent_amounts:
        self.indent_amounts.pop()
    tokenum = INDENT
    last_indent = 0
    if self.indent_amounts:
        last_indent = self.indent_amounts[-1]
    while self.result[-1][0] == INDENT:
        self.result.pop()
    value = self.indent_type * last_indent
    return tokenum, value
Convert a dedent into an indent
def pday(dayfmt):
    year, month, day = map(int, dayfmt.split('-'))
    return '{day} the {number}'.format(
        day=calendar.day_name[calendar.weekday(year, month, day)],
        number=inflect.engine().ordinal(day),
    )
Pretty-print the day.

>>> print(pday('2012-08-24'))
Friday the 24th
def config_dict(name: str) -> Dict[str, Any]:
    try:
        content = resource_string(PACKAGE, DATADIR.format(name)).decode()
    except DistributionNotFound as error:
        LOGGER.warning("Cannot load %s from packages: %s", name, error)
        content = DATA_FALLBACK.joinpath(name).read_text()
    return cast(Dict[str, Any], json.loads(content))
Load a JSON configuration dict from Guesslang config directory.

:param name: the JSON file name.
:return: configuration
def get_flair_list(self, subreddit, *args, **kwargs):
    url = self.config['flairlist'].format(
        subreddit=six.text_type(subreddit))
    return self.get_content(url, *args, root_field=None,
                            thing_field='users', after_field='next',
                            **kwargs)
Return a get_content generator of flair mappings.

:param subreddit: Either a Subreddit object or the name of the
    subreddit to return the flair list for.

The additional parameters are passed directly into :meth:`.get_content`.
Note: the `url`, `root_field`, `thing_field`, and `after_field`
parameters cannot be altered.
def refineData(root, options):
    worker = root.worker
    job = root.jobs
    jobTypesTree = root.job_types
    jobTypes = []
    for childName in jobTypesTree:
        jobTypes.append(jobTypesTree[childName])
    return root, worker, job, jobTypes
walk down from the root and gather up the important bits.
def _iter_vars(mod):
    vars = sorted(var for var in dir(mod) if _is_public(var))
    for var in vars:
        yield getattr(mod, var)
Iterate through the variables defined in a module's public namespace.
def complete_invoice(self, invoice_id, complete_dict):
    return self._create_put_request(
        resource=INVOICES,
        billomat_id=invoice_id,
        command=COMPLETE,
        send_data=complete_dict,
    )
Completes an invoice

:param complete_dict: the complete dict with the template id
:param invoice_id: the invoice id
:return: Response
def attach_keypress(fig, scaling=1.1):
    def press(event):
        if event.key == 'q':
            plt.close(fig)
        elif event.key == 'e':
            fig.set_size_inches(scaling * fig.get_size_inches(),
                                forward=True)
        elif event.key == 'c':
            fig.set_size_inches(fig.get_size_inches() / scaling,
                                forward=True)

    if not hasattr(fig, '_sporco_keypress_cid'):
        cid = fig.canvas.mpl_connect('key_press_event', press)
        fig._sporco_keypress_cid = cid
    return press
Attach a key press event handler that configures keys for closing a
figure and changing the figure size. Keys 'e' and 'c' respectively
expand and contract the figure, and key 'q' closes it.

**Note:** Resizing may not function correctly with all matplotlib
backends (a `bug <https://github.com/matplotlib/matplotlib/issues/10083>`__
has been reported).

Parameters
----------
fig : :class:`matplotlib.figure.Figure` object
    Figure to which event handling is to be attached
scaling : float, optional (default 1.1)
    Scaling factor for figure size changes

Returns
-------
press : function
    Key press event handler function
def cat(*wizards):
    data = {}
    for wizard in wizards:
        try:
            response = None
            while True:
                response = yield wizard.send(response)
        except Success as s:
            data.update(s.data)
    raise Success(data)
A higher-order wizard which is the concatenation of a number of other wizards. The resulting data is the union of all wizard outputs.
def _delete_device(device):
    log.trace('Deleting device with type %s', type(device))
    device_spec = vim.vm.device.VirtualDeviceSpec()
    device_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
    device_spec.device = device
    return device_spec
Returns a vim.vm.device.VirtualDeviceSpec specifying to remove a
virtual machine device.

device
    Device data type object
def first(self, skipna=None, keep_attrs=None):
    return self._first_or_last(duck_array_ops.first, skipna, keep_attrs)
Return the first element of each group along the group dimension
def get_lowest_probable_prepared_certificate_in_view(
        self, view_no) -> Optional[int]:
    seq_no_pp = SortedList()
    seq_no_p = set()

    for (v, p) in self.prePreparesPendingPrevPP:
        if v == view_no:
            seq_no_pp.add(p)
        if v > view_no:
            break

    for (v, p), pr in self.preparesWaitingForPrePrepare.items():
        if v == view_no and len(pr) >= self.quorums.prepare.value:
            seq_no_p.add(p)

    for n in seq_no_pp:
        if n in seq_no_p:
            return n
    return None
Return the lowest pp_seq_no in the view for which a certificate can
probably be prepared, choosing from unprocessed PRE-PREPAREs and
PREPAREs.
def urlencode(resource):
    if isinstance(resource, str):
        return _urlencode(resource.encode('utf-8'))
    return _urlencode(resource)
This implementation of urlencode supports all unicode characters.

:param resource: Resource value to be url encoded.
def _from_dict(cls, _dict):
    args = {}
    if 'corpora' in _dict:
        args['corpora'] = [
            Corpus._from_dict(x) for x in (_dict.get('corpora'))
        ]
    else:
        raise ValueError(
            'Required property \'corpora\' not present in Corpora JSON')
    return cls(**args)
Initialize a Corpora object from a json dictionary.
def real(self, newreal):
    try:
        iter(newreal)
    except TypeError:
        # `newreal` is a scalar: assign it to every part.
        for part in self.parts:
            part.real = newreal
        return

    if self.space.is_power_space:
        try:
            for part in self.parts:
                part.real = newreal
        except (ValueError, TypeError):
            for part, new_re in zip(self.parts, newreal):
                part.real = new_re
    elif len(newreal) == len(self):
        for part, new_re in zip(self.parts, newreal):
            part.real = new_re
    else:
        raise ValueError(
            'dimensions of the new real part does not match the space, '
            'got element {} to set real part of {}'.format(newreal, self))
Setter for the real part.

This method is invoked by ``x.real = other``.

Parameters
----------
newreal : array-like or scalar
    Values to be assigned to the real part of this element.
def check_address(address):
    if not check_string(address, min_length=26, max_length=35,
                        pattern=OP_ADDRESS_PATTERN):
        return False
    try:
        keylib.b58check_decode(address)
        return True
    except:
        return False
Verify that a string is a base58check address.

>>> check_address('16EMaNw3pkn3v6f2BgnSSs53zAKH4Q8YJg')
True
>>> check_address('16EMaNw3pkn3v6f2BgnSSs53zAKH4Q8YJh')
False
>>> check_address('mkkJsS22dnDJhD8duFkpGnHNr9uz3JEcWu')
True
>>> check_address('mkkJsS22dnDJhD8duFkpGnHNr9uz3JEcWv')
False
>>> check_address('MD8WooqTKmwromdMQfSNh8gPTPCSf8KaZj')
True
>>> check_address('SSXMcDiCZ7yFSQSUj7mWzmDcdwYhq97p2i')
True
>>> check_address('SSXMcDiCZ7yFSQSUj7mWzmDcdwYhq97p2j')
False
>>> check_address('16SuThrz')
False
>>> check_address('1TGKrgtrQjgoPjoa5BnUZ9Qu')
False
>>> check_address('1LPckRbeTfLjzrfTfnCtP7z2GxFTpZLafXi')
True
def removeFriend(self, friend_id=None):
    payload = {"friend_id": friend_id, "unref": "none",
               "confirm": "Confirm"}
    r = self._post(self.req_url.REMOVE_FRIEND, payload)
    query = parse_qs(urlparse(r.url).query)
    if "err" not in query:
        log.debug("Remove was successful!")
        return True
    else:
        log.warning("Error while removing friend")
        return False
Removes a specified friend from your friend list.

:param friend_id: The ID of the friend that you want to remove
:return: True when the removal was successful, False otherwise.
def write(nodes, output=sys.stdout, fmt='%.7E', gml=True, xmlns=None):
    root = Node('nrml', nodes=nodes)
    namespaces = {xmlns or NRML05: ''}
    if gml:
        namespaces[GML_NAMESPACE] = 'gml:'
    with floatformat(fmt):
        node_to_xml(root, output, namespaces)
    if hasattr(output, 'mode') and '+' in output.mode:
        output.seek(0)
        read(output)
Convert nodes into a NRML file. output must be a file object open in
write mode. If you want to perform a consistency check, open it in
read-write mode, then it will be read after creation and validated.

:param nodes: an iterable over Node objects
:param output: a file-like object in write or read-write mode
:param fmt: format used for writing the floats (default '%.7E')
:param gml: add the http://www.opengis.net/gml namespace
:param xmlns: NRML namespace like http://openquake.org/xmlns/nrml/0.4
def paragraph_spans(self):
    if not self.is_tagged(PARAGRAPHS):
        self.tokenize_paragraphs()
    return self.spans(PARAGRAPHS)
The list of spans representing ``paragraphs`` layer elements.
def layer_postprocess(layer_input, layer_output, hparams):
    return layer_prepostprocess(
        layer_input,
        layer_output,
        sequence=hparams.layer_postprocess_sequence,
        dropout_rate=hparams.layer_prepostprocess_dropout,
        norm_type=hparams.norm_type,
        depth=None,
        epsilon=hparams.norm_epsilon,
        dropout_broadcast_dims=comma_separated_string_to_integer_list(
            getattr(hparams, "layer_prepostprocess_dropout_broadcast_dims",
                    "")),
        default_name="layer_postprocess")
Apply layer postprocessing. See layer_prepostprocess() for details.

A hyperparameters object is passed for convenience. The hyperparameters
that may be used are:

  layer_postprocess_sequence
  layer_prepostprocess_dropout
  norm_type
  hidden_size
  norm_epsilon

Args:
  layer_input: a Tensor
  layer_output: a Tensor
  hparams: a hyperparameters object.

Returns:
  a Tensor
def _inherit_from(context, uri, calling_uri):
    if uri is None:
        return None
    template = _lookup_template(context, uri, calling_uri)
    self_ns = context['self']
    ih = self_ns
    while ih.inherits is not None:
        ih = ih.inherits
    lclcontext = context._locals({'next': ih})
    ih.inherits = TemplateNamespace("self:%s" % template.uri,
                                    lclcontext,
                                    template=template,
                                    populate_self=False)
    context._data['parent'] = lclcontext._data['local'] = ih.inherits
    callable_ = getattr(template.module, '_mako_inherit', None)
    if callable_ is not None:
        ret = callable_(template, lclcontext)
        if ret:
            return ret
    gen_ns = getattr(template.module, '_mako_generate_namespaces', None)
    if gen_ns is not None:
        gen_ns(context)
    return (template.callable_, lclcontext)
Called by the _inherit method in template modules to set up the
inheritance chain at the start of a template's execution.
def search_phenotype_association_sets(self, dataset_id):
    request = protocol.SearchPhenotypeAssociationSetsRequest()
    request.dataset_id = dataset_id
    request.page_size = pb.int(self._page_size)
    return self._run_search_request(
        request, "phenotypeassociationsets",
        protocol.SearchPhenotypeAssociationSetsResponse)
Returns an iterator over the PhenotypeAssociationSets on the server.
def _authenticate(self, auth, application, application_url=None,
                  for_user=None, scopes=None, created_with=None,
                  max_age=None, strength='strong',
                  fail_if_already_exists=False, hostname=platform.node()):
    url = '%s/authentications' % (self.domain)
    payload = {"scopes": scopes,
               "note": application,
               "note_url": application_url,
               'hostname': hostname,
               'user': for_user,
               'max-age': max_age,
               'created_with': None,
               'strength': strength,
               'fail-if-exists': fail_if_already_exists}
    data, headers = jencode(payload)
    res = self.session.post(url, auth=auth, data=data, headers=headers)
    self._check_response(res)
    res = res.json()
    token = res['token']
    self.session.headers.update({'Authorization': 'token %s' % (token)})
    return token
Use basic authentication to create an authentication token using the
interface below. With this technique, a username and password need not
be stored permanently, and the user can revoke access at any time.

:param username: The user's name
:param password: The user's password
:param application: The application that is requesting access
:param application_url: The application's home page
:param scopes: Scopes let you specify exactly what type of access you
    need. Scopes limit access for the tokens.
def download(self, destination, condition=None, media_count=None,
             timeframe=None, new_only=False,
             pgpbar_cls=None, dlpbar_cls=None):
    destination, close_destination = self._init_destfs(destination)
    queue = Queue()
    medias_queued = self._fill_media_queue(
        queue, destination, iter(self.medias()),
        media_count, new_only, condition)
    queue.put(None)
    worker = InstaDownloader(
        queue=queue,
        destination=destination,
        namegen=self.namegen,
        add_metadata=self.add_metadata,
        dump_json=self.dump_json,
        dump_only=self.dump_only,
        pbar=None,
        session=self.session)
    worker.run()
    return medias_queued
Download the referred post to the destination.

See `InstaLooter.download` for argument reference.

Note:
    This function, as opposed to other *looter* implementations, will
    not spawn new threads, but simply use the main thread to download
    the files. Since a worker is in charge of downloading a *media* at
    a time (and not a *file*), there would be no point in spawning more.
def copy(self, strip=None, deep='ref'):
    dd = self.to_dict(strip=strip, deep=deep)
    return self.__class__(fromdict=dd)
Return another instance of the object, with the same attributes.

If deep=True, all attributes themselves are also copied.
def set_tcp_flags(self, tcp_flags):
    if tcp_flags < 0 or tcp_flags > 255:
        raise ValueError("Invalid tcp_flags. Valid: 0-255.")
    prev_size = 0
    if self._json_dict.get('tcp_flags') is not None:
        prev_size = (len(str(self._json_dict['tcp_flags']))
                     + len('tcp_flags') + 3)
    self._json_dict['tcp_flags'] = tcp_flags
    new_size = len(str(self._json_dict['tcp_flags'])) + len('tcp_flags') + 3
    self._size += new_size - prev_size
    if prev_size == 0 and self._has_field:
        self._size += 2
    self._has_field = True
Set the complete tcp flag bitmask
def tokenize_ofp_instruction_arg(arg):
    arg_re = re.compile("[^,()]*")
    try:
        rest = arg
        result = []
        while len(rest):
            m = arg_re.match(rest)
            if m.end(0) == len(rest):
                result.append(rest)
                return result
            if rest[m.end(0)] == '(':
                this_block, rest = _tokenize_paren_block(
                    rest, m.end(0) + 1)
                result.append(this_block)
            elif rest[m.end(0)] == ',':
                result.append(m.group(0))
                rest = rest[m.end(0):]
            else:
                raise Exception
            if len(rest):
                assert rest[0] == ','
                rest = rest[1:]
        return result
    except Exception:
        raise ryu.exception.OFPInvalidActionString(action_str=arg)
Tokenize an argument portion of ovs-ofctl style action string.
def _checkResponseRegisterAddress(payload, registeraddress):
    _checkString(payload, minlength=2, description='payload')
    _checkRegisteraddress(registeraddress)

    BYTERANGE_FOR_STARTADDRESS = slice(0, 2)
    bytesForStartAddress = payload[BYTERANGE_FOR_STARTADDRESS]
    receivedStartAddress = _twoByteStringToNum(bytesForStartAddress)

    if receivedStartAddress != registeraddress:
        raise ValueError(
            'Wrong given write start address: {0}, but commanded is {1}. '
            'The data payload is: {2!r}'.format(
                receivedStartAddress, registeraddress, payload))
Check that the start address as given in the response is correct.

The first two bytes in the payload hold the address value.

Args:
    * payload (string): The payload
    * registeraddress (int): The register address (use decimal
      numbers, not hex).

Raises:
    TypeError, ValueError
def get_aside(self, aside_usage_id):
    aside_type = self.id_reader.get_aside_type_from_usage(aside_usage_id)
    xblock_usage = self.id_reader.get_usage_id_from_aside(aside_usage_id)
    xblock_def = self.id_reader.get_definition_id(xblock_usage)
    aside_def_id, aside_usage_id = self.id_generator.create_aside(
        xblock_def, xblock_usage, aside_type)
    keys = ScopeIds(self.user_id, aside_type, aside_def_id, aside_usage_id)
    block = self.create_aside(aside_type, keys)
    return block
Create an XBlockAside in this runtime. The `aside_usage_id` is used to find the Aside class and data.
def surge_handler(response, **kwargs):
    if response.status_code == codes.conflict:
        json = response.json()
        errors = json.get('errors', [])
        error = errors[0] if errors else json.get('error')
        if error and error.get('code') == 'surge':
            raise SurgeError(response)
    return response
Error Handler to surface 409 Surge Conflict errors.

Attached as a callback hook on the Request object.

Parameters
    response (requests.Response)
        The HTTP response from an API request.
    **kwargs
        Arbitrary keyword arguments.
def count_by(records: Sequence[Dict], field_name: str) -> defaultdict:
    counter = defaultdict(int)
    for record in records:
        name = record[field_name]
        counter[name] += 1
    return counter
Count how frequently each value occurs in a record sequence for a
given field name.
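A self-contained usage sketch of this counting helper (the records here are made up for illustration):

```python
from collections import defaultdict
from typing import Dict, Sequence

def count_by(records: Sequence[Dict], field_name: str) -> defaultdict:
    # Tally how often each value of `field_name` appears across records.
    counter: defaultdict = defaultdict(int)
    for record in records:
        counter[record[field_name]] += 1
    return counter

records = [
    {"lang": "python"},
    {"lang": "go"},
    {"lang": "python"},
]
print(dict(count_by(records, "lang")))  # {'python': 2, 'go': 1}
```

Because the result is a defaultdict(int), looking up a value that never occurred returns 0 rather than raising KeyError.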
def _render_str(self, string):
    if isinstance(string, StrLabel):
        string = string._render(string.expr)
    string = str(string)
    if len(string) == 0:
        return ''
    name, supers, subs = split_super_sub(string)
    return render_unicode_sub_super(
        name, subs, supers, sub_first=True, translate_symbols=True,
        unicode_sub_super=self._settings['unicode_sub_super'])
Return a unicodified version of the string.
def getMoviesFromJSON(jsonURL):
    response = urllib.request.urlopen(jsonURL)
    jsonData = response.read().decode('utf-8')
    objects = json.loads(jsonData)
    if jsonURL.find('quickfind') != -1:
        objects = objects['results']
    optionalInfo = ['actors', 'directors', 'rating', 'genre', 'studio',
                    'releasedate']
    movies = []
    for obj in objects:
        movie = Movie()
        movie.title = obj['title']
        movie.baseURL = obj['location']
        movie.posterURL = obj['poster']
        if movie.posterURL.find('http:') == -1:
            movie.posterURL = "http://apple.com%s" % movie.posterURL
        movie.trailers = obj['trailers']
        for i in optionalInfo:
            if i in obj:
                setattr(movie, i, obj[i])
        movies.append(movie)
    return movies
Main function for this library.

Returns a list of Movie classes from an apple.com/trailers JSON URL
such as: http://trailers.apple.com/trailers/home/feeds/just_added.json

The Movie classes use lazy loading mechanisms so that data not directly
available from the JSON are loaded on demand. Currently these lazy
loaded parts are:

* poster
* trailerLinks
* description

Be warned that accessing these fields can take a long time due to
network access. Therefore do the loading in a thread separate from the
UI thread or your users will notice.

There are optional fields that may or may not be present in every
Movie instance. These include:

* actors (list)
* directors (list)
* rating (string)
* genre (string)
* studio (string)
* releasedate (string)

Please take care when trying to access these fields as they may not
exist.
def NAND(*args, **kwargs):
    errors = []
    for arg in args:
        try:
            arg()
        except CertifierError as e:
            errors.append(e)
    if (len(errors) != len(args)) and len(args) > 1:
        exc = kwargs.get(
            'exc',
            CertifierValueError('Expecting no certified values'),
        )
        if exc is not None:
            raise exc
ALL args must raise an exception when called for the check to pass.
Raise the specified exception on failure, or the first exception.

:param iterable[Certifier] args:
    The certifiers to call
:param callable kwargs['exc']:
    Callable that accepts the unexpectedly raised exception as argument
    and returns an exception to raise.
def get_by_slug(tag_slug):
    label_recs = TabTag.select().where(TabTag.slug == tag_slug)
    return label_recs.get() if label_recs else False
Get label by slug.
def wnfild(small, window):
    assert isinstance(window, stypes.SpiceCell)
    assert window.dtype == 1
    small = ctypes.c_double(small)
    libspice.wnfild_c(small, ctypes.byref(window))
    return window
Fill small gaps between adjacent intervals of a double precision
window.

http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/wnfild_c.html

:param small: Limiting measure of small gaps.
:type small: float
:param window: Window to be filled
:type window: spiceypy.utils.support_types.SpiceCell
:return: Filled Window.
:rtype: spiceypy.utils.support_types.SpiceCell
def POST_AUTH(self):
    username = self.user_manager.session_username()
    user_info = self.database.users.find_one({"username": username})
    user_input = web.input()
    success = None

    if "register_courseid" in user_input and \
            user_input["register_courseid"] != "":
        try:
            course = self.course_factory.get_course(
                user_input["register_courseid"])
            if not course.is_registration_possible(user_info):
                success = False
            else:
                success = self.user_manager.course_register_user(
                    course, username,
                    user_input.get("register_password", None))
        except:
            success = False
    elif "new_courseid" in user_input and \
            self.user_manager.user_is_superadmin():
        try:
            courseid = user_input["new_courseid"]
            self.course_factory.create_course(
                courseid, {"name": courseid, "accessible": False})
            success = True
        except:
            success = False

    return self.show_page(success)
Parse course registration or course creation and display the course list page
def write(self, file_or_filename, prog=None, format='xdot'):
    if prog is None:
        file = super(DotWriter, self).write(file_or_filename)
    else:
        buf = StringIO.StringIO()
        super(DotWriter, self).write(buf)
        buf.seek(0)
        data = self.create(buf.getvalue(), prog, format)
        if isinstance(file_or_filename, basestring):
            file = None
            try:
                file = open(file_or_filename, "wb")
            except:
                logger.error("Error opening %s." % file_or_filename)
            finally:
                if file is not None:
                    file.write(data)
                    file.close()
        else:
            file = file_or_filename
            file.write(data)
    return file
Writes the case data in Graphviz DOT language. The format 'raw' is used to dump the Dot representation of the Case object, without further processing. The output can be processed by any of graphviz tools, defined in 'prog'.
def mark_nonreturning_calls_endpoints(self):
    for src, dst, data in self.transition_graph.edges(data=True):
        if 'type' in data and data['type'] == 'call':
            func_addr = dst.addr
            if func_addr in self._function_manager:
                function = self._function_manager[func_addr]
                if function.returning is False:
                    the_node = self.get_node(src.addr)
                    self._callout_sites.add(the_node)
                    self._add_endpoint(the_node, 'call')
Iterate through all call edges in the transition graph. For each call
to a non-returning function, mark the source basic block as an
endpoint.

This method should only be executed once all functions are recovered
and analyzed by CFG recovery, so we know whether each function returns
or not.

:return: None
def fast_sync_snapshot_decompress(snapshot_path, output_dir):
    if not tarfile.is_tarfile(snapshot_path):
        return {'error': 'Not a tarfile-compatible archive: {}'.format(
            snapshot_path)}

    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    with tarfile.open(snapshot_path, 'r:bz2') as f:
        f.extractall(path=output_dir)

    return {'status': True}
Given the path to a snapshot file, decompress it and write its
contents to the given output directory.

Return {'status': True} on success
Return {'error': ...} on failure
def _highest_perm_from_iter(self, perm_iter):
    perm_set = set(perm_iter)
    for perm_str in reversed(ORDERED_PERM_LIST):
        if perm_str in perm_set:
            return perm_str
Return the highest perm present in ``perm_iter`` or None if ``perm_iter`` is empty.
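A standalone sketch of the same lookup: scan an ordered permission list from strongest to weakest and return the first match. The ordering below is hypothetical; the real module defines ORDERED_PERM_LIST itself.

```python
# Hypothetical ordering, weakest first (the module's real list may differ).
ORDERED_PERM_LIST = ['read', 'write', 'changePermission']

def highest_perm(perm_iter):
    # Materialize the iterable once so membership tests are O(1).
    perm_set = set(perm_iter)
    for perm_str in reversed(ORDERED_PERM_LIST):
        if perm_str in perm_set:
            return perm_str
    # Implicit None when perm_iter was empty or held no known perms.

print(highest_perm(['read', 'write']))  # write
print(highest_perm([]))  # None
```

Converting to a set first matters because perm_iter may be a one-shot generator; iterating it once up front avoids exhausting it mid-scan.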
def _GetChunkForReading(self, chunk):
    try:
        return self.chunk_cache.Get(chunk)
    except KeyError:
        pass
    missing_chunks = []
    for chunk_number in range(chunk, chunk + 10):
        if chunk_number not in self.chunk_cache:
            missing_chunks.append(chunk_number)
    self._ReadChunks(missing_chunks)
    try:
        return self.chunk_cache.Get(chunk)
    except KeyError:
        raise aff4.ChunkNotFoundError("Cannot open chunk %s" % chunk)
Returns the relevant chunk from the datastore and reads ahead.
def eigen_table(self):
    idx = ["Eigenvalue", "Variability (%)", "Cumulative (%)"]
    table = pd.DataFrame(
        np.array(
            [self.eigenvalues, self.inertia, self.cumulative_inertia]
        ),
        columns=["F%s" % i for i in range(1, self.keep + 1)],
        index=idx,
    )
    return table
Eigenvalues, expl. variance, and cumulative expl. variance.
def render_table(self, headers, rows, style=None):
    table = self.table(headers, rows, style)
    table.render(self._io)
Format input to textual table.
def data_to_stream(self, data_element, stream):
    generator = self._make_representation_generator(
        stream, self.resource_class, self._mapping)
    generator.run(data_element)
Writes the given data element to the given stream.
def score(self, testing_features, testing_target):
    if self.fitted_pipeline_ is None:
        raise RuntimeError('A pipeline has not yet been optimized. '
                           'Please call fit() first.')
    testing_features, testing_target = self._check_dataset(
        testing_features, testing_target, sample_weight=None)
    score = SCORERS[self.scoring_function](
        self.fitted_pipeline_,
        testing_features.astype(np.float64),
        testing_target.astype(np.float64)
    )
    return score
Return the score on the given testing data using the user-specified
scoring function.

Parameters
----------
testing_features: array-like {n_samples, n_features}
    Feature matrix of the testing set
testing_target: array-like {n_samples}
    List of class labels for prediction in the testing set

Returns
-------
accuracy_score: float
    The estimated test set accuracy
def add_object(self, object):
    if object.id is None:
        object.get_id()
    self.db.engine.save(object)
Add object to db session. Only for session-centric object-database mappers.
def ndlayout_(self, dataset, kdims, cols=3):
    try:
        return hv.NdLayout(dataset, kdims=kdims).cols(cols)
    except Exception as e:
        self.err(e, self.layout_, "Can not create layout")
Create a Holoviews NdLayout from a dictionary of chart objects.
def get_api_key(self, api_key_id):
    api = self._get_api(iam.DeveloperApi)
    return ApiKey(api.get_api_key(api_key_id))
Get API key details for a key registered in the organisation.

:param str api_key_id: The ID of the API key (Required)
:returns: API key object
:rtype: ApiKey
def discover_OP_information(OP_uri):
    _, content = httplib2.Http().request(
        '%s/.well-known/openid-configuration' % OP_uri)
    return _json_loads(content)
Discovers information about the provided OpenID Provider.

:param OP_uri: The base URI of the Provider information is requested
    for.
:type OP_uri: str
:returns: The contents of the Provider metadata document.
:rtype: dict

.. versionadded:: 1.0
def client_details(self, *args):
    self.log(_('Client details:', lang='de'))
    client = self._clients[args[0]]
    self.log('UUID:', client.uuid,
             'IP:', client.ip,
             'Name:', client.name,
             'User:', self._users[client.useruuid],
             pretty=True)
Display known details about a given client
def is_running(self):
    state = yield from self._get_container_state()
    if state == "running":
        return True
    if self.status == "started":
        yield from self.stop()
    return False
Checks if the container is running.

:returns: True or False
:rtype: bool
def url(self, name):
    scheme = 'http'
    path = self._prepend_name_prefix(name)
    query = ''
    fragment = ''
    url_tuple = (scheme, self.netloc, path, query, fragment)
    return urllib.parse.urlunsplit(url_tuple)
Return URL of resource
def get_subprocess_output(cls, command, ignore_stderr=True, **kwargs):
    if ignore_stderr is False:
        kwargs.setdefault('stderr', subprocess.STDOUT)
    try:
        return subprocess.check_output(
            command, **kwargs).decode('utf-8').strip()
    except (OSError, subprocess.CalledProcessError) as e:
        subprocess_output = getattr(e, 'output', '').strip()
        raise cls.ExecutionError(str(e), subprocess_output)
Get the output of an executed command.

:param command: An iterable representing the command to execute
    (e.g. ['ls', '-al']).
:param ignore_stderr: Whether or not to ignore stderr output vs
    interleave it with stdout.
:raises: `ProcessManager.ExecutionError` on `OSError` or
    `CalledProcessError`.
:returns: The output of the command.
def _rolling_window(a, window, axis=-1):
    axis = _validate_axis(a, axis)
    a = np.swapaxes(a, axis, -1)

    if window < 1:
        raise ValueError(
            "`window` must be at least 1. Given : {}".format(window))
    if window > a.shape[-1]:
        raise ValueError("`window` is too long. Given : {}".format(window))

    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    rolling = np.lib.stride_tricks.as_strided(a, shape=shape,
                                              strides=strides,
                                              writeable=False)
    return np.swapaxes(rolling, -2, axis)
Make an ndarray with a rolling window along axis.

Parameters
----------
a : array_like
    Array to add rolling window to
axis : int
    axis position along which rolling window will be applied.
window : int
    Size of rolling window

Returns
-------
Array that is a view of the original array with an added dimension of
size w.

Examples
--------
>>> x = np.arange(10).reshape((2, 5))
>>> np.rolling_window(x, 3, axis=-1)
array([[[0, 1, 2], [1, 2, 3], [2, 3, 4]],
       [[5, 6, 7], [6, 7, 8], [7, 8, 9]]])

Calculate rolling mean of last dimension:

>>> np.mean(np.rolling_window(x, 3, axis=-1), -1)
array([[ 1.,  2.,  3.],
       [ 6.,  7.,  8.]])

This function is taken from https://github.com/numpy/numpy/pull/31 but
slightly modified to accept axis option.
def dump_requestdriver_cookies_into_webdriver(requestdriver,
                                              webdriverwrapper,
                                              handle_sub_domain=True):
    driver_hostname = urlparse(webdriverwrapper.current_url()).netloc
    for cookie in requestdriver.session.cookies:
        cookiedomain = cookie.domain
        if handle_sub_domain:
            if is_subdomain(cookiedomain, driver_hostname):
                cookiedomain = driver_hostname
        try:
            webdriverwrapper.add_cookie({
                'name': cookie.name,
                'value': cookie.value,
                'domain': cookiedomain,
                'path': cookie.path
            })
        except WebDriverException as e:
            raise WebDriverException(
                msg='Cannot set cookie "{name}" with domain "{domain}" '
                    'on url "{url}" {override}: {message}'.format(
                        name=cookie.name,
                        domain=cookiedomain,
                        url=webdriverwrapper.current_url(),
                        override=('(Note that subdomain override is set!)'
                                  if handle_sub_domain else ''),
                        message=e.message),
                screen=e.screen,
                stacktrace=e.stacktrace)
Adds all cookies in the RequestDriver session to Webdriver

@type requestdriver: RequestDriver
@param requestdriver: RequestDriver with cookies
@type webdriverwrapper: WebDriverWrapper
@param webdriverwrapper: WebDriverWrapper to receive cookies
@param handle_sub_domain: If True, will check driver url and change
    cookies with subdomains of that domain to match the current driver
    domain in order to avoid cross-domain cookie errors
@rtype: None
@return: None
def call_at_most_every(seconds, count=1):
    def decorator(func):
        try:
            call_history = getattr(func, '_call_history')
        except AttributeError:
            call_history = collections.deque(maxlen=count)
            setattr(func, '_call_history', call_history)

        @functools.wraps(func)
        def _wrapper(*args, **kwargs):
            current_time = time.time()
            window_count = sum(
                ts > current_time - seconds for ts in call_history)
            if window_count >= count:
                time.sleep(
                    call_history[window_count - count]
                    - current_time + seconds)
            call_history.append(time.time())
            return func(*args, **kwargs)

        return _wrapper
    return decorator
Call the decorated function at most ``count`` times every ``seconds``
seconds.

The decorated function will sleep to ensure that at most ``count``
invocations occur within any ``seconds``-second window.
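A simplified standalone sketch of the same sliding-window idea (names are illustrative, not from the module above): keep the timestamps of the last count calls in a bounded deque and sleep until the oldest one falls out of the window.

```python
import collections
import functools
import time

def rate_limited(seconds, count=1):
    """Allow at most `count` calls per rolling `seconds` window (sketch)."""
    def decorator(func):
        # Timestamps of the most recent `count` calls.
        history = collections.deque(maxlen=count)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            if len(history) == count and now - history[0] < seconds:
                # The oldest of the last `count` calls is still inside the
                # window: sleep until it falls out.
                time.sleep(history[0] + seconds - now)
            history.append(time.monotonic())
            return func(*args, **kwargs)

        return wrapper
    return decorator
```

Using time.monotonic avoids misbehavior when the wall clock is adjusted; the original uses time.time, which is fine when clock jumps are not a concern.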
def due(self):
    if self._duration:
        return self.begin + self._duration
    elif self._due_time:
        return self._due_time
    else:
        return None
Get or set the end of the todo.

| Will return an :class:`Arrow` object.
| May be set to anything that :func:`Arrow.get` understands.
| If set to a non null value, removes any already existing duration.
| Setting to None will have unexpected behavior if begin is not None.
| Must not be set to an inferior value than self.begin.
def get_referenced_fields_and_fragment_names(
    context: ValidationContext,
    cached_fields_and_fragment_names: Dict,
    fragment: FragmentDefinitionNode,
) -> Tuple[NodeAndDefCollection, List[str]]:
    cached = cached_fields_and_fragment_names.get(fragment.selection_set)
    if cached:
        return cached
    fragment_type = type_from_ast(context.schema, fragment.type_condition)
    return get_fields_and_fragment_names(
        context, cached_fields_and_fragment_names, fragment_type,
        fragment.selection_set
    )
Get referenced fields and nested fragment names Given a reference to a fragment, return the represented collection of fields as well as a list of nested fragment names referenced via fragment spreads.
def get_channel(self, name): r = self.kraken_request('GET', 'channels/' + name) return models.Channel.wrap_get_channel(r)
Return the channel for the given name :param name: the channel name :type name: :class:`str` :returns: the model instance :rtype: :class:`models.Channel` :raises: None
def keepalive(nurse, *patients): if DISABLED: return if hashable(nurse): hashable_patients = [] for p in patients: if hashable(p): log.debug("Keeping {0} alive for lifetime of {1}".format(p, nurse)) hashable_patients.append(p) else: log.warning("Unable to keep unhashable object {0} " "alive for lifetime of {1}".format(p, nurse)) KEEPALIVE.setdefault(nurse, set()).update(hashable_patients) else: log.warning("Unable to keep objects alive for lifetime of " "unhashable object {0}".format(nurse))
Keep ``patients`` alive at least as long as ``nurse`` is around using a ``WeakKeyDictionary``.
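The keepalive pattern above hinges on a module-level ``WeakKeyDictionary``; a minimal sketch (logging and hashability checks omitted) shows a patient surviving exactly as long as its nurse:

```python
import gc
import weakref

# Entries vanish when the nurse (the key) is garbage-collected, which in
# turn drops the strong references to the patients held in the value set.
KEEPALIVE = weakref.WeakKeyDictionary()

def keepalive(nurse, *patients):
    KEEPALIVE.setdefault(nurse, set()).update(patients)

class Nurse:
    pass

class Patient:
    pass

nurse = Nurse()
patient = Patient()
patient_ref = weakref.ref(patient)
keepalive(nurse, patient)

del patient
gc.collect()
alive_with_nurse = patient_ref() is not None  # still held via the nurse

del nurse
gc.collect()
alive_without_nurse = patient_ref() is not None  # entry dropped with the nurse
```

This relies on CPython's prompt collection of unreferenced objects; other interpreters may delay the cleanup.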
def prt_num_sig(self, prt=sys.stdout, alpha=0.05): ctr = self.get_num_sig(alpha) prt.write("{N:6,} TOTAL: {TXT}\n".format(N=len(self.nts), TXT=" ".join([ "FDR({FDR:4})".format(FDR=ctr['FDR']), "Bonferroni({B:4})".format(B=ctr['Bonferroni']), "Benjamini({B:4})".format(B=ctr['Benjamini']), "PValue({P:4})".format(P=ctr['PValue']), os.path.basename(self.fin_davidchart)])))
Print the number of significant GO terms.
def timezone(client, location, timestamp=None, language=None): params = { "location": convert.latlng(location), "timestamp": convert.time(timestamp or datetime.utcnow()) } if language: params["language"] = language return client._request( "/maps/api/timezone/json", params)
Get time zone for a location on the earth, as well as that location's time offset from UTC. :param location: The latitude/longitude value representing the location to look up. :type location: string, dict, list, or tuple :param timestamp: Timestamp specifies the desired time as seconds since midnight, January 1, 1970 UTC. The Time Zone API uses the timestamp to determine whether or not Daylight Savings should be applied. Times before 1970 can be expressed as negative values. Optional. Defaults to ``datetime.utcnow()``. :type timestamp: int or datetime.datetime :param language: The language in which to return results. :type language: string :rtype: dict
def get_vip_settings(vip): iface = get_iface_for_address(vip) netmask = get_netmask_for_address(vip) fallback = False if iface is None: iface = config('vip_iface') fallback = True if netmask is None: netmask = config('vip_cidr') fallback = True return iface, netmask, fallback
Calculate which nic is on the correct network for the given vip. If nic or netmask discovery fails, fall back to using charm-supplied config. If the fallback is used, this is indicated via the fallback variable. @param vip: VIP to look up nic and cidr for. @returns (str, str, bool): eg (iface, netmask, fallback)
def set_figure_window_geometry(fig='gcf', position=None, size=None): if isinstance(fig, str): fig = _pylab.gcf() elif _fun.is_a_number(fig): fig = _pylab.figure(fig) if _pylab.get_backend().find('Qt') >= 0: w = fig.canvas.window() if size is not None: w.resize(size[0], size[1]) if position is not None: w.move(position[0], position[1]) elif _pylab.get_backend().find('WX') >= 0: w = fig.canvas.Parent if size is not None: w.SetSize(size) if position is not None: w.SetPosition(position)
This will currently only work for Qt4Agg and WXAgg backends. position = [x, y] size = [width, height] fig can be 'gcf', a number, or a figure object.
def get_category_or_404(path): path_bits = [p for p in path.split('/') if p] return get_object_or_404(Category, slug=path_bits[-1])
Retrieve a Category instance by a path.
def double_tap(self, on_element): self._actions.append(lambda: self._driver.execute( Command.DOUBLE_TAP, {'element': on_element.id})) return self
Double taps on a given element. :Args: - on_element: The element to tap.
def recent_all_projects(self, limit=30, offset=0): method = 'GET' url = ('/recent-builds?circle-token={token}&limit={limit}&' 'offset={offset}'.format(token=self.client.api_token, limit=limit, offset=offset)) json_data = self.client.request(method, url) return json_data
Return information about recent builds across all projects. Args: limit (int), Number of builds to return, max=100, defaults=30. offset (int): Builds returned from this point, default=0. Returns: A list of dictionaries.
def waiting(self, timeout=0): "Return True if data is ready for the client." if self.linebuffer: return True (winput, woutput, wexceptions) = select.select((self.sock,), (), (), timeout) return winput != []
Return True if data is ready for the client.
def _flush(self): for consumer in self.consumers: if not getattr(consumer, "closed", False): consumer.flush()
Flushes all registered consumer streams.
def render_template(template, **context): parts = template.split('/') renderer = _get_renderer(parts[:-1]) return renderer.render(renderer.load_template(parts[-1:][0]), context)
Renders a given template and context. :param template: The template name :param context: the variables that should be available in the context of the template.
def create_processors_from_settings(self): config = getattr(settings, DJANGO_PROCESSOR_SETTING_NAME, []) processors = self.instantiate_objects(config) return processors
Expects the Django setting "EVENT_TRACKING_PROCESSORS" to be defined and point to a list of backend engine configurations. Example:: EVENT_TRACKING_PROCESSORS = [ { 'ENGINE': 'some.arbitrary.Processor' }, { 'ENGINE': 'some.arbitrary.OtherProcessor', 'OPTIONS': { 'user': 'foo' } }, ]
def db_create(cls, impl, working_dir): global VIRTUALCHAIN_DB_SCRIPT log.debug("Setup chain state in {}".format(working_dir)) path = config.get_snapshots_filename(impl, working_dir) if os.path.exists( path ): raise Exception("Database {} already exists".format(path)) lines = [l + ";" for l in VIRTUALCHAIN_DB_SCRIPT.split(";")] con = sqlite3.connect(path, isolation_level=None, timeout=2**30) for line in lines: con.execute(line) con.row_factory = StateEngine.db_row_factory return con
Create a sqlite3 db at the given path. Create all the tables and indexes we need. Returns a db connection on success. Raises an exception on error.
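The create-tables-then-return-connection pattern can be sketched with an in-memory database and a hypothetical one-table schema standing in for VIRTUALCHAIN_DB_SCRIPT:

```python
import sqlite3

# Hypothetical schema; the real VIRTUALCHAIN_DB_SCRIPT is defined elsewhere.
DB_SCRIPT = """
CREATE TABLE snapshots (block_id INTEGER PRIMARY KEY, snapshot_hash TEXT);
CREATE INDEX idx_snapshot_hash ON snapshots (snapshot_hash);
"""

def db_create(path=':memory:'):
    con = sqlite3.connect(path, isolation_level=None)
    # executescript handles the whole multi-statement script in one call,
    # avoiding the manual split-on-';' loop used above
    con.executescript(DB_SCRIPT)
    return con

con = db_create()
con.execute("INSERT INTO snapshots VALUES (1, 'abc')")
rows = list(con.execute('SELECT * FROM snapshots'))
print(rows)  # [(1, 'abc')]
```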
def parse(self, vd, extent_loc): if self._initialized: raise pycdlibexception.PyCdlibInternalError('Boot Record already initialized') (descriptor_type, identifier, version, self.boot_system_identifier, self.boot_identifier, self.boot_system_use) = struct.unpack_from(self.FMT, vd, 0) if descriptor_type != VOLUME_DESCRIPTOR_TYPE_BOOT_RECORD: raise pycdlibexception.PyCdlibInvalidISO('Invalid boot record descriptor type') if identifier != b'CD001': raise pycdlibexception.PyCdlibInvalidISO('Invalid boot record identifier') if version != 1: raise pycdlibexception.PyCdlibInvalidISO('Invalid boot record version') self.orig_extent_loc = extent_loc self._initialized = True
A method to parse a Boot Record out of a string. Parameters: vd - The string to parse the Boot Record out of. extent_loc - The extent location this Boot Record is current at. Returns: Nothing.
def main(self): self.targets() self.bait(k=49) self.reversebait(maskmiddle='t', k=19) self.subsample_reads()
Run the required methods in the appropriate order
def command(state, host, hostname, command, ssh_user=None): connection_target = hostname if ssh_user: connection_target = '@'.join((ssh_user, hostname)) yield 'ssh {0} "{1}"'.format(connection_target, command)
Execute commands on other servers over SSH. + hostname: the hostname to connect to + command: the command to execute + ssh_user: connect with this user
def get_named_range(self, name): url = self.build_url(self._endpoints.get('get_named_range').format(name=name)) response = self.session.get(url) if not response: return None return self.named_range_constructor(parent=self, **{self._cloud_data_key: response.json()})
Retrieves a Named range by its name
def extract_bad_snapshot(e): msg = e.response['Error']['Message'] error = e.response['Error']['Code'] e_snap_id = None if error == 'InvalidSnapshot.NotFound': e_snap_id = msg[msg.find("'") + 1:msg.rfind("'")] log.warning("Snapshot not found %s" % e_snap_id) elif error == 'InvalidSnapshotID.Malformed': e_snap_id = msg[msg.find('"') + 1:msg.rfind('"')] log.warning("Snapshot id malformed %s" % e_snap_id) return e_snap_id
Handle various client-side errors when describing snapshots
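The quote-extraction logic can be sketched against a hand-built error response shaped like botocore's ClientError.response; the dict below is fabricated for illustration:

```python
def extract_bad_snapshot_id(error_response):
    """Return the offending snapshot id quoted in the error message."""
    msg = error_response['Error']['Message']
    code = error_response['Error']['Code']
    if code == 'InvalidSnapshot.NotFound':
        # the id is wrapped in single quotes in the message
        return msg[msg.find("'") + 1:msg.rfind("'")]
    if code == 'InvalidSnapshotID.Malformed':
        # the id is wrapped in double quotes in the message
        return msg[msg.find('"') + 1:msg.rfind('"')]
    return None

resp = {'Error': {'Code': 'InvalidSnapshot.NotFound',
                  'Message': "The snapshot 'snap-0123abcd' does not exist."}}
print(extract_bad_snapshot_id(resp))  # snap-0123abcd
```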
def dumpLines(self): for i, line in enumerate(self.lines): logger.debug("Line %d:", i) logger.debug(line.dumpFragments())
For debugging, dump all lines and their content
def list_candidate_adapter_ports(self, full_properties=False): sg_cpc = self.cpc adapter_mgr = sg_cpc.adapters port_list = [] port_uris = self.get_property('candidate-adapter-port-uris') if port_uris: for port_uri in port_uris: m = re.match(r'^(/api/adapters/[^/]*)/.*', port_uri) adapter_uri = m.group(1) adapter = adapter_mgr.resource_object(adapter_uri) port_mgr = adapter.ports port = port_mgr.resource_object(port_uri) port_list.append(port) if full_properties: port.pull_full_properties() return port_list
Return the current candidate storage adapter port list of this storage group. The result reflects the actual list of ports used by the CPC, including any changes that have been made during discovery. The source for this information is the 'candidate-adapter-port-uris' property of the storage group object. Parameters: full_properties (bool): Controls that the full set of resource properties for each returned candidate storage adapter port is being retrieved, vs. only the following short set: "element-uri", "element-id", "class", "parent". TODO: Verify short list of properties. Returns: List of :class:`~zhmcclient.Port` objects representing the current candidate storage adapter ports of this storage group. Raises: :exc:`~zhmcclient.HTTPError` :exc:`~zhmcclient.ParseError` :exc:`~zhmcclient.AuthError` :exc:`~zhmcclient.ConnectionError`
def verify( signature: Ed25519Signature, digest: bytes, pub_key: Ed25519PublicPoint ) -> None: _ed25519.checkvalid(signature, digest, pub_key)
Verify Ed25519 signature. Raise exception if the signature is invalid.
def customtype(self): result = None if self.is_custom: self.dependency() if self._kind_module is not None: if self.kind.lower() in self._kind_module.types: result = self._kind_module.types[self.kind.lower()] return result
If this variable is a user-derived type, return the CustomType instance that is its kind.
def split_path(path, ref=None): path = abspath(path, ref) return path.strip(os.path.sep).split(os.path.sep)
Split a path into its components. Parameters ---------- path : str absolute or relative path with respect to `ref` ref : str or None reference path if `path` is relative Returns ------- list : str components of the path
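A simplified sketch of the splitting behavior; the original's `abspath` helper is approximated here with os.path resolution against `ref`, and POSIX separators are assumed:

```python
import os

def split_path(path, ref=None):
    """Split a path into its components, resolving relative paths
    against `ref` (or the current directory)."""
    if not os.path.isabs(path):
        path = os.path.normpath(os.path.join(ref or os.getcwd(), path))
    return path.strip(os.sep).split(os.sep)

print(split_path('/usr/local/bin'))   # ['usr', 'local', 'bin']
print(split_path('b/c', ref='/a'))    # ['a', 'b', 'c']
```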
def _value_equals(value1, value2, all_close): if value1 is None: value1 = np.nan if value2 is None: value2 = np.nan are_floats = np.can_cast(type(value1), float) and np.can_cast(type(value2), float) if all_close and are_floats: return np.isclose(value1, value2, equal_nan=True) else: if are_floats: return value1 == value2 or (value1 != value1 and value2 != value2) else: return value1 == value2
Get whether 2 values are equal value1, value2 : ~typing.Any all_close : bool compare with np.isclose instead of ==
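The NaN-aware equality in the non-isclose branch above can be sketched without numpy: None is treated as NaN, and two NaNs compare equal:

```python
import math

def value_equals(value1, value2):
    """Equality where None acts as NaN and NaN == NaN holds."""
    if value1 is None:
        value1 = math.nan
    if value2 is None:
        value2 = math.nan
    try:
        both_nan = math.isnan(value1) and math.isnan(value2)
    except TypeError:
        both_nan = False  # non-numeric values are never NaN
    return value1 == value2 or both_nan

print(value_equals(None, math.nan))  # True
print(value_equals(1.0, 1.0))        # True
print(value_equals('a', 'b'))        # False
```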
def path(self): if not self.name: raise ValueError("Cannot determine path without a blob name.") return self.path_helper(self.bucket.path, self.name)
Getter property for the URL path to this Blob. :rtype: str :returns: The URL path to this Blob.
def get(cls, keyval, key='id', user_id=None): if keyval is None: return None if (key in cls.__table__.columns and cls.__table__.columns[key].primary_key): return cls.query.get(keyval) else: result = cls.query.filter( getattr(cls, key) == keyval) return result.first()
Fetches a single instance which has value `keyval` for the attribute `key`. Args: keyval: The value of the attribute. key (str, optional): The attribute to search by. By default, it is 'id'. Returns: A model instance if found. Else None. Examples: >>> User.get(35) user35@i.com >>> User.get('user35@i.com', key='email') user35@i.com
def prepare_query(self, symbol, start_date, end_date): query = \ 'select * from yahoo.finance.historicaldata where symbol = "%s" and startDate = "%s" and endDate = "%s"' \ % (symbol, start_date, end_date) return query
Method returns prepared request query for Yahoo YQL API.
def _tostring(value): if value is True: value = 'true' elif value is False: value = 'false' elif value is None: value = '' return unicode(value)
Convert value to an XML-compatible string
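A Python 3 sketch of the same conversion; the original returns Python 2's `unicode`, replaced here by `str`:

```python
def to_xml_string(value):
    """Convert a Python value to an XML-friendly string."""
    if value is True:
        return 'true'
    if value is False:
        return 'false'
    if value is None:
        return ''
    return str(value)

print(to_xml_string(True))   # true
print(to_xml_string(42))     # 42
```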
def __updatable(): parser = argparse.ArgumentParser() parser.add_argument('file', nargs='?', type=argparse.FileType(), default=None, help='Requirements file') args = parser.parse_args() if args.file: packages = parse_requirements_list(args.file) else: packages = get_parsed_environment_package_list() for package in packages: __list_package_updates(package['package'], package['version'])
Function used to output package update information in the console
def cmd(send, msg, args): parser = arguments.ArgParser(args['config']) parser.add_argument('--lang', '--from', default=None) parser.add_argument('--to', default='en') parser.add_argument('msg', nargs='+') try: cmdargs = parser.parse_args(msg) except arguments.ArgumentException as e: send(str(e)) return send(gen_translate(' '.join(cmdargs.msg), cmdargs.lang, cmdargs.to))
Translate something. Syntax: {command} [--from <language code>] [--to <language code>] <text> See https://cloud.google.com/translate/v2/translate-reference#supported_languages for a list of valid language codes
def clear_rubric(self): if (self.get_rubric_metadata().is_read_only() or self.get_rubric_metadata().is_required()): raise errors.NoAccess() self._my_map['rubricId'] = self._rubric_default
Clears the rubric. raise: NoAccess - ``Metadata.isRequired()`` or ``Metadata.isReadOnly()`` is ``true`` *compliance: mandatory -- This method must be implemented.*
def with_options( self, codec_options=None, read_preference=None, write_concern=None, read_concern=None): return Collection(self.__database, self.__name, False, codec_options or self.codec_options, read_preference or self.read_preference, write_concern or self.write_concern, read_concern or self.read_concern)
Get a clone of this collection changing the specified settings. >>> coll1.read_preference Primary() >>> from pymongo import ReadPreference >>> coll2 = coll1.with_options(read_preference=ReadPreference.SECONDARY) >>> coll1.read_preference Primary() >>> coll2.read_preference Secondary(tag_sets=None) :Parameters: - `codec_options` (optional): An instance of :class:`~bson.codec_options.CodecOptions`. If ``None`` (the default) the :attr:`codec_options` of this :class:`Collection` is used. - `read_preference` (optional): The read preference to use. If ``None`` (the default) the :attr:`read_preference` of this :class:`Collection` is used. See :mod:`~pymongo.read_preferences` for options. - `write_concern` (optional): An instance of :class:`~pymongo.write_concern.WriteConcern`. If ``None`` (the default) the :attr:`write_concern` of this :class:`Collection` is used. - `read_concern` (optional): An instance of :class:`~pymongo.read_concern.ReadConcern`. If ``None`` (the default) the :attr:`read_concern` of this :class:`Collection` is used.
def insert_function(self, fname, ftype): "Inserts a new function" index = self.insert_id(fname, SharedData.KINDS.FUNCTION, [SharedData.KINDS.GLOBAL_VAR, SharedData.KINDS.FUNCTION], ftype) self.table[index].set_attribute("Params",0) return index
Inserts a new function
def replace_uri(rdf, fromuri, touri): replace_subject(rdf, fromuri, touri) replace_predicate(rdf, fromuri, touri) replace_object(rdf, fromuri, touri)
Replace all occurrences of fromuri with touri in the given model. If touri is a list or tuple of URIRef, all values will be inserted. If touri=None, will delete all occurrences of fromuri instead.
def levelize_smooth_or_improve_candidates(to_levelize, max_levels): if isinstance(to_levelize, tuple) or isinstance(to_levelize, str): to_levelize = [to_levelize for i in range(max_levels)] elif isinstance(to_levelize, list): if len(to_levelize) < max_levels: mlz = max_levels - len(to_levelize) toext = [to_levelize[-1] for i in range(mlz)] to_levelize.extend(toext) elif to_levelize is None: to_levelize = [(None, {}) for i in range(max_levels)] return to_levelize
Turn parameter into a list per level. Helper function to preprocess the smooth and improve_candidates parameters passed to smoothed_aggregation_solver and rootnode_solver. Parameters ---------- to_levelize : {string, tuple, list} Parameter to preprocess, i.e., levelize and convert to a level-by-level list such that entry i specifies the parameter at level i max_levels : int Defines the maximum number of levels considered Returns ------- to_levelize : list The parameter list such that entry i specifies the parameter choice at level i. Notes ----- This routine is needed because the user will pass in a parameter option such as smooth='jacobi', or smooth=['jacobi', None], and this option must be "levelized", or converted to a list of length max_levels such that entry [i] in that list is the parameter choice for level i. The parameter choice in to_levelize can be a string, tuple or list. If it is a string or tuple, then that option is assumed to be the parameter setting at every level. If to_levelize is initially a list and its length is less than max_levels, the last entry in the list defines that parameter for all subsequent levels. Examples -------- >>> from pyamg.util.utils import levelize_smooth_or_improve_candidates >>> improve_candidates = ['gauss_seidel', None] >>> levelize_smooth_or_improve_candidates(improve_candidates, 4) ['gauss_seidel', None, None, None]
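The levelizing behavior described above, reproduced as a standalone sketch (shortened name; same semantics):

```python
def levelize(to_levelize, max_levels):
    """Expand a parameter into a per-level list of length max_levels."""
    if isinstance(to_levelize, (tuple, str)):
        # a single option applies to every level
        return [to_levelize] * max_levels
    if isinstance(to_levelize, list):
        if len(to_levelize) < max_levels:
            # repeat the last entry for all remaining levels
            to_levelize = to_levelize + [to_levelize[-1]] * (max_levels - len(to_levelize))
        return to_levelize
    if to_levelize is None:
        return [(None, {}) for _ in range(max_levels)]
    return to_levelize

print(levelize('jacobi', 3))                # ['jacobi', 'jacobi', 'jacobi']
print(levelize(['gauss_seidel', None], 4))  # ['gauss_seidel', None, None, None]
```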
def bind(self, instance): p = self.clone() p.instance = weakref.ref(instance) return p
Bind an instance to this Pangler. Returns a clone of this Pangler, with the only difference being that the new Pangler is bound to the provided instance. Both will have the same `id`, but new hooks will not be shared.
def _cancel_outstanding(self): for d in list(self._outstanding): d.addErrback(lambda _: None) d.cancel()
Cancel all of our outstanding requests