def get_objects(self, force=None, last_update=None, flush=False):
    return self._run_object_import(force=force, last_update=last_update,
                                   flush=flush, full_history=False)

Extract routine for SQL based cubes.

:param force: force querying for all objects (True) or only those passed in as a list
:param last_update: manual override for the 'changed since' date
def unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape((tensor.shape[mode], -1))

Returns the mode-`mode` unfolding of `tensor`.

Parameters
----------
tensor : ndarray
mode : int

Returns
-------
ndarray
    unfolded_tensor of shape ``(tensor.shape[mode], -1)``

Author
------
Jean Kossaifi <https://github.com/tensorly>
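A minimal usage sketch, assuming only numpy and the unfold function above: the mode-1 fibers of a 2x3x4 tensor become the rows of a 3x8 matrix.

    import numpy as np

    tensor = np.arange(24).reshape(2, 3, 4)
    unfolded = np.moveaxis(tensor, 1, 0).reshape((tensor.shape[1], -1))
    print(unfolded.shape)  # (3, 8) -- tensor.shape[1] rows, remaining axes flattened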
def write(self):
    index_file = self.path
    new_index_file = index_file + '.new'
    bak_index_file = index_file + '.bak'
    if not self._db:
        return
    with open(new_index_file, 'w') as f:
        json.dump(self._db, f, indent=4)
    if exists(index_file):
        copy(index_file, bak_index_file)
    rename(new_index_file, index_file)
Safely write the index data to the index file
def update_remote_archive(self, save_uri, timeout=-1):
    return self._client.update_with_zero_body(uri=save_uri, timeout=timeout)

Saves a backup of the appliance to a previously-configured remote location.

Args:
    save_uri (dict): The URI for saving the backup to a previously configured location.
    timeout: Timeout in seconds. Wait for task completion by default. The timeout
        does not abort the operation in OneView, it just stops waiting for its completion.

Returns:
    dict: Backup details.
def progress(self, msg, onerror=None, sep='...', end='DONE', abrt='FAIL',
             prog='.', excs=(Exception,), reraise=True):
    if not onerror:
        onerror = self.error()
    if type(onerror) is str:
        onerror = self.error(msg=onerror)
    self.pverb(msg, end=sep)
    prog = progress.Progress(self.pverb, end=end, abrt=abrt, prog=prog)
    try:
        yield prog
        prog.end()
    except self.ProgressOK:
        pass
    except self.ProgressAbrt as err:
        if reraise:
            raise err
    except KeyboardInterrupt:
        raise
    except excs as err:
        prog.abrt(noraise=True)
        if onerror:
            onerror(err)
        if self.debug:
            traceback.print_exc()
        if reraise:
            raise self.ProgressAbrt()

Context manager for handling interactive prog indication.

This context manager streamlines presenting banners and prog indicators. To start
the prog, pass the ``msg`` argument as a start message. For example::

    printer = Console(verbose=True)
    with printer.progress('Checking files') as prog:
        # Do some checks
        if errors:
            prog.abrt()
        prog.end()

The context manager returns a ``Progress`` instance, which provides methods like
``abrt()`` (abort), ``end()`` (end), and ``prog()`` (print prog indicator).

The prog methods ``abrt()`` and ``end()`` raise an exception that interrupts the
prog. These exceptions are subclasses of ``ProgressEnd``, namely ``ProgressAbrt``
and ``ProgressOK`` respectively. They are silenced and not handled in any way, as
they only serve the purpose of flow control.

Other exceptions are trapped and ``abrt()`` is called. The exceptions that should
be trapped can be customized using the ``excs`` argument, which should be a tuple
of exception classes.

If a handler function is passed using the ``onerror`` argument, then this function
takes the raised exception and handles it. By default, the ``error()`` factory is
called with no arguments to generate the default error handler. If a string is
passed, then the ``error()`` factory is called with that string.

Finally, when the prog is aborted, either naturally or when an exception is
raised, it is possible to reraise the ``ProgressAbrt`` exception. This is
controlled by the ``reraise`` flag. The default is to reraise.
def _decode_ctrl_packet(self, version, packet):
    for i in range(5):
        input_bit = packet[i]
        self._debug(PROP_LOGLEVEL_DEBUG,
                    "Byte " + str(i) + ": " +
                    str((input_bit >> 7) & 1) + str((input_bit >> 6) & 1) +
                    str((input_bit >> 5) & 1) + str((input_bit >> 4) & 1) +
                    str((input_bit >> 3) & 1) + str((input_bit >> 2) & 1) +
                    str((input_bit >> 1) & 1) + str(input_bit & 1))
    for sensor in self._ctrl_sensor:
        if sensor.sensor_type == PROP_SENSOR_FLAG:
            sensor.value = (packet[sensor.index // 8] >> (sensor.index % 8)) & 1
        elif sensor.sensor_type == PROP_SENSOR_RAW:
            sensor.value = packet
Decode a control packet into the list of sensors.
def dump_misspelling_list(self):
    results = []
    for bad_word in sorted(self._misspelling_dict.keys()):
        for correction in self._misspelling_dict[bad_word]:
            results.append([bad_word, correction])
    return results
Returns a list of misspelled words and corrections.
def bwar_pitch(return_all=False):
    url = "http://www.baseball-reference.com/data/war_daily_pitch.txt"
    s = requests.get(url).content
    c = pd.read_csv(io.StringIO(s.decode('utf-8')))
    if return_all:
        return c
    else:
        cols_to_keep = ['name_common', 'mlb_ID', 'player_ID', 'year_ID', 'team_ID',
                        'stint_ID', 'lg_ID', 'G', 'GS', 'RA', 'xRA', 'BIP',
                        'BIP_perc', 'salary', 'ERA_plus', 'WAR_rep', 'WAA',
                        'WAA_adj', 'WAR']
        return c[cols_to_keep]
Get data from war_daily_pitch table. Returns WAR, its components, and a few other useful stats. To get all fields from this table, supply argument return_all=True.
def to_hierarchical(self, n_repeat, n_shuffle=1):
    levels = self.levels
    codes = [np.repeat(level_codes, n_repeat) for level_codes in self.codes]
    codes = [x.reshape(n_shuffle, -1).ravel(order='F') for x in codes]
    names = self.names
    warnings.warn("Method .to_hierarchical is deprecated and will "
                  "be removed in a future version",
                  FutureWarning, stacklevel=2)
    return MultiIndex(levels=levels, codes=codes, names=names)

Return a MultiIndex reshaped to conform to the shapes given by n_repeat and n_shuffle.

.. deprecated:: 0.24.0

Useful to replicate and rearrange a MultiIndex for combination with another
Index with n_repeat items.

Parameters
----------
n_repeat : int
    Number of times to repeat the labels on self.
n_shuffle : int
    Controls the reordering of the labels. If the result is going to be an
    inner level in a MultiIndex, n_shuffle will need to be greater than one.
    The size of each label must be divisible by n_shuffle.

Returns
-------
MultiIndex

Examples
--------
>>> idx = pd.MultiIndex.from_tuples([(1, 'one'), (1, 'two'),
...                                  (2, 'one'), (2, 'two')])
>>> idx.to_hierarchical(3)
MultiIndex(levels=[[1, 2], ['one', 'two']],
           codes=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
                  [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]])
def polynomial(A, x, b, coefficients, iterations=1):
    from pyamg.util.linalg import norm
    A, x, b = make_system(A, x, b, formats=None)
    for i in range(iterations):
        if norm(x) == 0:
            residual = b
        else:
            residual = b - A * x
        h = coefficients[0] * residual
        for c in coefficients[1:]:
            h = c * residual + A * h
        x += h

Apply a polynomial smoother to the system Ax=b.

Parameters
----------
A : sparse matrix
    Sparse NxN matrix
x : ndarray
    Approximate solution (length N)
b : ndarray
    Right-hand side (length N)
coefficients : array_like
    Coefficients of the polynomial. See Notes section for details.
iterations : int
    Number of iterations to perform

Returns
-------
Nothing, x will be modified in place.

Notes
-----
The smoother has the form x[:] = x + p(A) (b - A*x) where p(A) is a polynomial
in A whose scalar coefficients are specified (in descending order) by argument
'coefficients'.

- Richardson iteration p(A) = c_0:
  polynomial_smoother(A, x, b, [c_0])
- Linear smoother p(A) = c_1*A + c_0:
  polynomial_smoother(A, x, b, [c_1, c_0])
- Quadratic smoother p(A) = c_2*A^2 + c_1*A + c_0:
  polynomial_smoother(A, x, b, [c_2, c_1, c_0])

Here, Horner's Rule is applied to avoid computing A^k directly. For efficiency,
the method detects the case x = 0, in which one matrix-vector product is avoided
(since (b - A*x) is b).

Examples
--------
>>> # The polynomial smoother is not currently used directly
>>> # in PyAMG. It is only used by the chebyshev smoothing option,
>>> # which automatically calculates the correct coefficients.
>>> from pyamg.gallery import poisson
>>> from pyamg.util.linalg import norm
>>> import numpy as np
>>> from pyamg.aggregation import smoothed_aggregation_solver
>>> A = poisson((10,10), format='csr')
>>> b = np.ones((A.shape[0],1))
>>> sa = smoothed_aggregation_solver(A, B=np.ones((A.shape[0],1)),
...         coarse_solver='pinv2', max_coarse=50,
...         presmoother=('chebyshev', {'degree':3, 'iterations':1}),
...         postsmoother=('chebyshev', {'degree':3, 'iterations':1}))
>>> x0 = np.zeros((A.shape[0],1))
>>> residuals = []
>>> x = sa.solve(b, x0=x0, tol=1e-8, residuals=residuals)
def merge_with(self, other):
    other = as_shape(other)
    if self._dims is None:
        return other
    else:
        try:
            self.assert_same_rank(other)
            new_dims = []
            for i, dim in enumerate(self._dims):
                new_dims.append(dim.merge_with(other[i]))
            return TensorShape(new_dims)
        except ValueError:
            raise ValueError("Shapes %s and %s are not convertible" % (self, other))

Returns a `TensorShape` combining the information in `self` and `other`.

The dimensions in `self` and `other` are merged elementwise, according to the
rules defined for `Dimension.merge_with()`.

Args:
    other: Another `TensorShape`.

Returns:
    A `TensorShape` containing the combined information of `self` and `other`.

Raises:
    ValueError: If `self` and `other` are not convertible.
def add_cookies_to_web_driver(driver, cookies):
    for cookie in cookies:
        driver.add_cookie(convert_cookie_to_dict(cookie))
    return driver
Sets cookies in an existing WebDriver session.
def overlay_config(base, overlay):
    if not isinstance(base, collections.Mapping):
        return overlay
    if not isinstance(overlay, collections.Mapping):
        return overlay
    result = dict()
    for k in iterkeys(base):
        if k not in overlay:
            result[k] = base[k]
    for k, v in iteritems(overlay):
        if v is not None or (k in base and base[k] is None):
            if k in base:
                v = overlay_config(base[k], v)
            result[k] = v
    return result

Overlay one configuration over another.

This overlays `overlay` on top of `base` as follows:

* If either isn't a dictionary, returns `overlay`.
* Any key in `base` not present in `overlay` is present in the result with its original value.
* Any key in `overlay` with value :const:`None` is not present in the result, unless it also is :const:`None` in `base`.
* Any key in `overlay` not present in `base` and not :const:`None` is present in the result with its new value.
* Any key in both `overlay` and `base` with a non-:const:`None` value is recursively overlaid.

>>> overlay_config({'a': 'b'}, {'a': 'c'})
{'a': 'c'}
>>> overlay_config({'a': 'b'}, {'c': 'd'})
{'a': 'b', 'c': 'd'}
>>> overlay_config({'a': {'b': 'c'}},
...                {'a': {'b': 'd', 'e': 'f'}})
{'a': {'b': 'd', 'e': 'f'}}
>>> overlay_config({'a': 'b', 'c': 'd'}, {'a': None})
{'c': 'd'}

:param dict base: original configuration
:param dict overlay: overlay configuration
:return: new overlaid configuration
:returntype dict:
def fixed_timezone(offset):
    if offset in _tz_cache:
        return _tz_cache[offset]
    tz = _FixedTimezone(offset)
    _tz_cache[offset] = tz
    return tz
Return a Timezone instance given its offset in seconds.
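A minimal sketch of the memoization contract, assuming _tz_cache and _FixedTimezone from the defining module are in scope:

    tz_a = fixed_timezone(3600)   # constructs and caches _FixedTimezone(3600)
    tz_b = fixed_timezone(3600)   # cache hit
    assert tz_a is tz_b           # repeated offsets share a single instance
    assert fixed_timezone(-18000) is not tz_a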
def outer_horizontal_border_bottom(self):
    return u"{lm}{lv}{hz}{rv}".format(lm=' ' * self.margins.left,
                                      lv=self.border_style.bottom_left_corner,
                                      rv=self.border_style.bottom_right_corner,
                                      hz=self.outer_horizontals())

The complete outer bottom horizontal border section, including left and right margins.

Returns:
    str: The bottom menu border.
def fetch_deposits_since(self, since: int) -> List[Deposit]:
    return self._transactions_since(self._deposits_since, 'deposits', since)
Fetch all deposits since the given timestamp.
def read_line(self):
    try:
        line = self.inp.readline().strip()
    except KeyboardInterrupt:
        raise EOFError()
    if not line:
        raise EOFError()
    return line

Interrupt-respecting reader for stdin. Raises EOFError if the end of stream has been reached.
def flags(self):
    return set(name.lower() for name in sorted(TIFF.FILE_FLAGS)
               if getattr(self, 'is_' + name))
Return set of flags.
def system_exit(object):
    @functools.wraps(object)
    def system_exit_wrapper(*args, **kwargs):
        try:
            if object(*args, **kwargs):
                foundations.core.exit(0)
        except Exception as error:
            sys.stderr.write("\n".join(
                foundations.exceptions.format_exception(*sys.exc_info())))
            foundations.core.exit(1)
    return system_exit_wrapper

Handles proper system exit in case of critical exception.

:param object: Object to decorate.
:type object: object
:return: Object.
:rtype: object
def srcname(self):
    if self.rpm_name or self.name.startswith(('python-', 'Python-')):
        return self.name_convertor.base_name(self.rpm_name or self.name)

Return srcname for the macro if the pypi name should be changed. Those cases are:

- name was provided with -r option
- pypi name is like python-<name>
def responses_of(self, request):
    responses = [response for index, response in self._responses(request)]
    if responses:
        return responses
    raise UnhandledHTTPRequestError(
        "The cassette (%r) doesn't contain the request (%r) asked for"
        % (self._path, request)
    )
Find the responses corresponding to a request. This function isn't actually used by VCR internally, but is provided as an external API.
def purge_portlets(portal):
    logger.info("Purging portlets ...")

    def remove_portlets(context_portlet):
        mapping = portal.restrictedTraverse(context_portlet)
        for key in mapping.keys():
            if key not in PORTLETS_TO_PURGE:
                logger.info("Skipping portlet: '{}'".format(key))
                continue
            logger.info("Removing portlet: '{}'".format(key))
            del mapping[key]

    remove_portlets("++contextportlets++plone.leftcolumn")
    remove_portlets("++contextportlets++plone.rightcolumn")
    setup = portal.portal_setup
    setup.runImportStepFromProfile(profile, 'portlets')
    logger.info("Purging portlets [DONE]")
Remove old portlets. Leave the Navigation portlet only
def token_address(self) -> Address:
    return to_canonical_address(self.proxy.contract.functions.token().call())
Return the token of this manager.
def decrypt(self, data, decode=False):
    result = self.cipher().decrypt_block(data)
    padding = self.mode().padding()
    if padding is not None:
        result = padding.reverse_pad(result, WAESMode.__data_padding_length__)
    return result.decode() if decode else result

Decrypt the given data with the cipher obtained from the AES.cipher call.

:param data: data to decrypt
:param decode: whether to decode bytes to str or not
:return: bytes or str (depends on decode flag)
def write(self, target, *args, **kwargs):
    return io_registry.write(self, target, *args, **kwargs)

Write this `SegmentList` to a file.

Arguments and keywords depend on the output format; see the online
documentation for full details for each format.

Parameters
----------
target : `str`
    output filename
def generate_sphinx_all():
    all_nicknames = []

    def add_nickname(gtype, a, b):
        nickname = nickname_find(gtype)
        try:
            Operation.generate_sphinx(nickname)
            all_nicknames.append(nickname)
        except Error:
            pass
        type_map(gtype, add_nickname)
        return ffi.NULL

    type_map(type_from_name('VipsOperation'), add_nickname)
    all_nicknames.sort()

    exclude = ['scale', 'ifthenelse', 'bandjoin', 'bandrank']
    all_nicknames = [x for x in all_nicknames if x not in exclude]

    print('.. class:: pyvips.Image\n')
    print(' .. rubric:: Methods\n')
    print(' .. autosummary::')
    print(' :nosignatures:\n')
    for nickname in all_nicknames:
        print(' ~{0}'.format(nickname))
    print()
    print()
    for nickname in all_nicknames:
        docstr = Operation.generate_sphinx(nickname)
        docstr = docstr.replace('\n', '\n ')
        print(' ' + docstr)

Generate sphinx documentation.

This generates a .rst file for all auto-generated image methods. Use it to
regenerate the docs with something like::

    $ python -c \
        "import pyvips; pyvips.Operation.generate_sphinx_all()" > x

And copy-paste the file contents into doc/vimage.rst in the appropriate place.
def validate_overlap(comp1, comp2, force):
    warnings = dict()
    if force is None:
        stat = comp2.check_overlap(comp1)
        if stat == 'full':
            pass
        elif stat == 'partial':
            raise exceptions.PartialOverlap(
                'Spectrum and bandpass do not fully overlap. You may use '
                'force=[extrap|taper] to force this Observation anyway.')
        elif stat == 'none':
            raise exceptions.DisjointError('Spectrum and bandpass are disjoint')
    elif force.lower() == 'taper':
        try:
            comp1 = comp1.taper()
        except AttributeError:
            comp1 = comp1.tabulate().taper()
        warnings['PartialOverlap'] = force
    elif force.lower().startswith('extrap'):
        stat = comp2.check_overlap(comp1)
        if stat == 'partial':
            warnings['PartialOverlap'] = force
    else:
        raise KeyError("Illegal value force=%s; legal values=('taper','extrap')" % force)
    return comp1, comp2, warnings

Validate the overlap between the wavelength sets of the two given components.

Parameters
----------
comp1, comp2 : `~pysynphot.spectrum.SourceSpectrum` or `~pysynphot.spectrum.SpectralElement`
    Source spectrum and bandpass of an observation.
force : {'extrap', 'taper', `None`}
    If not `None`, the components may be adjusted by extrapolation or tapering.

Returns
-------
comp1, comp2
    Same as inputs. However, ``comp1`` might be tapered if that option is selected.
warnings : dict
    Maps warning keyword to its description.

Raises
------
KeyError
    Invalid ``force``.
pysynphot.exceptions.DisjointError
    No overlap detected when ``force`` is `None`.
pysynphot.exceptions.PartialOverlap
    Partial overlap detected when ``force`` is `None`.
def smart_query_string(parser, token):
    args = token.split_contents()
    additions = args[1:]
    addition_pairs = []
    while additions:
        addition_pairs.append(additions[0:2])
        additions = additions[2:]
    return SmartQueryStringNode(addition_pairs)
Outputs current GET query string with additions appended. Additions are provided in token pairs.
def trace_dependencies(req, requirement_set, dependencies, _visited=None):
    _visited = _visited or set()
    if req in _visited:
        return
    _visited.add(req)
    for reqName in req.requirements():
        try:
            name = pkg_resources.Requirement.parse(reqName).project_name
        except ValueError as e:
            logger.error('Invalid requirement: %r (%s) in requirement %s'
                         % (reqName, e, req))
            continue
        subreq = requirement_set.get_requirement(name)
        dependencies.append((req, subreq))
        trace_dependencies(subreq, requirement_set, dependencies, _visited)

Trace all dependency relationships.

@param req: requirement to trace
@param requirement_set: RequirementSet
@param dependencies: list for storing dependency relationships
@param _visited: set of already-visited requirements
def pipeline_counter(self):
    if 'pipeline_counter' in self.data:
        return self.data.get('pipeline_counter')
    elif self.pipeline is not None:
        return self.pipeline.data.counter

Get the pipeline counter of the current stage instance.

Because a stage instance can be created in different ways, and those ways yield
different data, we have to check where to get the pipeline counter from.

:return: pipeline counter.
def ConnectionUpdate(self, settings):
    connection_path = self.connection_path
    NM = dbusmock.get_object(MANAGER_OBJ)
    settings_obj = dbusmock.get_object(SETTINGS_OBJ)
    main_connections = settings_obj.ListConnections()
    if connection_path not in main_connections:
        raise dbus.exceptions.DBusException(
            'Connection %s does not exist' % connection_path,
            name=MANAGER_IFACE + '.DoesNotExist')
    for setting_name in settings:
        setting = settings[setting_name]
        for k in setting:
            if setting_name not in self.settings:
                self.settings[setting_name] = {}
            self.settings[setting_name][k] = setting[k]
    self.EmitSignal(CSETTINGS_IFACE, 'Updated', '', [])
    auto_connect = False
    if 'autoconnect' in settings['connection']:
        auto_connect = settings['connection']['autoconnect']
    if auto_connect:
        dev = None
        devices = NM.GetDevices()
        if len(devices) > 0:
            dev = devices[0]
        if dev:
            activate_connection(NM, connection_path, dev, connection_path)
    return connection_path

Update settings on a connection.

settings is a String String Variant Map Map. See
https://developer.gnome.org/NetworkManager/0.9/spec.html#type-String_String_Variant_Map_Map
def eval_ast(self, ast):
    new_ast = ast.replace_dict(self.replacements, leaf_operation=self._leaf_op)
    return backends.concrete.eval(new_ast, 1)[0]
Eval the ast, replacing symbols by their last value in the model.
def _create_sample_list(in_bams, vcf_file):
    out_file = "%s-sample_list.txt" % os.path.splitext(vcf_file)[0]
    with open(out_file, "w") as out_handle:
        for in_bam in in_bams:
            with pysam.Samfile(in_bam, "rb") as work_bam:
                for rg in work_bam.header.get("RG", []):
                    out_handle.write("%s\n" % rg["SM"])
    return out_file
Pull sample names from input BAMs and create input sample list.
def foreign(self, value, context=None):
    if self.separator is None:
        separator = ' '
    else:
        separator = (self.separator.strip()
                     if self.strip and hasattr(self.separator, 'strip')
                     else self.separator)
    value = self._clean(value)
    try:
        value = separator.join(value)
    except Exception as e:
        raise Concern("{0} caught, failed to convert to string: {1}",
                      e.__class__.__name__, str(e))
    return super().foreign(value)
Construct a string-like representation for an iterable of string-like objects.
def setup_logging(verbosity, filename=None):
    levels = [logging.WARNING, logging.INFO, logging.DEBUG]
    level = levels[min(verbosity, len(levels) - 1)]
    logging.root.setLevel(level)
    fmt = logging.Formatter('%(asctime)s %(levelname)-12s %(message)-100s '
                            '[%(filename)s:%(lineno)d]')
    hdlr = logging.StreamHandler()
    hdlr.setFormatter(fmt)
    logging.root.addHandler(hdlr)
    if filename:
        hdlr = logging.FileHandler(filename, 'a')
        hdlr.setFormatter(fmt)
        logging.root.addHandler(hdlr)
Configure logging for this tool.
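A quick usage sketch, assuming the setup_logging function above is in scope; the log filename is hypothetical:

    import logging

    # verbosity 0 -> WARNING, 1 -> INFO, 2 or more -> DEBUG
    setup_logging(verbosity=2, filename='tool.log')
    logging.debug('goes to both the console and tool.log')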
def p(name="", **kwargs):
    with Reflect.context(**kwargs) as r:
        if name:
            instance = P_CLASS(r, stream, name, **kwargs)
        else:
            instance = P_CLASS.pop(r)
            instance()
    return instance

really quick and dirty profiling

You start a profile by passing in a name; you stop the top profiling session by
not passing in a name. You can also call this method using a with statement.

This is for when you just want to get a back-of-envelope view of how fast your
code is -- super handy, not super accurate.

since -- 2013-5-9

example --
    p("starting profile")
    time.sleep(1)
    p()  # stop the "starting profile" session

    # you can go N levels deep
    p("one")
    p("two")
    time.sleep(0.5)
    p()  # stop profiling of "two"
    time.sleep(0.5)
    p()  # stop profiling of "one"

    with pout.p("three"):
        time.sleep(0.5)

name -- string -- pass this in to start a profiling session
return -- context manager
def _nonmatch_class_pos(self):
    if self.kernel.classes_.shape[0] != 2:
        raise ValueError("Number of classes is {}, expected 2.".format(
            self.kernel.classes_.shape[0]))
    return 0
Return the position of the non-match class.
def run_transaction(self, command_list, do_commit=True):
    for c in command_list:
        if c.find(";") != -1 or c.find("\\G") != -1:
            raise Exception("The SQL command '%s' contains a semi-colon or "
                            "\\G. This is a potential SQL injection." % c)
    if do_commit:
        sql = "START TRANSACTION;\n%s;\nCOMMIT" % "\n".join(command_list)
    else:
        sql = "START TRANSACTION;\n%s;" % "\n".join(command_list)
    return

This can be used to stage multiple commands and roll back the transaction if an
error occurs. This is useful if you want to remove multiple records in multiple
tables for one entity but do not want the deletion to occur if the entity is
tied to a table not specified in the list of commands. Performing this as a
transaction avoids the situation where the records are partially removed. If
do_commit is False, the entire transaction is cancelled.
def SetupPrometheusEndpointOnPortRange(port_range, addr=''):
    assert os.environ.get('RUN_MAIN') != 'true', (
        'The thread-based exporter can\'t be safely used when django\'s '
        'autoreloader is active. Use the URL exporter, or start django '
        'with --noreload. See documentation/exports.md.')
    for port in port_range:
        try:
            httpd = HTTPServer((addr, port), prometheus_client.MetricsHandler)
        except (OSError, socket.error):
            continue
        thread = PrometheusEndpointServer(httpd)
        thread.daemon = True
        thread.start()
        logger.info('Exporting Prometheus /metrics/ on port %s' % port)
        return

Like SetupPrometheusEndpointOnPort, but tries several ports.

This is useful when you're running Django as a WSGI application with multiple
processes and you want Prometheus to discover all workers. Each worker will grab
a port and you can use Prometheus to aggregate across workers.

port_range may be any iterable object that contains a list of ports. Typically
this would be an xrange of contiguous ports. As soon as one port is found that
can serve, use this one and stop trying.

The same caveats regarding autoreload apply. Do not use this when Django's
autoreloader is active.
def _magickfy_topics(topics):
    if topics is None:
        return None
    if isinstance(topics, six.string_types):
        topics = [topics, ]
    ts_ = []
    for t__ in topics:
        if not t__.startswith(_MAGICK):
            if t__ and t__[0] == '/':
                t__ = _MAGICK + t__
            else:
                t__ = _MAGICK + '/' + t__
        ts_.append(t__)
    return ts_
Add the magick to the topics if missing.
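A behavioral sketch; the real _MAGICK prefix lives in the defining module, so the value below is an assumption for illustration only:

    _MAGICK = 'pytroll:/'  # assumed prefix, for illustration only

    print(_magickfy_topics('/my/topic'))   # ['pytroll://my/topic']
    print(_magickfy_topics(['raw']))       # ['pytroll://raw']
    print(_magickfy_topics(None))          # None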
def validate(cls, data, name, **kwargs):
    required = kwargs.get('required', False)
    if required and data is None:
        raise ValidationError("required", name, True)
    elif data is None:
        return
    elif kwargs.get('readonly'):
        return
    try:
        for key, value in kwargs.items():
            validator = cls.validation.get(key, lambda x, y: False)
            if validator(data, value):
                raise ValidationError(key, name, value)
    except TypeError:
        raise ValidationError("unknown", name, "unknown")
    else:
        return data
Validate that a piece of data meets certain conditions
def shuffle(self):
    args = list(self)
    random.shuffle(args)
    self.clear()
    super(DogeDeque, self).__init__(args)

Shuffle the deque.

Deques themselves do not support this, so this will make all items into a list,
shuffle that list, clear the deque, and then re-init the deque.
def unimapping(arg, level):
    if not isinstance(arg, collections.Mapping):
        raise TypeError(
            'expected collections.Mapping, {} received'.format(type(arg).__name__)
        )
    result = []
    for i in arg.items():
        result.append(
            pretty_spaces(level) +
            u': '.join(map(functools.partial(convert, level=level), i))
        )
    string = join_strings(result, level)
    if level is not None:
        string += pretty_spaces(level - 1)
    return u'{{{}}}'.format(string)

Mapping object to unicode string.

:type arg: collections.Mapping
:param arg: mapping object
:type level: int
:param level: deep level
:rtype: unicode
:return: mapping object as unicode string
def metrics_api(self):
    if self._metrics_api is None:
        if self._use_grpc:
            self._metrics_api = _gapic.make_metrics_api(self)
        else:
            self._metrics_api = JSONMetricsAPI(self)
    return self._metrics_api
Helper for log metric-related API calls. See https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.metrics
def add_request_ids_from_environment(logger, name, event_dict):
    if ENV_APIG_REQUEST_ID in os.environ:
        event_dict['api_request_id'] = os.environ[ENV_APIG_REQUEST_ID]
    if ENV_LAMBDA_REQUEST_ID in os.environ:
        event_dict['lambda_request_id'] = os.environ[ENV_LAMBDA_REQUEST_ID]
    return event_dict
Custom processor adding request IDs to the log event, if available.
def parse(source):
    if isinstance(source, str):
        return parse_stream(six.StringIO(source))
    else:
        return parse_stream(source)

Parses source code and returns an array of instructions suitable for
optimization and execution by a Machine.

Args:
    source: A string or stream containing source code.
def get_view(self):
    d = self.declaration
    if d.cached and self.widget:
        return self.widget
    if d.defer_loading:
        self.widget = FrameLayout(self.get_context())
        app = self.get_context()
        app.deferred_call(
            lambda: self.widget.addView(self.load_view(), 0))
    else:
        self.widget = self.load_view()
    return self.widget

Get the page to display.

If a view has already been created and is cached, use that; otherwise initialize
the view and proxy. If defer loading is used, wrap the view in a FrameLayout and
defer adding the view until later.
def _save(self, stateName, path):
    print('saving...')
    state = {'session': dict(self.opts),
             'dialogs': self.dialogs.saveState()}
    self.sigSave.emit(state)
    self.saveThread.prepare(stateName, path, self.tmp_dir_session, state)
    self.saveThread.start()
    self.current_session = stateName
    r = self.opts['recent sessions']
    try:
        r.pop(r.index(path))
    except ValueError:
        pass
    r.insert(0, path)

Save the current session as 'stateName' to the given pyz path.
def create_admin(cls, email, password, **kwargs):
    data = {
        'email': email,
        'password': cls.hash_password(password),
        'has_agreed_to_terms': True,
        'state': State.approved,
        'role': cls.roles.administrator.value,
        'organisations': {}
    }
    data.update(**kwargs)
    user = cls(**data)
    yield user._save()
    raise Return(user)

Create an approved 'global' administrator.

:param email: the user's email address
:param password: the user's plain text password
:returns: a User
def jwt_proccessor():
    def jwt():
        token = current_accounts.jwt_creation_factory()
        return Markup(
            render_template(
                current_app.config['ACCOUNTS_JWT_DOM_TOKEN_TEMPLATE'],
                token=token
            )
        )

    def jwt_token():
        return current_accounts.jwt_creation_factory()

    return {
        'jwt': jwt,
        'jwt_token': jwt_token,
    }
Context processor for jwt.
def exists(self):
    result = TimeSeries(default=False if self.default is None else True)
    for t, v in self:
        result[t] = False if v is None else True
    return result

Returns a TimeSeries that is False where this series has a None value and True otherwise.
def save(self, **kwargs):
    translated_data = self._pop_translated_data()
    instance = super(TranslatableModelSerializer, self).save(**kwargs)
    self.save_translations(instance, translated_data)
    return instance

Extract the translations and save them after main object save.

By default all translations will be saved no matter if creating or updating an
object. Users with more complex needs might define their own save and handle
translation saving themselves.
def _create_broker(self, broker_id, metadata=None):
    broker = Broker(broker_id, metadata)
    if not metadata:
        broker.mark_inactive()
    rg_id = self.extract_group(broker)
    group = self.rgs.setdefault(rg_id, ReplicationGroup(rg_id))
    group.add_broker(broker)
    broker.replication_group = group
    return broker
Create a broker object and assign to a replication group. A broker object with no metadata is considered inactive. An inactive broker may or may not belong to a group.
def bind_collection_to_model_cls(cls):
    cls.Collection = type('{0}.Collection'.format(cls.__name__),
                          (cls.Collection,),
                          {'value_type': cls})
    cls.Collection.__module__ = cls.__module__

Bind a collection to the model's class. If the collection was not specialized
during the model's declaration, a subclass of the collection will be created.
def get_logout_uri(self, id_token_hint=None, post_logout_redirect_uri=None,
                   state=None, session_state=None):
    params = {"oxd_id": self.oxd_id}
    if id_token_hint:
        params["id_token_hint"] = id_token_hint
    if post_logout_redirect_uri:
        params["post_logout_redirect_uri"] = post_logout_redirect_uri
    if state:
        params["state"] = state
    if session_state:
        params["session_state"] = session_state
    logger.debug("Sending command `get_logout_uri` with params %s", params)
    response = self.msgr.request("get_logout_uri", **params)
    logger.debug("Received response: %s", response)
    if response['status'] == 'error':
        raise OxdServerError(response['data'])
    return response['data']['uri']

Function to log out the user.

Parameters:
    * **id_token_hint (string, optional):** oxd server will use the last used ID Token, if not provided
    * **post_logout_redirect_uri (string, optional):** URI to redirect to; this URI would override the value given in the site-config
    * **state (string, optional):** site state
    * **session_state (string, optional):** session state

Returns:
    **string:** The URI to which the user must be directed in order to perform the logout
def get_model_names(app_label, models):
    return dict(
        (model, get_model_name(app_label, model)) for model in models
    )
Map model names to their swapped equivalents for the given app
def insertDict(self, tblname, d, fields=None):
    if fields is None:
        fields = sorted(d.keys())
    values = None
    SQL = None
    try:
        SQL = 'INSERT INTO %s (%s) VALUES (%s)' % (
            tblname, ", ".join(fields),
            ','.join(['%s' for x in range(len(fields))]))
        values = tuple([d[k] for k in fields])
        self.locked_execute(SQL, parameters=values)
    except Exception as e:
        if SQL and values:
            sys.stderr.write("\nSQL execution error in query '%s' %% %s at %s:" % (
                SQL, values, datetime.now().strftime("%Y-%m-%d %H:%M:%S")))
        sys.stderr.write("\nError: '%s'.\n" % str(e))
        sys.stderr.flush()
        raise Exception("Error occurred during database insertion: '%s'." % str(e))
Simple function for inserting a dictionary whose keys match the fieldnames of tblname.
def resources(self, start=1, num=10):
    url = self._url + "/resources"
    params = {
        "f": "json",
        "start": start,
        "num": num
    }
    return self._get(url=url,
                     param_dict=params,
                     securityHandler=self._securityHandler,
                     proxy_url=self._proxy_url,
                     proxy_port=self._proxy_port)

Resources lists all file resources for the organization. The start and num
paging parameters are supported.

Inputs:
    start - the number of the first entry in the result set response.
        The index number is 1-based and the default is 1.
    num - the maximum number of results to be returned as a whole.
def _get_chemical_equation_piece(species_list, coefficients):
    def _get_token(species, coefficient):
        if coefficient == 1:
            return '{}'.format(species)
        else:
            return '{:g} {}'.format(coefficient, species)

    bag = []
    for species, coefficient in zip(species_list, coefficients):
        if coefficient < 0:
            coefficient = -coefficient
        if coefficient > 0:
            bag.append(_get_token(species, coefficient))
    return '{}'.format(' + '.join(bag))

Produce a string from chemical species and their coefficients.

Parameters
----------
species_list : iterable of `str`
    Iterable of chemical species.
coefficients : iterable of `float`
    Nonzero stoichiometric coefficients. The length of `species_list` and
    `coefficients` must be the same. Negative values are made positive and
    zeros are ignored along with their respective species.

Examples
--------
>>> from pyrrole.core import _get_chemical_equation_piece
>>> _get_chemical_equation_piece(["AcOH"], [2])
'2 AcOH'
>>> _get_chemical_equation_piece(["AcO-", "H+"], [-1, -1])
'AcO- + H+'
>>> _get_chemical_equation_piece("ABCD", [-2, -1, 0, -1])
'2 A + B + D'
def _put(self, uri, data):
    headers = self._get_headers()
    logging.debug("URI=" + str(uri))
    logging.debug("BODY=" + json.dumps(data))
    response = self.session.put(uri, headers=headers, data=json.dumps(data))
    if response.status_code in [201, 204]:
        return data
    else:
        logging.error(response.content)
        response.raise_for_status()
Simple PUT operation for a given path.
def command_create_tables(self, meta_name=None, verbose=False):
    def _create_metadata_tables(metadata):
        for table in metadata.sorted_tables:
            if verbose:
                print(self._schema(table))
            else:
                print(' ' + table.name)
            engine = self.session.get_bind(clause=table)
            metadata.create_all(bind=engine, tables=[table])

    if isinstance(self.metadata, MetaData):
        print('Creating tables...')
        _create_metadata_tables(self.metadata)
    else:
        for current_meta_name, metadata in self.metadata.items():
            if meta_name not in (current_meta_name, None):
                continue
            print('Creating tables for {}...'.format(current_meta_name))
            _create_metadata_tables(metadata)

Create tables according to the sqlalchemy data model. It is not a complex
migration tool like alembic; it just creates tables that do not exist::

    ./manage.py sqla:create_tables [--verbose] [meta_name]
def inventory_maps(inv):
    revinv = {}
    rolnam = {}
    for d in inv:
        if d[0:3] == 'py:' and d in IntersphinxInventory.domainrole:
            r = IntersphinxInventory.domainrole[d]
            rolnam[r] = ''
            for n in inv[d]:
                p = inv[d][n][2]
                revinv[p] = (r, n)
                rolnam[r] += ' ' + n + ','
    return revinv, rolnam

Construct dicts facilitating information lookup in an inventory dict.

A reversed dict allows lookup of a tuple specifying the sphinx cross-reference
role and the name of the referenced type from the intersphinx inventory url
postfix string. A role-specific name lookup string allows the set of all names
corresponding to a specific role to be searched via regex.
def apply(cls, self, *args, **kwargs):
    for key in kwargs:
        if key in [x.name for x in cls.INPUTS]:
            setattr(self, key, kwargs[key])
        if key in [x.name for x in cls.OUTPUTS]:
            setattr(self, key, kwargs[key])
        if key in [x.name for x in cls.PARAMETERS]:
            setattr(self, key, kwargs[key])

Applies kwargs arguments to the instance passed as the first argument to the call.

For defined INPUTS, OUTPUTS and PARAMETERS the method extracts a corresponding
value from kwargs and sets it as an instance attribute. For example, if the
processor has a 'foo' parameter declared and 'foo = something' is passed to
apply(), self.foo will become 'something'.
def _generate_signature(url_path, secret_key, query_args, digest=None, encoder=None):
    digest = digest or DEFAULT_DIGEST
    encoder = encoder or DEFAULT_ENCODER
    msg = "%s?%s" % (url_path,
                     '&'.join('%s=%s' % i
                              for i in query_args.sorteditems(multi=True)))
    if _compat.text_type:
        msg = msg.encode('UTF8')
    signature = hmac.new(secret_key, msg, digestmod=digest).digest()
    if _compat.PY2:
        return encoder(signature).rstrip('=')
    else:
        return encoder(signature).decode().rstrip('=')
Generate signature from pre-parsed URL.
def omit_loglevel(self, msg) -> bool:
    return self.loglevels and (
        self.loglevels[0] > fontbakery.checkrunner.Status(msg)
    )
Determine if message is below log level.
def is_capable(cls, requested_capability):
    for c in requested_capability:
        if c not in cls.capability:
            return False
    return True
Returns true if the requested capability is supported by this plugin
def initialize(
    plugins,
    exclude_files_regex=None,
    exclude_lines_regex=None,
    path='.',
    scan_all_files=False,
):
    output = SecretsCollection(
        plugins,
        exclude_files=exclude_files_regex,
        exclude_lines=exclude_lines_regex,
    )
    if os.path.isfile(path):
        files_to_scan = [path]
    elif scan_all_files:
        files_to_scan = _get_files_recursively(path)
    else:
        files_to_scan = _get_git_tracked_files(path)
    if not files_to_scan:
        return output
    if exclude_files_regex:
        exclude_files_regex = re.compile(exclude_files_regex, re.IGNORECASE)
        files_to_scan = filter(
            lambda file: not exclude_files_regex.search(file),
            files_to_scan,
        )
    for file in files_to_scan:
        output.scan_file(file)
    return output

Scans the entire codebase for secrets, and returns a SecretsCollection object.

:type plugins: tuple of detect_secrets.plugins.base.BasePlugin
:param plugins: rules to initialize the SecretsCollection with.

:type exclude_files_regex: str|None
:type exclude_lines_regex: str|None
:type path: str
:type scan_all_files: bool

:rtype: SecretsCollection
def from_seedhex_file(path: str) -> SigningKeyType:
    with open(path, 'r') as fh:
        seedhex = fh.read()
    return SigningKey.from_seedhex(seedhex)

Return SigningKey instance from a seedhex file.

:param str path: hexadecimal seed file path
def get_repo_relpath(repo, relpath):
    from os import path
    if relpath[0:2] == "./":
        return path.join(repo, relpath[2::])
    else:
        from os import chdir, getcwd
        cd = getcwd()
        chdir(path.expanduser(repo))
        result = path.abspath(relpath)
        chdir(cd)
        return result
Returns the absolute path to the 'relpath' taken relative to the base directory of the repository.
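A quick illustration of the two branches; the paths shown are hypothetical:

    # "./"-prefixed paths are joined directly onto the repo root:
    #   get_repo_relpath("/home/u/repo", "./src/mod.py") -> "/home/u/repo/src/mod.py"
    # anything else is resolved with the repo root as the working directory,
    # so "../other" resolves relative to the repo's parent:
    #   get_repo_relpath("~/repo", "../other") -> absolute path of ~/other
    print(get_repo_relpath("/home/u/repo", "./src/mod.py"))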
def create_and_trunk_vlan(self, nexus_host, vlan_id, intf_type,
                          nexus_port, vni, is_native):
    starttime = time.time()
    self.create_vlan(nexus_host, vlan_id, vni)
    LOG.debug("NexusDriver created VLAN: %s", vlan_id)
    if nexus_port:
        self.send_enable_vlan_on_trunk_int(
            nexus_host, vlan_id, intf_type, nexus_port, is_native)
    self.capture_and_print_timeshot(
        starttime, "create_all", switch=nexus_host)
Create VLAN and trunk it on the specified ports.
def accounts(self):
    return account.HPEAccountCollection(
        self._conn, utils.get_subresource_path_by(self, 'Accounts'),
        redfish_version=self.redfish_version)
Property to provide instance of HPEAccountCollection
def push(self, metric_name=None, metric_value=None, volume=None):
    graphite_path = self.path_prefix
    graphite_path += '.' + self.device + '.' + 'volume'
    graphite_path += '.' + volume + '.' + metric_name
    metric = Metric(graphite_path, metric_value, precision=4, host=self.device)
    self.publish_metric(metric)
Ship that shit off to graphite broski
def pgettext(msgctxt, message):
    key = msgctxt + '\x04' + message
    translation = get_translation().gettext(key)
    return message if translation == key else translation

Particular gettext function. It works with 'msgctxt' .po modifiers and allows
duplicate keys with different translations. Python 2 doesn't have support for
this GNU gettext function, so we reimplement it. It works by joining msgctxt
and msgid with the '\x04' byte.
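A sketch of how the context disambiguates identical msgids, assuming pgettext above and an initialized translation catalog; the catalog contents are hypothetical:

    # In a .po catalog, the same msgid can carry different translations per context:
    #   msgctxt "menu"    msgid "Open"  msgstr "Abrir"
    #   msgctxt "status"  msgid "Open"  msgstr "Abierto"
    # gettext stores these under "menu\x04Open" and "status\x04Open", which is
    # exactly the key pgettext() builds before falling back to the raw message.
    print(pgettext("menu", "Open"))    # "Abrir" if translated, else "Open"
    print(pgettext("status", "Open"))  # "Abierto" if translated, else "Open"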
def _ar_matrix(self):
    X = np.ones(self.data_length - self.max_lag)
    if self.ar != 0:
        for i in range(0, self.ar):
            X = np.vstack((X, self.data[(self.max_lag - i - 1):-i - 1]))
    return X

Creates the Autoregressive matrix for the model.

Returns
-------
X : np.ndarray
    Autoregressive Matrix
def form_validation_response(self, e):
    resp = rc.BAD_REQUEST
    resp.write(' ' + str(e.form.errors))
    return resp
Method to return form validation error information. You will probably want to override this in your own `Resource` subclass.
def _protobuf_value_type(value):
    if value.HasField("number_value"):
        return api_pb2.DATA_TYPE_FLOAT64
    if value.HasField("string_value"):
        return api_pb2.DATA_TYPE_STRING
    if value.HasField("bool_value"):
        return api_pb2.DATA_TYPE_BOOL
    return None

Returns the type of the google.protobuf.Value message as an api.DataType.

Returns None if the type of 'value' is not one of the types supported in
api_pb2.DataType.

Args:
    value: google.protobuf.Value message.
def change_logger_levels(logger=None, level=logging.DEBUG):
    if not isinstance(logger, logging.Logger):
        logger = logging.getLogger(logger)
    logger.setLevel(level)
    for handler in logger.handlers:
        handler.level = level

Go through the logger and handlers and update their levels to the one specified.

:param logger: logging name or object to modify, defaults to root logger
:param level: logging level to set at (10=Debug, 20=Info, 30=Warn, 40=Error)
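A usage sketch, assuming change_logger_levels above is in scope; the logger name is hypothetical:

    import logging

    logging.getLogger('noisy.lib').addHandler(logging.StreamHandler())
    # Quiet a chatty third-party logger and all of its handlers at once:
    change_logger_levels('noisy.lib', level=logging.ERROR)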
async def cache_instruments(self, require: Dict[top_types.Mount, str] = None):
    checked_require = require or {}
    self._log.info("Updating instrument model cache")
    found = self._backend.get_attached_instruments(checked_require)
    for mount, instrument_data in found.items():
        model = instrument_data.get('model')
        if model is not None:
            p = Pipette(model,
                        self._config.instrument_offset[mount.name.lower()],
                        instrument_data['id'])
            self._attached_instruments[mount] = p
        else:
            self._attached_instruments[mount] = None
    mod_log.info("Instruments found: {}".format(self._attached_instruments))

- Get the attached instrument on each mount and
- Cache their pipette configs from pipette-config.json

If specified, the require element should be a dict of mounts to instrument
models describing the instruments expected to be present. This can save a
subsequent call of :py:attr:`attached_instruments` and also serves as the hook
for the hardware simulator to decide what is attached.
def cut(self):
    text = self.selectedText()
    for editor in self.editors():
        editor.cut()
    QtGui.QApplication.clipboard().setText(text)
Cuts the text from the serial to the clipboard.
def parse_oxi_states(self, data):
    try:
        oxi_states = {
            data["_atom_type_symbol"][i]:
                str2float(data["_atom_type_oxidation_number"][i])
            for i in range(len(data["_atom_type_symbol"]))}
        for i, symbol in enumerate(data["_atom_type_symbol"]):
            oxi_states[re.sub(r"\d?[\+,\-]?$", "", symbol)] = \
                str2float(data["_atom_type_oxidation_number"][i])
    except (ValueError, KeyError):
        oxi_states = None
    return oxi_states
Parse oxidation states from data dictionary
def query(self):
    tree = pypeg2.parse(self._query, parser(), whitespace="")
    for walker in query_walkers():
        tree = tree.accept(walker)
    return tree
Parse query string using given grammar. :returns: AST that represents the query in the given grammar.
def prettyln(text, fill='-', align='^', prefix='[ ', suffix=' ]', length=69):
    text = '{prefix}{0}{suffix}'.format(text, prefix=prefix, suffix=suffix)
    print(
        "{0:{fill}{align}{length}}".format(
            text, fill=fill, align=align, length=length
        )
    )
Wrap `text` in a pretty line with maximum length.
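A quick usage sketch, assuming prettyln above is in scope:

    prettyln('build ok')
    # prints '[ build ok ]' centered in a 69-character line of dashes
    prettyln('section', fill='=', length=40)
    # prints '[ section ]' centered in a 40-character line of equals signs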
def formfield_for_dbfield(self, db_field, **kwargs):
    if isinstance(db_field, fields.OrderField):
        kwargs['widget'] = widgets.HiddenTextInput
    return super(ListView, self).formfield_for_dbfield(db_field, **kwargs)
Same as parent but sets the widget for any OrderFields to HiddenTextInput.
def get_view_selection(self):
    if not self.MODEL_STORAGE_ID:
        return None, None
    if len(self.store) == 0:
        paths = []
    else:
        model, paths = self._tree_selection.get_selected_rows()
    selected_model_list = []
    for path in paths:
        model = self.store[path][self.MODEL_STORAGE_ID]
        selected_model_list.append(model)
    return self._tree_selection, selected_model_list
Get actual tree selection object and all respective models of selected rows
def install(zone, nodataset=False, brand_opts=None):
    ret = {'status': True}
    res = __salt__['cmd.run_all'](
        'zoneadm -z {zone} install{nodataset}{brand_opts}'.format(
            zone=zone,
            nodataset=' -x nodataset' if nodataset else '',
            brand_opts=' {0}'.format(brand_opts) if brand_opts else '',
        ))
    ret['status'] = res['retcode'] == 0
    ret['message'] = res['stdout'] if ret['status'] else res['stderr']
    ret['message'] = ret['message'].replace('zoneadm: ', '')
    if ret['message'] == '':
        del ret['message']
    return ret

Install the specified zone from the system.

zone : string
    name of the zone
nodataset : boolean
    do not create a ZFS file system
brand_opts : string
    brand specific options to pass

CLI Example:

.. code-block:: bash

    salt '*' zoneadm.install dolores
    salt '*' zoneadm.install teddy True
def my_notes(self, start_index=0, limit=100, get_all=False,
             sort_by='loanId', sort_dir='asc'):
    index = start_index
    notes = {
        'loans': [],
        'total': 0,
        'result': 'success'
    }
    while True:
        payload = {
            'sortBy': sort_by,
            'dir': sort_dir,
            'startindex': index,
            'pagesize': limit,
            'namespace': '/account'
        }
        response = self.session.post('/account/loansAj.action', data=payload)
        json_response = response.json()
        if self.session.json_success(json_response):
            notes['loans'] += json_response['searchresult']['loans']
            notes['total'] = json_response['searchresult']['totalRecords']
        else:
            notes['result'] = json_response['result']
            break
        if get_all is True and len(notes['loans']) < notes['total']:
            index += limit
        else:
            break
    return notes

Return all the loan notes you've already invested in. By default it'll return
100 results at a time.

Parameters
----------
start_index : int, optional
    The result index to start on. By default only 100 records will be returned
    at a time, so use this to start at a later index in the results. For
    example, to get results 200 - 300, set `start_index` to 200. (default is 0)
limit : int, optional
    The number of results to return per request. (default is 100)
get_all : boolean, optional
    Return all results in one request, instead of 100 per request.
sort_by : string, optional
    What key to sort on
sort_dir : {'asc', 'desc'}, optional
    Which direction to sort

Returns
-------
dict
    A dictionary with a list of matching notes on the `loans` key
def fixed_length(cls, l, allow_empty=False):
    return cls(l, l, allow_empty=allow_empty)
Create a sedes for text data with exactly `l` encoded characters.
def create_failure(self, exception=None):
    if exception:
        return FailedFuture(type(exception), exception, None)
    return FailedFuture(*sys.exc_info())
This returns an object implementing IFailedFuture. If exception is None (the default) we MUST be called within an "except" block (such that sys.exc_info() returns useful information).
def get_user(self, user_id):
    content = self._fetch("/user/%s" % user_id)
    return FastlyUser(self, content)
Get a specific user.
def var(self, values, axis=0, weights=None, dtype=None):
    values = np.asarray(values)
    unique, mean = self.mean(values, axis, weights, dtype)
    err = values - mean.take(self.inverse, axis)
    if weights is None:
        shape = [1] * values.ndim
        shape[axis] = self.groups
        group_weights = self.count.reshape(shape)
        var = self.reduce(err ** 2, axis=axis, dtype=dtype)
    else:
        weights = np.asarray(weights)
        group_weights = self.reduce(weights, axis=axis, dtype=dtype)
        var = self.reduce(weights * err ** 2, axis=axis, dtype=dtype)
    return unique, var / group_weights

compute the variance over each group

Parameters
----------
values : array_like, [keys, ...]
    values to take variance of per group
axis : int, optional
    alternative reduction axis for values
weights : array_like, optional
    per-entry weights used for a weighted variance
dtype : dtype, optional
    dtype of the reduction

Returns
-------
unique : ndarray, [groups]
    unique keys
reduced : ndarray, [groups, ...]
    value array, reduced over groups
def close(self):
    self.logger = None
    for exc in _EXCEPTIONS:
        setattr(self, exc, None)
    try:
        self.mdr.close()
    finally:
        self.mdr = None
Close the connection this context wraps.
def csv_header_and_defaults(features, schema, stats, keep_target):
    target_name = get_target_name(features)
    if keep_target and not target_name:
        raise ValueError('Cannot find target transform')

    csv_header = []
    record_defaults = []
    for col in schema:
        if not keep_target and col['name'] == target_name:
            continue
        csv_header.append(col['name'])
        if col['type'].lower() == INTEGER_SCHEMA:
            dtype = tf.int64
            default = int(stats['column_stats'].get(col['name'], {}).get('mean', 0))
        elif col['type'].lower() == FLOAT_SCHEMA:
            dtype = tf.float32
            default = float(stats['column_stats'].get(col['name'], {}).get('mean', 0.0))
        else:
            dtype = tf.string
            default = ''
        record_defaults.append(tf.constant([default], dtype=dtype))
    return csv_header, record_defaults
Gets csv header and default lists.
def can_create_gradebook_with_record_types(self, gradebook_record_types):
    if self._catalog_session is not None:
        return self._catalog_session.can_create_catalog_with_record_types(
            catalog_record_types=gradebook_record_types)
    return True

Tests if this user can create a single ``Gradebook`` using the desired record types.

While ``GradingManager.getGradebookRecordTypes()`` can be used to examine which
records are supported, this method tests which record(s) are required for
creating a specific ``Gradebook``. Providing an empty array tests if a
``Gradebook`` can be created with no records.

arg:    gradebook_record_types (osid.type.Type[]): array of gradebook record types
return: (boolean) - ``true`` if ``Gradebook`` creation using the specified
        ``Types`` is supported, ``false`` otherwise
raise:  NullArgument - ``gradebook_record_types`` is ``null``
*compliance: mandatory -- This method must be implemented.*
def folderitems(self):
    items = super(AddAnalysesView, self).folderitems(classic=False)
    return items
Return folderitems as brains
def __get_default_layouts_settings(self):
    LOGGER.debug("> Accessing '{0}' default layouts settings file!".format(
        UiConstants.layouts_file))
    self.__default_layouts_settings = QSettings(
        umbra.ui.common.get_resource_path(UiConstants.layouts_file),
        QSettings.IniFormat)
Gets the default layouts settings.
def to_feather(self, fname):
    from pandas.io.feather_format import to_feather
    to_feather(self, fname)

Write out the binary feather-format for DataFrames.

.. versionadded:: 0.20.0

Parameters
----------
fname : str
    string file path
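A round-trip usage sketch with recent pandas (feather support requires pyarrow); the filename is hypothetical:

    import pandas as pd

    df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
    df.to_feather('frame.feather')
    restored = pd.read_feather('frame.feather')
    assert restored.equals(df)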
def _sanity_check_fold_scope_locations_are_unique(ir_blocks):
    observed_locations = dict()
    for block in ir_blocks:
        if isinstance(block, Fold):
            alternate = observed_locations.get(block.fold_scope_location, None)
            if alternate is not None:
                raise AssertionError(u'Found two Fold blocks with identical '
                                     u'FoldScopeLocations: {} {} {}'
                                     .format(alternate, block, ir_blocks))
            observed_locations[block.fold_scope_location] = block
Assert that every FoldScopeLocation that exists on a Fold block is unique.
def validate_reference_data(self, ref_data):
    try:
        self._zotero_lib.check_items([ref_data])
    except InvalidItemFields as e:
        raise InvalidZoteroItemError from e
Validate the reference data. Zotero.check_items() caches data after the first API call.
def clean(self, critical=False):
    clean_events_list = []
    while self.len() > 0:
        item = self.events_list.pop()
        if item[1] < 0 or (not critical and item[2].startswith("CRITICAL")):
            clean_events_list.insert(0, item)
    self.events_list = clean_events_list
    return self.len()

Clean the logs list by deleting finished items.

By default, only WARNING messages are deleted. If critical is True, CRITICAL
messages are deleted as well.
def load_ipython_extension(ipython):
    import IPython
    ipy_version = LooseVersion(IPython.__version__)
    if ipy_version < LooseVersion("3.0.0"):
        ipython.write_err("Your IPython version is older than "
                          "version 3.0.0, the minimum for Vispy's "
                          "IPython backend. Please upgrade your "
                          "IPython version.")
        return
    _load_webgl_backend(ipython)

Entry point of the IPython extension.

Parameters
----------
ipython : IPython interpreter
    An instance of the IPython interpreter that is handed over to the extension.