def plot_dot_graph(graph, filename=None):
    if not plot.pygraphviz_available:
        logger.error("Pygraphviz is not installed, cannot generate graph plot!")
        return
    if not plot.PIL_available:
        logger.error("PIL is not installed, cannot display graph plot!")
        return
    agraph = AGraph(graph)
    agraph.layout(prog='dot')
    if filename is None:
        filename = tempfile.mktemp(suffix=".png")
    agraph.draw(filename)
    image = Image.open(filename)
    image.show()
Plots a graph in graphviz dot notation.

:param graph: the dot notation graph
:type graph: str
:param filename: the (optional) file to save the generated plot to. The extension determines the file format.
:type filename: str
def from_point(cls, point):
    return cls(point.latitude, point.longitude, point.altitude)
Create and return a new ``Point`` instance from another ``Point`` instance.
def edit(self):
    self.changed = False
    with self:
        editor = self.get_editor()
        cmd = [editor, self.name]
        try:
            res = subprocess.call(cmd)
        except Exception as e:
            print("Error launching editor %(editor)s" % locals())
            print(e)
            return
        if res != 0:
            msg = '%(editor)s returned error status %(res)d' % locals()
            raise EditProcessException(msg)
        new_data = self.read()
        if new_data != self.data:
            self.changed = self._save_diff(self.data, new_data)
            self.data = new_data
Edit the file
def acquire_value_set(self, *tags):
    setname = self._acquire_value_set(*tags)
    if setname is None:
        raise ValueError("Could not acquire a value set")
    return setname
Reserve a set of values for this execution. No other process can reserve the same set of values while the set is reserved. An acquired value set needs to be released after use to allow other processes to access it. Add tags to limit the possible value sets that this returns.
def asDict(self):
    d = {}
    for k, v in self.items():
        d[k] = v
    return d
Return the Lib as a ``dict``. This is a backwards compatibility method.
def maybe_infer_tz(tz, inferred_tz):
    if tz is None:
        tz = inferred_tz
    elif inferred_tz is None:
        pass
    elif not timezones.tz_compare(tz, inferred_tz):
        raise TypeError('data is already tz-aware {inferred_tz}, unable to '
                        'set specified tz: {tz}'
                        .format(inferred_tz=inferred_tz, tz=tz))
    return tz
If a timezone is inferred from data, check that it is compatible with the user-provided timezone, if any.

Parameters
----------
tz : tzinfo or None
inferred_tz : tzinfo or None

Returns
-------
tz : tzinfo or None

Raises
------
TypeError : if both timezones are present but do not match
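The reconciliation rule above can be sketched standalone (`reconcile_tz` is a hypothetical stand-in; the real function delegates the equality check to `timezones.tz_compare`):

```python
import datetime

def reconcile_tz(tz, inferred_tz):
    # Mirrors maybe_infer_tz: an inferred tz fills in a missing user tz,
    # and a mismatch between the two raises TypeError.
    if tz is None:
        return inferred_tz
    if inferred_tz is None:
        return tz
    if tz != inferred_tz:  # stand-in for timezones.tz_compare
        raise TypeError("data is already tz-aware %r, unable to "
                        "set specified tz: %r" % (inferred_tz, tz))
    return tz

utc = datetime.timezone.utc
assert reconcile_tz(None, utc) is utc   # inferred tz wins when none is given
assert reconcile_tz(utc, None) is utc   # user tz kept when nothing is inferred
```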
def boundary(self):
    return (int(self.WESTERNMOST_LONGITUDE),
            int(self.EASTERNMOST_LONGITUDE),
            int(self.MINIMUM_LATITUDE),
            int(self.MAXIMUM_LATITUDE))
Get the image boundary. Returns: A tuple composed of the westernmost_longitude, the easternmost_longitude, the minimum_latitude and the maximum_latitude.
def get_ws_endpoint(self, private=False):
    path = 'bullet-public'
    signed = private
    if private:
        path = 'bullet-private'
    return self._post(path, signed)
Get websocket channel details

:param private: True to request the private channel details, False (default) for the public channel
:type private: bool

https://docs.kucoin.com/#websocket-feed

.. code:: python

    ws_details = client.get_ws_endpoint(private=True)

:returns: ApiResponse

.. code:: python

    {
        "code": "200000",
        "data": {
            "instanceServers": [
                {
                    "pingInterval": 50000,
                    "endpoint": "wss://push1-v2.kucoin.net/endpoint",
                    "protocol": "websocket",
                    "encrypt": true,
                    "pingTimeout": 10000
                }
            ],
            "token": "vYNlCtbz4XNJ1QncwWilJnBtmmfe4geLQDUA62kKJsDChc6I4bRDQc73JfIrlFaVYIAE0Gv2--MROnLAgjVsWkcDq_MuG7qV7EktfCEIphiqnlfpQn4Ybg==.IoORVxR2LmKV7_maOR9xOg=="
        }
    }

:raises: KucoinResponseException, KucoinAPIException
def getReference(self, id_):
    if id_ not in self._referenceIdMap:
        raise exceptions.ReferenceNotFoundException(id_)
    return self._referenceIdMap[id_]
Returns the Reference with the specified ID or raises a ReferenceNotFoundException if it does not exist.
def list_branch(self, repo_name):
    req = proto.ListBranchRequest(repo=proto.Repo(name=repo_name))
    res = self.stub.ListBranch(req, metadata=self.metadata)
    if hasattr(res, 'branch_info'):
        return res.branch_info
    return []
Lists the active Branch objects on a Repo.

Params:
* repo_name: The name of the repo.
def calc_expectation(grad_dict, num_batches):
    # iterate over a snapshot of the keys, since new keys are added below
    for key in list(grad_dict.keys()):
        grad_dict[key + "_expectation"] = mx.ndarray.sum(grad_dict[key], axis=0) / num_batches
    return grad_dict
Calculates the expectation of the gradients per epoch for each parameter w.r.t number of batches

Parameters
----------
grad_dict: dict
    dictionary that maps parameter name to gradients in the mod executor group
num_batches: int
    number of batches

Returns
-------
grad_dict: dict
    dictionary with new keys mapping to gradients expectations
def to_dict(self):
    return {
        'primitives': self.primitives,
        'init_params': self.init_params,
        'input_names': self.input_names,
        'output_names': self.output_names,
        'hyperparameters': self.get_hyperparameters(),
        'tunable_hyperparameters': self._tunable_hyperparameters
    }
Return all the details of this MLPipeline in a dict.

The dict structure contains all the `__init__` arguments of the MLPipeline, as well as the current hyperparameter values and the specification of the tunable_hyperparameters::

    {
        "primitives": [
            "a_primitive",
            "another_primitive"
        ],
        "init_params": {
            "a_primitive": {
                "an_argument": "a_value"
            }
        },
        "hyperparameters": {
            "a_primitive#1": {
                "an_argument": "a_value",
                "another_argument": "another_value",
            },
            "another_primitive#1": {
                "yet_another_argument": "yet_another_value"
            }
        },
        "tunable_hyperparameters": {
            "another_primitive#1": {
                "yet_another_argument": {
                    "type": "str",
                    "default": "a_default_value",
                    "values": [
                        "a_default_value",
                        "yet_another_value"
                    ]
                }
            }
        }
    }
def get_dict_for_attrs(obj, attrs):
    data = {}
    for attr in attrs:
        data[attr] = getattr(obj, attr)
    return data
Returns a dictionary of the given attributes read from ``obj``.
def is_pinned(self, color: Color, square: Square) -> bool:
    return self.pin_mask(color, square) != BB_ALL
Detects if the given square is pinned to the king of the given color.
def remove_small_objects(image, min_size=50, connectivity=1):
    return skimage.morphology.remove_small_objects(image,
                                                   min_size=min_size,
                                                   connectivity=connectivity)
Remove small objects from a boolean image.

:param image: boolean numpy array or :class:`jicbioimage.core.image.Image`
:returns: boolean :class:`jicbioimage.core.image.Image`
def _create_argument_parser(self):
    parser = self._new_argument_parser()
    self._add_base_arguments(parser)
    self._add_required_arguments(parser)
    return parser
Create and return the argument parser with all of the arguments and configuration ready to go.

:rtype: argparse.ArgumentParser
def iterall(cls, connection=None, **kwargs):
    try:
        limit = kwargs['limit']
    except KeyError:
        limit = None
    try:
        page = kwargs['page']
    except KeyError:
        page = None

    def _all_responses():
        page = 1
        params = kwargs.copy()
        while True:
            params.update(page=page, limit=250)
            rsp = cls._make_request('GET', cls._get_all_path(),
                                    connection, params=params)
            if rsp:
                yield rsp
                page += 1
            else:
                yield []
                break

    if not (limit or page):
        for rsp in _all_responses():
            for obj in rsp:
                yield cls._create_object(obj, connection=connection)
    else:
        response = cls._make_request('GET', cls._get_all_path(),
                                     connection, params=kwargs)
        for obj in cls._create_object(response, connection=connection):
            yield obj
Returns an autopaging generator that yields each returned object one by one.
def dst_to_src(self, dst_file):
    for map in self.mappings:
        src_uri = map.dst_to_src(dst_file)
        if src_uri is not None:
            return src_uri
    raise MapperError(
        "Unable to translate destination path (%s) "
        "into a source URI." % (dst_file))
Map destination path to source URI.
def setModified(self, isModified: bool):
    if not isModified:
        self.qteUndoStack.saveState()
    super().setModified(isModified)
Set the modified state to ``isModified``.

From a programmer's perspective this method does the same as the native ``QsciScintilla`` method but also ensures that the undo framework knows when the document state was changed.

|Args|

* ``isModified`` (**bool**): whether or not the document is considered unmodified.

|Returns|

**None**

|Raises|

* **QtmacsArgumentError** if at least one argument has an invalid type.
def register_rate_producer(self, rate_name: str,
                           source: Callable[..., pd.DataFrame] = None) -> Pipeline:
    return self._value_manager.register_rate_producer(rate_name, source)
Marks a ``Callable`` as the producer of a named rate.

This is a convenience wrapper around ``register_value_producer`` that makes sure rate data is appropriately scaled to the size of the simulation time step. It is equivalent to ``register_value_producer(value_name, source, preferred_combiner=replace_combiner, preferred_post_processor=rescale_post_processor)``

Parameters
----------
rate_name :
    The name of the new dynamic rate pipeline.
source :
    A callable source for the dynamic rate pipeline.

Returns
-------
Callable
    A callable reference to the named dynamic rate pipeline.
def update(self, folder, git_repository):
    try:
        self.clone(folder, git_repository)
    except OSError:
        self.update_git_repository(folder, git_repository)
    self.pull(folder)
Creates or updates theme folder according to the given git repository.

:param git_repository: git url of the theme folder
:param folder: path of the git managed theme folder
def copy_files(src, ext, dst):
    src_path = os.path.join(os.path.dirname(__file__), src)
    dst_path = os.path.join(os.path.dirname(__file__), dst)
    file_list = os.listdir(src_path)
    for f in file_list:
        if f == '__init__.py':
            continue
        f_path = os.path.join(src_path, f)
        if os.path.isfile(f_path) and f.endswith(ext):
            shutil.copy(f_path, dst_path)
Copies files with extensions "ext" from "src" to "dst" directory.
def get_widths(self, estimation):
    widths = estimation[self.map_offset[1]:self.map_offset[2]]\
        .reshape(self.K, 1)
    return widths
Get estimation on widths

Parameters
----------
estimation : 1D array
    Either prior or posterior estimation

Returns
-------
widths : 2D array, in shape [K, 1]
    Estimation of widths
def run(self, timeout=None, **kwargs):
    from subprocess import Popen, PIPE

    def target(**kw):
        try:
            self.process = Popen(self.command, **kw)
            self.output, self.error = self.process.communicate()
            self.retcode = self.process.returncode
        except:
            import traceback
            self.error = traceback.format_exc()
            self.retcode = -1

    if 'stdout' not in kwargs:
        kwargs['stdout'] = PIPE
    if 'stderr' not in kwargs:
        kwargs['stderr'] = PIPE
    import threading
    thread = threading.Thread(target=target, kwargs=kwargs)
    thread.start()
    thread.join(timeout)
    if thread.is_alive():
        self.process.terminate()
        self.killed = True
        thread.join()
    return self
Run a command in a separate thread and wait up to timeout seconds. kwargs are keyword arguments passed to Popen. Return: self
def as_tuple(self, value):
    if isinstance(value, list):
        value = tuple(value)
    return value
Utility function which converts lists to tuples.
def restore_app_connection(self, port=None):
    self.host_port = port or utils.get_available_host_port()
    self._adb.forward(
        ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
    try:
        self.connect()
    except:
        self.log.exception('Failed to re-connect to app.')
        raise jsonrpc_client_base.AppRestoreConnectionError(
            self._ad,
            ('Failed to restore app connection for %s at host port %s, '
             'device port %s') % (self.package, self.host_port,
                                  self.device_port))
    self._proc = None
    self._restore_event_client()
Restores the app after device got reconnected.

Instead of creating new instance of the client:
- Uses the given port (or find a new available host_port if none is given).
- Tries to connect to remote server with selected port.

Args:
    port: If given, this is the host port from which to connect to remote device port. If not provided, find a new available port as host port.

Raises:
    AppRestoreConnectionError: When the app was not able to be started.
def saelgv(vec1, vec2):
    vec1 = stypes.toDoubleVector(vec1)
    vec2 = stypes.toDoubleVector(vec2)
    smajor = stypes.emptyDoubleVector(3)
    sminor = stypes.emptyDoubleVector(3)
    libspice.saelgv_c(vec1, vec2, smajor, sminor)
    return stypes.cVectorToPython(smajor), stypes.cVectorToPython(sminor)
Find semi-axis vectors of an ellipse generated by two arbitrary three-dimensional vectors.

http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/saelgv_c.html

:param vec1: First vector used to generate an ellipse.
:type vec1: 3-Element Array of floats
:param vec2: Second vector used to generate an ellipse.
:type vec2: 3-Element Array of floats
:return: Semi-major axis of ellipse, Semi-minor axis of ellipse.
:rtype: tuple
def quaternion_rotate(X, Y):
    N = X.shape[0]
    W = np.asarray([makeW(*Y[k]) for k in range(N)])
    Q = np.asarray([makeQ(*X[k]) for k in range(N)])
    Qt_dot_W = np.asarray([np.dot(Q[k].T, W[k]) for k in range(N)])
    W_minus_Q = np.asarray([W[k] - Q[k] for k in range(N)])
    A = np.sum(Qt_dot_W, axis=0)
    eigen = np.linalg.eigh(A)
    r = eigen[1][:, eigen[0].argmax()]
    rot = quaternion_transform(r)
    return rot
Calculate the rotation

Parameters
----------
X : array
    (N,D) matrix, where N is points and D is dimension.
Y : array
    (N,D) matrix, where N is points and D is dimension.

Returns
-------
rot : matrix
    Rotation matrix (D,D)
def get_ip_address_list(list_name):
    payload = {"jsonrpc": "2.0",
               "id": "ID0",
               "method": "get_policy_ip_addresses",
               "params": [list_name, 0, 256]}
    response = __proxy__['bluecoat_sslv.call'](payload, False)
    return _convert_to_list(response, 'item_name')
Retrieves a specific IP address list.

list_name(str): The name of the specific policy IP address list to retrieve.

CLI Example:

.. code-block:: bash

    salt '*' bluecoat_sslv.get_ip_address_list MyIPAddressList
def datetime(self, timezone=None):
    if timezone is None:
        timezone = self.timezone
    return _dtfromtimestamp(self.__timestamp__ - timezone)
Returns a datetime object.

This object retains all information, including timezones.

:param timezone = self.timezone
    The timezone (in seconds west of UTC) to return the value in. By default, the timezone used when constructing the class is used (local one by default). To use UTC, use timezone = 0. To use the local tz, use timezone = chronyk.LOCALTZ.
def table_names(self):
    query = "SELECT name FROM sqlite_master WHERE type='table'"
    cursor = self.connection.execute(query)
    results = cursor.fetchall()
    return [result_tuple[0] for result_tuple in results]
Returns names of all tables in the database
def batching(size):
    if size < 1:
        raise ValueError("batching() size must be at least 1")

    def batching_transducer(reducer):
        return Batching(reducer, size)

    return batching_transducer
Create a transducer which produces non-overlapping batches.
def allDayForDate(self, this_date, timeZone=None):
    if isinstance(this_date, datetime):
        d = this_date.date()
    else:
        d = this_date
    date_start = datetime(d.year, d.month, d.day)
    naive_start = self.startTime if timezone.is_naive(self.startTime) \
        else timezone.make_naive(self.startTime, timezone=timeZone)
    naive_end = self.endTime if timezone.is_naive(self.endTime) \
        else timezone.make_naive(self.endTime, timezone=timeZone)
    return (
        naive_start <= date_start and
        naive_end >= date_start + timedelta(days=1, minutes=-30)
    )
This method determines whether the occurrence lasts the entirety of a specified day in the specified time zone. If no time zone is specified, then the default time zone is used. A grace period of a few minutes is given to account for issues with the way events are sometimes entered.
def gen_jid(opts=None):
    if opts is None:
        salt.utils.versions.warn_until(
            'Sodium',
            'The `opts` argument was not passed into salt.utils.jid.gen_jid(). '
            'This will be required starting in {version}.'
        )
        opts = {}
    global LAST_JID_DATETIME
    if opts.get('utc_jid', False):
        jid_dt = datetime.datetime.utcnow()
    else:
        jid_dt = datetime.datetime.now()
    if not opts.get('unique_jid', False):
        return '{0:%Y%m%d%H%M%S%f}'.format(jid_dt)
    if LAST_JID_DATETIME and LAST_JID_DATETIME >= jid_dt:
        jid_dt = LAST_JID_DATETIME + datetime.timedelta(microseconds=1)
    LAST_JID_DATETIME = jid_dt
    return '{0:%Y%m%d%H%M%S%f}_{1}'.format(jid_dt, os.getpid())
Generate a jid
def _get_indentation(line):
    if line.strip():
        non_whitespace_index = len(line) - len(line.lstrip())
        return line[:non_whitespace_index]
    else:
        return ''
Return leading whitespace.
def get(self, **kwargs):
    id = None
    if "id" in kwargs:
        id = kwargs["id"]
        del kwargs["id"]
    elif "pk" in kwargs:
        id = kwargs["pk"]
        del kwargs["pk"]
    else:
        raise self.model.DoesNotExist("You must provide an id to find")
    es = connections.get_connection("default")
    doc_type = self.model.search_objects.mapping.doc_type
    index = self.model.search_objects.mapping.index
    try:
        doc = es.get(index=index, doc_type=doc_type, id=id, **kwargs)
    except NotFoundError:
        message = "Can't find a document for {}, using id {}".format(
            doc_type, id)
        raise self.model.DoesNotExist(message)
    return self.from_es(doc)
Get a object from Elasticsearch by id
def snake_to_pascal(name, singularize=False):
    parts = name.split("_")
    if singularize:
        return "".join(p.upper() if p in _ALL_CAPS else to_singular(p.title())
                       for p in parts)
    else:
        return "".join(p.upper() if p in _ALL_CAPS else p.title()
                       for p in parts)
Converts snake_case to PascalCase. If singularize is True, an attempt is made at singularizing each part of the resulting name.
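As a rough illustration of the rule described above, with a made-up `_ALL_CAPS` set and no singularization (since `to_singular` is not shown here):

```python
# Hypothetical whitelist; the real _ALL_CAPS set lives in the module.
_ALL_CAPS = {"id", "url", "http"}

def snake_to_pascal_sketch(name):
    # Title-case each underscore-separated part unless it is in the
    # all-caps whitelist, then join the parts together.
    return "".join(p.upper() if p in _ALL_CAPS else p.title()
                   for p in name.split("_"))

assert snake_to_pascal_sketch("user_id") == "UserID"
assert snake_to_pascal_sketch("http_request_handler") == "HTTPRequestHandler"
```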
def yaml_str_join(l, n):
    from photon.util.system import get_hostname, get_timestamp

    s = l.construct_sequence(n)
    for num, seq in enumerate(s):
        if seq == 'hostname':
            s[num] = '%s' % (get_hostname())
        elif seq == 'timestamp':
            s[num] = '%s' % (get_timestamp())
    return ''.join([str(i) for i in s])
YAML loader to join strings

The keywords are as following:

* `hostname`: Your hostname (from :func:`util.system.get_hostname`)
* `timestamp`: Current timestamp (from :func:`util.system.get_timestamp`)

:returns: A `non character` joined string |yaml_loader_returns|

.. note:: Be careful with timestamps when using a `config` in :ref:`settings`.

.. seealso:: |yaml_loader_seealso|
def ldap_login(self, username, password):
    self.connect()
    if self.config.get('USER_SEARCH'):
        result = self.bind_search(username, password)
    else:
        result = self.direct_bind(username, password)
    return result
Authenticate a user using ldap. This will return a userdata dict if successful. ldap_login will return None if the user does not exist or if the credentials are invalid.
def prepare_spec(self, spec, **kwargs):
    self.prepare_spec_debug_flag(spec, **kwargs)
    self.prepare_spec_export_target_checks(spec, **kwargs)
    spec.advise(SETUP, self.prepare_spec_advice_packages, spec, **kwargs)
Prepare a spec for usage with the generic ToolchainRuntime. Subclasses should avoid overriding this; override create_spec instead.
def bures_distance(rho0: Density, rho1: Density) -> float:
    fid = fidelity(rho0, rho1)
    op0 = asarray(rho0.asoperator())
    op1 = asarray(rho1.asoperator())
    tr0 = np.trace(op0)
    tr1 = np.trace(op1)
    return np.sqrt(tr0 + tr1 - 2. * np.sqrt(fid))
Return the Bures distance between mixed quantum states Note: Bures distance cannot be calculated within the tensor backend.
def get_instance(self, payload):
    return UsageRecordInstance(self._version, payload,
                               sim_sid=self._solution['sim_sid'])
Build an instance of UsageRecordInstance

:param dict payload: Payload response from the API

:returns: twilio.rest.wireless.v1.sim.usage_record.UsageRecordInstance
:rtype: twilio.rest.wireless.v1.sim.usage_record.UsageRecordInstance
def add_file_filters(self, file_filters):
    file_filters = util.return_list(file_filters)
    self.file_filters.extend(file_filters)
Adds `file_filters` to the internal file filters. `file_filters` can be single object or iterable.
def TLV_PUT(attrs, attrNum, format, value):
    attrView = attrs[attrNum]
    if format == 's':
        format = str(attrView.len) + format
    struct.pack_into(format, attrView.buf, attrView.offset, value)
Put a tag-length-value encoded attribute.
def hash_opensubtitles(video_path):
    bytesize = struct.calcsize(b'<q')
    with open(video_path, 'rb') as f:
        filesize = os.path.getsize(video_path)
        filehash = filesize
        if filesize < 65536 * 2:
            return
        for _ in range(65536 // bytesize):
            filebuffer = f.read(bytesize)
            (l_value,) = struct.unpack(b'<q', filebuffer)
            filehash += l_value
            filehash &= 0xFFFFFFFFFFFFFFFF
        f.seek(max(0, filesize - 65536), 0)
        for _ in range(65536 // bytesize):
            filebuffer = f.read(bytesize)
            (l_value,) = struct.unpack(b'<q', filebuffer)
            filehash += l_value
            filehash &= 0xFFFFFFFFFFFFFFFF
    returnedhash = '%016x' % filehash
    return returnedhash
Compute a hash using OpenSubtitles' algorithm.

:param str video_path: path of the video.
:return: the hash.
:rtype: str
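The same arithmetic can be sketched over an in-memory byte string, without the file I/O (function name is illustrative): sum the file size with the first and last 64 KiB read as little-endian signed 64-bit integers, modulo 2**64.

```python
import struct

def oshash_bytes(data):
    """In-memory sketch of the OpenSubtitles hashing scheme."""
    qsize = struct.calcsize('<q')  # 8 bytes per integer
    filesize = len(data)
    if filesize < 65536 * 2:
        return None  # mirrors the original's early return on tiny files
    filehash = filesize
    for chunk in (data[:65536], data[-65536:]):
        for i in range(0, 65536, qsize):
            (value,) = struct.unpack('<q', chunk[i:i + qsize])
            filehash = (filehash + value) & 0xFFFFFFFFFFFFFFFF
    return '%016x' % filehash

# An all-zero 200000-byte "file" hashes to its own size: 200000 == 0x30d40.
assert oshash_bytes(b'\x00' * 200000) == '0000000000030d40'
```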
def _get_environ(environ):
    keys = ["SERVER_NAME", "SERVER_PORT"]
    if _should_send_default_pii():
        keys += ["REMOTE_ADDR", "HTTP_X_FORWARDED_FOR", "HTTP_X_REAL_IP"]
    for key in keys:
        if key in environ:
            yield key, environ[key]
Returns our whitelisted environment variables.
def file_name(self, file_type: Optional[FileType] = None) -> str:
    name = self.__text.word()
    ext = self.extension(file_type)
    return '{name}{ext}'.format(
        name=self.__sub(name),
        ext=ext,
    )
Get a random file name with some extension.

:param file_type: Enum object FileType
:return: File name.

:Example:
    legislative.txt
def generate_dags(self, globals: Dict[str, Any]) -> None:
    dag_configs: Dict[str, Dict[str, Any]] = self.get_dag_configs()
    default_config: Dict[str, Any] = self.get_default_config()
    for dag_name, dag_config in dag_configs.items():
        dag_builder: DagBuilder = DagBuilder(dag_name=dag_name,
                                             dag_config=dag_config,
                                             default_config=default_config)
        try:
            dag: Dict[str, Union[str, DAG]] = dag_builder.build()
        except Exception as e:
            raise Exception(
                f"Failed to generate dag {dag_name}. make sure config is properly populated. err:{e}"
            )
        globals[dag["dag_id"]]: DAG = dag["dag"]
Generates DAGs from YAML config

:param globals: The globals() from the file used to generate DAGs. The dag_id must be passed into globals() for Airflow to import it.
def cookie_name_check(cookie_name):
    cookie_match = WHTTPCookie.cookie_name_non_compliance_re.match(
        cookie_name.encode('us-ascii'))
    return len(cookie_name) > 0 and cookie_match is None
Check cookie name for validity. Return True if name is valid

:param cookie_name: name to check
:return: bool
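A self-contained approximation of this check, treating RFC 6265 token separators and control bytes as non-compliant (the module's actual `cookie_name_non_compliance_re` may differ in detail):

```python
import re

# Bytes not allowed in an RFC 6265 cookie-name token: controls, space,
# and the separator characters.
_NON_COMPLIANT = re.compile(rb'[\x00-\x20()<>@,;:\\"/\[\]?={}\x7f-\xff]')

def cookie_name_ok(cookie_name):
    # Mirrors cookie_name_check: non-empty name with no non-compliant byte.
    encoded = cookie_name.encode('us-ascii')
    return len(cookie_name) > 0 and _NON_COMPLIANT.search(encoded) is None

assert cookie_name_ok('session_id')
assert not cookie_name_ok('')           # empty names are invalid
assert not cookie_name_ok('bad name')   # space is a separator
```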
def get_tasks(self, list_id, completed=False):
    return tasks_endpoint.get_tasks(self, list_id, completed=completed)
Gets tasks for the list with the given ID, filtered by the given completion flag
def count_mapped_reads(self, file_name, paired_end):
    if file_name.endswith("bam"):
        return self.samtools_view(file_name, param="-c -F4")
    if file_name.endswith("sam"):
        return self.samtools_view(file_name, param="-c -F4 -S")
    return -1
Mapped reads are not in fastq format, so this method doesn't need to accommodate fastq, and therefore doesn't require a paired-end parameter because it only uses samtools view. It is therefore fine that the parameter has a default, since it is discarded.

:param str file_name: File for which to count mapped reads.
:param bool paired_end: This parameter is ignored; samtools automatically correctly responds depending on the data in the bamfile. We leave the option here just for consistency, since all the other counting functions require the parameter. This makes it easier to swap counting functions during pipeline development.
:return int: Either return code from samtools view command, or -1 to indicate an error state.
def remove(self, id):
    p = Pool.get(int(id))
    p.remove()
    redirect(url(controller='pool', action='list'))
Remove pool.
def _validate_partition_boundary(boundary):
    boundary = six.text_type(boundary)
    match = re.search(r'^([\d.]+)(\D*)$', boundary)
    if match:
        unit = match.group(2)
        if not unit or unit in VALID_UNITS:
            return
    raise CommandExecutionError(
        'Invalid partition boundary passed: "{0}"'.format(boundary)
    )
Ensure valid partition boundaries are supplied.
def systemd(service, start=True, enabled=True, unmask=False, restart=False):
    with settings(hide('warnings', 'running', 'stdout', 'stderr'),
                  warn_only=True, capture=True):
        if restart:
            sudo('systemctl restart %s' % service)
        else:
            if start:
                sudo('systemctl start %s' % service)
            else:
                sudo('systemctl stop %s' % service)
        if enabled:
            sudo('systemctl enable %s' % service)
        else:
            sudo('systemctl disable %s' % service)
        if unmask:
            sudo('systemctl unmask %s' % service)
Manipulates systemd services.
def get_filename(self, renew=False):
    if self._fname is None or renew:
        self._fname = '%s.%s' % (self._id, self._format)
    return self._fname
Get the filename of this content. If the file name doesn't already exist, it is created as {id}.{format}.
def copyfile(source, destination, force=True):
    if os.path.exists(destination) and force is True:
        os.remove(destination)
    shutil.copyfile(source, destination)
    return destination
Copy a file from a source to its destination.
def get(self, key):
    o = self.data[key]()
    if o is None:
        del self.data[key]
        raise CacheFault(
            "FinalizingCache has %r but its value is no more." % (key,))
    log.msg(interface=iaxiom.IStatEvent, stat_cache_hits=1, key=key)
    return o
Get an entry from the cache by key.

@raise KeyError: if the given key is not present in the cache.

@raise CacheFault: (a L{KeyError} subclass) if the given key is present in the cache, but the value it points to is gone.
def copy(copy_entry: FILE_COPY_ENTRY):
    source_path = environ.paths.clean(copy_entry.source)
    output_path = environ.paths.clean(copy_entry.destination)
    copier = shutil.copy2 if os.path.isfile(source_path) else shutil.copytree
    make_output_directory(output_path)
    for i in range(3):
        try:
            copier(source_path, output_path)
            return
        except Exception:
            time.sleep(0.5)
    raise IOError('Unable to copy "{source}" to "{destination}"'.format(
        source=source_path,
        destination=output_path
    ))
Copies the specified file from its source location to its destination location.
def find_base_path_and_format(main_path, formats):
    for fmt in formats:
        try:
            return base_path(main_path, fmt), fmt
        except InconsistentPath:
            continue
    raise InconsistentPath(
        u"Path '{}' matches none of the export formats. "
        "Please make sure that jupytext.formats covers the current file "
        "(e.g. add '{}' to the export formats)".format(
            main_path, os.path.splitext(main_path)[1][1:]))
Return the base path and the format corresponding to the given path
def from_list(cls, terms_list, coefficient=1.0):
    if not all([isinstance(op, tuple) for op in terms_list]):
        raise TypeError("The type of terms_list should be a list of (name, index) "
                        "tuples suitable for PauliTerm().")
    pterm = PauliTerm("I", 0)
    assert all([op[0] in PAULI_OPS for op in terms_list])
    indices = [op[1] for op in terms_list]
    assert all(_valid_qubit(index) for index in indices)
    if len(set(indices)) != len(indices):
        raise ValueError("Elements of PauliTerm that are allocated using from_list must "
                         "be on disjoint qubits. Use PauliTerm multiplication to simplify "
                         "terms instead.")
    for op, index in terms_list:
        if op != "I":
            pterm._ops[index] = op
    if not isinstance(coefficient, Number):
        raise ValueError("coefficient of PauliTerm must be a Number.")
    pterm.coefficient = complex(coefficient)
    return pterm
Allocates a Pauli Term from a list of operators and indices. This is more efficient than multiplying together individual terms.

:param list terms_list: A list of tuples, e.g. [("X", 0), ("Y", 1)]
:return: PauliTerm
def drawPoints(self, pointPen, filterRedundantPoints=False):
    if filterRedundantPoints:
        pointPen = FilterRedundantPointPen(pointPen)
    for contour in self.contours:
        pointPen.beginPath(identifier=contour["identifier"])
        for segmentType, pt, smooth, name, identifier in contour["points"]:
            pointPen.addPoint(pt=pt, segmentType=segmentType, smooth=smooth,
                              name=name, identifier=identifier)
        pointPen.endPath()
    for component in self.components:
        pointPen.addComponent(component["baseGlyph"],
                              component["transformation"],
                              identifier=component["identifier"])
draw self using pointPen
def petl(self, *args, **kwargs):
    import petl

    t = self.resolved_url.get_resource().get_target()
    if t.target_format == 'txt':
        return petl.fromtext(str(t.fspath), *args, **kwargs)
    elif t.target_format == 'csv':
        return petl.fromcsv(str(t.fspath), *args, **kwargs)
    else:
        raise Exception("Can't handle")
Return a PETL source object
def from_Z(z: int):
    for sym, data in _pt_data.items():
        if data["Atomic no"] == z:
            return Element(sym)
    raise ValueError("No element with this atomic number %s" % z)
Get an element from an atomic number.

Args:
    z (int): Atomic number

Returns:
    Element with atomic number z.
def dotproduct(a, b):
    "Calculates the dot product between two vectors"
    assert len(a) == len(b)
    out = 0
    for i in range(len(a)):
        out += a[i] * b[i]
    return out
Calculates the dot product between two vectors.
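Restated standalone (an equivalent one-liner form of the loop) for a quick sanity check against the closed-form sum:

```python
def dotproduct(a, b):
    "Calculates the dot product between two vectors"
    assert len(a) == len(b)
    # zip pairs up corresponding components; sum accumulates their products.
    return sum(x * y for x, y in zip(a, b))

assert dotproduct([1, 2, 3], [4, 5, 6]) == 1*4 + 2*5 + 3*6  # 32
```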
def getVariable(self, name):
    return lock_and_call(
        lambda: Variable(self._impl.getVariable(name)),
        self._lock
    )
Get the variable with the corresponding name.

Args:
    name: Name of the variable to be found.

Raises:
    TypeError: if the specified variable does not exist.
def _parse_directory(self):
    if self._parser.has_option('storage', 'directory'):
        directory = self._parser.get('storage', 'directory')
        if directory == CUSTOM_APPS_DIR:
            raise ConfigError("{} cannot be used as a storage directory."
                              .format(CUSTOM_APPS_DIR))
    else:
        directory = MACKUP_BACKUP_PATH
    return str(directory)
Parse the storage directory in the config.

Returns:
    str
def names(cls):
    if not cls._files:
        for f in os.listdir(cls._image_path):
            if (not f.startswith('.')
                    and os.path.isfile(os.path.join(cls._image_path, f))):
                cls._files.append(os.path.splitext(f)[0])
    return cls._files
A list of all emoji names without file extension.
def _base_placeholder(self):
    notes_master = self.part.notes_master
    ph_type = self.element.ph_type
    return notes_master.placeholders.get(ph_type=ph_type)
Return the notes master placeholder this notes slide placeholder inherits from, or |None| if no placeholder of the matching type is present.
def cost(self, i, j):
    return dist(self.endpoints[i][1], self.endpoints[j][0])
Returns the distance between the end of path i and the start of path j.
def set_last_component_continued(self):
    if not self._initialized:
        raise pycdlibexception.PyCdlibInternalError('SL record not yet initialized!')
    if not self.symlink_components:
        raise pycdlibexception.PyCdlibInternalError('Trying to set continued on a non-existent component!')
    self.symlink_components[-1].set_continued()
Set the previous component of this SL record to continued. Parameters: None. Returns: Nothing.
def _get_bank_keys_redis_key(bank):
    opts = _get_redis_keys_opts()
    return '{prefix}{separator}{bank}'.format(
        prefix=opts['bank_keys_prefix'],
        separator=opts['separator'],
        bank=bank
    )
Return the Redis key for the SET of keys under a certain bank, given the bank name.
def selection_paste_data_gen(self, selection, data, freq=None):
    (bb_top, bb_left), (bb_bottom, bb_right) = \
        selection.get_grid_bbox(self.grid.code_array.shape)
    bbox_height = bb_bottom - bb_top + 1
    bbox_width = bb_right - bb_left + 1
    for row, row_data in enumerate(itertools.cycle(data)):
        if row >= bbox_height:
            break
        row_data = list(row_data)
        duplicated_row_data = row_data * (bbox_width // len(row_data) + 1)
        duplicated_row_data = duplicated_row_data[:bbox_width]
        for col in xrange(len(duplicated_row_data)):
            if (bb_top, bb_left + col) not in selection:
                duplicated_row_data[col] = None
        yield duplicated_row_data
Generator that yields data for selection paste
def remove_additional_model(self, model_list_or_dict, core_objects_dict,
                            model_name, model_key, destroy=True):
    if model_name == "income":
        self.income.prepare_destruction()
        self.income = None
        return
    for model_or_key in model_list_or_dict:
        model = model_or_key if model_key is None else model_list_or_dict[model_or_key]
        found = False
        for core_object in core_objects_dict.values():
            if core_object is getattr(model, model_name):
                found = True
                break
        if not found:
            if model_key is None:
                if destroy:
                    model.prepare_destruction()
                model_list_or_dict.remove(model)
            else:
                if destroy:
                    model_list_or_dict[model_or_key].prepare_destruction()
                del model_list_or_dict[model_or_key]
            return
Remove one unnecessary model.

The method searches model_list_or_dict for the first model-object that represents no core-object in the dictionary of core-objects handed over by core_objects_dict, removes it, and returns without continuing to search for further model-objects that may also be unnecessary.

:param model_list_or_dict: could be a list or dictionary of one model type
:param core_objects_dict: dictionary of one type of core-elements (rafcon.core)
:param model_name: prop_name for the core-element held by the model, i.e. the core-element covered by the model
:param model_key: if model_list_or_dict is a dictionary, the key is the id of the respective element (e.g. 'state_id')
:return:
def getHeadingLevel(ts, hierarchy=default_hierarchy): try: return hierarchy.index(ts.name)+1 except ValueError: if ts.name.endswith('section'): i, name = 0, ts.name while name.startswith('sub'): name, i = name[3:], i+1 if name == 'section': return i+2 return float('inf') except (AttributeError, TypeError): return float('inf')
Extract heading level for a particular Tex element, given a specified hierarchy. >>> ts = TexSoup(r'\section{Hello}').section >>> TOC.getHeadingLevel(ts) 2 >>> ts2 = TexSoup(r'\chapter{hello again}').chapter >>> TOC.getHeadingLevel(ts2) 1 >>> ts3 = TexSoup(r'\subsubsubsubsection{Hello}').subsubsubsubsection >>> TOC.getHeadingLevel(ts3) 6
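The fallback branch above (counting `sub` prefixes when the name is not in the hierarchy) can be reproduced standalone; the three-element hierarchy here is an assumption chosen to match the doctests:

```python
import math

def heading_level(name, hierarchy=('chapter', 'section', 'subsection')):
    """Map a LaTeX sectioning-command name to a 1-based heading level."""
    try:
        return hierarchy.index(name) + 1
    except ValueError:
        if name.endswith('section'):
            # Count leading 'sub' prefixes: subsubsection -> level 4, etc.
            i = 0
            while name.startswith('sub'):
                name, i = name[3:], i + 1
            if name == 'section':
                return i + 2
        return math.inf  # unknown commands sort below everything
```

Commands outside the hierarchy that are not `sub...section` variants get infinite depth, so they never outrank a real heading.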
def write_document(doc, fnm): with codecs.open(fnm, 'wb', 'ascii') as f: f.write(json.dumps(doc, indent=2))
Write a Text document to file. Parameters ---------- doc: Text The document to save. fnm: str The filename to save the document to.
def waypoint_count_send(self, seq): if self.mavlink10(): self.mav.mission_count_send(self.target_system, self.target_component, seq) else: self.mav.waypoint_count_send(self.target_system, self.target_component, seq)
wrapper for waypoint_count_send
def has_signature(body, sender): non_empty = [line for line in body.splitlines() if line.strip()] candidate = non_empty[-SIGNATURE_MAX_LINES:] upvotes = 0 for line in candidate: if len(line.strip()) > 27: continue elif contains_sender_names(sender)(line): return True elif (binary_regex_search(RE_RELAX_PHONE)(line) + binary_regex_search(RE_EMAIL)(line) + binary_regex_search(RE_URL)(line) == 1): upvotes += 1 if upvotes > 1: return True
Checks if the body has signature. Returns True or False.
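The voting heuristic above can be sketched standalone. The regexes here are simplified, hypothetical stand-ins for the library's `RE_RELAX_PHONE`, `RE_EMAIL` and `RE_URL`, and `sender_names` replaces `contains_sender_names`:

```python
import re

SIGNATURE_MAX_LINES = 11

# Hypothetical, simplified stand-ins for the library's patterns.
RE_PHONE = re.compile(r'\+?\d[\d ()-]{6,}\d')
RE_EMAIL = re.compile(r'\S+@\S+\.\S+')
RE_URL = re.compile(r'https?://\S+')

def has_signature(body, sender_names):
    """Vote on the trailing short lines: a sender name ends the search,
    and a line matching exactly one of phone/email/URL is one upvote."""
    non_empty = [line for line in body.splitlines() if line.strip()]
    upvotes = 0
    for line in non_empty[-SIGNATURE_MAX_LINES:]:
        if len(line.strip()) > 27:  # long lines are body text, not signature
            continue
        if any(name in line for name in sender_names):
            return True
        hits = sum(bool(rx.search(line)) for rx in (RE_PHONE, RE_EMAIL, RE_URL))
        if hits == 1:
            upvotes += 1
            if upvotes > 1:
                return True
    return False
```

Requiring exactly one pattern per line and more than one upvoted line keeps a lone phone number in the body from being mistaken for a signature.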
def process_route_spec_config(con, vpc_info, route_spec, failed_ips, questionable_ips): if CURRENT_STATE._stop_all: logging.debug("Routespec processing. Stop requested, abort operation") return if failed_ips: logging.debug("Route spec processing. Failed IPs: %s" % ",".join(failed_ips)) else: logging.debug("Route spec processing. No failed IPs.") routes_in_rts = {} CURRENT_STATE.vpc_state.setdefault("time", datetime.datetime.now().isoformat()) chosen_routers = _update_existing_routes(route_spec, failed_ips, questionable_ips, vpc_info, con, routes_in_rts) _add_missing_routes(route_spec, failed_ips, questionable_ips, chosen_routers, vpc_info, con, routes_in_rts)
Look through the route spec and update routes accordingly. Idea: Make sure we have a route for each CIDR. If we have a route to any of the IP addresses for a given CIDR then we are good. Otherwise, pick one (usually the first) IP and create a route to that IP. If a route points at a failed or questionable IP then a new candidate is chosen, if possible.
def iterscrapers(self, method, mode=None): global discovered if self.language in discovered and method in discovered[self.language]: for Scraper in discovered[self.language][method]: yield Scraper
Iterates over all available scrapers.
def get_valid_build_systems(working_dir, package=None): from rez.plugin_managers import plugin_manager from rez.exceptions import PackageMetadataError try: package = package or get_developer_package(working_dir) except PackageMetadataError: pass if package: if getattr(package, "build_command", None) is not None: buildsys_name = "custom" else: buildsys_name = getattr(package, "build_system", None) if buildsys_name: cls = plugin_manager.get_plugin_class('build_system', buildsys_name) return [cls] clss = [] for buildsys_name in get_buildsys_types(): cls = plugin_manager.get_plugin_class('build_system', buildsys_name) if cls.is_valid_root(working_dir, package=package): clss.append(cls) child_clss = set(x.child_build_system() for x in clss) clss = list(set(clss) - child_clss) return clss
Returns the build system classes that could build the source in given dir. Args: working_dir (str): Dir containing the package definition and potentially build files. package (`Package`): Package to be built. This may or may not be needed to determine the build system. For eg, cmake just has to look for a CMakeLists.txt file, whereas the 'build_command' package field must be present for the 'custom' build system type. Returns: List of class: Valid build system class types.
def make_type(cls, basename, cardinality): if cardinality is Cardinality.one: return basename type_name = "%s%s" % (basename, cls.to_char_map[cardinality]) return type_name
Build new type name according to CardinalityField naming scheme. :param basename: Type basename of primary type (as string). :param cardinality: Cardinality of the new type (as Cardinality item). :return: Type name with CardinalityField suffix (if needed)
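A standalone sketch of the suffix scheme; the `Cardinality` enum and the suffix map are assumptions mirroring the `to_char_map` the method references, not the library's actual definitions:

```python
from enum import Enum

class Cardinality(Enum):
    one = 1
    zero_or_one = 2
    zero_or_more = 3
    one_or_more = 4

# Assumed suffix per cardinality, standing in for cls.to_char_map.
TO_CHAR_MAP = {
    Cardinality.zero_or_one: "?",
    Cardinality.zero_or_more: "*",
    Cardinality.one_or_more: "+",
}

def make_type(basename, cardinality):
    """Append the CardinalityField suffix unless the cardinality is `one`."""
    if cardinality is Cardinality.one:
        return basename
    return "%s%s" % (basename, TO_CHAR_MAP[cardinality])
```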
def load_file(self, cursor, target, fname, options): "Parses and loads a single file into the target table." with open(fname) as fin: log.debug("opening {0} in {1} load_file".format(fname, __name__)) encoding = options.get('encoding', 'utf-8') if target in self.processors: reader = self.processors[target](fin, encoding=encoding) else: reader = self.default_processor(fin, encoding=encoding) columns = getattr(reader, 'output_columns', None) for _ in xrange(int(options.get('skip-lines', 0))): fin.readline() cursor.copy_from(reader, self.qualified_names[target], columns=columns)
Parses and loads a single file into the target table.
def get(filename, ignore_fields=None): if ignore_fields is None: ignore_fields = [] with open(filename, 'r') as fh: bibtex = bibtexparser.load(fh) bibtex.entries = [{k: entry[k] for k in entry if k not in ignore_fields} for entry in bibtex.entries] return bibtex
Get all entries from a BibTeX file. :param filename: The name of the BibTeX file. :param ignore_fields: An optional list of fields to strip from the BibTeX \ file. :returns: A ``bibtexparser.BibDatabase`` object representing the fetched \ entries.
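The field-stripping comprehension above works on plain dicts and can be tried without `bibtexparser`; `strip_fields` is a hypothetical name for the isolated step:

```python
def strip_fields(entries, ignore_fields=None):
    """Drop the given fields from each BibTeX entry dict."""
    if ignore_fields is None:
        ignore_fields = []
    return [{k: entry[k] for k in entry if k not in ignore_fields}
            for entry in entries]
```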
def _antisymm_12(C): nans = np.isnan(C) C[nans] = -np.einsum('jikl', C)[nans] return C
To get rid of NaNs produced by _scalar2array, antisymmetrize the first two indices of operators where C_ijkl = -C_jikl
def reset(): global _handlers, python_signal for sig, (previous, _) in _handlers.iteritems(): if not previous: previous = SIG_DFL python_signal.signal(sig, previous) _handlers = dict()
Clear global data and remove the handlers. CAUTION! This method restores as signal handlers the ones it recorded at initialization time. If another handler has been installed on top of ours since then, it will be removed by this call.
async def start_serving(self, address=None, sockets=None, **kw): if self._server: raise RuntimeError('Already serving') server = DGServer(self._loop) loop = self._loop if sockets: for sock in sockets: transport, _ = await loop.create_datagram_endpoint( self.create_protocol, sock=sock) server.transports.append(transport) elif isinstance(address, tuple): transport, _ = await loop.create_datagram_endpoint( self.create_protocol, local_addr=address) server.transports.append(transport) else: raise RuntimeError('sockets or address must be supplied') self._set_server(server)
Create the server endpoint(s) for the given sockets or address.
def record_corrected_value(self, value, expected_interval, count=1): while True: if not self.record_value(value, count): return False if value <= expected_interval or expected_interval <= 0: return True value -= expected_interval
Record a new value into the histogram and correct for coordinated omission if needed Args: value: the value to record (must be in the valid range) expected_interval: the expected interval between 2 value samples count: incremental count (defaults to 1)
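The correction loop back-fills synthetic samples at `value - k * expected_interval`. A minimal sketch with a plain `Counter` in place of the histogram (the real class also validates the value range):

```python
from collections import Counter

def record_corrected(hist, value, expected_interval, count=1):
    """Record `value`, plus synthetic samples at value - k*expected_interval,
    to compensate for coordinated omission."""
    while True:
        hist[value] += count
        if value <= expected_interval or expected_interval <= 0:
            return
        value -= expected_interval
```

A single 1000 ms observation sampled on a 100 ms cadence also records 900, 800, ..., 100, filling in the samples the stalled measurement loop missed.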
def ParseYAMLAuthorizationsList(yaml_data): try: raw_list = yaml.ParseMany(yaml_data) except (ValueError, pyyaml.YAMLError) as e: raise InvalidAPIAuthorization("Invalid YAML: %s" % e) result = [] for auth_src in raw_list: auth = APIAuthorization() auth.router_cls = _GetRouterClass(auth_src["router"]) auth.users = auth_src.get("users", []) auth.groups = auth_src.get("groups", []) auth.router_params = auth_src.get("router_params", {}) result.append(auth) return result
Parses YAML data into a list of APIAuthorization objects.
def edit(self, id, seq, resource): return self.create_or_edit(id, seq, resource)
Edit a highlight. :param id: Result ID as an int. :param seq: TestResult sequence ID as an int. :param resource: :class:`highlights.Highlight <highlights.Highlight>` object :return: :class:`highlights.Highlight <highlights.Highlight>` object :rtype: highlights.Highlight
def person(self, person_id): if not self.cache['persons'].get(person_id, None): try: person_xml = self.bc.person(person_id) p = Person(person_xml) self.cache['persons'][person_id] = p except HTTPError: return None return self.cache['persons'][person_id]
Access a Person object by id
def _get_signature(self): keys = list(self.params.keys()) keys.sort() string = "" for name in keys: string += name string += self.params[name] string += self.api_secret return md5(string)
Returns a 32-character hexadecimal md5 hash of the signature string.
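A standalone sketch of the signing scheme using `hashlib` directly; the UTF-8 encoding step is an assumption, since the original's `md5` helper is not shown:

```python
import hashlib

def api_signature(params, api_secret):
    """md5 hex digest of the sorted name/value pairs plus the shared secret."""
    string = "".join(name + params[name] for name in sorted(params))
    string += api_secret
    return hashlib.md5(string.encode("utf-8")).hexdigest()
```

Sorting the parameter names first makes the signature independent of the order the caller supplied them in.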
def _flatten_up_to_token(self, token): if token.is_group: token = next(token.flatten()) for t in self._curr_stmt.flatten(): if t == token: break yield t
Yields all tokens up to token but excluding current.
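The "yield everything before a sentinel, excluding it" pattern is `itertools.takewhile`; `up_to` is a hypothetical standalone equivalent of the loop body:

```python
import itertools

def up_to(items, sentinel):
    """Yield items up to, but excluding, the first occurrence of sentinel."""
    return itertools.takewhile(lambda t: t != sentinel, items)
```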
def change_domain(self, domain): logger.info('Running %(name)s.change_domain() with new domain range:[%(ymin)s, %(ymax)s]', {"name": self.__class__, "ymin": np.min(domain), "ymax": np.max(domain)}) if np.max(domain) > np.max(self.x) or np.min(domain) < np.min(self.x): logger.error('Old domain range: [%(xmin)s, %(xmax)s] does not include new domain range:' '[%(ymin)s, %(ymax)s]', {"xmin": np.min(self.x), "xmax": np.max(self.x), "ymin": np.min(domain), "ymax": np.max(domain)}) raise ValueError('in change_domain():' 'the old domain does not include the new one') y = np.interp(domain, self.x, self.y) obj = self.__class__(np.dstack((domain, y))[0], **self.__dict__['metadata']) return obj
Creating new Curve object in memory with domain passed as a parameter. The new domain must be contained within the original domain. Copies values from the original curve and uses interpolation to calculate values for the new points in the domain. Calculate y - values of example curve with changed domain: >>> print(Curve([[0,0], [5, 5], [10, 0]])\ .change_domain([1, 2, 8, 9]).y) [1. 2. 2. 1.] :param domain: set of points representing the new domain. Might be a list or np.array. :return: new Curve object with domain set by the 'domain' parameter
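The interpolation step is `np.interp`; a pure-Python sketch of the same piecewise-linear lookup reproduces the doctest values (`interp` here is a hypothetical stand-in, assuming a sorted `xs`):

```python
from bisect import bisect_right

def interp(xs_new, xs, ys):
    """Piecewise-linear interpolation, a pure-Python stand-in for np.interp.
    Assumes xs is sorted ascending; values outside xs are clamped."""
    out = []
    for x in xs_new:
        i = bisect_right(xs, x)
        if i == 0:            # left of the domain: clamp to first y
            out.append(ys[0])
        elif i == len(xs):    # right of the domain: clamp to last y
            out.append(ys[-1])
        else:
            x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
            out.append(y0 + (y1 - y0) * (x - x0) / (x1 - x0))
    return out
```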
def wait_for_completion(self, job_id, timeout=None): while True: job = self.wait(job_id, timeout=timeout) if job.state in [State.COMPLETED, State.FAILED, State.CANCELED]: return job
Wait for the job given by job_id to change to COMPLETED, FAILED or CANCELED. Raises an iceqube.exceptions.TimeoutError if the timeout is exceeded before a job state change. :param job_id: the id of the job to wait for. :param timeout: how long to wait for a job state change before timing out.
def lock(cls, pid, connection, session): with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].lock(connection, session)
Explicitly lock the specified connection in the pool :param str pid: The pool id :type connection: psycopg2.extensions.connection :param connection: The connection to add to the pool :param queries.Session session: The session to hold the lock
def can_user_approve_this_page(self, user): self.ensure_one() if not self.is_approval_required: return True if user.has_group('document_page.group_document_manager'): return True if not user.has_group( 'document_page_approval.group_document_approver_user'): return False if not self.approver_group_ids: return True return len(user.groups_id & self.approver_group_ids) > 0
Check if a user can approve this page.
def maxEntropy(n,k): s = float(k)/n if s > 0.0 and s < 1.0: entropy = - s * math.log(s,2) - (1 - s) * math.log(1 - s,2) else: entropy = 0 return n*entropy
The maximum entropy we could get with n units and k winners.
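The function computes n·H(s) for the winner fraction s = k/n, where H is the binary entropy H(s) = −s·log₂(s) − (1−s)·log₂(1−s). A minimal modern rewrite using `math.log2`:

```python
import math

def max_entropy(n, k):
    """n times the binary entropy of the winner fraction s = k / n."""
    s = k / n
    if 0.0 < s < 1.0:
        h = -s * math.log2(s) - (1 - s) * math.log2(1 - s)
    else:
        h = 0.0  # entropy is zero when all or none of the units win
    return n * h
```

H peaks at s = 1/2, so n units with n/2 winners give the maximum of n bits.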
def save(self, file_or_wfs, filename, overwrite=False): self.write(filename, file_or_wfs.read()) return filename
Save a file-like object or a `werkzeug.FileStorage` with the specified filename. :param file_or_wfs: The file-like object or storage to be saved. :param filename: The destination in the storage. :param overwrite: if `False`, raise an exception if the file exists in storage :raises FileExists: when the file exists and overwrite is `False`
def associate_dhcp_options_to_vpc(dhcp_options_id, vpc_id=None, vpc_name=None, region=None, key=None, keyid=None, profile=None): try: vpc_id = check_vpc(vpc_id, vpc_name, region, key, keyid, profile) if not vpc_id: return {'associated': False, 'error': {'message': 'VPC {0} does not exist.'.format(vpc_name or vpc_id)}} conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile) if conn.associate_dhcp_options(dhcp_options_id, vpc_id): log.info('DHCP options with id %s were associated with VPC %s', dhcp_options_id, vpc_id) return {'associated': True} else: log.warning('DHCP options with id %s were not associated with VPC %s', dhcp_options_id, vpc_id) return {'associated': False, 'error': {'message': 'DHCP options could not be associated.'}} except BotoServerError as e: return {'associated': False, 'error': __utils__['boto.get_error'](e)}
Given valid DHCP options id and a valid VPC id, associate the DHCP options record with the VPC. Returns True if the DHCP options record were associated and returns False if the DHCP options record was not associated. CLI Example: .. code-block:: bash salt myminion boto_vpc.associate_dhcp_options_to_vpc 'dhcp-a0bl34pp' 'vpc-6b1fe402'
def meta(self): if not self._pv.meta_data_property or not self._meta_target: return {} return getattr(self._meta_target, self._pv.meta_data_property)
Value of the bound meta-property on the target.