def predict(self, data, num_iteration=None, raw_score=False, pred_leaf=False,
            pred_contrib=False, data_has_header=False, is_reshape=True, **kwargs):
    predictor = self._to_predictor(copy.deepcopy(kwargs))
    if num_iteration is None:
        num_iteration = self.best_iteration
    return predictor.predict(data, num_iteration, raw_score, pred_leaf,
                             pred_contrib, data_has_header, is_reshape)
Make a prediction.

Parameters
----------
data : string, numpy array, pandas DataFrame, H2O DataTable's Frame or scipy.sparse
    Data source for prediction.
    If string, it represents the path to txt file.
num_iteration : int or None, optional (default=None)
    Limit number of iterations in the prediction.
    If None, if the best iteration exists, it is used; otherwise, all iterations are used.
    If <= 0, all iterations are used (no limits).
raw_score : bool, optional (default=False)
    Whether to predict raw scores.
pred_leaf : bool, optional (default=False)
    Whether to predict leaf index.
pred_contrib : bool, optional (default=False)
    Whether to predict feature contributions.

    Note
    ----
    If you want to get more explanations for your model's predictions using SHAP values,
    like SHAP interaction values, you can install the shap package
    (https://github.com/slundberg/shap). Note that unlike the shap package, with
    ``pred_contrib`` we return a matrix with an extra column, where the last column
    is the expected value.
data_has_header : bool, optional (default=False)
    Whether the data has header. Used only if data is string.
is_reshape : bool, optional (default=True)
    If True, result is reshaped to [nrow, ncol].
**kwargs
    Other parameters for the prediction.

Returns
-------
result : numpy array
    Prediction result.
def remove_gateway_router(self, router):
    router_id = self._find_router_id(router)
    return self.network_conn.remove_gateway_router(router=router_id)
Removes an external network gateway from the specified router
def get_OID_field(fs):
    if not arcpyFound:
        raise Exception("ArcPy is required to use this function")
    desc = arcpy.Describe(fs)
    if desc.hasOID:
        return desc.OIDFieldName
    return None
returns a featureset's object id field
def is_object(brain_or_object):
    if is_portal(brain_or_object):
        return True
    if is_at_content(brain_or_object):
        return True
    if is_dexterity_content(brain_or_object):
        return True
    if is_brain(brain_or_object):
        return True
    return False
Check if the passed in object is a supported portal content object

:param brain_or_object: A single catalog brain or content object
:type brain_or_object: Portal Object
:returns: True if the passed in object is a valid portal content object
def _validate(self):
    if self.region_type not in regions_attributes:
        raise ValueError("'{0}' is not a valid region type in this package"
                         .format(self.region_type))
    if self.coordsys not in valid_coordsys['DS9'] + valid_coordsys['CRTF']:
        raise ValueError("'{0}' is not a valid coordinate reference frame "
                         "in astropy".format(self.coordsys))
Checks whether all the attributes of this object are valid.
def get_current_environment(self, note=None):
    shutit_global.shutit_global_object.yield_to_draw()
    self.handle_note(note)
    res = self.get_current_shutit_pexpect_session_environment().environment_id
    self.handle_note_after(note)
    return res
Returns the current environment id from the current shutit_pexpect_session
def str2int(self, num):
    radix, alphabet = self.radix, self.alphabet
    if radix <= 36 and alphabet[:radix].lower() == BASE85[:radix].lower():
        return int(num, radix)
    ret = 0
    lalphabet = alphabet[:radix]
    for char in num:
        if char not in lalphabet:
            raise ValueError("invalid literal for radix2int() with radix "
                             "%d: '%s'" % (radix, num))
        ret = ret * radix + self.cached_map[char]
    return ret
Converts a string into an integer. If possible, the built-in python
conversion will be used for speed purposes.

:param num: A string that will be converted to an integer.
:rtype: integer
:raise ValueError: when *num* is invalid
def get_all_destinations(self, server_id):
    server = self._get_server(server_id)
    return server.conn.EnumerateInstances(DESTINATION_CLASSNAME,
                                          namespace=server.interop_ns)
Return all listener destinations in a WBEM server. This function contacts the WBEM server and retrieves the listener destinations by enumerating the instances of CIM class "CIM_ListenerDestinationCIMXML" in the Interop namespace of the WBEM server. Parameters: server_id (:term:`string`): The server ID of the WBEM server, returned by :meth:`~pywbem.WBEMSubscriptionManager.add_server`. Returns: :class:`py:list` of :class:`~pywbem.CIMInstance`: The listener destination instances. Raises: Exceptions raised by :class:`~pywbem.WBEMConnection`.
def columns(self, **kw):
    fut = self._run_operation(self._impl.columns, **kw)
    return fut
Creates a results set of column names in specified tables by executing
the ODBC SQLColumns function. Each row fetched has the following columns.

:param table: the table name
:param catalog: the catalog name
:param schema: the schema name
:param column: string search pattern for column names.
@contextlib.contextmanager
def stdchannel_redirected(stdchannel, dest_filename, fake=False):
    # Generator-based context manager; requires contextlib.contextmanager.
    if fake:
        yield
        return
    oldstdchannel = dest_file = None
    try:
        oldstdchannel = os.dup(stdchannel.fileno())
        dest_file = open(dest_filename, 'w')
        os.dup2(dest_file.fileno(), stdchannel.fileno())
        yield
    finally:
        if oldstdchannel is not None:
            os.dup2(oldstdchannel, stdchannel.fileno())
        if dest_file is not None:
            dest_file.close()
A context manager to temporarily redirect stdout or stderr

e.g.:

with stdchannel_redirected(sys.stderr, os.devnull):
    if compiler.has_function('clock_gettime', libraries=['rt']):
        libraries.append('rt')
def needed(name, required): return [ relative_field(r, name) if r and startswith_field(r, name) else None for r in required ]
RETURN SUBSET IF name IN REQUIRED
def _update_dict(self, newdict):
    if self.bugzilla:
        self.bugzilla.post_translation({}, newdict)

        aliases = self.bugzilla._get_bug_aliases()
        for newname, oldname in aliases:
            if oldname not in newdict:
                continue
            if newname not in newdict:
                newdict[newname] = newdict[oldname]
            elif newdict[newname] != newdict[oldname]:
                log.debug("Update dict contained differing alias values "
                          "d[%s]=%s and d[%s]=%s , dropping the value "
                          "d[%s]", newname, newdict[newname],
                          oldname, newdict[oldname], oldname)
            del newdict[oldname]

    for key in newdict.keys():
        if key not in self._bug_fields:
            self._bug_fields.append(key)
    self.__dict__.update(newdict)

    if 'id' not in self.__dict__ and 'bug_id' not in self.__dict__:
        raise TypeError("Bug object needs a bug_id")
Update internal dictionary, in a way that ensures no duplicate entries are stored WRT field aliases
def nonull_dict(self): return {k: v for k, v in six.iteritems(self.dict) if v and k != '_codes'}
Like dict, but does not hold any null values.

:return: a dict without null values
def code_memory_read(self, addr, num_bytes):
    buf_size = num_bytes
    buf = (ctypes.c_uint8 * buf_size)()
    res = self._dll.JLINKARM_ReadCodeMem(addr, buf_size, buf)
    if res < 0:
        raise errors.JLinkException(res)
    return list(buf)[:res]
Reads bytes from code memory.

Note:
  This is similar to calling ``memory_read`` or ``memory_read8``,
  except that this uses a cache and reads ahead. This should be used in
  instances where you want to read a small amount of bytes at a time,
  and expect to always read ahead.

Args:
  self (JLink): the ``JLink`` instance
  addr (int): starting address from which to read
  num_bytes (int): number of bytes to read

Returns:
  A list of bytes read from the target.

Raises:
  JLinkException: if memory could not be read.
def handleStatus(self, version, code, message):
    """extends handleStatus to instantiate a local response object"""
    proxy.ProxyClient.handleStatus(self, version, code, message)
    self._response = client.Response(version, code, message, {}, None)
extends handleStatus to instantiate a local response object
def add_binary(self, data, address=0, overwrite=False):
    address *= self.word_size_bytes
    self._segments.add(
        _Segment(address, address + len(data), bytearray(data),
                 self.word_size_bytes),
        overwrite)
Add given data at given address. Set `overwrite` to ``True`` to allow already added data to be overwritten.
def remove_one(self): self.__bulk.add_delete(self.__selector, _DELETE_ONE, collation=self.__collation)
Remove a single document matching the selector criteria.
def to_json(self):
    roots = []
    for r in self.roots:
        roots.append(r.to_json())
    return {'roots': roots}
Returns the JSON representation of this graph.
def save_expectations_config(
    self,
    filepath=None,
    discard_failed_expectations=True,
    discard_result_format_kwargs=True,
    discard_include_configs_kwargs=True,
    discard_catch_exceptions_kwargs=True,
    suppress_warnings=False
):
    if filepath is None:
        # A filepath is required; open(None, 'w') below would fail.
        raise ValueError("filepath must be provided")
    expectations_config = self.get_expectations_config(
        discard_failed_expectations,
        discard_result_format_kwargs,
        discard_include_configs_kwargs,
        discard_catch_exceptions_kwargs,
        suppress_warnings
    )
    expectation_config_str = json.dumps(expectations_config, indent=2)
    with open(filepath, 'w') as outfile:
        outfile.write(expectation_config_str)
Writes ``_expectation_config`` to a JSON file.

Writes the DataAsset's expectation config to the specified JSON ``filepath``.
Failing expectations can be excluded from the JSON expectations config with
``discard_failed_expectations``. The kwarg key-value pairs
:ref:`result_format`, :ref:`include_config`, and :ref:`catch_exceptions` are
optionally excluded from the JSON expectations config.

Args:
    filepath (string):
        The location and name to write the JSON config file to.
    discard_failed_expectations (boolean):
        If True, excludes expectations that do not return ``success = True``.
        If False, all expectations are written to the JSON config file.
    discard_result_format_kwargs (boolean):
        If True, the :ref:`result_format` attribute for each expectation is
        not written to the JSON config file.
    discard_include_configs_kwargs (boolean):
        If True, the :ref:`include_config` attribute for each expectation is
        not written to the JSON config file.
    discard_catch_exceptions_kwargs (boolean):
        If True, the :ref:`catch_exceptions` attribute for each expectation
        is not written to the JSON config file.
    suppress_warnings (boolean):
        If True, all warnings raised by Great Expectations, as a result of
        dropped expectations, are suppressed.
def create(self, publish):
    target_url = self.client.get_url('PUBLISH', 'POST', 'create')
    r = self.client.request('POST', target_url, json=publish._serialize())
    return self.create_from_result(r.json())
Creates a new publish group.
def to_ascii_hex(value: int, digits: int) -> str:
    if digits < 1:
        return ''
    text = ''
    for _ in range(0, digits):
        text = chr(ord('0') + (value % 0x10)) + text
        value //= 0x10
    return text
Converts an int value to ASCII hex, as used by LifeSOS. Unlike regular hex, it uses the first 6 characters that follow numerics on the ASCII table instead of A - F.
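As a self-contained sanity check of the encoding described above, the function is reproduced here with illustrative sample values (the inputs are made up for demonstration, not taken from LifeSOS documentation):

```python
def to_ascii_hex(value: int, digits: int) -> str:
    # Each nibble maps to chr(ord('0') + nibble): '0'-'9', then ':;<=>?'
    # for nibble values 10-15 (the six ASCII characters after '9').
    if digits < 1:
        return ''
    text = ''
    for _ in range(0, digits):
        text = chr(ord('0') + (value % 0x10)) + text
        value //= 0x10
    return text

print(to_ascii_hex(0x2A, 2))  # nibbles 2 and 10 -> '2:'
print(to_ascii_hex(0xFF, 2))  # nibbles 15 and 15 -> '??'
```

Note how `0xA` becomes `':'` rather than `'A'`, which is exactly the "first 6 characters that follow numerics" behaviour the docstring describes.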
def _forward_mode(self, *args):
    f_val, f_diff = self.f._forward_mode(*args)
    g_val, g_diff = self.g._forward_mode(*args)
    val = f_val + g_val
    diff = f_diff + g_diff
    return val, diff
Forward mode differentiation for a sum
def from_header(self, param_name, field): return self.__from_source(param_name, field, lambda: request.headers, 'header')
A decorator that converts a request header into a function parameter based on the specified field. :param str param_name: The parameter which receives the argument. :param Field field: The field class or instance used to deserialize the request header to a Python object. :return: A function
def get_empty_dimension(**kwargs):
    dimension = JSONObject(Dimension())
    dimension.id = None
    dimension.name = ''
    dimension.description = ''
    dimension.project_id = None
    dimension.units = []
    return dimension
Returns a dimension object initialized with empty values
def scale_image(self):
    new_width = int(self.figcanvas.fwidth *
                    self._scalestep ** self._scalefactor)
    new_height = int(self.figcanvas.fheight *
                     self._scalestep ** self._scalefactor)
    self.figcanvas.setFixedSize(new_width, new_height)
Scale the image size.
def clear(self):
    with self._hlock:
        self.handlers.clear()
    with self._mlock:
        self.memoize.clear()
Discards all registered handlers and cached results
def qteGetMode(self, mode: str):
    for item in self._qteModeList:
        if item[0] == mode:
            return item
    return None
Return a tuple containing the ``mode``, its value, and its associated
``QLabel`` instance.

|Args|

* ``mode`` (**str**): name of the mode to look up.

|Returns|

* (**str**, **object**, **QLabel**): (mode, value, label).

|Raises|

* **QtmacsArgumentError** if at least one argument has an invalid type.
def _parse(read_method: Callable) -> HTTPResponse:
    response = read_method(4096)
    while b'HTTP/' not in response or b'\r\n\r\n' not in response:
        response += read_method(4096)
    fake_sock = _FakeSocket(response)
    response = HTTPResponse(fake_sock)
    response.begin()
    return response
Trick to standardize the API between sockets and SSLConnection objects.
def _comparator_lt(filter_value, tested_value):
    if is_string(filter_value):
        value_type = type(tested_value)
        try:
            filter_value = value_type(filter_value)
        except (TypeError, ValueError):
            if value_type is int:
                try:
                    filter_value = float(filter_value)
                except (TypeError, ValueError):
                    return False
            else:
                return False
    try:
        return tested_value < filter_value
    except TypeError:
        return False
Tests whether the tested value is strictly less than the filter value,
i.e. ``tested_value < filter_value``.
def text(message: Text,
         default: Text = "",
         validate: Union[Type[Validator], Callable[[Text], bool], None] = None,
         qmark: Text = DEFAULT_QUESTION_PREFIX,
         style: Optional[Style] = None,
         **kwargs: Any) -> Question:
    merged_style = merge_styles([DEFAULT_STYLE, style])
    validator = build_validator(validate)

    def get_prompt_tokens():
        return [("class:qmark", qmark),
                ("class:question", ' {} '.format(message))]

    p = PromptSession(get_prompt_tokens,
                      style=merged_style,
                      validator=validator,
                      **kwargs)
    p.default_buffer.reset(Document(default))
    return Question(p.app)
Prompt the user to enter a free text message.

This question type can be used to prompt the user for some text input.

Args:
    message: Question text

    default: Default value will be returned if the user just hits enter.

    validate: Require the entered value to pass a validation. The
              value can not be submitted until the validator accepts
              it (e.g. to check minimum password length).

              This can either be a function accepting the input and
              returning a boolean, or a class reference to a
              subclass of the prompt toolkit Validator class.

    qmark: Question prefix displayed in front of the question.
           By default this is a `?`

    style: A custom color and style for the question parts. You can
           configure colors as well as font types for different elements.

Returns:
    Question: Question instance, ready to be prompted (using `.ask()`).
def analyze_frames(cls, workdir):
    record = cls(None, workdir)
    obj = {}
    with open(os.path.join(workdir, 'frames', 'frames.json')) as f:
        obj = json.load(f)
    record.device_info = obj['device']
    record.frames = obj['frames']
    record.analyze_all()
    record.save()
generate draft from recorded frames
def array_append(path, *values, **kwargs): return _gen_4spec(LCB_SDCMD_ARRAY_ADD_LAST, path, MultiValue(*values), create_path=kwargs.pop('create_parents', False), **kwargs)
Add new values to the end of an array. :param path: Path to the array. The path should contain the *array itself* and not an element *within* the array :param values: one or more values to append :param create_parents: Create the array if it does not exist .. note:: Specifying multiple values in `values` is more than just syntactical sugar. It allows the server to insert the values as one single unit. If you have multiple values to append to the same array, ensure they are specified as multiple arguments to `array_append` rather than multiple `array_append` commands to :cb_bmeth:`mutate_in` This operation is only valid in :cb_bmeth:`mutate_in`. .. seealso:: :func:`array_prepend`, :func:`upsert`
def stats(self, ops=(min, max, np.median, sum)):
    names = [op.__name__ for op in ops]
    ops = [_zero_on_type_error(op) for op in ops]
    columns = [[op(column) for op in ops] for column in self.columns]
    table = type(self)().with_columns(zip(self.labels, columns))
    stats = table._unused_label('statistic')
    table[stats] = names
    table.move_to_start(stats)
    return table
Compute statistics for each column and place them in a table.
def get_list_by_name(self, display_name):
    if not display_name:
        raise ValueError('Must provide a valid list display name')
    url = self.build_url(
        self._endpoints.get('get_list_by_name').format(display_name=display_name))
    response = self.con.get(url)
    if not response:
        return []
    data = response.json()
    return self.list_constructor(parent=self, **{self._cloud_data_key: data})
Returns a sharepoint list based on the display name of the list
def get_queryset(self):
    queryset = super(MostVotedManager, self).get_queryset()
    sql =  # SQL expression elided in the source
    messages = queryset.extra(
        select={
            'vote_count': sql,
        }
    )
    return messages.order_by('-vote_count', 'received_time')
Query for the most voted messages sorting by the sum of voted and after by date.
def load_pretrained(self, wgts_fname:str, itos_fname:str, strict:bool=True):
    "Load a pretrained model and adapt it to the data vocabulary."
    old_itos = pickle.load(open(itos_fname, 'rb'))
    old_stoi = {v:k for k,v in enumerate(old_itos)}
    wgts = torch.load(wgts_fname, map_location=lambda storage, loc: storage)
    if 'model' in wgts:
        wgts = wgts['model']
    wgts = convert_weights(wgts, old_stoi, self.data.train_ds.vocab.itos)
    self.model.load_state_dict(wgts, strict=strict)
Load a pretrained model and adapt it to the data vocabulary.
def end_task_type(self, task_type_str):
    assert (
        task_type_str in self._task_dict
    ), "Task type has not been started yet: {}".format(task_type_str)
    self._log_progress()
    del self._task_dict[task_type_str]
Call when processing of all tasks of the given type is completed, typically just after exiting a loop that processes many tasks of the given type. Progress messages logged at intervals will typically not include the final entry which shows that processing is 100% complete, so a final progress message is logged here.
def yield_once(iterator):
    @wraps(iterator)
    def yield_once_generator(*args, **kwargs):
        yielded = set()
        for item in iterator(*args, **kwargs):
            if item not in yielded:
                yielded.add(item)
                yield item

    return yield_once_generator
Decorator to make an iterator returned by a method yield each result only
once.

>>> @yield_once
... def generate_list(foo):
...     return foo
>>> list(generate_list([1, 2, 1]))
[1, 2]

:param iterator: Any method that returns an iterator
:return: A method returning an iterator
         that yields every result only once at most.
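The doctest above can be exercised as a self-contained script; ``wraps`` comes from the standard library's ``functools``:

```python
from functools import wraps

def yield_once(iterator):
    # Wrap the decorated callable in a generator that remembers what it
    # has already yielded and skips duplicates.
    @wraps(iterator)
    def yield_once_generator(*args, **kwargs):
        yielded = set()
        for item in iterator(*args, **kwargs):
            if item not in yielded:
                yielded.add(item)
                yield item
    return yield_once_generator

@yield_once
def generate_list(foo):
    return foo  # any iterable works; the decorator iterates over it

print(list(generate_list([1, 2, 1])))  # [1, 2]
```

Note that deduplication relies on items being hashable, since they are stored in a ``set``.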
def pretrain(self, train_set, validation_set=None):
    self.do_pretrain = True

    def set_params_func(autoenc, autoencgraph):
        params = autoenc.get_parameters(graph=autoencgraph)
        self.encoding_w_.append(params['enc_w'])
        self.encoding_b_.append(params['enc_b'])

    return SupervisedModel.pretrain_procedure(
        self, self.autoencoders, self.autoencoder_graphs,
        set_params_func=set_params_func, train_set=train_set,
        validation_set=validation_set)
Perform Unsupervised pretraining of the autoencoder.
def get_chain(self, name, table="filter"): return [r for r in self.rules if r["table"] == table and r["chain"] == name]
Get the list of rules for a particular chain. Chain order is kept intact. Args: name (str): chain name, e.g. `` table (str): table name, defaults to ``filter`` Returns: list: rules
def stream( streaming_fn, status=200, headers=None, content_type="text/plain; charset=utf-8", ): return StreamingHTTPResponse( streaming_fn, headers=headers, content_type=content_type, status=status )
Accepts a coroutine `streaming_fn` which can be used to write chunks to a
streaming response. Returns a `StreamingHTTPResponse`.

Example usage::

    @app.route("/")
    async def index(request):
        async def streaming_fn(response):
            await response.write('foo')
            await response.write('bar')
        return stream(streaming_fn, content_type='text/plain')

:param streaming_fn: A coroutine accepts a response and writes content to
    that response.
:param content_type: Specific content type.
:param headers: Custom Headers.
def save_metadata_json(self, filename: str, structure: JsonExportable) -> None:
    if self.compress_json:
        filename += '.json.xz'
    else:
        filename += '.json'
    save_structure_to_file(structure, filename)
    if isinstance(structure, (Post, StoryItem)):
        self.context.log('json', end=' ', flush=True)
Saves metadata JSON file of a structure.
def rgb2cmy(self, img, whitebg=False):
    tmp = img * 1.0
    if whitebg:
        tmp = (1.0 - (img - img.min()) / (img.max() - img.min()))
    out = tmp * 0.0
    out[:, :, 0] = (tmp[:, :, 1] + tmp[:, :, 2]) / 2.0
    out[:, :, 1] = (tmp[:, :, 0] + tmp[:, :, 2]) / 2.0
    out[:, :, 2] = (tmp[:, :, 0] + tmp[:, :, 1]) / 2.0
    return out
transforms image from RGB to CMY
def edit_tab(self, index):
    self.setFocus(True)
    self.tab_index = index
    rect = self.main.tabRect(index)
    rect.adjust(1, 1, -2, -1)
    self.setFixedSize(rect.size())
    self.move(self.main.mapToGlobal(rect.topLeft()))
    text = self.main.tabText(index)
    text = text.replace(u'&', u'')
    if self.split_char:
        text = text.split(self.split_char)[self.split_index]
    self.setText(text)
    self.selectAll()
    if not self.isVisible():
        self.show()
Activate the edit tab.
def delete(self, database, key, callback=None):
    token = self._get_token()
    self._enqueue(self._PendingItem(
        token,
        BlobCommand(token=token, database=database,
                    content=DeleteCommand(key=key.bytes)),
        callback))
Delete an item from the given database. :param database: The database from which to delete the value. :type database: .BlobDatabaseID :param key: The key to delete. :type key: uuid.UUID :param callback: A callback to be called on success or failure.
def cartesian_product(arrays, flat=True, copy=False):
    arrays = np.broadcast_arrays(*np.ix_(*arrays))
    if flat:
        return tuple(arr.flatten() if copy else arr.flat for arr in arrays)
    return tuple(arr.copy() if copy else arr for arr in arrays)
Efficient cartesian product of a list of 1D arrays returning the expanded array views for each dimensions. By default arrays are flattened, which may be controlled with the flat flag. The array views can be turned into regular arrays with the copy flag.
def taxon_info(taxid, ncbi, outFH):
    taxid = int(taxid)
    tax_name = ncbi.get_taxid_translator([taxid])[taxid]
    rank = list(ncbi.get_rank([taxid]).values())[0]
    lineage = ncbi.get_taxid_translator(ncbi.get_lineage(taxid))
    lineage = ['{}:{}'.format(k, v) for k, v in lineage.items()]
    lineage = ';'.join(lineage)
    x = [str(x) for x in [tax_name, taxid, rank, lineage]]
    outFH.write('\t'.join(x) + '\n')
Write info on taxid
def undo(self):
    if not self._wasUndo:
        self._qteIndex = len(self._qteStack)
    else:
        self._qteIndex -= 1
    self._wasUndo = True
    if self._qteIndex <= 0:
        return
    undoObj = self._qteStack[self._qteIndex - 1]
    undoObj = QtmacsUndoCommand(undoObj)
    self._push(undoObj)
    if (self._qteIndex - 1) == self._qteLastSavedUndoIndex:
        self.qtesigSavedState.emit(QtmacsMessage())
        self.saveState()
Undo the last command by adding its inverse action to the stack.

This method automatically takes care of applying the correct inverse
action when it is called consecutively (ie. without calling ``push``
in between).

The ``qtesigSavedState`` signal is triggered whenever enough undo
operations have been performed to put the document back into the last
saved state.

.. warning:: The ``qtesigSavedState`` signal is triggered whenever the
   logic of the undo operations **should** have led back to that state,
   but since the ``UndoStack`` only stacks ``QtmacsUndoCommand`` objects
   it may well be the document is **not** in the last saved state, eg.
   because not all modifications were protected by undo objects, or
   because the ``QtmacsUndoCommand`` objects have a bug. It is therefore
   advisable to check in the calling class if the content is indeed
   identical by comparing it with a temporarily stored copy.

|Args|

* **None**

|Signals|

* ``qtesigSavedState``: the document is in the last saved state.

|Returns|

* **None**

|Raises|

* **None**
def set_image_path(self, identifier, path):
    f = open(path, 'rb')
    self.set_image_data(identifier, f.read())
    f.close()
Set data for an image mentioned in the template. @param identifier: Identifier of the image; refer to the image in the template by setting "py3o.[identifier]" as the name of that image. @type identifier: string @param path: Image path on the file system @type path: string
def _get_lattice_parameters(lattice): return np.array(np.sqrt(np.dot(lattice.T, lattice).diagonal()), dtype='double')
Return basis vector lengths Parameters ---------- lattice : array_like Basis vectors given as column vectors shape=(3, 3), dtype='double' Returns ------- ndarray, shape=(3,), dtype='double'
def descendants(self, node, relations=None, reflexive=False):
    if reflexive:
        decs = self.descendants(node, relations, reflexive=False)
        decs.append(node)
        return decs
    g = None
    if relations is None:
        g = self.get_graph()
    else:
        g = self.get_filtered_graph(relations)
    if node in g:
        return list(nx.descendants(g, node))
    else:
        return []
Returns all descendants of specified node.

The default implementation is to use networkx, but some implementations
of the Ontology class may use a database or service backed
implementation, for large graphs.

Arguments
---------
node : str
    identifier for node in ontology
reflexive : bool
    if true, return query node in graph
relations : list
    relation (object property) IDs used to filter

Returns
-------
list[str]
    descendant node IDs
def get_adjustable_form(self, element_dispatch):
    adjustable_form = {}
    for key in element_dispatch.keys():
        adjustable_form[key] = element_dispatch[key]()
    return adjustable_form
Create an adjustable form from an element dispatch table.
def raw_diff(self):
    udiff_copy = self.copy_iterator()
    if self.__format == 'gitdiff':
        udiff_copy = self._parse_gitdiff(udiff_copy)
    return u''.join(udiff_copy)
Returns raw string as udiff
def get_max_levels_metadata(self):
    metadata = dict(self._max_levels_metadata)
    metadata.update({
        'existing_cardinal_values':
            self.my_osid_object_form._my_map['maxLevels']
    })
    return Metadata(**metadata)
get the metadata for max levels
def _get_ancestors_path(self, model, levels=None):
    if not issubclass(model, self.model):
        raise ValueError(
            "%r is not a subclass of %r" % (model, self.model))

    ancestry = []
    parent_link = model._meta.get_ancestor_link(self.model)
    if levels:
        levels -= 1
    while parent_link is not None:
        related = parent_link.remote_field
        ancestry.insert(0, related.get_accessor_name())
        if levels or levels is None:
            parent_model = related.model
            parent_link = parent_model._meta.get_ancestor_link(self.model)
        else:
            parent_link = None
    return LOOKUP_SEP.join(ancestry)
Serves as an opposite to _get_subclasses_recurse, instead walking from the Model class up the Model's ancestry and constructing the desired select_related string backwards.
def hx2dp(string):
    string = stypes.stringToCharP(string)
    lenout = ctypes.c_int(80)
    errmsg = stypes.stringToCharP(lenout)
    number = ctypes.c_double()
    error = ctypes.c_int()
    libspice.hx2dp_c(string, lenout, ctypes.byref(number),
                     ctypes.byref(error), errmsg)
    if not error.value:
        return number.value
    else:
        return stypes.toPythonString(errmsg)
Convert a string representing a double precision number in a base 16 scientific notation into its equivalent double precision number. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/hx2dp_c.html :param string: Hex form string to convert to double precision. :type string: str :return: Double precision value to be returned, Or Error Message. :rtype: float or str
def StatFS(self, path=None):
    if platform.system() == "Windows":
        raise RuntimeError("os.statvfs not available on Windows")
    local_path = client_utils.CanonicalPathToLocalPath(path or self.path)
    return os.statvfs(local_path)
Call os.statvfs for a given path. OS X and Linux only.

Note that a statvfs call for a network filesystem (e.g. NFS) that is
unavailable, e.g. due to no network, will result in the call blocking.

Args:
    path: a Unicode string containing the path or None. If path is None
        the value in self.path is used.

Returns:
    posix.statvfs_result object

Raises:
    RuntimeError: if called on windows
def set_state(self, state, *, index=0): return self.set_values({ ATTR_DEVICE_STATE: int(state) }, index=index)
Set state of a light.
def read_file(self, url, location=None):
    response = requests_retry_session().get(url, timeout=10.0)
    response.raise_for_status()
    text = response.text
    if 'tab-width' in self.options:
        text = text.expandtabs(self.options['tab-width'])
    return text.splitlines(True)
Read content from the web by overriding `LiteralIncludeReader.read_file`.
def remove_volume(self):
    lvremove_cmd = ['sudo'] if self.lvm_command().sudo() is True else []
    lvremove_cmd.extend(['lvremove', '-f', self.volume_path()])
    subprocess.check_output(
        lvremove_cmd,
        timeout=self.__class__.__lvm_snapshot_remove_cmd_timeout__)
Remove this volume :return: None
def _convert_simple_value_boolean_query_to_and_boolean_queries(tree, keyword):

    def _create_operator_node(value_node):
        base_node = value_node.op if isinstance(value_node, NotOp) else value_node
        updated_base_node = KeywordOp(keyword, base_node) if keyword \
            else ValueOp(base_node)
        return NotOp(updated_base_node) if isinstance(value_node, NotOp) \
            else updated_base_node

    def _get_bool_op_type(bool_op):
        return AndOp if isinstance(bool_op, And) else OrOp

    new_tree_root = _get_bool_op_type(tree.bool_op)(None, None)
    current_tree = new_tree_root
    previous_tree = tree
    while True:
        current_tree.left = _create_operator_node(previous_tree.left)
        if not isinstance(previous_tree.right, SimpleValueBooleanQuery):
            current_tree.right = _create_operator_node(previous_tree.right)
            break
        previous_tree = previous_tree.right
        current_tree.right = _get_bool_op_type(previous_tree.bool_op)(None, None)
        current_tree = current_tree.right
    return new_tree_root
Chain SimpleValueBooleanQuery values into chained AndOp queries with the given current Keyword.
def get_directory_list_doc(self, configs):
    if not isinstance(configs, (tuple, list)):
        configs = [configs]
    util.check_list_type(configs, dict, 'configs', allow_none=False)
    return self.__directory_list_descriptor(configs)
JSON dict description of a protorpc.remote.Service in list format. Args: configs: Either a single dict or a list of dicts containing the service configurations to list. Returns: dict, The directory list document as a JSON dict.
def is_url(path):
    try:
        parse_result = urlparse(path)
        return all((parse_result.scheme, parse_result.netloc, parse_result.path))
    except ValueError:
        return False
Test if path represents a valid URL. :param str path: Path to file. :return: True if path is valid url string, False otherwise. :rtype: :py:obj:`True` or :py:obj:`False`
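A runnable sketch of this check using only the standard library; the sample paths are illustrative:

```python
from urllib.parse import urlparse

def is_url(path):
    # A string counts as a URL only if scheme, netloc and path are all present.
    try:
        parse_result = urlparse(path)
        return all((parse_result.scheme, parse_result.netloc, parse_result.path))
    except ValueError:
        return False

print(is_url("https://example.com/data.csv"))  # True
print(is_url("/local/file.csv"))               # False: no scheme or netloc
```

Note that requiring a non-empty path means a bare origin such as `"https://example.com"` is rejected by this definition, which suits the intended use of distinguishing remote files from local ones.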
def save_form(self, request, form, change):
    name = form.cleaned_data['name']
    origin_url = form.cleaned_data['origin_url']
    res = ClonedRepo(name=name, origin=origin_url)
    LOG.info("New repo form produced %s" % str(res))
    form.save(commit=False)
    return res
Here we pluck out the data to create a new cloned repo. Form is an instance of NewRepoForm.
def dumps(columns):
    fp = BytesIO()
    dump(columns, fp)
    fp.seek(0)
    return fp.read()
Serialize ``columns`` to a JSON formatted ``bytes`` object.
def app(environ, start_response):
    try:
        if environ['REQUEST_METHOD'] == 'GET':
            start_response('200 OK', [('content-type', 'text/html')])
            return ['Hellow world!']
        else:
            start_response(
                '405 Method Not Allowed', [('content-type', 'text/html')])
            return ['']
    except Exception as ex:
        start_response(
            '500 Internal Server Error', [('content-type', 'text/html')],
            sys.exc_info())
        return _handle_exc(ex)
Simple WSGI application. Returns 200 OK response with 'Hellow world!' in the body for GET requests. Returns 405 Method Not Allowed for all other methods. Returns 500 Internal Server Error if an exception is thrown. The response body will not include the error or any information about it. The error and its stack trace will be reported to FogBugz via BugzScout, though. :param environ: WSGI environ :param start_response: function that accepts status string and headers
def files_cp(self, source, dest, **kwargs): args = (source, dest) return self._client.request('/files/cp', args, **kwargs)
Copies files within the MFS. Due to the nature of IPFS this will not actually involve any of the file's content being copied. .. code-block:: python >>> c.files_ls("/") {'Entries': [ {'Size': 0, 'Hash': '', 'Name': 'Software', 'Type': 0}, {'Size': 0, 'Hash': '', 'Name': 'test', 'Type': 0} ]} >>> c.files_cp("/test", "/bla") '' >>> c.files_ls("/") {'Entries': [ {'Size': 0, 'Hash': '', 'Name': 'Software', 'Type': 0}, {'Size': 0, 'Hash': '', 'Name': 'bla', 'Type': 0}, {'Size': 0, 'Hash': '', 'Name': 'test', 'Type': 0} ]} Parameters ---------- source : str Filepath within the MFS to copy from dest : str Destination filepath with the MFS to which the file will be copied to
def part(z, s): if sage_included: if s == 1: return np.real(z) elif s == -1: return np.imag(z) elif s == 0: return z else: if s == 1: return z.real elif s == -1: return z.imag elif s == 0: return z
Get the real or imaginary part of a complex number: s == 1 returns the real part, s == -1 the imaginary part, and s == 0 returns z unchanged.
def _import_parsers(): global ARCGIS_NODES global ARCGIS_ROOTS global ArcGISParser global FGDC_ROOT global FgdcParser global ISO_ROOTS global IsoParser global VALID_ROOTS if ARCGIS_NODES is None or ARCGIS_ROOTS is None or ArcGISParser is None: from gis_metadata.arcgis_metadata_parser import ARCGIS_NODES from gis_metadata.arcgis_metadata_parser import ARCGIS_ROOTS from gis_metadata.arcgis_metadata_parser import ArcGISParser if FGDC_ROOT is None or FgdcParser is None: from gis_metadata.fgdc_metadata_parser import FGDC_ROOT from gis_metadata.fgdc_metadata_parser import FgdcParser if ISO_ROOTS is None or IsoParser is None: from gis_metadata.iso_metadata_parser import ISO_ROOTS from gis_metadata.iso_metadata_parser import IsoParser if VALID_ROOTS is None: VALID_ROOTS = {FGDC_ROOT}.union(ARCGIS_ROOTS + ISO_ROOTS)
Lazy imports to prevent circular dependencies between this module and utils
def put(self, template_name): self.reqparse.add_argument('template', type=str, required=True) args = self.reqparse.parse_args() template = db.Template.find_one(template_name=template_name) if not template: return self.make_response('No such template found', HTTP.NOT_FOUND) changes = diff(template.template, args['template']) template.template = args['template'] template.is_modified = True db.session.add(template) db.session.commit() auditlog( event='template.update', actor=session['user'].username, data={ 'template_name': template_name, 'template_changes': changes } ) return self.make_response('Template {} has been updated'.format(template_name))
Update a template
def _valid_locales(locales, normalize): if normalize: normalizer = lambda x: locale.normalize(x.strip()) else: normalizer = lambda x: x.strip() return list(filter(can_set_locale, map(normalizer, locales)))
Return a list of normalized locales that do not throw an ``Exception`` when set. Parameters ---------- locales : str A string where each locale is separated by a newline. normalize : bool Whether to call ``locale.normalize`` on each locale. Returns ------- valid_locales : list A list of valid locales.
def attr_sep(self, new_sep: str) -> None: self._attr_sep = new_sep self._filters_tree = self._generate_filters_tree()
Set the new value for the attribute separator. When the new value is assigned a new tree is generated.
def commit( self, message: str, files_to_add: typing.Optional[typing.Union[typing.List[str], str]] = None, allow_empty: bool = False, ): message = str(message) LOGGER.debug('message: %s', message) files_to_add = self._sanitize_files_to_add(files_to_add) LOGGER.debug('files to add: %s', files_to_add) if not message: LOGGER.error('empty commit message') sys.exit(-1) if os.getenv('APPVEYOR'): LOGGER.info('committing on AV, adding skip_ci tag') message = self.add_skip_ci_to_commit_msg(message) if files_to_add is None: self.stage_all() else: self.reset_index() self.stage_subset(*files_to_add) if self.index_is_empty() and not allow_empty: LOGGER.error('empty commit') sys.exit(-1) self.repo.index.commit(message=message)
Commits changes to the repo :param message: first line of the message :type message: str :param files_to_add: files to commit :type files_to_add: optional list of str :param allow_empty: allow dummy commit :type allow_empty: bool
def clamp(color, min_v, max_v): h, s, v = rgb_to_hsv(*map(down_scale, color)) min_v, max_v = map(down_scale, (min_v, max_v)) v = min(max(min_v, v), max_v) return tuple(map(up_scale, hsv_to_rgb(h, s, v)))
Clamps a color such that the value is between min_v and max_v.
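Assuming `down_scale` maps a 0-255 channel to 0-1 and `up_scale` rounds it back (an assumption — those helpers are not shown), the clamp can be sketched with `colorsys` from the stdlib:

```python
import colorsys

def clamp(color, min_v, max_v):
    """Clamp the HSV value of an RGB color (0-255 channels) into [min_v, max_v]."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in color))  # down-scale to 0-1
    v = min(max(min_v / 255.0, v), max_v / 255.0)               # clamp the value channel only
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))  # up-scale back

print(clamp((255, 255, 255), 0, 128))  # white darkened to mid grey
print(clamp((0, 0, 0), 64, 255))       # black lifted up to the floor
```

Only the value channel moves; hue and saturation are preserved, which is why white and black both come back as greys.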
def to_nullable_string(value): if value is None: return None if type(value) == datetime.date: return value.isoformat() if type(value) == datetime.datetime: if value.tzinfo is None: return value.isoformat() + "Z" else: return value.isoformat() if type(value) == list: return ",".join(value) return str(value)
Converts value into string or returns None when value is None. :param value: the value to convert. :return: string value or None when value is None.
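The conversion rules read naturally as a chain of `isinstance` checks; a stdlib-only sketch (note `datetime` must be tested before `date`, since `datetime` is a `date` subclass):

```python
import datetime

def to_nullable_string(value):
    """None stays None; datetimes/dates use ISO format; lists join on commas; else str()."""
    if value is None:
        return None
    if isinstance(value, datetime.datetime):
        # Naive datetimes get an explicit UTC marker, as in the original.
        return value.isoformat() + ("Z" if value.tzinfo is None else "")
    if isinstance(value, datetime.date):
        return value.isoformat()
    if isinstance(value, list):
        return ",".join(value)
    return str(value)

print(to_nullable_string(datetime.date(2020, 1, 2)))  # 2020-01-02
print(to_nullable_string(["a", "b", "c"]))            # a,b,c
```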
def remove_duplicates(self, configs=None): if configs is None: c = self.configs else: c = configs struct = c.view(c.dtype.descr * 4) configs_unique = np.unique(struct).view(c.dtype).reshape(-1, 4) if configs is None: self.configs = configs_unique else: return configs_unique
Remove duplicate entries from 4-point configurations. If no configurations are provided, then use self.configs. Unique configurations are only returned if configs is not None. Parameters ---------- configs: Nx4 numpy.ndarray, optional remove duplicates from these configurations instead of self.configs. Returns ------- configs_unique: Kx4 numpy.ndarray unique configurations. Only returned if configs is not None
def download_saved_posts(self, max_count: int = None, fast_update: bool = False, post_filter: Optional[Callable[[Post], bool]] = None) -> None: self.context.log("Retrieving saved posts...") count = 1 for post in Profile.from_username(self.context, self.context.username).get_saved_posts(): if max_count is not None and count > max_count: break if post_filter is not None and not post_filter(post): self.context.log("<{} skipped>".format(post), flush=True) continue self.context.log("[{:>3}] ".format(count), end=str(), flush=True) count += 1 with self.context.error_catcher('Download saved posts'): downloaded = self.download_post(post, target=':saved') if fast_update and not downloaded: break
Download user's saved pictures. :param max_count: Maximum count of pictures to download :param fast_update: If true, abort when first already-downloaded picture is encountered :param post_filter: function(post), which returns True if given picture should be downloaded
def look_at(self, x, y, z): for camera in self.cameras: camera.look_at(x, y, z)
Converges the two cameras to look at the specific point
def make_basic_table(self, file_type): table_data = {sample: items['kv'] for sample, items in self.mod_data[file_type].items() } table_headers = {} for column_header, (description, header_options) in file_types[file_type]['kv_descriptions'].items(): table_headers[column_header] = { 'rid': '{}_{}_bbmstheader'.format(file_type, column_header), 'title': column_header, 'description': description, } table_headers[column_header].update(header_options) tconfig = { 'id': file_type + '_bbm_table', 'namespace': 'BBTools' } for sample in table_data: for key, value in table_data[sample].items(): try: table_data[sample][key] = float(value) except ValueError: pass return table.plot(table_data, table_headers, tconfig)
Create table of key-value items in 'file_type'.
def configure(project=LOGGING_PROJECT): if not project: sys.stderr.write('!! Error: The $LOGGING_PROJECT environment ' 'variable is required in order to set up cloud logging. ' 'Cloud logging is disabled.\n') return try: with contextlib.redirect_stderr(io.StringIO()): client = glog.Client(project) client.setup_logging(logging.INFO) except Exception: logging.basicConfig(level=logging.INFO) sys.stderr.write('!! Cloud logging disabled\n')
Configures cloud logging. This is called for all main calls. If a $LOGGING_PROJECT environment variable is configured, then STDERR and STDOUT are redirected to cloud logging.
def _clean_text(self, branch): if branch.text and self.input_text_formatter: branch.text = self.input_text_formatter(branch.text) try: for child in branch: self._clean_text(child) if branch.text and branch.text.find(child.text) >= 0: branch.text = branch.text.replace(child.text, '', 1) except TypeError: pass
Remove text from node if same text exists in its children. Apply string formatter if set.
def folder2db(folder_name, debug, energy_limit, skip_folders, goto_reaction): folder_name = folder_name.rstrip('/') skip = [] for s in skip_folders.split(', '): for sk in s.split(','): skip.append(sk) pub_id = _folder2db.main(folder_name, debug, energy_limit, skip, goto_reaction) if pub_id: print('') print('') print('Ready to release the data?') print( " Send it to the Catalysis-Hub server with 'cathub db2server {folder_name}/{pub_id}.db'.".format(**locals())) print(" Then log in at www.catalysis-hub.org/upload/ to verify and release. ")
Read folder and collect data in local sqlite3 database
def get_module_files(src_directory, blacklist, list_all=False): files = [] for directory, dirnames, filenames in os.walk(src_directory): if directory in blacklist: continue _handle_blacklist(blacklist, dirnames, filenames) if not list_all and "__init__.py" not in filenames: dirnames[:] = () continue for filename in filenames: if _is_python_file(filename): src = os.path.join(directory, filename) files.append(src) return files
given a package directory return a list of all available python module's files in the package and its subpackages :type src_directory: str :param src_directory: path of the directory corresponding to the package :type blacklist: list or tuple :param blacklist: iterable list of files or directories to ignore. :type list_all: bool :param list_all: get files from all paths, including ones without __init__.py :rtype: list :return: the list of all available python module's files in the package and its subpackages
def validate_pattern(fn): directory, pattern = parse_pattern(fn) if directory is None: print_err("Invalid pattern {}.".format(fn)) return None, None target = resolve_path(directory) mode = auto(get_mode, target) if not mode_exists(mode): print_err("cannot access '{}': No such file or directory".format(fn)) return None, None if not mode_isdir(mode): print_err("cannot access '{}': Not a directory".format(fn)) return None, None return target, pattern
On success return an absolute path and a pattern. Otherwise print a message and return None, None.
def _get_texture(arr, default, n_items, from_bounds): if not hasattr(default, '__len__'): default = [default] n_cols = len(default) if arr is None: arr = np.tile(default, (n_items, 1)) assert arr.shape == (n_items, n_cols) arr = arr[np.newaxis, ...].astype(np.float64) assert arr.shape == (1, n_items, n_cols) assert len(from_bounds) == 2 m, M = map(float, from_bounds) assert np.all(arr >= m) assert np.all(arr <= M) arr = (arr - m) / (M - m) assert np.all(arr >= 0) assert np.all(arr <= 1.) return arr
Prepare data to be uploaded as a texture. The from_bounds must be specified.
def expand_models(self, target, data): if isinstance(data, dict): data = data.values() for chunk in data: if target in chunk: yield self.init_target_object(target, chunk) else: for key, item in chunk.items(): yield self.init_single_object(key, item)
Generates all objects from given data.
def int_imf_dm(m1,m2,m,imf,bywhat='bymass',integral='normal'): ind_m = (m >= min(m1,m2)) & (m <= max(m1,m2)) if integral == 'normal': int_func = sc.integrate.trapz elif integral == 'cum': int_func = sc.integrate.cumtrapz else: print("Error in int_imf_dm: don't know how to integrate") return 0 if bywhat == 'bymass': return int_func(m[ind_m]*imf[ind_m],m[ind_m]) elif bywhat == 'bynumber': return int_func(imf[ind_m],m[ind_m]) else: print("Error in int_imf_dm: don't know by what to integrate") return 0
Integrate IMF between m1 and m2. Parameters ---------- m1 : float Min mass m2 : float Max mass m : float Mass array imf : float IMF array bywhat : string, optional 'bymass' integrates the mass that goes into stars of that mass interval; or 'bynumber' which integrates the number of stars in that mass interval. The default is 'bymass'. integral : string, optional 'normal' uses sc.integrate.trapz; 'cum' returns cumulative trapezoidal integral. The default is 'normal'.
def finditer_noregex(string, sub, whole_word): start = 0 while True: start = string.find(sub, start) if start == -1: return if whole_word: if start: pchar = string[start - 1] else: pchar = ' ' try: nchar = string[start + len(sub)] except IndexError: nchar = ' ' if nchar in DocumentWordsProvider.separators and \ pchar in DocumentWordsProvider.separators: yield start start += len(sub) else: yield start start += 1
Search occurrences using str.find instead of regular expressions. :param string: string to parse :param sub: search string :param whole_word: True to select whole words only
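With a small separator set standing in for `DocumentWordsProvider.separators` (an assumption — the real set is not shown), the scan is runnable on its own:

```python
SEPARATORS = set(" \t\n.,;:()[]{}")  # assumed stand-in for the provider's separator set

def finditer_noregex(string, sub, whole_word):
    """Yield start offsets of sub in string via str.find, no regex involved."""
    start = 0
    while True:
        start = string.find(sub, start)
        if start == -1:
            return
        if whole_word:
            # A match counts only when bounded by separators (or string edges).
            prev_char = string[start - 1] if start else ' '
            end = start + len(sub)
            next_char = string[end] if end < len(string) else ' '
            if prev_char in SEPARATORS and next_char in SEPARATORS:
                yield start
            start += len(sub)
        else:
            yield start
            start += 1

print(list(finditer_noregex("cat catalog cat", "cat", True)))   # [0, 12]
print(list(finditer_noregex("cat catalog cat", "cat", False)))  # [0, 4, 12]
```

Whole-word mode skips the embedded `cat` in `catalog`; substring mode reports it.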
def get_aligned_collection(self, value=0, data_type=None, unit=None, mutable=None): header = self._check_aligned_header(data_type, unit) values = self._check_aligned_value(value) if mutable is None: collection = self.__class__(header, values, self.datetimes) else: if self._enumeration is None: self._get_mutable_enumeration() if mutable is False: col_obj = self._enumeration['immutable'][self._collection_type] else: col_obj = self._enumeration['mutable'][self._collection_type] collection = col_obj(header, values, self.datetimes) collection._validated_a_period = self._validated_a_period return collection
Return a Collection aligned with this one composed of one repeated value. Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes. Args: value: A value to be repeated in the aliged collection values or A list of values that has the same length as this collection. Default: 0. data_type: The data type of the aligned collection. Default is to use the data type of this collection. unit: The unit of the aligned collection. Default is to use the unit of this collection or the base unit of the input data_type (if it exists). mutable: An optional Boolean to set whether the returned aligned collection is mutable (True) or immutable (False). The default is None, which will simply set the aligned collection to have the same mutability as the starting collection.
def _build_from(baseip): from ipaddress import ip_address try: ip_address(baseip) except ValueError: if 'http' not in baseip[0:4].lower(): baseip = urlunsplit(['http', baseip, '', '', '']) spl = urlsplit(baseip) if '.xml' not in spl.path: sep = '' if spl.path.endswith('/') else '/' spl = spl._replace(path=spl.path+sep+'description.xml') return spl.geturl() else: return urlunsplit(('http', baseip, '/description.xml', '', ''))
Build URL for description.xml from ip
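The branching can be traced with a stdlib-only sketch (the public name here is hypothetical — the original is a private helper):

```python
from ipaddress import ip_address
from urllib.parse import urlsplit, urlunsplit

def build_description_url(baseip):
    """Expand a bare IP or partial URL into a full description.xml URL."""
    try:
        ip_address(baseip)  # bare IP: fall through to the fast path below
    except ValueError:
        # Not a bare IP: treat it as a (possibly scheme-less) URL.
        if 'http' not in baseip[0:4].lower():
            baseip = urlunsplit(('http', baseip, '', '', ''))
        parts = urlsplit(baseip)
        if '.xml' not in parts.path:
            sep = '' if parts.path.endswith('/') else '/'
            parts = parts._replace(path=parts.path + sep + 'description.xml')
        return parts.geturl()
    return urlunsplit(('http', baseip, '/description.xml', '', ''))

print(build_description_url("192.168.0.10"))  # http://192.168.0.10/description.xml
print(build_description_url("myhub.local"))   # http://myhub.local/description.xml
```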
def unlink_user_account(self, id, provider, user_id): url = self._url('{}/identities/{}/{}'.format(id, provider, user_id)) return self.client.delete(url)
Unlink a user account Args: id (str): The user_id of the user identity. provider (str): The type of identity provider (e.g: facebook). user_id (str): The unique identifier for the user for the identity. See: https://auth0.com/docs/api/management/v2#!/Users/delete_user_identity_by_user_id
def switch(scm, to_branch, verbose, fake): scm.fake = fake scm.verbose = fake or verbose scm.repo_check() if to_branch is None: scm.display_available_branches() raise click.BadArgumentUsage('Please specify a branch to switch to') scm.stash_log() status_log(scm.checkout_branch, 'Switching to {0}.'.format( crayons.yellow(to_branch)), to_branch) scm.unstash_log()
Switches from one branch to another, safely stashing and restoring local changes.
def perform_request(self, request_type, *args, **kwargs): req = api_request(request_type, *args, **kwargs) res = self.send_request(req) return res
Create and send a request. `request_type` is the request type (string). This is used to look up a plugin, whose request class is instantiated and passed the remaining arguments passed to this function.
def worker_edit(worker, lbn, settings, profile='default'): settings['cmd'] = 'update' settings['mime'] = 'prop' settings['w'] = lbn settings['sw'] = worker return _do_http(settings, profile)['worker.result.type'] == 'OK'
Edit the worker settings Note: http://tomcat.apache.org/connectors-doc/reference/status.html Data Parameters for the standard Update Action CLI Examples: .. code-block:: bash salt '*' modjk.worker_edit node1 loadbalancer1 "{'vwf': 500, 'vwd': 60}" salt '*' modjk.worker_edit node1 loadbalancer1 "{'vwf': 500, 'vwd': 60}" other-profile
def get_app_names(self): app_names = set() for name in self.apps: app_names.add(name) return app_names
Return application names. Return the set of application names that are available in the database. Returns: set of str.
def split_state(self, state): if self.state_separator: return state.split(self.state_separator) return list(state)
Split state string. Parameters ---------- state : `str` Returns ------- `list` of `str`
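A minimal holder class (the class name is made up for this example) shows the two branches — split on the separator when one is set, otherwise fall back to per-character splitting:

```python
class StateSplitter:
    """Hypothetical minimal container for the split_state helper above."""

    def __init__(self, state_separator=None):
        self.state_separator = state_separator

    def split_state(self, state):
        if self.state_separator:
            return state.split(self.state_separator)
        return list(state)

print(StateSplitter(" ").split_state("a b c"))  # ['a', 'b', 'c']
print(StateSplitter().split_state("abc"))       # ['a', 'b', 'c']
```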
def _get_dis_func(self, union): union_types = union.__args__ if NoneType in union_types: union_types = tuple( e for e in union_types if e is not NoneType ) if not all(hasattr(e, "__attrs_attrs__") for e in union_types): raise ValueError( "Only unions of attr classes supported " "currently. Register a loads hook manually." ) return create_uniq_field_dis_func(*union_types)
Fetch or try creating a disambiguation function for a union.
def set_hostname(hostname=None, deploy=False): if not hostname: raise CommandExecutionError("Hostname option must not be none.") ret = {} query = {'type': 'config', 'action': 'set', 'xpath': '/config/devices/entry[@name=\'localhost.localdomain\']/deviceconfig/system', 'element': '<hostname>{0}</hostname>'.format(hostname)} ret.update(__proxy__['panos.call'](query)) if deploy is True: ret.update(commit()) return ret
Set the hostname of the Palo Alto proxy minion. A commit will be required before this is processed. Args: hostname (str): The hostname to set deploy (bool): If true then commit the full candidate configuration, if false only set pending change. CLI Example: .. code-block:: bash salt '*' panos.set_hostname newhostname salt '*' panos.set_hostname newhostname deploy=True
def resize(self, w, h): self.plane_w = w self.plane_h = h self.plane_ratio = self.char_ratio * w / h if self.crosshairs: self.crosshairs_coord = ((w + 2) // 2, (h + 2) // 2)
Used when resizing the plane, resets the plane ratio factor. :param w: New width of the visible section of the plane. :param h: New height of the visible section of the plane.
def _name_with_flags(self, include_restricted, title=None): name = "Special: " if self.special else "" name += self.name if title: name += " - {}".format(title) if include_restricted and self.restricted: name += " (R)" name += " (BB)" if self.both_blocks else "" name += " (A)" if self.administrative else "" name += " (S)" if self.sticky else "" name += " (Deleted)" if self.deleted else "" return name
Generate the name with flags.