doc_24400 | See Migration guide for more details. tf.compat.v1.raw_ops.TensorListSplit
tf.raw_ops.TensorListSplit(
tensor, element_shape, lengths, name=None
)
Splits a tensor into a list: list[i] corresponds to lengths[i] tensors from the input tensor. The tensor must have rank at least 1 and contain exactly sum(lengths) elements. The output handle is the resulting list.
Args
tensor A Tensor. The input tensor.
element_shape A Tensor. Must be one of the following types: int32, int64. A shape compatible with that of elements in the tensor.
lengths A Tensor of type int64. Vector of sizes of the 0th dimension of tensors in the list.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
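The splitting semantics can be sketched with NumPy (a hedged illustration only; the real op returns a variant-typed TensorList, which NumPy cannot represent):

```python
import numpy as np

# Sketch: list[i] receives lengths[i] rows of the input tensor.
tensor = np.arange(6.0)              # rank >= 1, exactly sum(lengths) elements
lengths = [2, 1, 3]
assert tensor.shape[0] == sum(lengths)

# np.split takes cut points, so convert lengths to cumulative offsets.
parts = np.split(tensor, np.cumsum(lengths)[:-1])
# parts[0] holds 2 elements, parts[1] holds 1, parts[2] holds 3
```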
doc_24401 | Used to create the config attribute by the Flask constructor. The instance_relative parameter is passed in from the constructor of Flask (there named instance_relative_config) and indicates if the config should be relative to the instance path or the root path of the application. Changelog New in version 0.8. Parameters
instance_relative (bool) –
Return type
flask.config.Config | |
doc_24402 | Find the spec for a module, optionally relative to the specified package name. If the module is in sys.modules, then sys.modules[name].__spec__ is returned (unless the spec would be None or is not set, in which case ValueError is raised). Otherwise a search using sys.meta_path is done. None is returned if no spec is found. If name is for a submodule (contains a dot), the parent module is automatically imported. name and package work the same as for import_module(). New in version 3.4. Changed in version 3.7: Raises ModuleNotFoundError instead of AttributeError if package is in fact not a package (i.e. lacks a __path__ attribute). | |
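The lookup behavior described above can be exercised directly with the stdlib:

```python
import importlib.util

# Look up the spec for a stdlib module; this searches sys.meta_path
# (or returns sys.modules[name].__spec__ if already imported).
spec = importlib.util.find_spec("json")

# A module that cannot be found yields None rather than raising.
missing = importlib.util.find_spec("no_such_module_hopefully")
```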
doc_24403 |
An array with ones at and below the given diagonal and zeros elsewhere. Parameters
N : int
Number of rows in the array.
M : int, optional
Number of columns in the array. By default, M is taken equal to N.
k : int, optional
The sub-diagonal at and below which the array is filled. k = 0 is the main diagonal, while k < 0 is below it, and k > 0 is above. The default is 0.
dtype : dtype, optional
Data type of the returned array. The default is float.
like : array_like
Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as like supports the __array_function__ protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns
tri : ndarray of shape (N, M)
Array with its lower triangle filled with ones and zero elsewhere; in other words T[i,j] == 1 for j <= i + k, 0 otherwise. Examples >>> np.tri(3, 5, 2, dtype=int)
array([[1, 1, 1, 0, 0],
[1, 1, 1, 1, 0],
[1, 1, 1, 1, 1]])
>>> np.tri(3, 5, -1)
array([[0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[1., 1., 0., 0., 0.]]) | |
doc_24404 | Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True
Returns
NamedTuple with missing_keys and unexpected_keys fields:
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys | |
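The missing/unexpected bookkeeping can be sketched in plain Python (an illustration of the key-matching idea, not PyTorch's actual implementation; the key names here are hypothetical):

```python
def match_keys(module_keys, state_dict_keys):
    # missing_keys: expected by the module but absent from state_dict
    missing = [k for k in module_keys if k not in state_dict_keys]
    # unexpected_keys: present in state_dict but unknown to the module
    unexpected = [k for k in state_dict_keys if k not in module_keys]
    return missing, unexpected

missing, unexpected = match_keys(
    ["fc.weight", "fc.bias"],           # keys the module expects
    ["fc.weight", "head.bias"],         # keys found in the checkpoint
)
```

With strict=True, any non-empty missing or unexpected list is an error; with strict=False both are simply reported.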
doc_24405 | Must be called if the programmer wants to use colors, and before any other color manipulation routine is called. It is good practice to call this routine right after initscr(). start_color() initializes eight basic colors (black, red, green, yellow, blue, magenta, cyan, and white), and two global variables in the curses module, COLORS and COLOR_PAIRS, containing the maximum number of colors and color-pairs the terminal can support. It also restores the colors on the terminal to the values they had when the terminal was just turned on. | |
doc_24406 | Returns the current abstract base class cache token. The token is an opaque object (that supports equality testing) identifying the current version of the abstract base class cache for virtual subclasses. The token changes with every call to ABCMeta.register() on any ABC. New in version 3.4. | |
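The token's invalidation behavior is easy to observe: registering a virtual subclass changes it.

```python
import abc

class Base(abc.ABC):
    pass

token_before = abc.get_cache_token()
Base.register(dict)          # registering a virtual subclass bumps the cache version
token_after = abc.get_cache_token()
# token_before and token_after compare unequal
```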
doc_24407 |
Context-manager that enables gradient calculation. Enables gradient calculation, if it has been disabled via no_grad or set_grad_enabled. This context manager is thread local; it will not affect computation in other threads. Also functions as a decorator. (Make sure to instantiate with parentheses.) Example: >>> x = torch.tensor([1], requires_grad=True)
>>> with torch.no_grad():
... with torch.enable_grad():
... y = x * 2
>>> y.requires_grad
True
>>> y.backward()
>>> x.grad
>>> @torch.enable_grad()
... def doubler(x):
... return x * 2
>>> with torch.no_grad():
... z = doubler(x)
>>> z.requires_grad
True | |
doc_24408 |
Wraps an arbitrary nn.Sequential module to train with synchronous pipeline parallelism. If the module requires lots of memory and doesn’t fit on a single GPU, pipeline parallelism is a useful technique to employ for training. The implementation is based on the torchgpipe paper. Pipe combines pipeline parallelism with checkpointing to reduce peak memory required to train while minimizing device under-utilization. You should place all the modules on the appropriate devices and wrap them into an nn.Sequential module defining the desired order of execution. Parameters
module (nn.Sequential) – sequential module to be parallelized using pipelining. Each module in the sequence has to have all of its parameters on a single device. Each module in the sequence has to either be an nn.Module or nn.Sequential (to combine multiple sequential modules on a single device)
chunks (int) – number of micro-batches (default: 1)
checkpoint (str) – when to enable checkpointing, one of 'always', 'except_last', or 'never' (default: 'except_last'). 'never' disables checkpointing completely, 'except_last' enables checkpointing for all micro-batches except the last one and 'always' enables checkpointing for all micro-batches.
deferred_batch_norm (bool) – whether to use deferred BatchNorm moving statistics (default: False). If set to True, we track statistics across multiple micro-batches to update the running statistics per mini-batch. Raises
TypeError – the module is not a nn.Sequential.
ValueError – invalid arguments Example::
Pipeline of two FC layers across GPUs 0 and 1. >>> fc1 = nn.Linear(16, 8).cuda(0)
>>> fc2 = nn.Linear(8, 4).cuda(1)
>>> model = nn.Sequential(fc1, fc2)
>>> model = Pipe(model, chunks=8)
>>> input = torch.rand(16, 16).cuda(0)
>>> output_rref = model(input)
Note You can wrap a Pipe model with torch.nn.parallel.DistributedDataParallel only when the checkpoint parameter of Pipe is 'never'. Note Pipe only supports intra-node pipelining currently, but will be expanded to support inter-node pipelining in the future. The forward function returns an RRef to allow for inter-node pipelining in the future, where the output might be on a remote host. For intra-node pipelining you can use local_value() to retrieve the output locally. Warning Pipe is experimental and subject to change.
forward(input) [source]
Processes a single input mini-batch through the pipe and returns an RRef pointing to the output. Pipe is a fairly transparent module wrapper. It doesn’t modify the input and output signature of the underlying module. But there is a type restriction: input and output have to be a Tensor or a sequence of tensors. This restriction is applied at partition boundaries too. The input tensor is split into multiple micro-batches based on the chunks parameter used to initialize Pipe. The batch size is assumed to be the first dimension of the tensor and if the batch size is less than chunks, the number of micro-batches is equal to the batch size. Parameters
input (torch.Tensor or sequence of Tensor) – input mini-batch Returns
RRef to the output of the mini-batch Raises
TypeError – input is not a tensor or sequence of tensors. | |
doc_24409 | Send an HTTPS request, which can be either GET or POST, depending on req.has_data(). | |
doc_24410 | See Migration guide for more details. tf.compat.v1.nn.log_softmax
tf.compat.v1.math.log_softmax(
logits, axis=None, name=None, dim=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (dim). They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead For each batch i and class j we have logsoftmax = logits - log(reduce_sum(exp(logits), axis))
Args
logits A non-empty Tensor. Must be one of the following types: half, float32, float64.
axis The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
name A name for the operation (optional).
dim Deprecated alias for axis.
Returns A Tensor. Has the same type as logits. Same shape as logits.
Raises
InvalidArgumentError if logits is empty or axis is beyond the last dimension of logits. | |
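The formula above can be sketched in NumPy (an illustration of the math, not the TensorFlow kernel; the max-subtraction is a standard stability trick that leaves the result unchanged mathematically):

```python
import numpy as np

def log_softmax(logits, axis=-1):
    # logsoftmax = logits - log(reduce_sum(exp(logits), axis)),
    # computed with the row max subtracted for numerical stability
    shifted = logits - np.max(logits, axis=axis, keepdims=True)
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))

x = np.array([[1.0, 2.0, 3.0]])
out = log_softmax(x)
# exponentiating the result gives probabilities that sum to 1 along the axis
```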
doc_24411 |
Print the compiler customizations to stdout. Parameters
None
Returns
None
Notes Printing is only done if the distutils log threshold is < 2. | |
doc_24412 |
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y) Returns
K_diag : ndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X) | |
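For a concrete case of why the diagonal is cheaper, consider an RBF kernel, where k(x, x) = exp(0) = 1 for every sample. This NumPy sketch (hypothetical helper names, not the scikit-learn API) shows the O(n) diagonal agreeing with np.diag of the full O(n²) matrix:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # full kernel matrix k(X, Y): exp(-gamma * squared distance)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rbf_diag(X, gamma=1.0):
    # k(x, x) = exp(0) = 1 for every row: no pairwise distances needed
    return np.ones(X.shape[0])

X = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
```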
doc_24413 |
Set the color of the line. Parameters
color : color | |
doc_24414 | setName()
Old getter/setter API for name; use it directly as a property instead. | |
doc_24415 |
Returns a clone of self with given hyperparameters theta. Parameters
theta : ndarray of shape (n_dims,)
The hyperparameters | |
doc_24416 | See Migration guide for more details. tf.compat.v1.raw_ops.InfeedDequeue
tf.raw_ops.InfeedDequeue(
dtype, shape, name=None
)
Args
dtype A tf.DType. The type of elements in the tensor.
shape A tf.TensorShape or list of ints. The shape of the tensor.
name A name for the operation (optional).
Returns A Tensor of type dtype. | |
doc_24417 | Return the canonical encoding of the argument. Currently, the encoding of a Decimal instance is always canonical, so this operation returns its argument unchanged. | |
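A quick demonstration that the operation returns its argument unchanged:

```python
from decimal import Decimal

d = Decimal("1.50")
# Decimal encodings are already canonical, so canonical() is the identity here.
c = d.canonical()
```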
doc_24418 | The maximum size (in bytes) of socket buffer usage for this user. This limits the amount of network memory, and hence the amount of mbufs, that this user may hold at any time. Availability: FreeBSD 9 or later. New in version 3.4. | |
doc_24419 | os.O_WRONLY
os.O_RDWR
os.O_APPEND
os.O_CREAT
os.O_EXCL
os.O_TRUNC
The above constants are available on Unix and Windows. | |
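These flags are combined with bitwise OR and passed to os.open(). A short sketch (the file path here is a temporary one created for illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# O_CREAT creates the file, O_WRONLY opens it write-only,
# O_EXCL makes the open fail if the file already exists.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
os.write(fd, b"hello")
os.close(fd)

# A second O_EXCL open raises FileExistsError because the file now exists.
try:
    os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
    raised = False
except FileExistsError:
    raised = True
```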
doc_24420 | The Sniffer class is used to deduce the format of a CSV file. The Sniffer class provides two methods:
sniff(sample, delimiters=None)
Analyze the given sample and return a Dialect subclass reflecting the parameters found. If the optional delimiters parameter is given, it is interpreted as a string containing possible valid delimiter characters.
has_header(sample)
Analyze the sample text (presumed to be in CSV format) and return True if the first row appears to be a series of column headers. | |
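Both methods in action on a small comma-delimited sample:

```python
import csv

sample = "name,age\nalice,30\nbob,25\n"

# sniff() returns a Dialect subclass describing the sample's format.
dialect = csv.Sniffer().sniff(sample)

# has_header() heuristically decides whether the first row is a header;
# here "age" vs. the numeric values below it suggests a header row.
has_header = csv.Sniffer().has_header(sample)
```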
doc_24421 |
The gated linear unit. Computes: GLU(a, b) = a ⊗ σ(b)
where the input is split in half along dim to form a and b, σ is the sigmoid function and ⊗ is the element-wise product between matrices. See Language Modeling with Gated Convolutional Networks. Parameters
input (Tensor) – input tensor
dim (int) – dimension on which to split the input. Default: -1 | |
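The computation can be sketched in NumPy (an illustration of the formula, not the PyTorch kernel):

```python
import numpy as np

def glu(x, axis=-1):
    # split the input in half along `axis` to form a and b,
    # then return a * sigmoid(b) element-wise
    a, b = np.split(x, 2, axis=axis)
    return a * (1.0 / (1.0 + np.exp(-b)))

x = np.array([[1.0, 2.0, 0.0, 0.0]])   # a = [1, 2], b = [0, 0]
out = glu(x)                            # sigmoid(0) = 0.5, so out = [[0.5, 1.0]]
```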
doc_24422 | A string describing the name of the field on the user model that is used as the unique identifier. This will usually be a username of some kind, but it can also be an email address, or any other unique identifier. The field must be unique (i.e., have unique=True set in its definition), unless you use a custom authentication backend that can support non-unique usernames. In the following example, the field identifier is used as the identifying field: class MyUser(AbstractBaseUser):
identifier = models.CharField(max_length=40, unique=True)
...
USERNAME_FIELD = 'identifier' | |
doc_24423 |
Calculate the rolling minimum. Parameters
*args
For NumPy compatibility and will not have an effect on the result.
engine : str, default None
'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or the global setting compute.use_numba New in version 1.3.0.
engine_kwargs : dict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs
For NumPy compatibility and will not have an effect on the result. Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling
Calling rolling with Series data. pandas.DataFrame.rolling
Calling rolling with DataFrames. pandas.Series.min
Aggregating min for Series. pandas.DataFrame.min
Aggregating min for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine. Examples Performing a rolling minimum with a window size of 3.
>>> s = pd.Series([4, 3, 5, 2, 6])
>>> s.rolling(3).min()
0 NaN
1 NaN
2 3.0
3 2.0
4 2.0
dtype: float64 | |
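The rolling minimum on the example series above can be sketched in plain NumPy (a hand-rolled illustration, not the pandas implementation, which also supports the Cython/Numba engines):

```python
import numpy as np

def rolling_min(values, window):
    # positions before the first complete window are NaN, as in pandas
    out = np.full(len(values), np.nan)
    for i in range(window - 1, len(values)):
        out[i] = values[i - window + 1 : i + 1].min()
    return out

result = rolling_min(np.array([4.0, 3.0, 5.0, 2.0, 6.0]), 3)
# reproduces the Series example: [nan, nan, 3.0, 2.0, 2.0]
```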
doc_24424 | Return the cosine of x radians. | |
doc_24425 | Create a subprocess. The limit argument sets the buffer limit for StreamReader wrappers for Process.stdout and Process.stderr (if subprocess.PIPE is passed to stdout and stderr arguments). Return a Process instance. See the documentation of loop.subprocess_exec() for other parameters. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. | |
doc_24426 | tf.compat.v1.data.make_one_shot_iterator(
dataset
)
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not support re-initialization.
Args
dataset A tf.data.Dataset.
Returns A tf.data.Iterator for elements of dataset. | |
doc_24427 |
Register a custom accessor on Index objects. Parameters
name : str
Name under which the accessor should be registered. A warning is issued if this name conflicts with a preexisting attribute. Returns
callable
A class decorator. See also register_dataframe_accessor
Register a custom accessor on DataFrame objects. register_series_accessor
Register a custom accessor on Series objects. register_index_accessor
Register a custom accessor on Index objects. Notes When accessed, your accessor will be initialized with the pandas object the user is interacting with. So the signature must be
def __init__(self, pandas_object): # noqa: E999
...
For consistency with pandas methods, you should raise an AttributeError if the data passed to your accessor has an incorrect dtype.
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
Examples In your library code:
import pandas as pd
@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
def __init__(self, pandas_obj):
self._obj = pandas_obj
@property
def center(self):
# return the geographic center point of this DataFrame
lat = self._obj.latitude
lon = self._obj.longitude
return (float(lon.mean()), float(lat.mean()))
def plot(self):
# plot this array's data on a map, e.g., using Cartopy
pass
Back in an interactive IPython session:
In [1]: ds = pd.DataFrame({"longitude": np.linspace(0, 10),
...: "latitude": np.linspace(0, 20)})
In [2]: ds.geo.center
Out[2]: (5.0, 10.0)
In [3]: ds.geo.plot() # plots data on a map | |
doc_24428 | Called right before the application context is popped. When handling a request, the application context is popped after the request context. See do_teardown_request(). This calls all functions decorated with teardown_appcontext(). Then the appcontext_tearing_down signal is sent. This is called by AppContext.pop(). Changelog New in version 0.9. Parameters
exc (Optional[BaseException]) –
Return type
None | |
doc_24429 | See Migration guide for more details. tf.compat.v1.image.random_jpeg_quality
tf.image.random_jpeg_quality(
image, min_jpeg_quality, max_jpeg_quality, seed=None
)
min_jpeg_quality must be in the interval [0, 100] and less than max_jpeg_quality. max_jpeg_quality must be in the interval [0, 100]. Usage Example:
x = [[[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0],
[10.0, 11.0, 12.0]]]
tf.image.random_jpeg_quality(x, 75, 95)
<tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=...>
Args
image 3D image. Size of the last dimension must be 1 or 3.
min_jpeg_quality Minimum jpeg encoding quality to use.
max_jpeg_quality Maximum jpeg encoding quality to use.
seed An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set_random_seed for its interaction with the graph-level random seed.
Returns Adjusted image(s), same shape and DType as image.
Raises
ValueError if min_jpeg_quality or max_jpeg_quality is invalid. | |
doc_24430 | Returns an HTML string with all help texts in an <ul>. This is helpful when adding password validation to forms, as you can pass the output directly to the help_text parameter of a form field. | |
doc_24431 | Same as the get_db_prep_value(), but called when the field value must be saved to the database. By default returns get_db_prep_value(). | |
doc_24432 |
Return Index with duplicate values removed. Parameters
keep : {‘first’, ‘last’, False}, default ‘first’
‘first’ : Drop duplicates except for the first occurrence. ‘last’ : Drop duplicates except for the last occurrence. False : Drop all duplicates. Returns
deduplicated : Index
See also Series.drop_duplicates
Equivalent method on Series. DataFrame.drop_duplicates
Equivalent method on DataFrame. Index.duplicated
Related method on Index, indicating duplicate Index values. Examples Generate a pandas.Index with duplicate values.
>>> idx = pd.Index(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'])
The keep parameter controls which duplicate values are removed. The value ‘first’ keeps the first occurrence for each set of duplicated entries. The default value of keep is ‘first’.
>>> idx.drop_duplicates(keep='first')
Index(['lama', 'cow', 'beetle', 'hippo'], dtype='object')
The value ‘last’ keeps the last occurrence for each set of duplicated entries.
>>> idx.drop_duplicates(keep='last')
Index(['cow', 'beetle', 'lama', 'hippo'], dtype='object')
The value False discards all sets of duplicated entries.
>>> idx.drop_duplicates(keep=False)
Index(['cow', 'beetle', 'hippo'], dtype='object') | |
doc_24433 | tf.compat.v1.while_loop(
cond, body, loop_vars, shape_invariants=None, parallel_iterations=10,
back_prop=True, swap_memory=False, name=None, maximum_iterations=None,
return_same_structure=False
)
cond is a callable returning a boolean scalar tensor. body is a callable returning a (possibly nested) tuple, namedtuple or list of tensors of the same arity (length and structure) and types as loop_vars. loop_vars is a (possibly nested) tuple, namedtuple or list of tensors that is passed to both cond and body. cond and body both take as many arguments as there are loop_vars. In addition to regular Tensors or IndexedSlices, the body may accept and return TensorArray objects. The flows of the TensorArray objects will be appropriately forwarded between loops and during gradient calculations. Note that while_loop calls cond and body exactly once (inside the call to while_loop, and not at all during Session.run()). while_loop stitches together the graph fragments created during the cond and body calls with some additional graph nodes to create the graph flow that repeats body until cond returns false. For correctness, tf.while_loop() strictly enforces shape invariants for the loop variables. A shape invariant is a (possibly partial) shape that is unchanged across the iterations of the loop. An error will be raised if the shape of a loop variable after an iteration is determined to be more general than or incompatible with its shape invariant. For example, a shape of [11, None] is more general than a shape of [11, 17], and [11, 21] is not compatible with [11, 17]. By default (if the argument shape_invariants is not specified), it is assumed that the initial shape of each tensor in loop_vars is the same in every iteration. The shape_invariants argument allows the caller to specify a less specific shape invariant for each loop variable, which is needed if the shape varies between iterations. The tf.Tensor.set_shape function may also be used in the body function to indicate that the output loop variable has a particular shape. 
The shape invariant for SparseTensor and IndexedSlices are treated specially as follows: a) If a loop variable is a SparseTensor, the shape invariant must be TensorShape([r]) where r is the rank of the dense tensor represented by the sparse tensor. It means the shapes of the three tensors of the SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here is the shape of the SparseTensor.dense_shape property. It must be the shape of a vector. b) If a loop variable is an IndexedSlices, the shape invariant must be a shape invariant of the values tensor of the IndexedSlices. It means the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], [shape.ndims]). while_loop implements non-strict semantics, enabling multiple iterations to run in parallel. The maximum number of parallel iterations can be controlled by parallel_iterations, which gives users some control over memory consumption and execution order. For correct programs, while_loop should return the same result for any parallel_iterations > 0. For training, TensorFlow stores the tensors that are produced in the forward inference and are needed in back propagation. These tensors are a main source of memory consumption and often cause OOM errors when training on GPUs. When the flag swap_memory is true, we swap out these tensors from GPU to CPU. This for example allows us to train RNN models with very long sequences and large batches.
Args
cond A callable that represents the termination condition of the loop.
body A callable that represents the loop body.
loop_vars A (possibly nested) tuple, namedtuple or list of numpy array, Tensor, and TensorArray objects.
shape_invariants The shape invariants for the loop variables.
parallel_iterations The number of iterations allowed to run in parallel. It must be a positive integer.
back_prop Whether backprop is enabled for this while loop.
swap_memory Whether GPU-CPU memory swap is enabled for this loop.
name Optional name prefix for the returned tensors.
maximum_iterations Optional maximum number of iterations of the while loop to run. If provided, the cond output is AND-ed with an additional condition ensuring the number of iterations executed is no greater than maximum_iterations.
return_same_structure If True, output has same structure as loop_vars. If eager execution is enabled, this is ignored (and always treated as True).
Returns The output tensors for the loop variables after the loop. If return_same_structure is True, the return value has the same structure as loop_vars. If return_same_structure is False, the return value is a Tensor, TensorArray or IndexedSlice if the length of loop_vars is 1, or a list otherwise.
Raises
TypeError if cond or body is not callable.
ValueError if loop_vars is empty. Example: i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
Example with nesting and a namedtuple: import collections
Pair = collections.namedtuple('Pair', 'j, k')
ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2)))
c = lambda i, p: i < 10
b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k)))
ijk_final = tf.while_loop(c, b, ijk_0)
Example using shape_invariants: i0 = tf.constant(0)
m0 = tf.ones([2, 2])
c = lambda i, m: i < 10
b = lambda i, m: [i+1, tf.concat([m, m], axis=0)]
tf.while_loop(
c, b, loop_vars=[i0, m0],
shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])
Example which demonstrates non-strict semantics: In the following example, the final value of the counter i does not depend on x. So the while_loop can increment the counter parallel to updates of x. However, because the loop counter at one loop iteration depends on the value at the previous iteration, the loop counter itself cannot be incremented in parallel. Hence if we just want the final value of the counter (which we print on the line print(sess.run(i))), then x will never be incremented, but the counter will be updated on a single thread. Conversely, if we want the value of the output (which we print on the line print(sess.run(out).shape)), then the counter may be incremented on its own thread, while x can be incremented in parallel on a separate thread. In the extreme case, it is conceivable that the thread incrementing the counter runs until completion before x is incremented even a single time. The only thing that can never happen is that the thread updating x can never get ahead of the counter thread because the thread incrementing x depends on the value of the counter. import tensorflow as tf
n = 10000
x = tf.constant(list(range(n)))
c = lambda i, x: i < n
b = lambda i, x: (tf.compat.v1.Print(i + 1, [i]), tf.compat.v1.Print(x + 1,
[i], "x:"))
i, out = tf.while_loop(c, b, (0, x))
with tf.compat.v1.Session() as sess:
print(sess.run(i)) # prints [0] ... [9999]
# The following line may increment the counter and x in parallel.
# The counter thread may get ahead of the other thread, but not the
# other way around. So you may see things like
# [9996] x:[9987]
# meaning that the counter thread is on iteration 9996,
# while the other thread is on iteration 9987
print(sess.run(out).shape) | |
doc_24434 | Expand this tensor to the same size as other. self.expand_as(other) is equivalent to self.expand(other.size()). Please see expand() for more information about expand. Parameters
other (torch.Tensor) – The result tensor has the same size as other. | |
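The analogous NumPy operation is np.broadcast_to, shown here as a sketch of the broadcasting semantics (not the torch API; torch's expand also avoids copying data):

```python
import numpy as np

a = np.array([[1], [2], [3]])                # shape (3, 1)
other = np.zeros((3, 4))

# Broadcast `a` to other's shape, mirroring a.expand_as(other):
# each singleton dimension is stretched without copying the data.
expanded = np.broadcast_to(a, other.shape)   # shape (3, 4)
```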
doc_24435 |
Convert times to midnight. The time component of the date-time is converted to midnight, i.e. 00:00:00. This is useful in cases when the time does not matter. Length is unaltered. The timezones are unaffected. This method is available on Series with datetime values under the .dt accessor, and directly on Datetime Array/Index. Returns
DatetimeArray, DatetimeIndex or Series
The same type as the original data. Series will have the same name and index. DatetimeIndex will have the same name. See also floor
Floor the datetimes to the specified freq. ceil
Ceil the datetimes to the specified freq. round
Round the datetimes to the specified freq. Examples
>>> idx = pd.date_range(start='2014-08-01 10:00', freq='H',
... periods=3, tz='Asia/Calcutta')
>>> idx
DatetimeIndex(['2014-08-01 10:00:00+05:30',
'2014-08-01 11:00:00+05:30',
'2014-08-01 12:00:00+05:30'],
dtype='datetime64[ns, Asia/Calcutta]', freq='H')
>>> idx.normalize()
DatetimeIndex(['2014-08-01 00:00:00+05:30',
'2014-08-01 00:00:00+05:30',
'2014-08-01 00:00:00+05:30'],
dtype='datetime64[ns, Asia/Calcutta]', freq=None) | |
doc_24436 | torch.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False) [source]
Quantize the input float model with post training static quantization. First it will prepare the model for calibration, then it calls run_fn which will run the calibration step, after that we will convert the model to a quantized model. Parameters
model – input float model
run_fn – a calibration function for calibrating the prepared model
run_args – positional arguments for run_fn
inplace – carry out model transformations in-place, the original module is mutated
mapping – correspondence between original module types and quantized counterparts Returns
Quantized model.
torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8, mapping=None, inplace=False) [source]
Converts a float model to dynamic (i.e. weights-only) quantized model. Replaces specified modules with dynamic weight-only quantized versions and output the quantized model. For simplest usage provide dtype argument that can be float16 or qint8. Weight-only quantization by default is performed for layers with large weights size - i.e. Linear and RNN variants. Fine grained control is possible with qconfig and mapping that act similarly to quantize(). If qconfig is provided, the dtype argument is ignored. Parameters
model – input model
qconfig_spec –
Either: A dictionary that maps from name or type of submodule to quantization configuration, qconfig applies to all submodules of a given module unless qconfig for the submodules are specified (when the submodule already has qconfig attribute). Entries in the dictionary need to be QConfigDynamic instances. A set of types and/or submodule names to apply dynamic quantization to, in which case the dtype argument is used to specify the bit-width
inplace – carry out model transformations in-place, the original module is mutated
mapping – maps type of a submodule to a type of corresponding dynamically quantized version with which the submodule needs to be replaced
torch.quantization.quantize_qat(model, run_fn, run_args, inplace=False) [source]
Do quantization aware training and output a quantized model Parameters
model – input model
run_fn – a function for evaluating the prepared model, can be a function that simply runs the prepared model or a training loop
run_args – positional arguments for run_fn
Returns
Quantized model.
torch.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None) [source]
Prepares a copy of the model for quantization calibration or quantization-aware training. Quantization configuration should be assigned preemptively to individual submodules in .qconfig attribute. The model will be attached with observer or fake quant modules, and qconfig will be propagated. Parameters
model – input model to be modified in-place
inplace – carry out model transformations in-place, the original module is mutated
allow_list – list of quantizable modules
observer_non_leaf_module_list – list of non-leaf modules we want to add observer
prepare_custom_config_dict – customization configuration dictionary for prepare function # Example of prepare_custom_config_dict:
prepare_custom_config_dict = {
# user will manually define the corresponding observed
# module class which has a from_float class method that converts
# float custom module to observed custom module
"float_to_observed_custom_module_class": {
CustomModule: ObservedCustomModule
}
}
torch.quantization.prepare_qat(model, mapping=None, inplace=False) [source]
Prepares a copy of the model for quantization calibration or quantization-aware training and converts it to quantized version. Quantization configuration should be assigned preemptively to individual submodules in .qconfig attribute. Parameters
model – input model to be modified in-place
mapping – dictionary that maps float modules to quantized modules to be replaced.
inplace – carry out model transformations in-place, the original module is mutated
torch.quantization.convert(module, mapping=None, inplace=False, remove_qconfig=True, convert_custom_config_dict=None) [source]
Converts submodules in input module to a different module according to mapping by calling from_float method on the target module class. And remove qconfig at the end if remove_qconfig is set to True. Parameters
module – prepared and calibrated module
mapping – a dictionary that maps from source module type to target module type, can be overwritten to allow swapping user defined Modules
inplace – carry out model transformations in-place, the original module is mutated
convert_custom_config_dict – custom configuration dictionary for convert function # Example of convert_custom_config_dict:
convert_custom_config_dict = {
# user will manually define the corresponding quantized
# module class which has a from_observed class method that converts
# observed custom module to quantized custom module
"observed_to_quantized_custom_module_class": {
ObservedCustomModule: QuantizedCustomModule
}
}
class torch.quantization.QConfig [source]
Describes how to quantize a layer or a part of the network by providing settings (observer classes) for activations and weights respectively. Note that QConfig needs to contain observer classes (like MinMaxObserver) or a callable that returns instances on invocation, not the concrete observer instances themselves. The quantization preparation function will instantiate observers multiple times for each of the layers. Observer classes usually have reasonable default arguments, but they can be overridden with the with_args method (which behaves like functools.partial): my_qconfig = QConfig(activation=MinMaxObserver.with_args(dtype=torch.qint8), weight=default_observer.with_args(dtype=torch.qint8))
class torch.quantization.QConfigDynamic [source]
Describes how to dynamically quantize a layer or a part of the network by providing settings (observer classes) for weights. It’s like QConfig, but for dynamic quantization. Note that QConfigDynamic needs to contain observer classes (like MinMaxObserver) or a callable that returns instances on invocation, not the concrete observer instances themselves. The quantization function will instantiate observers multiple times for each of the layers. Observer classes usually have reasonable default arguments, but they can be overridden with the with_args method (which behaves like functools.partial): my_qconfig = QConfigDynamic(weight=default_observer.with_args(dtype=torch.qint8))
Preparing model for quantization
torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=<function fuse_known_modules>, fuse_custom_config_dict=None) [source]
Fuses a list of modules into a single module. Fuses only the following sequences of modules: conv, bn; conv, bn, relu; conv, relu; linear, relu; bn, relu. All other sequences are left unchanged. For these sequences, the first item in the list is replaced with the fused module and the remaining modules are replaced with identity. Parameters
model – Model containing the modules to be fused
modules_to_fuse – list of list of module names to fuse. Can also be a list of strings if there is only a single list of modules to fuse.
inplace – bool specifying if fusion happens in place on the model, by default a new model is returned
fuser_func – Function that takes in a list of modules and outputs a list of fused modules of the same length. For example, fuser_func([convModule, BNModule]) returns the list [ConvBNModule, nn.Identity()] Defaults to torch.quantization.fuse_known_modules
fuse_custom_config_dict – custom configuration for fusion # Example of fuse_custom_config_dict
fuse_custom_config_dict = {
# Additional fuser_method mapping
"additional_fuser_method_mapping": {
(torch.nn.Conv2d, torch.nn.BatchNorm2d): fuse_conv_bn
},
}
Returns
model with fused modules. A new copy is created if inplace=False. Examples: >>> m = myModel()
>>> # m is a module containing the sub-modules below
>>> modules_to_fuse = [ ['conv1', 'bn1', 'relu1'], ['submodule.conv', 'submodule.relu']]
>>> fused_m = torch.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)
>>> m = myModel()
>>> # Alternately provide a single list of modules to fuse
>>> modules_to_fuse = ['conv1', 'bn1', 'relu1']
>>> fused_m = torch.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)
class torch.quantization.QuantStub(qconfig=None) [source]
Quantize stub module. Before calibration this behaves the same as an observer; it will be swapped to nnq.Quantize in convert. Parameters
qconfig – quantization configuration for the tensor, if qconfig is not provided, we will get qconfig from parent modules
class torch.quantization.DeQuantStub [source]
Dequantize stub module. Before calibration this behaves the same as identity; it will be swapped to nnq.DeQuantize in convert.
class torch.quantization.QuantWrapper(module) [source]
A wrapper class that wraps the input module, adds QuantStub and DeQuantStub, and surrounds the call to the module with calls to the quant and dequant modules. This is used by the quantization utility functions to add the quant and dequant modules. Before convert, QuantStub is just an observer that observes the input tensor; after convert, QuantStub is swapped to nnq.Quantize, which performs the actual quantization. Similarly for DeQuantStub.
torch.quantization.add_quant_dequant(module) [source]
Wrap the leaf child modules in QuantWrapper if they have a valid qconfig. Note that this function will modify the children of the module in place, and it can also return a new module which wraps the input module. Parameters
module – input module with qconfig attributes for all the leaf modules that we want to quantize Returns
Either the inplace modified module with submodules wrapped in QuantWrapper based on qconfig or a new QuantWrapper module which wraps the input module, the latter case only happens when the input module is a leaf module and we want to quantize it.
Utility functions
torch.quantization.add_observer_(module, qconfig_propagation_list=None, non_leaf_module_list=None, device=None, custom_module_class_mapping=None) [source]
Add observers for the leaf children of the module. This function inserts an observer module into every leaf child module that has a valid qconfig attribute. Parameters
module – input module with qconfig attributes for all the leaf modules that we want to quantize
device – parent device, if any
non_leaf_module_list – list of non-leaf modules we want to add observers to Returns
None, module is modified inplace with added observer modules and forward_hooks
torch.quantization.swap_module(mod, mapping, custom_module_class_mapping) [source]
Swaps the module if it has a quantized counterpart and it has an observer attached. Parameters
mod – input module
mapping – a dictionary that maps from nn module to nnq module Returns
The corresponding quantized module of mod
torch.quantization.propagate_qconfig_(module, qconfig_dict=None, allow_list=None) [source]
Propagate qconfig through the module hierarchy and assign qconfig attribute on each leaf module Parameters
module – input module
qconfig_dict – dictionary that maps from name or type of submodule to quantization configuration, qconfig applies to all submodules of a given module unless qconfig for the submodules are specified (when the submodule already has qconfig attribute) Returns
None, module is modified inplace with qconfig attached
torch.quantization.default_eval_fn(model, calib_data) [source]
Default evaluation function that takes a torch.utils.data.Dataset or a list of input Tensors and runs the model on the dataset.
Observers
class torch.quantization.ObserverBase(dtype) [source]
Base observer Module. Any observer implementation should derive from this class. Concrete observers should follow the same API. In forward, they will update the statistics of the observed Tensor. And they should provide a calculate_qparams function that computes the quantization parameters given the collected statistics. Parameters
dtype – Quantized data type
classmethod with_args(**kwargs)
Wrapper that allows creation of class factories. This can be useful when there is a need to create classes with the same constructor arguments, but different instances. Example: >>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)
>>> foo_instance1 = foo_builder()
>>> foo_instance2 = foo_builder()
>>> id(foo_instance1) == id(foo_instance2)
False
class torch.quantization.MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running min and max values. This observer uses the tensor min/max statistics to compute the quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. Given the running min/max $x_\text{min}$ and $x_\text{max}$, the scale $s$ and zero point $z$ are computed as follows. The running minimum/maximum is updated as:

$$x_\text{min} = \begin{cases} \min(X) & \text{if } x_\text{min} = \text{None} \\ \min\left(x_\text{min}, \min(X)\right) & \text{otherwise} \end{cases} \qquad x_\text{max} = \begin{cases} \max(X) & \text{if } x_\text{max} = \text{None} \\ \max\left(x_\text{max}, \max(X)\right) & \text{otherwise} \end{cases}$$

where $X$ is the observed tensor. The scale $s$ and zero point $z$ are then computed as:

$$\text{if symmetric:} \quad s = 2 \max(|x_\text{min}|, x_\text{max}) / \left(Q_\text{max} - Q_\text{min}\right), \qquad z = \begin{cases} 0 & \text{if dtype is qint8} \\ 128 & \text{otherwise} \end{cases}$$

$$\text{otherwise:} \quad s = \left(x_\text{max} - x_\text{min}\right) / \left(Q_\text{max} - Q_\text{min}\right), \qquad z = Q_\text{min} - \text{round}(x_\text{min} / s)$$

where $Q_\text{min}$ and $Q_\text{max}$ are the minimum and maximum of the quantized data type. Warning Only works with torch.per_tensor_symmetric quantization scheme Warning dtype can only take torch.qint8 or torch.quint8. Note If the running minimum equals the running maximum, the scale and zero_point are set to 1.0 and 0.
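As an illustration, the affine branch of these formulas can be reproduced in plain Python. This is a toy sketch, not torch's MinMaxObserver.calculate_qparams, and the helper name is made up:

```python
def affine_qparams(x_min, x_max, q_min=0, q_max=255):
    """Toy implementation of the affine branch of the formulas above:
    s = (x_max - x_min) / (Q_max - Q_min), z = Q_min - round(x_min / s)."""
    if x_min == x_max:                 # degenerate range, as the Note says
        return 1.0, 0
    scale = (x_max - x_min) / (q_max - q_min)
    zero_point = q_min - round(x_min / scale)
    # Keep the zero point inside the representable quantized range.
    return scale, max(q_min, min(q_max, zero_point))

scale, zp = affine_qparams(-1.0, 3.0)
```

For a float range of [-1, 3] mapped onto quint8's [0, 255], the scale is 4/255 and the zero point lands at 64.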
class torch.quantization.MovingAverageMinMaxObserver(averaging_constant=0.01, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the moving average of the min and max values. This observer computes the quantization parameters based on the moving averages of minimums and maximums of the incoming tensors. The module records the average minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
averaging_constant – Averaging constant for min/max.
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The moving average min/max is computed as follows:

$$x_\text{min} = \begin{cases} \min(X) & \text{if } x_\text{min} = \text{None} \\ (1 - c)\, x_\text{min} + c \min(X) & \text{otherwise} \end{cases} \qquad x_\text{max} = \begin{cases} \max(X) & \text{if } x_\text{max} = \text{None} \\ (1 - c)\, x_\text{max} + c \max(X) & \text{otherwise} \end{cases}$$

where $x_\text{min/max}$ is the running average min/max, $X$ is the incoming tensor, and $c$ is the averaging_constant. The scale and zero point are then computed as in MinMaxObserver. Note Only works with torch.per_tensor_affine quantization scheme. Note If the running minimum equals the running maximum, the scale and zero_point are set to 1.0 and 0.
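The moving-average update itself is simple enough to sketch in plain Python (a hypothetical helper, not the torch observer code):

```python
def update_min_max(x_min, x_max, batch, c=0.01):
    """Toy moving-average min/max update mirroring the formula above:
    the first batch initializes the state; later batches blend in with
    weight c (the averaging_constant)."""
    b_min, b_max = min(batch), max(batch)
    if x_min is None:                  # "if x_min = None" branch
        return b_min, b_max
    return (1 - c) * x_min + c * b_min, (1 - c) * x_max + c * b_max

lo, hi = update_min_max(None, None, [0.0, 1.0])          # -> (0.0, 1.0)
lo, hi = update_min_max(lo, hi, [-1.0, 2.0], c=0.5)
```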
class torch.quantization.PerChannelMinMaxObserver(ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
ch_axis – Channel axis
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The quantization parameters are computed the same way as in MinMaxObserver, with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note If the running minimum equals the running maximum, the scales and zero_points are set to 1.0 and 0.
class torch.quantization.MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
averaging_constant – Averaging constant for min/max.
ch_axis – Channel axis
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The quantization parameters are computed the same way as in MovingAverageMinMaxObserver, with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note If the running minimum equals the running maximum, the scales and zero_points are set to 1.0 and 0.
class torch.quantization.HistogramObserver(bins=2048, upsample_rate=128, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False) [source]
The module records the running histogram of tensor values along with min/max values. calculate_qparams will calculate scale and zero_point. Parameters
bins – Number of bins to use for the histogram
upsample_rate – Factor by which the histograms are upsampled, this is used to interpolate histograms with varying ranges across observations
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
The scale and zero point are computed as follows:
1. Create the histogram of the incoming inputs. The histogram is computed continuously, and the ranges per bin change with every new tensor observed.
2. Search the distribution in the histogram for optimal min/max values. The search for the min/max values ensures the minimization of the quantization error with respect to the floating point model.
3. Compute the scale and zero point the same way as in the MinMaxObserver.
class torch.quantization.FakeQuantize(observer=<class 'torch.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=0, quant_max=255, **observer_kwargs) [source]
Simulate the quantize and dequantize operations at training time. The output of this module is given by: x_out = (clamp(round(x/scale + zero_point), quant_min, quant_max) - zero_point) * scale
scale defines the scale factor used for quantization.
zero_point specifies the quantized value to which 0 in floating point maps to
quant_min specifies the minimum allowable quantized value.
quant_max specifies the maximum allowable quantized value.
fake_quant_enable controls the application of fake quantization on tensors, note that statistics can still be updated.
observer_enable controls statistics collection on tensors
dtype specifies the quantized dtype that is being emulated with fake-quantization; allowable values are torch.qint8 and torch.quint8. The values of quant_min and quant_max should be chosen to be consistent with the dtype. Parameters
observer (module) – Module for observing statistics on input tensors and calculating scale and zero-point.
quant_min (int) – The minimum allowable quantized value.
quant_max (int) – The maximum allowable quantized value.
observer_kwargs (optional) – Arguments for the observer module Variables
~FakeQuantize.observer (Module) – User provided module that collects statistics on the input tensor and provides a method to calculate scale and zero-point.
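The quantize-then-dequantize formula above can be sketched for a single scalar in plain Python (a toy illustration, not the torch module, which operates on tensors and learns its parameters from the observer):

```python
def fake_quantize(x, scale, zero_point, quant_min=0, quant_max=255):
    """Toy scalar fake-quantize, matching
    x_out = (clamp(round(x/scale + zero_point), quant_min, quant_max)
             - zero_point) * scale."""
    q = round(x / scale + zero_point)
    q = max(quant_min, min(quant_max, q))   # clamp into the quantized range
    return (q - zero_point) * scale

y_in_range = fake_quantize(0.34, scale=0.1, zero_point=128)
y_clamped = fake_quantize(100.0, scale=0.1, zero_point=128)
```

A value inside the representable range snaps to the nearest quantization step (0.34 becomes 0.3), while a value far outside it saturates at the top of the range.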
class torch.quantization.NoopObserver(dtype=torch.float16, custom_op_name='') [source]
Observer that doesn’t do anything and just passes its configuration to the quantized module’s .from_float(). Primarily used for quantization to float16 which doesn’t require determining ranges. Parameters
dtype – Quantized data type
custom_op_name – (temporary) specify this observer for an operator that doesn’t require any observation (Can be used in Graph Mode Passes for special case ops).
Debugging utilities
torch.quantization.get_observer_dict(mod, target_dict, prefix='') [source]
Traverse the modules and save all observers into a dict. This is mainly used for quantization accuracy debugging. Parameters
mod – the top module we want to save all observers from
prefix – the prefix for the current module
target_dict – the dictionary used to save all the observers
class torch.quantization.RecordingObserver(**kwargs) [source]
The module is mainly for debug and records the tensor values during runtime. Parameters
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
nn.intrinsic | |
doc_24437 |
Set the rotation of the text. Parameters
sfloat or {'vertical', 'horizontal'}
The rotation angle in degrees in mathematically positive direction (counterclockwise). 'horizontal' equals 0, 'vertical' equals 90. | |
doc_24438 |
Remove an event from the event list -- by default, the last. Note that this does not check that there are events, much like the normal pop method. If no events exist, this will throw an exception. | |
doc_24439 | If you need to disable a site-wide action you can call AdminSite.disable_action(). For example, you can use this method to remove the built-in “delete selected objects” action: admin.site.disable_action('delete_selected')
Once you’ve done the above, that action will no longer be available site-wide. If, however, you need to re-enable a globally-disabled action for one particular model, list it explicitly in your ModelAdmin.actions list: # Globally disable delete selected
admin.site.disable_action('delete_selected')
# This ModelAdmin will not have delete_selected available
class SomeModelAdmin(admin.ModelAdmin):
actions = ['some_other_action']
...
# This one will
class AnotherModelAdmin(admin.ModelAdmin):
actions = ['delete_selected', 'a_third_action']
... | |
doc_24440 |
Bases: matplotlib.transforms.Transform The inverse of the polar transform, mapping Cartesian coordinate space x and y back to theta and r. Parameters
shorthand_namestr
A string representing the "name" of the transform. The name carries no significance other than to improve the readability of str(transform) when DEBUG=True. has_inverse=True
True if this transform has a corresponding inverse transform.
input_dims=2
The number of input dimensions of this transform. Must be overridden (with integers) in the subclass.
inverted()[source]
Return the corresponding inverse transformation. It holds x == self.inverted().transform(self.transform(x)). The return value of this method should be treated as temporary. An update to self does not cause a corresponding update to its inverted copy.
output_dims=2
The number of output dimensions of this transform. Must be overridden (with integers) in the subclass.
transform_non_affine(xy)[source]
Apply only the non-affine part of this transformation. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally equivalent to transform(values). In affine transformations, this is always a no-op. Parameters
valuesarray
The input values as NumPy array of length input_dims or shape (N x input_dims). Returns
array
The output values as NumPy array of length output_dims or shape (N x output_dims), depending on the input. | |
doc_24441 |
Return the url. | |
doc_24442 |
Alias for get_facecolor. | |
doc_24443 | Return a string representation of the ASCII character c. If c is printable, this string is the character itself. If the character is a control character (0x00–0x1f) the string consists of a caret ('^') followed by the corresponding uppercase letter. If the character is an ASCII delete (0x7f) the string is '^?'. If the character has its meta bit (0x80) set, the meta bit is stripped, the preceding rules applied, and '!' prepended to the result. | |
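These rules can be checked directly; curses.ascii is pure Python and needs no terminal:

```python
from curses.ascii import unctrl

# Control characters become caret notation, DEL becomes '^?',
# printable characters pass through, and a set meta bit prepends '!'.
examples = {
    '\x01': unctrl('\x01'),  # control-A -> caret notation
    '\x7f': unctrl('\x7f'),  # ASCII delete
    'A': unctrl('A'),        # printable: returned unchanged
    0x81: unctrl(0x81),      # meta bit set: stripped, then '!' prepended
}
```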
doc_24444 |
Return the joinstyle. | |
doc_24445 | Set search_fields to enable a search box on the admin change list page. This should be set to a list of field names that will be searched whenever somebody submits a search query in that text box. These fields should be some kind of text field, such as CharField or TextField. You can also perform a related lookup on a ForeignKey or ManyToManyField with the lookup API “follow” notation: search_fields = ['foreign_key__related_fieldname']
For example, if you have a blog entry with an author, the following definition would enable searching blog entries by the email address of the author: search_fields = ['user__email']
When somebody does a search in the admin search box, Django splits the search query into words and returns all objects that contain each of the words, case-insensitive (using the icontains lookup), where each word must be in at least one of search_fields. For example, if search_fields is set to ['first_name', 'last_name'] and a user searches for john lennon, Django will do the equivalent of this SQL WHERE clause: WHERE (first_name ILIKE '%john%' OR last_name ILIKE '%john%')
AND (first_name ILIKE '%lennon%' OR last_name ILIKE '%lennon%')
The search query can contain quoted phrases with spaces. For example, if a user searches for "john winston" or 'john winston', Django will do the equivalent of this SQL WHERE clause: WHERE (first_name ILIKE '%john winston%' OR last_name ILIKE '%john winston%')
If you don’t want to use icontains as the lookup, you can use any lookup by appending it to the field. For example, you could use exact by setting search_fields to ['first_name__exact']. Some (older) shortcuts for specifying a field lookup are also available. You can prefix a field in search_fields with the following characters and it’s equivalent to adding __<lookup> to the field:
Prefix Lookup
^ startswith
= iexact
@ search
None icontains
If you need to customize search you can use ModelAdmin.get_search_results() to provide additional or alternate search behavior. Changed in Django 3.2: Support for searching against quoted phrases with spaces was added. | |
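Django uses its own smart_split utility to break the query into terms, but the behavior described here — whitespace-separated words with quoted phrases kept together — can be approximated with the stdlib shlex module (a sketch, not Django's actual code):

```python
import shlex

def split_terms(query):
    """Approximate Django's search-term splitting: whitespace-separated
    words, with quoted phrases kept as single terms (shlex-based sketch,
    not Django's smart_split)."""
    try:
        return shlex.split(query)
    except ValueError:            # unbalanced quotes: fall back to plain split
        return query.split()

terms_plain = split_terms('john lennon')
terms_quoted = split_terms('"john winston" lennon')
```

Each resulting term would then contribute one AND-ed group of icontains conditions in the generated WHERE clause.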
doc_24446 |
Return x to the power p, (x**p). If x contains negative values, the output is converted to the complex domain. Parameters
xarray_like
The input value(s).
parray_like of ints
The power(s) to which x is raised. If x contains multiple values, p has to either be a scalar, or contain the same number of values as x. In the latter case, the result is x[0]**p[0], x[1]**p[1], .... Returns
outndarray or scalar
The result of x**p. If x and p are scalars, so is out, otherwise an array is returned. See also numpy.power
Examples >>> np.set_printoptions(precision=4)
>>> np.emath.power([2, 4], 2)
array([ 4, 16])
>>> np.emath.power([2, 4], -2)
array([0.25 , 0.0625])
>>> np.emath.power([-2, 4], 2)
array([ 4.-0.j, 16.+0.j]) | |
doc_24447 |
For each element in self, return a copy with the trailing characters removed. See also char.rstrip | |
doc_24448 |
Apply hysteresis thresholding to image. This algorithm finds regions where image is greater than high OR image is greater than low and that region is connected to a region greater than high. Parameters
imagearray, shape (M,[ N, …, P])
Grayscale input image.
lowfloat, or array of same shape as image
Lower threshold.
highfloat, or array of same shape as image
Higher threshold. Returns
thresholdedarray of bool, same shape as image
Array in which True indicates the locations where image was above the hysteresis threshold. References
1
J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1986; vol. 8, pp.679-698. DOI:10.1109/TPAMI.1986.4767851 Examples >>> image = np.array([1, 2, 3, 2, 1, 2, 1, 3, 2])
>>> apply_hysteresis_threshold(image, 1.5, 2.5).astype(int)
array([0, 1, 1, 1, 0, 0, 0, 1, 1]) | |
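The thresholding rule can also be illustrated with a toy 1-D re-implementation of the connected-region logic (a sketch only — the skimage function works on n-D arrays and uses labeled connected components):

```python
def hysteresis_1d(values, low, high):
    """Toy 1-D hysteresis threshold: a run of values > low is kept only
    if it contains at least one value > high."""
    above_low = [v > low for v in values]
    out = [0] * len(values)
    i = 0
    while i < len(values):
        if above_low[i]:
            j = i
            while j < len(values) and above_low[j]:
                j += 1                       # extend the connected run
            if any(v > high for v in values[i:j]):
                out[i:j] = [1] * (j - i)     # run touches a strong value
            i = j
        else:
            i += 1
    return out

result = hysteresis_1d([1, 2, 3, 2, 1, 2, 1, 3, 2], low=1.5, high=2.5)
```

On the same input as the example above, this reproduces [0, 1, 1, 1, 0, 0, 0, 1, 1]: the lone 2 at index 5 exceeds low but is not connected to any value above high, so it is dropped.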
doc_24449 | stop handling Unicode text input events stop_text_input() -> None Stop receiving pygame.TEXTEDITING and pygame.TEXTINPUT events. Text input events handling is on by default New in pygame 2.0.0. | |
doc_24450 |
Perform regression on samples in X. For a one-class model, +1 (inlier) or -1 (outlier) is returned. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
y_predndarray of shape (n_samples,) | |
doc_24451 | Exception raised on any errors. The reason for the exception is passed to the constructor as a string. | |
doc_24452 |
Return whether the artist is animated. | |
doc_24453 | re.VERBOSE
This flag allows you to write regular expressions that look nicer and are more readable by allowing you to visually separate logical sections of the pattern and add comments. Whitespace within the pattern is ignored, except when in a character class, or when preceded by an unescaped backslash, or within tokens like *?, (?: or (?P<...>. When a line contains a # that is not in a character class and is not preceded by an unescaped backslash, all characters from the leftmost such # through the end of the line are ignored. This means that the two following regular expression objects that match a decimal number are functionally equal: a = re.compile(r"""\d + # the integral part
\. # the decimal point
\d * # some fractional digits""", re.X)
b = re.compile(r"\d+\.\d*")
Corresponds to the inline flag (?x). | |
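The claim that the two compiled patterns are functionally equal can be verified directly:

```python
import re

# Same decimal-number pattern, written verbosely and compactly.
a = re.compile(r"""\d +  # the integral part
                   \.    # the decimal point
                   \d *  # some fractional digits""", re.X)
b = re.compile(r"\d+\.\d*")

match_a = a.match("3.1415")
match_b = b.match("3.1415")
```

Both match the same span of the same inputs, and both reject non-numeric text.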
doc_24454 | Turn a 10-tuple as returned by parsedate_tz() into a UTC timestamp (seconds since the Epoch). If the timezone item in the tuple is None, assume local time. | |
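Combined with parsedate_tz(), this converts an RFC 2822 date header directly to a UTC timestamp; a quick check against calendar.timegm:

```python
import calendar
from email.utils import mktime_tz, parsedate_tz

# A date header with an explicit zero UTC offset...
ts_utc = mktime_tz(parsedate_tz('Fri, 09 Nov 2001 01:08:47 -0000'))
# ...and the same wall-clock time five hours behind UTC.
ts_est = mktime_tz(parsedate_tz('Fri, 09 Nov 2001 01:08:47 -0500'))

# The zero-offset header matches the UTC timestamp of its fields.
expected = calendar.timegm((2001, 11, 9, 1, 8, 47))
```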
doc_24455 | Leave cbreak mode. Return to normal “cooked” mode with line buffering. | |
doc_24456 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_24457 | Recursively move a file or directory (src) to another location (dst) and return the destination. If the destination is an existing directory, then src is moved inside that directory. If the destination already exists but is not a directory, it may be overwritten depending on os.rename() semantics. If the destination is on the current filesystem, then os.rename() is used. Otherwise, src is copied to dst using copy_function and then removed. In case of symlinks, a new symlink pointing to the target of src will be created in or as dst and src will be removed. If copy_function is given, it must be a callable that takes two arguments src and dst, and will be used to copy src to dst if os.rename() cannot be used. If the source is a directory, copytree() is called, passing it the copy_function(). The default copy_function is copy2(). Using copy() as the copy_function allows the move to succeed when it is not possible to also copy the metadata, at the expense of not copying any of the metadata. Raises an auditing event shutil.move with arguments src, dst. Changed in version 3.3: Added explicit symlink handling for foreign filesystems, thus adapting it to the behavior of GNU’s mv. Now returns dst. Changed in version 3.5: Added the copy_function keyword argument. Changed in version 3.8: Platform-specific fast-copy syscalls may be used internally in order to copy the file more efficiently. See Platform-dependent efficient copy operations section. Changed in version 3.9: Accepts a path-like object for both src and dst. | |
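A minimal demonstration of the "destination is an existing directory" case, using a temporary directory so it is self-cleaning:

```python
import os
import shutil
import tempfile

# Moving a file into an existing directory places it *inside* that
# directory, and shutil.move returns the final destination path.
base = tempfile.mkdtemp()
src = os.path.join(base, 'data.txt')
with open(src, 'w') as f:
    f.write('hello')

dst_dir = os.path.join(base, 'archive')
os.mkdir(dst_dir)

moved = shutil.move(src, dst_dir)

moved_exists = os.path.isfile(moved)   # record results before cleanup
src_gone = not os.path.exists(src)
shutil.rmtree(base)
```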
doc_24458 |
Call self as a function. | |
doc_24459 | glob.glob(pathname, *, recursive=False)
Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like /usr/src/Python-1.5/Makefile) or relative (like ../../Tools/*/*.gif), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell). Whether or not the results are sorted depends on the file system. If a file that satisfies conditions is removed or added during the call of this function, whether a path name for that file be included is unspecified. If recursive is true, the pattern “**” will match any files and zero or more directories, subdirectories and symbolic links to directories. If the pattern is followed by an os.sep or os.altsep then files will not match. Raises an auditing event glob.glob with arguments pathname, recursive. Note Using the “**” pattern in large directory trees may consume an inordinate amount of time. Changed in version 3.5: Support for recursive globs using “**”.
glob.iglob(pathname, *, recursive=False)
Return an iterator which yields the same values as glob() without actually storing them all simultaneously. Raises an auditing event glob.glob with arguments pathname, recursive.
glob.escape(pathname)
Escape all special characters ('?', '*' and '['). This is useful if you want to match an arbitrary literal string that may have special characters in it. Special characters in drive/UNC sharepoints are not escaped, e.g. on Windows escape('//?/c:/Quo vadis?.txt') returns '//?/c:/Quo vadis[?].txt'. New in version 3.4.
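For example, a literal filename containing brackets and a star becomes a pattern that matches only that exact name:

```python
import glob
from fnmatch import fnmatchcase

# '[', '*' and '?' are neutralized by wrapping them in character classes.
pattern = glob.escape('report[draft]*.txt')
```

The escaped pattern matches the literal filename even though that name contains glob metacharacters.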
For example, consider a directory containing the following files: 1.gif, 2.txt, card.gif and a subdirectory sub which contains only the file 3.txt. glob() will produce the following results. Notice how any leading components of the path are preserved. >>> import glob
>>> glob.glob('./[0-9].*')
['./1.gif', './2.txt']
>>> glob.glob('*.gif')
['1.gif', 'card.gif']
>>> glob.glob('?.gif')
['1.gif']
>>> glob.glob('**/*.txt', recursive=True)
['2.txt', 'sub/3.txt']
>>> glob.glob('./**/', recursive=True)
['./', './sub/']
If the directory contains files starting with . they won’t be matched by default. For example, consider a directory containing card.gif and .card.gif: >>> import glob
>>> glob.glob('*.gif')
['card.gif']
>>> glob.glob('.c*')
['.card.gif']
See also
Module fnmatch
Shell-style filename (not path) expansion | |
doc_24460 | tf.compat.v1.logging.set_verbosity(
v
) | |
doc_24461 | See Migration guide for more details. tf.compat.v1.raw_ops.AnonymousRandomSeedGenerator
tf.raw_ops.AnonymousRandomSeedGenerator(
seed, seed2, name=None
)
Args
seed A Tensor of type int64.
seed2 A Tensor of type int64.
name A name for the operation (optional).
Returns A tuple of Tensor objects (handle, deleter). handle A Tensor of type resource.
deleter A Tensor of type variant. | |
doc_24462 | See Migration guide for more details. tf.compat.v1.image.adjust_jpeg_quality
tf.image.adjust_jpeg_quality(
image, jpeg_quality, name=None
)
This is a convenience method that converts an image to uint8 representation, encodes it to jpeg with jpeg_quality, decodes it, and then converts back to the original data type. jpeg_quality must be in the interval [0, 100]. Usage Example:
x = [[[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0],
[10.0, 11.0, 12.0]]]
tf.image.adjust_jpeg_quality(x, 75)
<tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=
array([[[1., 1., 1.],
[1., 1., 1.]],
[[1., 1., 1.],
[1., 1., 1.]]], dtype=float32)>
Args
image 3D image. The size of the last dimension must be None, 1 or 3.
jpeg_quality Python int or Tensor of type int32. jpeg encoding quality.
name A name for this operation (optional).
Returns Adjusted image, same shape and DType as image.
Raises
InvalidArgumentError quality must be in [0,100]
InvalidArgumentError image must have 1 or 3 channels | |
doc_24463 |
Bases: torch.distributions.distribution.Distribution Creates a Multinomial distribution parameterized by total_count and either probs or logits (but not both). The innermost dimension of probs indexes over categories. All other dimensions index over batches. Note that total_count need not be specified if only log_prob() is called (see example below). Note The probs argument must be non-negative, finite and have a non-zero sum, and it will be normalized to sum to 1 along the last dimension. probs will return this normalized value. The logits argument will be interpreted as unnormalized log probabilities and can therefore be any real number. It will likewise be normalized so that the resulting probabilities sum to 1 along the last dimension. logits will return this normalized value.
sample() requires a single shared total_count for all parameters and samples.
log_prob() allows different total_count for each parameter and sample. Example: >>> m = Multinomial(100, torch.tensor([ 1., 1., 1., 1.]))
>>> x = m.sample() # equal probability of 0, 1, 2, 3
tensor([ 21., 24., 30., 25.])
>>> Multinomial(probs=torch.tensor([1., 1., 1., 1.])).log_prob(x)
tensor([-4.1338])
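The normalization and log-probability described above can be sketched without PyTorch. This is a hand-rolled stand-in using only the standard library, not the library's implementation:

```python
import math

def multinomial_log_prob(counts, probs):
    """Log-probability of `counts` under a multinomial distribution.

    `probs` may be unnormalized; like the `probs` argument above, it is
    normalized to sum to 1 before use.
    """
    total = sum(probs)
    probs = [p / total for p in probs]  # normalize, as the docs describe
    n = sum(counts)
    # log n! - sum(log x_i!) + sum(x_i * log p_i)
    log_coeff = math.lgamma(n + 1) - sum(math.lgamma(x + 1) for x in counts)
    return log_coeff + sum(x * math.log(p) for x, p in zip(counts, probs))

# Two trials over two equally likely categories: P([1, 1]) = 2 * 0.25 = 0.5
print(round(multinomial_log_prob([1, 1], [1.0, 1.0]), 4))  # -0.6931, i.e. log(0.5)
```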
Parameters
total_count (int) – number of trials
probs (Tensor) – event probabilities
logits (Tensor) – event log probabilities (unnormalized)
arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}
expand(batch_shape, _instance=None) [source]
log_prob(value) [source]
property logits
property mean
property param_shape
property probs
sample(sample_shape=torch.Size([])) [source]
property support
total_count: int = None
property variance | |
doc_24464 |
Put a value into a specified place in a field defined by a data-type. Place val into a’s field defined by dtype and beginning offset bytes into the field. Parameters
valobject
Value to be placed in field.
dtypedtype object
Data-type of the field in which to place val.
offsetint, optional
The number of bytes into the field at which to place val. Returns
None
See also getfield
Examples >>> x = np.eye(3)
>>> x.getfield(np.float64)
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
>>> x.setfield(3, np.int32)
>>> x.getfield(np.int32)
array([[3, 3, 3],
[3, 3, 3],
[3, 3, 3]], dtype=int32)
>>> x
array([[1.0e+000, 1.5e-323, 1.5e-323],
[1.5e-323, 1.0e+000, 1.5e-323],
[1.5e-323, 1.5e-323, 1.0e+000]])
>>> x.setfield(np.eye(3), np.int32)
>>> x
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]) | |
doc_24465 |
Lily of the valley plant stem. This plant stem on a pre-prepared slide was imaged with confocal fluorescence microscopy (Nikon C1 inverted microscope). Image shape is (922, 922, 4). That is 922x922 pixels in X-Y, with 4 color channels. Real-space voxel size is 1.24 microns in X-Y. Data type is unsigned 16-bit integers. Returns
lily(922, 922, 4) uint16 ndarray
Lily 2D multichannel image. Notes This image was acquired by Genevieve Buckley at Monash Micro Imaging in 2018. License: CC0 | |
doc_24466 | Return the main Thread object. In normal conditions, the main thread is the thread from which the Python interpreter was started. New in version 3.4. | |
doc_24467 | django.db.models.signals.pre_init
Whenever you instantiate a Django model, this signal is sent at the beginning of the model’s __init__() method. Arguments sent with this signal:
sender The model class that just had an instance created.
args A list of positional arguments passed to __init__().
kwargs A dictionary of keyword arguments passed to __init__(). For example, the tutorial has this line: q = Question(question_text="What's new?", pub_date=timezone.now())
The arguments sent to a pre_init handler would be:
Argument Value
sender
Question (the class itself)
args
[] (an empty list because there were no positional arguments passed to __init__())
kwargs
{'question_text': "What's new?", 'pub_date': datetime.datetime(2012, 2, 26, 13, 0, 0, 775217, tzinfo=<UTC>)}
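The connect/send mechanics that all of these signals share can be approximated with a tiny pure-Python dispatcher. This is a simplified sketch for intuition, not Django's actual implementation (Django adds weak references, dispatch_uid, receiver filtering internals, and more):

```python
class Signal:
    """Minimal stand-in for django.dispatch.Signal."""

    def __init__(self):
        self._receivers = []  # (receiver, sender) pairs

    def connect(self, receiver, sender=None):
        self._receivers.append((receiver, sender))

    def send(self, sender, **kwargs):
        # Call each receiver connected for this sender (or for any sender),
        # collecting (receiver, response) pairs, as Django's send() does.
        responses = []
        for receiver, wanted in self._receivers:
            if wanted is None or wanted is sender:
                responses.append((receiver, receiver(sender=sender, **kwargs)))
        return responses

# Usage: mimic pre_init being sent at the start of Question.__init__.
pre_init = Signal()

def log_pre_init(sender, args, kwargs, **extra):
    return (sender.__name__, kwargs.get("question_text"))

class Question:  # stand-in for the tutorial's model class
    pass

pre_init.connect(log_pre_init, sender=Question)
responses = pre_init.send(Question, args=[], kwargs={"question_text": "What's new?"})
print(responses[0][1])  # ('Question', "What's new?")
```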
post_init
django.db.models.signals.post_init
Like pre_init, but this one is sent when the __init__() method finishes. Arguments sent with this signal:
sender As above: the model class that just had an instance created.
instance
The actual instance of the model that’s just been created. Note instance._state isn’t set before sending the post_init signal, so _state attributes always have their default values. For example, _state.db is None. Warning For performance reasons, you shouldn’t perform queries in receivers of pre_init or post_init signals because they would be executed for each instance returned during queryset iteration. pre_save
django.db.models.signals.pre_save
This is sent at the beginning of a model’s save() method. Arguments sent with this signal:
sender The model class.
instance The actual instance being saved.
raw A boolean; True if the model is saved exactly as presented (i.e. when loading a fixture). One should not query/modify other records in the database as the database might not be in a consistent state yet.
using The database alias being used.
update_fields The set of fields to update as passed to Model.save(), or None if update_fields wasn’t passed to save(). post_save
django.db.models.signals.post_save
Like pre_save, but sent at the end of the save() method. Arguments sent with this signal:
sender The model class.
instance The actual instance being saved.
created A boolean; True if a new record was created.
raw A boolean; True if the model is saved exactly as presented (i.e. when loading a fixture). One should not query/modify other records in the database as the database might not be in a consistent state yet.
using The database alias being used.
update_fields The set of fields to update as passed to Model.save(), or None if update_fields wasn’t passed to save(). pre_delete
django.db.models.signals.pre_delete
Sent at the beginning of a model’s delete() method and a queryset’s delete() method. Arguments sent with this signal:
sender The model class.
instance The actual instance being deleted.
using The database alias being used. post_delete
django.db.models.signals.post_delete
Like pre_delete, but sent at the end of a model’s delete() method and a queryset’s delete() method. Arguments sent with this signal:
sender The model class.
instance
The actual instance being deleted. Note that the object will no longer be in the database, so be very careful what you do with this instance.
using The database alias being used. m2m_changed
django.db.models.signals.m2m_changed
Sent when a ManyToManyField is changed on a model instance. Strictly speaking, this is not a model signal since it is sent by the ManyToManyField, but since it complements the pre_save/post_save and pre_delete/post_delete when it comes to tracking changes to models, it is included here. Arguments sent with this signal:
sender The intermediate model class describing the ManyToManyField. This class is automatically created when a many-to-many field is defined; you can access it using the through attribute on the many-to-many field.
instance The instance whose many-to-many relation is updated. This can be an instance of the sender, or of the class the ManyToManyField is related to.
action
A string indicating the type of update that is done on the relation. This can be one of the following:
"pre_add" Sent before one or more objects are added to the relation.
"post_add" Sent after one or more objects are added to the relation.
"pre_remove" Sent before one or more objects are removed from the relation.
"post_remove" Sent after one or more objects are removed from the relation.
"pre_clear" Sent before the relation is cleared.
"post_clear" Sent after the relation is cleared.
reverse Indicates which side of the relation is updated (i.e., if it is the forward or reverse relation that is being modified).
model The class of the objects that are added to, removed from or cleared from the relation.
pk_set
For the pre_add and post_add actions, this is a set of primary key values that will be, or have been, added to the relation. This may be a subset of the values submitted to be added, since inserts must filter existing values in order to avoid a database IntegrityError. For the pre_remove and post_remove actions, this is a set of primary key values that was submitted to be removed from the relation. This is not dependent on whether the values actually will be, or have been, removed. In particular, non-existent values may be submitted, and will appear in pk_set, even though they have no effect on the database. For the pre_clear and post_clear actions, this is None.
using The database alias being used. For example, if a Pizza can have multiple Topping objects, modeled like this: class Topping(models.Model):
# ...
pass
class Pizza(models.Model):
# ...
toppings = models.ManyToManyField(Topping)
If we connected a handler like this: from django.db.models.signals import m2m_changed
def toppings_changed(sender, **kwargs):
# Do something
pass
m2m_changed.connect(toppings_changed, sender=Pizza.toppings.through)
and then did something like this: >>> p = Pizza.objects.create(...)
>>> t = Topping.objects.create(...)
>>> p.toppings.add(t)
the arguments sent to a m2m_changed handler (toppings_changed in the example above) would be:
Argument Value
sender
Pizza.toppings.through (the intermediate m2m class)
instance
p (the Pizza instance being modified)
action
"pre_add" (followed by a separate signal with "post_add")
reverse
False (Pizza contains the ManyToManyField, so this call modifies the forward relation)
model
Topping (the class of the objects added to the Pizza)
pk_set
{t.id} (since only Topping t was added to the relation)
using
"default" (since the default router sends writes here) And if we would then do something like this: >>> t.pizza_set.remove(p)
the arguments sent to a m2m_changed handler would be:
Argument Value
sender
Pizza.toppings.through (the intermediate m2m class)
instance
t (the Topping instance being modified)
action
"pre_remove" (followed by a separate signal with "post_remove")
reverse
True (Pizza contains the ManyToManyField, so this call modifies the reverse relation)
model
Pizza (the class of the objects removed from the Topping)
pk_set
{p.id} (since only Pizza p was removed from the relation)
using
"default" (since the default router sends writes here) class_prepared
django.db.models.signals.class_prepared
Sent whenever a model class has been “prepared” – that is, once model has been defined and registered with Django’s model system. Django uses this signal internally; it’s not generally used in third-party applications. Since this signal is sent during the app registry population process, and AppConfig.ready() runs after the app registry is fully populated, receivers cannot be connected in that method. One possibility is to connect them in AppConfig.__init__() instead, taking care not to import models or trigger calls to the app registry. Arguments that are sent with this signal:
sender The model class which was just prepared. Management signals Signals sent by django-admin. pre_migrate
django.db.models.signals.pre_migrate
Sent by the migrate command before it starts to install an application. It’s not emitted for applications that lack a models module. Arguments sent with this signal:
sender An AppConfig instance for the application about to be migrated/synced.
app_config Same as sender.
verbosity
Indicates how much information manage.py is printing on screen. See the --verbosity flag for details. Functions which listen for pre_migrate should adjust what they output to the screen based on the value of this argument.
interactive
If interactive is True, it’s safe to prompt the user to input things on the command line. If interactive is False, functions which listen for this signal should not try to prompt for anything. For example, the django.contrib.auth app only prompts to create a superuser when interactive is True.
stdout
New in Django 4.0. A stream-like object where verbose output should be redirected.
using The alias of database on which a command will operate.
plan The migration plan that is going to be used for the migration run. While the plan is not public API, this allows for the rare cases when it is necessary to know the plan. A plan is a list of two-tuples with the first item being the instance of a migration class and the second item showing if the migration was rolled back (True) or applied (False).
apps An instance of Apps containing the state of the project before the migration run. It should be used instead of the global apps registry to retrieve the models you want to perform operations on. post_migrate
django.db.models.signals.post_migrate
Sent at the end of the migrate (even if no migrations are run) and flush commands. It’s not emitted for applications that lack a models module. Handlers of this signal must not perform database schema alterations as doing so may cause the flush command to fail if it runs during the migrate command. Arguments sent with this signal:
sender An AppConfig instance for the application that was just installed.
app_config Same as sender.
verbosity
Indicates how much information manage.py is printing on screen. See the --verbosity flag for details. Functions which listen for post_migrate should adjust what they output to the screen based on the value of this argument.
interactive
If interactive is True, it’s safe to prompt the user to input things on the command line. If interactive is False, functions which listen for this signal should not try to prompt for anything. For example, the django.contrib.auth app only prompts to create a superuser when interactive is True.
stdout
New in Django 4.0. A stream-like object where verbose output should be redirected.
using The database alias used for synchronization. Defaults to the default database.
plan The migration plan that was used for the migration run. While the plan is not public API, this allows for the rare cases when it is necessary to know the plan. A plan is a list of two-tuples with the first item being the instance of a migration class and the second item showing if the migration was rolled back (True) or applied (False).
apps An instance of Apps containing the state of the project after the migration run. It should be used instead of the global apps registry to retrieve the models you want to perform operations on. For example, you could register a callback in an AppConfig like this: from django.apps import AppConfig
from django.db.models.signals import post_migrate
def my_callback(sender, **kwargs):
# Your specific logic here
pass
class MyAppConfig(AppConfig):
...
def ready(self):
post_migrate.connect(my_callback, sender=self)
Note If you provide an AppConfig instance as the sender argument, please ensure that the signal is registered in ready(). AppConfigs are recreated for tests that run with a modified set of INSTALLED_APPS (such as when settings are overridden) and such signals should be connected for each new AppConfig instance. Request/response signals Signals sent by the core framework when processing a request. request_started
django.core.signals.request_started
Sent when Django begins processing an HTTP request. Arguments sent with this signal:
sender The handler class – e.g. django.core.handlers.wsgi.WsgiHandler – that handled the request.
environ The environ dictionary provided to the request. request_finished
django.core.signals.request_finished
Sent when Django finishes delivering an HTTP response to the client. Arguments sent with this signal:
sender The handler class, as above. got_request_exception
django.core.signals.got_request_exception
This signal is sent whenever Django encounters an exception while processing an incoming HTTP request. Arguments sent with this signal:
sender Unused (always None).
request The HttpRequest object. Test signals Signals only sent when running tests. setting_changed
django.test.signals.setting_changed
This signal is sent when the value of a setting is changed through the django.test.TestCase.settings() context manager or the django.test.override_settings() decorator/context manager. It’s actually sent twice: when the new value is applied (“setup”) and when the original value is restored (“teardown”). Use the enter argument to distinguish between the two. You can also import this signal from django.core.signals to avoid importing from django.test in non-test situations. Arguments sent with this signal:
sender The settings handler.
setting The name of the setting.
value The value of the setting after the change. For settings that initially don’t exist, in the “teardown” phase, value is None.
enter A boolean; True if the setting is applied, False if restored. template_rendered
django.test.signals.template_rendered
Sent when the test system renders a template. This signal is not emitted during normal operation of a Django server – it is only available during testing. Arguments sent with this signal:
sender The Template object which was rendered.
template Same as sender
context The Context with which the template was rendered. Database Wrappers Signals sent by the database wrapper when a database connection is initiated. connection_created
django.db.backends.signals.connection_created
Sent when the database wrapper makes the initial connection to the database. This is particularly useful if you’d like to send any post connection commands to the SQL backend. Arguments sent with this signal:
sender The database wrapper class – i.e. django.db.backends.postgresql.DatabaseWrapper or django.db.backends.mysql.DatabaseWrapper, etc.
connection The database connection that was opened. This can be used in a multiple-database configuration to differentiate connection signals from different databases. | |
doc_24468 | Returns a boolean indicating whether the geometry is a LinearRing. | |
doc_24469 | Read until newline or EOF and return a single str. If the stream is already at EOF, an empty string is returned. If size is specified, at most size characters will be read. | |
doc_24470 | Wait until the close() method completes. | |
doc_24471 | lt(other) -> Tensor See torch.less(). | |
doc_24472 | A specialized alternative to CGIHandler, for use when deploying on Microsoft’s IIS web server, without having set the config allowPathInfo option (IIS>=7) or metabase allowPathInfoForScriptMappings (IIS<7). By default, IIS gives a PATH_INFO that duplicates the SCRIPT_NAME at the front, causing problems for WSGI applications that wish to implement routing. This handler strips any such duplicated path. IIS can be configured to pass the correct PATH_INFO, but this causes another bug where PATH_TRANSLATED is wrong. Luckily this variable is rarely used and is not guaranteed by WSGI. On IIS<7, though, the setting can only be made on a vhost level, affecting all other script mappings, many of which break when exposed to the PATH_TRANSLATED bug. For this reason IIS<7 is almost never deployed with the fix (Even IIS7 rarely uses it because there is still no UI for it.). There is no way for CGI code to tell whether the option was set, so a separate handler class is provided. It is used in the same way as CGIHandler, i.e., by calling IISCGIHandler().run(app), where app is the WSGI application object you wish to invoke. New in version 3.2. | |
doc_24473 | See Migration guide for more details. tf.compat.v1.linalg.LinearOperator
tf.linalg.LinearOperator(
dtype, graph_parents=None, is_non_singular=None, is_self_adjoint=None,
is_positive_definite=None, is_square=None, name=None, parameters=None
)
Subclasses of LinearOperator provide access to common methods on a (batch) matrix, without the need to materialize the matrix. This allows: Matrix free computations Operators that take advantage of special structure, while providing a consistent API to users. Subclassing To enable a public method, subclasses should implement the leading-underscore version of the method. The argument signature should be identical except for the omission of name="...". For example, to enable matmul(x, adjoint=False, name="matmul") a subclass should implement _matmul(x, adjoint=False). Performance contract Subclasses should only implement the assert methods (e.g. assert_non_singular) if they can be done in less than O(N^3) time. Class docstrings should contain an explanation of computational complexity. Since this is a high-performance library, attention should be paid to detail, and explanations can include constants as well as Big-O notation. Shape compatibility LinearOperator subclasses should operate on a [batch] matrix with compatible shape. Class docstrings should define what is meant by compatible shape. Some subclasses may not support batching. Examples: x is a batch matrix with compatible shape for matmul if operator.shape = [B1,...,Bb] + [M, N], b >= 0,
x.shape = [B1,...,Bb] + [N, R]
rhs is a batch matrix with compatible shape for solve if operator.shape = [B1,...,Bb] + [M, N], b >= 0,
rhs.shape = [B1,...,Bb] + [M, R]
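The "matrix free" point can be illustrated outside TensorFlow with a minimal plain-Python sketch (the `DiagOperator` name is made up for illustration): a diagonal operator stores only its diagonal yet supports matmul and solve without ever materializing the N x N matrix.

```python
class DiagOperator:
    """Acts like the N x N matrix diag(d) while storing only N diagonal entries."""

    def __init__(self, diag):
        self.diag = list(diag)
        self.shape = (len(self.diag), len(self.diag))

    def matmul(self, x):
        # diag(d) @ x: scale row i of x by d[i] -- O(N * R), no N x N matrix built.
        return [[d * v for v in row] for d, row in zip(self.diag, x)]

    def solve(self, rhs):
        # diag(d)^-1 @ rhs: divide row i of rhs by d[i].
        return [[v / d for v in row] for d, row in zip(self.diag, rhs)]

op = DiagOperator([1.0, 2.0, 4.0])
x = [[1.0], [1.0], [1.0]]          # shape [3, 1], compatible with op.shape [3, 3]
print(op.matmul(x))                # [[1.0], [2.0], [4.0]]
print(op.solve(op.matmul(x)))      # round-trips back to [[1.0], [1.0], [1.0]]
```

Real subclasses follow the same idea: the public matmul/solve API is uniform, while the structure-aware implementation lives in the leading-underscore methods.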
Example docstring for subclasses. This operator acts like a (batch) matrix A with shape [B1,...,Bb, M, N] for some b >= 0. The first b indices index a batch member. For every batch index (i1,...,ib), A[i1,...,ib, : :] is an m x n matrix. Again, this matrix A may not be materialized, but for purposes of identifying and working with compatible arguments the shape is relevant. Examples: some_tensor = ... shape = ????
operator = MyLinOp(some_tensor)
operator.shape()
==> [2, 4, 4]
operator.log_abs_determinant()
==> Shape [2] Tensor
x = ... Shape [2, 4, 5] Tensor
operator.matmul(x)
==> Shape [2, 4, 5] Tensor
Shape compatibility This operator acts on batch matrices with compatible shape. FILL IN WHAT IS MEANT BY COMPATIBLE SHAPE Performance FILL THIS IN Matrix property hints This LinearOperator is initialized with boolean flags of the form is_X, for X = non_singular, self_adjoint, positive_definite, square. These have the following meaning: If is_X == True, callers should expect the operator to have the property X. This is a promise that should be fulfilled, but is not a runtime assert. For example, finite floating point precision may result in these promises being violated. If is_X == False, callers should expect the operator to not have X. If is_X == None (the default), callers should have no expectation either way. Initialization parameters All subclasses of LinearOperator are expected to pass a parameters argument to super().__init__(). This should be a dict containing the unadulterated arguments passed to the subclass __init__. For example, MyLinearOperator with an initializer should look like: def __init__(self, operator, is_square=False, name=None):
parameters = dict(
operator=operator,
is_square=is_square,
name=name
)
...
super().__init__(..., parameters=parameters)
Users can then access `my_linear_operator.parameters` to see all arguments passed to its initializer.
Args
dtype The type of this LinearOperator. Arguments to matmul and solve will have to be this type.
graph_parents (Deprecated) Python list of graph prerequisites of this LinearOperator. Typically tensors that are passed during initialization.
is_non_singular Expect that this operator is non-singular.
is_self_adjoint Expect that this operator is equal to its hermitian transpose. If dtype is real, this is equivalent to being symmetric.
is_positive_definite Expect that this operator is positive definite, meaning the quadratic form x^H A x has positive real part for all nonzero x. Note that we do not require the operator to be self-adjoint to be positive-definite. See: https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices
is_square Expect that this operator acts like square [batch] matrices.
name A name for this LinearOperator.
parameters Python dict of parameters used to instantiate this LinearOperator.
Raises
ValueError If any member of graph_parents is None or not a Tensor.
ValueError If hints are set incorrectly.
Attributes
H Returns the adjoint of the current LinearOperator. Given A representing this LinearOperator, return A*. Note that calling self.adjoint() and self.H are equivalent.
batch_shape TensorShape of batch dimensions of this LinearOperator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns TensorShape([B1,...,Bb]), equivalent to A.shape[:-2].
domain_dimension Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns N.
dtype The DType of Tensors handled by this LinearOperator.
graph_parents List of graph dependencies of this LinearOperator. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Do not call graph_parents.
is_non_singular
is_positive_definite
is_self_adjoint
is_square Return True/False depending on if this operator is square.
parameters Dictionary of parameters used to instantiate this LinearOperator.
range_dimension Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns M.
shape TensorShape of this LinearOperator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns TensorShape([B1,...,Bb, M, N]), equivalent to A.shape.
tensor_rank Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns b + 2.
Methods add_to_tensor View source
add_to_tensor(
    x, name='add_to_tensor'
)
Add matrix represented by this operator to x. Equivalent to A + x.
Args
x Tensor with same dtype and shape broadcastable to self.shape.
name A name to give this Op.
Returns A Tensor with broadcast shape and same dtype as self.
adjoint View source
adjoint(
    name='adjoint'
)
Returns the adjoint of the current LinearOperator. Given A representing this LinearOperator, return A*. Note that calling self.adjoint() and self.H are equivalent.
Args
name A name for this Op.
Returns LinearOperator which represents the adjoint of this LinearOperator.
assert_non_singular View source
assert_non_singular(
    name='assert_non_singular'
)
Returns an Op that asserts this operator is non singular. This operator is considered non-singular if
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
Args
name A string name to prepend to created ops.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is singular.
assert_positive_definite View source
assert_positive_definite(
name='assert_positive_definite'
)
Returns an Op that asserts this operator is positive definite. Here, positive definite means that the quadratic form x^H A x has positive real part for all nonzero x. Note that we do not require the operator to be self-adjoint to be positive definite.
Args
name A name to give this Op.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is not positive definite.
assert_self_adjoint View source
assert_self_adjoint(
name='assert_self_adjoint'
)
Returns an Op that asserts this operator is self-adjoint. Here we check that this operator is exactly equal to its hermitian transpose.
Args
name A string name to prepend to created ops.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is not self-adjoint.
batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of batch dimensions of this operator, determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns a Tensor holding [B1,...,Bb].
Args
name A name for this Op.
Returns int32 Tensor
cholesky View source
cholesky(
name='cholesky'
)
Returns a Cholesky factor as a LinearOperator. Given A representing this LinearOperator, if A is positive definite self-adjoint, return L, where A = L L^T, i.e. the cholesky decomposition.
Args
name A name for this Op.
Returns LinearOperator which represents the lower triangular matrix in the Cholesky decomposition.
Raises
ValueError When the LinearOperator is not hinted to be positive definite and self adjoint.
cond View source
cond(
name='cond'
)
Returns the condition number of this linear operator.
Args
name A name for this Op.
Returns Shape [B1,...,Bb] Tensor of same dtype as self.
determinant View source
determinant(
name='det'
)
Determinant for every batch member.
Args
name A name for this Op.
Returns Tensor with shape self.batch_shape and same dtype as self.
Raises
NotImplementedError If self.is_square is False.
diag_part View source
diag_part(
name='diag_part'
)
Efficiently get the [batch] diagonal part of this operator. If this operator has shape [B1,...,Bb, M, N], this returns a Tensor diagonal, of shape [B1,...,Bb, min(M, N)], where diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i].
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
Args
name A name for this Op.
Returns
diag_part A Tensor of same dtype as self.
domain_dimension_tensor View source
domain_dimension_tensor(
name='domain_dimension_tensor'
)
Dimension (in the sense of vector spaces) of the domain of this operator. Determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns N.
Args
name A name for this Op.
Returns int32 Tensor
eigvals View source
eigvals(
name='eigvals'
)
Returns the eigenvalues of this linear operator. If the operator is marked as self-adjoint (via is_self_adjoint) this computation can be more efficient.
Note: This currently only supports self-adjoint operators.
Args
name A name for this Op.
Returns Shape [B1,...,Bb, N] Tensor of same dtype as self.
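For a concrete self-adjoint case, the computation can be sketched with numpy (a stand-in for the operator's internal eigendecomposition, not the tf.linalg API); eigvalsh is the specialised routine for symmetric/hermitian matrices and returns real eigenvalues in ascending order.

```python
import numpy as np

# A self-adjoint (symmetric) matrix with eigenvalues 1 and 3.
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals = np.linalg.eigvalsh(a)   # ascending order
```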
inverse View source
inverse(
name='inverse'
)
Returns the Inverse of this LinearOperator. Given A representing this LinearOperator, return a LinearOperator representing A^-1.
Args
name A name scope to use for ops added by this method.
Returns LinearOperator representing inverse of this matrix.
Raises
ValueError When the LinearOperator is not hinted to be non_singular.
log_abs_determinant View source
log_abs_determinant(
name='log_abs_det'
)
Log absolute value of determinant for every batch member.
Args
name A name for this Op.
Returns Tensor with shape self.batch_shape and same dtype as self.
Raises
NotImplementedError If self.is_square is False.
matmul View source
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
Transform [batch] matrix x with left multiplication: x --> Ax.
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[j, r]
Args
x LinearOperator or Tensor with compatible shape and same dtype as self. See class docstring for definition of compatibility.
adjoint Python bool. If True, left multiply by the adjoint: A^H x.
adjoint_arg Python bool. If True, compute A x^H where x^H is the hermitian transpose (transposition and complex conjugation).
name A name for this Op.
Returns A LinearOperator or Tensor with shape [..., M, R] and same dtype as self.
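The batch semantics above, Y[..., :, r] = sum_j A[..., :, j] X[..., j, r], can be reproduced in plain numpy (an illustration, not the operator implementation), where @ broadcasts over leading batch dimensions just like the operator does:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 4, 5))   # batch of 3 operators, M=4, N=5
x = rng.standard_normal((3, 5, 2))   # compatible batch matrix, R=2

y = a @ x                            # shape (3, 4, 2)
# Same contraction written out explicitly:
# Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
y_manual = np.einsum('bmj,bjr->bmr', a, x)
```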
matvec View source
matvec(
x, adjoint=False, name='matvec'
)
Transform [batch] vector x with left multiplication: x --> Ax.
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
Args
x Tensor with compatible shape and same dtype as self. x is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility.
adjoint Python bool. If True, left multiply by the adjoint: A^H x.
name A name for this Op.
Returns A Tensor with shape [..., M] and same dtype as self.
range_dimension_tensor View source
range_dimension_tensor(
name='range_dimension_tensor'
)
Dimension (in the sense of vector spaces) of the range of this operator. Determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns M.
Args
name A name for this Op.
Returns int32 Tensor
shape_tensor View source
shape_tensor(
name='shape_tensor'
)
Shape of this LinearOperator, determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns a Tensor holding [B1,...,Bb, M, N], equivalent to tf.shape(A).
Args
name A name for this Op.
Returns int32 Tensor
solve View source
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
Solve (exact or approx) R (batch) systems of equations: A X = rhs. The returned Tensor will be close to an exact solution if A is well conditioned. Otherwise closeness will vary. See class docstring for details. Examples:
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
Args
rhs Tensor with same dtype as this operator and compatible shape. rhs is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility.
adjoint Python bool. If True, solve the system involving the adjoint of this LinearOperator: A^H X = rhs.
adjoint_arg Python bool. If True, solve A X = rhs^H where rhs^H is the hermitian transpose (transposition and complex conjugation).
name A name scope to use for ops added by this method.
Returns Tensor with shape [...,N, R] and same dtype as rhs.
Raises
NotImplementedError If self.is_non_singular or is_square is False.
solvevec View source
solvevec(
rhs, adjoint=False, name='solve'
)
Solve single equation with best effort: A X = rhs. The returned Tensor will be close to an exact solution if A is well conditioned. Otherwise closeness will vary. See class docstring for details. Examples:
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
Args
rhs Tensor with same dtype as this operator. rhs is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions.
adjoint Python bool. If True, solve the system involving the adjoint of this LinearOperator: A^H X = rhs.
name A name scope to use for ops added by this method.
Returns Tensor with shape [...,N] and same dtype as rhs.
Raises
NotImplementedError If self.is_non_singular or is_square is False.
tensor_rank_tensor View source
tensor_rank_tensor(
name='tensor_rank_tensor'
)
Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns b + 2.
Args
name A name for this Op.
Returns int32 Tensor, determined at runtime.
to_dense View source
to_dense(
name='to_dense'
)
Return a dense (batch) matrix representing this operator.
trace View source
trace(
name='trace'
)
Trace of the linear operator, equal to sum of self.diag_part(). If the operator is square, this is also the sum of the eigenvalues.
Args
name A name for this Op.
Returns Shape [B1,...,Bb] Tensor of same dtype as self.
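A quick numpy illustration of the claim that, for a square operator, the trace equals both the sum of the diagonal and the sum of the eigenvalues (numpy here stands in for diag_part/eigvals):

```python
import numpy as np

a = np.array([[2.0, 1.0],
              [1.0, 3.0]])
tr = np.trace(a)                       # sum of the diagonal: 5.0
eig_sum = np.linalg.eigvalsh(a).sum()  # equal, since a is square
```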
__matmul__ View source
__matmul__(
other
) | |
doc_24474 |
Return length of the ticks in points. | |
doc_24475 |
Raise self to the power other, in place. | |
doc_24476 | Inplace row normalize using the l1 norm | |
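The one-line description above refers to scikit-learn's internal sparse helper; a dense numpy sketch of the same operation looks like the following (the function name is made up, and unlike the sparse version this returns a copy rather than working in place):

```python
import numpy as np

def normalize_rows_l1(x):
    """Scale each row so its absolute values sum to 1.
    Rows of all zeros are left unchanged."""
    norms = np.abs(x).sum(axis=1, keepdims=True)
    norms[norms == 0] = 1.0   # avoid dividing zero rows by zero
    return x / norms

m = np.array([[1.0, 3.0],
              [0.0, 0.0],
              [-2.0, 2.0]])
out = normalize_rows_l1(m)
```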
doc_24477 | Gets the Surface the PixelArray uses. surface -> Surface The Surface the PixelArray was created for. | |
doc_24478 | tf.compat.v1.get_collection(
key, scope=None
)
See tf.Graph.get_collection for more details.
Args
key The key for the collection. For example, the GraphKeys class contains many standard names for collections.
scope (Optional.) If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied, and the choice of re.match means that a scope without special tokens filters by prefix.
Returns The list of values in the collection with the given name, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected.
Eager Compatibility Collections are not supported when eager execution is enabled. | |
doc_24479 |
Find indices where elements of v should be inserted in a to maintain order. For full documentation, see numpy.searchsorted See also numpy.searchsorted
equivalent function | |
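A minimal numpy example of the behaviour:

```python
import numpy as np

a = np.array([1, 2, 3, 5])
# Index at which 4 would be inserted to keep `a` sorted:
i = np.searchsorted(a, 4)
# Also works element-wise on an array of values:
idx = np.searchsorted(a, [0, 4, 6])
```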
doc_24480 |
Bases: object The base class for anything that participates in the transform tree and needs to invalidate its parents or be invalidated. This includes classes that are not really transforms, such as bounding boxes, since some transforms depend on bounding boxes to compute their values. Parameters
shorthand_namestr
A string representing the "name" of the transform. The name carries no significance other than to improve the readability of str(transform) when DEBUG=True. INVALID=3
INVALID_AFFINE=2
INVALID_NON_AFFINE=1
__copy__()[source]
__deepcopy__(memo)[source]
__dict__=mappingproxy({'__module__': 'matplotlib.transforms', '__doc__': '\n The base class for anything that participates in the transform tree\n and needs to invalidate its parents or be invalidated. This includes\n classes that are not really transforms, such as bounding boxes, since some\n transforms depend on bounding boxes to compute their values.\n ', 'INVALID_NON_AFFINE': 1, 'INVALID_AFFINE': 2, 'INVALID': 3, 'is_affine': False, 'is_bbox': False, 'pass_through': False, '__init__': <function TransformNode.__init__>, '__getstate__': <function TransformNode.__getstate__>, '__setstate__': <function TransformNode.__setstate__>, '__copy__': <function TransformNode.__copy__>, '__deepcopy__': <function TransformNode.__deepcopy__>, 'invalidate': <function TransformNode.invalidate>, '_invalidate_internal': <function TransformNode._invalidate_internal>, 'set_children': <function TransformNode.set_children>, 'frozen': <function TransformNode.frozen>, '__dict__': <attribute '__dict__' of 'TransformNode' objects>, '__weakref__': <attribute '__weakref__' of 'TransformNode' objects>, '__annotations__': {}})
__getstate__()[source]
__init__(shorthand_name=None)[source]
Parameters
shorthand_namestr
A string representing the "name" of the transform. The name carries no significance other than to improve the readability of str(transform) when DEBUG=True.
__module__='matplotlib.transforms'
__setstate__(data_dict)[source]
__weakref__
list of weak references to the object (if defined)
frozen()[source]
Return a frozen copy of this transform node. The frozen copy will not be updated when its children change. Useful for storing a previously known state of a transform where copy.deepcopy() might normally be used.
invalidate()[source]
Invalidate this TransformNode and triggers an invalidation of its ancestors. Should be called any time the transform changes.
is_affine=False
is_bbox=False
pass_through=False
If pass_through is True, all ancestors will always be invalidated, even if 'self' is already invalid.
set_children(*children)[source]
Set the children of the transform, to let the invalidation system know which transforms can invalidate this transform. Should be called from the constructor of any transforms that depend on other transforms. | |
doc_24481 |
A class to hold the parameters for a subplot. Defaults are given by rcParams["figure.subplot.[name]"]. Parameters
leftfloat
The position of the left edge of the subplots, as a fraction of the figure width.
rightfloat
The position of the right edge of the subplots, as a fraction of the figure width.
bottomfloat
The position of the bottom edge of the subplots, as a fraction of the figure height.
topfloat
The position of the top edge of the subplots, as a fraction of the figure height.
wspacefloat
The width of the padding between subplots, as a fraction of the average Axes width.
hspacefloat
The height of the padding between subplots, as a fraction of the average Axes height.
update(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)[source]
Update the dimensions of the passed parameters. None means unchanged.
propertyvalidate[source] | |
doc_24482 | See Migration guide for more details. tf.compat.v1.data.experimental.copy_to_device
tf.data.experimental.copy_to_device(
target_device, source_device='/cpu:0'
)
Args
target_device The name of a device to which elements will be copied.
source_device The original device on which input_dataset will be placed.
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. | |
doc_24483 |
Applies the rectified linear unit function element-wise: ReLU(x) = (x)^+ = max(0, x)
Parameters
inplace – can optionally do the operation in-place. Default: False
Shape:
Input: (N, *), where * means any number of additional dimensions
Output: (N, *), same shape as the input
Examples: >>> m = nn.ReLU()
>>> input = torch.randn(2)
>>> output = m(input)
An implementation of CReLU - https://arxiv.org/abs/1603.05201
>>> m = nn.ReLU()
>>> input = torch.randn(2).unsqueeze(0)
>>> output = torch.cat((m(input),m(-input))) | |
doc_24484 |
Prune entire (currently unpruned) channels in a tensor at random. Parameters
amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If int, it represents the absolute number of parameters to prune.
dim (int, optional) – index of the dim along which we define channels to prune. Default: -1.
classmethod apply(module, name, amount, dim=-1) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – parameter name within module on which pruning will act.
amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If int, it represents the absolute number of parameters to prune.
dim (int, optional) – index of the dim along which we define channels to prune. Default: -1.
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the input tensor Return type
pruned_tensor (torch.Tensor)
compute_mask(t, default_mask) [source]
Computes and returns a mask for the input tensor t. Starting from a base default_mask (which should be a mask of ones if the tensor has not been pruned yet), generate a random mask to apply on top of the default_mask by randomly zeroing out channels along the specified dim of the tensor. Parameters
t (torch.Tensor) – tensor representing the parameter to prune
default_mask (torch.Tensor) – Base mask from previous pruning iterations, that need to be respected after the new mask is applied. Same dims as t. Returns
mask to apply to t, of same dims as t Return type
mask (torch.Tensor) Raises
IndexError – if self.dim >= len(t.shape)
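compute_mask as described above can be sketched in plain numpy (torch-free; random_channel_mask is a hypothetical stand-in, not the torch API): starting from a mask of ones, it zeroes a random subset of channels along dim.

```python
import numpy as np

def random_channel_mask(t, amount, dim=-1, rng=None):
    """Zero out a randomly chosen subset of channels along `dim`.
    `amount` is a fraction (float) or an absolute count (int)."""
    rng = rng or np.random.default_rng(0)
    n_channels = t.shape[dim]
    n_prune = (int(round(amount * n_channels))
               if isinstance(amount, float) else amount)
    mask = np.ones_like(t)
    pruned = rng.choice(n_channels, size=n_prune, replace=False)
    idx = [slice(None)] * t.ndim
    for c in pruned:
        idx[dim] = c          # select the whole channel along `dim`
        mask[tuple(idx)] = 0
    return mask

t = np.ones((2, 4))
m = random_channel_mask(t, amount=0.5, dim=1)  # prunes 2 of 4 columns
```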
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores (of same shape as t) used to compute mask for pruning t. The values in this tensor indicate the importance of the corresponding elements in the t that is being pruned. If unspecified or None, the tensor t will be used in its place.
default_mask (torch.Tensor, optional) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones. Returns
pruned version of tensor t.
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed! | |
doc_24485 | Read a plist file. fp should be a readable and binary file object. Return the unpacked root object (which usually is a dictionary). The fmt is the format of the file and the following values are valid:
None: Autodetect the file format
FMT_XML: XML file format
FMT_BINARY: Binary plist format
The dict_type is the type used for dictionaries that are read from the plist file. XML data for the FMT_XML format is parsed using the Expat parser from xml.parsers.expat – see its documentation for possible exceptions on ill-formed XML. Unknown elements will simply be ignored by the plist parser. The parser for the binary format raises InvalidFileException when the file cannot be parsed. New in version 3.4.
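A small round-trip example using the standard library (plistlib.dump is the matching writer; fmt=None on load autodetects the format):

```python
import os
import plistlib
import tempfile

data = {"name": "example", "count": 3, "tags": ["a", "b"]}
path = os.path.join(tempfile.mkdtemp(), "demo.plist")

# Write in the binary format; FMT_XML works the same way.
with open(path, "wb") as fp:
    plistlib.dump(data, fp, fmt=plistlib.FMT_BINARY)

with open(path, "rb") as fp:
    loaded = plistlib.load(fp)   # fmt=None autodetects
```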
doc_24486 |
Check if types match. New in version 1.7.0. Parameters
otherobject
Class instance. Returns
boolboolean
True if other is same class as self | |
doc_24487 | The version string of the zlib library actually loaded by the interpreter. New in version 3.3. | |
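For comparison, ZLIB_VERSION records the version the module was compiled against, while ZLIB_RUNTIME_VERSION reflects the library actually loaded:

```python
import zlib

compiled = zlib.ZLIB_VERSION          # build-time version string
runtime = zlib.ZLIB_RUNTIME_VERSION   # version actually loaded
```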
doc_24488 |
This is a sequential container which calls the Conv1d and ReLU modules. During quantization this will be replaced with the corresponding fused module. | |
doc_24489 |
Update the subplot position from figure.subplotpars. | |
doc_24490 |
Return unbiased kurtosis over requested axis. Kurtosis obtained using Fisher’s definition of kurtosis (kurtosis of normal == 0.0). Normalized by N-1. Parameters
axis:{index (0), columns (1)}
Axis for the function to be applied on.
skipna:bool, default True
Exclude NA/null values when computing the result.
level:int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only:bool, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. **kwargs
Additional keyword arguments to be passed to the function. Returns
Series or DataFrame (if level specified) | |
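The normalization described above (Fisher's definition with the N-1 bias correction) can be reproduced in plain numpy; the function name is made up, but the formula is the standard adjusted sample excess kurtosis that pandas' kurt() computes:

```python
import numpy as np

def sample_kurtosis(x):
    """Unbiased excess kurtosis (Fisher): ~0 for normal data,
    normalized by N-1 via the bias-corrected variance."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    s2 = (d ** 2).sum() / (n - 1)          # unbiased variance
    term = (d ** 4).sum() / s2 ** 2
    return (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3))) * term \
        - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))

k = sample_kurtosis([1, 2, 3, 4, 5])   # flat data is platykurtic
```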
doc_24491 | Outputs the feed in the given encoding to outfile, which is a file-like object. Subclasses should override this. | |
doc_24492 | Register a custom template global, available application wide. Like Flask.template_global() but for a blueprint. Changelog New in version 0.10. Parameters
name (Optional[str]) – the optional name of the global, otherwise the function name will be used. Return type
Callable | |
doc_24493 | Return the inverse hyperbolic tangent of x. | |
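Example using the math module (atanh is the inverse of tanh on the open interval (-1, 1)):

```python
import math

y = math.atanh(0.5)
x = math.tanh(y)   # round-trips back to 0.5
```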
doc_24494 | The format of a MIME document allows for some text between the blank line following the headers, and the first multipart boundary string. Normally, this text is never visible in a MIME-aware mail reader because it falls outside the standard MIME armor. However, when viewing the raw text of the message, or when viewing the message in a non-MIME aware reader, this text can become visible. The preamble attribute contains this leading extra-armor text for MIME documents. When the Parser discovers some text after the headers but before the first boundary string, it assigns this text to the message’s preamble attribute. When the Generator is writing out the plain text representation of a MIME message, and it finds the message has a preamble attribute, it will write this text in the area between the headers and the first boundary. See email.parser and email.generator for details. Note that if the message object has no preamble, the preamble attribute will be None. | |
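A short example of setting the preamble on a multipart message; the text ends up between the headers and the first boundary in the serialized output:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg["Subject"] = "demo"
# Only visible in non-MIME-aware readers or in the raw message text.
msg.preamble = "This is a multi-part message in MIME format.\n"
msg.attach(MIMEText("hello"))

raw = msg.as_string()
```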
doc_24495 |
A torch.nn.ConvTranspose1d module with lazy initialization of the in_channels argument of the ConvTranspose1d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 See also torch.nn.ConvTranspose1d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of ConvTranspose1d | |
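Since in_channels is inferred lazily, the output length is still governed by the usual ConvTranspose1d formula, L_out = (L_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1. A small helper (hypothetical name, torch-free) makes it explicit:

```python
def conv_transpose1d_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    """Output length of a 1-D transposed convolution, per the
    formula documented for torch.nn.ConvTranspose1d."""
    return (l_in - 1) * stride - 2 * padding \
        + dilation * (kernel_size - 1) + output_padding + 1

n = conv_transpose1d_out_len(5, kernel_size=3, stride=2)  # -> 11
```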
doc_24496 |
Callback processing for mouse movement events. Backend derived classes should call this function on any motion-notify-event. This method will call all functions connected to the 'motion_notify_event' with a MouseEvent instance. Parameters
xfloat
The canvas coordinates where 0=left.
yfloat
The canvas coordinates where 0=bottom. guiEvent
The native UI event that generated the Matplotlib event. | |
doc_24497 | tf.keras.layers.AvgPool2D Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.AveragePooling2D, tf.compat.v1.keras.layers.AvgPool2D
tf.keras.layers.AveragePooling2D(
pool_size=(2, 2), strides=None, padding='valid', data_format=None,
**kwargs
)
Arguments
pool_size integer or tuple of 2 integers, factors by which to downscale (vertical, horizontal). (2, 2) will halve the input in both spatial dimension. If only one integer is specified, the same window length will be used for both dimensions.
strides Integer, tuple of 2 integers, or None. Strides values. If None, it will default to pool_size.
padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape: If data_format='channels_last': 4D tensor with shape (batch_size, rows, cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, rows, cols). Output shape: If data_format='channels_last': 4D tensor with shape (batch_size, pooled_rows, pooled_cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, pooled_rows, pooled_cols). | |
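A minimal numpy sketch of what the layer computes for a single channel with pool_size=strides=(2, 2) and 'valid' padding (an illustration, not the Keras implementation; dims are assumed to divide evenly):

```python
import numpy as np

def avg_pool2d(x, pool=2):
    """2-D average pooling with stride == pool size, no padding."""
    h, w = x.shape
    return x.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
y = avg_pool2d(x)   # each output is the mean of a 2x2 window
```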
doc_24498 |
Set the text size. | |
doc_24499 | The EnvBuilder class accepts the following keyword arguments on instantiation:
system_site_packages – a Boolean value indicating that the system Python site-packages should be available to the environment (defaults to False).
clear – a Boolean value which, if true, will delete the contents of any existing target directory, before creating the environment.
symlinks – a Boolean value indicating whether to attempt to symlink the Python binary rather than copying.
upgrade – a Boolean value which, if true, will upgrade an existing environment with the running Python - for use when that Python has been upgraded in-place (defaults to False).
with_pip – a Boolean value which, if true, ensures pip is installed in the virtual environment. This uses ensurepip with the --default-pip option.
prompt – a String to be used after virtual environment is activated (defaults to None which means directory name of the environment would be used). If the special string "." is provided, the basename of the current directory is used as the prompt.
upgrade_deps – Update the base venv modules to the latest on PyPI Changed in version 3.4: Added the with_pip parameter New in version 3.6: Added the prompt parameter New in version 3.9: Added the upgrade_deps parameter Creators of third-party virtual environment tools will be free to use the provided EnvBuilder class as a base class. The returned env-builder is an object which has a method, create:
create(env_dir)
Create a virtual environment by specifying the target directory (absolute or relative to the current directory) which is to contain the virtual environment. The create method will either create the environment in the specified directory, or raise an appropriate exception. The create method of the EnvBuilder class illustrates the hooks available for subclass customization: def create(self, env_dir):
"""
Create a virtualized Python environment in a directory.
env_dir is the target directory to create an environment in.
"""
env_dir = os.path.abspath(env_dir)
context = self.ensure_directories(env_dir)
self.create_configuration(context)
self.setup_python(context)
self.setup_scripts(context)
self.post_setup(context)
Each of the methods ensure_directories(), create_configuration(), setup_python(), setup_scripts() and post_setup() can be overridden.
ensure_directories(env_dir)
Creates the environment directory and all necessary directories, and returns a context object. This is just a holder for attributes (such as paths), for use by the other methods. The directories are allowed to exist already, as long as either clear or upgrade were specified to allow operating on an existing environment directory.
create_configuration(context)
Creates the pyvenv.cfg configuration file in the environment.
setup_python(context)
Creates a copy or symlink to the Python executable in the environment. On POSIX systems, if a specific executable python3.x was used, symlinks to python and python3 will be created pointing to that executable, unless files with those names already exist.
setup_scripts(context)
Installs activation scripts appropriate to the platform into the virtual environment.
upgrade_dependencies(context)
Upgrades the core venv dependency packages (currently pip and setuptools) in the environment. This is done by shelling out to the pip executable in the environment. New in version 3.9.
post_setup(context)
A placeholder method which can be overridden in third party implementations to pre-install packages in the virtual environment or perform other post-creation steps.
Changed in version 3.7.2: Windows now uses redirector scripts for python[w].exe instead of copying the actual binaries. In 3.7.2 only setup_python() does nothing unless running from a build in the source tree. Changed in version 3.7.3: Windows copies the redirector scripts as part of setup_python() instead of setup_scripts(). This was not the case in 3.7.2. When using symlinks, the original executables will be linked. In addition, EnvBuilder provides this utility method that can be called from setup_scripts() or post_setup() in subclasses to assist in installing custom scripts into the virtual environment.
install_scripts(context, path)
path is the path to a directory that should contain subdirectories “common”, “posix”, “nt”, each containing scripts destined for the bin directory in the environment. The contents of “common” and the directory corresponding to os.name are copied after some text replacement of placeholders:
__VENV_DIR__ is replaced with the absolute path of the environment directory.
__VENV_NAME__ is replaced with the environment name (final path segment of environment directory).
__VENV_PROMPT__ is replaced with the prompt (the environment name surrounded by parentheses and with a following space)
__VENV_BIN_NAME__ is replaced with the name of the bin directory (either bin or Scripts).
__VENV_PYTHON__ is replaced with the absolute path of the environment’s executable. The directories are allowed to exist (for when an existing environment is being upgraded). |
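A minimal end-to-end use of EnvBuilder (skipping pip so creation stays fast); create() lays down pyvenv.cfg and the platform's bin/Scripts directory:

```python
import os
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "demo-env")
builder = venv.EnvBuilder(with_pip=False, prompt="demo")
builder.create(env_dir)

cfg = os.path.join(env_dir, "pyvenv.cfg")
bindir = "Scripts" if os.name == "nt" else "bin"
```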