doc_25300
Delete all occurrences of the field with the given name from the message's headers. No exception is raised if the named field isn't present in the headers.
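As a quick illustration (assuming this text describes the standard-library email.message.Message deletion behavior), deleting a header removes every occurrence and silently ignores missing names:

```python
from email.message import Message

msg = Message()
msg["Received"] = "from host-a"
msg["Received"] = "from host-b"   # header fields may repeat
msg["Subject"] = "hello"

del msg["Received"]   # removes both occurrences
del msg["X-Missing"]  # absent field: no exception is raised
# Only the Subject header remains.
```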
doc_25301
An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.). Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(...)) for instantiating an array. For more information, refer to the numpy module and examine the methods and attributes of an array.

Parameters (for the __new__ method; see Notes below)
shape (tuple of ints): Shape of created array.
dtype (data-type, optional): Any object that can be interpreted as a numpy data type.
buffer (object exposing buffer interface, optional): Used to fill the array with data.
offset (int, optional): Offset of array data in buffer.
strides (tuple of ints, optional): Strides of data in memory.
order ({'C', 'F'}, optional): Row-major (C-style) or column-major (Fortran-style) order.

See also
array: Construct an array.
zeros: Create an array, each element of which is zero.
empty: Create an array, but leave its allocated memory unchanged (i.e., it contains "garbage").
dtype: Create a data-type.
numpy.typing.NDArray: An ndarray alias generic w.r.t. its dtype.type.

Notes
There are two modes of creating an array using __new__: If buffer is None, then only shape, dtype, and order are used. If buffer is an object exposing the buffer interface, then all keywords are interpreted. No __init__ method is needed because the array is fully initialized after the __new__ method.

Examples
These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray.

First mode, buffer is None:

>>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
       [ nan, 2.5e-323]])

Second mode:

>>> np.ndarray((2,), buffer=np.array([1,2,3]),
...            offset=np.int_().itemsize,
...            dtype=int)  # offset = 1*itemsize, i.e. skip first element
array([2, 3])

Attributes
T (ndarray): Transpose of the array.
data (buffer): The array's elements, in memory.
dtype (dtype object): Describes the format of the elements in the array.
flags (dict): Dictionary containing information related to memory use, e.g., 'C_CONTIGUOUS', 'OWNDATA', 'WRITEABLE', etc.
flat (numpy.flatiter object): Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (see ndarray.flat for assignment examples).
imag (ndarray): Imaginary part of the array.
real (ndarray): Real part of the array.
size (int): Number of elements in the array.
itemsize (int): The memory use of each array element in bytes.
nbytes (int): The total number of bytes required to store the array data, i.e., itemsize * size.
ndim (int): The array's number of dimensions.
shape (tuple of ints): Shape of the array.
strides (tuple of ints): The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row to row, one needs to jump 8 bytes at a time (2 * 4).
ctypes (ctypes object): Class containing properties of the array needed for interaction with ctypes.
base (ndarray): If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored.
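The buffer mode and the strides arithmetic described above can be checked directly (a sketch, assuming a platform where Python's int maps to numpy's default integer type, so the itemsizes agree):

```python
import numpy as np

# Second ndarray mode: fill from an existing buffer, skipping one element.
a = np.ndarray((2,), buffer=np.array([1, 2, 3]),
               offset=np.int_().itemsize, dtype=int)

# Strides arithmetic: a C-ordered (3, 4) int16 array steps 2 bytes
# between neighbouring elements and 2 * 4 = 8 bytes between rows.
b = np.zeros((3, 4), dtype=np.int16)
```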
doc_25302
See torch.prod()
doc_25303
Safely iterate line-based over an input stream. If the input stream is not a LimitedStream, the limit parameter is mandatory. This uses the stream's read() method internally, as opposed to the readline() method, which is unsafe and can only be used in violation of the WSGI specification. The same problem applies to the __iter__ function of the input stream, which calls readline() without arguments. If you need line-by-line processing, it's strongly recommended to iterate over the input stream using this helper function.

Changelog
New in version 0.11.10: added support for the cap_at_buffer parameter.
New in version 0.9: added support for iterators as input stream.
Changed in version 0.8: This function now ensures that the limit was reached.

Parameters
stream (Union[Iterable[bytes], BinaryIO]): the stream or iterable to iterate over.
limit (Optional[int]): the limit in bytes for the stream (usually the content length). Not necessary if the stream is a LimitedStream.
buffer_size (int): The optional buffer size.
cap_at_buffer (bool): if this is set, chunks are split if they are longer than the buffer size. Note that the internal implementation may still exceed the buffer size by a factor of two.

Return type
Iterator[bytes]
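The read()-only approach can be sketched in plain Python (a hypothetical simplification of the idea, not Werkzeug's actual implementation): read bounded chunks, never past the limit, and buffer any incomplete trailing line.

```python
import io

def iter_lines(stream, limit, buffer_size=1024):
    """Yield lines from `stream` using only read(), never readline(),
    and never reading more than `limit` bytes in total."""
    remaining = limit
    pending = b""
    while remaining > 0:
        chunk = stream.read(min(buffer_size, remaining))
        if not chunk:
            break
        remaining -= len(chunk)
        pending += chunk
        lines = pending.splitlines(keepends=True)
        # Keep a possibly incomplete final line in the buffer.
        if lines and not lines[-1].endswith(b"\n"):
            pending = lines.pop()
        else:
            pending = b""
        yield from lines
    if pending:
        yield pending
```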
doc_25304
Setup for writing the movie file.

Parameters
fig (Figure): The figure to grab the rendered frames from.
outfile (str): The filename of the resulting movie file.
dpi (float, default: fig.dpi): The dpi of the output file. This, with the figure size, controls the size in pixels of the resulting movie file.
frame_prefix (str, optional): The filename prefix to use for temporary files. If None (the default), files are written to a temporary directory which is deleted by cleanup; if not None, no temporary files are deleted.
doc_25305
Required. A reference to the django_content_type database table, which contains a record for each installed model.
doc_25306
See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyAdagrad

tf.raw_ops.SparseApplyAdagrad(
    var, accum, lr, grad, indices, use_locking=False, update_slots=True,
    name=None
)

That is, for rows for which we have grad, we update var and accum as follows:

$$accum += grad * grad$$
$$var -= lr * grad * (1 / sqrt(accum))$$

Args
var: A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable().
accum: A mutable Tensor. Must have the same type as var. Should be from a Variable().
lr: A Tensor. Must have the same type as var. Learning rate. Must be a scalar.
grad: A Tensor. Must have the same type as var. The gradient.
indices: A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
use_locking: An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
update_slots: An optional bool. Defaults to True.
name: A name for the operation (optional).

Returns
A mutable Tensor. Has the same type as var.
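The update rule above can be written out in plain NumPy (a hypothetical sketch of the arithmetic, not TensorFlow code): only the rows named in indices are touched.

```python
import numpy as np

def sparse_apply_adagrad(var, accum, lr, grad, indices):
    """Sketch of the sparse Adagrad rule: accum += g*g, then
    var -= lr * g / sqrt(accum), applied row-by-row."""
    for g, i in zip(grad, indices):
        accum[i] += g * g
        var[i] -= lr * g / np.sqrt(accum[i])
```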
doc_25307
Remove all items from the ModuleDict.
doc_25308
Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. See also char.islower
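The element-wise behavior reads, for example (assuming this describes numpy.char.islower; note that a string with no cased characters is False even though nothing in it is uppercase):

```python
import numpy as np

# "123" has no cased characters, so it is False.
flags = np.char.islower(np.array(["abc", "Abc", "123"]))
```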
doc_25309
Find the indices of the first and last unmasked values along an axis. If all values are masked, return None. Otherwise, return a list of two tuples, corresponding to the indices of the first and last unmasked values respectively.

Parameters
a (array_like): The input array.
axis (int, optional): Axis along which to perform the operation. If None (default), applies to a flattened version of the array.

Returns
edges (ndarray or list): An array of start and end indexes if there are any masked data in the array. If there are no masked data in the array, edges is a list of the first and last index.

See also
flatnotmasked_contiguous, flatnotmasked_edges, notmasked_contiguous, clump_masked, clump_unmasked

Examples
>>> a = np.arange(9).reshape((3, 3))
>>> m = np.zeros_like(a)
>>> m[1:, 1:] = 1
>>> am = np.ma.array(a, mask=m)
>>> np.array(am[~am.mask])
array([0, 1, 2, 3, 6])
>>> np.ma.notmasked_edges(am)
array([0, 6])
doc_25310
tf.compat.v1.tpu.batch_parallel(
    computation, inputs=None, num_shards=1, infeed_queue=None,
    device_assignment=None, name=None, xla_options=None
)

Convenience wrapper around shard(). inputs must be a list of Tensors or None (equivalent to an empty list). Each input is split into num_shards pieces along the 0-th dimension, and computation is applied to each shard in parallel. Tensors are broadcast to all shards if they are lexically captured by computation. e.g.,

x = tf.constant(7)
def computation():
    return x + 3
... = shard(computation, ...)

The outputs from all shards are concatenated back together along their 0-th dimension. Inputs and outputs of the computation must be at least rank-1 Tensors.

Args
computation: A Python function that builds a computation to apply to each shard of the input.
inputs: A list of input tensors or None (equivalent to an empty list). The 0-th dimension of each Tensor must have size divisible by num_shards.
num_shards: The number of shards.
infeed_queue: If not None, the InfeedQueue from which to append a tuple of arguments as inputs to computation.
device_assignment: If not None, a DeviceAssignment describing the mapping between logical cores in the computation with physical cores in the TPU topology. Uses a default device assignment if None. The DeviceAssignment may be omitted if each shard of the computation uses only one core, and there is either only one shard, or the number of shards is equal to the number of cores in the TPU system.
name: (Deprecated) Does nothing.
xla_options: An instance of tpu.XLAOptions which indicates the options passed to XLA compiler. Use None for default options.

Returns
A list of output tensors.

Raises
ValueError: If num_shards <= 0.
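The split/apply/concatenate pattern batch_parallel describes can be sketched in NumPy (a hypothetical illustration of the data flow, with no actual parallelism or TPU involvement):

```python
import numpy as np

def batch_parallel_sketch(computation, inputs, num_shards):
    """Split each input along axis 0 into num_shards pieces, apply
    `computation` to each shard, and concatenate the outputs."""
    shards = [np.split(x, num_shards, axis=0) for x in inputs]
    outputs = [computation(*pieces) for pieces in zip(*shards)]
    return np.concatenate(outputs, axis=0)
```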
doc_25311
Return True if values in the object are monotonically decreasing, False otherwise. Returns bool
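For example (assuming this describes the pandas is_monotonic_decreasing property; note that equal neighbouring values still count as monotonically decreasing):

```python
import pandas as pd

# Non-increasing values, repeats allowed.
s = pd.Series([3, 2, 2, 1])
```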
doc_25312
assertIsNotNone(expr, msg=None) Test that expr is not None. New in version 3.1.
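A minimal usage sketch with a hypothetical test case:

```python
import unittest

class NoneChecks(unittest.TestCase):
    def test_lookup(self):
        # dict.get returns None for a missing key, a value otherwise.
        found = {"a": 1}.get("a")
        self.assertIsNotNone(found)
        self.assertIsNone({"a": 1}.get("b"))
```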
doc_25313
Similar to str.format(**mapping), except that mapping is used directly and not copied to a dict. This is useful if for example mapping is a dict subclass: >>> class Default(dict): ... def __missing__(self, key): ... return key ... >>> '{name} was born in {country}'.format_map(Default(name='Guido')) 'Guido was born in country' New in version 3.2.
doc_25314
A copy of the holiday array indicating additional invalid days.
doc_25315
Copy a docstring from another source function (if present).
doc_25316
Sets the threshold for this logger to level. Logging messages which are less severe than level will be ignored; logging messages which have severity level or higher will be emitted by whichever handler or handlers service this logger, unless a handler’s level has been set to a higher severity level than level. When a logger is created, the level is set to NOTSET (which causes all messages to be processed when the logger is the root logger, or delegation to the parent when the logger is a non-root logger). Note that the root logger is created with level WARNING. The term ‘delegation to the parent’ means that if a logger has a level of NOTSET, its chain of ancestor loggers is traversed until either an ancestor with a level other than NOTSET is found, or the root is reached. If an ancestor is found with a level other than NOTSET, then that ancestor’s level is treated as the effective level of the logger where the ancestor search began, and is used to determine how a logging event is handled. If the root is reached, and it has a level of NOTSET, then all messages will be processed. Otherwise, the root’s level will be used as the effective level. See Logging Levels for a list of levels. Changed in version 3.2: The level parameter now accepts a string representation of the level such as ‘INFO’ as an alternative to the integer constants such as INFO. Note, however, that levels are internally stored as integers, and methods such as e.g. getEffectiveLevel() and isEnabledFor() will return/expect to be passed integers.
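The delegation rule described above can be demonstrated directly (hypothetical logger names, chosen for illustration):

```python
import logging

parent = logging.getLogger("app")
child = logging.getLogger("app.db")   # child of "app"

parent.setLevel(logging.WARNING)
# child is left at NOTSET, so its effective level is delegated to the
# nearest ancestor with a level other than NOTSET, i.e. WARNING.
delegated = child.getEffectiveLevel()

child.setLevel("INFO")  # string form, accepted since Python 3.2
own = child.getEffectiveLevel()  # the child's own level now wins
```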
doc_25317
New in Django 3.2. Encode, optionally compress, append current timestamp, and sign complex data structure (e.g. list, tuple, or dictionary).
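The serialize/compress/timestamp/sign pipeline can be sketched with the standard library (a hypothetical illustration of the idea only; Django's actual signer uses a different wire format and key derivation, so do not use this as a drop-in):

```python
import base64
import hashlib
import hmac
import json
import time
import zlib

def dumps_sketch(obj, key, compress=False):
    """Serialize obj, optionally compress, append a timestamp,
    and sign the result with an HMAC."""
    payload = json.dumps(obj).encode()
    if compress:
        payload = zlib.compress(payload)
    payload = base64.urlsafe_b64encode(payload)
    stamped = payload + b":" + str(int(time.time())).encode()
    sig = hmac.new(key, stamped, hashlib.sha256).hexdigest().encode()
    return stamped + b":" + sig
```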
doc_25318
bytearray.isascii() Return True if the sequence is empty or all bytes in the sequence are ASCII, False otherwise. ASCII bytes are in the range 0-0x7F. New in version 3.7.
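For instance:

```python
# Empty sequences count as ASCII; any byte above 0x7F does not.
checks = [
    bytearray(b"").isascii(),               # empty: True
    bytearray(b"hello").isascii(),          # all ASCII: True
    bytearray("héllo", "utf-8").isascii(),  # é encodes as 0xC3 0xA9: False
]
```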
doc_25319
Alias for field number 2
doc_25320
Return 'PROPNAME or alias' if s has an alias, else return 'PROPNAME'. e.g., for the line markerfacecolor property, which has an alias, return 'markerfacecolor or mfc' and for the transform property, which does not, return 'transform'.
doc_25321
tf.compat.v1.Variable(
    initial_value=None, trainable=None, collections=None, validate_shape=True,
    caching_device=None, name=None, variable_def=None, dtype=None,
    expected_shape=None, import_scope=None, constraint=None, use_resource=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.compat.v1.VariableAggregation.NONE, shape=None
)

A variable maintains state in the graph across calls to run(). You add a variable to the graph by constructing an instance of the class Variable. The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape. The initial value defines the type and shape of the variable. After construction, the type and shape of the variable are fixed. The value can be changed using one of the assign methods. If you want to change the shape of a variable later you have to use an assign Op with validate_shape=False. Just like any Tensor, variables created with Variable() can be used as inputs for other Ops in the graph. Additionally, all the operators overloaded for the Tensor class are carried over to variables, so you can also add nodes to the graph by just doing arithmetic on variables.

import tensorflow as tf

# Create a variable.
w = tf.Variable(<initial-value>, name=<optional-name>)

# Use the variable in the graph like any Tensor.
y = tf.matmul(w, ...another variable or tensor...)

# The overloaded operators are available too.
z = tf.sigmoid(w + y)

# Assign a new value to the variable with `assign()` or a related method.
w.assign(w + 1.0)
w.assign_add(1.0)

When you launch the graph, variables have to be explicitly initialized before you can run Ops that use their value. You can initialize a variable by running its initializer op, restoring the variable from a save file, or simply running an assign Op that assigns a value to the variable. In fact, the variable initializer op is just an assign Op that assigns the variable's initial value to the variable itself.

# Launch the graph in a session.
with tf.compat.v1.Session() as sess:
    # Run the variable initializer.
    sess.run(w.initializer)
    # ...you now can run ops that use the value of 'w'...

The most common initialization pattern is to use the convenience function global_variables_initializer() to add an Op to the graph that initializes all the variables. You then run that Op after launching the graph.

# Add an Op to initialize global variables.
init_op = tf.compat.v1.global_variables_initializer()

# Launch the graph in a session.
with tf.compat.v1.Session() as sess:
    # Run the Op that initializes global variables.
    sess.run(init_op)
    # ...you can now run any Op that uses variable values...

If you need to create a variable with an initial value dependent on another variable, use the other variable's initialized_value(). This ensures that variables are initialized in the right order. All variables are automatically collected in the graph where they are created. By default, the constructor adds the new variable to the graph collection GraphKeys.GLOBAL_VARIABLES. The convenience function global_variables() returns the contents of that collection. When building a machine learning model it is often convenient to distinguish between variables holding the trainable model parameters and other variables such as a global step variable used to count training steps. To make this easier, the variable constructor supports a trainable=<bool> parameter. If True, the new variable is also added to the graph collection GraphKeys.TRAINABLE_VARIABLES. The convenience function trainable_variables() returns the contents of this collection. The various Optimizer classes use this collection as the default list of variables to optimize.

Warning: tf.Variable objects by default have a non-intuitive memory model. A Variable is represented internally as a mutable Tensor which can non-deterministically alias other Tensors in a graph.
The set of operations which consume a Variable and can lead to aliasing is undetermined and can change across TensorFlow versions. Avoid writing code which relies on the value of a Variable either changing or not changing as other operations happen. For example, using Variable objects or simple functions thereof as predicates in a tf.cond is dangerous and error-prone:

v = tf.Variable(True)
tf.cond(v, lambda: v.assign(False), my_false_fn)  # Note: this is broken.

Here, adding use_resource=True when constructing the variable will fix any nondeterminism issues:

v = tf.Variable(True, use_resource=True)
tf.cond(v, lambda: v.assign(False), my_false_fn)

To use the replacement for variables which does not have these issues: Add use_resource=True when constructing tf.Variable; Call tf.compat.v1.get_variable_scope().set_use_resource(True) inside a tf.compat.v1.variable_scope before the tf.compat.v1.get_variable() call.

Args
initial_value: A Tensor, or Python object convertible to a Tensor, which is the initial value for the Variable. The initial value must have a shape specified unless validate_shape is set to False. Can also be a callable with no argument that returns the initial value when called. In that case, dtype must be specified. (Note that initializer functions from init_ops.py must first be bound to a shape before being used here.)
trainable: If True, also adds the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES. This collection is used as the default list of variables to use by the Optimizer classes. Defaults to True, unless synchronization is set to ON_READ, in which case it defaults to False.
collections: List of graph collections keys. The new variable is added to these collections. Defaults to [GraphKeys.GLOBAL_VARIABLES].
validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known.
caching_device: Optional device string describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements.
name: Optional name for the variable. Defaults to 'Variable' and gets uniquified automatically.
variable_def: VariableDef protocol buffer. If not None, recreates the Variable object with its contents, referencing the variable's nodes in the graph, which must already exist. The graph is not changed. variable_def and the other arguments are mutually exclusive.
dtype: If set, initial_value will be converted to the given type. If None, either the datatype will be kept (if initial_value is a Tensor), or convert_to_tensor will decide.
expected_shape: A TensorShape. If set, initial_value is expected to have this shape.
import_scope: Optional string. Name scope to add to the Variable. Only used when initializing from protocol buffer.
constraint: An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
use_resource: whether to use resource variables.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
shape (optional) The shape of this variable. If None, the shape of initial_value will be used. When setting this argument to tf.TensorShape(None) (representing an unspecified shape), the variable can be assigned with values of different shapes. Raises ValueError If both variable_def and initial_value are specified. ValueError If the initial value is not specified, or does not have a shape and validate_shape is True. RuntimeError If eager execution is enabled. Attributes aggregation constraint Returns the constraint function associated with this variable. device The device of this variable. dtype The DType of this variable. graph The Graph of this variable. initial_value Returns the Tensor used as the initial value for the variable. Note that this is different from initialized_value() which runs the op that initializes the variable before returning its value. This method returns the tensor that is used by the op that initializes the variable. initializer The initializer operation for this variable. name The name of this variable. op The Operation of this variable. shape The TensorShape of this variable. synchronization trainable Child Classes class SaveSliceInfo Methods assign View source assign( value, use_locking=False, name=None, read_value=True ) Assigns a new value to the variable. This is essentially a shortcut for assign(self, value). Args value A Tensor. The new value for this variable. use_locking If True, use locking during the assignment. name The name of the operation to be created read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op. Returns The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode. assign_add View source assign_add( delta, use_locking=False, name=None, read_value=True ) Adds a value to this variable. This is essentially a shortcut for assign_add(self, delta). Args delta A Tensor. 
The value to add to this variable. use_locking If True, use locking during the operation. name The name of the operation to be created read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op. Returns The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode. assign_sub View source assign_sub( delta, use_locking=False, name=None, read_value=True ) Subtracts a value from this variable. This is essentially a shortcut for assign_sub(self, delta). Args delta A Tensor. The value to subtract from this variable. use_locking If True, use locking during the operation. name The name of the operation to be created read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op. Returns The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode. batch_scatter_update View source batch_scatter_update( sparse_delta, use_locking=False, name=None ) Assigns tf.IndexedSlices to this variable batch-wise. Analogous to batch_gather. This assumes that this variable and the sparse_delta IndexedSlices have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following: num_prefix_dims = sparse_delta.indices.ndims - 1 batch_dim = num_prefix_dims + 1 sparse_delta.updates.shape = sparse_delta.indices.shape + var.shape[ batch_dim:] where sparse_delta.updates.shape[:num_prefix_dims] == sparse_delta.indices.shape[:num_prefix_dims] == var.shape[:num_prefix_dims] And the operation performed can be expressed as: var[i_1, ..., i_n, sparse_delta.indices[i_1, ..., i_n, j]] = sparse_delta.updates[ i_1, ..., i_n, j] When sparse_delta.indices is a 1D tensor, this operation is equivalent to scatter_update. 
To avoid this operation one can loop over the first ndims of the variable and use scatter_update on the subtensors that result from slicing the first dimension. This is a valid option for ndims = 1, but less efficient than this implementation.

Args
sparse_delta: tf.IndexedSlices to be assigned to this variable.
use_locking: If True, use locking during the operation.
name: the name of the operation.

Returns
The updated variable.

Raises
TypeError: if sparse_delta is not an IndexedSlices.

count_up_to View source

count_up_to(limit)

Increments this variable until it reaches limit. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Prefer Dataset.range instead. When that Op is run it tries to increment the variable by 1. If incrementing the variable would bring it above limit then the Op raises the exception OutOfRangeError. If no error is raised, the Op outputs the value of the variable before the increment. This is essentially a shortcut for count_up_to(self, limit).

Args
limit: value at which incrementing the variable raises an error.

Returns
A Tensor that will hold the variable value before the increment. If no other Op modifies this variable, the values produced will all be distinct.

eval View source

eval(session=None)

In a session, computes and returns the value of this variable. This is not a graph construction method, it does not add ops to the graph. This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used. See tf.compat.v1.Session for more information on launching a graph and on sessions.

v = tf.Variable([1, 2])
init = tf.compat.v1.global_variables_initializer()
with tf.compat.v1.Session() as sess:
    sess.run(init)
    # Usage passing the session explicitly.
    print(v.eval(sess))
    # Usage with the default session. The 'with' block
    # above makes 'sess' the default session.
print(v.eval()) Args session The session to use to evaluate this variable. If none, the default session is used. Returns A numpy ndarray with a copy of the value of this variable. experimental_ref View source experimental_ref() DEPRECATED FUNCTION Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use ref() instead. from_proto View source @staticmethod from_proto( variable_def, import_scope=None ) Returns a Variable object created from variable_def. gather_nd View source gather_nd( indices, name=None ) Gather slices from params into a Tensor with shape specified by indices. See tf.gather_nd for details. Args indices A Tensor. Must be one of the following types: int32, int64. Index tensor. name A name for the operation (optional). Returns A Tensor. Has the same type as params. get_shape View source get_shape() Alias of Variable.shape. initialized_value View source initialized_value() Returns the value of the initialized variable. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. You should use this instead of the variable itself to initialize another variable with a value that depends on the value of this variable. # Initialize 'v' with a random tensor. v = tf.Variable(tf.random.truncated_normal([10, 40])) # Use `initialized_value` to guarantee that `v` has been # initialized before its value is used to initialize `w`. # The random values are picked only once. w = tf.Variable(v.initialized_value() * 2.0) Returns A Tensor holding the value of this variable after its initializer has run. load View source load( value, session=None ) Load new value into this variable. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. 
Instructions for updating: Prefer Variable.assign which has equivalent behavior in 2.X. Writes new value to variable's memory. Doesn't add ops to the graph. This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used. See tf.compat.v1.Session for more information on launching a graph and on sessions. v = tf.Variable([1, 2]) init = tf.compat.v1.global_variables_initializer() with tf.compat.v1.Session() as sess: sess.run(init) # Usage passing the session explicitly. v.load([2, 3], sess) print(v.eval(sess)) # prints [2 3] # Usage with the default session. The 'with' block # above makes 'sess' the default session. v.load([3, 4], sess) print(v.eval()) # prints [3 4] Args value New variable value session The session to use to evaluate this variable. If none, the default session is used. Raises ValueError Session is not passed and no default session read_value View source read_value() Returns the value of this variable, read in the current context. Can be different from value() if it's on another device, with control dependencies, etc. Returns A Tensor containing the value of the variable. ref View source ref() Returns a hashable reference object to this Variable. The primary use case for this API is to put variables in a set/dictionary. We can't put variables in a set/dictionary as variable.__hash__() is no longer available starting Tensorflow 2.0. The following will raise an exception starting 2.0 x = tf.Variable(5) y = tf.Variable(10) z = tf.Variable(10) variable_set = {x, y, z} Traceback (most recent call last): TypeError: Variable is unhashable. Instead, use tensor.ref() as the key. variable_dict = {x: 'five', y: 'ten'} Traceback (most recent call last): TypeError: Variable is unhashable. Instead, use tensor.ref() as the key. Instead, we can use variable.ref(). 
variable_set = {x.ref(), y.ref(), z.ref()} x.ref() in variable_set True variable_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'} variable_dict[y.ref()] 'ten' Also, the reference object provides .deref() function that returns the original Variable. x = tf.Variable(5) x.ref().deref() <tf.Variable 'Variable:0' shape=() dtype=int32, numpy=5> scatter_add View source scatter_add( sparse_delta, use_locking=False, name=None ) Adds tf.IndexedSlices to this variable. Args sparse_delta tf.IndexedSlices to be added to this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_div View source scatter_div( sparse_delta, use_locking=False, name=None ) Divide this variable by tf.IndexedSlices. Args sparse_delta tf.IndexedSlices to divide this variable by. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_max View source scatter_max( sparse_delta, use_locking=False, name=None ) Updates this variable with the max of tf.IndexedSlices and itself. Args sparse_delta tf.IndexedSlices to use as an argument of max with this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_min View source scatter_min( sparse_delta, use_locking=False, name=None ) Updates this variable with the min of tf.IndexedSlices and itself. Args sparse_delta tf.IndexedSlices to use as an argument of min with this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. 
scatter_mul View source scatter_mul( sparse_delta, use_locking=False, name=None ) Multiply this variable by tf.IndexedSlices. Args sparse_delta tf.IndexedSlices to multiply this variable by. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_nd_add View source scatter_nd_add( indices, updates, name=None ) Applies sparse addition to individual values or slices in a Variable. The Variable has rank P and indices is a Tensor of rank Q. indices must be an integer tensor containing indices into self. It must have shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of self. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this: v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) add = v.scatter_nd_add(indices, updates) with tf.compat.v1.Session() as sess: print(sess.run(add)) The resulting update to v would look like this: [1, 13, 3, 14, 14, 6, 7, 20] See tf.scatter_nd for more details about how to make updates to slices. Args indices The indices to be used in the operation. updates The values to be used in the operation. name the name of the operation. Returns The updated variable. scatter_nd_sub View source scatter_nd_sub( indices, updates, name=None ) Applies sparse subtraction to individual values or slices in a Variable. Assume the variable has rank P and indices is a Tensor of rank Q. indices must be an integer tensor containing indices into self. It must have shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of self. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that update would look like this: v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) op = v.scatter_nd_sub(indices, updates) with tf.compat.v1.Session() as sess: print(sess.run(op)) The resulting update to v would look like this: [1, -9, 3, -6, -4, 6, 7, -4] See tf.scatter_nd for more details about how to make updates to slices. Args indices The indices to be used in the operation. updates The values to be used in the operation. name the name of the operation. Returns The updated variable. scatter_nd_update View source scatter_nd_update( indices, updates, name=None ) Applies sparse assignment to individual values or slices in a Variable. The Variable has rank P and indices is a Tensor of rank Q. indices must be an integer tensor containing indices into self. It must have shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of self. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. For example, say we want to assign 4 scattered elements to a rank-1 tensor with 8 elements.
In Python, that update would look like this: v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) op = v.scatter_nd_update(indices, updates) with tf.compat.v1.Session() as sess: print(sess.run(op)) The resulting update to v would look like this: [1, 11, 3, 10, 9, 6, 7, 12] See tf.scatter_nd for more details about how to make updates to slices. Args indices The indices to be used in the operation. updates The values to be used in the operation. name the name of the operation. Returns The updated variable. scatter_sub View source scatter_sub( sparse_delta, use_locking=False, name=None ) Subtracts tf.IndexedSlices from this variable. Args sparse_delta tf.IndexedSlices to be subtracted from this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_update View source scatter_update( sparse_delta, use_locking=False, name=None ) Assigns tf.IndexedSlices to this variable. Args sparse_delta tf.IndexedSlices to be assigned to this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. set_shape View source set_shape( shape ) Overrides the shape for this variable. Args shape the TensorShape representing the overridden shape. sparse_read View source sparse_read( indices, name=None ) Gather slices from the variable according to indices. This function supports a subset of tf.gather; see tf.gather for details on usage. Args indices The index Tensor. Must be one of the following types: int32, int64. Must be in range [0, params.shape[axis]). name A name for the operation (optional). Returns A Tensor. Has the same type as params. to_proto View source to_proto( export_scope=None ) Converts a Variable to a VariableDef protocol buffer.
Args export_scope Optional string. Name scope to remove. Returns A VariableDef protocol buffer, or None if the Variable is not in the specified name scope. value View source value() Returns the last snapshot of this variable. You usually do not need to call this method as all ops that need the value of the variable call it automatically through a convert_to_tensor() call. Returns a Tensor which holds the value of the variable. You cannot assign a new value to this tensor as it is not a reference to the variable. To avoid copies, if the consumer of the returned value is on the same device as the variable, this actually returns the live value of the variable, not a copy. Updates to the variable are seen by the consumer. If the consumer is on a different device it will get a copy of the variable. Returns A Tensor containing the value of the variable. __abs__ View source __abs__( x, name=None ) Computes the absolute value of a tensor. Given a tensor of integer or floating-point values, this operation returns a tensor of the same type, where each element contains the absolute value of the corresponding element in the input. Given a tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. For a complex number \(a + bj\), its absolute value is computed as \(\sqrt{a^2 + b^2}\). For example: # real number x = tf.constant([-2.25, 3.25]) tf.abs(x) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([2.25, 3.25], dtype=float32)> # complex number x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]]) tf.abs(x) <tf.Tensor: shape=(2, 1), dtype=float64, numpy= array([[5.25594901], [6.60492241]])> Args x A Tensor or SparseTensor of type float16, float32, float64, int32, int64, complex64 or complex128. name A name for the operation (optional). Returns A Tensor or SparseTensor of the same size, type and sparsity as x, with absolute values.
Note, for complex64 or complex128 input, the returned Tensor will be of type float32 or float64, respectively. __add__ View source __add__( x, y ) The operation invoked by the Tensor.__add__ operator. Purpose in the API: This method is exposed in TensorFlow's API so that library developers can register dispatching for Tensor.__add__ to allow it to handle custom composite tensors & other custom objects. The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. Args x The left-hand side of the + operator. y The right-hand side of the + operator. name an optional name for the operation. Returns The result of the elementwise + operation. __and__ View source __and__( x, y ) __div__ View source __div__( x, y ) Divides x / y elementwise (using Python 2 division operator semantics). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Deprecated in favor of the / operator or tf.math.divide. Note: Prefer using the Tensor division operator or tf.divide, which obey Python 3 division operator semantics. This function divides x and y, forcing Python 2 semantics. That is, if x and y are both integers then the result will be an integer. This is in contrast to Python 3, where division with / is always a float while division with // is always an integer. Args x Tensor numerator of real numeric type. y Tensor denominator of real numeric type. name A name for the operation (optional). Returns x / y returns the quotient of x and y. __eq__ View source __eq__( other ) Compares two variables element-wise for equality. __floordiv__ View source __floordiv__( x, y ) Divides x / y elementwise, rounding toward the most negative integer.
The same as tf.compat.v1.div(x,y) for integers, but uses tf.floor(tf.compat.v1.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division. x and y must have the same type, and the result will have the same type as well. Args x Tensor numerator of real numeric type. y Tensor denominator of real numeric type. name A name for the operation (optional). Returns x / y rounded down. Raises TypeError If the inputs are complex. __ge__ __ge__( x, y, name=None ) Returns the truth value of (x >= y) element-wise. Note: math.greater_equal supports broadcasting. More about broadcasting here Example: x = tf.constant([5, 4, 6, 7]) y = tf.constant([5, 2, 5, 10]) tf.math.greater_equal(x, y) ==> [True, True, True, False] x = tf.constant([5, 4, 6, 7]) y = tf.constant([5]) tf.math.greater_equal(x, y) ==> [True, False, True, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool. __getitem__ View source __getitem__( var, slice_spec ) Creates a slice helper object given a variable. This allows creating a sub-tensor from part of the current contents of a variable. See tf.Tensor.__getitem__ for detailed examples of slicing. In addition, this function allows assignment to a sliced range. This is similar to __setitem__ functionality in Python. However, the syntax is different so that the user can capture the assignment operation for grouping or passing to sess.run().
For example, import tensorflow as tf A = tf.Variable([[1,2,3], [4,5,6], [7,8,9]], dtype=tf.float32) with tf.compat.v1.Session() as sess: sess.run(tf.compat.v1.global_variables_initializer()) print(sess.run(A[:2, :2])) # => [[1,2], [4,5]] op = A[:2,:2].assign(22. * tf.ones((2, 2))) print(sess.run(op)) # => [[22, 22, 3], [22, 22, 6], [7,8,9]] Note that assignments currently do not support NumPy broadcasting semantics. Args var An ops.Variable object. slice_spec The arguments to Tensor.__getitem__. Returns The appropriate slice of "tensor", based on "slice_spec", as an operator. The operator also has an assign() method that can be used to generate an assignment operator. Raises ValueError If a slice range has negative size. TypeError If the slice indices aren't int, slice, ellipsis, tf.newaxis or int32/int64 tensors. __gt__ __gt__( x, y, name=None ) Returns the truth value of (x > y) element-wise. Note: math.greater supports broadcasting. More about broadcasting here Example: x = tf.constant([5, 4, 6]) y = tf.constant([5, 2, 5]) tf.math.greater(x, y) ==> [False, True, True] x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.greater(x, y) ==> [False, False, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool. __invert__ View source __invert__( x, name=None ) __iter__ View source __iter__() Dummy method to prevent iteration. Do not call. NOTE(mrry): If we register getitem as an overloaded operator, Python will valiantly attempt to iterate over the variable's Tensor from 0 to infinity. Declaring this method prevents this unintended behavior. Raises TypeError when invoked. __le__ __le__( x, y, name=None ) Returns the truth value of (x <= y) element-wise. Note: math.less_equal supports broadcasting.
More about broadcasting here Example: x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.less_equal(x, y) ==> [True, True, False] x = tf.constant([5, 4, 6]) y = tf.constant([5, 6, 6]) tf.math.less_equal(x, y) ==> [True, True, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool. __lt__ __lt__( x, y, name=None ) Returns the truth value of (x < y) element-wise. Note: math.less supports broadcasting. More about broadcasting here Example: x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.less(x, y) ==> [False, True, False] x = tf.constant([5, 4, 6]) y = tf.constant([5, 6, 7]) tf.math.less(x, y) ==> [False, True, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool. __matmul__ View source __matmul__( x, y ) Multiplies matrix a by matrix b, producing a * b. The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size. Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128. Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default. If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default.
This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32. A simple 2-D tensor matrix multiplication: a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) a # 2-D tensor <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[1, 2, 3], [4, 5, 6]], dtype=int32)> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) b # 2-D tensor <tf.Tensor: shape=(3, 2), dtype=int32, numpy= array([[ 7, 8], [ 9, 10], [11, 12]], dtype=int32)> c = tf.matmul(a, b) c # `a` * `b` <tf.Tensor: shape=(2, 2), dtype=int32, numpy= array([[ 58, 64], [139, 154]], dtype=int32)> A batch matrix multiplication with batch shape [2]: a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) a # 3-D tensor <tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy= array([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]], dtype=int32)> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) b # 3-D tensor <tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy= array([[[13, 14], [15, 16], [17, 18]], [[19, 20], [21, 22], [23, 24]]], dtype=int32)> c = tf.matmul(a, b) c # `a` * `b` <tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy= array([[[ 94, 100], [229, 244]], [[508, 532], [697, 730]]], dtype=int32)> Since python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent: d = a @ b @ [[10], [11]] d = tf.matmul(tf.matmul(a, b), [[10], [11]]) Args a tf.Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1. b tf.Tensor with same type and rank as a. transpose_a If True, a is transposed before multiplication. transpose_b If True, b is transposed before multiplication. adjoint_a If True, a is conjugated and transposed before multiplication. adjoint_b If True, b is conjugated and transposed before multiplication. a_is_sparse If True, a is treated as a sparse matrix. 
Notice, this does not support tf.sparse.SparseTensor; it just makes optimizations that assume most values in a are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication. b_is_sparse If True, b is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor; it just makes optimizations that assume most values in b are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication. name Name for the operation (optional). Returns A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False: output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j. Note This is matrix product, not element-wise product. Raises ValueError If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True. __mod__ View source __mod__( x, y ) Returns element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x. Note: math.floormod supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: int32, int64, uint64, bfloat16, half, float32, float64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x. __mul__ View source __mul__( x, y ) Dispatches cwise mul for "DenseDense" and "DenseSparse". __ne__ View source __ne__( other ) Compares two variables element-wise for inequality. __neg__ __neg__( x, name=None ) Computes numerical negative value element-wise. I.e., \(y = -x\). Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
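The flooring-divide identity quoted in the __mod__ section above, floor(x / y) * y + mod(x, y) = x, can be checked directly in plain Python, whose % operator is also a floormod:

```python
import math

# Verify floor(x / y) * y + (x % y) == x for every sign combination.
for x, y in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    assert math.floor(x / y) * y + (x % y) == x
    print(x, "%", y, "=", x % y)  # 1, 2, -2, -1: the sign follows the divisor
```

Note the contrast with truncating remainders (as in C), where -7 % 3 would be -1 rather than 2.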
__or__ View source __or__( x, y ) __pow__ View source __pow__( x, y ) Computes the power of one value to another. Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example: x = tf.constant([[2, 2], [3, 3]]) y = tf.constant([[8, 16], [2, 3]]) tf.pow(x, y) # [[256, 65536], [9, 27]] Args x A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128. y A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128. name A name for the operation (optional). Returns A Tensor. __radd__ View source __radd__( y, x ) The operation invoked by the Tensor.__add__ operator. Purpose in the API: This method is exposed in TensorFlow's API so that library developers can register dispatching for Tensor.__add__ to allow it to handle custom composite tensors & other custom objects. The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. Args x The left-hand side of the + operator. y The right-hand side of the + operator. name an optional name for the operation. Returns The result of the elementwise + operation. __rand__ View source __rand__( y, x ) __rdiv__ View source __rdiv__( y, x ) Divides x / y elementwise (using Python 2 division operator semantics). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Deprecated in favor of the / operator or tf.math.divide. Note: Prefer using the Tensor division operator or tf.divide, which obey Python 3 division operator semantics. This function divides x and y, forcing Python 2 semantics. That is, if x and y are both integers then the result will be an integer. This is in contrast to Python 3, where division with / is always a float while division with // is always an integer. Args x Tensor numerator of real numeric type.
y Tensor denominator of real numeric type. name A name for the operation (optional). Returns x / y returns the quotient of x and y. __rfloordiv__ View source __rfloordiv__( y, x ) Divides x / y elementwise, rounding toward the most negative integer. The same as tf.compat.v1.div(x,y) for integers, but uses tf.floor(tf.compat.v1.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division. x and y must have the same type, and the result will have the same type as well. Args x Tensor numerator of real numeric type. y Tensor denominator of real numeric type. name A name for the operation (optional). Returns x / y rounded down. Raises TypeError If the inputs are complex. __rmatmul__ View source __rmatmul__( y, x ) Multiplies matrix a by matrix b, producing a * b. The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size. Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128. Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default. If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.
A simple 2-D tensor matrix multiplication: a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) a # 2-D tensor <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[1, 2, 3], [4, 5, 6]], dtype=int32)> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) b # 2-D tensor <tf.Tensor: shape=(3, 2), dtype=int32, numpy= array([[ 7, 8], [ 9, 10], [11, 12]], dtype=int32)> c = tf.matmul(a, b) c # `a` * `b` <tf.Tensor: shape=(2, 2), dtype=int32, numpy= array([[ 58, 64], [139, 154]], dtype=int32)> A batch matrix multiplication with batch shape [2]: a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) a # 3-D tensor <tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy= array([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]], dtype=int32)> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) b # 3-D tensor <tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy= array([[[13, 14], [15, 16], [17, 18]], [[19, 20], [21, 22], [23, 24]]], dtype=int32)> c = tf.matmul(a, b) c # `a` * `b` <tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy= array([[[ 94, 100], [229, 244]], [[508, 532], [697, 730]]], dtype=int32)> Since python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent: d = a @ b @ [[10], [11]] d = tf.matmul(tf.matmul(a, b), [[10], [11]]) Args a tf.Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1. b tf.Tensor with same type and rank as a. transpose_a If True, a is transposed before multiplication. transpose_b If True, b is transposed before multiplication. adjoint_a If True, a is conjugated and transposed before multiplication. adjoint_b If True, b is conjugated and transposed before multiplication. a_is_sparse If True, a is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in a are zero. 
See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication. b_is_sparse If True, b is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor; it just makes optimizations that assume most values in b are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication. name Name for the operation (optional). Returns A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False: output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j. Note This is matrix product, not element-wise product. Raises ValueError If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True. __rmod__ View source __rmod__( y, x ) Returns element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x. Note: math.floormod supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: int32, int64, uint64, bfloat16, half, float32, float64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x. __rmul__ View source __rmul__( y, x ) Dispatches cwise mul for "DenseDense" and "DenseSparse". __ror__ View source __ror__( y, x ) __rpow__ View source __rpow__( y, x ) Computes the power of one value to another. Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example: x = tf.constant([[2, 2], [3, 3]]) y = tf.constant([[8, 16], [2, 3]]) tf.pow(x, y) # [[256, 65536], [9, 27]] Args x A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128. y A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
name A name for the operation (optional). Returns A Tensor. __rsub__ View source __rsub__( y, x ) Returns x - y element-wise. Note: Subtract supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x. __rtruediv__ View source __rtruediv__( y, x ) Divides x / y elementwise (using Python 3 division operator semantics). Note: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics. This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv. x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy). Args x Tensor numerator of numeric type. y Tensor denominator of numeric type. name A name for the operation (optional). Returns x / y evaluated in floating point. Raises TypeError If x and y have different dtypes. __rxor__ View source __rxor__( y, x ) __sub__ View source __sub__( x, y ) Returns x - y element-wise. Note: Subtract supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x. 
__truediv__ View source __truediv__( x, y ) Divides x / y elementwise (using Python 3 division operator semantics). Note: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics. This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv. x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy). Args x Tensor numerator of numeric type. y Tensor denominator of numeric type. name A name for the operation (optional). Returns x / y evaluated in floating point. Raises TypeError If x and y have different dtypes. __xor__ View source __xor__( x, y )
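As a quick illustration of the __truediv__ casting rule above (integer inputs are promoted to floating point, with int32 and int64 going to float64, matching NumPy), NumPy itself behaves the same way:

```python
import numpy as np

a = np.array([7, 1], dtype=np.int32)
b = np.array([2, 4], dtype=np.int32)

q = a / b          # true division: the int32 operands are cast to float64
print(q.dtype)     # float64
print(q)           # [3.5  0.25]
print(a // b)      # [3 0], floor division keeps the integer dtype instead
```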
New in Django 4.0. Tells Django which value should be returned when the expression is used to apply a function over an empty result set. Defaults to NotImplemented which forces the expression to be computed on the database.
Bases: matplotlib.backend_bases.RendererBase The renderer handles drawing/rendering operations. This is a minimal do-nothing class that can be used to get started when writing a new backend. Refer to backend_bases.RendererBase for documentation of the methods. draw_image(gc, x, y, im)[source] Draw an RGBA image. Parameters gcGraphicsContextBase A graphics context with clipping information. xscalar The distance in physical units (i.e., dots or pixels) from the left hand side of the canvas. yscalar The distance in physical units (i.e., dots or pixels) from the bottom side of the canvas. im(N, M, 4) array-like of np.uint8 An array of RGBA pixels. transformmatplotlib.transforms.Affine2DBase If and only if the concrete backend is written such that option_scale_image() returns True, an affine transformation (i.e., an Affine2DBase) may be passed to draw_image(). The translation vector of the transformation is given in physical units (i.e., dots or pixels). Note that the transformation does not override x and y, and has to be applied before translating the result by x and y (this can be accomplished by adding x and y to the translation vector defined by transform). draw_path(gc, path, transform, rgbFace=None)[source] Draw a Path instance using the given affine transform. draw_text(gc, x, y, s, prop, angle, ismath=False, mtext=None)[source] Draw the text instance. Parameters gcGraphicsContextBase The graphics context. xfloat The x location of the text in display coords. yfloat The y location of the text baseline in display coords. sstr The text string. propmatplotlib.font_manager.FontProperties The font properties. anglefloat The rotation angle in degrees anti-clockwise. mtextmatplotlib.text.Text The original text object to be rendered. 
Notes Note for backend implementers: When you are trying to determine if you have gotten your bounding box right (which is what enables the text layout/alignment to work properly), it helps to change the line in text.py: if 0: bbox_artist(self, renderer) to if 1, and then the actual bounding box will be plotted along with your text. flipy()[source] Return whether y values increase from top to bottom. Note that this only affects drawing of texts and images. get_canvas_width_height()[source] Return the canvas width and height in display coords. get_text_width_height_descent(s, prop, ismath)[source] Get the width, height, and descent (offset from the bottom to the baseline), in display coords, of the string s with FontProperties prop. new_gc()[source] Return an instance of a GraphicsContextBase. points_to_pixels(points)[source] Convert points to display units. You need to override this function (unless your backend doesn't have a dpi, e.g., postscript or svg). Some imaging systems assume some value for pixels per inch: points to pixels = points * pixels_per_inch/72 * dpi/72 Parameters pointsfloat or array-like a float or a numpy array of float Returns Points converted to pixels
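The points-to-pixels formula quoted above is simple enough to write out directly; the function below is a standalone sketch with illustrative dpi values, not the Matplotlib implementation.

```python
def points_to_pixels(points, dpi, pixels_per_inch=72.0):
    """points to pixels = points * pixels_per_inch / 72 * dpi / 72"""
    return points * pixels_per_inch / 72.0 * dpi / 72.0

print(points_to_pixels(12.0, dpi=72.0))   # 12.0 (1 point == 1 pixel at 72 dpi)
print(points_to_pixels(12.0, dpi=144.0))  # 24.0 (doubling dpi doubles pixels)
```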
doc_25324
Return the intensity of the red, green, and blue (RGB) components in the color color_number, which must be between 0 and COLORS - 1. Return a 3-tuple, containing the R,G,B values for the given color, which will be between 0 (no component) and 1000 (maximum amount of component).
doc_25325
Apply only the non-affine part of this transformation. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally equivalent to transform(values). In affine transformations, this is always a no-op. Parameters valuesarray The input values as NumPy array of length input_dims or shape (N x input_dims). Returns array The output values as NumPy array of length output_dims or shape (N x output_dims), depending on the input.
doc_25326
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed!
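A short sketch of the workflow around prune.remove, assuming torch is available; l1_unstructured is used here only as a representative pruning method:

```python
import torch
import torch.nn.utils.prune as prune

m = torch.nn.Linear(4, 2)
prune.l1_unstructured(m, name="weight", amount=0.5)
# While reparameterized, the module carries weight_orig (a parameter)
# and weight_mask (a buffer); m.weight is recomputed from them.
print(hasattr(m, "weight_orig"))  # True

prune.remove(m, "weight")
# Reparameterization removed: weight_orig and weight_mask are gone,
# but the pruned zeros remain permanently in m.weight.
print(hasattr(m, "weight_orig"))  # False
```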
doc_25327
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
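For instance, the <component>__<parameter> syntax lets a single set_params call reach into a named Pipeline step (sketch assuming scikit-learn; the step names are arbitrary):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
# The double-underscore form targets a parameter inside a named step.
pipe.set_params(clf__C=10.0, scale__with_mean=False)
print(pipe.get_params()["clf__C"])  # 10.0
```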
doc_25328
Make sure nframes is correct, and close the file if it was opened by wave. This method is called upon object collection. It will raise an exception if the output stream is not seekable and nframes does not match the number of frames actually written.
doc_25329
Return a tuple containing names of free variables in this function.
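For example, a closure's free variables can be inspected through its code object's co_freevars attribute:

```python
def make_adder(n):
    def add(x):
        return x + n  # n is free in add: bound in the enclosing scope
    return add

add5 = make_adder(5)
print(add5.__code__.co_freevars)  # ('n',)
print(add5(3))                    # 8
```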
doc_25330
Return DataFrame with duplicate rows removed. Considering certain columns is optional. Indexes, including time indexes are ignored. Parameters subset:column label or sequence of labels, optional Only consider certain columns for identifying duplicates, by default use all of the columns. keep:{‘first’, ‘last’, False}, default ‘first’ Determines which duplicates (if any) to keep. - first : Drop duplicates except for the first occurrence. - last : Drop duplicates except for the last occurrence. - False : Drop all duplicates. inplace:bool, default False Whether to drop duplicates in place or to return a copy. ignore_index:bool, default False If True, the resulting axis will be labeled 0, 1, …, n - 1. New in version 1.0.0. Returns DataFrame or None DataFrame with duplicates removed or None if inplace=True. See also DataFrame.value_counts Count unique combinations of columns. Examples Consider dataset containing ramen rating. >>> df = pd.DataFrame({ ... 'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'], ... 'style': ['cup', 'cup', 'cup', 'pack', 'pack'], ... 'rating': [4, 4, 3.5, 15, 5] ... }) >>> df brand style rating 0 Yum Yum cup 4.0 1 Yum Yum cup 4.0 2 Indomie cup 3.5 3 Indomie pack 15.0 4 Indomie pack 5.0 By default, it removes duplicate rows based on all columns. >>> df.drop_duplicates() brand style rating 0 Yum Yum cup 4.0 2 Indomie cup 3.5 3 Indomie pack 15.0 4 Indomie pack 5.0 To remove duplicates on specific column(s), use subset. >>> df.drop_duplicates(subset=['brand']) brand style rating 0 Yum Yum cup 4.0 2 Indomie cup 3.5 To remove duplicates and keep last occurrences, use keep. >>> df.drop_duplicates(subset=['brand', 'style'], keep='last') brand style rating 1 Yum Yum cup 4.0 2 Indomie cup 3.5 4 Indomie pack 5.0
doc_25331
The application registry provides the following public API. Methods that aren’t listed below are considered private and may change without notice.
doc_25332
tf.compat.v1.nn.conv2d( input, filter=None, strides=None, padding=None, use_cudnn_on_gpu=True, data_format='NHWC', dilations=[1, 1, 1, 1], name=None, filters=None ) Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following: Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels]. Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels]. For each patch, right-multiplies the filter matrix and the image patch vector. In detail, with the default NHWC format, output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1]. Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. A 4-D tensor. The dimension order is interpreted according to the value of data_format, see below for details. filter A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels] strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details. padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. 
When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. use_cudnn_on_gpu An optional bool. Defaults to True. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width]. dilations An int or list of ints that has length 1, 2 or 4, defaults to 1. The dilation factor for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. If given as a 4-d list, the dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). filters Alias for filter. Returns A Tensor. Has the same type as input.
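The summation above can be reproduced with a naive NumPy loop. This is a sketch of VALID padding in NHWC order for checking the formula, not TensorFlow's implementation:

```python
import numpy as np

def conv2d_valid_nhwc(inp, filt, stride=1):
    # output[b, i, j, k] = sum_{di, dj, q}
    #     inp[b, stride*i + di, stride*j + dj, q] * filt[di, dj, q, k]
    B, H, W, C = inp.shape
    FH, FW, _, K = filt.shape
    OH = (H - FH) // stride + 1
    OW = (W - FW) // stride + 1
    out = np.zeros((B, OH, OW, K))
    for b in range(B):
        for i in range(OH):
            for j in range(OW):
                patch = inp[b, i * stride:i * stride + FH,
                               j * stride:j * stride + FW, :]
                # Contract the (FH, FW, C) patch against the filter's
                # first three axes, leaving the K output channels.
                out[b, i, j, :] = np.tensordot(patch, filt, axes=3)
    return out
```

With a 3x3 all-ones input and a 2x2 all-ones filter, every output element is 4.0, matching a hand evaluation of the sum.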
doc_25333
Create a new composite transform that is the result of applying transform a then transform b. You will generally not call this constructor directly but write a + b instead, which will automatically choose the best kind of composite transform instance to create.
doc_25334
Total bytes consumed by the elements of the array. Notes Does not include memory consumed by non-element attributes of the array object. Examples >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480
doc_25335
Perform a merge by key distance. This is similar to a left-join except that we match on nearest key rather than equal keys. Both DataFrames must be sorted by the key. For each row in the left DataFrame: A “backward” search selects the last row in the right DataFrame whose ‘on’ key is less than or equal to the left’s key. A “forward” search selects the first row in the right DataFrame whose ‘on’ key is greater than or equal to the left’s key. A “nearest” search selects the row in the right DataFrame whose ‘on’ key is closest in absolute distance to the left’s key. The default is “backward” and is compatible in versions below 0.20.0. The direction parameter was added in version 0.20.0 and introduces “forward” and “nearest”. Optionally match on equivalent keys with ‘by’ before searching with ‘on’. Parameters left:DataFrame or named Series right:DataFrame or named Series on:label Field name to join on. Must be found in both DataFrames. The data MUST be ordered. Furthermore this must be a numeric column, such as datetimelike, integer, or float. On or left_on/right_on must be given. left_on:label Field name to join on in left DataFrame. right_on:label Field name to join on in right DataFrame. left_index:bool Use the index of the left DataFrame as the join key. right_index:bool Use the index of the right DataFrame as the join key. by:column name or list of column names Match on these columns before performing merge operation. left_by:column name Field names to match on in the left DataFrame. right_by:column name Field names to match on in the right DataFrame. suffixes:2-length sequence (tuple, list, …) Suffix to apply to overlapping column names in the left and right side, respectively. tolerance:int or Timedelta, optional, default None Select asof tolerance within this range; must be compatible with the merge index. allow_exact_matches:bool, default True If True, allow matching with the same ‘on’ value (i.e. 
less-than-or-equal-to / greater-than-or-equal-to) If False, don’t match the same ‘on’ value (i.e., strictly less-than / strictly greater-than). direction:‘backward’ (default), ‘forward’, or ‘nearest’ Whether to search for prior, subsequent, or closest matches. Returns merged:DataFrame See also merge Merge with a database-style join. merge_ordered Merge with optional filling/interpolation. Examples >>> left = pd.DataFrame({"a": [1, 5, 10], "left_val": ["a", "b", "c"]}) >>> left a left_val 0 1 a 1 5 b 2 10 c >>> right = pd.DataFrame({"a": [1, 2, 3, 6, 7], "right_val": [1, 2, 3, 6, 7]}) >>> right a right_val 0 1 1 1 2 2 2 3 3 3 6 6 4 7 7 >>> pd.merge_asof(left, right, on="a") a left_val right_val 0 1 a 1 1 5 b 3 2 10 c 7 >>> pd.merge_asof(left, right, on="a", allow_exact_matches=False) a left_val right_val 0 1 a NaN 1 5 b 3.0 2 10 c 7.0 >>> pd.merge_asof(left, right, on="a", direction="forward") a left_val right_val 0 1 a 1.0 1 5 b 6.0 2 10 c NaN >>> pd.merge_asof(left, right, on="a", direction="nearest") a left_val right_val 0 1 a 1 1 5 b 6 2 10 c 7 We can use indexed DataFrames as well. >>> left = pd.DataFrame({"left_val": ["a", "b", "c"]}, index=[1, 5, 10]) >>> left left_val 1 a 5 b 10 c >>> right = pd.DataFrame({"right_val": [1, 2, 3, 6, 7]}, index=[1, 2, 3, 6, 7]) >>> right right_val 1 1 2 2 3 3 6 6 7 7 >>> pd.merge_asof(left, right, left_index=True, right_index=True) left_val right_val 1 a 1 5 b 3 10 c 7 Here is a real-world times-series example >>> quotes = pd.DataFrame( ... { ... "time": [ ... pd.Timestamp("2016-05-25 13:30:00.023"), ... pd.Timestamp("2016-05-25 13:30:00.023"), ... pd.Timestamp("2016-05-25 13:30:00.030"), ... pd.Timestamp("2016-05-25 13:30:00.041"), ... pd.Timestamp("2016-05-25 13:30:00.048"), ... pd.Timestamp("2016-05-25 13:30:00.049"), ... pd.Timestamp("2016-05-25 13:30:00.072"), ... pd.Timestamp("2016-05-25 13:30:00.075") ... ], ... "ticker": [ ... "GOOG", ... "MSFT", ... "MSFT", ... "MSFT", ... "GOOG", ... "AAPL", ... "GOOG", ... 
"MSFT" ... ], ... "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01], ... "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03] ... } ... ) >>> quotes time ticker bid ask 0 2016-05-25 13:30:00.023 GOOG 720.50 720.93 1 2016-05-25 13:30:00.023 MSFT 51.95 51.96 2 2016-05-25 13:30:00.030 MSFT 51.97 51.98 3 2016-05-25 13:30:00.041 MSFT 51.99 52.00 4 2016-05-25 13:30:00.048 GOOG 720.50 720.93 5 2016-05-25 13:30:00.049 AAPL 97.99 98.01 6 2016-05-25 13:30:00.072 GOOG 720.50 720.88 7 2016-05-25 13:30:00.075 MSFT 52.01 52.03 >>> trades = pd.DataFrame( ... { ... "time": [ ... pd.Timestamp("2016-05-25 13:30:00.023"), ... pd.Timestamp("2016-05-25 13:30:00.038"), ... pd.Timestamp("2016-05-25 13:30:00.048"), ... pd.Timestamp("2016-05-25 13:30:00.048"), ... pd.Timestamp("2016-05-25 13:30:00.048") ... ], ... "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"], ... "price": [51.95, 51.95, 720.77, 720.92, 98.0], ... "quantity": [75, 155, 100, 100, 100] ... } ... ) >>> trades time ticker price quantity 0 2016-05-25 13:30:00.023 MSFT 51.95 75 1 2016-05-25 13:30:00.038 MSFT 51.95 155 2 2016-05-25 13:30:00.048 GOOG 720.77 100 3 2016-05-25 13:30:00.048 GOOG 720.92 100 4 2016-05-25 13:30:00.048 AAPL 98.00 100 By default we are taking the asof of the quotes >>> pd.merge_asof(trades, quotes, on="time", by="ticker") time ticker price quantity bid ask 0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96 1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98 2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93 3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN We only asof within 2ms between the quote time and the trade time >>> pd.merge_asof( ... trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms") ... 
) time ticker price quantity bid ask 0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96 1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN 2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93 3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN We only asof within 10ms between the quote time and the trade time and we exclude exact matches on time. However prior data will propagate forward >>> pd.merge_asof( ... trades, ... quotes, ... on="time", ... by="ticker", ... tolerance=pd.Timedelta("10ms"), ... allow_exact_matches=False ... ) time ticker price quantity bid ask 0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN 1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98 2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN 3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
doc_25336
See Migration guide for more details. tf.compat.v1.io.match_filenames_once, tf.compat.v1.train.match_filenames_once tf.io.match_filenames_once( pattern, name=None ) Note: The order of the files returned is deterministic. Args pattern A file pattern (glob), or 1D tensor of file patterns. name A name for the operations (optional). Returns A variable that is initialized to the list of files matching the pattern(s).
doc_25337
Applies the element-wise function: \text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x)) SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive. For numerical stability the implementation reverts to the linear function when input \times \beta > threshold. Parameters beta – the \beta value for the Softplus formulation. Default: 1 threshold – values above this revert to a linear function. Default: 20 Shape: Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.Softplus() >>> input = torch.randn(2) >>> output = m(input)
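The formula, including the linear fallback for numerical stability, can be sketched in NumPy. This illustrates the math only, not PyTorch's implementation:

```python
import numpy as np

def softplus(x, beta=1.0, threshold=20.0):
    # Softplus(x) = (1/beta) * log(1 + exp(beta * x)),
    # reverting to the identity when beta * x > threshold.
    x = np.asarray(x, dtype=float)
    scaled = beta * x
    # Clamp the exponent so the smooth branch never overflows.
    smooth = np.log1p(np.exp(np.minimum(scaled, threshold))) / beta
    return np.where(scaled > threshold, x, smooth)
```

At x = 0 this gives log(2)/beta, and for large inputs it is exactly linear.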
doc_25338
tf.compat.v1.metrics.precision_at_k( labels, predictions, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None ) If class_id is specified, we calculate precision by considering only the entries in the batch for which class_id is in the top-k highest predictions, and computing the fraction of them for which class_id is indeed a correct label. If class_id is not specified, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry. precision_at_k creates two local variables, true_positive_at_<k> and false_positive_at_<k>, that are used to compute the precision@k frequency. This frequency is ultimately returned as precision_at_<k>: an idempotent operation that simply divides true_positive_at_<k> by total (true_positive_at_<k> + false_positive_at_<k>). For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the precision_at_<k>. Internally, a top_k operation computes a Tensor indicating the top k predictions. Set operations applied to top_k and labels calculate the true positives and false positives weighted by weights. Then update_op increments true_positive_at_<k> and false_positive_at_<k> using these values. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels int64 Tensor or SparseTensor with shape [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and labels has shape [batch_size, num_labels]. [D1, ... DN] must match predictions. Values should be in range [0, num_classes), where num_classes is the last dimension of predictions. Values outside this range are ignored. predictions Float Tensor with shape [D1, ... DN, num_classes] where N >= 1. 
Commonly, N=1 and predictions has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1, ... DN] must match labels. k Integer, k for @k metric. class_id Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of predictions. If class_id is outside this range, the method returns NAN. weights Tensor whose rank is either 0, or n-1, where n is the rank of labels. If the latter, it must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that values should be added to. updates_collections An optional list of collections that updates should be added to. name Name of new update operation, and namespace for other dependent ops. Returns precision Scalar float64 Tensor with the value of true_positives divided by the sum of true_positives and false_positives. update_op Operation that increments true_positives and false_positives variables appropriately, and whose value matches precision. Raises ValueError If weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
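Ignoring weights, class_id, and the streaming update_op, the quantity being estimated can be sketched in NumPy (an illustration only, not the TensorFlow implementation):

```python
import numpy as np

def precision_at_k(labels, predictions, k):
    # Fraction of the top-k predicted classes per example that appear
    # among the true labels, averaged over the batch.
    preds = np.asarray(predictions)
    topk = np.argsort(-preds, axis=1)[:, :k]
    hits = sum(np.isin(topk[i], labels[i]).sum() for i in range(len(preds)))
    return hits / (k * len(preds))


# One example, true label 1, top-2 predictions are classes 1 and 2:
print(precision_at_k([[1]], [[0.1, 0.9, 0.5]], 2))  # 0.5
```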
doc_25339
Example: fav_color = request.session.pop('fav_color', 'blue')
doc_25340
Return the distance between x and the nearest adjacent number. Parameters xarray_like Values to find the spacing of. outndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. wherearray_like, optional This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs For other keyword-only arguments, see the ufunc docs. Returns outndarray or scalar The spacing of values of x. This is a scalar if x is a scalar. Notes It can be considered as a generalization of EPS: spacing(np.float64(1)) == np.finfo(np.float64).eps, and there should not be any representable number between x + spacing(x) and x for any finite x. Spacing of +- inf and NaN is NaN. Examples >>> np.spacing(1) == np.finfo(np.float64).eps True
doc_25341
The degree of the series. New in version 1.5.0. Returns degreeint Degree of the series, one less than the number of coefficients.
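For example, with numpy's polynomial package:

```python
from numpy.polynomial import Polynomial

p = Polynomial([1, 2, 3])  # 1 + 2*x + 3*x**2: three coefficients
print(p.degree())          # 2
```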
doc_25342
Write the bytestring str to file descriptor fd at position offset, leaving the file offset unchanged. Return the number of bytes actually written. Availability: Unix. New in version 3.3.
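A short demonstration on a temporary file (Unix only), showing that pwrite does not move the file offset:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello world")        # file offset is now 11
    n = os.pwrite(fd, b"HELLO", 0)      # overwrite at offset 0 ...
    pos = os.lseek(fd, 0, os.SEEK_CUR)  # ... without moving the offset
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 11)
finally:
    os.close(fd)
    os.unlink(path)

print(n, pos, data)  # 5 11 b'HELLO world'
```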
doc_25343
Alias for get_facecolor.
doc_25344
See Migration guide for more details. tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight tf.keras.layers.experimental.preprocessing.RandomHeight( factor, interpolation='bilinear', seed=None, name=None, **kwargs ) Adjusts the height of a batch of images by a random factor. The input should be a 4-D tensor in the "channels_last" image data format. By default, this layer is inactive during inference. Arguments factor A positive float (fraction of original height), or a tuple of size 2 representing lower and upper bound for resizing vertically. When represented as a single float, this value is used for both the upper and lower bound. For instance, factor=(0.2, 0.3) results in an output with height changed by a random amount in the range [20%, 30%]. factor=(-0.2, 0.3) results in an output with height changed by a random amount in the range [-20%, +30%]. factor=0.2 results in an output with height changed by a random amount in the range [-20%, +20%]. interpolation String, the interpolation method. Defaults to bilinear. Supports bilinear, nearest, bicubic, area, lanczos3, lanczos5, gaussian, mitchellcubic. seed Integer. Used to create a random seed. name A string, the name of the layer. Input shape: 4D tensor with shape: (samples, height, width, channels) (data_format='channels_last'). Output shape: 4D tensor with shape: (samples, random_height, width, channels). Methods adapt View source adapt( data, reset_state=True ) Fits the state of the preprocessing layer to the data being passed. Arguments data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array. reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt, or whether to start from the existing state.
This argument may not be relevant to all preprocessing layers: a subclass of PreprocessingLayer may choose to throw if 'reset_state' is set to False.
doc_25345
Return Timedelta Array/Index as object ndarray of datetime.timedelta objects. Returns timedeltas:ndarray[object]
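For example (assuming pandas is available):

```python
import datetime
import pandas as pd

tdi = pd.to_timedelta(["1 days", "2 hours"])
out = tdi.to_pytimedelta()
# Elements are plain stdlib datetime.timedelta objects, not pandas Timedeltas.
print(type(out[0]))  # <class 'datetime.timedelta'>
```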
doc_25346
Compute the kernel density estimate at points X with the given kernel, using the distance metric specified at tree creation. Parameters Xarray-like of shape (n_samples, n_features) An array of points to query. Last dimension should match dimension of training data. hfloat the bandwidth of the kernel kernelstr, default=”gaussian” specify the kernel to use. Options are - ‘gaussian’ - ‘tophat’ - ‘epanechnikov’ - ‘exponential’ - ‘linear’ - ‘cosine’ Default is kernel = ‘gaussian’ atol, rtolfloat, default=0, 1e-8 Specify the desired relative and absolute tolerance of the result. If the true result is K_true, then the returned result K_ret satisfies abs(K_true - K_ret) < atol + rtol * K_ret The default is zero (i.e. machine precision) for both. breadth_firstbool, default=False If True, use a breadth-first search. If False (default) use a depth-first search. Breadth-first is generally faster for compact kernels and/or high tolerances. return_logbool, default=False Return the logarithm of the result. This can be more accurate than returning the result itself for narrow kernels. Returns densityndarray of shape X.shape[:-1] The array of (log)-density evaluations
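Stripped of the tree acceleration and tolerance machinery, a brute-force Gaussian evaluation looks like this (a sketch of the idea; sklearn's exact normalization conventions may differ):

```python
import numpy as np

def gaussian_kernel_density(X, data, h):
    # Sum of Gaussian kernels centered on the training points,
    # evaluated at each query point (brute force, unnormalized).
    X, data = np.atleast_2d(X), np.atleast_2d(data)
    d2 = ((X[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h * h)).sum(axis=1)
```

A query point sitting exactly on a training point contributes exp(0) = 1 per coincident point, and contributions decay with squared distance.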
doc_25347
Fit model to data. Parameters Xarray-like of shape (n_samples, n_features) Training vectors, where n_samples is the number of samples and n_features is the number of predictors. Yarray-like of shape (n_samples,) or (n_samples, n_targets) Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
doc_25348
Stack arrays in sequence horizontally (column wise). This is equivalent to concatenation along the second axis, except for 1-D arrays where it concatenates along the first axis. Rebuilds arrays divided by hsplit. This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions concatenate, stack and block provide more general stacking and concatenation operations. Parameters tupsequence of ndarrays The arrays must have the same shape along all but the second axis, except 1-D arrays which can be any length. Returns stackedndarray The array formed by stacking the given arrays. See also concatenate Join a sequence of arrays along an existing axis. stack Join a sequence of arrays along a new axis. block Assemble an nd-array from nested lists of blocks. vstack Stack arrays in sequence vertically (row wise). dstack Stack arrays in sequence depth wise (along third axis). column_stack Stack 1-D arrays as columns into a 2-D array. hsplit Split an array into multiple sub-arrays horizontally (column-wise). Notes The function is applied to both the _data and the _mask, if any. Examples >>> a = np.array((1,2,3)) >>> b = np.array((4,5,6)) >>> np.hstack((a,b)) array([1, 2, 3, 4, 5, 6]) >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[4],[5],[6]]) >>> np.hstack((a,b)) array([[1, 4], [2, 5], [3, 6]])
doc_25349
The subpath within the ZIP file where modules are searched. This is the empty string for zipimporter objects which point to the root of the ZIP file.
doc_25350
class sklearn.preprocessing.OrdinalEncoder(*, categories='auto', dtype=<class 'numpy.float64'>, handle_unknown='error', unknown_value=None) [source] Encode categorical features as an integer array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The features are converted to ordinal integers. This results in a single column of integers (0 to n_categories - 1) per feature. Read more in the User Guide. New in version 0.20. Parameters categories‘auto’ or a list of array-like, default=’auto’ Categories (unique values) per feature: ‘auto’ : Determine categories automatically from the training data. list : categories[i] holds the categories expected in the ith column. The passed categories should not mix strings and numeric values, and should be sorted in case of numeric values. The used categories can be found in the categories_ attribute. dtypenumber type, default np.float64 Desired dtype of output. handle_unknown{‘error’, ‘use_encoded_value’}, default=’error’ When set to ‘error’ an error will be raised in case an unknown categorical feature is present during transform. When set to ‘use_encoded_value’, the encoded value of unknown categories will be set to the value given for the parameter unknown_value. In inverse_transform, an unknown category will be denoted as None. New in version 0.24. unknown_valueint or np.nan, default=None When the parameter handle_unknown is set to ‘use_encoded_value’, this parameter is required and will set the encoded value of unknown categories. It has to be distinct from the values used to encode any of the categories in fit. If set to np.nan, the dtype parameter must be a float dtype. New in version 0.24. Attributes categories_list of arrays The categories of each feature determined during fit (in order of the features in X and corresponding with the output of transform). This does not include categories that weren’t seen during fit. 
See also OneHotEncoder Performs a one-hot encoding of categorical features. LabelEncoder Encodes target labels with values between 0 and n_classes-1. Examples Given a dataset with two features, we let the encoder find the unique values per feature and transform the data to an ordinal encoding. >>> from sklearn.preprocessing import OrdinalEncoder >>> enc = OrdinalEncoder() >>> X = [['Male', 1], ['Female', 3], ['Female', 2]] >>> enc.fit(X) OrdinalEncoder() >>> enc.categories_ [array(['Female', 'Male'], dtype=object), array([1, 2, 3], dtype=object)] >>> enc.transform([['Female', 3], ['Male', 1]]) array([[0., 2.], [1., 0.]]) >>> enc.inverse_transform([[1, 0], [0, 1]]) array([['Male', 1], ['Female', 2]], dtype=object) Methods fit(X[, y]) Fit the OrdinalEncoder to X. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. inverse_transform(X) Convert the data back to the original representation. set_params(**params) Set the parameters of this estimator. transform(X) Transform X to ordinal codes. fit(X, y=None) [source] Fit the OrdinalEncoder to X. Parameters Xarray-like, shape [n_samples, n_features] The data to determine the categories of each feature. yNone Ignored. This parameter exists only for compatibility with Pipeline. Returns self fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. 
Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. inverse_transform(X) [source] Convert the data back to the original representation. Parameters Xarray-like or sparse matrix, shape [n_samples, n_encoded_features] The transformed data. Returns X_trarray-like, shape [n_samples, n_features] Inverse transformed array. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Transform X to ordinal codes. Parameters Xarray-like, shape [n_samples, n_features] The data to encode. Returns X_outsparse matrix or a 2-d array Transformed input. Examples using sklearn.preprocessing.OrdinalEncoder Categorical Feature Support in Gradient Boosting Combine predictors using stacking Poisson regression and non-normal loss
doc_25351
See Migration guide for more details. tf.compat.v1.raw_ops.MutableHashTable tf.raw_ops.MutableHashTable( key_dtype, value_dtype, container='', shared_name='', use_node_name_sharing=False, name=None ) This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation. Args key_dtype A tf.DType. Type of the table keys. value_dtype A tf.DType. Type of the table values. container An optional string. Defaults to "". If non-empty, this table is placed in the given container. Otherwise, a default container is used. shared_name An optional string. Defaults to "". If non-empty, this table is shared under the given name across multiple sessions. use_node_name_sharing An optional bool. Defaults to False. If true and shared_name is empty, the table is shared using the node name. name A name for the operation (optional). Returns A Tensor of type mutable string.
doc_25352
Serialize objects to ASCII-encoded JSON. If this is disabled, the JSON returned from jsonify will contain Unicode characters. This has security implications when rendering the JSON into JavaScript in templates, and should typically remain enabled. Default: True
doc_25353
tf.argmax tf.math.argmax( input, axis=None, output_type=tf.dtypes.int64, name=None ) In case of ties, returns the smallest index. For example: A = tf.constant([2, 20, 30, 3, 6]) tf.math.argmax(A) # A[2] is maximum in tensor A <tf.Tensor: shape=(), dtype=int64, numpy=2> B = tf.constant([[2, 20, 30, 3, 6], [3, 11, 16, 1, 8], [14, 45, 23, 5, 27]]) tf.math.argmax(B, 0) <tf.Tensor: shape=(5,), dtype=int64, numpy=array([2, 2, 0, 2, 2])> tf.math.argmax(B, 1) <tf.Tensor: shape=(3,), dtype=int64, numpy=array([2, 2, 1])> C = tf.constant([0, 0, 0, 0]) tf.math.argmax(C) # Returns smallest index in case of ties <tf.Tensor: shape=(), dtype=int64, numpy=0> Args input A Tensor. axis An integer, the axis to reduce across. Default to 0. output_type An optional output dtype (tf.int32 or tf.int64). Defaults to tf.int64. name An optional name for the operation. Returns A Tensor of type output_type.
doc_25354
Convert a mapping object or a sequence of two-element tuples, which may contain str or bytes objects, to a percent-encoded ASCII text string. If the resultant string is to be used as a data for POST operation with the urlopen() function, then it should be encoded to bytes, otherwise it would result in a TypeError. The resulting string is a series of key=value pairs separated by '&' characters, where both key and value are quoted using the quote_via function. By default, quote_plus() is used to quote the values, which means spaces are quoted as a '+' character and ‘/’ characters are encoded as %2F, which follows the standard for GET requests (application/x-www-form-urlencoded). An alternate function that can be passed as quote_via is quote(), which will encode spaces as %20 and not encode ‘/’ characters. For maximum control of what is quoted, use quote and specify a value for safe. When a sequence of two-element tuples is used as the query argument, the first element of each tuple is a key and the second is a value. The value element in itself can be a sequence and in that case, if the optional parameter doseq evaluates to True, individual key=value pairs separated by '&' are generated for each element of the value sequence for the key. The order of parameters in the encoded string will match the order of parameter tuples in the sequence. The safe, encoding, and errors parameters are passed down to quote_via (the encoding and errors parameters are only passed when a query element is a str). To reverse this encoding process, parse_qs() and parse_qsl() are provided in this module to parse query strings into Python data structures. Refer to urllib examples to find out how the urllib.parse.urlencode() method can be used for generating the query string of a URL or data for a POST request. Changed in version 3.2: query supports bytes and string objects. New in version 3.5: quote_via parameter.
doc_25355
Alias for get_linewidth.
doc_25356
Control what happens when the cursor of a window is moved off the edge of the window or scrolling region, either as a result of a newline action on the bottom line, or typing the last character of the last line. If flag is False, the cursor is left on the bottom line. If flag is True, the window is scrolled up one line. Note that in order to get the physical scrolling effect on the terminal, it is also necessary to call idlok().
doc_25357
Return the current Thread object, corresponding to the caller’s thread of control. If the caller’s thread of control was not created through the threading module, a dummy thread object with limited functionality is returned.
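A short demonstration of the behavior described above:

```python
import threading

def report():
    # current_thread() returns the Thread object for the calling thread.
    return threading.current_thread().name

print(report())  # 'MainThread' when called from the main thread

# Called from a thread created through the threading module, the
# returned object is that thread itself.
worker = threading.Thread(target=lambda: print(report()), name="worker-1")
worker.start()
worker.join()    # prints 'worker-1'
```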
doc_25358
Upsamples the input to either the given size or the given scale_factor. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). It is equivalent to nn.functional.interpolate(...). Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. The algorithm used for upsampling is determined by mode. Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for upsampling are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only) Parameters input (Tensor) – the input tensor size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size. scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple. mode (string) – algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' | 'bicubic' | 'trilinear'. Default: 'nearest' align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: False Note With mode='bicubic', it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. 
Explicitly call result.clamp(min=0, max=255) if you want to reduce the overshoot when displaying the image. Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs.
doc_25359
This attribute is a lazily computed tuple (possibly empty) of unique type variables found in __args__: >>> from typing import TypeVar >>> T = TypeVar('T') >>> list[T].__parameters__ (~T,)
doc_25360
Returns True if the scrap module is currently initialized. get_init() -> bool Gets the scrap module's initialization state. Returns: True if the pygame.scrap module is currently initialized, False otherwise Return type: bool New in pygame 1.9.5.
doc_25361
See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedMul tf.raw_ops.QuantizedMul( x, y, min_x, max_x, min_y, max_y, Toutput=tf.dtypes.qint32, name=None ) Args x A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. y A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. min_x A Tensor of type float32. The float value that the lowest quantized x value represents. max_x A Tensor of type float32. The float value that the highest quantized x value represents. min_y A Tensor of type float32. The float value that the lowest quantized y value represents. max_y A Tensor of type float32. The float value that the highest quantized y value represents. Toutput An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.qint32. name A name for the operation (optional). Returns A tuple of Tensor objects (z, min_z, max_z). z A Tensor of type Toutput. min_z A Tensor of type float32. max_z A Tensor of type float32.
doc_25362
Set the locator of the minor ticker. Parameters locatorLocator
doc_25363
Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index. Parameters multi_indextuple of array_like A tuple of integer arrays, one array for each dimension. dimstuple of ints The shape of array into which the indices from multi_index apply. mode{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices are handled. Can specify either one mode or a tuple of modes, one mode per index. ‘raise’ – raise an error (default) ‘wrap’ – wrap around ‘clip’ – clip to the range In ‘clip’ mode, a negative index which would normally wrap will clip to 0 instead. order{‘C’, ‘F’}, optional Determines whether the multi-index should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order. Returns raveled_indicesndarray An array of indices into the flattened version of an array of dimensions dims. See also unravel_index Notes New in version 1.6.0. Examples >>> arr = np.array([[3,6,6],[4,5,1]]) >>> np.ravel_multi_index(arr, (7,6)) array([22, 41, 37]) >>> np.ravel_multi_index(arr, (7,6), order='F') array([31, 41, 13]) >>> np.ravel_multi_index(arr, (4,6), mode='clip') array([22, 23, 19]) >>> np.ravel_multi_index(arr, (4,4), mode=('clip','wrap')) array([12, 13, 13]) >>> np.ravel_multi_index((3,1,4,1), (6,7,8,9)) 1621
doc_25364
Same as buffer(), but allows customizing the style of the buffer. end_cap_style can be round (1), flat (2), or square (3). join_style can be round (1), mitre (2), or bevel (3). Mitre ratio limit (mitre_limit) only affects mitered join style.
doc_25365
See Migration guide for more details. tf.compat.v1.raw_ops.LeakyReluGrad tf.raw_ops.LeakyReluGrad( gradients, features, alpha=0.2, name=None ) Args gradients A Tensor. Must be one of the following types: half, bfloat16, float32, float64. The backpropagated gradients to the corresponding LeakyRelu operation. features A Tensor. Must have the same type as gradients. The features passed as input to the corresponding LeakyRelu operation, OR the outputs of that operation (both work equivalently). alpha An optional float. Defaults to 0.2. name A name for the operation (optional). Returns A Tensor. Has the same type as gradients.
doc_25366
Remove conflicts caused by gathering implied feature flags. Parameters ‘flags’ list, compiler flags; should be sorted from the lowest to the highest interest. Returns list, filtered from any conflicts. Examples >>> self.cc_normalize_flags(['-march=armv8.2-a+fp16', '-march=armv8.2-a+dotprod']) ['armv8.2-a+fp16+dotprod'] >>> self.cc_normalize_flags( ['-msse', '-msse2', '-msse3', '-mssse3', '-msse4.1', '-msse4.2', '-mavx', '-march=core-avx2'] ) ['-march=core-avx2']
doc_25367
See Migration guide for more details. tf.compat.v1.raw_ops.DrawBoundingBoxes tf.raw_ops.DrawBoundingBoxes( images, boxes, name=None ) Outputs a copy of images but draws on top of the pixels zero or more bounding boxes specified by the locations in boxes. The coordinates of each bounding box in boxes are encoded as [y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in [0.0, 1.0] relative to the width and height of the underlying image. For example, if an image is 100 x 200 pixels (height x width) and the bounding box is [0.1, 0.2, 0.5, 0.9], the upper-left and bottom-right coordinates of the bounding box will be (40, 10) to (180, 50) (in (x,y) coordinates). Parts of the bounding box may fall outside the image. Args images A Tensor. Must be one of the following types: float32, half. 4-D with shape [batch, height, width, depth]. A batch of images. boxes A Tensor of type float32. 3-D with shape [batch, num_bounding_boxes, 4] containing bounding boxes. name A name for the operation (optional). Returns A Tensor. Has the same type as images.
doc_25368
Delete name from sys.modules.
doc_25369
The number of days in the month.
doc_25370
Return a dictionary containing traceback information. This is the main extension point for customizing exception reports, for example: from django.views.debug import ExceptionReporter class CustomExceptionReporter(ExceptionReporter): def get_traceback_data(self): data = super().get_traceback_data() # ... remove/add something here ... return data
doc_25371
# args & kwargs are optional, for models which take positional/keyword arguments. ... How to implement an entrypoint? Here is a code snippet that specifies an entrypoint for the resnet18 model if we expand the implementation in pytorch/vision/hubconf.py. In most cases, importing the right function in hubconf.py is sufficient. Here we just use the expanded version as an example to show how it works. You can see the full script in the pytorch/vision repo dependencies = ['torch'] from torchvision.models.resnet import resnet18 as _resnet18 # resnet18 is the name of entrypoint def resnet18(pretrained=False, **kwargs): """ # This docstring shows up in hub.help() Resnet18 model pretrained (bool): kwargs, load pretrained weights into the model """ # Call the model, load pretrained weights model = _resnet18(pretrained=pretrained, **kwargs) return model The dependencies variable is a list of package names required to load the model. Note this might be slightly different from the dependencies required for training a model. args and kwargs are passed along to the real callable function. The docstring of the function works as a help message. It explains what the model does and what the allowed positional/keyword arguments are. It’s highly recommended to add a few examples here. An entrypoint function can either return a model (nn.Module), or auxiliary tools to make the user workflow smoother, e.g. tokenizers. Callables prefixed with an underscore are considered helper functions which won’t show up in torch.hub.list(). Pretrained weights can either be stored locally in the github repo, or be loadable by torch.hub.load_state_dict_from_url(). If less than 2GB, it’s recommended to attach them to a project release and use the url from the release. In the example above torchvision.models.resnet.resnet18 handles pretrained; alternatively you can put the following logic in the entrypoint definition. if pretrained: # For checkpoint saved in local github repo, e.g. 
<RELATIVE_PATH_TO_CHECKPOINT>=weights/save.pth dirname = os.path.dirname(__file__) checkpoint = os.path.join(dirname, <RELATIVE_PATH_TO_CHECKPOINT>) state_dict = torch.load(checkpoint) model.load_state_dict(state_dict) # For checkpoint saved elsewhere checkpoint = 'https://download.pytorch.org/models/resnet18-5c106cde.pth' model.load_state_dict(torch.hub.load_state_dict_from_url(checkpoint, progress=False)) Important Notice The published models should be at least in a branch/tag. It can’t be a random commit. Loading models from Hub Pytorch Hub provides convenient APIs to explore all available models in hub through torch.hub.list(), show docstring and examples through torch.hub.help() and load the pre-trained models using torch.hub.load(). torch.hub.list(github, force_reload=False) [source] List all entrypoints available in github hubconf. Parameters github (string) – a string with format “repo_owner/repo_name[:tag_name]” with an optional tag/branch. The default branch is master if not specified. Example: ‘pytorch/vision[:hub]’ force_reload (bool, optional) – whether to discard the existing cache and force a fresh download. Default is False. Returns a list of available entrypoint names Return type entrypoints Example >>> entrypoints = torch.hub.list('pytorch/vision', force_reload=True) torch.hub.help(github, model, force_reload=False) [source] Show the docstring of entrypoint model. Parameters github (string) – a string with format <repo_owner/repo_name[:tag_name]> with an optional tag/branch. The default branch is master if not specified. Example: ‘pytorch/vision[:hub]’ model (string) – a string of entrypoint name defined in repo’s hubconf.py force_reload (bool, optional) – whether to discard the existing cache and force a fresh download. Default is False. Example >>> print(torch.hub.help('pytorch/vision', 'resnet18', force_reload=True)) torch.hub.load(repo_or_dir, model, *args, **kwargs) [source] Load a model from a github repo or a local directory. 
Note: Loading a model is the typical use case, but this can also be used for loading other objects such as tokenizers, loss functions, etc. If source is 'github', repo_or_dir is expected to be of the form repo_owner/repo_name[:tag_name] with an optional tag/branch. If source is 'local', repo_or_dir is expected to be a path to a local directory. Parameters repo_or_dir (string) – repo name (repo_owner/repo_name[:tag_name]), if source = 'github'; or a path to a local directory, if source = 'local'. model (string) – the name of a callable (entrypoint) defined in the repo/dir’s hubconf.py. *args (optional) – the corresponding args for callable model. source (string, optional) – 'github' | 'local'. Specifies how repo_or_dir is to be interpreted. Default is 'github'. force_reload (bool, optional) – whether to force a fresh download of the github repo unconditionally. Does not have any effect if source = 'local'. Default is False. verbose (bool, optional) – If False, mute messages about hitting local caches. Note that the message about first download cannot be muted. Does not have any effect if source = 'local'. Default is True. **kwargs (optional) – the corresponding kwargs for callable model. Returns The output of the model callable when called with the given *args and **kwargs. Example >>> # from a github repo >>> repo = 'pytorch/vision' >>> model = torch.hub.load(repo, 'resnet50', pretrained=True) >>> # from a local directory >>> path = '/some/local/path/pytorch/vision' >>> model = torch.hub.load(path, 'resnet50', pretrained=True) torch.hub.download_url_to_file(url, dst, hash_prefix=None, progress=True) [source] Download object at the given URL to a local path. Parameters url (string) – URL of the object to download dst (string) – Full path where object will be saved, e.g. /tmp/temporary_file hash_prefix (string, optional) – If not None, the SHA256 hash of the downloaded file should start with hash_prefix. 
Default: None progress (bool, optional) – whether or not to display a progress bar to stderr Default: True Example >>> torch.hub.download_url_to_file('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth', '/tmp/temporary_file') torch.hub.load_state_dict_from_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None) [source] Loads the Torch serialized object at the given URL. If downloaded file is a zip file, it will be automatically decompressed. If the object is already present in model_dir, it’s deserialized and returned. The default value of model_dir is <hub_dir>/checkpoints where hub_dir is the directory returned by get_dir(). Parameters url (string) – URL of the object to download model_dir (string, optional) – directory in which to save the object map_location (optional) – a function or a dict specifying how to remap storage locations (see torch.load) progress (bool, optional) – whether or not to display a progress bar to stderr. Default: True check_hash (bool, optional) – If True, the filename part of the URL should follow the naming convention filename-<sha256>.ext where <sha256> is the first eight or more digits of the SHA256 hash of the contents of the file. The hash is used to ensure unique names and to verify the contents of the file. Default: False file_name (string, optional) – name for the downloaded file. Filename from url will be used if not set. Example >>> state_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth') Running a loaded model: Note that *args and **kwargs in torch.hub.load() are used to instantiate a model. After you have loaded a model, how can you find out what you can do with the model? A suggested workflow is dir(model) to see all available methods of the model. 
help(model.foo) to check what arguments model.foo takes to run To help users explore without referring to documentation back and forth, we strongly recommend repo owners make function help messages clear and succinct. It’s also helpful to include a minimal working example. Where are my downloaded models saved? The locations are used in the following order: Calling hub.set_dir(<PATH_TO_HUB_DIR>) $TORCH_HOME/hub, if environment variable TORCH_HOME is set. $XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set. ~/.cache/torch/hub torch.hub.get_dir() [source] Get the Torch Hub cache directory used for storing downloaded models & weights. If set_dir() is not called, default path is $TORCH_HOME/hub where environment variable $TORCH_HOME defaults to $XDG_CACHE_HOME/torch. $XDG_CACHE_HOME follows the X Design Group specification of the Linux filesystem layout, with a default value ~/.cache if the environment variable is not set. torch.hub.set_dir(d) [source] Optionally set the Torch Hub directory used to save downloaded models & weights. Parameters d (string) – path to a local folder to save downloaded models & weights. Caching logic By default, we don’t clean up files after loading them. Hub uses the cache by default if it already exists in the directory returned by get_dir(). Users can force a reload by calling hub.load(..., force_reload=True). This will delete the existing github folder and downloaded weights, and reinitialize a fresh download. This is useful when updates are published to the same branch, so users can keep up with the latest release. Known limitations: Torch hub works by importing the package as if it was installed. There are some side effects introduced by importing in Python. For example, you can see new items in the Python caches sys.modules and sys.path_importer_cache, which is normal Python behavior. A known limitation worth mentioning here is that users CANNOT load two different branches of the same repo in the same python process. 
It’s just like installing two packages with the same name in Python, which is not good. Cache might join the party and give you surprises if you actually try that. Of course it’s totally fine to load them in separate processes.
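The cache-directory search order described above can be sketched in plain Python (resolve_hub_dir is a hypothetical helper written for illustration; the real logic lives inside torch.hub.get_dir() and set_dir()):

```python
import os

def resolve_hub_dir(explicit_dir=None, env=None):
    """Mirror the documented lookup order: set_dir() > $TORCH_HOME/hub >
    $XDG_CACHE_HOME/torch/hub > ~/.cache/torch/hub."""
    env = os.environ if env is None else env
    if explicit_dir is not None:              # hub.set_dir(<PATH>) wins
        return explicit_dir
    if "TORCH_HOME" in env:
        return os.path.join(env["TORCH_HOME"], "hub")
    cache = env.get("XDG_CACHE_HOME",
                    os.path.join(os.path.expanduser("~"), ".cache"))
    return os.path.join(cache, "torch", "hub")

print(resolve_hub_dir(env={"TORCH_HOME": "/opt/torch"}))  # '/opt/torch/hub' on POSIX
```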
doc_25372
Return the thread stack size used when creating new threads. The optional size argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive integer value of at least 32,768 (32 KiB). If size is not specified, 0 is used. If changing the thread stack size is unsupported, a RuntimeError is raised. If the specified stack size is invalid, a ValueError is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size - platform documentation should be referred to for more information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). Availability: Windows, systems with POSIX threads.
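A small sketch of the query-and-set behavior described above (the ValueError branch assumes a platform that enforces the 32 KiB minimum, such as POSIX threads):

```python
import threading

# Calling with no argument queries the current setting without changing it;
# 0 means the platform or configured default is in use.
current = threading.stack_size()
print(current)

# Sizes below 32 KiB are invalid and leave the setting unmodified.
try:
    threading.stack_size(1024)   # too small: < 32768
except ValueError:
    print("stack sizes below 32 KiB are rejected")
```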
doc_25373
Calculate all central image moments up to a certain order. The center coordinates (cr, cc) can be calculated from the raw moments as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that central moments are translation invariant but not scale and rotation invariant. Parameters imagenD double or uint8 array Rasterized shape as image. centertuple of float, optional Coordinates of the image centroid. This will be computed if it is not provided. orderint, optional The maximum order of moments computed. Returns mu(order + 1, order + 1) array Central image moments. References 1 Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009. 2 B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005. 3 T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993. 4 https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double) >>> image[13:17, 13:17] = 1 >>> M = moments(image) >>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]) >>> moments_central(image, centroid) array([[16., 0., 20., 0.], [ 0., 0., 0., 0.], [20., 0., 25., 0.], [ 0., 0., 0., 0.]])
doc_25374
Hide the entire index / column headers, or specific rows / columns from display. New in version 1.4.0. Parameters subset:label, array-like, IndexSlice, optional A valid 1d input or single key along the axis within DataFrame.loc[<subset>, :] or DataFrame.loc[:, <subset>] depending upon axis, to limit data to select hidden rows / columns. axis:{“index”, 0, “columns”, 1} Apply to the index or columns. level:int, str, list The level(s) to hide in a MultiIndex if hiding the entire index / column headers. Cannot be used simultaneously with subset. names:bool Whether to hide the level name(s) of the index / columns headers in the case it (or at least one of the levels) remains visible. Returns self:Styler Notes This method has multiple functionality depending upon the combination of the subset, level and names arguments (see examples). The axis argument is used only to control whether the method is applied to row or column headers: Argument combinations subset level names Effect None None False The axis-Index is hidden entirely. None None True Only the axis-Index names are hidden. None Int, Str, List False Specified axis-MultiIndex levels are hidden entirely. None Int, Str, List True Specified axis-MultiIndex levels are hidden entirely and the names of remaining axis-MultiIndex levels. Subset None False The specified data rows/columns are hidden, but the axis-Index itself, and names, remain unchanged. Subset None True The specified data rows/columns and axis-Index names are hidden, but the axis-Index itself remains unchanged. Subset Int, Str, List Boolean ValueError: cannot supply subset and level simultaneously. Note this method only hides the identified elements so can be chained to hide multiple elements in sequence. 
Examples Simple application hiding specific rows: >>> df = pd.DataFrame([[1,2], [3,4], [5,6]], index=["a", "b", "c"]) >>> df.style.hide(["a", "b"]) 0 1 c 5 6 Hide the index and retain the data values: >>> midx = pd.MultiIndex.from_product([["x", "y"], ["a", "b", "c"]]) >>> df = pd.DataFrame(np.random.randn(6,6), index=midx, columns=midx) >>> df.style.format("{:.1f}").hide() x y a b c a b c 0.1 0.0 0.4 1.3 0.6 -1.4 0.7 1.0 1.3 1.5 -0.0 -0.2 1.4 -0.8 1.6 -0.2 -0.4 -0.3 0.4 1.0 -0.2 -0.8 -1.2 1.1 -0.6 1.2 1.8 1.9 0.3 0.3 0.8 0.5 -0.3 1.2 2.2 -0.8 Hide specific rows in a MultiIndex but retain the index: >>> df.style.format("{:.1f}").hide(subset=(slice(None), ["a", "c"])) ... x y a b c a b c x b 0.7 1.0 1.3 1.5 -0.0 -0.2 y b -0.6 1.2 1.8 1.9 0.3 0.3 Hide specific rows and the index through chaining: >>> df.style.format("{:.1f}").hide(subset=(slice(None), ["a", "c"])).hide() ... x y a b c a b c 0.7 1.0 1.3 1.5 -0.0 -0.2 -0.6 1.2 1.8 1.9 0.3 0.3 Hide a specific level: >>> df.style.format("{:,.1f}").hide(level=1) x y a b c a b c x 0.1 0.0 0.4 1.3 0.6 -1.4 0.7 1.0 1.3 1.5 -0.0 -0.2 1.4 -0.8 1.6 -0.2 -0.4 -0.3 y 0.4 1.0 -0.2 -0.8 -1.2 1.1 -0.6 1.2 1.8 1.9 0.3 0.3 0.8 0.5 -0.3 1.2 2.2 -0.8 Hiding just the index level names: >>> df.index.names = ["lev0", "lev1"] >>> df.style.format("{:,.1f}").hide(names=True) x y a b c a b c x a 0.1 0.0 0.4 1.3 0.6 -1.4 b 0.7 1.0 1.3 1.5 -0.0 -0.2 c 1.4 -0.8 1.6 -0.2 -0.4 -0.3 y a 0.4 1.0 -0.2 -0.8 -1.2 1.1 b -0.6 1.2 1.8 1.9 0.3 0.3 c 0.8 0.5 -0.3 1.2 2.2 -0.8 Examples all produce equivalently transposed effects with axis="columns".
doc_25375
Convert 32-bit positive integers from network to host byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 4-byte swap operation.
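A quick round-trip check with the socket module shows the pairing with htonl:

```python
import socket

# htonl converts host -> network order; ntohl converts back. On a
# big-endian host both are no-ops; on a little-endian host each performs
# a 4-byte swap. The round-trip is the identity either way.
value = 0x12345678
wire = socket.htonl(value)
assert socket.ntohl(wire) == value

# Applying the conversion twice also restores the original value.
assert socket.ntohl(socket.ntohl(value)) == value
```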
doc_25376
Return whether face is colored.
doc_25377
tf.compat.v1.convert_to_tensor_or_sparse_tensor( value, dtype=None, name=None ) Args value A SparseTensor, SparseTensorValue, or an object whose type has a registered Tensor conversion function. dtype Optional element type for the returned tensor. If missing, the type is inferred from the type of value. name Optional name to use if a new Tensor is created. Returns A SparseTensor or Tensor based on value. Raises RuntimeError If result type is incompatible with dtype.
doc_25378
See Migration guide for more details. tf.compat.v1.raw_ops.MapUnstageNoKey tf.raw_ops.MapUnstageNoKey( indices, dtypes, capacity=0, memory_limit=0, container='', shared_name='', name=None ) Op removes and returns a random (key, value) from the underlying container. If the underlying container does not contain elements, the op will block until it does. Args indices A Tensor of type int32. dtypes A list of tf.DTypes that has length >= 1. capacity An optional int that is >= 0. Defaults to 0. memory_limit An optional int that is >= 0. Defaults to 0. container An optional string. Defaults to "". shared_name An optional string. Defaults to "". name A name for the operation (optional). Returns A tuple of Tensor objects (key, values). key A Tensor of type int64. values A list of Tensor objects of type dtypes.
doc_25379
Return the Transform instance used by this artist.
doc_25380
Set the linestyle of the line. Parameters ls{'-', '--', '-.', ':', '', (offset, on-off-seq), ...} Possible values: A string: linestyle description '-' or 'solid' solid line '--' or 'dashed' dashed line '-.' or 'dashdot' dash-dotted line ':' or 'dotted' dotted line 'none', 'None', ' ', or '' draw nothing Alternatively a dash tuple of the following form can be provided: (offset, onoffseq) where onoffseq is an even length tuple of on and off ink in points. See also set_dashes(). For examples see Linestyles.
doc_25381
Clear the axis. This resets axis properties to their default values: the label; the scale; locators, formatters and ticks; major and minor grid; units; registered callbacks.
doc_25382
ABC for classes that provide the __len__() method.
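Because this ABC provides a subclass hook, any class defining __len__() is recognized without explicit registration (an illustrative example):

```python
from collections.abc import Sized

class Bag:
    """Not registered with Sized, but provides __len__()."""
    def __init__(self, items):
        self._items = list(items)
    def __len__(self):
        return len(self._items)

print(isinstance(Bag([1, 2, 3]), Sized))  # True, via the ABC's subclass hook
print(issubclass(dict, Sized))            # True: dict defines __len__ too
```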
doc_25383
Sent with a preflight request to indicate which headers will be sent with the cross origin request. Set access_control_allow_headers on the response to indicate which headers are allowed.
doc_25384
Attributes memory_limit_mb repeated float memory_limit_mb priority repeated int32 priority
doc_25385
Human-readable description of the operation argument.
doc_25386
Element class. This class defines the Element interface, and provides a reference implementation of this interface. The element name, attribute names, and attribute values can be either bytestrings or Unicode strings. tag is the element name. attrib is an optional dictionary, containing element attributes. extra contains additional attributes, given as keyword arguments. tag A string identifying what kind of data this element represents (the element type, in other words). text tail These attributes can be used to hold additional data associated with the element. Their values are usually strings but may be any application-specific object. If the element is created from an XML file, the text attribute holds either the text between the element’s start tag and its first child or end tag, or None, and the tail attribute holds either the text between the element’s end tag and the next tag, or None. For the XML data <a><b>1<c>2<d/>3</c></b>4</a> the a element has None for both text and tail attributes, the b element has text "1" and tail "4", the c element has text "2" and tail None, and the d element has text None and tail "3". To collect the inner text of an element, see itertext(), for example "".join(element.itertext()). Applications may store arbitrary objects in these attributes. attrib A dictionary containing the element’s attributes. Note that while the attrib value is always a real mutable Python dictionary, an ElementTree implementation may choose to use another internal representation, and create the dictionary only if someone asks for it. To take advantage of such implementations, use the dictionary methods below whenever possible. The following dictionary-like methods work on the element attributes. clear() Resets an element. This function removes all subelements, clears all attributes, and sets the text and tail attributes to None. get(key, default=None) Gets the element attribute named key. 
Returns the attribute value, or default if the attribute was not found. items() Returns the element attributes as a sequence of (name, value) pairs. The attributes are returned in an arbitrary order. keys() Returns the elements attribute names as a list. The names are returned in an arbitrary order. set(key, value) Set the attribute key on the element to value. The following methods work on the element’s children (subelements). append(subelement) Adds the element subelement to the end of this element’s internal list of subelements. Raises TypeError if subelement is not an Element. extend(subelements) Appends subelements from a sequence object with zero or more elements. Raises TypeError if a subelement is not an Element. New in version 3.2. find(match, namespaces=None) Finds the first subelement matching match. match may be a tag name or a path. Returns an element instance or None. namespaces is an optional mapping from namespace prefix to full name. Pass '' as prefix to move all unprefixed tag names in the expression into the given namespace. findall(match, namespaces=None) Finds all matching subelements, by tag name or path. Returns a list containing all matching elements in document order. namespaces is an optional mapping from namespace prefix to full name. Pass '' as prefix to move all unprefixed tag names in the expression into the given namespace. findtext(match, default=None, namespaces=None) Finds text for the first subelement matching match. match may be a tag name or a path. Returns the text content of the first matching element, or default if no element was found. Note that if the matching element has no text content an empty string is returned. namespaces is an optional mapping from namespace prefix to full name. Pass '' as prefix to move all unprefixed tag names in the expression into the given namespace. insert(index, subelement) Inserts subelement at the given position in this element. Raises TypeError if subelement is not an Element. 
iter(tag=None)
Creates a tree iterator with the current element as the root. The iterator iterates over this element and all elements below it, in document (depth first) order. If tag is not None or '*', only elements whose tag equals tag are returned from the iterator. If the tree structure is modified during iteration, the result is undefined.

New in version 3.2.

iterfind(match, namespaces=None)
Finds all matching subelements, by tag name or path. Returns an iterable yielding all matching elements in document order. namespaces is an optional mapping from namespace prefix to full name.

New in version 3.2.

itertext()
Creates a text iterator. The iterator loops over this element and all subelements, in document order, and returns all inner text.

New in version 3.2.

makeelement(tag, attrib)
Creates a new element object of the same type as this element. Do not call this method; use the SubElement() factory function instead.

remove(subelement)
Removes subelement from the element. Unlike the find* methods this method compares elements based on instance identity, not on tag value or contents.

Element objects also support the following sequence type methods for working with subelements: __delitem__(), __getitem__(), __setitem__(), __len__().

Caution: Elements with no subelements will test as False. This behavior will change in future versions. Use an explicit len(elem) or elem is None test instead.

element = root.find('foo')

if not element:  # careful!
    print("element not found, or element has no subelements")

if element is None:
    print("element not found")

Prior to Python 3.8, the serialisation order of the XML attributes of elements was artificially made predictable by sorting the attributes by their name. Based on the now guaranteed ordering of dicts, this arbitrary reordering was removed in Python 3.8 to preserve the order in which attributes were originally parsed or created by user code.
In general, user code should try not to depend on a specific ordering of attributes, given that the XML Information Set explicitly excludes the attribute order from conveying information. Code should be prepared to deal with any ordering on input. In cases where deterministic XML output is required, e.g. for cryptographic signing or test data sets, canonical serialisation is available with the canonicalize() function.

In cases where canonical output is not applicable but a specific attribute order is still desirable on output, code should aim for creating the attributes directly in the desired order, to avoid perceptual mismatches for readers of the code. In cases where this is difficult to achieve, a recipe like the following can be applied prior to serialisation to enforce an order independently from the Element creation:

def reorder_attributes(root):
    for el in root.iter():
        attrib = el.attrib
        if len(attrib) > 1:
            # adjust attribute order, e.g. by sorting
            attribs = sorted(attrib.items())
            attrib.clear()
            attrib.update(attribs)
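A minimal sketch exercising the attribute, search, and iteration methods described above (the tag names are illustrative only; canonicalize() requires Python 3.8+):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<doc><item>a</item><item>b</item><empty/></doc>")

# Dictionary-like attribute access
root.set("version", "1")
assert root.get("version") == "1"
assert root.get("missing", "default") == "default"
assert list(root.items()) == [("version", "1")]

# Child searches: find() returns the first match (or None), findall() all of them
assert root.find("item").text == "a"
assert [e.text for e in root.findall("item")] == ["a", "b"]
assert root.findtext("absent", default="n/a") == "n/a"
assert root.findtext("empty") == ""  # matching element with no text content

# iter() walks the whole subtree in document order; itertext() joins inner text
assert [e.tag for e in root.iter()] == ["doc", "item", "item", "empty"]
assert "".join(root.itertext()) == "ab"

# canonicalize() produces deterministic output, including attribute order
out = ET.canonicalize('<r b="2" a="1"/>')
assert out.index('a="1"') < out.index('b="2"')
```

Note that find() returns None when nothing matches, so guard with an explicit elem is None test rather than truth-testing, as cautioned above.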
Set new levels on MultiIndex. Defaults to returning new index. Parameters levels:sequence or list of sequence New level(s) to apply. level:int, level name, or sequence of int/level names (default None) Level(s) to set (None for all levels). inplace:bool If True, mutates in place. Deprecated since version 1.2.0. verify_integrity:bool, default True If True, checks that levels and codes are compatible. Returns new index (of same type and class…etc) or None The same type as the caller or None if inplace=True. Examples >>> idx = pd.MultiIndex.from_tuples( ... [ ... (1, "one"), ... (1, "two"), ... (2, "one"), ... (2, "two"), ... (3, "one"), ... (3, "two") ... ], ... names=["foo", "bar"] ... ) >>> idx MultiIndex([(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two'), (3, 'one'), (3, 'two')], names=['foo', 'bar']) >>> idx.set_levels([['a', 'b', 'c'], [1, 2]]) MultiIndex([('a', 1), ('a', 2), ('b', 1), ('b', 2), ('c', 1), ('c', 2)], names=['foo', 'bar']) >>> idx.set_levels(['a', 'b', 'c'], level=0) MultiIndex([('a', 'one'), ('a', 'two'), ('b', 'one'), ('b', 'two'), ('c', 'one'), ('c', 'two')], names=['foo', 'bar']) >>> idx.set_levels(['a', 'b'], level='bar') MultiIndex([(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')], names=['foo', 'bar']) If any of the levels passed to set_levels() exceeds the existing length, all of the values from that argument will be stored in the MultiIndex levels, though the values will be truncated in the MultiIndex output. >>> idx.set_levels([['a', 'b', 'c'], [1, 2, 3, 4]], level=[0, 1]) MultiIndex([('a', 1), ('a', 2), ('b', 1), ('b', 2), ('c', 1), ('c', 2)], names=['foo', 'bar']) >>> idx.set_levels([['a', 'b', 'c'], [1, 2, 3, 4]], level=[0, 1]).levels FrozenList([['a', 'b', 'c'], [1, 2, 3, 4]])
Draw contour regions on an unstructured triangular grid.

The triangulation can be specified in one of two ways; either

tricontourf(triangulation, ...)

where triangulation is a Triangulation object, or

tricontourf(x, y, ...)
tricontourf(x, y, triangles, ...)
tricontourf(x, y, triangles=triangles, ...)
tricontourf(x, y, mask=mask, ...)
tricontourf(x, y, triangles, mask=mask, ...)

in which case a Triangulation object will be created. See that class' docstring for an explanation of these cases.

The remaining arguments may be:

tricontourf(..., Z)

where Z is the array of values to contour, one per point in the triangulation. The level values are chosen automatically.

tricontourf(..., Z, levels)

contour up to levels+1 automatically chosen contour levels (levels intervals) when levels is an int, or draw contour regions at the values specified in levels when it is a sequence, which must be in increasing order.

tricontourf(..., Z, **kwargs)

Use keyword arguments to control colors, linewidth, origin, cmap ... see below for more details.

Parameters

triangulation : Triangulation, optional
The unstructured triangular grid. If specified, then x, y, triangles, and mask are not accepted.

x, y : array-like, optional
The coordinates of the values in Z.

triangles : (ntri, 3) array-like of int, optional
For each triangle, the indices of the three points that make up the triangle, ordered in an anticlockwise manner. If not specified, the Delaunay triangulation is calculated.

mask : (ntri,) array-like of bool, optional
Which triangles are masked out.

Z : array-like
The height values over which the contour is drawn.

levels : int or array-like, optional
Determines the number and positions of the contour lines / regions. If an int n, use MaxNLocator, which tries to automatically choose no more than n+1 "nice" contour levels between vmin and vmax. If array-like, draw contour lines at the specified levels. The values must be in increasing order.
Returns

TriContourSet

Other Parameters

colors : color string or sequence of colors, optional
The colors of the levels, i.e., the contour regions. The sequence is cycled for the levels in ascending order. If the sequence is shorter than the number of levels, it's repeated. As a shortcut, single color strings may be used in place of one-element lists, i.e. 'red' instead of ['red'] to color all levels with the same color. This shortcut does only work for color strings, not for other ways of specifying colors. By default (value None), the colormap specified by cmap will be used.

alpha : float, default: 1
The alpha blending value, between 0 (transparent) and 1 (opaque).

cmap : str or Colormap, default: rcParams["image.cmap"] (default: 'viridis')
A Colormap instance or registered colormap name. The colormap maps the level values to colors. If both colors and cmap are given, an error is raised.

norm : Normalize, optional
If a colormap is used, the Normalize instance scales the level values to the canonical colormap range [0, 1] for mapping to colors. If not given, the default linear scaling is used.

vmin, vmax : float, optional
If not None, either or both of these values will be supplied to the Normalize instance, overriding the default color scaling based on levels.

origin : {None, 'upper', 'lower', 'image'}, default: None
Determines the orientation and exact position of Z by specifying the position of Z[0, 0]. This is only relevant, if X, Y are not given.

None: Z[0, 0] is at X=0, Y=0 in the lower left corner.
'lower': Z[0, 0] is at X=0.5, Y=0.5 in the lower left corner.
'upper': Z[0, 0] is at X=N+0.5, Y=0.5 in the upper left corner.
'image': Use the value from rcParams["image.origin"] (default: 'upper').

extent : (x0, x1, y0, y1), optional
If origin is not None, then extent is interpreted as in imshow: it gives the outer pixel boundaries. In this case, the position of Z[0, 0] is the center of the pixel, not a corner.
If origin is None, then (x0, y0) is the position of Z[0, 0], and (x1, y1) is the position of Z[-1, -1]. This argument is ignored if X and Y are specified in the call to contour.

locator : ticker.Locator subclass, optional
The locator is used to determine the contour levels if they are not given explicitly via levels. Defaults to MaxNLocator.

extend : {'neither', 'both', 'min', 'max'}, default: 'neither'
Determines the tricontourf-coloring of values that are outside the levels range. If 'neither', values outside the levels range are not colored. If 'min', 'max' or 'both', color the values below, above or below and above the levels range. Values below min(levels) and above max(levels) are mapped to the under/over values of the Colormap. Note that most colormaps do not have dedicated colors for these by default, so that the over and under values are the edge values of the colormap. You may want to set these values explicitly using Colormap.set_under and Colormap.set_over.

Note: An existing TriContourSet does not get notified if properties of its colormap are changed. Therefore, an explicit call to ContourSet.changed() is needed after modifying the colormap. The explicit call can be left out, if a colorbar is assigned to the TriContourSet because it internally calls ContourSet.changed().

xunits, yunits : registered units, optional
Override axis units by specifying an instance of a matplotlib.units.ConversionInterface.

antialiased : bool, optional
Enable antialiasing, overriding the defaults. For filled contours, the default is True. For line contours, it is taken from rcParams["lines.antialiased"] (default: True).

hatches : list[str], optional
A list of cross hatch patterns to use on the filled areas. If None, no hatching will be added to the contour. Hatching is supported in the PostScript, PDF, SVG and Agg backends only.
Notes tricontourf fills intervals that are closed at the top; that is, for boundaries z1 and z2, the filled region is: z1 < Z <= z2 except for the lowest interval, which is closed on both sides (i.e. it includes the lowest value).
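As a rough sketch of the most common call pattern (the point coordinates and height function below are made up for illustration), x, y, and Z are passed with an int levels and the Delaunay triangulation is computed automatically:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical scattered data with a smooth height field
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
z = np.hypot(x, y)  # one height value per point

fig, ax = plt.subplots()
# No triangles given, so the Delaunay triangulation of (x, y) is used
cs = ax.tricontourf(x, y, z, levels=8, cmap="viridis")
fig.colorbar(cs, ax=ax)
plt.close(fig)
```

The returned TriContourSet exposes the automatically chosen boundaries as cs.levels, in increasing order.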
Return sample standard deviation over requested axis. Normalized by N-1 by default. This can be changed using the ddof argument.

Parameters

axis : {index (0)}
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar.
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

Returns

scalar or Series (if level specified)

Notes

To have the same behaviour as numpy.std, use ddof=0 (instead of the default ddof=1).

Examples

>>> df = pd.DataFrame({'person_id': [0, 1, 2, 3],
...                    'age': [21, 25, 62, 43],
...                    'height': [1.61, 1.87, 1.49, 2.01]}
...                   ).set_index('person_id')
>>> df
           age  height
person_id
0           21    1.61
1           25    1.87
2           62    1.49
3           43    2.01

The standard deviation of the columns can be found as follows:

>>> df.std()
age       18.786076
height     0.237417

Alternatively, ddof=0 can be set to normalize by N instead of N-1:

>>> df.std(ddof=0)
age       16.269219
height     0.205609
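To make the ddof note concrete, a quick comparison with numpy.std, whose default is the population estimator (ddof=0):

```python
import numpy as np
import pandas as pd

ages = pd.Series([21, 25, 62, 43])

# pandas defaults to the sample estimator (divisor N - 1) ...
assert np.isclose(ages.std(), np.std(ages.to_numpy(), ddof=1))
# ... while numpy defaults to the population estimator (divisor N)
assert np.isclose(ages.std(ddof=0), np.std(ages.to_numpy()))
```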
Return instr if opname is found; otherwise raise AssertionError.
Set the maximum depth of the Python interpreter stack to limit. This limit prevents infinite recursion from causing an overflow of the C stack and crashing Python. The highest possible limit is platform-dependent. A user may need to set the limit higher when they have a program that requires deep recursion and a platform that supports a higher limit. This should be done with care, because a too-high limit can lead to a crash. If the new limit is too low at the current recursion depth, a RecursionError exception is raised. Changed in version 3.5.1: A RecursionError exception is now raised if the new limit is too low at the current recursion depth.
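A short illustration of reading, raising, and restoring the limit (the unbounded helper function is for demonstration only):

```python
import sys

old_limit = sys.getrecursionlimit()

def recurse(n):
    return recurse(n + 1)  # no base case: guaranteed to hit the limit

hit = False
try:
    recurse(0)
except RecursionError:
    hit = True  # the interpreter enforced the current limit

sys.setrecursionlimit(old_limit + 1000)
assert sys.getrecursionlimit() == old_limit + 1000
sys.setrecursionlimit(old_limit)  # restore; a too-low new limit raises RecursionError
```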
Compat alias: tf.compat.v1.UnconnectedGradients. See the Migration guide for more details.

The gradient of y with respect to x can be zero in two different ways: there could be no differentiable path in the graph connecting x to y (and so we can statically prove that the gradient is zero) or it could be that runtime values of tensors in a particular execution lead to a gradient of zero (say, if a relu unit happens to not be activated). To allow you to distinguish between these two cases you can choose what value gets returned for the gradient when there is no path in the graph from x to y:

NONE: Indicates that [None] will be returned if there is no path from x to y.
ZERO: Indicates that a zero tensor will be returned in the shape of x.

Class Variables

NONE : tf.UnconnectedGradients
ZERO : tf.UnconnectedGradients
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    ]
}

You can also set the authentication scheme on a per-view or per-viewset basis, using the APIView class-based views.

from rest_framework.authentication import SessionAuthentication, BasicAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

class ExampleView(APIView):
    authentication_classes = [SessionAuthentication, BasicAuthentication]
    permission_classes = [IsAuthenticated]

    def get(self, request, format=None):
        content = {
            'user': str(request.user),  # `django.contrib.auth.User` instance.
            'auth': str(request.auth),  # None
        }
        return Response(content)

Or, if you're using the @api_view decorator with function based views.

@api_view(['GET'])
@authentication_classes([SessionAuthentication, BasicAuthentication])
@permission_classes([IsAuthenticated])
def example_view(request, format=None):
    content = {
        'user': str(request.user),  # `django.contrib.auth.User` instance.
        'auth': str(request.auth),  # None
    }
    return Response(content)

Unauthorized and Forbidden responses

When an unauthenticated request is denied permission there are two different error codes that may be appropriate.

HTTP 401 Unauthorized
HTTP 403 Permission Denied

HTTP 401 responses must always include a WWW-Authenticate header, that instructs the client how to authenticate. HTTP 403 responses do not include the WWW-Authenticate header.

The kind of response that will be used depends on the authentication scheme. Although multiple authentication schemes may be in use, only one scheme may be used to determine the type of response. The first authentication class set on the view is used when determining the type of response.
Note that a request may successfully authenticate but still be denied permission to perform the request; in that case a 403 Permission Denied response will always be used, regardless of the authentication scheme.

Apache mod_wsgi specific configuration

Note that if deploying to Apache using mod_wsgi, the authorization header is not passed through to a WSGI application by default, as it is assumed that authentication will be handled by Apache, rather than at an application level.

If you are deploying to Apache, and using any non-session based authentication, you will need to explicitly configure mod_wsgi to pass the required headers through to the application. This can be done by specifying the WSGIPassAuthorization directive in the appropriate context and setting it to 'On'.

# this can go in either server config, virtual host, directory or .htaccess
WSGIPassAuthorization On

API Reference

BasicAuthentication

This authentication scheme uses HTTP Basic Authentication, signed against a user's username and password. Basic authentication is generally only appropriate for testing.

If successfully authenticated, BasicAuthentication provides the following credentials.

request.user will be a Django User instance.
request.auth will be None.

Unauthenticated responses that are denied permission will result in an HTTP 401 Unauthorized response with an appropriate WWW-Authenticate header. For example:

WWW-Authenticate: Basic realm="api"

Note: If you use BasicAuthentication in production you must ensure that your API is only available over https. You should also ensure that your API clients will always re-request the username and password at login, and will never store those details to persistent storage.

TokenAuthentication

This authentication scheme uses a simple token-based HTTP Authentication scheme. Token authentication is appropriate for client-server setups, such as native desktop and mobile clients.
To use the TokenAuthentication scheme you'll need to configure the authentication classes to include TokenAuthentication, and additionally include rest_framework.authtoken in your INSTALLED_APPS setting:

INSTALLED_APPS = [
    ...
    'rest_framework.authtoken'
]

Note: Make sure to run manage.py migrate after changing your settings. The rest_framework.authtoken app provides Django database migrations.

You'll also need to create tokens for your users.

from rest_framework.authtoken.models import Token

token = Token.objects.create(user=...)
print(token.key)

For clients to authenticate, the token key should be included in the Authorization HTTP header. The key should be prefixed by the string literal "Token", with whitespace separating the two strings. For example:

Authorization: Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b

Note: If you want to use a different keyword in the header, such as Bearer, simply subclass TokenAuthentication and set the keyword class variable.

If successfully authenticated, TokenAuthentication provides the following credentials.

request.user will be a Django User instance.
request.auth will be a rest_framework.authtoken.models.Token instance.

Unauthenticated responses that are denied permission will result in an HTTP 401 Unauthorized response with an appropriate WWW-Authenticate header. For example:

WWW-Authenticate: Token

The curl command line tool may be useful for testing token authenticated APIs. For example:

curl -X GET http://127.0.0.1:8000/api/example/ -H 'Authorization: Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b'

Note: If you use TokenAuthentication in production you must ensure that your API is only available over https.

Generating Tokens

By using signals

If you want every user to have an automatically generated Token, you can simply catch the User's post_save signal.
from django.conf import settings
from django.db.models.signals import post_save
from django.dispatch import receiver
from rest_framework.authtoken.models import Token

@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_auth_token(sender, instance=None, created=False, **kwargs):
    if created:
        Token.objects.create(user=instance)

Note that you'll want to ensure you place this code snippet in an installed models.py module, or some other location that will be imported by Django on startup.

If you've already created some users, you can generate tokens for all existing users like this:

from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token

for user in User.objects.all():
    Token.objects.get_or_create(user=user)

By exposing an api endpoint

When using TokenAuthentication, you may want to provide a mechanism for clients to obtain a token given the username and password. REST framework provides a built-in view to provide this behaviour. To use it, add the obtain_auth_token view to your URLconf:

from rest_framework.authtoken import views
urlpatterns += [
    path('api-token-auth/', views.obtain_auth_token)
]

Note that the URL part of the pattern can be whatever you want to use.

The obtain_auth_token view will return a JSON response when valid username and password fields are POSTed to the view using form data or JSON:

{ 'token' : '9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b' }

Note that the default obtain_auth_token view explicitly uses JSON requests and responses, rather than using default renderer and parser classes in your settings.

By default, there are no permissions or throttling applied to the obtain_auth_token view. If you do wish to apply throttling you'll need to override the view class, and include them using the throttle_classes attribute.

If you need a customized version of the obtain_auth_token view, you can do so by subclassing the ObtainAuthToken view class, and using that in your url conf instead.
For example, you may return additional user information beyond the token value:

from rest_framework.authtoken.views import ObtainAuthToken
from rest_framework.authtoken.models import Token
from rest_framework.response import Response

class CustomAuthToken(ObtainAuthToken):

    def post(self, request, *args, **kwargs):
        serializer = self.serializer_class(data=request.data,
                                           context={'request': request})
        serializer.is_valid(raise_exception=True)
        user = serializer.validated_data['user']
        token, created = Token.objects.get_or_create(user=user)
        return Response({
            'token': token.key,
            'user_id': user.pk,
            'email': user.email
        })

And in your urls.py:

urlpatterns += [
    path('api-token-auth/', CustomAuthToken.as_view())
]

With Django admin

It is also possible to create Tokens manually through the admin interface. In case you are using a large user base, we recommend that you monkey patch the TokenAdmin class to customize it to your needs, more specifically by declaring the user field as raw_field.

your_app/admin.py:

from rest_framework.authtoken.admin import TokenAdmin

TokenAdmin.raw_id_fields = ['user']

Using Django manage.py command

Since version 3.6.4 it's possible to generate a user token using the following command:

./manage.py drf_create_token <username>

this command will return the API token for the given user, creating it if it doesn't exist:

Generated token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b for user user1

In case you want to regenerate the token (for example if it has been compromised or leaked) you can pass an additional parameter:

./manage.py drf_create_token -r <username>

SessionAuthentication

This authentication scheme uses Django's default session backend for authentication. Session authentication is appropriate for AJAX clients that are running in the same session context as your website.

If successfully authenticated, SessionAuthentication provides the following credentials.

request.user will be a Django User instance.
request.auth will be None.
Unauthenticated responses that are denied permission will result in an HTTP 403 Forbidden response. If you're using an AJAX-style API with SessionAuthentication, you'll need to make sure you include a valid CSRF token for any "unsafe" HTTP method calls, such as PUT, PATCH, POST or DELETE requests. See the Django CSRF documentation for more details. Warning: Always use Django's standard login view when creating login pages. This will ensure your login views are properly protected. CSRF validation in REST framework works slightly differently from standard Django due to the need to support both session and non-session based authentication to the same views. This means that only authenticated requests require CSRF tokens, and anonymous requests may be sent without CSRF tokens. This behaviour is not suitable for login views, which should always have CSRF validation applied. RemoteUserAuthentication This authentication scheme allows you to delegate authentication to your web server, which sets the REMOTE_USER environment variable. To use it, you must have django.contrib.auth.backends.RemoteUserBackend (or a subclass) in your AUTHENTICATION_BACKENDS setting. By default, RemoteUserBackend creates User objects for usernames that don't already exist. To change this and other behaviour, consult the Django documentation. If successfully authenticated, RemoteUserAuthentication provides the following credentials: request.user will be a Django User instance. request.auth will be None. Consult your web server's documentation for information about configuring an authentication method, e.g.: Apache Authentication How-To NGINX (Restricting Access) Custom authentication To implement a custom authentication scheme, subclass BaseAuthentication and override the .authenticate(self, request) method. The method should return a two-tuple of (user, auth) if authentication succeeds, or None otherwise. 
In some circumstances instead of returning None, you may want to raise an AuthenticationFailed exception from the .authenticate() method.

Typically the approach you should take is:

If authentication is not attempted, return None. Any other authentication schemes also in use will still be checked.
If authentication is attempted but fails, raise an AuthenticationFailed exception. An error response will be returned immediately, regardless of any permissions checks, and without checking any other authentication schemes.

You may also override the .authenticate_header(self, request) method. If implemented, it should return a string that will be used as the value of the WWW-Authenticate header in an HTTP 401 Unauthorized response.

If the .authenticate_header() method is not overridden, the authentication scheme will return HTTP 403 Forbidden responses when an unauthenticated request is denied access.

Note: When your custom authenticator is invoked by the request object's .user or .auth properties, you may see an AttributeError re-raised as a WrappedAttributeError. This is necessary to prevent the original exception from being suppressed by the outer property access. Python will not recognize that the AttributeError originates from your custom authenticator and will instead assume that the request object does not have a .user or .auth property. These errors should be fixed or otherwise handled by your authenticator.

Example

The following example will authenticate any incoming request as the user given by the username in a custom request header named 'X-USERNAME'.
from django.contrib.auth.models import User
from rest_framework import authentication
from rest_framework import exceptions

class ExampleAuthentication(authentication.BaseAuthentication):
    def authenticate(self, request):
        username = request.META.get('HTTP_X_USERNAME')
        if not username:
            return None

        try:
            user = User.objects.get(username=username)
        except User.DoesNotExist:
            raise exceptions.AuthenticationFailed('No such user')

        return (user, None)

Third party packages

The following third-party packages are also available.

Django OAuth Toolkit

The Django OAuth Toolkit package provides OAuth 2.0 support and works with Python 3.4+. The package is maintained by jazzband and uses the excellent OAuthLib. The package is well documented, and well supported and is currently our recommended package for OAuth 2.0 support.

Installation & configuration

Install using pip.

pip install django-oauth-toolkit

Add the package to your INSTALLED_APPS and modify your REST framework settings.

INSTALLED_APPS = [
    ...
    'oauth2_provider',
]

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'oauth2_provider.contrib.rest_framework.OAuth2Authentication',
    ]
}

For more details see the Django REST framework - Getting started documentation.

Django REST framework OAuth

The Django REST framework OAuth package provides both OAuth1 and OAuth2 support for REST framework. This package was previously included directly in the REST framework but is now supported and maintained as a third-party package.

Installation & configuration

Install the package using pip.

pip install djangorestframework-oauth

For details on configuration and usage see the Django REST framework OAuth documentation for authentication and permissions.

JSON Web Token Authentication

JSON Web Token is a fairly new standard which can be used for token-based authentication. Unlike the built-in TokenAuthentication scheme, JWT Authentication doesn't need to use a database to validate a token.
A package for JWT authentication is djangorestframework-simplejwt which provides some features as well as a pluggable token blacklist app.

Hawk HTTP Authentication

The HawkREST library builds on the Mohawk library to let you work with Hawk signed requests and responses in your API. Hawk lets two parties securely communicate with each other using messages signed by a shared key. It is based on HTTP MAC access authentication (which was based on parts of OAuth 1.0).

HTTP Signature Authentication

HTTP Signature (currently an IETF draft) provides a way to achieve origin authentication and message integrity for HTTP messages. Similar to Amazon's HTTP Signature scheme, used by many of its services, it permits stateless, per-request authentication. Elvio Toccalino maintains the djangorestframework-httpsignature (outdated) package which provides an easy to use HTTP Signature Authentication mechanism. You can use the updated fork version of djangorestframework-httpsignature, which is drf-httpsig.

Djoser

Djoser library provides a set of views to handle basic actions such as registration, login, logout, password reset and account activation. The package works with a custom user model and uses token-based authentication. This is a ready to use REST implementation of the Django authentication system.

django-rest-auth / dj-rest-auth

This library provides a set of REST API endpoints for registration, authentication (including social media authentication), password reset, retrieve and update user details, etc. By having these API endpoints, your client apps such as AngularJS, iOS, Android, and others can communicate to your Django backend site independently via REST APIs for user management.

There are currently two forks of this project. Django-rest-auth is the original project, but is not currently receiving updates. Dj-rest-auth is a newer fork of the project.
django-rest-framework-social-oauth2 Django-rest-framework-social-oauth2 library provides an easy way to integrate social plugins (facebook, twitter, google, etc.) to your authentication system and an easy oauth2 setup. With this library, you will be able to authenticate users based on external tokens (e.g. facebook access token), convert these tokens to "in-house" oauth2 tokens and use and generate oauth2 tokens to authenticate your users. django-rest-knox Django-rest-knox library provides models and views to handle token-based authentication in a more secure and extensible way than the built-in TokenAuthentication scheme - with Single Page Applications and Mobile clients in mind. It provides per-client tokens, and views to generate them when provided some other authentication (usually basic authentication), to delete the token (providing a server enforced logout) and to delete all tokens (logs out all clients that a user is logged into). drfpasswordless drfpasswordless adds (Medium, Square Cash inspired) passwordless support to Django REST Framework's TokenAuthentication scheme. Users log in and sign up with a token sent to a contact point like an email address or a mobile number. django-rest-authemail django-rest-authemail provides a RESTful API interface for user signup and authentication. Email addresses are used for authentication, rather than usernames. API endpoints are available for signup, signup email verification, login, logout, password reset, password reset verification, email change, email change verification, password change, and user detail. A fully functional example project and detailed instructions are included. Django-Rest-Durin Django-Rest-Durin is built with the idea to have one library that does token auth for multiple Web/CLI/Mobile API clients via one interface but allows different token configuration for each API Client that consumes the API. 
It provides support for multiple tokens per user via custom models, views, and permissions that work with Django REST Framework. The token expiration time can be different per API client and is customizable via the Django Admin Interface. More information can be found in the Documentation.
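As a rough sketch of how such packages typically plug in, enabling djangorestframework-simplejwt usually amounts to a REST_FRAMEWORK settings entry plus a pair of token URLs (check the package's own documentation for the current class and URL names):

```python
# settings.py -- enable JWT authentication globally
# (sketch, assuming djangorestframework-simplejwt is installed)
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework_simplejwt.authentication.JWTAuthentication",
    ),
}

# urls.py -- endpoints that issue and refresh token pairs
from django.urls import path
from rest_framework_simplejwt.views import TokenObtainPairView, TokenRefreshView

urlpatterns = [
    path("api/token/", TokenObtainPairView.as_view(), name="token_obtain_pair"),
    path("api/token/refresh/", TokenRefreshView.as_view(), name="token_refresh"),
]
```

Clients then POST credentials to api/token/ to obtain an access/refresh pair and send the access token in an Authorization header on subsequent requests.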
All arguments are optional and default to 0. Arguments may be integers or floats, and may be positive or negative.

Only days, seconds and microseconds are stored internally. Arguments are converted to those units:

A millisecond is converted to 1000 microseconds.
A minute is converted to 60 seconds.
An hour is converted to 3600 seconds.
A week is converted to 7 days.

and days, seconds and microseconds are then normalized so that the representation is unique, with

0 <= microseconds < 1000000
0 <= seconds < 3600*24 (the number of seconds in one day)
-999999999 <= days <= 999999999

The following example illustrates how any arguments besides days, seconds and microseconds are "merged" and normalized into those three resulting attributes:

>>> from datetime import timedelta
>>> delta = timedelta(
...     days=50,
...     seconds=27,
...     microseconds=10,
...     milliseconds=29000,
...     minutes=5,
...     hours=8,
...     weeks=2
... )
>>> # Only days, seconds, and microseconds remain
>>> delta
datetime.timedelta(days=64, seconds=29156, microseconds=10)

If any argument is a float and there are fractional microseconds, the fractional microseconds left over from all arguments are combined and their sum is rounded to the nearest microsecond using the round-half-to-even tiebreaker. If no argument is a float, the conversion and normalization processes are exact (no information is lost).

If the normalized value of days lies outside the indicated range, OverflowError is raised.

Note that normalization of negative values may be surprising at first. For example:

>>> from datetime import timedelta
>>> d = timedelta(microseconds=-1)
>>> (d.days, d.seconds, d.microseconds)
(-1, 86399, 999999)
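The round-half-to-even tiebreaker for fractional microseconds can be observed directly, using only the standard library:

```python
from datetime import timedelta

# Half-to-even: 0.5 microseconds rounds down to 0 (0 is even),
# while 1.5 microseconds rounds up to 2 (2 is even).
half_down = timedelta(microseconds=0.5)
half_up = timedelta(microseconds=1.5)

print(half_down == timedelta(0))               # True
print(half_up == timedelta(microseconds=2))    # True
```

Both 0.5 and 1.5 are exactly midway between two microsecond values, yet they round in opposite directions, which is exactly the banker's-rounding behavior described above.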
Read up to n bytes from the memory buffer. If n is not specified or negative, all bytes are returned.
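A minimal illustration with io.BytesIO, the in-memory buffer this method belongs to:

```python
import io

buf = io.BytesIO(b"hello world")

print(buf.read(5))   # b'hello'  -- up to 5 bytes from the current position
print(buf.read())    # b' world' -- n omitted: all remaining bytes
print(buf.read())    # b''       -- buffer exhausted
```

Note that read advances the stream position, so a second full read returns the empty bytes object rather than the original contents.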
Return str(self).
Set the JoinStyle for the collection (for all its elements).

Parameters
js : JoinStyle or {'miter', 'round', 'bevel'}
Compile a regular expression pattern into a regular expression object, which can be used for matching using its match(), search() and other methods, described below.

The expression's behaviour can be modified by specifying a flags value. Values can be any of the following variables, combined using bitwise OR (the | operator).

The sequence

prog = re.compile(pattern)
result = prog.match(string)

is equivalent to

result = re.match(pattern, string)

but using re.compile() and saving the resulting regular expression object for reuse is more efficient when the expression will be used several times in a single program.

Note The compiled versions of the most recent patterns passed to re.compile() and the module-level matching functions are cached, so programs that use only a few regular expressions at a time needn't worry about compiling regular expressions.
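A short example of compiling once with combined flags and reusing the resulting pattern object:

```python
import re

# Compile once; flags are combined with bitwise OR.
prog = re.compile(r"^cat\w*", re.IGNORECASE | re.MULTILINE)

text = "Catalog\ndog\ncatnip"

# MULTILINE makes ^ match at the start of every line;
# IGNORECASE lets 'Catalog' match despite the capital C.
print(prog.findall(text))  # ['Catalog', 'catnip']
```

The same prog object can now be reused across many strings without recompiling the pattern each time.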
class sklearn.ensemble.StackingRegressor(estimators, final_estimator=None, *, cv=None, n_jobs=None, passthrough=False, verbose=0) [source]

Stack of estimators with a final regressor.

Stacked generalization consists of stacking the outputs of the individual estimators and using a regressor to compute the final prediction. Stacking allows you to use the strength of each individual estimator by using their outputs as the input of a final estimator.

Note that estimators_ are fitted on the full X while final_estimator_ is trained using cross-validated predictions of the base estimators using cross_val_predict.

Read more in the User Guide.

New in version 0.22.

Parameters

estimators : list of (str, estimator)
Base estimators which will be stacked together. Each element of the list is defined as a tuple of string (i.e. name) and an estimator instance. An estimator can be set to 'drop' using set_params.

final_estimator : estimator, default=None
A regressor which will be used to combine the base estimators. The default regressor is a RidgeCV.

cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy used in cross_val_predict to train final_estimator. Possible inputs for cv are:

None, to use the default 5-fold cross validation,
integer, to specify the number of folds in a (Stratified) KFold,
An object to be used as a cross-validation generator,
An iterable yielding train, test splits.

For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here.

Note A larger number of splits will provide no benefit if the number of training samples is large enough; it will only increase the training time. cv is not used for model evaluation but for prediction.

n_jobs : int, default=None
The number of jobs to run in parallel for fit of all estimators. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

passthrough : bool, default=False
When False, only the predictions of estimators will be used as training data for final_estimator. When True, the final_estimator is trained on the predictions as well as the original training data.

verbose : int, default=0
Verbosity level.

Attributes

estimators_ : list of estimator
The elements of the estimators parameter, having been fitted on the training data. If an estimator has been set to 'drop', it will not appear in estimators_.

named_estimators_ : Bunch
Attribute to access any fitted sub-estimators by name.

final_estimator_ : estimator
The regressor which stacks the fitted base estimators.

References

1. Wolpert, David H. "Stacked generalization." Neural networks 5.2 (1992): 241-259.

Examples

>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import RidgeCV
>>> from sklearn.svm import LinearSVR
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import StackingRegressor
>>> X, y = load_diabetes(return_X_y=True)
>>> estimators = [
...     ('lr', RidgeCV()),
...     ('svr', LinearSVR(random_state=42))
... ]
>>> reg = StackingRegressor(
...     estimators=estimators,
...     final_estimator=RandomForestRegressor(n_estimators=10,
...                                           random_state=42)
... )
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X, y, random_state=42
... )
>>> reg.fit(X_train, y_train).score(X_test, y_test)
0.3...

Methods

fit(X, y[, sample_weight]) Fit the estimators.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get the parameters of an estimator from the ensemble.
predict(X, **predict_params) Predict target for X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of an estimator from the ensemble.
transform(X) Return the predictions for X for each estimator.

fit(X, y, sample_weight=None) [source]
Fit the estimators.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights.

Returns
self : object

fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
X : array-like of shape (n_samples, n_features)
Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_params : dict
Additional fit parameters.

Returns
X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.

get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter.

Parameters
deep : bool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.

property n_features_in_
Number of features seen during fit.

predict(X, **predict_params) [source]
Predict target for X.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
**predict_params : dict of str -> obj
Parameters to the predict called by the final_estimator. Note that this may be used to return uncertainties from some estimators with return_std or return_cov. Be aware that it will only account for uncertainty in the final estimator.
Returns
y_pred : ndarray of shape (n_samples,) or (n_samples, n_output)
Predicted targets.

score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters
X : array-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.

Returns
score : float
\(R^2\) of self.predict(X) w.r.t. y.

Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).

set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators.

Parameters
**params : keyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the estimator, the individual estimators of the estimators parameter can also be set, or can be removed by setting them to 'drop'.

transform(X) [source]
Return the predictions for X for each estimator.
Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.

Returns
y_preds : ndarray of shape (n_samples, n_estimators)
Prediction outputs for each estimator.

Examples using sklearn.ensemble.StackingRegressor

Combine predictors using stacking
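A minimal sketch on synthetic data (base estimator names are arbitrary) showing that transform returns one column of predictions per base estimator, while predict combines those columns through the final estimator:

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([1.0, 2.0, 3.0]) + 0.01 * rng.randn(100)

reg = StackingRegressor(
    estimators=[("lin", LinearRegression()),
                ("tree", DecisionTreeRegressor(random_state=0))],
    final_estimator=Ridge(),
    cv=5,
)
reg.fit(X, y)

# transform: one prediction column per base estimator
print(reg.transform(X).shape)    # (100, 2)

# predict: the final_estimator combines the stacked predictions
print(reg.predict(X[:2]).shape)  # (2,)
```

This makes the fit/transform split concrete: the (100, 2) matrix of base-estimator predictions is exactly the training input the final Ridge sees (plus the original features when passthrough=True).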