doc_2800
Performs a single optimization step. Parameters closure (callable, optional) – A closure that reevaluates the model and returns the loss.
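A minimal sketch of the closure form, assuming a standard torch.optim setup (the model, data, and learning rate below are illustrative); optimizers such as LBFGS re-evaluate the loss several times per step and therefore require a closure:

import torch

model = torch.nn.Linear(2, 1)
x, y = torch.randn(8, 2), torch.randn(8, 1)
opt = torch.optim.LBFGS(model.parameters(), lr=0.1)

def closure():
    opt.zero_grad()                                   # clear stale gradients
    loss = torch.nn.functional.mse_loss(model(x), y)  # re-evaluate the model
    loss.backward()                                   # recompute gradients
    return loss                                       # step() consumes the returned loss

opt.step(closure)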
doc_2801
See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixSoftmax tf.raw_ops.SparseMatrixSoftmax( logits, type, name=None ) Calculate the softmax of the innermost dimensions of a SparseMatrix. Missing values are treated as -inf (i.e., logits of zero probability); and the output has the same sparsity structure as the input (though missing values in the output may now be treated as having probability zero). Args logits A Tensor of type variant. A CSRSparseMatrix. type A tf.DType from: tf.float32, tf.float64. name A name for the operation (optional). Returns A Tensor of type variant.
doc_2802
Return time object with same time and tzinfo.
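A small illustrative sketch:

from datetime import datetime, timezone

dt = datetime(2021, 5, 1, 12, 30, tzinfo=timezone.utc)
print(dt.timetz())   # 12:30:00+00:00: keeps the time and tzinfo, drops the date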
doc_2803
If key is in the dictionary, return its value. If not, insert key with a value of default and return default. default defaults to None.
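A short illustration:

counts = {}
for word in ["a", "b", "a"]:
    counts.setdefault(word, 0)   # inserts 0 only when the key is missing
    counts[word] += 1
print(counts)   # {'a': 2, 'b': 1}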
doc_2804
is_tensor Returns True if obj is a PyTorch tensor. is_storage Returns True if obj is a PyTorch storage object. is_complex Returns True if the data type of input is a complex data type, i.e., one of torch.complex64 and torch.complex128. is_floating_point Returns True if the data type of input is a floating point data type, i.e., one of torch.float64, torch.float32, torch.float16, and torch.bfloat16. is_nonzero Returns True if the input is a single element tensor which is not equal to zero after type conversions. set_default_dtype Sets the default floating point dtype to d. get_default_dtype Get the current default floating point torch.dtype. set_default_tensor_type Sets the default torch.Tensor type to floating point tensor type t. numel Returns the total number of elements in the input tensor. set_printoptions Set options for printing. set_flush_denormal Disables denormal floating numbers on CPU. Creation Ops Note Random sampling creation ops are listed under Random sampling and include: torch.rand() torch.rand_like() torch.randn() torch.randn_like() torch.randint() torch.randint_like() torch.randperm() You may also use torch.empty() with the In-place random sampling methods to create torch.Tensor objects with values sampled from a broader range of distributions. tensor Constructs a tensor with data. sparse_coo_tensor Constructs a sparse tensor in COO(rdinate) format with specified values at the given indices. as_tensor Convert the data into a torch.Tensor. as_strided Create a view of an existing torch.Tensor input with specified size, stride and storage_offset. from_numpy Creates a Tensor from a numpy.ndarray. zeros Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size. zeros_like Returns a tensor filled with the scalar value 0, with the same size as input. ones Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size. ones_like Returns a tensor filled with the scalar value 1, with the same size as input. arange Returns a 1-D tensor of size \left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil with values from the interval [start, end) taken with common difference step beginning from start. range Returns a 1-D tensor of size \left\lfloor \frac{\text{end} - \text{start}}{\text{step}} \right\rfloor + 1 with values from start to end with step step. linspace Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive. logspace Creates a one-dimensional tensor of size steps whose values are evenly spaced from \text{base}^{\text{start}} to \text{base}^{\text{end}}, inclusive, on a logarithmic scale with base base. eye Returns a 2-D tensor with ones on the diagonal and zeros elsewhere. empty Returns a tensor filled with uninitialized data. empty_like Returns an uninitialized tensor with the same size as input. empty_strided Returns a tensor filled with uninitialized data. full Creates a tensor of size size filled with fill_value. full_like Returns a tensor with the same size as input filled with fill_value. quantize_per_tensor Converts a float tensor to a quantized tensor with given scale and zero point. quantize_per_channel Converts a float tensor to a per-channel quantized tensor with given scales and zero points.
dequantize Returns an fp32 Tensor by dequantizing a quantized Tensor. complex Constructs a complex tensor with its real part equal to real and its imaginary part equal to imag. polar Constructs a complex tensor whose elements are Cartesian coordinates corresponding to the polar coordinates with absolute value abs and angle angle. heaviside Computes the Heaviside step function for each element in input. Indexing, Slicing, Joining, Mutating Ops cat Concatenates the given sequence of tensors in the given dimension. chunk Splits a tensor into a specific number of chunks. column_stack Creates a new tensor by horizontally stacking the tensors in tensors. dstack Stack tensors in sequence depthwise (along third axis). gather Gathers values along an axis specified by dim. hstack Stack tensors in sequence horizontally (column wise). index_select Returns a new tensor which indexes the input tensor along dimension dim using the entries in index, which is a LongTensor. masked_select Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask, which is a BoolTensor. movedim Moves the dimension(s) of input at the position(s) in source to the position(s) in destination. moveaxis Alias for torch.movedim(). narrow Returns a new tensor that is a narrowed version of input tensor. nonzero reshape Returns a tensor with the same data and number of elements as input, but with the specified shape. row_stack Alias of torch.vstack(). scatter Out-of-place version of torch.Tensor.scatter_() scatter_add Out-of-place version of torch.Tensor.scatter_add_() split Splits the tensor into chunks. squeeze Returns a tensor with all the dimensions of input of size 1 removed. stack Concatenates a sequence of tensors along a new dimension. swapaxes Alias for torch.transpose(). swapdims Alias for torch.transpose(). t Expects input to be <= 2-D tensor and transposes dimensions 0 and 1. take Returns a new tensor with the elements of input at the given indices. tensor_split Splits a tensor into multiple sub-tensors, all of which are views of input, along dimension dim according to the indices or number of sections specified by indices_or_sections. tile Constructs a tensor by repeating the elements of input. transpose Returns a tensor that is a transposed version of input. unbind Removes a tensor dimension. unsqueeze Returns a new tensor with a dimension of size one inserted at the specified position. vstack Stack tensors in sequence vertically (row wise). where Return a tensor of elements selected from either x or y, depending on condition. Generators Generator Creates and returns a generator object that manages the state of the algorithm which produces pseudo random numbers. Random sampling seed Sets the seed for generating random numbers to a non-deterministic random number. manual_seed Sets the seed for generating random numbers. initial_seed Returns the initial seed for generating random numbers as a Python long. get_rng_state Returns the random number generator state as a torch.ByteTensor. set_rng_state Sets the random number generator state. torch.default_generator Returns the default CPU torch.Generator bernoulli Draws binary random numbers (0 or 1) from a Bernoulli distribution. multinomial Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of tensor input. normal Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given.
poisson Returns a tensor of the same size as input with each element sampled from a Poisson distribution with rate parameter given by the corresponding element in input. rand Returns a tensor filled with random numbers from a uniform distribution on the interval [0, 1). rand_like Returns a tensor with the same size as input that is filled with random numbers from a uniform distribution on the interval [0, 1). randint Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive). randint_like Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive). randn Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution). randn_like Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. randperm Returns a random permutation of integers from 0 to n - 1. In-place random sampling There are a few more in-place random sampling functions defined on Tensors as well. Click through to refer to their documentation: torch.Tensor.bernoulli_() - in-place version of torch.bernoulli() torch.Tensor.cauchy_() - numbers drawn from the Cauchy distribution torch.Tensor.exponential_() - numbers drawn from the exponential distribution torch.Tensor.geometric_() - elements drawn from the geometric distribution torch.Tensor.log_normal_() - samples from the log-normal distribution torch.Tensor.normal_() - in-place version of torch.normal() torch.Tensor.random_() - numbers sampled from the discrete uniform distribution torch.Tensor.uniform_() - numbers sampled from the continuous uniform distribution Quasi-random sampling quasirandom.SobolEngine The torch.quasirandom.SobolEngine is an engine for generating (scrambled) Sobol sequences. Serialization save Saves an object to a disk file. load Loads an object saved with torch.save() from a file. Parallelism get_num_threads Returns the number of threads used for parallelizing CPU operations. set_num_threads Sets the number of threads used for intraop parallelism on CPU. get_num_interop_threads Returns the number of threads used for inter-op parallelism on CPU. set_num_interop_threads Sets the number of threads used for interop parallelism on CPU. Locally disabling gradient computation The context managers torch.no_grad(), torch.enable_grad(), and torch.set_grad_enabled() are helpful for locally disabling and enabling gradient computation. See Locally disabling gradient computation for more details on their usage. These context managers are thread local, so they won’t work if you send work to another thread using the threading module, etc. Examples: >>> x = torch.zeros(1, requires_grad=True) >>> with torch.no_grad(): ... y = x * 2 >>> y.requires_grad False >>> is_train = False >>> with torch.set_grad_enabled(is_train): ... y = x * 2 >>> y.requires_grad False >>> torch.set_grad_enabled(True) # this can also be used as a function >>> y = x * 2 >>> y.requires_grad True >>> torch.set_grad_enabled(False) >>> y = x * 2 >>> y.requires_grad False no_grad Context-manager that disables gradient calculation. enable_grad Context-manager that enables gradient calculation. set_grad_enabled Context-manager that sets gradient calculation to on or off. Math operations Pointwise Ops abs Computes the absolute value of each element in input.
absolute Alias for torch.abs(). acos Computes the inverse cosine of each element in input. arccos Alias for torch.acos(). acosh Returns a new tensor with the inverse hyperbolic cosine of the elements of input. arccosh Alias for torch.acosh(). add Adds the scalar other to each element of input and returns a new resulting tensor. addcdiv Performs the element-wise division of tensor1 by tensor2, multiplies the result by the scalar value, and adds it to input. addcmul Performs the element-wise multiplication of tensor1 by tensor2, multiplies the result by the scalar value, and adds it to input. angle Computes the element-wise angle (in radians) of the given input tensor. asin Returns a new tensor with the arcsine of the elements of input. arcsin Alias for torch.asin(). asinh Returns a new tensor with the inverse hyperbolic sine of the elements of input. arcsinh Alias for torch.asinh(). atan Returns a new tensor with the arctangent of the elements of input. arctan Alias for torch.atan(). atanh Returns a new tensor with the inverse hyperbolic tangent of the elements of input. arctanh Alias for torch.atanh(). atan2 Element-wise arctangent of \text{input}_{i} / \text{other}_{i} with consideration of the quadrant. bitwise_not Computes the bitwise NOT of the given input tensor. bitwise_and Computes the bitwise AND of input and other. bitwise_or Computes the bitwise OR of input and other. bitwise_xor Computes the bitwise XOR of input and other. ceil Returns a new tensor with the ceil of the elements of input, the smallest integer greater than or equal to each element. clamp Clamp all elements in input into the range [min, max]. clip Alias for torch.clamp(). conj Computes the element-wise conjugate of the given input tensor. copysign Create a new floating-point tensor with the magnitude of input and the sign of other, elementwise. cos Returns a new tensor with the cosine of the elements of input. cosh Returns a new tensor with the hyperbolic cosine of the elements of input. deg2rad Returns a new tensor with each of the elements of input converted from angles in degrees to radians. div Divides each element of input by the corresponding element of other. divide Alias for torch.div(). digamma Computes the logarithmic derivative of the gamma function on input. erf Computes the error function of each element. erfc Computes the complementary error function of each element of input. erfinv Computes the inverse error function of each element of input. exp Returns a new tensor with the exponential of the elements of the input tensor input. exp2 Computes the base two exponential function of input. expm1 Returns a new tensor with the exponential of the elements minus 1 of input. fake_quantize_per_channel_affine Returns a new tensor with the data in input fake quantized per channel using scale, zero_point, quant_min and quant_max, across the channel specified by axis. fake_quantize_per_tensor_affine Returns a new tensor with the data in input fake quantized using scale, zero_point, quant_min and quant_max. fix Alias for torch.trunc(). float_power Raises input to the power of exponent, elementwise, in double precision. floor Returns a new tensor with the floor of the elements of input, the largest integer less than or equal to each element. floor_divide fmod Computes the element-wise remainder of division. frac Computes the fractional portion of each element in input. imag Returns a new tensor containing imaginary values of the self tensor. ldexp Multiplies input by 2**other.
lerp Does a linear interpolation of two tensors start (given by input) and end based on a scalar or tensor weight and returns the resulting out tensor. lgamma Computes the logarithm of the gamma function on input. log Returns a new tensor with the natural logarithm of the elements of input. log10 Returns a new tensor with the logarithm to the base 10 of the elements of input. log1p Returns a new tensor with the natural logarithm of (1 + input). log2 Returns a new tensor with the logarithm to the base 2 of the elements of input. logaddexp Logarithm of the sum of exponentiations of the inputs. logaddexp2 Logarithm of the sum of exponentiations of the inputs in base-2. logical_and Computes the element-wise logical AND of the given input tensors. logical_not Computes the element-wise logical NOT of the given input tensor. logical_or Computes the element-wise logical OR of the given input tensors. logical_xor Computes the element-wise logical XOR of the given input tensors. logit Returns a new tensor with the logit of the elements of input. hypot Given the legs of a right triangle, return its hypotenuse. i0 Computes the zeroth order modified Bessel function of the first kind for each element of input. igamma Computes the regularized lower incomplete gamma function. igammac Computes the regularized upper incomplete gamma function. mul Multiplies each element of input with the scalar other and returns a new resulting tensor. multiply Alias for torch.mul(). mvlgamma Computes the multivariate log-gamma function with dimension p element-wise. nan_to_num Replaces NaN, positive infinity, and negative infinity values in input with the values specified by nan, posinf, and neginf, respectively. neg Returns a new tensor with the negative of the elements of input. negative Alias for torch.neg(). nextafter Return the next floating-point value after input towards other, elementwise. polygamma Computes the n^{th} derivative of the digamma function on input. pow Takes the power of each element in input with exponent and returns a tensor with the result. rad2deg Returns a new tensor with each of the elements of input converted from angles in radians to degrees. real Returns a new tensor containing real values of the self tensor. reciprocal Returns a new tensor with the reciprocal of the elements of input. remainder Computes the element-wise remainder of division. round Returns a new tensor with each of the elements of input rounded to the closest integer. rsqrt Returns a new tensor with the reciprocal of the square-root of each of the elements of input. sigmoid Returns a new tensor with the sigmoid of the elements of input. sign Returns a new tensor with the signs of the elements of input. sgn For complex tensors, this function returns a new tensor whose elements have the same angle as that of the elements of input and absolute value 1. signbit Tests if each element of input has its sign bit set (is less than zero) or not. sin Returns a new tensor with the sine of the elements of input. sinc Computes the normalized sinc of input. sinh Returns a new tensor with the hyperbolic sine of the elements of input. sqrt Returns a new tensor with the square-root of the elements of input. square Returns a new tensor with the square of the elements of input. sub Subtracts other, scaled by alpha, from input. subtract Alias for torch.sub(). tan Returns a new tensor with the tangent of the elements of input. tanh Returns a new tensor with the hyperbolic tangent of the elements of input.
true_divide Alias for torch.div() with rounding_mode=None. trunc Returns a new tensor with the truncated integer values of the elements of input. xlogy Computes input * log(other), with the result defined as NaN when other is NaN and 0 when input is 0. Reduction Ops argmax Returns the indices of the maximum value of all elements in the input tensor. argmin Returns the indices of the minimum value(s) of the flattened tensor or along a dimension. amax Returns the maximum value of each slice of the input tensor in the given dimension(s) dim. amin Returns the minimum value of each slice of the input tensor in the given dimension(s) dim. all Tests if all elements in input evaluate to True. any Tests if any element in input evaluates to True. max Returns the maximum value of all elements in the input tensor. min Returns the minimum value of all elements in the input tensor. dist Returns the p-norm of (input - other). logsumexp Returns the log of summed exponentials of each row of the input tensor in the given dimension dim. mean Returns the mean value of all elements in the input tensor. median Returns the median of the values in input. nanmedian Returns the median of the values in input, ignoring NaN values. mode Returns a namedtuple (values, indices) where values is the mode value of each row of the input tensor in the given dimension dim. norm Returns the matrix norm or vector norm of a given tensor. nansum Returns the sum of all elements, treating Not a Numbers (NaNs) as zero. prod Returns the product of all elements in the input tensor. quantile Returns the q-th quantiles of all elements in the input tensor, doing a linear interpolation when the q-th quantile lies between two data points. nanquantile This is a variant of torch.quantile() that “ignores” NaN values, computing the quantiles q as if NaN values in input did not exist. std Returns the standard-deviation of all elements in the input tensor. std_mean Returns the standard-deviation and mean of all elements in the input tensor. sum Returns the sum of all elements in the input tensor. unique Returns the unique elements of the input tensor. unique_consecutive Eliminates all but the first element from every consecutive group of equivalent elements. var Returns the variance of all elements in the input tensor. var_mean Returns the variance and mean of all elements in the input tensor. count_nonzero Counts the number of non-zero values in the tensor input along the given dim. Comparison Ops allclose This function checks if all input and other satisfy the condition |input - other| <= atol + rtol * |other|. argsort Returns the indices that sort a tensor along a given dimension in ascending order by value. eq Computes element-wise equality. equal True if two tensors have the same size and elements, False otherwise. ge Computes \text{input} \geq \text{other} element-wise. greater_equal Alias for torch.ge(). gt Computes \text{input} > \text{other} element-wise. greater Alias for torch.gt(). isclose Returns a new tensor with boolean elements representing if each element of input is “close” to the corresponding element of other. isfinite Returns a new tensor with boolean elements representing if each element is finite or not. isinf Tests if each element of input is infinite (positive or negative infinity) or not. isposinf Tests if each element of input is positive infinity or not. isneginf Tests if each element of input is negative infinity or not. isnan Returns a new tensor with boolean elements representing if each element of input is NaN or not.
isreal Returns a new tensor with boolean elements representing if each element of input is real-valued or not. kthvalue Returns a namedtuple (values, indices) where values is the k-th smallest element of each row of the input tensor in the given dimension dim. le Computes \text{input} \leq \text{other} element-wise. less_equal Alias for torch.le(). lt Computes \text{input} < \text{other} element-wise. less Alias for torch.lt(). maximum Computes the element-wise maximum of input and other. minimum Computes the element-wise minimum of input and other. fmax Computes the element-wise maximum of input and other. fmin Computes the element-wise minimum of input and other. ne Computes \text{input} \neq \text{other} element-wise. not_equal Alias for torch.ne(). sort Sorts the elements of the input tensor along a given dimension in ascending order by value. topk Returns the k largest elements of the given input tensor along a given dimension. msort Sorts the elements of the input tensor along its first dimension in ascending order by value. Spectral Ops stft Short-time Fourier transform (STFT). istft Inverse short time Fourier Transform. bartlett_window Bartlett window function. blackman_window Blackman window function. hamming_window Hamming window function. hann_window Hann window function. kaiser_window Computes the Kaiser window with window length window_length and shape parameter beta. Other Operations atleast_1d Returns a 1-dimensional view of each input tensor with zero dimensions. atleast_2d Returns a 2-dimensional view of each input tensor with zero dimensions. atleast_3d Returns a 3-dimensional view of each input tensor with zero dimensions. bincount Count the frequency of each value in an array of non-negative ints. block_diag Create a block diagonal matrix from provided tensors. broadcast_tensors Broadcasts the given tensors according to Broadcasting semantics. broadcast_to Broadcasts input to the shape shape. broadcast_shapes Similar to broadcast_tensors() but for shapes. bucketize Returns the indices of the buckets to which each value in the input belongs, where the boundaries of the buckets are set by boundaries. cartesian_prod Computes the Cartesian product of the given sequence of tensors. cdist Computes the batched p-norm distance between each pair of the two collections of row vectors. clone Returns a copy of input. combinations Compute combinations of length r of the given tensor. cross Returns the cross product of vectors in dimension dim of input and other. cummax Returns a namedtuple (values, indices) where values is the cumulative maximum of elements of input in the dimension dim. cummin Returns a namedtuple (values, indices) where values is the cumulative minimum of elements of input in the dimension dim. cumprod Returns the cumulative product of elements of input in the dimension dim. cumsum Returns the cumulative sum of elements of input in the dimension dim. diag If input is a vector (1-D tensor), then returns a 2-D square tensor. diag_embed Creates a tensor whose diagonals of certain 2D planes (specified by dim1 and dim2) are filled by input. diagflat If input is a vector (1-D tensor), then returns a 2-D square tensor. diagonal Returns a partial view of input with its diagonal elements with respect to dim1 and dim2 appended as a dimension at the end of the shape. diff Computes the n-th forward difference along the given dimension.
einsum Sums the product of the elements of the input operands along dimensions specified using a notation based on the Einstein summation convention. flatten Flattens input by reshaping it into a one-dimensional tensor. flip Reverses the order of an n-D tensor along the given axes in dims. fliplr Flip tensor in the left/right direction, returning a new tensor. flipud Flip tensor in the up/down direction, returning a new tensor. kron Computes the Kronecker product, denoted by \otimes, of input and other. rot90 Rotates an n-D tensor by 90 degrees in the plane specified by the dims axes. gcd Computes the element-wise greatest common divisor (GCD) of input and other. histc Computes the histogram of a tensor. meshgrid Takes N tensors, each of which can be either a scalar or a 1-dimensional vector, and creates N N-dimensional grids, where the i-th grid is defined by expanding the i-th input over dimensions defined by the other inputs. lcm Computes the element-wise least common multiple (LCM) of input and other. logcumsumexp Returns the logarithm of the cumulative summation of the exponentiation of elements of input in the dimension dim. ravel Return a contiguous flattened tensor. renorm Returns a tensor where each sub-tensor of input along dimension dim is normalized such that the p-norm of the sub-tensor is lower than the value maxnorm. repeat_interleave Repeat elements of a tensor. roll Roll the tensor along the given dimension(s). searchsorted Find the indices from the innermost dimension of sorted_sequence such that, if the corresponding values in values were inserted before the indices, the order of the corresponding innermost dimension within sorted_sequence would be preserved. tensordot Returns a contraction of a and b over multiple dimensions. trace Returns the sum of the elements of the diagonal of the input 2-D matrix. tril Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices input; the other elements of the result tensor out are set to 0. tril_indices Returns the indices of the lower triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. triu Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input; the other elements of the result tensor out are set to 0. triu_indices Returns the indices of the upper triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. vander Generates a Vandermonde matrix. view_as_real Returns a view of input as a real tensor. view_as_complex Returns a view of input as a complex tensor. BLAS and LAPACK Operations addbmm Performs a batch matrix-matrix product of matrices stored in batch1 and batch2, with a reduced add step (all matrix multiplications get accumulated along the first dimension). addmm Performs a matrix multiplication of the matrices mat1 and mat2. addmv Performs a matrix-vector product of the matrix mat and the vector vec. addr Performs the outer-product of vectors vec1 and vec2 and adds it to the matrix input. baddbmm Performs a batch matrix-matrix product of matrices in batch1 and batch2. bmm Performs a batch matrix-matrix product of matrices stored in input and mat2. chain_matmul Returns the matrix product of the N 2-D tensors. cholesky Computes the Cholesky decomposition of a symmetric positive-definite matrix A or for batches of symmetric positive-definite matrices.
cholesky_inverse Computes the inverse of a symmetric positive-definite matrix A using its Cholesky factor u: returns matrix inv. cholesky_solve Solves a linear system of equations with a positive semidefinite matrix to be inverted given its Cholesky factor matrix u. dot Computes the dot product of two 1D tensors. eig Computes the eigenvalues and eigenvectors of a real square matrix. geqrf This is a low-level function for calling LAPACK directly. ger Alias of torch.outer(). inner Computes the dot product for 1D tensors. inverse Takes the inverse of the square matrix input. det Calculates determinant of a square matrix or batches of square matrices. logdet Calculates log determinant of a square matrix or batches of square matrices. slogdet Calculates the sign and log absolute value of the determinant(s) of a square matrix or batches of square matrices. lstsq Computes the solution to the least squares and least norm problems for a full-rank matrix A of size (m \times n) and a matrix B of size (m \times k). lu Computes the LU factorization of a matrix or batches of matrices A. lu_solve Returns the LU solve of the linear system Ax = b using the partially pivoted LU factorization of A from torch.lu(). lu_unpack Unpacks the data and pivots from a LU factorization of a tensor. matmul Matrix product of two tensors. matrix_power Returns the matrix raised to the power n for square matrices. matrix_rank Returns the numerical rank of a 2-D tensor. matrix_exp Returns the matrix exponential. mm Performs a matrix multiplication of the matrices input and mat2. mv Performs a matrix-vector product of the matrix input and the vector vec. orgqr Computes the orthogonal matrix Q of a QR factorization, from the (input, input2) tuple returned by torch.geqrf(). ormqr Multiplies mat (given by input3) by the orthogonal Q matrix of the QR factorization formed by torch.geqrf() that is represented by (a, tau) (given by (input, input2)). outer Outer product of input and vec2. pinverse Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor. qr Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that \text{input} = QR with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices. solve This function returns the solution to the system of linear equations represented by AX = B and the LU factorization of A, in order as a namedtuple solution, LU. svd Computes the singular value decomposition of either a matrix or batch of matrices input. svd_lowrank Return the singular value decomposition (U, S, V) of a matrix, batches of matrices, or a sparse matrix A such that A \approx U \operatorname{diag}(S) V^T. pca_lowrank Performs linear Principal Component Analysis (PCA) on a low-rank matrix, batches of such matrices, or sparse matrix. symeig This function returns eigenvalues and eigenvectors of a real symmetric matrix input or a batch of real symmetric matrices, represented by a namedtuple (eigenvalues, eigenvectors). lobpcg Find the k largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigenvalue problem using matrix-free LOBPCG methods. trapz Estimate \int y\,dx along dim, using the trapezoid rule. triangular_solve Solves a system of equations with a triangular coefficient matrix A and multiple right-hand sides b.
vdot Computes the dot product of two 1D tensors. Utilities compiled_with_cxx11_abi Returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1. result_type Returns the torch.dtype that would result from performing an arithmetic operation on the provided input tensors. can_cast Determines if a type conversion is allowed under PyTorch casting rules described in the type promotion documentation. promote_types Returns the torch.dtype with the smallest size and scalar kind that is not smaller nor of lower kind than either type1 or type2. use_deterministic_algorithms Sets whether PyTorch operations must use “deterministic” algorithms. are_deterministic_algorithms_enabled Returns True if the global deterministic flag is turned on. _assert A wrapper around Python’s assert which is symbolically traceable.
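As a small sketch of the type-promotion utilities listed above (the printed values follow PyTorch's documented promotion rules):

import torch

print(torch.result_type(torch.tensor([1, 2]), 1.0))     # torch.float32
print(torch.promote_types(torch.int64, torch.float32))  # torch.float32
print(torch.can_cast(torch.double, torch.float))        # True
print(torch.can_cast(torch.float, torch.int))           # False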
doc_2805
See Migration guide for more details. tf.compat.v1.errors.UnauthenticatedError tf.errors.UnauthenticatedError( node_def, op, message ) This exception is not currently used. Attributes error_code The integer error code that describes the error. message The error message that describes the error. node_def The NodeDef proto representing the op that failed. op The operation that failed, if known. Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op.
doc_2806
See Migration guide for more details. tf.compat.v1.keras.layers.Masking tf.keras.layers.Masking( mask_value=0.0, **kwargs ) For each timestep in the input tensor (dimension #1 in the tensor), if all values in the input tensor at that timestep are equal to mask_value, then the timestep will be masked (skipped) in all downstream layers (as long as they support masking). If any downstream layer does not support masking yet receives such an input mask, an exception will be raised. Example: Consider a NumPy data array x of shape (samples, timesteps, features), to be fed to an LSTM layer. You want to mask timesteps #3 and #5 because you lack data for those timesteps. You can: Set x[:, 3, :] = 0. and x[:, 5, :] = 0. Insert a Masking layer with mask_value=0. before the LSTM layer: samples, timesteps, features = 32, 10, 8 inputs = np.random.random([samples, timesteps, features]).astype(np.float32) inputs[:, 3, :] = 0. inputs[:, 5, :] = 0. model = tf.keras.models.Sequential() model.add(tf.keras.layers.Masking(mask_value=0., input_shape=(timesteps, features))) model.add(tf.keras.layers.LSTM(32)) output = model(inputs) # The time steps 3 and 5 will be skipped from the LSTM calculation. See the masking and padding guide for more details.
doc_2807
tf.compat.v1.layers.AveragePooling1D( pool_size, strides, padding='valid', data_format='channels_last', name=None, **kwargs ) Arguments pool_size An integer or tuple/list of a single integer, representing the size of the pooling window. strides An integer or tuple/list of a single integer, specifying the strides of the pooling operation. padding A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length). name A string, the name of the layer. Attributes graph scope_name
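For a runnable sketch, the Keras layer tf.keras.layers.AveragePooling1D accepts the same pool_size/strides/padding/data_format arguments (the tf.compat.v1.layers version is graph-mode oriented):

import numpy as np
import tensorflow as tf

x = np.arange(12, dtype=np.float32).reshape(1, 6, 2)   # (batch, length, channels)
pool = tf.keras.layers.AveragePooling1D(pool_size=2, strides=2, padding='valid')
print(pool(x).shape)   # (1, 3, 2): windows of 2 halve the length dimension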
doc_2808
Close the queue: release internal resources. A queue must not be used anymore after it is closed. For example, get(), put() and empty() methods must no longer be called. New in version 3.9.
doc_2809
Return the URL.
doc_2810
Return a logger with the specified name or, if name is None, return a logger which is the root logger of the hierarchy. If specified, the name is typically a dot-separated hierarchical name like ‘a’, ‘a.b’ or ‘a.b.c.d’. Choice of these names is entirely up to the developer who is using logging. All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
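For instance:

import logging

a = logging.getLogger('app.db')
b = logging.getLogger('app.db')
print(a is b)                      # True: the same name always yields the same logger instance
print(logging.getLogger().name)    # 'root' when name is omitted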
doc_2811
Return True if the call was successfully cancelled.
doc_2812
Get the factor by which to magnify images passed to draw_image(). Allows a backend to have images at a different resolution to other artists.
doc_2813
Create and return a new event loop object. This method should never return None.
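A minimal usage sketch:

import asyncio

loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(asyncio.sleep(0, result=42)))   # 42
finally:
    loop.close()   # release the loop's resources when done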
doc_2814
Get parameters for this estimator. Parameters deep : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns params : dict Parameter names mapped to their values.
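For example, with a nested estimator (a sketch using scikit-learn's Pipeline; deep=True exposes nested parameters as <step>__<param> keys):

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([('scale', StandardScaler()), ('clf', LogisticRegression())])
print(pipe.get_params()['clf__C'])   # 1.0, read from the nested estimator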
doc_2815
Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function: the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly.
doc_2816
Move axes of an array to new positions. Other axes remain in their original order. New in version 1.11.0. Parameters a : np.ndarray The array whose axes should be reordered. source : int or sequence of int Original positions of the axes to move. These must be unique. destination : int or sequence of int Destination positions for each of the original axes. These must also be unique. Returns result : np.ndarray Array with moved axes. This array is a view of the input array. See also transpose Permute the dimensions of an array. swapaxes Interchange two axes of an array. Examples >>> x = np.zeros((3, 4, 5)) >>> np.moveaxis(x, 0, -1).shape (4, 5, 3) >>> np.moveaxis(x, -1, 0).shape (5, 3, 4) These all achieve the same result: >>> np.transpose(x).shape (5, 4, 3) >>> np.swapaxes(x, 0, -1).shape (5, 4, 3) >>> np.moveaxis(x, [0, 1], [-1, -2]).shape (5, 4, 3) >>> np.moveaxis(x, [0, 1, 2], [-1, -2, -3]).shape (5, 4, 3)
doc_2817
Return the kernel k(X, Y) and optionally its gradient. Parameters X : array-like of shape (n_samples_X, n_features) or list of object Left argument of the returned kernel k(X, Y). Y : array-like of shape (n_samples_Y, n_features) or list of object, default=None Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead. eval_gradient : bool, default=False Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Returns K : ndarray of shape (n_samples_X, n_samples_Y) Kernel k(X, Y) K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims), optional The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
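A small sketch with an RBF kernel (shapes follow the description above):

import numpy as np
from sklearn.gaussian_process.kernels import RBF

X = np.random.rand(5, 2)
K, K_gradient = RBF(length_scale=1.0)(X, eval_gradient=True)
print(K.shape)           # (5, 5)
print(K_gradient.shape)  # (5, 5, 1): one hyperparameter dimension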
doc_2818
Set the tick label pad in points. Parameters val : float
doc_2819
class ast.Sub class ast.Mult class ast.Div class ast.FloorDiv class ast.Mod class ast.Pow class ast.LShift class ast.RShift class ast.BitOr class ast.BitXor class ast.BitAnd class ast.MatMult Binary operator tokens.
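These appear as the op field of a BinOp node, e.g.:

import ast

tree = ast.parse('x @ y', mode='eval')
print(type(tree.body.op).__name__)   # 'MatMult'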
doc_2820
The file system permissions that the directory will receive when it is saved. Defaults to FILE_UPLOAD_DIRECTORY_PERMISSIONS.
doc_2821
Token value for "@".
doc_2822
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters X : array-like of shape (n_samples, n_features) Input samples. y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_params : dict Additional fit parameters. Returns X_new : ndarray of shape (n_samples, n_features_new) Transformed array.
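Equivalent to fit(X).transform(X) in a single call, e.g. with a scaler (illustrative):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X_new = StandardScaler().fit_transform(X)
print(X_new.mean(axis=0))   # ~[0. 0.]: features are centered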
doc_2823
Bases: mpl_toolkits.axisartist.axislines.AxisArtistHelper.Floating nth_coord = along which coordinate value varies; nth_coord = 0 -> x axis, nth_coord = 1 -> y axis. Methods: get_axislabel_pos_angle(axes), get_axislabel_transform(axes), get_line(axes), get_line_transform(axes), get_tick_iterators(axes) (yields tick_loc, tick_angle, tick_label, and optionally a second tick_label), get_tick_transform(axes), property grid_info, set_extremes(e1, e2), update_lim(axes).
doc_2824
See Migration guide for more details. tf.compat.v1.raw_ops.AudioSummaryV2 tf.raw_ops.AudioSummaryV2( tag, tensor, sample_rate, max_outputs=3, name=None ) The summary has up to max_outputs summary values containing audio. The audio is built from tensor which must be 3-D with shape [batch_size, frames, channels] or 2-D with shape [batch_size, frames]. The values are assumed to be in the range of [-1.0, 1.0] with a sample rate of sample_rate. The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values: If max_outputs is 1, the summary value tag is 'tag/audio'. If max_outputs is greater than 1, the summary value tags are generated sequentially as 'tag/audio/0', 'tag/audio/1', etc. Args tag A Tensor of type string. Scalar. Used to build the tag attribute of the summary values. tensor A Tensor of type float32. 2-D of shape [batch_size, frames]. sample_rate A Tensor of type float32. The sample rate of the signal in hertz. max_outputs An optional int that is >= 1. Defaults to 3. Max number of batch elements to generate audio for. name A name for the operation (optional). Returns A Tensor of type string.
doc_2825
Return a *Translations instance based on the domain, localedir, and languages, which are first passed to find() to get a list of the associated .mo file paths. Instances with identical .mo file names are cached. The actual class instantiated is class_ if provided, otherwise GNUTranslations. The class’s constructor must take a single file object argument. If provided, codeset will change the charset used to encode translated strings in the lgettext() and lngettext() methods. If multiple files are found, later files are used as fallbacks for earlier ones. To allow setting the fallback, copy.copy() is used to clone each translation object from the cache; the actual instance data is still shared with the cache. If no .mo file is found, this function raises OSError if fallback is false (which is the default), and returns a NullTranslations instance if fallback is true. Changed in version 3.3: IOError used to be raised instead of OSError. Deprecated since version 3.8, will be removed in version 3.10: The codeset parameter.
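A hedged usage sketch; the domain 'myapp', the 'locale' directory, and the 'de' language are illustrative placeholders:

import gettext

t = gettext.translation('myapp', localedir='locale',
                        languages=['de'], fallback=True)
t.install()        # binds t.gettext to _() in builtins
print(_('Hello'))  # translated if a catalog was found, otherwise 'Hello'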
doc_2826
See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAddSign tf.raw_ops.ApplyAddSign( var, m, lr, alpha, sign_decay, beta, grad, use_locking=False, name=None ) m_t <- beta_1 * m_{t-1} + (1 - beta_1) * g; update <- (alpha + sign_decay * sign(g) * sign(m)) * g; variable <- variable - lr_t * update Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). m A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. alpha A Tensor. Must have the same type as var. Must be a scalar. sign_decay A Tensor. Must have the same type as var. Must be a scalar. beta A Tensor. Must have the same type as var. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var and m tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
doc_2827
Make sure that self.home_views has an entry for all axes present in the figure.
doc_2828
Draw samples from a Rayleigh distribution. The \(\chi\) and Weibull distributions are generalizations of the Rayleigh. Note New code should use the rayleigh method of a default_rng() instance instead; please see the Quick Start. Parameters scalefloat or array_like of floats, optional Scale, also equals the mode. Must be non-negative. Default is 1. sizeint or tuple of ints, optional Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if scale is a scalar. Otherwise, np.array(scale).size samples are drawn. Returns outndarray or scalar Drawn samples from the parameterized Rayleigh distribution. See also Generator.rayleigh which should be used for new code. Notes The probability density function for the Rayleigh distribution is \[P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}\] The Rayleigh distribution would arise, for example, if the East and North components of the wind velocity had identical zero-mean Gaussian distributions. Then the wind speed would have a Rayleigh distribution. References 1 Brighton Webs Ltd., “Rayleigh Distribution,” https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp 2 Wikipedia, “Rayleigh distribution” https://en.wikipedia.org/wiki/Rayleigh_distribution Examples Draw values from the distribution and plot the histogram >>> from matplotlib.pyplot import hist >>> values = hist(np.random.rayleigh(3, 100000), bins=200, density=True) Wave heights tend to follow a Rayleigh distribution. If the mean wave height is 1 meter, what fraction of waves are likely to be larger than 3 meters? >>> meanvalue = 1 >>> modevalue = np.sqrt(2 / np.pi) * meanvalue >>> s = np.random.rayleigh(modevalue, 1000000) The percentage of waves larger than 3 meters is: >>> 100.*sum(s>3)/1000000. 0.087300000000000003 # random
doc_2829
tf.compat.v1.nn.pool( input, window_shape, pooling_type, padding, dilation_rate=None, strides=None, name=None, data_format=None, dilations=None ) In the case that data_format does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels: output[b, x[0], ..., x[N-1], c] = REDUCE_{z[0], ..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], where the reduction function REDUCE depends on the value of pooling_type, and pad_before is defined based on the value of padding as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions. In the case that data_format starts with "NC", the input and output are simply transposed as follows: pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) Args input Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape if data_format starts with "NC". Pooling happens over the spatial dimensions only. window_shape Sequence of N ints >= 1. pooling_type Specifies pooling operation, must be "AVG" or "MAX". padding The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details. dilation_rate Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1. strides Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1. name Optional. Name of the op. data_format A string or None. Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"), or the second dimension (if data_format starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW". dilations Alias for dilation_rate Returns Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels] if data_format is None or does not start with "NC", or [batch_size, num_channels] + output_spatial_shape if data_format starts with "NC", where output_spatial_shape depends on the value of padding: If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i]) If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]). Raises ValueError if arguments are invalid.
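A minimal sketch via the equivalent TF2 endpoint tf.nn.pool, average-pooling a rank-3 NWC input (the shape arithmetic follows the VALID formula above):

import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(8, dtype=np.float32).reshape(1, 8, 1))
y = tf.nn.pool(x, window_shape=[2], pooling_type='AVG',
               strides=[2], padding='VALID')
print(y.shape)   # (1, 4, 1): ceil((8 - (2 - 1)) / 2) = 4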
doc_2830
A numeric handle of a system object which will become “ready” when the process ends. You can use this value if you want to wait on several events at once using multiprocessing.connection.wait(). Otherwise calling join() is simpler. On Windows, this is an OS handle usable with the WaitForSingleObject and WaitForMultipleObjects family of API calls. On Unix, this is a file descriptor usable with primitives from the select module. New in version 3.3.
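A sketch of waiting on several processes at once via their sentinels:

import multiprocessing as mp
from multiprocessing.connection import wait

def worker():
    pass

if __name__ == '__main__':
    procs = [mp.Process(target=worker) for _ in range(3)]
    for p in procs:
        p.start()
    pending = {p.sentinel: p for p in procs}
    while pending:
        for sentinel in wait(list(pending)):   # blocks until some process ends
            pending.pop(sentinel).join()       # reap the finished process
    print('all workers finished')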
doc_2831
Remove leading characters. Strip whitespaces (including newlines) or a set of specified characters from each string in the Series/Index from left side. Equivalent to str.lstrip(). Parameters to_strip : str or None, default None Specifying the set of characters to be removed. All combinations of this set of characters will be stripped. If None then whitespaces are removed. Returns Series or Index of object See also Series.str.strip Remove leading and trailing characters in Series/Index. Series.str.lstrip Remove leading characters in Series/Index. Series.str.rstrip Remove trailing characters in Series/Index. Examples >>> s = pd.Series(['1. Ant. ', '2. Bee!\n', '3. Cat?\t', np.nan]) >>> s 0 1. Ant. 1 2. Bee!\n 2 3. Cat?\t 3 NaN dtype: object >>> s.str.strip() 0 1. Ant. 1 2. Bee! 2 3. Cat? 3 NaN dtype: object >>> s.str.lstrip('123.') 0 Ant. 1 Bee!\n 2 Cat?\t 3 NaN dtype: object >>> s.str.rstrip('.!? \n\t') 0 1. Ant 1 2. Bee 2 3. Cat 3 NaN dtype: object >>> s.str.strip('123.!? \n\t') 0 Ant 1 Bee 2 Cat 3 NaN dtype: object
doc_2832
tf.as_string Compat aliases for migration See Migration guide for more details. tf.compat.v1.as_string, tf.compat.v1.dtypes.as_string, tf.compat.v1.strings.as_string tf.strings.as_string( input, precision=-1, scientific=False, shortest=False, width=-1, fill='', name=None ) Supports many numeric types and boolean. For Unicode, see the https://www.tensorflow.org/tutorials/representation/unicode tutorial. Examples: tf.strings.as_string([3, 2]) <tf.Tensor: shape=(2,), dtype=string, numpy=array([b'3', b'2'], dtype=object)> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy() array([b'3.14', b'2.72'], dtype=object) Args input A Tensor. Must be one of the following types: int8, int16, int32, int64, complex64, complex128, float32, float64, bool. precision An optional int. Defaults to -1. The post-decimal precision to use for floating point numbers. Only used if precision > -1. scientific An optional bool. Defaults to False. Use scientific notation for floating point numbers. shortest An optional bool. Defaults to False. Use shortest representation (either scientific or standard) for floating point numbers. width An optional int. Defaults to -1. Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1. fill An optional string. Defaults to "". The value to pad if width > -1. If empty, pads with spaces. Another typical value is '0'. String cannot be longer than 1 character. name A name for the operation (optional). Returns A Tensor of type string.
doc_2833
If you have code that wants to test whether a request context is present, this function can be used. For instance, you may want to take advantage of request information if the request object is available, but fail silently if it is unavailable. class User(db.Model): def __init__(self, username, remote_addr=None): self.username = username if remote_addr is None and has_request_context(): remote_addr = request.remote_addr self.remote_addr = remote_addr Alternatively you can also just test any of the context bound objects (such as request or g) for truthiness: class User(db.Model): def __init__(self, username, remote_addr=None): self.username = username if remote_addr is None and request: remote_addr = request.remote_addr self.remote_addr = remote_addr Changelog New in version 0.7. Return type bool
doc_2834
Fills self tensor with numbers sampled from the continuous uniform distribution: P(x) = \dfrac{1}{\text{to} - \text{from}}
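For example:

import torch

t = torch.empty(3)
t.uniform_(0.0, 1.0)   # in-place draw from U[0, 1)
print(t)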
doc_2835
Return the number of tests represented by this test object, including all individual tests and sub-suites.
doc_2836
tf.compat.v1.data.TextLineDataset( filenames, compression_type=None, buffer_size=None, num_parallel_reads=None ) Args filenames A tf.string tensor or tf.data.Dataset containing one or more filenames. compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP". buffer_size (Optional.) A tf.int64 scalar denoting the number of bytes to buffer. A value of 0 results in the default buffering values chosen based on the compression type. num_parallel_reads (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially. Attributes element_spec The type specification of an element of this dataset. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset). output_shapes Returns the shape of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset). output_types Returns the type of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset). Methods apply apply( transformation_func ) Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset. dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] Args transformation_func A function that takes one Dataset argument and returns a Dataset. Returns Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator as_numpy_iterator() Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays. Raises TypeError if an element contains a non-Tensor value. RuntimeError if eager execution is not enabled. batch batch( batch_size, drop_remainder=False ) Combines consecutive elements of this dataset into batches. dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. cache cache( filename='' ) Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed. dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache. Args filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. Returns Dataset A Dataset. cardinality cardinality() Returns the cardinality of the dataset, if known.
cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively. concatenate View source concatenate( dataset ) Creates a Dataset by concatenating the given dataset with this dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have the same # nested structures and output types. c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> Args dataset Dataset to be concatenated. Returns Dataset A Dataset. enumerate View source enumerate( start=0 ) Enumerates the elements of this dataset. It is similar to python's enumerate. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) # The nested structure of the input dataset determines the structure of # elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) Args start A tf.int64 scalar tf.Tensor, representing the start value for enumeration. Returns Dataset A Dataset. filter View source filter( predicate ) Filters this dataset according to predicate. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] Args predicate A function mapping a dataset element to a boolean. Returns Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source filter_with_legacy_function( predicate ) Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter(). Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
Args predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor. Returns Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source flat_map( map_func ) Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1). Args map_func A function mapping a dataset element to a dataset. Returns Dataset A Dataset. from_generator View source @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None ) Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from the output_signature argument: def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] There is also a deprecated way to call from_generator, either with the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with the shapes which are either unknown or defined by output_shapes. Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment. Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator.
Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator(). Args generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args. output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator. output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator. args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments. output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator. Returns Dataset A Dataset. from_sparse_tensor_slices View source @staticmethod from_sparse_tensor_slices( sparse_tensor ) Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices(). Args sparse_tensor A tf.sparse.SparseTensor. Returns Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source @staticmethod from_tensor_slices( tensors ) Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. 
batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element, with each component having the same size in the first dimension. Returns Dataset A Dataset. from_tensors View source @staticmethod from_tensors( tensors ) Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead. dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element. Returns Dataset A Dataset. interleave View source interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None ) Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently: # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. 
In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example: dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined. Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to a dataset. cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism. block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. list_files View source @staticmethod list_files( file_pattern, shuffle=None, seed=None ) A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems. Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order. 
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the file pattern, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py Args file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. Returns Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source make_initializable_iterator( shared_name=None ) Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code. Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it: # Building graph ... dataset = ... iterator = dataset.make_initializable_iterator() next_value = iterator.get_next() # This is a Tensor. # ... from within a session ... sess.run(iterator.initializer) try: while True: value = sess.run(next_value) ... except tf.errors.OutOfRangeError: pass Args shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). Returns A tf.data.Iterator for elements of this dataset. Raises RuntimeError If eager execution is enabled. make_one_shot_iterator View source make_one_shot_iterator() Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code. Note: The returned iterator will be initialized automatically.
A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator. Example: # Building graph ... dataset = ... next_value = dataset.make_one_shot_iterator().get_next() # ... from within a session ... try: while True: value = sess.run(next_value) ... except tf.errors.OutOfRangeError: pass Returns An tf.data.Iterator for elements of this dataset. map View source map( map_func, num_parallel_calls=None, deterministic=None ) Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] The input signature of map_func is determined by the structure of each element in this dataset. dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) The value or values returned by map_func determine the structure of each element in the returned dataset. dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. 
The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] 3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to another dataset element. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. map_with_legacy_function View source map_with_legacy_function( map_func, num_parallel_calls=None, deterministic=None ) Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map(). Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2. Args map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order.
If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. options View source options() Returns the options for this dataset and its inputs. Returns A tf.data.Options object representing the dataset options. padded_batch View source padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False ) Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. padded_shapes (Optional.) 
A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank. padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. Raises ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source prefetch( buffer_size ) Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each). dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching. Returns Dataset A Dataset. range View source @staticmethod range( *args, **kwargs ) Creates a Dataset of a step-separated range of values. list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] Args *args follows the same semantics as python's xrange. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. **kwargs output_type: Its expected dtype. (Optional, default: tf.int64). Returns Dataset A RangeDataset. Raises ValueError if len(args) == 0. reduce View source reduce( initial_state, reduce_func ) Reduces the input dataset to a single element. 
The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result. tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 Args initial_state An element representing the initial state of the transformation. reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state. The structure of new_state must match the structure of initial_state. Returns A dataset element corresponding to the final state of the transformation. repeat View source repeat( count=None ) Repeats this dataset so each original value is seen count times. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements. Args count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely. Returns Dataset A Dataset. shard View source shard( num_shards, index ) Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i. A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Args num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel. index A tf.int64 scalar tf.Tensor, representing the worker index. Returns Dataset A Dataset. Raises InvalidArgumentError if num_shards or index are illegal values. Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source shuffle( buffer_size, seed=None, reshuffle_each_iteration=None ) Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 0, 2] In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.) Returns Dataset A Dataset. skip View source skip( count ) Creates a Dataset that skips count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset. Returns Dataset A Dataset. take View source take( count ) Creates a Dataset with at most count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset. Returns Dataset A Dataset. unbatch View source unbatch() Splits elements of a dataset into multiple elements. 
For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...]. elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch. Returns A Dataset. window View source window( size, shift=None, stride=1, drop_remainder=False ) Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the step between the input elements within a window, while the shift argument determines how far the window moves between iterations. For example: dataset = tf.data.Dataset.range(7).window(2) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1] [2, 3] [4, 5] [6] dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [2, 3, 4] [4, 5, 6] dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows. nested = ([1, 2, 3, 4], [5, 6, 7, 8]) dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print(tuple(to_numpy(component) for component in window)) ([1, 2], [5, 6]) ([3, 4], [7, 8]) dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) dataset = dataset.window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print({'a': to_numpy(window['a'])}) {'a': [1, 2]} {'a': [3, 4]} Args size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive. shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive. stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size. Returns Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source with_options( options ) Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense that they apply to the entire dataset.
If options are set multiple times, they are merged as long as different options do not use different non-default values. ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.experimental_deterministic = False ds = ds.with_options(options) Args options A tf.data.Options that identifies the options to use. Returns Dataset A Dataset with the given options. Raises ValueError when an option is set more than once to a non-default value. zip View source @staticmethod zip( datasets ) Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects. # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] Args datasets A nested structure of datasets. Returns Dataset A Dataset. __bool__ View source __bool__() __iter__ View source __iter__() Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. Returns A tf.data.Iterator for the elements of this dataset. Raises RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source __len__() Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead. Returns An integer representing the length of the dataset. Raises RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source __nonzero__()
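To tie the constructor arguments and transformations documented above together, here is a minimal sketch of a TextLineDataset input pipeline. The file paths are hypothetical placeholders; substitute your own text files.

import tensorflow as tf

# Hypothetical file paths -- substitute real text files.
filenames = ["/var/data/log1.txt", "/var/data/log2.txt"]

dataset = tf.compat.v1.data.TextLineDataset(
    filenames,
    compression_type="",   # plain text; use "ZLIB" or "GZIP" for compressed files
    num_parallel_reads=2)  # read files in parallel, interleaving their lines

dataset = dataset.filter(lambda line: tf.strings.length(line) > 0)  # drop empty lines
dataset = dataset.batch(32)
dataset = dataset.prefetch(1)  # overlap preprocessing with consumption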
doc_2837
Update the location of children if necessary and draw them to the given renderer.
doc_2838
stops, uninitializes, and closes the camera stop() -> None Stops recording, uninitializes the camera, and closes it. Once a camera is stopped, the functions below cannot be used until it is started again.
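A minimal sketch of the surrounding lifecycle, assuming at least one camera device is detected; start() and get_image() are the usual companions to stop():

import pygame.camera

pygame.camera.init()
cameras = pygame.camera.list_cameras()           # detected camera devices
cam = pygame.camera.Camera(cameras[0], (640, 480))

cam.start()                # initialize the camera and begin recording
frame = cam.get_image()    # capture one frame as a Surface
cam.stop()                 # stop recording, uninitialize, and close
# get_image() would now fail until start() is called again.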
doc_2839
reference Sound samples into an array samples(Sound) -> array Creates a new array that directly references the samples in a Sound object. Modifying the array will change the Sound. The array will always be in the format returned from pygame.mixer.get_init().
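A short sketch of editing a Sound in place through the referencing array; the sound file name is hypothetical, and the mixer must be initialized first so the array format is defined:

import pygame
import pygame.sndarray

pygame.mixer.init()                        # array format follows mixer.get_init()
sound = pygame.mixer.Sound("beep.wav")     # hypothetical sound file

samples = pygame.sndarray.samples(sound)   # references the Sound's own buffer
samples //= 2                              # halving the samples halves the volume in place
sound.play()                               # plays the modified, quieter audio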
doc_2840
Estimate the transformation from a set of corresponding points. Number of source and destination coordinates must match. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds.
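For instance, with scikit-image's SimilarityTransform (one of the transform classes exposing this method), estimation from four hypothetical point pairs looks like:

import numpy as np
from skimage.transform import SimilarityTransform

src = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
dst = src * 2.0 + 0.5        # destination points: scaled by 2, shifted by 0.5

tform = SimilarityTransform()
success = tform.estimate(src, dst)   # least-squares fit; point counts must match
print(success, tform.scale, tform.translation)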
doc_2841
Complex number type composed of two double-precision floating-point numbers, compatible with Python complex. Character code 'D' Alias numpy.cfloat Alias numpy.complex_ Alias on this platform (Linux x86_64) numpy.complex128: Complex number type composed of 2 64-bit-precision floating-point numbers.
doc_2842
Warning This attribute is a private API. It may be changed or removed without a deprecation period in the future, for instance to accommodate changes in application loading. It’s used to optimize Django’s own test suite, which contains hundreds of models but no relations between models in different applications. By default, available_apps is set to None. After each test, Django calls flush to reset the database state. This empties all tables and emits the post_migrate signal, which recreates one content type and four permissions for each model. This operation gets expensive proportionally to the number of models. Setting available_apps to a list of applications instructs Django to behave as if only the models from these applications were available. The behavior of TransactionTestCase changes as follows: post_migrate is fired before each test to create the content types and permissions for each model in available apps, in case they’re missing. After each test, Django empties only tables corresponding to models in available apps. However, at the database level, truncation may cascade to related models in unavailable apps. Furthermore post_migrate isn’t fired; it will be fired by the next TransactionTestCase, after the correct set of applications is selected. Since the database isn’t fully flushed, if a test creates instances of models not included in available_apps, they will leak and they may cause unrelated tests to fail. Be careful with tests that use sessions; the default session engine stores them in the database. Since post_migrate isn’t emitted after flushing the database, its state after a TransactionTestCase isn’t the same as after a TestCase: it’s missing the rows created by listeners to post_migrate. Considering the order in which tests are executed, this isn’t an issue, provided either all TransactionTestCase in a given test suite declare available_apps, or none of them. available_apps is mandatory in Django’s own test suite.
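A minimal sketch, with hypothetical app labels, of how a test case opts in to this private API:

from django.test import TransactionTestCase

class ContractTests(TransactionTestCase):
    # Private API: after each test, only tables for models in these apps
    # are emptied, and post_migrate is fired for them before each test.
    available_apps = ["contracts", "customers"]  # hypothetical app labels

    def test_create_contract(self):
        ...  # test body elided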
doc_2843
tf.compat.v1.keras.layers.enable_v2_dtype_behavior() By default, the V2 dtype behavior is enabled in TensorFlow 2, so this function is only useful if tf.compat.v1.disable_v2_behavior has been called. Since mixed precision requires V2 dtype behavior to be enabled, this function allows you to use mixed precision in Keras layers if disable_v2_behavior has been called. When enabled, the dtype of Keras layers defaults to floatx (which is typically float32) instead of None. In addition, layers will automatically cast floating-point inputs to the layer's dtype. x = tf.ones((4, 4, 4, 4), dtype='float64') layer = tf.keras.layers.Conv2D(filters=4, kernel_size=2) print(layer.dtype) # float32 since V2 dtype behavior is enabled float32 y = layer(x) # Layer casts inputs since V2 dtype behavior is enabled print(y.dtype.name) float32 A layer author can opt-out their layer from the automatic input casting by passing autocast=False to the base Layer's constructor. This disables the autocasting part of the V2 behavior for that layer, but not the defaulting to floatx part of the V2 behavior. When a global tf.keras.mixed_precision.Policy is set, a Keras layer's dtype will default to the global policy instead of floatx. Layers will automatically cast inputs to the policy's compute_dtype.
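A minimal sketch of the autocast opt-out mentioned above; the layer class itself is a hypothetical example:

import tensorflow as tf

class PassthroughLayer(tf.keras.layers.Layer):  # hypothetical layer
    def __init__(self, **kwargs):
        # Opt out of automatic input casting; the layer's dtype still
        # defaults to floatx under V2 dtype behavior.
        super().__init__(autocast=False, **kwargs)

    def call(self, inputs):
        return inputs  # inputs arrive in their original dtype

layer = PassthroughLayer()
y = layer(tf.ones((2, 2), dtype='float64'))
print(y.dtype)  # float64 -- not cast to the layer's float32 dtype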
doc_2844
Cross-correlation of two 1-dimensional sequences. This function computes the correlation as generally defined in signal processing texts: c_{av}[k] = sum_n a[n+k] * conj(v[n]) with a and v sequences being zero-padded where necessary and conj being the conjugate. Parameters a, varray_like Input sequences. mode{‘valid’, ‘same’, ‘full’}, optional Refer to the convolve docstring. Note that the default is ‘valid’, unlike convolve, which uses ‘full’. old_behaviorbool old_behavior was removed in NumPy 1.10. If you need the old behavior, use multiarray.correlate. Returns outndarray Discrete cross-correlation of a and v. See also convolve Discrete, linear convolution of two one-dimensional sequences. multiarray.correlate Old, no conjugate, version of correlate. scipy.signal.correlate uses FFT which has superior performance on large arrays. Notes The definition of correlation above is not unique and sometimes correlation may be defined differently. Another common definition is: c'_{av}[k] = sum_n a[n] conj(v[n+k]) which is related to c_{av}[k] by c'_{av}[k] = c_{av}[-k]. numpy.correlate may perform slowly in large arrays (e.g. n = 1e5) because it does not use the FFT to compute the convolution; in that case, scipy.signal.correlate might be preferable. Examples >>> np.correlate([1, 2, 3], [0, 1, 0.5]) array([3.5]) >>> np.correlate([1, 2, 3], [0, 1, 0.5], "same") array([2. , 3.5, 3. ]) >>> np.correlate([1, 2, 3], [0, 1, 0.5], "full") array([0.5, 2. , 3.5, 3. , 0. ]) Using complex sequences: >>> np.correlate([1+1j, 2, 3-1j], [0, 1, 0.5j], 'full') array([ 0.5-0.5j, 1.0+0.j , 1.5-1.5j, 3.0-1.j , 0.0+0.j ]) Note that you get the time reversed, complex conjugated result when the two input sequences change places, i.e., c_{va}[k] = c^{*}_{av}[-k]: >>> np.correlate([0, 1, 0.5j], [1+1j, 2, 3-1j], 'full') array([ 0.0+0.j , 3.0+1.j , 1.5+1.5j, 1.0+0.j , 0.5+0.5j])
doc_2845
async def main(): print('Hello ...') await asyncio.sleep(1) print('... World!') # Python 3.7+ asyncio.run(main()) asyncio is a library to write concurrent code using the async/await syntax. asyncio is used as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web-servers, database connection libraries, distributed task queues, etc. asyncio is often a perfect fit for IO-bound and high-level structured network code. asyncio provides a set of high-level APIs to: run Python coroutines concurrently and have full control over their execution; perform network IO and IPC; control subprocesses; distribute tasks via queues; synchronize concurrent code; Additionally, there are low-level APIs for library and framework developers to: create and manage event loops, which provide asynchronous APIs for networking, running subprocesses, handling OS signals, etc; implement efficient protocols using transports; bridge callback-based libraries and code with async/await syntax. Reference High-level APIs Coroutines and Tasks Streams Synchronization Primitives Subprocesses Queues Exceptions Low-level APIs Event Loop Futures Transports and Protocols Policies Platform Support Guides and Tutorials High-level API Index Low-level API Index Developing with asyncio Note The source code for asyncio can be found in Lib/asyncio/.
doc_2846
See Migration guide for more details. tf.compat.v1.raw_ops.CSVDataset tf.raw_ops.CSVDataset( filenames, compression_type, buffer_size, header, field_delim, use_quote_delim, na_value, select_cols, record_defaults, output_shapes, name=None ) Args filenames A Tensor of type string. compression_type A Tensor of type string. buffer_size A Tensor of type int64. header A Tensor of type bool. field_delim A Tensor of type string. use_quote_delim A Tensor of type bool. na_value A Tensor of type string. select_cols A Tensor of type int64. record_defaults A list of Tensor objects with types from: float32, float64, int32, int64, string. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
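This raw op is normally reached through a higher-level wrapper rather than called directly. A sketch using tf.data.experimental.CsvDataset, with a hypothetical two-column file:

import tensorflow as tf

# Hypothetical CSV with an int64 id column and a float32 score column.
dataset = tf.data.experimental.CsvDataset(
    filenames=["/var/data/scores.csv"],
    record_defaults=[tf.int64, tf.float32],  # per-column types or default values
    header=True,
    field_delim=",")

for row_id, score in dataset.take(2):
    print(row_id.numpy(), score.numpy())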
doc_2847
Get the minimum theta limit in degrees.
doc_2848
See torch.symeig()
doc_2849
Base class for all Samplers. Every Sampler subclass has to provide an __iter__() method, providing a way to iterate over indices of dataset elements, and a __len__() method that returns the length of the returned iterators. Note The __len__() method isn’t strictly required by DataLoader, but is expected in any calculation involving the length of a DataLoader.
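A minimal custom Sampler satisfying the protocol described above; this reverse-order sampler is a hypothetical example, not part of the library:

from torch.utils.data import Sampler

class ReverseSampler(Sampler):
    """Yields dataset indices in reverse order."""

    def __init__(self, data_source):
        self.data_source = data_source

    def __iter__(self):
        return iter(range(len(self.data_source) - 1, -1, -1))

    def __len__(self):
        return len(self.data_source)

print(list(ReverseSampler(range(5))))  # [4, 3, 2, 1, 0]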
doc_2850
Check if domains match. New in version 1.6.0. Parameters otherclass instance The other class must have the domain attribute. Returns boolboolean True if the domains are the same, False otherwise.
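For example, with NumPy's polynomial classes, which expose this check:

from numpy.polynomial import Chebyshev, Polynomial

p = Polynomial([1, 2], domain=[0, 1])
q = Chebyshev([1, 2], domain=[0, 1])
r = Polynomial([1, 2], domain=[-1, 1])

print(p.has_samedomain(q))  # True -- domains match even across series kinds
print(p.has_samedomain(r))  # False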
doc_2851
The error response body. This should be an HTTP response body bytestring. It defaults to the plain text, “A server error occurred. Please contact the administrator.”
doc_2852
Receive up to nbytes bytes from the socket, storing the data into a buffer rather than creating a new bytestring. If nbytes is not specified (or 0), receive up to the size available in the given buffer. Returns the number of bytes received. See the Unix manual page recv(2) for the meaning of the optional argument flags; it defaults to zero.
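A short sketch against a hypothetical server; the point of recv_into is that the preallocated buffer is reused instead of allocating a new bytestring per call:

import socket

sock = socket.create_connection(("example.com", 80))   # hypothetical endpoint
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")

buf = bytearray(4096)          # preallocated, reusable buffer
nbytes = sock.recv_into(buf)   # fills buf in place; returns bytes received
print(buf[:nbytes])
sock.close()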
doc_2853
Module: email.mime.message A subclass of MIMENonMultipart, the MIMEMessage class is used to create MIME objects of main type message. _msg is used as the payload, and must be an instance of class Message (or a subclass thereof), otherwise a TypeError is raised. Optional _subtype sets the subtype of the message; it defaults to rfc822. Optional policy argument defaults to compat32. Changed in version 3.6: Added policy keyword-only parameter.
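A minimal sketch wrapping one message inside another, as for a message/rfc822 attachment:

from email.message import Message
from email.mime.message import MIMEMessage

inner = Message()
inner["Subject"] = "Original report"
inner.set_payload("The original message body.")

outer = MIMEMessage(inner)        # payload must be a Message instance
print(outer.get_content_type())   # message/rfc822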
doc_2854
Find the horizontal edges of an image using the Farid transform. Parameters image2-D array Image to process. mask2-D array, optional An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns output2-D array The Farid edge map. Notes The kernel was constructed using the 5-tap weights from [1]. References 1 Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819 2 Farid, H. and Simoncelli, E. P. “Optimally rotation-equivariant directional derivative kernels”, In: 7th International Conference on Computer Analysis of Images and Patterns, Kiel, Germany. Sep, 1997.
doc_2855
Remove trailing coefficients Remove trailing coefficients until a coefficient is reached whose absolute value is greater than tol or the beginning of the series is reached. If all the coefficients would be removed the series is set to [0]. A new series instance is returned with the new coefficients. The current instance remains unchanged. Parameters tolnon-negative number. All trailing coefficients less than tol will be removed. Returns new_seriesseries New instance of series with trimmed coefficients.
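For illustration, using the Polynomial series class and assuming the generic behavior described above applies to it; the coefficients are arbitrary:

from numpy.polynomial import Polynomial

p = Polynomial([1, 2, 0, 1e-12, 0])
print(p.trim().coef)          # default tol=0 drops only exact trailing zeros
print(p.trim(tol=1e-9).coef)  # [1. 2.]: 1e-12 and the zeros are all below tol
print(p.coef)                 # the current instance remains unchanged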
doc_2856
This is a nonstandard shortcut that creates a cursor object by calling the cursor() method, calls the cursor’s executemany() method with the parameters given, and returns the cursor.
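A minimal sketch with sqlite3; the table and rows are illustrative:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE points(x INTEGER, y INTEGER)")

# One call: creates a cursor, calls its executemany(), and returns the cursor.
cur = con.executemany("INSERT INTO points VALUES (?, ?)", [(1, 2), (3, 4)])
print(cur.rowcount)   # 2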
doc_2857
Return True if the process was terminated by a signal, otherwise return False. Availability: Unix.
doc_2858
A generic version of collections.abc.AsyncIterator. New in version 3.5.2. Deprecated since version 3.9: collections.abc.AsyncIterator now supports []. See PEP 585 and Generic Alias Type.
doc_2859
Parse the source into an AST node. Equivalent to compile(source, filename, mode, ast.PyCF_ONLY_AST). If type_comments=True is given, the parser is modified to check and return type comments as specified by PEP 484 and PEP 526. This is equivalent to adding ast.PyCF_TYPE_COMMENTS to the flags passed to compile(). This will report syntax errors for misplaced type comments. Without this flag, type comments will be ignored, and the type_comment field on selected AST nodes will always be None. In addition, the locations of # type: ignore comments will be returned as the type_ignores attribute of Module (otherwise it is always an empty list). In addition, if mode is 'func_type', the input syntax is modified to correspond to PEP 484 "signature type comments", e.g. (str, int) -> List[str]. Also, setting feature_version to a tuple (major, minor) will attempt to parse using that Python version's grammar. Currently major must equal 3. For example, setting feature_version=(3, 4) will allow the use of async and await as variable names. The lowest supported version is (3, 4); the highest is sys.version_info[0:2]. Warning It is possible to crash the Python interpreter with a sufficiently large/complex string due to stack depth limitations in Python's AST compiler. Changed in version 3.8: Added type_comments, mode='func_type' and feature_version.
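Two small examples of the flags described above:

import ast

# Type comments are only collected when requested.
tree = ast.parse("x = 1  # type: int", type_comments=True)
print(tree.body[0].type_comment)        # 'int'

# mode='func_type' parses PEP 484 signature type comments.
sig = ast.parse("(str, int) -> List[str]", mode="func_type")
print(type(sig).__name__)               # FunctionType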
doc_2860
See Migration guide for more details. tf.compat.v1.raw_ops.DebugNanCount tf.raw_ops.DebugNanCount( input, device_name='', tensor_name='', debug_urls=[], gated_grpc=False, name=None ) Counts number of NaNs in the input tensor, for debugging. Args input A Tensor. Input tensor, non-Reference type. device_name An optional string. Defaults to "". tensor_name An optional string. Defaults to "". Name of the input tensor. debug_urls An optional list of strings. Defaults to []. List of URLs to debug targets, e.g., file:///foo/tfdbg_dump, grpc://localhost:11011. gated_grpc An optional bool. Defaults to False. Whether this op will be gated. If any of the debug_urls of this debug node is of the grpc:// scheme, when the value of this attribute is set to True, the data will not actually be sent via the grpc stream unless this debug op has been enabled at the debug_url. If all of the debug_urls of this debug node are of the grpc:// scheme and the debug op is enabled at none of them, the output will be an empty Tensor. name A name for the operation (optional). Returns A Tensor of type int64.
doc_2861
A string giving the site-specific directory prefix where the platform-dependent Python files are installed; by default, this is also '/usr/local'. This can be set at build time with the --exec-prefix argument to the configure script. Specifically, all configuration files (e.g. the pyconfig.h header file) are installed in the directory exec_prefix/lib/pythonX.Y/config, and shared library modules are installed in exec_prefix/lib/pythonX.Y/lib-dynload, where X.Y is the version number of Python, for example 3.2. Note If a virtual environment is in effect, this value will be changed in site.py to point to the virtual environment. The value for the Python installation will still be available, via base_exec_prefix.
doc_2862
See Migration guide for more details. tf.compat.v1.keras.applications.inception_v3.decode_predictions tf.keras.applications.inception_v3.decode_predictions( preds, top=5 ) Arguments preds Numpy array encoding a batch of predictions. top Integer, how many top-guesses to return. Defaults to 5. Returns A list of lists of top class prediction tuples (class_name, class_description, score). One list of tuples per sample in batch input. Raises ValueError In case of invalid shape of the pred array (must be 2D).
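A usage sketch: the random image is a stand-in for real data, and instantiating InceptionV3 with weights='imagenet' downloads the pretrained weights:

import numpy as np
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights='imagenet')
image = np.random.rand(1, 299, 299, 3).astype('float32')   # InceptionV3 input size
preds = model.predict(image)                                # shape (1, 1000)

# One list of (class_name, class_description, score) tuples per sample.
top3 = tf.keras.applications.inception_v3.decode_predictions(preds, top=3)
for name, description, score in top3[0]:
    print(name, description, score)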
doc_2863
Initialize self. See help(type(self)) for accurate signature.
doc_2864
Compute the determinant of an array. Parameters a(…, M, M) array_like Input array to compute determinants for. Returns det(…) array_like Determinant of a. See also slogdet Another way to represent the determinant, more suitable for large matrices where underflow/overflow may occur. scipy.linalg.det Similar function in SciPy. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. The determinant is computed via LU factorization using the LAPACK routine z/dgetrf. Examples The determinant of a 2-D array [[a, b], [c, d]] is ad - bc: >>> a = np.array([[1, 2], [3, 4]]) >>> np.linalg.det(a) -2.0 # may vary Computing determinants for a stack of matrices: >>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ]) >>> a.shape (3, 2, 2) >>> np.linalg.det(a) array([-2., -3., -8.])
doc_2865
Return a tuple of two integers, whose ratio is equal to the Fraction and with a positive denominator. New in version 3.8.
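For example:

from fractions import Fraction

print(Fraction(3, -4).as_integer_ratio())   # (-3, 4): denominator is kept positive
print(Fraction(6, 4).as_integer_ratio())    # (3, 2): the ratio equals the Fraction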
doc_2866
Return whether the Artist has an explicitly set transform. This is True after set_transform has been called.
doc_2867
This method makes an HttpResponse instance a file-like object.
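A small sketch inside a hypothetical view, using the write() method this describes for file-like incremental writes:

from django.http import HttpResponse

def my_view(request):
    response = HttpResponse()
    response.write("<p>First chunk of the page.</p>")
    response.write("<p>Second chunk.</p>")
    return response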
doc_2868
Copy properties from other to self.
doc_2869
returns all sprites from a layer, ordered by how they were added get_sprites_from_layer(layer) -> sprites Returns all sprites from a layer, ordered by how they were added. It uses linear search and the sprites are not removed from the layer.
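A short sketch, assuming a LayeredUpdates group; the layer number is arbitrary:

import pygame

group = pygame.sprite.LayeredUpdates()
first, second = pygame.sprite.Sprite(), pygame.sprite.Sprite()
group.add(first, layer=3)
group.add(second, layer=3)

# Returned in insertion order; both sprites remain in the group.
print(group.get_sprites_from_layer(3) == [first, second])   # True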
doc_2870
Generate a sparse symmetric positive definite matrix. Read more in the User Guide. Parameters dimint, default=1 The size of the random matrix to generate. alphafloat, default=0.95 The probability that a coefficient is zero (see notes). Larger values enforce more sparsity. The value should be in the range [0, 1]. norm_diagbool, default=False Whether to normalize the output matrix to make the leading diagonal elements all 1. smallest_coeffloat, default=0.1 The value of the smallest coefficient, between 0 and 1. largest_coeffloat, default=0.9 The value of the largest coefficient, between 0 and 1. random_stateint, RandomState instance or None, default=None Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns precsparse matrix of shape (dim, dim) The generated matrix. See also make_spd_matrix Notes The sparsity is actually imposed on the Cholesky factor of the matrix. Thus alpha does not translate directly into the filling fraction of the matrix itself.
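For example, using the parameter names documented above (the sizes and coefficients are arbitrary):

from sklearn.datasets import make_sparse_spd_matrix

prec = make_sparse_spd_matrix(dim=4, alpha=0.9, smallest_coef=0.1,
                              largest_coef=0.9, random_state=0)
print(prec.shape)   # (4, 4): a sparse symmetric positive definite precision matrix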
doc_2871
If provided, these arguments ensure that the string is at most or at least the given length.
doc_2872
Map a function in parallel across an array. Split an array into possibly overlapping chunks of a given depth and boundary type, call the given function in parallel on the chunks, combine the chunks and return the resulting array. Parameters functionfunction Function to be mapped which takes an array as an argument. arraynumpy array or dask array Array which the function will be applied to. chunksint, tuple, or tuple of tuples, optional A single integer is interpreted as the length of one side of a square chunk that should be tiled across the array. One tuple of length array.ndim represents the shape of a chunk, and it is tiled across the array. A list of tuples of length ndim, where each sub-tuple is a sequence of chunk sizes along the corresponding dimension. If None, the array is broken up into chunks based on the number of available CPUs. More information about chunks is in the Dask documentation. depthint, optional Integer equal to the depth of the added boundary cells. Defaults to zero. mode{'reflect', 'symmetric', 'periodic', 'wrap', 'nearest', 'edge'}, optional Type of external boundary padding. extra_argumentstuple, optional Tuple of arguments to be passed to the function. extra_keywordsdictionary, optional Dictionary of keyword arguments to be passed to the function. dtypedata-type or None, optional The data-type of the function output. If None, Dask will attempt to infer this by calling the function on data of shape (1,) * ndim. For functions expecting RGB or multichannel data this may be problematic. In such cases, the user should manually specify this dtype argument instead. New in version 0.18: dtype was added in 0.18. multichannelbool, optional If chunks is None and multichannel is True, this function will keep only a single chunk along the channels axis. When depth is specified as a scalar value, that depth will be applied only to the non-channels axes (a depth of 0 will be used along the channels axis). If the user manually specified both chunks and a depth tuple, then this argument will have no effect. New in version 0.18: multichannel was added in 0.18. computebool, optional If True, compute eagerly returning a NumPy Array. If False, compute lazily returning a Dask Array. If None (default), compute based on array type provided (eagerly for NumPy Arrays and lazily for Dask Arrays). Returns outndarray or dask Array Returns the result of applying the operation. Type is dependent on the compute argument. Notes Numpy edge modes 'symmetric', 'wrap', and 'edge' are converted to the equivalent dask boundary modes 'reflect', 'periodic' and 'nearest', respectively. Setting compute=False can be useful for chaining later operations. For example region selection to preview a result or storing large data to disk instead of loading in memory.
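A usage sketch under stated assumptions: dask must be installed, gaussian is just one possible mapped function, and the chunk size, depth, and sigma are arbitrary choices:

import numpy as np
from skimage.filters import gaussian
from skimage.util import apply_parallel

image = np.random.rand(512, 512)       # stand-in for real image data

# Process 128x128 chunks in parallel; depth=8 adds overlapping boundary cells
# so the filter sees enough context near chunk edges.
smoothed = apply_parallel(gaussian, image, chunks=(128, 128), depth=8,
                          extra_keywords={'sigma': 2})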
doc_2873
An instance of ResolverMatch for the response. You can use the func attribute, for example, to verify the view that served the response: # my_view here is a function based view self.assertEqual(response.resolver_match.func, my_view) # class-based views need to be compared by name, as the functions # generated by as_view() won't be equal self.assertEqual(response.resolver_match.func.__name__, MyView.as_view().__name__) If the given URL is not found, accessing this attribute will raise a Resolver404 exception.
doc_2874
See Migration guide for more details. tf.compat.v1.raw_ops.Sigmoid tf.raw_ops.Sigmoid( x, name=None ) Specifically, y = 1 / (1 + exp(-x)). Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
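For example (raw ops take keyword arguments only):

import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
y = tf.raw_ops.Sigmoid(x=x)
print(y.numpy())   # approximately [0.2689, 0.5, 0.7311]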
doc_2875
Reverse the order of elements along axis 1 (left/right). For a 2-D array, this flips the entries in each row in the left/right direction. Columns are preserved, but appear in a different order than before. Parameters marray_like Input array, must be at least 2-D. Returns fndarray A view of m with the columns reversed. Since a view is returned, this operation is \(\mathcal O(1)\). See also flipud Flip array in the up/down direction. flip Flip array in one or more dimensions. rot90 Rotate array counterclockwise. Notes Equivalent to m[:,::-1] or np.flip(m, axis=1). Requires the array to be at least 2-D. Examples >>> A = np.diag([1.,2.,3.]) >>> A array([[1., 0., 0.], [0., 2., 0.], [0., 0., 3.]]) >>> np.fliplr(A) array([[0., 0., 1.], [0., 2., 0.], [3., 0., 0.]]) >>> A = np.random.randn(2,3,5) >>> np.all(np.fliplr(A) == A[:,::-1,...]) True
doc_2876
Returns array of indices of the maximum values along the given axis. Masked values are treated as if they had the value fill_value. Parameters axis{None, integer} If None, the index is into the flattened array, otherwise along the specified axis fill_valuescalar or None, optional Value used to fill in the masked values. If None, the output of maximum_fill_value(self._data) is used instead. out{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns index_array{integer_array} Examples >>> a = np.arange(6).reshape(2,3) >>> a.argmax() 5 >>> a.argmax(0) array([1, 1, 1]) >>> a.argmax(1) array([2, 2])
doc_2877
Casts all floating point parameters and buffers to double datatype. Returns self Return type Module
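For example:

import torch.nn as nn

net = nn.Linear(3, 2)
net.double()                     # casts in place and returns self
print(net.weight.dtype)          # torch.float64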
doc_2878
Set the default axis ticks and labels. Parameters unitUnitData object string unit information for value axisAxis axis for which information is being set Note axis is not used Returns AxisInfo Information to support default tick labeling
doc_2879
This method serves the 'HEAD' request type: it sends the headers it would send for the equivalent GET request. See the do_GET() method for a more complete explanation of the possible headers.
doc_2880
See Migration guide for more details. tf.compat.v1.strings.format tf.strings.format( template, inputs, placeholder='{}', summarize=3, name=None ) Formats a string template using a list of tensors, abbreviating tensors by only printing the first and last summarize elements of each dimension (recursively). If formatting only one tensor into a template, the tensor does not have to be wrapped in a list. Example: Formatting a single-tensor template: tensor = tf.range(5) tf.strings.format("tensor: {}, suffix", tensor) <tf.Tensor: shape=(), dtype=string, numpy=b'tensor: [0 1 2 3 4], suffix'> Formatting a multi-tensor template: tensor_a = tf.range(2) tensor_b = tf.range(1, 4, 2) tf.strings.format("a: {}, b: {}, suffix", (tensor_a, tensor_b)) <tf.Tensor: shape=(), dtype=string, numpy=b'a: [0 1], b: [1 3], suffix'> Args template A string template to format tensor values into. inputs A list of Tensor objects, or a single Tensor. The list of tensors to format into the template string. If a solitary tensor is passed in, the input tensor will automatically be wrapped as a list. placeholder An optional string. Defaults to {}. At each placeholder occurring in the template, a subsequent tensor will be inserted. summarize An optional int. Defaults to 3. When formatting the tensors, show the first and last summarize entries of each tensor dimension (recursively). If set to -1, all elements of the tensor will be shown. name A name for the operation (optional). Returns A scalar Tensor of type string. Raises ValueError if the number of placeholders does not match the number of inputs.
doc_2881
Add a rotation (in degrees) to this transform in place. Returns self, so this method can easily be chained with more calls to rotate(), rotate_deg(), translate() and scale().
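A small chaining sketch with Affine2D; the particular angle, offsets, and scale are arbitrary:

from matplotlib.transforms import Affine2D

# Each call mutates the transform in place and returns self, so calls chain.
t = Affine2D().rotate_deg(90).translate(1, 2).scale(2)
print(t.transform((1, 0)))   # [2. 6.]: rotate -> (0, 1), translate -> (1, 3), scale -> (2, 6)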
doc_2882
Set the vertices of the polygons. Parameters vertslist of array-like The sequence of polygons [verts0, verts1, ...] where each element verts_i defines the vertices of polygon i as a 2D array-like of shape (M, 2). closedbool, default: True Whether the polygon should be closed by adding a CLOSEPOLY connection at the end.
doc_2883
ABSOLUTE_URL_OVERRIDES = {
    'blogs.blog': lambda o: "/blogs/%s/" % o.slug,
    'news.story': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug),
}

The model name used in this setting should be all lowercase, regardless of the case of the actual model class name. ADMINS Default: [] (Empty list) A list of all the people who get code error notifications. When DEBUG=False and AdminEmailHandler is configured in LOGGING (done by default), Django emails these people the details of exceptions raised in the request/response cycle. Each item in the list should be a tuple of (Full name, email address). Example: [('John', 'john@example.com'), ('Mary', 'mary@example.com')] ALLOWED_HOSTS Default: [] (Empty list) A list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent HTTP Host header attacks, which are possible even under many seemingly-safe web server configurations. Values in this list can be fully qualified names (e.g. 'www.example.com'), in which case they will be matched against the request's Host header exactly (case-insensitive, not including port). A value beginning with a period can be used as a subdomain wildcard: '.example.com' will match example.com, www.example.com, and any other subdomain of example.com. A value of '*' will match anything; in this case you are responsible for providing your own validation of the Host header (perhaps in a middleware; if so this middleware must be listed first in MIDDLEWARE). Django also allows the fully qualified domain name (FQDN) of any entries. Some browsers include a trailing dot in the Host header which Django strips when performing host validation. If the Host header (or X-Forwarded-Host if USE_X_FORWARDED_HOST is enabled) does not match any value in this list, the django.http.HttpRequest.get_host() method will raise SuspiciousOperation. When DEBUG is True and ALLOWED_HOSTS is empty, the host is validated against ['.localhost', '127.0.0.1', '[::1]']. ALLOWED_HOSTS is also checked when running tests. This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection. APPEND_SLASH Default: True When set to True, if the request URL does not match any of the patterns in the URLconf and it doesn't end in a slash, an HTTP redirect is issued to the same URL with a slash appended. Note that the redirect may cause any data submitted in a POST request to be lost. The APPEND_SLASH setting is only used if CommonMiddleware is installed (see Middleware). See also PREPEND_WWW. CACHES Default:

{
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    }
}

A dictionary containing the settings for all caches to be used with Django. It is a nested dictionary whose contents map cache aliases to a dictionary containing the options for an individual cache. The CACHES setting must configure a default cache; any number of additional caches may also be specified. If you are using a cache backend other than the local memory cache, or you need to define multiple caches, other options will be required. The following cache options are available. BACKEND Default: '' (Empty string) The cache backend to use.
The built-in cache backends are:

'django.core.cache.backends.db.DatabaseCache'
'django.core.cache.backends.dummy.DummyCache'
'django.core.cache.backends.filebased.FileBasedCache'
'django.core.cache.backends.locmem.LocMemCache'
'django.core.cache.backends.memcached.PyMemcacheCache'
'django.core.cache.backends.memcached.PyLibMCCache'
'django.core.cache.backends.redis.RedisCache'

You can use a cache backend that doesn't ship with Django by setting BACKEND to a fully-qualified path of a cache backend class (i.e. mypackage.backends.whatever.WhateverCache). Changed in Django 3.2: The PyMemcacheCache backend was added. Changed in Django 4.0: The RedisCache backend was added. KEY_FUNCTION A string containing a dotted path to a function (or any callable) that defines how to compose a prefix, version and key into a final cache key. The default implementation is equivalent to the function:

def make_key(key, key_prefix, version):
    return ':'.join([key_prefix, str(version), key])

You may use any key function you want, as long as it has the same argument signature. See the cache documentation for more information. KEY_PREFIX Default: '' (Empty string) A string that will be automatically included (prepended by default) to all cache keys used by the Django server. See the cache documentation for more information. LOCATION Default: '' (Empty string) The location of the cache to use. This might be the directory for a file system cache, a host and port for a memcache server, or an identifying name for a local memory cache. e.g.:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/var/tmp/django_cache',
    }
}

OPTIONS Default: None Extra parameters to pass to the cache backend. Available parameters vary depending on your cache backend. Some information on available parameters can be found in the cache arguments documentation. For more information, consult your backend module's own documentation. TIMEOUT Default: 300 The number of seconds before a cache entry is considered stale. If the value of this setting is None, cache entries will not expire. A value of 0 causes keys to immediately expire (effectively "don't cache"). VERSION Default: 1 The default version number for cache keys generated by the Django server. See the cache documentation for more information. CACHE_MIDDLEWARE_ALIAS Default: 'default' The cache connection to use for the cache middleware. CACHE_MIDDLEWARE_KEY_PREFIX Default: '' (Empty string) A string which will be prefixed to the cache keys generated by the cache middleware. This prefix is combined with the KEY_PREFIX setting; it does not replace it. See Django's cache framework. CACHE_MIDDLEWARE_SECONDS Default: 600 The default number of seconds to cache a page for the cache middleware. See Django's cache framework. CSRF_COOKIE_AGE Default: 31449600 (approximately 1 year, in seconds) The age of CSRF cookies, in seconds. The reason for setting a long-lived expiration time is to avoid problems in the case of a user closing a browser or bookmarking a page and then loading that page from a browser cache. Without persistent cookies, the form submission would fail in this case. Some browsers (specifically Internet Explorer) can disallow the use of persistent cookies or can have the indexes to the cookie jar corrupted on disk, thereby causing CSRF protection checks to (sometimes intermittently) fail. Change this setting to None to use session-based CSRF cookies, which keep the cookies in-memory instead of on persistent storage.
CSRF_COOKIE_DOMAIN Default: None The domain to be used when setting the CSRF cookie. This can be useful for easily allowing cross-subdomain requests to be excluded from the normal cross site request forgery protection. It should be set to a string such as ".example.com" to allow a POST request from a form on one subdomain to be accepted by a view served from another subdomain. Please note that the presence of this setting does not imply that Django’s CSRF protection is safe from cross-subdomain attacks by default - please see the CSRF limitations section. CSRF_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the CSRF cookie. If this is set to True, client-side JavaScript will not be able to access the CSRF cookie. Designating the CSRF cookie as HttpOnly doesn’t offer any practical protection because CSRF is only to protect against cross-domain attacks. If an attacker can read the cookie via JavaScript, they’re already on the same domain as far as the browser knows, so they can do anything they like anyway. (XSS is a much bigger hole than CSRF.) Although the setting offers little practical benefit, it’s sometimes required by security auditors. If you enable this and need to send the value of the CSRF token with an AJAX request, your JavaScript must pull the value from a hidden CSRF token form input instead of from the cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. CSRF_COOKIE_NAME Default: 'csrftoken' The name of the cookie to use for the CSRF authentication token. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Cross Site Request Forgery protection. CSRF_COOKIE_PATH Default: '/' The path set on the CSRF cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own CSRF cookie. CSRF_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the CSRF cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. CSRF_COOKIE_SECURE Default: False Whether to use a secure cookie for the CSRF cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent with an HTTPS connection. CSRF_USE_SESSIONS Default: False Whether to store the CSRF token in the user’s session instead of in a cookie. It requires the use of django.contrib.sessions. Storing the CSRF token in a cookie (Django’s default) is safe, but storing it in the session is common practice in other web frameworks and therefore sometimes demanded by security auditors. Since the default error views require the CSRF token, SessionMiddleware must appear in MIDDLEWARE before any middleware that may raise an exception to trigger an error view (such as PermissionDenied) if you’re using CSRF_USE_SESSIONS. See Middleware ordering. CSRF_FAILURE_VIEW Default: 'django.views.csrf.csrf_failure' A dotted path to the view function to be used when an incoming request is rejected by the CSRF protection. The function should have this signature: def csrf_failure(request, reason=""): ... where reason is a short message (intended for developers or logging, not for end users) indicating the reason the request was rejected. It should return an HttpResponseForbidden. 
django.views.csrf.csrf_failure() accepts an additional template_name parameter that defaults to '403_csrf.html'. If a template with that name exists, it will be used to render the page. CSRF_HEADER_NAME Default: 'HTTP_X_CSRFTOKEN' The name of the request header used for CSRF authentication. As with other HTTP headers in request.META, the header name received from the server is normalized by converting all characters to uppercase, replacing any hyphens with underscores, and adding an 'HTTP_' prefix to the name. For example, if your client sends an 'X-XSRF-TOKEN' header, the setting should be 'HTTP_X_XSRF_TOKEN'. CSRF_TRUSTED_ORIGINS Default: [] (Empty list) A list of trusted origins for unsafe requests (e.g. POST). For requests that include the Origin header, Django's CSRF protection requires that header match the origin present in the Host header. For a secure unsafe request that doesn't include the Origin header, the request must have a Referer header that matches the origin present in the Host header. These checks prevent, for example, a POST request from subdomain.example.com from succeeding against api.example.com. If you need cross-origin unsafe requests, continuing the example, add 'https://subdomain.example.com' to this list (and/or http://... if requests originate from an insecure page). The setting also supports subdomains, so you could add 'https://*.example.com', for example, to allow access from all subdomains of example.com. Changed in Django 4.0: The values in older versions must only include the hostname (possibly with a leading dot) and not the scheme or an asterisk. Also, Origin header checking isn't performed in older versions. DATABASES Default: {} (Empty dictionary) A dictionary containing the settings for all databases to be used with Django. It is a nested dictionary whose contents map a database alias to a dictionary containing the options for an individual database. The DATABASES setting must configure a default database; any number of additional databases may also be specified. The simplest possible settings file is for a single-database setup using SQLite. This can be configured using the following:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'mydatabase',
    }
}

When connecting to other database backends, such as MariaDB, MySQL, Oracle, or PostgreSQL, additional connection parameters will be required. See the ENGINE setting below on how to specify other database types. This example is for PostgreSQL:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydatabase',
        'USER': 'mydatabaseuser',
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}

The following inner options, which may be required for more complex configurations, are available: ATOMIC_REQUESTS Default: False Set this to True to wrap each view in a transaction on this database. See Tying transactions to HTTP requests. AUTOCOMMIT Default: True Set this to False if you want to disable Django's transaction management and implement your own. ENGINE Default: '' (Empty string) The database backend to use. The built-in database backends are:

'django.db.backends.postgresql'
'django.db.backends.mysql'
'django.db.backends.sqlite3'
'django.db.backends.oracle'

You can use a database backend that doesn't ship with Django by setting ENGINE to a fully-qualified path (i.e. mypackage.backends.whatever). HOST Default: '' (Empty string) Which host to use when connecting to the database. An empty string means localhost. Not used with SQLite.
If this value starts with a forward slash ('/') and you’re using MySQL, MySQL will connect via a Unix socket to the specified socket. For example: "HOST": '/var/run/mysql' If you’re using MySQL and this value doesn’t start with a forward slash, then this value is assumed to be the host. If you’re using PostgreSQL, by default (empty HOST), the connection to the database is done through UNIX domain sockets (‘local’ lines in pg_hba.conf). If your UNIX domain socket is not in the standard location, use the same value of unix_socket_directory from postgresql.conf. If you want to connect through TCP sockets, set HOST to ‘localhost’ or ‘127.0.0.1’ (‘host’ lines in pg_hba.conf). On Windows, you should always define HOST, as UNIX domain sockets are not available. NAME Default: '' (Empty string) The name of the database to use. For SQLite, it’s the full path to the database file. When specifying the path, always use forward slashes, even on Windows (e.g. C:/homes/user/mysite/sqlite3.db). CONN_MAX_AGE Default: 0 The lifetime of a database connection, as an integer of seconds. Use 0 to close database connections at the end of each request — Django’s historical behavior — and None for unlimited persistent connections. OPTIONS Default: {} (Empty dictionary) Extra parameters to use when connecting to the database. Available parameters vary depending on your database backend. Some information on available parameters can be found in the Database Backends documentation. For more information, consult your backend module’s own documentation. PASSWORD Default: '' (Empty string) The password to use when connecting to the database. Not used with SQLite. PORT Default: '' (Empty string) The port to use when connecting to the database. An empty string means the default port. Not used with SQLite. TIME_ZONE Default: None A string representing the time zone for this database connection or None. This inner option of the DATABASES setting accepts the same values as the general TIME_ZONE setting. When USE_TZ is True and this option is set, reading datetimes from the database returns aware datetimes in this time zone instead of UTC. When USE_TZ is False, it is an error to set this option. If the database backend doesn’t support time zones (e.g. SQLite, MySQL, Oracle), Django reads and writes datetimes in local time according to this option if it is set and in UTC if it isn’t. Changing the connection time zone changes how datetimes are read from and written to the database. If Django manages the database and you don’t have a strong reason to do otherwise, you should leave this option unset. It’s best to store datetimes in UTC because it avoids ambiguous or nonexistent datetimes during daylight saving time changes. Also, receiving datetimes in UTC keeps datetime arithmetic simple — there’s no need to consider potential offset changes over a DST transition. If you’re connecting to a third-party database that stores datetimes in a local time rather than UTC, then you must set this option to the appropriate time zone. Likewise, if Django manages the database but third-party systems connect to the same database and expect to find datetimes in local time, then you must set this option. If the database backend supports time zones (e.g. PostgreSQL), the TIME_ZONE option is very rarely needed. It can be changed at any time; the database takes care of converting datetimes to the desired time zone. 
Setting the time zone of the database connection may be useful for running raw SQL queries involving date/time functions provided by the database, such as date_trunc, because their results depend on the time zone. However, this has a downside: receiving all datetimes in local time makes datetime arithmetic more tricky — you must account for possible offset changes over DST transitions. Consider converting to local time explicitly with AT TIME ZONE in raw SQL queries instead of setting the TIME_ZONE option. DISABLE_SERVER_SIDE_CURSORS Default: False Set this to True if you want to disable the use of server-side cursors with QuerySet.iterator(). Transaction pooling and server-side cursors describes the use case. This is a PostgreSQL-specific setting. USER Default: '' (Empty string) The username to use when connecting to the database. Not used with SQLite. TEST Default: {} (Empty dictionary) A dictionary of settings for test databases; for more details about the creation and use of test databases, see The test database. Here's an example with a test database configuration:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'USER': 'mydatabaseuser',
        'NAME': 'mydatabase',
        'TEST': {
            'NAME': 'mytestdatabase',
        },
    },
}

The following keys in the TEST dictionary are available: CHARSET Default: None The character set encoding used to create the test database. The value of this string is passed directly through to the database, so its format is backend-specific. Supported by the PostgreSQL (postgresql) and MySQL (mysql) backends. COLLATION Default: None The collation order to use when creating the test database. This value is passed directly to the backend, so its format is backend-specific. Only supported for the mysql backend (see the MySQL manual for details). DEPENDENCIES Default: ['default'], for all databases other than default, which has no dependencies. The creation-order dependencies of the database. See the documentation on controlling the creation order of test databases for details. MIGRATE Default: True When set to False, migrations won't run when creating the test database. This is similar to setting None as a value in MIGRATION_MODULES, but for all apps. MIRROR Default: None The alias of the database that this database should mirror during testing. This setting exists to allow for testing of primary/replica (referred to as master/slave by some databases) configurations of multiple databases. See the documentation on testing primary/replica configurations for details. NAME Default: None The name of the database to use when running the test suite. If the default value (None) is used with the SQLite database engine, the tests will use a memory resident database. For all other database engines the test database will use the name 'test_' + DATABASE_NAME. See The test database. SERIALIZE Boolean value to control whether or not the default test runner serializes the database into an in-memory JSON string before running tests (used to restore the database state between tests if you don't have transactions). You can set this to False to speed up creation time if you don't have any test classes with serialized_rollback=True. Deprecated since version 4.0: This setting is deprecated as it can be inferred from the databases with the serialized_rollback option enabled. TEMPLATE This is a PostgreSQL-specific setting. The name of a template (e.g. 'template0') from which to create the test database. CREATE_DB Default: True This is an Oracle-specific setting.
If it is set to False, the test tablespaces won’t be automatically created at the beginning of the tests or dropped at the end. CREATE_USER Default: True This is an Oracle-specific setting. If it is set to False, the test user won’t be automatically created at the beginning of the tests and dropped at the end. USER Default: None This is an Oracle-specific setting. The username to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will use 'test_' + USER. PASSWORD Default: None This is an Oracle-specific setting. The password to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will generate a random password. ORACLE_MANAGED_FILES Default: False This is an Oracle-specific setting. If set to True, Oracle Managed Files (OMF) tablespaces will be used. DATAFILE and DATAFILE_TMP will be ignored. TBLSPACE Default: None This is an Oracle-specific setting. The name of the tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER. TBLSPACE_TMP Default: None This is an Oracle-specific setting. The name of the temporary tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER + '_temp'. DATAFILE Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE. If not provided, Django will use TBLSPACE + '.dbf'. DATAFILE_TMP Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE_TMP. If not provided, Django will use TBLSPACE_TMP + '.dbf'. DATAFILE_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE is allowed to grow to. DATAFILE_TMP_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE_TMP is allowed to grow to. DATAFILE_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE. DATAFILE_TMP_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE_TMP. DATAFILE_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE is extended when more space is required. DATAFILE_TMP_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE_TMP is extended when more space is required. DATA_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size in bytes that a request body may be before a SuspiciousOperation (RequestDataTooBig) is raised. The check is done when accessing request.body or request.POST and is calculated against the total request size excluding any file upload data. You can set this to None to disable the check. Applications that are expected to receive unusually large form posts should tune this setting. The amount of request data is correlated to the amount of memory needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. See also FILE_UPLOAD_MAX_MEMORY_SIZE. DATA_UPLOAD_MAX_NUMBER_FIELDS Default: 1000 The maximum number of parameters that may be received via GET or POST before a SuspiciousOperation (TooManyFields) is raised. You can set this to None to disable the check. Applications that are expected to receive an unusually large number of form fields should tune this setting. 
The number of request parameters is correlated to the amount of time needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don't typically perform deep request inspection, it's not possible to perform a similar check at that level. DATABASE_ROUTERS Default: [] (Empty list) The list of routers that will be used to determine which database to use when performing a database query. See the documentation on automatic database routing in multi database configurations. DATE_FORMAT Default: 'N j, Y' (e.g. Feb. 4, 2003) The default formatting to use for displaying date fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATETIME_FORMAT, TIME_FORMAT and SHORT_DATE_FORMAT. DATE_INPUT_FORMATS Default:

[
    '%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y',  # '2006-10-25', '10/25/2006', '10/25/06'
    '%b %d %Y', '%b %d, %Y',             # 'Oct 25 2006', 'Oct 25, 2006'
    '%d %b %Y', '%d %b, %Y',             # '25 Oct 2006', '25 Oct, 2006'
    '%B %d %Y', '%B %d, %Y',             # 'October 25 2006', 'October 25, 2006'
    '%d %B %Y', '%d %B, %Y',             # '25 October 2006', '25 October, 2006'
]

A list of formats that will be accepted when inputting data on a date field. Formats will be tried in order, using the first valid one. Note that these format strings use Python's datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATETIME_INPUT_FORMATS and TIME_INPUT_FORMATS. DATETIME_FORMAT Default: 'N j, Y, P' (e.g. Feb. 4, 2003, 4 p.m.) The default formatting to use for displaying datetime fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT, TIME_FORMAT and SHORT_DATETIME_FORMAT. DATETIME_INPUT_FORMATS Default:

[
    '%Y-%m-%d %H:%M:%S',     # '2006-10-25 14:30:59'
    '%Y-%m-%d %H:%M:%S.%f',  # '2006-10-25 14:30:59.000200'
    '%Y-%m-%d %H:%M',        # '2006-10-25 14:30'
    '%m/%d/%Y %H:%M:%S',     # '10/25/2006 14:30:59'
    '%m/%d/%Y %H:%M:%S.%f',  # '10/25/2006 14:30:59.000200'
    '%m/%d/%Y %H:%M',        # '10/25/2006 14:30'
    '%m/%d/%y %H:%M:%S',     # '10/25/06 14:30:59'
    '%m/%d/%y %H:%M:%S.%f',  # '10/25/06 14:30:59.000200'
    '%m/%d/%y %H:%M',        # '10/25/06 14:30'
]

A list of formats that will be accepted when inputting data on a datetime field. Formats will be tried in order, using the first valid one. Note that these format strings use Python's datetime module syntax, not the format strings from the date template filter. Date-only formats are not included as datetime fields will automatically try DATE_INPUT_FORMATS as a last resort. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and TIME_INPUT_FORMATS. DEBUG Default: False A boolean that turns on/off debug mode. Never deploy a site into production with DEBUG turned on. One of the main features of debug mode is the display of detailed error pages. If your app raises an exception when DEBUG is True, Django will display a detailed traceback, including a lot of metadata about your environment, such as all the currently defined Django settings (from settings.py).
As a security measure, Django will not include settings that might be sensitive, such as SECRET_KEY. Specifically, it will exclude any setting whose name includes any of the following: 'API' 'KEY' 'PASS' 'SECRET' 'SIGNATURE' 'TOKEN' Note that these are partial matches. 'PASS' will also match PASSWORD, just as 'TOKEN' will also match TOKENIZED and so on. Still, note that there are always going to be sections of your debug output that are inappropriate for public consumption. File paths, configuration options and the like all give attackers extra information about your server. It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server. Finally, if DEBUG is False, you also need to properly set the ALLOWED_HOSTS setting. Failing to do so will result in all requests being returned as “Bad Request (400)”. Note The default settings.py file created by django-admin startproject sets DEBUG = True for convenience. DEBUG_PROPAGATE_EXCEPTIONS Default: False If set to True, Django’s exception handling of view functions (handler500, or the debug view if DEBUG is True) and logging of 500 responses (django.request) is skipped and exceptions propagate upward. This can be useful for some test setups. It shouldn’t be used on a live site unless you want your web server (instead of Django) to generate “Internal Server Error” responses. In that case, make sure your server doesn’t show the stack trace or other sensitive information in the response. DECIMAL_SEPARATOR Default: '.' (Dot) Default decimal separator used when formatting decimal numbers. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. DEFAULT_AUTO_FIELD New in Django 3.2. Default: 'django.db.models.AutoField' Default primary key field type to use for models that don’t have a field with primary_key=True. Migrating auto-created through tables The value of DEFAULT_AUTO_FIELD will be respected when creating new auto-created through tables for many-to-many relationships. Unfortunately, the primary keys of existing auto-created through tables cannot currently be updated by the migrations framework. This means that if you switch the value of DEFAULT_AUTO_FIELD and then generate migrations, the primary keys of the related models will be updated, as will the foreign keys from the through table, but the primary key of the auto-created through table will not be migrated. In order to address this, you should add a RunSQL operation to your migrations to perform the required ALTER TABLE step. You can check the existing table name through sqlmigrate, dbshell, or with the field’s remote_field.through._meta.db_table property. Explicitly defined through models are already handled by the migrations system. Allowing automatic migrations for the primary key of existing auto-created through tables may be implemented at a later date. DEFAULT_CHARSET Default: 'utf-8' Default charset to use for all HttpResponse objects, if a MIME type isn’t manually specified. Used when constructing the Content-Type header. DEFAULT_EXCEPTION_REPORTER Default: 'django.views.debug.ExceptionReporter' Default exception reporter class to be used if none has been assigned to the HttpRequest instance yet. See Custom error reports. 
DEFAULT_EXCEPTION_REPORTER_FILTER Default: 'django.views.debug.SafeExceptionReporterFilter' Default exception reporter filter class to be used if none has been assigned to the HttpRequest instance yet. See Filtering error reports. DEFAULT_FILE_STORAGE Default: 'django.core.files.storage.FileSystemStorage' Default file storage class to be used for any file-related operations that don’t specify a particular storage system. See Managing files. DEFAULT_FROM_EMAIL Default: 'webmaster@localhost' Default email address to use for various automated correspondence from the site manager(s). This doesn’t include error messages sent to ADMINS and MANAGERS; for that, see SERVER_EMAIL. DEFAULT_INDEX_TABLESPACE Default: '' (Empty string) Default tablespace to use for indexes on fields that don’t specify one, if the backend supports it (see Tablespaces). DEFAULT_TABLESPACE Default: '' (Empty string) Default tablespace to use for models that don’t specify one, if the backend supports it (see Tablespaces). DISALLOWED_USER_AGENTS Default: [] (Empty list) List of compiled regular expression objects representing User-Agent strings that are not allowed to visit any page, systemwide. Use this for bots/crawlers. This is only used if CommonMiddleware is installed (see Middleware). EMAIL_BACKEND Default: 'django.core.mail.backends.smtp.EmailBackend' The backend to use for sending emails. For the list of available backends see Sending email. EMAIL_FILE_PATH Default: Not defined The directory used by the file email backend to store output files. EMAIL_HOST Default: 'localhost' The host to use for sending email. See also EMAIL_PORT. EMAIL_HOST_PASSWORD Default: '' (Empty string) Password to use for the SMTP server defined in EMAIL_HOST. This setting is used in conjunction with EMAIL_HOST_USER when authenticating to the SMTP server. If either of these settings is empty, Django won’t attempt authentication. See also EMAIL_HOST_USER. EMAIL_HOST_USER Default: '' (Empty string) Username to use for the SMTP server defined in EMAIL_HOST. If empty, Django won’t attempt authentication. See also EMAIL_HOST_PASSWORD. EMAIL_PORT Default: 25 Port to use for the SMTP server defined in EMAIL_HOST. EMAIL_SUBJECT_PREFIX Default: '[Django] ' Subject-line prefix for email messages sent with django.core.mail.mail_admins or django.core.mail.mail_managers. You’ll probably want to include the trailing space. EMAIL_USE_LOCALTIME Default: False Whether to send the SMTP Date header of email messages in the local time zone (True) or in UTC (False). EMAIL_USE_TLS Default: False Whether to use a TLS (secure) connection when talking to the SMTP server. This is used for explicit TLS connections, generally on port 587. If you are experiencing hanging connections, see the implicit TLS setting EMAIL_USE_SSL. EMAIL_USE_SSL Default: False Whether to use an implicit TLS (secure) connection when talking to the SMTP server. In most email documentation this type of TLS connection is referred to as SSL. It is generally used on port 465. If you are experiencing problems, see the explicit TLS setting EMAIL_USE_TLS. Note that EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True. EMAIL_SSL_CERTFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted certificate chain file to use for the SSL connection. 
EMAIL_SSL_KEYFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted private key file to use for the SSL connection. Note that setting EMAIL_SSL_CERTFILE and EMAIL_SSL_KEYFILE doesn't result in any certificate checking. They're passed to the underlying SSL connection. Please refer to the documentation of Python's ssl.wrap_socket() function for details on how the certificate chain file and private key file are handled. EMAIL_TIMEOUT Default: None Specifies a timeout in seconds for blocking operations like the connection attempt. FILE_UPLOAD_HANDLERS Default:

[
    'django.core.files.uploadhandler.MemoryFileUploadHandler',
    'django.core.files.uploadhandler.TemporaryFileUploadHandler',
]

A list of handlers to use for uploading. Changing this setting allows complete customization – even replacement – of Django's upload process. See Managing files for details. FILE_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size (in bytes) that an upload will be before it gets streamed to the file system. See Managing files for details. See also DATA_UPLOAD_MAX_MEMORY_SIZE. FILE_UPLOAD_DIRECTORY_PERMISSIONS Default: None The numeric mode to apply to directories created in the process of uploading files. This setting also determines the default permissions for collected static directories when using the collectstatic management command. See collectstatic for details on overriding it. This value mirrors the functionality and caveats of the FILE_UPLOAD_PERMISSIONS setting. FILE_UPLOAD_PERMISSIONS Default: 0o644 The numeric mode (i.e. 0o644) to set newly uploaded files to. For more information about what these modes mean, see the documentation for os.chmod(). If None, you'll get operating-system dependent behavior. On most platforms, temporary files will have a mode of 0o600, and files saved from memory will be saved using the system's standard umask. For security reasons, these permissions aren't applied to the temporary files that are stored in FILE_UPLOAD_TEMP_DIR. This setting also determines the default permissions for collected static files when using the collectstatic management command. See collectstatic for details on overriding it. Warning Always prefix the mode with 0o. If you're not familiar with file modes, please note that the 0o prefix is very important: it indicates an octal number, which is the way that modes must be specified. If you try to use 644, you'll get totally incorrect behavior. FILE_UPLOAD_TEMP_DIR Default: None The directory to store data to (typically files larger than FILE_UPLOAD_MAX_MEMORY_SIZE) temporarily while uploading files. If None, Django will use the standard temporary directory for the operating system. For example, this will default to /tmp on *nix-style operating systems. See Managing files for details. FIRST_DAY_OF_WEEK Default: 0 (Sunday) A number representing the first day of the week. This is especially useful when displaying a calendar. This value is only used when not using format internationalization, or when a format cannot be found for the current locale. The value must be an integer from 0 to 6, where 0 means Sunday, 1 means Monday and so on. FIXTURE_DIRS Default: [] (Empty list) List of directories searched for fixture files, in addition to the fixtures directory of each application, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. See Providing data with fixtures and Fixture loading.
FORCE_SCRIPT_NAME Default: None If not None, this will be used as the value of the SCRIPT_NAME environment variable in any HTTP request. This setting can be used to override the server-provided value of SCRIPT_NAME, which may be a rewritten version of the preferred value or not supplied at all. It is also used by django.setup() to set the URL resolver script prefix outside of the request/response cycle (e.g. in management commands and standalone scripts) to generate correct URLs when SCRIPT_NAME is not /. FORM_RENDERER Default: 'django.forms.renderers.DjangoTemplates' The class that renders forms and form widgets. It must implement the low-level render API. Included form renderers are 'django.forms.renderers.DjangoTemplates' and 'django.forms.renderers.Jinja2'. FORMAT_MODULE_PATH Default: None A full Python path to a Python package that contains custom format definitions for project locales. If not None, Django will check for a formats.py file, under the directory named after the current locale, and will use the formats defined in this file. For example, if FORMAT_MODULE_PATH is set to mysite.formats, and current language is en (English), Django will expect a directory tree like:

mysite/
    formats/
        __init__.py
        en/
            __init__.py
            formats.py

You can also set this setting to a list of Python paths, for example:

FORMAT_MODULE_PATH = [
    'mysite.formats',
    'some_app.formats',
]

When Django searches for a certain format, it will go through all given Python paths until it finds a module that actually defines the given format. This means that formats defined in packages farther up in the list will take precedence over the same formats in packages farther down. Available formats are: DATE_FORMAT, DATE_INPUT_FORMATS, DATETIME_FORMAT, DATETIME_INPUT_FORMATS, DECIMAL_SEPARATOR, FIRST_DAY_OF_WEEK, MONTH_DAY_FORMAT, NUMBER_GROUPING, SHORT_DATE_FORMAT, SHORT_DATETIME_FORMAT, THOUSAND_SEPARATOR, TIME_FORMAT, TIME_INPUT_FORMATS and YEAR_MONTH_FORMAT. IGNORABLE_404_URLS Default: [] (Empty list) List of compiled regular expression objects describing URLs that should be ignored when reporting HTTP 404 errors via email (see How to manage error reporting). Regular expressions are matched against requests' full paths (including the query string, if any). Use this if your site does not provide a commonly requested file such as favicon.ico or robots.txt. This is only used if BrokenLinkEmailsMiddleware is enabled (see Middleware). INSTALLED_APPS Default: [] (Empty list) A list of strings designating all applications that are enabled in this Django installation. Each string should be a dotted Python path to: an application configuration class (preferred), or a package containing an application. Learn more about application configurations. Use the application registry for introspection Your code should never access INSTALLED_APPS directly. Use django.apps.apps instead. Application names and labels must be unique in INSTALLED_APPS Application names — the dotted Python path to the application package — must be unique. There is no way to include the same application twice, short of duplicating its code under another name. Application labels — by default the final part of the name — must be unique too. For example, you can't include both django.contrib.auth and myproject.auth. However, you can relabel an application with a custom configuration that defines a different label. These rules apply regardless of whether INSTALLED_APPS references application configuration classes or application packages.
When several applications provide different versions of the same resource (template, static file, management command, translation), the application listed first in INSTALLED_APPS has precedence. INTERNAL_IPS Default: [] (Empty list) A list of IP addresses, as strings, that: Allow the debug() context processor to add some variables to the template context. Can use the admindocs bookmarklets even if not logged in as a staff user. Are marked as “internal” (as opposed to “EXTERNAL”) in AdminEmailHandler emails. LANGUAGE_CODE Default: 'en-us' A string representing the language code for this installation. This should be in standard language ID format. For example, U.S. English is "en-us". See also the list of language identifiers and Internationalization and localization. USE_I18N must be active for this setting to have any effect. It serves two purposes: If the locale middleware isn’t in use, it decides which translation is served to all users. If the locale middleware is active, it provides a fallback language in case the user’s preferred language can’t be determined or is not supported by the website. It also provides the fallback translation when a translation for a given literal doesn’t exist for the user’s preferred language. See How Django discovers language preference for more details. LANGUAGE_COOKIE_AGE Default: None (expires at browser close) The age of the language cookie, in seconds. LANGUAGE_COOKIE_DOMAIN Default: None The domain to use for the language cookie. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. Be cautious when updating this setting on a production site. If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies that have the old domain will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting) and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the language cookie. If this is set to True, client-side JavaScript will not be able to access the language cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. LANGUAGE_COOKIE_NAME Default: 'django_language' The name of the cookie to use for the language cookie. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Internationalization and localization. LANGUAGE_COOKIE_PATH Default: '/' The path set on the language cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths and each instance will only see its own language cookie. Be cautious when updating this setting on a production site. If you update this setting to use a deeper path than it previously used, existing user cookies that have the old path will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. 
The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting), and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_SAMESITE Default: None The value of the SameSite flag on the language cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. LANGUAGE_COOKIE_SECURE Default: False Whether to use a secure cookie for the language cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. LANGUAGES Default: A list of all available languages. This list is continually growing and including a copy here would inevitably become rapidly out of date. You can see the current list of translated languages by looking in django/conf/global_settings.py. The list is a list of two-tuples in the format (language code, language name) – for example, ('ja', 'Japanese'). This specifies which languages are available for language selection. See Internationalization and localization. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, you can mark the language names as translation strings using the gettext_lazy() function. Here’s a sample settings file: from django.utils.translation import gettext_lazy as _ LANGUAGES = [ ('de', _('German')), ('en', _('English')), ] LANGUAGES_BIDI Default: A list of all language codes that are written right-to-left. You can see the current list of these languages by looking in django/conf/global_settings.py. The list contains language codes for languages that are written right-to-left. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, the list of bidirectional languages may contain language codes which are not enabled on a given site. LOCALE_PATHS Default: [] (Empty list) A list of directories where Django looks for translation files. See How Django discovers translations. Example: LOCALE_PATHS = [ '/home/www/project/common_files/locale', '/var/local/translations/locale', ] Django will look within each of these paths for the <locale_code>/LC_MESSAGES directories containing the actual translation files. LOGGING Default: A logging configuration dictionary. A data structure containing configuration information. The contents of this data structure will be passed as the argument to the configuration method described in LOGGING_CONFIG. Among other things, the default logging configuration passes HTTP 500 server errors to an email log handler when DEBUG is False. See also Configuring logging. You can see the default logging configuration by looking in django/utils/log.py. LOGGING_CONFIG Default: 'logging.config.dictConfig' A path to a callable that will be used to configure logging in the Django project. Points at an instance of Python’s dictConfig configuration method by default. If you set LOGGING_CONFIG to None, the logging configuration process will be skipped. MANAGERS Default: [] (Empty list) A list in the same format as ADMINS that specifies who should get broken link notifications when BrokenLinkEmailsMiddleware is enabled.
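MANAGERS shares its format with ADMINS: a list of (full name, email address) tuples. A sketch with placeholder addresses:

ADMINS = [('Mary', 'mary@example.com'), ('John', 'john@example.com')]
MANAGERS = ADMINS   # send broken link notifications to the same people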
MEDIA_ROOT Default: '' (Empty string) Absolute filesystem path to the directory that will hold user-uploaded files. Example: "/var/www/example.com/media/" See also MEDIA_URL. Warning MEDIA_ROOT and STATIC_ROOT must have different values. Before STATIC_ROOT was introduced, it was common to rely or fall back on MEDIA_ROOT to also serve static files; however, since this can have serious security implications, there is a validation check to prevent it. MEDIA_URL Default: '' (Empty string) URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set to a non-empty value. You will need to configure these files to be served in both development and production environments. If you want to use {{ MEDIA_URL }} in your templates, add 'django.template.context_processors.media' in the 'context_processors' option of TEMPLATES. Example: "http://media.example.com/" Warning There are security risks if you are accepting uploaded content from untrusted users! See the security guide’s topic on User-uploaded content for mitigation details. Warning MEDIA_URL and STATIC_URL must have different values. See MEDIA_ROOT for more details. Note If MEDIA_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. MIDDLEWARE Default: None A list of middleware to use. See Middleware. MIGRATION_MODULES Default: {} (Empty dictionary) A dictionary specifying the package where migration modules can be found on a per-app basis. The default value of this setting is an empty dictionary, but the default package name for migration modules is migrations. Example: {'blog': 'blog.db_migrations'} In this case, migrations pertaining to the blog app will be contained in the blog.db_migrations package. If you provide the app_label argument, makemigrations will automatically create the package if it doesn’t already exist. When you supply None as a value for an app, Django will consider the app as an app without migrations regardless of an existing migrations submodule. This can be used, for example, in a test settings file to skip migrations while testing (tables will still be created for the apps’ models). To disable migrations for all apps during tests, you can set the MIGRATE setting to False instead. If MIGRATION_MODULES is used in your general project settings, remember to use the migrate --run-syncdb option if you want to create tables for the app. MONTH_DAY_FORMAT Default: 'F j' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the month and day are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given day displays the day and month. Different locales have different formats. For example, U.S. English would say “January 1,” whereas Spanish might say “1 Enero.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and YEAR_MONTH_FORMAT. NUMBER_GROUPING Default: 0 Number of digits grouped together on the integer part of a number. Common use is to display a thousand separator. If this setting is 0, then no grouping will be applied to the number.
If this setting is greater than 0, then THOUSAND_SEPARATOR will be used as the separator between those groups. Some locales use non-uniform digit grouping, e.g. 10,00,00,000 in en_IN. For this case, you can provide a sequence with the number of digit group sizes to be applied. The first number defines the size of the group preceding the decimal delimiter, and each number that follows defines the size of preceding groups. If the sequence is terminated with -1, no further grouping is performed. If the sequence terminates with a 0, the last group size is used for the remainder of the number. Example tuple for en_IN: NUMBER_GROUPING = (3, 2, 0) Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also DECIMAL_SEPARATOR, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. PREPEND_WWW Default: False Whether to prepend the “www.” subdomain to URLs that don’t have it. This is only used if CommonMiddleware is installed (see Middleware). See also APPEND_SLASH. ROOT_URLCONF Default: Not defined A string representing the full Python import path to your root URLconf, for example "mydjangoapps.urls". Can be overridden on a per-request basis by setting the attribute urlconf on the incoming HttpRequest object. See How Django processes a request for details. SECRET_KEY Default: '' (Empty string) A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value. django-admin startproject automatically adds a randomly-generated SECRET_KEY to each new project. Uses of the key shouldn’t assume that it’s text or bytes. Every use should go through force_str() or force_bytes() to convert it to the desired type. Django will refuse to start if SECRET_KEY is not set. Warning Keep this value secret. Running Django with a known SECRET_KEY defeats many of Django’s security protections, and can lead to privilege escalation and remote code execution vulnerabilities. The secret key is used for: All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash(). All messages if you are using CookieStorage or FallbackStorage. All PasswordResetView tokens. Any usage of cryptographic signing, unless a different key is provided. If you rotate your secret key, all of the above will be invalidated. Secret keys are not used for passwords of users and key rotation will not affect them. Note The default settings.py file created by django-admin startproject creates a unique SECRET_KEY for convenience. SECURE_CONTENT_TYPE_NOSNIFF Default: True If True, the SecurityMiddleware sets the X-Content-Type-Options: nosniff header on all responses that do not already have it. SECURE_CROSS_ORIGIN_OPENER_POLICY New in Django 4.0. Default: 'same-origin' Unless set to None, the SecurityMiddleware sets the Cross-Origin Opener Policy header on all responses that do not already have it to the value provided. SECURE_HSTS_INCLUDE_SUBDOMAINS Default: False If True, the SecurityMiddleware adds the includeSubDomains directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. Warning Setting this incorrectly can irreversibly (for the value of SECURE_HSTS_SECONDS) break your site. Read the HTTP Strict Transport Security documentation first. 
SECURE_HSTS_PRELOAD Default: False If True, the SecurityMiddleware adds the preload directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. SECURE_HSTS_SECONDS Default: 0 If set to a non-zero integer value, the SecurityMiddleware sets the HTTP Strict Transport Security header on all responses that do not already have it. Warning Setting this incorrectly can irreversibly (for some time) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_PROXY_SSL_HEADER Default: None A tuple representing an HTTP header/value combination that signifies a request is secure. This controls the behavior of the request object’s is_secure() method. By default, is_secure() determines if a request is secure by confirming that a requested URL uses https://. This method is important for Django’s CSRF protection, and it may be used by your own code or third-party apps. If your Django app is behind a proxy, though, the proxy may be “swallowing” whether the original request uses HTTPS or not. If there is a non-HTTPS connection between the proxy and Django then is_secure() would always return False – even for requests that were made via HTTPS by the end user. In contrast, if there is an HTTPS connection between the proxy and Django then is_secure() would always return True – even for requests that were made originally via HTTP. In this situation, configure your proxy to set a custom HTTP header that tells Django whether the request came in via HTTPS, and set SECURE_PROXY_SSL_HEADER so that Django knows what header to look for. Set a tuple with two elements – the name of the header to look for and the required value. For example: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') This tells Django to trust the X-Forwarded-Proto header that comes from our proxy, and any time its value is 'https', then the request is guaranteed to be secure (i.e., it originally came in via HTTPS). You should only set this setting if you control your proxy or have some other guarantee that it sets/strips this header appropriately. Note that the header needs to be in the format as used by request.META – all caps and likely starting with HTTP_. (Remember, Django automatically adds 'HTTP_' to the start of x-header names before making the header available in request.META.) Warning Modifying this setting can compromise your site’s security. Ensure you fully understand your setup before changing it. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your Django app is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to Django, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None and find another way of determining HTTPS, perhaps via custom middleware. SECURE_REDIRECT_EXEMPT Default: [] (Empty list) If a URL path matches a regular expression in this list, the request will not be redirected to HTTPS. The SecurityMiddleware strips leading slashes from URL paths, so patterns shouldn’t include them, e.g. SECURE_REDIRECT_EXEMPT = [r'^no-ssl/$', …]. If SECURE_SSL_REDIRECT is False, this setting has no effect. 
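Pulling the HTTPS-related settings together, a hedged sketch of a deployment behind a TLS-terminating proxy; the header name and the small initial HSTS timeout are illustrative and must be verified against your own proxy before use:

SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')  # only if your proxy sets/strips it
SECURE_SSL_REDIRECT = True
SECURE_HSTS_SECONDS = 3600                 # start small; increase once HTTPS is verified
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = False                # enable only when submitting to the preload list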
SECURE_REFERRER_POLICY Default: 'same-origin' If configured, the SecurityMiddleware sets the Referrer Policy header on all responses that do not already have it to the value provided. SECURE_SSL_HOST Default: None If a string (e.g. secure.example.com), all SSL redirects will be directed to this host rather than the originally-requested host (e.g. www.example.com). If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_SSL_REDIRECT Default: False If True, the SecurityMiddleware redirects all non-HTTPS requests to HTTPS (except for those URLs matching a regular expression listed in SECURE_REDIRECT_EXEMPT). Note If turning this to True causes infinite redirects, it probably means your site is running behind a proxy and can’t tell which requests are secure and which are not. Your proxy likely sets a header to indicate secure requests; you can correct the problem by finding out what that header is and configuring the SECURE_PROXY_SSL_HEADER setting accordingly. SERIALIZATION_MODULES Default: Not defined A dictionary of modules containing serializer definitions (provided as strings), keyed by a string identifier for that serialization type. For example, to define a YAML serializer, use: SERIALIZATION_MODULES = {'yaml': 'path.to.yaml_serializer'} SERVER_EMAIL Default: 'root@localhost' The email address that error messages come from, such as those sent to ADMINS and MANAGERS. Why are my emails sent from a different address? This address is used only for error messages. It is not the address that regular email messages sent with send_mail() come from; for that, see DEFAULT_FROM_EMAIL. SHORT_DATE_FORMAT Default: 'm/d/Y' (e.g. 12/31/2003) An available formatting that can be used for displaying date fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATETIME_FORMAT. SHORT_DATETIME_FORMAT Default: 'm/d/Y P' (e.g. 12/31/2003 4 p.m.) An available formatting that can be used for displaying datetime fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATE_FORMAT. SIGNING_BACKEND Default: 'django.core.signing.TimestampSigner' The backend used for signing cookies and other data. See also the Cryptographic signing documentation. SILENCED_SYSTEM_CHECKS Default: [] (Empty list) A list of identifiers of messages generated by the system check framework (i.e. ["models.W001"]) that you wish to permanently acknowledge and ignore. Silenced checks will not be output to the console. See also the System check framework documentation. TEMPLATES Default: [] (Empty list) A list containing the settings for all template engines to be used with Django. Each item of the list is a dictionary containing the options for an individual engine. Here’s a setup that tells the Django template engine to load templates from the templates subdirectory inside each installed application: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'APP_DIRS': True, }, ] The following options are available for all backends. BACKEND Default: Not defined The template backend to use. 
The built-in template backends are: 'django.template.backends.django.DjangoTemplates' 'django.template.backends.jinja2.Jinja2' You can use a template backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path (i.e. 'mypackage.whatever.Backend'). NAME Default: see below The alias for this particular template engine. It’s an identifier that allows selecting an engine for rendering. Aliases must be unique across all configured template engines. It defaults to the name of the module defining the engine class, i.e. the next to last piece of BACKEND, when it isn’t provided. For example if the backend is 'mypackage.whatever.Backend' then its default name is 'whatever'. DIRS Default: [] (Empty list) Directories where the engine should look for template source files, in search order. APP_DIRS Default: False Whether the engine should look for template source files inside installed applications. Note The default settings.py file created by django-admin startproject sets 'APP_DIRS': True. OPTIONS Default: {} (Empty dict) Extra parameters to pass to the template backend. Available parameters vary depending on the template backend. See DjangoTemplates and Jinja2 for the options of the built-in backends. TEST_RUNNER Default: 'django.test.runner.DiscoverRunner' The name of the class to use for starting the test suite. See Using different testing frameworks. TEST_NON_SERIALIZED_APPS Default: [] (Empty list) In order to restore the database state between tests for TransactionTestCases and database backends without transactions, Django will serialize the contents of all apps when it starts the test run so it can then reload from that copy before running tests that need it. This slows down the startup time of the test runner; if you have apps that you know don’t need this feature, you can add their full names in here (e.g. 'django.contrib.contenttypes') to exclude them from this serialization process. THOUSAND_SEPARATOR Default: ',' (Comma) Default thousand separator used when formatting numbers. This setting is used only when USE_THOUSAND_SEPARATOR is True and NUMBER_GROUPING is greater than 0. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, DECIMAL_SEPARATOR and USE_THOUSAND_SEPARATOR. TIME_FORMAT Default: 'P' (e.g. 4 p.m.) The default formatting to use for displaying time fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT and DATETIME_FORMAT. TIME_INPUT_FORMATS Default: [ '%H:%M:%S', # '14:30:59' '%H:%M:%S.%f', # '14:30:59.000200' '%H:%M', # '14:30' ] A list of formats that will be accepted when inputting data on a time field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and DATETIME_INPUT_FORMATS. TIME_ZONE Default: 'America/Chicago' A string representing the time zone for this installation. See the list of time zones. Note Since Django was first released with the TIME_ZONE set to 'America/Chicago', the global setting (used if nothing is defined in your project’s settings.py) remains 'America/Chicago' for backwards compatibility. New project templates default to 'UTC'. 
Note that this isn’t necessarily the time zone of the server. For example, one server may serve multiple Django-powered sites, each with a separate time zone setting. When USE_TZ is False, this is the time zone in which Django will store all datetimes. When USE_TZ is True, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms. On Unix environments (where time.tzset() is implemented), Django sets the os.environ['TZ'] variable to the time zone you specify in the TIME_ZONE setting. Thus, all your views and models will automatically operate in this time zone. However, Django won’t set the TZ environment variable if you’re using the manual configuration option as described in manually configuring settings. If Django doesn’t set the TZ environment variable, it’s up to you to ensure your processes are running in the correct environment. Note Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, TIME_ZONE must be set to match the system time zone. USE_DEPRECATED_PYTZ New in Django 4.0. Default: False A boolean that specifies whether to use pytz, rather than zoneinfo, as the default time zone implementation. Deprecated since version 4.0: This transitional setting is deprecated. Support for using pytz will be removed in Django 5.0. USE_I18N Default: True A boolean that specifies whether Django’s translation system should be enabled. This provides a way to turn it off, for performance. If this is set to False, Django will make some optimizations so as not to load the translation machinery. See also LANGUAGE_CODE, USE_L10N and USE_TZ. Note The default settings.py file created by django-admin startproject includes USE_I18N = True for convenience. USE_L10N Default: True A boolean that specifies if localized formatting of data will be enabled by default or not. If this is set to True, Django will, for example, display numbers and dates using the format of the current locale. See also LANGUAGE_CODE, USE_I18N and USE_TZ. Changed in Django 4.0: In older versions, the default value was False. Deprecated since version 4.0: This setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example, Django will display numbers and dates using the format of the current locale. USE_THOUSAND_SEPARATOR Default: False A boolean that specifies whether to display numbers using a thousand separator. When set to True and USE_L10N is also True, Django will format numbers using the NUMBER_GROUPING and THOUSAND_SEPARATOR settings. These settings may also be dictated by the locale, which takes precedence. See also DECIMAL_SEPARATOR, NUMBER_GROUPING and THOUSAND_SEPARATOR. USE_TZ Default: False Note In Django 5.0, the default value will change from False to True. A boolean that specifies if datetimes will be timezone-aware by default or not. If this is set to True, Django will use timezone-aware datetimes internally. When USE_TZ is False, Django will use naive datetimes in local time, except when parsing ISO 8601 formatted strings, where timezone information will always be retained if present. See also TIME_ZONE, USE_I18N and USE_L10N. Note The default settings.py file created by django-admin startproject includes USE_TZ = True for convenience. USE_X_FORWARDED_HOST Default: False A boolean that specifies whether to use the X-Forwarded-Host header in preference to the Host header. This should only be enabled if a proxy which sets this header is in use.
This setting takes priority over USE_X_FORWARDED_PORT. Per RFC 7239#section-5.3, the X-Forwarded-Host header can include the port number, in which case you shouldn’t use USE_X_FORWARDED_PORT. USE_X_FORWARDED_PORT Default: False A boolean that specifies whether to use the X-Forwarded-Port header in preference to the SERVER_PORT META variable. This should only be enabled if a proxy which sets this header is in use. USE_X_FORWARDED_HOST takes priority over this setting. WSGI_APPLICATION Default: None The full Python path of the WSGI application object that Django’s built-in servers (e.g. runserver) will use. The django-admin startproject management command will create a standard wsgi.py file with an application callable in it, and point this setting to that application. If not set, the return value of django.core.wsgi.get_wsgi_application() will be used. In this case, the behavior of runserver will be identical to previous Django versions. YEAR_MONTH_FORMAT Default: 'F Y' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the year and month are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given month displays the month and the year. Different locales have different formats. For example, U.S. English would say “January 2006,” whereas another locale might say “2006/January.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and MONTH_DAY_FORMAT. X_FRAME_OPTIONS Default: 'DENY' The default value for the X-Frame-Options header used by XFrameOptionsMiddleware. See the clickjacking protection documentation. Auth Settings for django.contrib.auth. AUTHENTICATION_BACKENDS Default: ['django.contrib.auth.backends.ModelBackend'] A list of authentication backend classes (as strings) to use when attempting to authenticate a user. See the authentication backends documentation for details. AUTH_USER_MODEL Default: 'auth.User' The model to use to represent a User. See Substituting a custom User model. Warning You cannot change the AUTH_USER_MODEL setting during the lifetime of a project (i.e. once you have made and migrated models that depend on it) without serious effort. It is intended to be set at the project start, and the model it refers to must be available in the first migration of the app that it lives in. See Substituting a custom User model for more details. LOGIN_REDIRECT_URL Default: '/accounts/profile/' The URL or named URL pattern where requests are redirected after login when the LoginView doesn’t get a next GET parameter. LOGIN_URL Default: '/accounts/login/' The URL or named URL pattern where requests are redirected for login when using the login_required() decorator, LoginRequiredMixin, or AccessMixin. LOGOUT_REDIRECT_URL Default: None The URL or named URL pattern where requests are redirected after logout if LogoutView doesn’t have a next_page attribute. If None, no redirect will be performed and the logout view will be rendered. PASSWORD_RESET_TIMEOUT Default: 259200 (3 days, in seconds) The number of seconds a password reset link is valid for. Used by the PasswordResetConfirmView. Note Reducing the value of this timeout doesn’t make any difference to the ability of an attacker to brute-force a password reset token. 
Tokens are designed to be safe from brute-forcing without any timeout. This timeout exists to protect against some unlikely attack scenarios, such as someone gaining access to email archives that may contain old, unused password reset tokens. PASSWORD_HASHERS See How Django stores passwords. Default: [ 'django.contrib.auth.hashers.PBKDF2PasswordHasher', 'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher', 'django.contrib.auth.hashers.Argon2PasswordHasher', 'django.contrib.auth.hashers.BCryptSHA256PasswordHasher', ] AUTH_PASSWORD_VALIDATORS Default: [] (Empty list) The list of validators that are used to check the strength of user’s passwords. See Password validation for more details. By default, no validation is performed and all passwords are accepted. Messages Settings for django.contrib.messages. MESSAGE_LEVEL Default: messages.INFO Sets the minimum message level that will be recorded by the messages framework. See message levels for more details. Important If you override MESSAGE_LEVEL in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_LEVEL = message_constants.DEBUG If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. MESSAGE_STORAGE Default: 'django.contrib.messages.storage.fallback.FallbackStorage' Controls where Django stores message data. Valid values are: 'django.contrib.messages.storage.fallback.FallbackStorage' 'django.contrib.messages.storage.session.SessionStorage' 'django.contrib.messages.storage.cookie.CookieStorage' See message storage backends for more details. The backends that use cookies – CookieStorage and FallbackStorage – use the value of SESSION_COOKIE_DOMAIN, SESSION_COOKIE_SECURE and SESSION_COOKIE_HTTPONLY when setting their cookies. MESSAGE_TAGS Default: { messages.DEBUG: 'debug', messages.INFO: 'info', messages.SUCCESS: 'success', messages.WARNING: 'warning', messages.ERROR: 'error', } This sets the mapping of message level to message tag, which is typically rendered as a CSS class in HTML. If you specify a value, it will extend the default. This means you only have to specify those values which you need to override. See Displaying messages above for more details. Important If you override MESSAGE_TAGS in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_TAGS = {message_constants.INFO: ''} If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. Sessions Settings for django.contrib.sessions. SESSION_CACHE_ALIAS Default: 'default' If you’re using cache-based session storage, this selects the cache to use. SESSION_COOKIE_AGE Default: 1209600 (2 weeks, in seconds) The age of session cookies, in seconds. SESSION_COOKIE_DOMAIN Default: None The domain to use for session cookies. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. To use cross-domain cookies with CSRF_USE_SESSIONS, you must include a leading dot (e.g. ".example.com") to accommodate the CSRF middleware’s referer checking. Be cautious when updating this setting on a production site. 
If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies will be set to the old domain. This may result in them being unable to log in as long as these cookies persist. This setting also affects cookies set by django.contrib.messages. SESSION_COOKIE_HTTPONLY Default: True Whether to use the HttpOnly flag on the session cookie. If this is set to True, client-side JavaScript will not be able to access the session cookie. HttpOnly is a flag included in a Set-Cookie HTTP response header. It’s part of the RFC 6265#section-4.1.2.6 standard for cookies and can be a useful way to mitigate the risk of a client-side script accessing the protected cookie data. This makes it less trivial for an attacker to escalate a cross-site scripting vulnerability into full hijacking of a user’s session. There aren’t many good reasons for turning this off. Your code shouldn’t read session cookies from JavaScript. SESSION_COOKIE_NAME Default: 'sessionid' The name of the cookie to use for sessions. This can be whatever you want (as long as it’s different from the other cookie names in your application). SESSION_COOKIE_PATH Default: '/' The path set on the session cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own session cookie. SESSION_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the session cookie. This flag prevents the cookie from being sent in cross-site requests, thus preventing CSRF attacks and making some methods of stealing the session cookie impossible. Possible values for the setting are: 'Strict': prevents the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user won’t be able to access the project. A bank website, however, most likely doesn’t want to allow any transactional pages to be linked from external sites so the 'Strict' flag would be appropriate. 'Lax' (default): provides a balance between security and usability for websites that want to maintain a user’s logged-in session after the user arrives from an external link. In the GitHub scenario, the session cookie would be allowed when following a regular link from an external website and be blocked in CSRF-prone request methods (e.g. POST). 'None' (string): the session cookie will be sent with all same-site and cross-site requests. False: disables the flag. Note Modern browsers provide a more secure default policy for the SameSite flag and will assume Lax for cookies without an explicit value set. SESSION_COOKIE_SECURE Default: False Whether to use a secure cookie for the session cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. Leaving this setting off isn’t a good idea because an attacker could capture an unencrypted session cookie with a packet sniffer and use the cookie to hijack the user’s session. SESSION_ENGINE Default: 'django.contrib.sessions.backends.db' Controls where Django stores session data.
Included engines are: 'django.contrib.sessions.backends.db' 'django.contrib.sessions.backends.file' 'django.contrib.sessions.backends.cache' 'django.contrib.sessions.backends.cached_db' 'django.contrib.sessions.backends.signed_cookies' See Configuring the session engine for more details. SESSION_EXPIRE_AT_BROWSER_CLOSE Default: False Whether to expire the session when the user closes their browser. See Browser-length sessions vs. persistent sessions. SESSION_FILE_PATH Default: None If you’re using file-based session storage, this sets the directory in which Django will store session data. When the default value (None) is used, Django will use the standard temporary directory for the system. SESSION_SAVE_EVERY_REQUEST Default: False Whether to save the session data on every request. If this is False (default), then the session data will only be saved if it has been modified – that is, if any of its dictionary values have been assigned or deleted. Empty sessions won’t be created, even if this setting is active. SESSION_SERIALIZER Default: 'django.contrib.sessions.serializers.JSONSerializer' Full import path of a serializer class to use for serializing session data. Included serializers are: 'django.contrib.sessions.serializers.PickleSerializer' 'django.contrib.sessions.serializers.JSONSerializer' See Session serialization for details, including a warning regarding possible remote code execution when using PickleSerializer. Sites Settings for django.contrib.sites. SITE_ID Default: Not defined The ID, as an integer, of the current site in the django_site database table. This is used so that application data can hook into specific sites and a single database can manage content for multiple sites. Static Files Settings for django.contrib.staticfiles. STATIC_ROOT Default: None The absolute path to the directory where collectstatic will collect static files for deployment. Example: "/var/www/example.com/static/" If the staticfiles contrib app is enabled (as in the default project template), the collectstatic management command will collect static files into this directory. See the how-to on managing static files for more details about usage. Warning This should be an initially empty destination directory for collecting your static files from their permanent locations into one directory for ease of deployment; it is not a place to store your static files permanently. You should do that in directories that will be found by staticfiles’s finders, which, by default, are 'static/' app sub-directories and any directories you include in STATICFILES_DIRS. STATIC_URL Default: None URL to use when referring to static files located in STATIC_ROOT. Example: "static/" or "http://static.example.com/" If not None, this will be used as the base path for asset definitions (the Media class) and the staticfiles app. It must end in a slash if set to a non-empty value. You may need to configure these files to be served in development and will definitely need to do so in production. Note If STATIC_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. STATICFILES_DIRS Default: [] (Empty list) This setting defines the additional locations the staticfiles app will traverse if the FileSystemFinder finder is enabled, e.g. if you use the collectstatic or findstatic management command or use the static file serving view.
This should be set to a list of strings that contain full paths to your additional files directory(ies) e.g.: STATICFILES_DIRS = [ "/home/special.polls.com/polls/static", "/home/polls.com/polls/static", "/opt/webfiles/common", ] Note that these paths should use Unix-style forward slashes, even on Windows (e.g. "C:/Users/user/mysite/extra_static_content"). Prefixes (optional) In case you want to refer to files in one of the locations with an additional namespace, you can optionally provide a prefix as (prefix, path) tuples, e.g.: STATICFILES_DIRS = [ # ... ("downloads", "/opt/webfiles/stats"), ] For example, assuming you have STATIC_URL set to 'static/', the collectstatic management command would collect the “stats” files in a 'downloads' subdirectory of STATIC_ROOT. This would allow you to refer to the local file '/opt/webfiles/stats/polls_20101022.tar.gz' with '/static/downloads/polls_20101022.tar.gz' in your templates, e.g.: <a href="{% static 'downloads/polls_20101022.tar.gz' %}"> STATICFILES_STORAGE Default: 'django.contrib.staticfiles.storage.StaticFilesStorage' The file storage engine to use when collecting static files with the collectstatic management command. A ready-to-use instance of the storage backend defined in this setting can be found at django.contrib.staticfiles.storage.staticfiles_storage. For an example, see Serving static files from a cloud service or CDN. STATICFILES_FINDERS Default: [ 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ] The list of finder backends that know how to find static files in various locations. The default will find files stored in the STATICFILES_DIRS setting (using django.contrib.staticfiles.finders.FileSystemFinder) and in a static subdirectory of each app (using django.contrib.staticfiles.finders.AppDirectoriesFinder). If multiple files with the same name are present, the first file that is found will be used. One finder is disabled by default: django.contrib.staticfiles.finders.DefaultStorageFinder. If added to your STATICFILES_FINDERS setting, it will look for static files in the default file storage as defined by the DEFAULT_FILE_STORAGE setting. Note When using the AppDirectoriesFinder finder, make sure your apps can be found by staticfiles by adding the app to the INSTALLED_APPS setting of your site. Static file finders are currently considered a private interface, and this interface is thus undocumented. 
Core Settings Topical Index

Cache: CACHES, CACHE_MIDDLEWARE_ALIAS, CACHE_MIDDLEWARE_KEY_PREFIX, CACHE_MIDDLEWARE_SECONDS
Database: DATABASES, DATABASE_ROUTERS, DEFAULT_INDEX_TABLESPACE, DEFAULT_TABLESPACE
Debugging: DEBUG, DEBUG_PROPAGATE_EXCEPTIONS
Email: ADMINS, DEFAULT_CHARSET, DEFAULT_FROM_EMAIL, EMAIL_BACKEND, EMAIL_FILE_PATH, EMAIL_HOST, EMAIL_HOST_PASSWORD, EMAIL_HOST_USER, EMAIL_PORT, EMAIL_SSL_CERTFILE, EMAIL_SSL_KEYFILE, EMAIL_SUBJECT_PREFIX, EMAIL_TIMEOUT, EMAIL_USE_LOCALTIME, EMAIL_USE_TLS, MANAGERS, SERVER_EMAIL
Error reporting: DEFAULT_EXCEPTION_REPORTER, DEFAULT_EXCEPTION_REPORTER_FILTER, IGNORABLE_404_URLS, MANAGERS, SILENCED_SYSTEM_CHECKS
File uploads: DEFAULT_FILE_STORAGE, FILE_UPLOAD_HANDLERS, FILE_UPLOAD_MAX_MEMORY_SIZE, FILE_UPLOAD_PERMISSIONS, FILE_UPLOAD_TEMP_DIR, MEDIA_ROOT, MEDIA_URL
Forms: FORM_RENDERER
Globalization (i18n/l10n): DATE_FORMAT, DATE_INPUT_FORMATS, DATETIME_FORMAT, DATETIME_INPUT_FORMATS, DECIMAL_SEPARATOR, FIRST_DAY_OF_WEEK, FORMAT_MODULE_PATH, LANGUAGE_CODE, LANGUAGE_COOKIE_AGE, LANGUAGE_COOKIE_DOMAIN, LANGUAGE_COOKIE_HTTPONLY, LANGUAGE_COOKIE_NAME, LANGUAGE_COOKIE_PATH, LANGUAGE_COOKIE_SAMESITE, LANGUAGE_COOKIE_SECURE, LANGUAGES, LANGUAGES_BIDI, LOCALE_PATHS, MONTH_DAY_FORMAT, NUMBER_GROUPING, SHORT_DATE_FORMAT, SHORT_DATETIME_FORMAT, THOUSAND_SEPARATOR, TIME_FORMAT, TIME_INPUT_FORMATS, TIME_ZONE, USE_I18N, USE_L10N, USE_THOUSAND_SEPARATOR, USE_TZ, YEAR_MONTH_FORMAT
HTTP: DATA_UPLOAD_MAX_MEMORY_SIZE, DATA_UPLOAD_MAX_NUMBER_FIELDS, DEFAULT_CHARSET, DISALLOWED_USER_AGENTS, FORCE_SCRIPT_NAME, INTERNAL_IPS, MIDDLEWARE
    Security: SECURE_CONTENT_TYPE_NOSNIFF, SECURE_CROSS_ORIGIN_OPENER_POLICY, SECURE_HSTS_INCLUDE_SUBDOMAINS, SECURE_HSTS_PRELOAD, SECURE_HSTS_SECONDS, SECURE_PROXY_SSL_HEADER, SECURE_REDIRECT_EXEMPT, SECURE_REFERRER_POLICY, SECURE_SSL_HOST, SECURE_SSL_REDIRECT
    SIGNING_BACKEND, USE_X_FORWARDED_HOST, USE_X_FORWARDED_PORT, WSGI_APPLICATION
Logging: LOGGING, LOGGING_CONFIG
Models: ABSOLUTE_URL_OVERRIDES, FIXTURE_DIRS, INSTALLED_APPS
Security:
    Cross Site Request Forgery Protection: CSRF_COOKIE_DOMAIN, CSRF_COOKIE_NAME, CSRF_COOKIE_PATH, CSRF_COOKIE_SAMESITE, CSRF_COOKIE_SECURE, CSRF_FAILURE_VIEW, CSRF_HEADER_NAME, CSRF_TRUSTED_ORIGINS, CSRF_USE_SESSIONS
    SECRET_KEY, X_FRAME_OPTIONS
Serialization: DEFAULT_CHARSET, SERIALIZATION_MODULES
Templates: TEMPLATES
Testing:
    Database: TEST
    TEST_NON_SERIALIZED_APPS, TEST_RUNNER
URLs: APPEND_SLASH, PREPEND_WWW, ROOT_URLCONF
doc_2884
Evaluate a Chebyshev series at points x. If c is of length n + 1, this function returns the value:

\[p(x) = c_0 * T_0(x) + c_1 * T_1(x) + ... + c_n * T_n(x)\]

The parameter x is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either x or its elements must support multiplication and addition both with themselves and with the elements of c. If c is a 1-D array, then p(x) will have the same shape as x. If c is multidimensional, then the shape of the result depends on the value of tensor. If tensor is true the shape will be c.shape[1:] + x.shape. If tensor is false the shape will be c.shape[1:]. Note that scalars have shape (). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern.

Parameters
x : array_like, compatible object
    If x is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, x or its elements must support addition and multiplication with themselves and with the elements of c.
c : array_like
    Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If c is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of c.
tensor : boolean, optional
    If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of x. Scalars have dimension 0 for this action. The result is that every column of coefficients in c is evaluated for every element of x. If False, x is broadcast over the columns of c for the evaluation. This keyword is useful when c is multidimensional. The default value is True. New in version 1.7.0.

Returns
values : ndarray, algebra_like
    The shape of the return value is described above.

See also
chebval2d, chebgrid2d, chebval3d, chebgrid3d

Notes
The evaluation uses Clenshaw recursion, aka synthetic division.
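A short usage sketch of numpy.polynomial.chebyshev.chebval; the coefficient values are arbitrary:

from numpy.polynomial.chebyshev import chebval

c = [1, 2, 3]                  # p(x) = 1*T_0(x) + 2*T_1(x) + 3*T_2(x)
print(chebval(1.0, c))         # T_n(1) = 1 for every n, so p(1) = 1 + 2 + 3 = 6.0
print(chebval([0.0, 1.0], c))  # [-2.  6.]; T_2(0) = -1, so p(0) = 1 - 3 = -2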
doc_2885
Set the internal state of the generator from a tuple. For use if one has reason to manually (re-)set the internal state of the bit generator used by the RandomState instance. By default, RandomState uses the “Mersenne Twister”[1] pseudo-random number generating algorithm.

Parameters
state : {tuple(str, ndarray of 624 uints, int, int, float), dict}
    The state tuple has the following items:
    1. the string ‘MT19937’, specifying the Mersenne Twister algorithm.
    2. a 1-D array of 624 unsigned integers, keys.
    3. an integer pos.
    4. an integer has_gauss.
    5. a float cached_gaussian.
    If state is a dictionary, it is directly set using the BitGenerator's state property.

Returns
out : None
    Returns ‘None’ on success.

See also
get_state

Notes
set_state and get_state are not needed to work with any of the random distributions in NumPy. If the internal state is manually altered, the user should know exactly what they are doing. For backwards compatibility, the form (str, array of 624 uints, int) is also accepted, although it is missing some information about the cached Gaussian value: state = ('MT19937', keys, pos).

References
1. M. Matsumoto and T. Nishimura, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator,” ACM Trans. on Modeling and Computer Simulation, Vol. 8, No. 1, pp. 3-30, Jan. 1998.
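A sketch of capturing and restoring the state to replay a sequence of draws:

import numpy as np

rs = np.random.RandomState(12345)
saved = rs.get_state()             # snapshot of the Mersenne Twister state
first = rs.standard_normal(3)

rs.set_state(saved)                # rewind the generator
replay = rs.standard_normal(3)
assert np.array_equal(first, replay)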
doc_2886
Returns True if the FreeType module is currently initialized. get_init() -> bool Returns True if the pygame.freetype module is currently initialized. New in pygame 1.9.5.
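A minimal check, assuming pygame is installed:

import pygame.freetype

print(pygame.freetype.get_init())  # False before initialization
pygame.freetype.init()
print(pygame.freetype.get_init())  # True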
doc_2887
See Migration guide for more details. tf.compat.v1.raw_ops.DeleteIterator tf.raw_ops.DeleteIterator( handle, deleter, name=None ) Args handle A Tensor of type resource. A handle to the iterator to delete. deleter A Tensor of type variant. A variant deleter. name A name for the operation (optional). Returns The created Operation.
doc_2888
This subclass of HTMLCalendar can be passed a locale name in the constructor and will return month and weekday names in the specified locale. If this locale includes an encoding, all strings containing month and weekday names will be returned as unicode.
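A usage sketch; the locale string must name a locale actually installed on the system, otherwise the locale machinery will raise an error:

import calendar

cal = calendar.LocaleHTMLCalendar(locale='de_DE.UTF-8')
html = cal.formatmonth(2021, 3)    # month and weekday names rendered in German
print(html[:80])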
doc_2889
Trigger a tool and emit the tool_trigger_{name} event.

Parameters
name : str
    Name of the tool.
sender : object
    Object that wishes to trigger the tool.
canvasevent : Event
    Original Canvas event or None.
data : object
    Extra data to pass to the tool when triggering.
doc_2890
Getter for the precision matrix.

Returns
precision_ : array-like of shape (n_features, n_features)
    The precision matrix associated with the current covariance object.
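A sketch using EmpiricalCovariance, one of the scikit-learn estimators that exposes this getter:

import numpy as np
from sklearn.covariance import EmpiricalCovariance

rng = np.random.RandomState(0)
X = rng.randn(200, 3)

est = EmpiricalCovariance().fit(X)
precision = est.get_precision()    # inverse of the estimated covariance matrix
print(precision.shape)             # (3, 3)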
doc_2891
Produce an object that mimics broadcasting.

Parameters
in1, in2, … : array_like
    Input parameters.

Returns
b : broadcast object
    Broadcast the input parameters against one another, and return an object that encapsulates the result. Amongst others, it has shape and nd properties, and may be used as an iterator.

See also
broadcast_arrays, broadcast_to, broadcast_shapes

Examples
Manually adding two vectors, using broadcasting:
>>> x = np.array([[1], [2], [3]])
>>> y = np.array([4, 5, 6])
>>> b = np.broadcast(x, y)
>>> out = np.empty(b.shape)
>>> out.flat = [u+v for (u,v) in b]
>>> out
array([[5., 6., 7.],
       [6., 7., 8.],
       [7., 8., 9.]])

Compare against built-in broadcasting:
>>> x + y
array([[5, 6, 7],
       [6, 7, 8],
       [7, 8, 9]])

Attributes
index
    current index in broadcasted result
iters
    tuple of iterators along self’s “components.”
nd
    Number of dimensions of broadcasted result.
ndim
    Number of dimensions of broadcasted result.
numiter
    Number of iterators possessed by the broadcasted result.
shape
    Shape of broadcasted result.
size
    Total size of broadcasted result.

Methods
reset()
    Reset the broadcasted result's iterator(s).
doc_2892
Address space of a memory block (int). Read-only property.
doc_2893
See Migration guide for more details. tf.compat.v1.global_norm, tf.compat.v1.linalg.global_norm tf.linalg.global_norm( t_list, name=None ) Given a tuple or list of tensors t_list, this operation returns the global norm of the elements in all tensors in t_list. The global norm is computed as: global_norm = sqrt(sum([l2norm(t)**2 for t in t_list])) Any entries in t_list that are of type None are ignored. Args t_list A tuple or list of mixed Tensors, IndexedSlices, or None. name A name for the operation (optional). Returns A 0-D (scalar) Tensor of type float. Raises TypeError If t_list is not a sequence.
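A numeric sketch with eager execution (TF 2.x):

import tensorflow as tf

t1 = tf.constant([3.0, 4.0])   # l2 norm 5
t2 = tf.constant([12.0])       # l2 norm 12
print(tf.linalg.global_norm([t1, t2]).numpy())  # sqrt(5**2 + 12**2) = 13.0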
doc_2894
This constant contains a boolean value which indicates if IPv6 is supported on this platform.
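A trivial check:

import socket

if socket.has_ipv6:
    print('This platform supports IPv6')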
doc_2895
Compute density of a sparse vector.

Parameters
w : array-like
    The sparse vector.

Returns
float
    The density of w, between 0 and 1.
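A quick sketch; for a dense array the density is the fraction of non-zero entries:

import numpy as np
from sklearn.utils.extmath import density

w = np.array([0.0, 1.5, 0.0, 2.0])
print(density(w))   # 0.5 -- two of the four entries are non-zero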
doc_2896
Adjust the axes and return a list of information about the Sankey subdiagram(s). Return value is a list of subdiagrams, each represented with the following fields:

patch
    Sankey outline (an instance of PathPatch).
flows
    Values of the flows (positive for input, negative for output).
angles
    List of angles of the arrows [deg/90]. For example, if the diagram has not been rotated, an input to the top side will have an angle of 3 (DOWN), and an output from the top side will have an angle of 1 (UP). If a flow has been skipped (because its magnitude is less than tolerance), then its angle will be None.
tips
    Array in which each row is an [x, y] pair indicating the positions of the tips (or "dips") of the flow paths. If the magnitude of a flow is less than the tolerance for the instance of Sankey, the flow is skipped and its tip will be at the center of the diagram.
text
    Text instance for the label of the diagram.
texts
    List of Text instances for the labels of flows.

See also
Sankey.add
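A minimal sketch; the flows are chosen to balance (sum to zero), and the orientations make every flow horizontal:

import matplotlib.pyplot as plt
from matplotlib.sankey import Sankey

diagrams = Sankey(flows=[1.0, -0.6, -0.4],
                  labels=['input', 'output A', 'output B'],
                  orientations=[0, 0, 0]).finish()
print(diagrams[0].flows)    # the flow values, one per arrow
print(diagrams[0].angles)   # one angle entry per flow
plt.show()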
doc_2897
Create and return a new GridlineCollection instance. which : "major" or "minor" axis : "both", "x" or "y"
doc_2898
Return the filename of the associated font.
doc_2899
Subclass of SAXException raised on parse errors. Instances of this class are passed to the methods of the SAX ErrorHandler interface to provide information about the parse error. This class supports the SAX Locator interface as well as the SAXException interface.
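A sketch that provokes a parse error and reads the locator information from the exception:

import xml.sax
import xml.sax.handler

try:
    xml.sax.parseString(b'<root><unclosed></root>',
                        xml.sax.handler.ContentHandler())
except xml.sax.SAXParseException as exc:
    # Locator interface: where the error occurred, plus the parser's message
    print(exc.getLineNumber(), exc.getColumnNumber(), exc.getMessage())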