doc_23600
Returns a contraction of a and b over multiple dimensions. tensordot implements a generalized matrix product. Parameters a (Tensor) – Left tensor to contract b (Tensor) – Right tensor to contract dims (int or Tuple[List[int]] containing two lists) – number of dimensions to contract or explicit lists of dimensions for a and b respectively When called with a non-negative integer argument dims = d, and the number of dimensions of a and b is m and n, respectively, tensordot() computes r_{i_0,...,i_{m-d},i_d,...,i_n} = \sum_{k_0,...,k_{d-1}} a_{i_0,...,i_{m-d},k_0,...,k_{d-1}} \times b_{k_0,...,k_{d-1},i_d,...,i_n}. When called with dims of the list form, the given dimensions will be contracted in place of the last d of a and the first d of b. The sizes in these dimensions must match, but tensordot() will deal with broadcasted dimensions. Examples: >>> a = torch.arange(60.).reshape(3, 4, 5) >>> b = torch.arange(24.).reshape(4, 3, 2) >>> torch.tensordot(a, b, dims=([1, 0], [0, 1])) tensor([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) >>> a = torch.randn(3, 4, 5, device='cuda') >>> b = torch.randn(4, 5, 6, device='cuda') >>> c = torch.tensordot(a, b, dims=2).cpu() tensor([[ 8.3504, -2.5436, 6.2922, 2.7556, -1.0732, 3.2741], [ 3.3161, 0.0704, 5.0187, -0.4079, -4.3126, 4.8744], [ 0.8223, 3.9445, 3.2168, -0.2400, 3.4117, 1.7780]]) >>> a = torch.randn(3, 5, 4, 6) >>> b = torch.randn(6, 4, 5, 3) >>> torch.tensordot(a, b, dims=([2, 1, 3], [1, 2, 0])) tensor([[ 7.7193, -2.4867, -10.3204], [ 1.5513, -14.4737, -6.5113], [ -0.2850, 4.2573, -3.5997]])
doc_23601
See Migration guide for more details. tf.compat.v1.keras.applications.inception_resnet_v2.preprocess_input tf.keras.applications.inception_resnet_v2.preprocess_input( x, data_format=None ) Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) x = tf.cast(i, tf.float32) x = tf.keras.applications.mobilenet.preprocess_input(x) core = tf.keras.applications.MobileNet() x = core(x) model = tf.keras.Model(inputs=[i], outputs=[x]) image = tf.image.decode_png(tf.io.read_file('file.png')) result = model(image) Arguments x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used. data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last"). Returns Preprocessed numpy.array or a tf.Tensor with type float32. The inputs pixel values are scaled between -1 and 1, sample-wise. Raises ValueError In case of unknown data_format argument.
doc_23602
Returns the indices of the lower triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are ordered based on rows and then columns. The lower triangular part of the matrix is defined as the elements on and below the diagonal. The argument offset controls which diagonal to consider. If offset = 0, all elements on and below the main diagonal are retained. A positive value includes just as many diagonals above the main diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal. The main diagonal is the set of indices \lbrace (i, i) \rbrace for i \in [0, \min\{d_{1}, d_{2}\} - 1] where d_{1}, d_{2} are the dimensions of the matrix. Note When running on CUDA, row * col must be less than 2^{59} to prevent overflow during calculation. Parameters row (int) – number of rows in the 2-D matrix. col (int) – number of columns in the 2-D matrix. offset (int) – diagonal offset from the main diagonal. Default: if not provided, 0. Keyword Arguments dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, torch.long. device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. layout (torch.layout, optional) – currently only supports torch.strided. Example: >>> a = torch.tril_indices(3, 3) >>> a tensor([[0, 1, 1, 2, 2, 2], [0, 0, 1, 0, 1, 2]]) >>> a = torch.tril_indices(4, 3, -1) >>> a tensor([[1, 2, 2, 3, 3, 3], [0, 0, 1, 0, 1, 2]]) >>> a = torch.tril_indices(4, 3, 1) >>> a tensor([[0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3], [0, 1, 0, 1, 2, 0, 1, 2, 0, 1, 2]])
doc_23603
os.P_PGID os.P_ALL These are the possible values for idtype in waitid(). They affect how id is interpreted. Availability: Unix. New in version 3.3.
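A minimal sketch of how these constants scope a waitid() call; the fork/exit child setup here is purely illustrative:
import os

pid = os.fork()
if pid == 0:          # child: exit immediately
    os._exit(0)

# P_ALL waits for any child, so id is ignored; P_PID would wait for one
# specific process id, and P_PGID for any child in the given process group.
info = os.waitid(os.P_ALL, 0, os.WEXITED)
print(info.si_pid, info.si_status)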
doc_23604
The number of bytes needed to store this object in memory.
doc_23605
Pop and return the smallest item from the heap, maintaining the heap invariant. If the heap is empty, IndexError is raised. To access the smallest item without popping it, use heap[0].
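A short example showing both the pop and the heap[0] peek:
>>> import heapq
>>> h = [5, 1, 3]
>>> heapq.heapify(h)
>>> h[0]               # peek at the smallest item without popping it
1
>>> heapq.heappop(h)   # pops 1; an empty heap would raise IndexError here
1
>>> h
[3, 5]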
doc_23606
See Migration guide for more details. tf.compat.v1.sparse.placeholder tf.compat.v1.sparse_placeholder( dtype, shape=None, name=None ) Key Point: This sparse tensor will produce an error if evaluated. Its value must be fed using the feed_dict optional argument to Session.run(), Tensor.eval(), or Operation.run(). For example: x = tf.compat.v1.sparse.placeholder(tf.float32) y = tf.sparse.reduce_sum(x) with tf.compat.v1.Session() as sess: print(sess.run(y)) # ERROR: will fail because x was not fed. indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64) values = np.array([1.0, 2.0], dtype=np.float32) shape = np.array([7, 9, 2], dtype=np.int64) print(sess.run(y, feed_dict={ x: tf.compat.v1.SparseTensorValue(indices, values, shape)})) # Will succeed. print(sess.run(y, feed_dict={ x: (indices, values, shape)})) # Will succeed. sp = tf.sparse.SparseTensor(indices=indices, values=values, dense_shape=shape) sp_value = sp.eval(session=sess) print(sess.run(y, feed_dict={x: sp_value})) # Will succeed. Eager Compatibility: Placeholders are not compatible with eager execution. Args dtype The type of values elements in the tensor to be fed. shape The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a sparse tensor of any shape. name A name for prefixing the operations (optional). Returns A SparseTensor that may be used as a handle for feeding a value, but not evaluated directly. Raises RuntimeError if eager execution is enabled
doc_23607
Used by send_file() to determine the max_age cache value for a given file path if it wasn’t passed. By default, this returns SEND_FILE_MAX_AGE_DEFAULT from the configuration of current_app. This defaults to None, which tells the browser to use conditional requests instead of a timed cache, which is usually preferable. Changed in version 2.0: The default configuration is None instead of 12 hours. Changelog New in version 0.9. Parameters filename (str) – Return type Optional[int]
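A minimal sketch of overriding this hook on a Flask subclass to cache certain static assets aggressively; the extension check and duration are illustrative, not prescribed by Flask:
from flask import Flask

class MyApp(Flask):
    def get_send_file_max_age(self, filename):
        # Long cache for .js/.css; fall back to the configured default otherwise.
        if filename and filename.endswith((".js", ".css")):
            return 60 * 60 * 24 * 30   # 30 days, in seconds
        return super().get_send_file_max_age(filename)

app = MyApp(__name__)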
doc_23608
An importer for frozen modules. This class implements the importlib.abc.MetaPathFinder and importlib.abc.InspectLoader ABCs. Only class methods are defined by this class to alleviate the need for instantiation. Changed in version 3.4: Gained create_module() and exec_module() methods.
doc_23609
Compute min of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default -1 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed min of values within each group.
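A short example of both parameters:
>>> import pandas as pd
>>> df = pd.DataFrame({"key": ["a", "a", "b"], "val": [3, 1, 2]})
>>> df.groupby("key")["val"].min()
key
a    1
b    2
Name: val, dtype: int64
>>> df.groupby("key")["val"].min(min_count=2)   # group "b" has fewer than 2 valid values, so its result is NA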
doc_23610
This function creates a mutable unicode character buffer. The returned object is a ctypes array of c_wchar. init_or_size must be an integer which specifies the size of the array, or a string which will be used to initialize the array items. If a string is specified as first argument, the buffer is made one item larger than the length of the string so that the last element in the array is a NUL termination character. An integer can be passed as second argument which allows specifying the size of the array if the length of the string should not be used. Raises an auditing event ctypes.create_unicode_buffer with arguments init, size.
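For example:
>>> import ctypes
>>> buf = ctypes.create_unicode_buffer("hello")    # one extra item for the NUL terminator
>>> len(buf), buf.value
(6, 'hello')
>>> big = ctypes.create_unicode_buffer("hi", 10)   # second argument fixes the size
>>> len(big), big.value
(10, 'hi')
>>> empty = ctypes.create_unicode_buffer(4)        # integer first argument: size only
>>> len(empty)
4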
doc_23611
class sklearn.neighbors.NeighborhoodComponentsAnalysis(n_components=None, *, init='auto', warm_start=False, max_iter=50, tol=1e-05, callback=None, verbose=0, random_state=None) [source] Neighborhood Components Analysis Neighborhood Component Analysis (NCA) is a machine learning algorithm for metric learning. It learns a linear transformation in a supervised fashion to improve the classification accuracy of a stochastic nearest neighbors rule in the transformed space. Read more in the User Guide. Parameters n_componentsint, default=None Preferred dimensionality of the projected space. If None it will be set to n_features. init{‘auto’, ‘pca’, ‘lda’, ‘identity’, ‘random’} or ndarray of shape (n_features_a, n_features_b), default=’auto’ Initialization of the linear transformation. Possible options are ‘auto’, ‘pca’, ‘lda’, ‘identity’, ‘random’, and a numpy array of shape (n_features_a, n_features_b). ‘auto’ Depending on n_components, the most reasonable initialization will be chosen. If n_components <= n_classes we use ‘lda’, as it uses labels information. If not, but n_components < min(n_features, n_samples), we use ‘pca’, as it projects data in meaningful directions (those of higher variance). Otherwise, we just use ‘identity’. ‘pca’ n_components principal components of the inputs passed to fit will be used to initialize the transformation. (See PCA) ‘lda’ min(n_components, n_classes) most discriminative components of the inputs passed to fit will be used to initialize the transformation. (If n_components > n_classes, the rest of the components will be zero.) (See LinearDiscriminantAnalysis) ‘identity’ If n_components is strictly smaller than the dimensionality of the inputs passed to fit, the identity matrix will be truncated to the first n_components rows. ‘random’ The initial transformation will be a random array of shape (n_components, n_features). Each value is sampled from the standard normal distribution. numpy array n_features_b must match the dimensionality of the inputs passed to fit and n_features_a must be less than or equal to that. If n_components is not None, n_features_a must match it. warm_startbool, default=False If True and fit has been called before, the solution of the previous call to fit is used as the initial linear transformation (n_components and init will be ignored). max_iterint, default=50 Maximum number of iterations in the optimization. tolfloat, default=1e-5 Convergence tolerance for the optimization. callbackcallable, default=None If not None, this function is called after every iteration of the optimizer, taking as arguments the current solution (flattened transformation matrix) and the number of iterations. This might be useful in case one wants to examine or store the transformation found after each iteration. verboseint, default=0 If 0, no progress messages will be printed. If 1, progress messages will be printed to stdout. If > 1, progress messages will be printed and the disp parameter of scipy.optimize.minimize will be set to verbose - 2. random_stateint or numpy.RandomState, default=None A pseudo random number generator object or a seed for it if int. If init='random', random_state is used to initialize the random transformation. If init='pca', random_state is passed as an argument to PCA when initializing the transformation. Pass an int for reproducible results across multiple function calls. See the Glossary. Attributes components_ndarray of shape (n_components, n_features) The linear transformation learned during fitting.
n_iter_int Counts the number of iterations performed by the optimizer. random_state_numpy.RandomState Pseudo random number generator object used during initialization. References 1 J. Goldberger, G. Hinton, S. Roweis, R. Salakhutdinov. “Neighbourhood Components Analysis”. Advances in Neural Information Processing Systems. 17, 513-520, 2005. http://www.cs.nyu.edu/~roweis/papers/ncanips.pdf 2 Wikipedia entry on Neighborhood Components Analysis https://en.wikipedia.org/wiki/Neighbourhood_components_analysis Examples >>> from sklearn.neighbors import NeighborhoodComponentsAnalysis >>> from sklearn.neighbors import KNeighborsClassifier >>> from sklearn.datasets import load_iris >>> from sklearn.model_selection import train_test_split >>> X, y = load_iris(return_X_y=True) >>> X_train, X_test, y_train, y_test = train_test_split(X, y, ... stratify=y, test_size=0.7, random_state=42) >>> nca = NeighborhoodComponentsAnalysis(random_state=42) >>> nca.fit(X_train, y_train) NeighborhoodComponentsAnalysis(...) >>> knn = KNeighborsClassifier(n_neighbors=3) >>> knn.fit(X_train, y_train) KNeighborsClassifier(...) >>> print(knn.score(X_test, y_test)) 0.933333... >>> knn.fit(nca.transform(X_train), y_train) KNeighborsClassifier(...) >>> print(knn.score(nca.transform(X_test), y_test)) 0.961904... Methods fit(X, y) Fit the model according to the given training data. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. transform(X) Applies the learned transformation to the given data. fit(X, y) [source] Fit the model according to the given training data. Parameters Xarray-like of shape (n_samples, n_features) The training samples. yarray-like of shape (n_samples,) The corresponding training labels. Returns selfobject returns a trained NeighborhoodComponentsAnalysis model. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Applies the learned transformation to the given data. Parameters Xarray-like of shape (n_samples, n_features) Data samples. Returns X_embedded: ndarray of shape (n_samples, n_components) The data samples transformed. Raises NotFittedError If fit has not been called before. 
Examples using sklearn.neighbors.NeighborhoodComponentsAnalysis Manifold learning on handwritten digits: Locally Linear Embedding, Isomap… Neighborhood Components Analysis Illustration Comparing Nearest Neighbors with and without Neighborhood Components Analysis Dimensionality Reduction with Neighborhood Components Analysis
doc_23612
DateOffset increments between business quarter dates for 52-53 week fiscal year (also known as a 4-4-5 calendar). It is used by companies that desire that their fiscal year always end on the same day of the week. It is a method of managing accounting periods. It is a common calendar structure for some industries, such as retail, manufacturing and parking industry. For more information see: https://en.wikipedia.org/wiki/4-4-5_calendar The year may either: end on the last X day of the Y month. end on the last X day closest to the last day of the Y month. X is a specific day of the week. Y is a certain month of the year startingMonth = 1 corresponds to dates like 1/31/2007, 4/30/2007, … startingMonth = 2 corresponds to dates like 2/28/2007, 5/31/2007, … startingMonth = 3 corresponds to dates like 3/30/2007, 6/29/2007, … Parameters n:int weekday:int {0, 1, …, 6}, default 0 A specific integer for the day of the week. 0 is Monday 1 is Tuesday 2 is Wednesday 3 is Thursday 4 is Friday 5 is Saturday 6 is Sunday. startingMonth:int {1, 2, …, 12}, default 1 The month in which fiscal years end. qtr_with_extra_week:int {1, 2, 3, 4}, default 1 The quarter number that has the leap or 14 week when needed. variation:str, default “nearest” Method of employing 4-4-5 calendar. There are two options: “nearest” means year end is weekday closest to last day of month in year. “last” means year end is final weekday of the final month in fiscal year. Attributes base Returns a copy of the calling offset object with n=1 and all other attributes equal. freqstr kwds n name nanos normalize qtr_with_extra_week rule_code startingMonth variation weekday Methods __call__(*args, **kwargs) Call self as a function. rollback Roll provided date backward to next offset only if not on offset. rollforward Roll provided date forward to next offset only if not on offset. apply apply_index copy get_rule_code_suffix get_weeks isAnchored is_anchored is_month_end is_month_start is_on_offset is_quarter_end is_quarter_start is_year_end is_year_start onOffset year_has_extra_week
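A minimal sketch of constructing and applying this offset; the parameter values are illustrative, and the exact dates produced depend on the configured calendar:
import pandas as pd
from pandas.tseries.offsets import FY5253Quarter

# Fiscal quarters anchored to the Saturday nearest the end of January.
offset = FY5253Quarter(weekday=5, startingMonth=1,
                       qtr_with_extra_week=1, variation="nearest")
print(pd.Timestamp("2007-02-01") + offset)   # rolls forward to the next quarter end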
doc_23613
Convert float rgb values (in the range [0, 1]), in a numpy array to hsv values. Parameters arr(..., 3) array-like All values must be in the range [0, 1] Returns (..., 3) ndarray Colors converted to hsv values in range [0, 1] Examples using matplotlib.colors.rgb_to_hsv List of named colors
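For example, pure red maps to hue 0 with full saturation and value:
>>> import numpy as np
>>> from matplotlib.colors import rgb_to_hsv
>>> rgb_to_hsv(np.array([1.0, 0.0, 0.0]))
array([0., 1., 1.])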
doc_23614
bytearray.isalpha() Return True if all bytes in the sequence are alphabetic ASCII characters and the sequence is not empty, False otherwise. Alphabetic ASCII characters are those byte values in the sequence b'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'. For example: >>> b'ABCabc'.isalpha() True >>> b'ABCabc1'.isalpha() False
doc_23615
See Migration guide for more details. tf.compat.v1.raw_ops.StageSize tf.raw_ops.StageSize( dtypes, capacity=0, memory_limit=0, container='', shared_name='', name=None ) Args dtypes A list of tf.DTypes. capacity An optional int that is >= 0. Defaults to 0. memory_limit An optional int that is >= 0. Defaults to 0. container An optional string. Defaults to "". shared_name An optional string. Defaults to "". name A name for the operation (optional). Returns A Tensor of type int32.
doc_23616
The name of the package or module that this object belongs to. Do not change this once it is set by the constructor.
doc_23617
See Migration guide for more details. tf.compat.v1.lookup.experimental.DenseHashTable tf.lookup.experimental.DenseHashTable( key_dtype, value_dtype, default_value, empty_key, deleted_key, initial_num_buckets=None, name='MutableDenseHashTable', checkpoint=True ) Data can be inserted by calling the insert method and removed by calling the remove method. It does not support initialization via the init method. It uses "open addressing" with quadratic reprobing to resolve collisions. Compared to MutableHashTable the insert, remove and lookup operations in a DenseHashTable are typically faster, but memory usage can be higher. However, DenseHashTable does not require additional memory for temporary tensors created during checkpointing and restore operations. Example usage: table = tf.lookup.experimental.DenseHashTable( key_dtype=tf.string, value_dtype=tf.int64, default_value=-1, empty_key='', deleted_key='$') keys = tf.constant(['a', 'b', 'c']) values = tf.constant([0, 1, 2], dtype=tf.int64) table.insert(keys, values) table.remove(tf.constant(['c'])) table.lookup(tf.constant(['a', 'b', 'c','d'])).numpy() array([ 0, 1, -1, -1]) Args key_dtype the type of the key tensors. value_dtype the type of the value tensors. default_value The value to use if a key is missing in the table. empty_key the key to use to represent empty buckets internally. Must not be used in insert, remove or lookup operations. deleted_key the key to use to represent deleted buckets internally. Must not be used in insert, remove or lookup operations and must be different from the empty_key. initial_num_buckets the initial number of buckets. name A name for the operation (optional). checkpoint if True, the contents of the table are saved to and restored from checkpoints. If shared_name is empty for a checkpointed table, it is shared using the table node name. Raises ValueError If checkpoint is True and no name was specified. Attributes key_dtype The table key dtype. name The name of the table. resource_handle Returns the resource handle associated with this Resource. value_dtype The table value dtype. Methods erase View source erase( keys, name=None ) Removes keys and their associated values from the table. If a key is not present in the table, it is silently ignored. Args keys Keys to remove. Can be a tensor of any shape. Must match the table's key type. name A name for the operation (optional). Returns The created Operation. Raises TypeError when keys do not match the table data types. export View source export( name=None ) Returns tensors of all keys and values in the table. Args name A name for the operation (optional). Returns A pair of tensors with the first tensor containing all keys and the second tensor containing all values in the table. insert View source insert( keys, values, name=None ) Associates keys with values. Args keys Keys to insert. Can be a tensor of any shape. Must match the table's key type. values Values to be associated with keys. Must be a tensor of the same shape as keys and match the table's value type. name A name for the operation (optional). Returns The created Operation. Raises TypeError when keys or values don't match the table data types. insert_or_assign View source insert_or_assign( keys, values, name=None ) Associates keys with values. Args keys Keys to insert. Can be a tensor of any shape. Must match the table's key type. values Values to be associated with keys. Must be a tensor of the same shape as keys and match the table's value type. name A name for the operation (optional).
Returns The created Operation. Raises TypeError when keys or values don't match the table data types. lookup View source lookup( keys, name=None ) Looks up keys in a table, outputs the corresponding values. The default_value is used for keys not present in the table. Args keys Keys to look up. Can be a tensor of any shape. Must match the table's key_dtype. name A name for the operation (optional). Returns A tensor containing the values in the same shape as keys using the table's value type. Raises TypeError when keys do not match the table data types. remove View source remove( keys, name=None ) Removes keys and their associated values from the table. If a key is not present in the table, it is silently ignored. Args keys Keys to remove. Can be a tensor of any shape. Must match the table's key type. name A name for the operation (optional). Returns The created Operation. Raises TypeError when keys do not match the table data types. size View source size( name=None ) Compute the number of elements in this table. Args name A name for the operation (optional). Returns A scalar tensor containing the number of elements in this table. __getitem__ View source __getitem__( keys ) Looks up keys in a table, outputs the corresponding values.
doc_23618
Set the ticks position. Parameters position{'left', 'right', 'both', 'default', 'none'} 'both' sets the ticks to appear on both positions, but does not change the tick labels. 'default' resets the tick positions to the default: ticks on both positions, labels at left. 'none' can be used if you don't want any ticks. 'none' and 'both' affect only the ticks, not the labels. Examples using matplotlib.axis.YAxis.set_ticks_position Spine Placement Spines Custom spine bounds Dropped spines
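A minimal sketch moving the y-axis ticks to the right-hand spine (the axis label is positioned by a separate call):
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(5))
ax.yaxis.set_ticks_position('right')   # ticks on the right spine only
ax.yaxis.set_label_position('right')   # the axis label is moved separately
plt.show()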
doc_23619
The name of the source which is equivalent to the input file path or the name provided upon instantiation. >>> GDALRaster({'width': 10, 'height': 10, 'name': 'myraster', 'srid': 4326}).name 'myraster'
doc_23620
tf.compat.v1.data.experimental.make_batched_features_dataset( file_pattern, batch_size, features, reader=None, label_key=None, reader_args=None, num_epochs=None, shuffle=True, shuffle_buffer_size=10000, shuffle_seed=None, prefetch_buffer_size=None, reader_num_threads=None, parser_num_threads=None, sloppy_ordering=False, drop_final_batch=False ) If label_key argument is provided, returns a Dataset of tuple comprising of feature dictionaries and label. Example: serialized_examples = [ features { feature { key: "age" value { int64_list { value: [ 0 ] } } } feature { key: "gender" value { bytes_list { value: [ "f" ] } } } feature { key: "kws" value { bytes_list { value: [ "code", "art" ] } } } }, features { feature { key: "age" value { int64_list { value: [] } } } feature { key: "gender" value { bytes_list { value: [ "f" ] } } } feature { key: "kws" value { bytes_list { value: [ "sports" ] } } } } ] We can use arguments: features: { "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), "gender": FixedLenFeature([], dtype=tf.string), "kws": VarLenFeature(dtype=tf.string), } And the expected output is: { "age": [[0], [-1]], "gender": [["f"], ["f"]], "kws": SparseTensor( indices=[[0, 0], [0, 1], [1, 0]], values=["code", "art", "sports"] dense_shape=[2, 2]), } Args file_pattern List of files or patterns of file paths containing Example records. See tf.io.gfile.glob for pattern rules. batch_size An int representing the number of records to combine in a single batch. features A dict mapping feature keys to FixedLenFeature or VarLenFeature values. See tf.io.parse_example. reader A function or class that can be called with a filenames tensor and (optional) reader_args and returns a Dataset of Example tensors. Defaults to tf.data.TFRecordDataset. label_key (Optional) A string corresponding to the key labels are stored in tf.Examples. If provided, it must be one of the features key, otherwise results in ValueError. reader_args Additional arguments to pass to the reader class. num_epochs Integer specifying the number of times to read through the dataset. If None, cycles through the dataset forever. Defaults to None. shuffle A boolean, indicates whether the input should be shuffled. Defaults to True. shuffle_buffer_size Buffer size of the ShuffleDataset. A large capacity ensures better shuffling but would increase memory usage and startup time. shuffle_seed Randomization seed to use for shuffling. prefetch_buffer_size Number of feature batches to prefetch in order to improve performance. Recommended value is the number of batches consumed per training step. Defaults to auto-tune. reader_num_threads Number of threads used to read Example records. If >1, the results will be interleaved. Defaults to 1. parser_num_threads Number of threads to use for parsing Example tensors into a dictionary of Feature tensors. Defaults to 2. sloppy_ordering If True, reading performance will be improved at the cost of non-deterministic ordering. If False, the order of elements produced is deterministic prior to shuffling (elements are still randomized if shuffle=True. Note that if the seed is set, then order of elements after shuffling is deterministic). Defaults to False. drop_final_batch If True, and the batch size does not evenly divide the input dataset size, the final smaller batch will be dropped. Defaults to False. Returns A dataset of dict elements, (or a tuple of dict elements and label). Each dict maps feature keys to Tensor or SparseTensor objects. Raises TypeError If reader is of the wrong type. 
ValueError If label_key is not one of the features keys.
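A hedged sketch of a call using the feature spec above; the file pattern is a placeholder:
import tensorflow as tf

dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
    file_pattern="/path/to/examples-*.tfrecord",   # placeholder pattern
    batch_size=2,
    features={
        "age": tf.io.FixedLenFeature([], tf.int64, default_value=-1),
        "gender": tf.io.FixedLenFeature([], tf.string),
        "kws": tf.io.VarLenFeature(tf.string),
    },
    label_key="age",    # must be one of the keys in `features`
    num_epochs=1,
)
for features_batch, labels in dataset.take(1):
    print(features_batch["kws"], labels)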
doc_23621
Read the entire uploaded data from the file. Be careful with this method: if the uploaded file is huge it can overwhelm your system if you try to read it into memory. You’ll probably want to use chunks() instead; see below.
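The typical chunks() pattern looks like this; the destination path is illustrative:
def handle_uploaded_file(f):
    # Iterating over chunks() keeps memory use bounded even for large uploads.
    with open("/tmp/destination.bin", "wb") as destination:
        for chunk in f.chunks():
            destination.write(chunk)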
doc_23622
Set the wakeup file descriptor to fd. When a signal is received, the signal number is written as a single byte into the fd. This can be used by a library to wakeup a poll or select call, allowing the signal to be fully processed. The old wakeup fd is returned (or -1 if file descriptor wakeup was not enabled). If fd is -1, file descriptor wakeup is disabled. If not -1, fd must be non-blocking. It is up to the library to remove any bytes from fd before calling poll or select again. When threads are enabled, this function can only be called from the main thread of the main interpreter; attempting to call it from other threads will cause a ValueError exception to be raised. There are two common ways to use this function. In both approaches, you use the fd to wake up when a signal arrives, but then they differ in how they determine which signal or signals have arrived. In the first approach, we read the data out of the fd’s buffer, and the byte values give you the signal numbers. This is simple, but in rare cases it can run into a problem: generally the fd will have a limited amount of buffer space, and if too many signals arrive too quickly, then the buffer may become full, and some signals may be lost. If you use this approach, then you should set warn_on_full_buffer=True, which will at least cause a warning to be printed to stderr when signals are lost. In the second approach, we use the wakeup fd only for wakeups, and ignore the actual byte values. In this case, all we care about is whether the fd’s buffer is empty or non-empty; a full buffer doesn’t indicate a problem at all. If you use this approach, then you should set warn_on_full_buffer=False, so that your users are not confused by spurious warning messages. Changed in version 3.5: On Windows, the function now also supports socket handles. Changed in version 3.7: Added warn_on_full_buffer parameter.
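A minimal sketch of the first approach (reading signal numbers out of the fd); the pipe and the SIGUSR1 handler are illustrative:
import os
import select
import signal

r, w = os.pipe()
os.set_blocking(w, False)                           # the wakeup fd must be non-blocking
signal.set_wakeup_fd(w, warn_on_full_buffer=True)
signal.signal(signal.SIGUSR1, lambda signum, frame: None)

# A poll/select loop wakes up when any signal arrives; each byte read from
# the pipe is the number of a signal that was delivered.
ready, _, _ = select.select([r], [], [], 5.0)
if ready:
    for signum in os.read(r, 4096):
        print("received signal", signum)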
doc_23623
See Migration guide for more details. tf.compat.v1.raw_ops.LRN tf.raw_ops.LRN( input, depth_radius=5, bias=1, alpha=1, beta=0.5, name=None ) The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail, sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2) output = input / (bias + alpha * sqr_sum) ** beta For details, see Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012). Args input A Tensor. Must be one of the following types: half, bfloat16, float32. 4-D. depth_radius An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window. bias An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0). alpha An optional float. Defaults to 1. A scale factor, usually positive. beta An optional float. Defaults to 0.5. An exponent. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
doc_23624
Computes the N dimensional inverse discrete Fourier transform of input. Parameters input (Tensor) – the input tensor s (Tuple[int], optional) – Signal size in the transformed dimensions. If given, each dimension dim[i] will either be zero-padded or trimmed to the length s[i] before computing the IFFT. If a length -1 is specified, no padding is done in that dimension. Default: s = [input.size(d) for d in dim] dim (Tuple[int], optional) – Dimensions to be transformed. Default: all dimensions, or the last len(s) dimensions if s is given. norm (str, optional) – Normalization mode. For the backward transform (ifftn()), these correspond to: "forward" - no normalization "backward" - normalize by 1/n "ortho" - normalize by 1/sqrt(n) (making the IFFT orthonormal) Where n = prod(s) is the logical IFFT size. Calling the forward transform (fftn()) with the same normalization mode will apply an overall normalization of 1/n between the two transforms. This is required to make ifftn() the exact inverse. Default is "backward" (normalize by 1/n). Example >>> x = torch.rand(10, 10, dtype=torch.complex64) >>> ifftn = torch.fft.ifftn(x) The discrete Fourier transform is separable, so ifftn() here is equivalent to two one-dimensional ifft() calls: >>> two_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1) >>> torch.allclose(ifftn, two_iffts)
doc_23625
tf.compat.v1.train.SessionManager( local_init_op=None, ready_op=None, ready_for_local_init_op=None, graph=None, recovery_wait_secs=30, local_init_run_options=None, local_init_feed_dict=None ) This class is a small wrapper that takes care of session creation and checkpoint recovery. It also provides functions that facilitate coordination among multiple training threads or processes: checkpointing trained variables as the training progresses, and initializing variables on startup, restoring them from the most recent checkpoint after a crash, or waiting for checkpoints to become available. Usage: with tf.Graph().as_default(): ...add operations to the graph... # Create a SessionManager that will checkpoint the model in '/tmp/mydir'. sm = SessionManager() sess = sm.prepare_session(master, init_op, saver, checkpoint_dir) # Use the session to train the graph. while True: sess.run(<my_train_op>) prepare_session() initializes or restores a model. It requires init_op and saver as arguments. A second process could wait for the model to be ready by doing the following: with tf.Graph().as_default(): ...add operations to the graph... # Create a SessionManager that will wait for the model to become ready. sm = SessionManager() sess = sm.wait_for_session(master) # Use the session to train the graph. while True: sess.run(<my_train_op>) wait_for_session() waits for a model to be initialized by other processes. Args local_init_op An Operation run immediately after session creation. Usually used to initialize tables and local variables. ready_op An Operation to check if the model is initialized. ready_for_local_init_op An Operation to check if the model is ready to run local_init_op. graph The Graph that the model will use. recovery_wait_secs Seconds between checks for the model to be ready. local_init_run_options RunOptions to be passed to session.run when executing the local_init_op. local_init_feed_dict Optional session feed dictionary to use when running the local_init_op. Raises ValueError If ready_for_local_init_op is not None but local_init_op is None Methods prepare_session View source prepare_session( master, init_op=None, saver=None, checkpoint_dir=None, checkpoint_filename_with_path=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None, init_feed_dict=None, init_fn=None ) Creates a Session. Makes sure the model is ready to be used. Creates a Session on 'master'. If a saver object is passed in, and checkpoint_dir points to a directory containing valid checkpoint files, then it will try to recover the model from checkpoint. If no checkpoint files are available, and wait_for_checkpoint is True, then the process would check every recovery_wait_secs, up to max_wait_secs, for recovery to succeed. If the model cannot be recovered successfully then it is initialized by running the init_op and calling init_fn if they are provided. The local_init_op is also run after init_op and init_fn, regardless of whether the model was recovered successfully, but only if ready_for_local_init_op passes. If the model is recovered from a checkpoint it is assumed that all global variables have been initialized, in particular neither init_op nor init_fn will be executed. It is an error if the model cannot be recovered and no init_op or init_fn or local_init_op are passed. Args master String representation of the TensorFlow master to use. init_op Optional Operation used to initialize the model. saver A Saver object used to restore a model. checkpoint_dir Path to the checkpoint files.
The latest checkpoint in the dir will be used to restore. checkpoint_filename_with_path Full file name path to the checkpoint file. wait_for_checkpoint Whether to wait for checkpoint to become available. max_wait_secs Maximum time to wait for checkpoints to become available. config Optional ConfigProto proto used to configure the session. init_feed_dict Optional dictionary that maps Tensor objects to feed values. This feed dictionary is passed to the session run() call when running the init op. init_fn Optional callable used to initialize the model. Called after the optional init_op is called. The callable must accept one argument, the session being initialized. Returns A Session object that can be used to drive the model. Raises RuntimeError If the model cannot be initialized or recovered. ValueError If both checkpoint_dir and checkpoint_filename_with_path are set. recover_session View source recover_session( master, saver=None, checkpoint_dir=None, checkpoint_filename_with_path=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None ) Creates a Session, recovering if possible. Creates a new session on 'master'. If the session is not initialized and can be recovered from a checkpoint, recover it. Args master String representation of the TensorFlow master to use. saver A Saver object used to restore a model. checkpoint_dir Path to the checkpoint files. The latest checkpoint in the dir will be used to restore. checkpoint_filename_with_path Full file name path to the checkpoint file. wait_for_checkpoint Whether to wait for checkpoint to become available. max_wait_secs Maximum time to wait for checkpoints to become available. config Optional ConfigProto proto used to configure the session. Returns A pair (sess, initialized) where 'initialized' is True if the session could be recovered and initialized, False otherwise. Raises ValueError If both checkpoint_dir and checkpoint_filename_with_path are set. wait_for_session View source wait_for_session( master, config=None, max_wait_secs=float('Inf') ) Creates a new Session and waits for model to be ready. Creates a new Session on 'master'. Waits for the model to be initialized or recovered from a checkpoint. It's expected that another thread or process will make the model ready, and that this is intended to be used by threads/processes that participate in a distributed training configuration where a different thread/process is responsible for initializing or recovering the model being trained. NB: The amount of time this method waits for the session is bounded by max_wait_secs. By default, this function will wait indefinitely. Args master String representation of the TensorFlow master to use. config Optional ConfigProto proto used to configure the session. max_wait_secs Maximum time to wait for the session to become available. Returns A Session. May be None if the operation exceeds the timeout specified by config.operation_timeout_in_ms. Raises tf.DeadlineExceededError if the session is not available after max_wait_secs.
doc_23626
The URI path. If the Request uses a proxy, then selector will be the full URL that is passed to the proxy.
doc_23627
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug( num_shards, shard_id, table_id=-1, table_name='', config='', name=None ) An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint. Args num_shards An int. shard_id An int. table_id An optional int. Defaults to -1. table_name An optional string. Defaults to "". config An optional string. Defaults to "". name A name for the operation (optional). Returns A tuple of Tensor objects (parameters, accumulators, updates, gradient_accumulators). parameters A Tensor of type float32. accumulators A Tensor of type float32. updates A Tensor of type float32. gradient_accumulators A Tensor of type float32.
doc_23628
tf.compat.v1.train.AdagradOptimizer( learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad' ) References: Adaptive Subgradient Methods for Online Learning and Stochastic Optimization :Duchi et al., 2011 (pdf) Args learning_rate A Tensor or a floating point value. The learning rate. initial_accumulator_value A floating point value. Starting value for the accumulators, must be positive. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad". Raises ValueError If the initial_accumulator_value is invalid. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). 
name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
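A minimal TF1-style usage sketch; the toy quadratic loss is illustrative:
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.Variable(5.0)
loss = tf.square(x)
opt = tf.train.AdagradOptimizer(learning_rate=0.1, initial_accumulator_value=0.1)
train_op = opt.minimize(loss)   # compute_gradients() + apply_gradients() in one call

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(x))   # x has moved toward the minimum at 0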
doc_23629
Return the number of elements in the underlying data.
doc_23630
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns thetandarray of shape (n_dims,) The non-fixed, log-transformed hyperparameters of the kernel
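For example, with an RBF kernel the single length-scale hyperparameter is stored log-transformed:
>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import RBF
>>> kernel = RBF(length_scale=2.0)
>>> kernel.theta               # log(2.0)
array([0.69314718])
>>> np.exp(kernel.theta)       # back on the original scale
array([2.])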
doc_23631
Decrements interval timer in real time, and delivers SIGALRM upon expiration.
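For example, a one-shot real-time timer (a third argument makes it repeat):
import signal

signal.signal(signal.SIGALRM, lambda signum, frame: print("timer fired"))
signal.setitimer(signal.ITIMER_REAL, 1.5)        # SIGALRM delivered after 1.5 s
# signal.setitimer(signal.ITIMER_REAL, 1.5, 0.5) # ...then every 0.5 s thereafter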
doc_23632
Abstract base class for writing movies. Fundamentally, what a MovieWriter does is provide a way to grab frames by calling grab_frame(). setup() is called to start the process and finish() is called afterwards. This class is set up to provide for writing movie frame data to a pipe. saving() is provided as a context manager to facilitate this process as: with moviewriter.saving(fig, outfile='myfile.mp4', dpi=100): # Iterate over frames moviewriter.grab_frame(**savefig_kwargs) The use of the context manager ensures that setup() and finish() are performed as necessary. An instance of a concrete subclass of this class can be given as the writer argument of Animation.save(). __init__(fps=5, metadata=None, codec=None, bitrate=None)[source] Methods __init__([fps, metadata, codec, bitrate]) finish() Finish any processing for writing the movie. grab_frame(**savefig_kwargs) Grab the image information from the figure and save as a movie frame. saving(fig, outfile, dpi, *args, **kwargs) Context manager to facilitate writing the movie file. setup(fig, outfile[, dpi]) Setup for writing the movie file. Attributes frame_size A tuple (width, height) in pixels of a movie frame. abstract finish()[source] Finish any processing for writing the movie. property frame_size A tuple (width, height) in pixels of a movie frame. abstract grab_frame(**savefig_kwargs)[source] Grab the image information from the figure and save as a movie frame. All keyword arguments in savefig_kwargs are passed on to the savefig call that saves the figure. saving(fig, outfile, dpi, *args, **kwargs) Context manager to facilitate writing the movie file. *args, **kw are any parameters that should be passed to setup. abstract setup(fig, outfile, dpi=None)[source] Setup for writing the movie file. Parameters figFigure The figure object that contains the information for frames. outfilestr The filename of the resulting movie file. dpifloat, default: fig.dpi The DPI (or resolution) for the file. This controls the size in pixels of the resulting movie file.
doc_23633
Required. A method that returns a sequence or QuerySet of objects. The framework doesn’t care what type of objects they are; all that matters is that these objects get passed to the location(), lastmod(), changefreq() and priority() methods.
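A minimal sketch; the Entry model and its fields are hypothetical:
from django.contrib.sitemaps import Sitemap
from myapp.models import Entry   # hypothetical model, for illustration only

class EntrySitemap(Sitemap):
    changefreq = "weekly"

    def items(self):
        # Any sequence or QuerySet works; each returned object is later
        # passed to location(), lastmod(), changefreq() and priority().
        return Entry.objects.filter(is_published=True)

    def lastmod(self, obj):
        return obj.updated_at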
doc_23634
Alias for get_linewidth.
doc_23635
Set the list of labels on the message to labels.
doc_23636
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters module (nn.Module) – module containing the tensor to prune Returns pruned version of the input tensor Return type pruned_tensor (torch.Tensor)
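Roughly what this computes, assuming the parameter being pruned is named "weight" (the <name>_orig tensor and <name>_mask buffer are the convention used by the pruning utilities):
orig = getattr(module, "weight_orig")    # original tensor saved by pruning
mask = getattr(module, "weight_mask")    # mask buffer registered by pruning
pruned_tensor = mask.to(dtype=orig.dtype) * orig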
doc_23637
[Deprecated] Create a colorbar on the given axes for the given mappable. Note This is a low-level function to turn an existing axes into a colorbar axes. Typically, you'll want to use colorbar instead, which automatically handles creation and placement of a suitable axes as well. Parameters caxAxes The Axes to turn into a colorbar. mappableScalarMappable The mappable to be described by the colorbar. **kwargs Keyword arguments are passed to the respective colorbar class. Returns Colorbar The created colorbar instance. Notes Deprecated since version 3.4.
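For comparison, the preferred high-level call:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
im = ax.imshow([[0, 1], [2, 3]])
fig.colorbar(im, ax=ax)   # creates and places a suitable colorbar axes automatically
plt.show()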
doc_23638
Unpacks and returns a variable length opaque data string, similarly to unpack_string().
doc_23639
play a Sound on a specific Channel play(Sound, loops=0, maxtime=0, fade_ms=0) -> None This will begin playback of a Sound on a specific Channel. If the Channel is currently playing any other Sound it will be stopped. The loops argument has the same meaning as in Sound.play(): it is the number of times to repeat the sound after the first time. If it is 3, the sound will be played 4 times (the first time, then three more). If loops is -1 then the playback will repeat indefinitely. As in Sound.play(), the maxtime argument can be used to stop playback of the Sound after a given number of milliseconds. As in Sound.play(), the fade_ms argument can be used to fade in the sound.
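A short sketch; "beep.wav" is a placeholder file name:
import pygame

pygame.mixer.init()
sound = pygame.mixer.Sound("beep.wav")   # placeholder sound file
channel = pygame.mixer.Channel(0)
channel.play(sound, 3)                   # loops=3: played 4 times in total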
doc_23640
Add a child inset Axes to this existing Axes. Parameters bounds[x0, y0, width, height] Lower-left corner of inset Axes, and its width and height. transformTransform Defaults to ax.transAxes, i.e. the units of rect are in Axes-relative coordinates. zordernumber Defaults to 5 (same as Axes.legend). Adjust higher or lower to change whether it is above or below data plotted on the parent Axes. **kwargs Other keyword arguments are passed on to the child Axes. Returns ax The created Axes instance. Warning This method is experimental as of 3.0, and the API may change. Examples This example makes two inset Axes, the first is in Axes-relative coordinates, and the second in data-coordinates: fig, ax = plt.subplots() ax.plot(range(10)) axin1 = ax.inset_axes([0.8, 0.1, 0.15, 0.15]) axin2 = ax.inset_axes( [5, 7, 2.3, 2.3], transform=ax.transData) Examples using matplotlib.axes.Axes.inset_axes Placing Colorbars Zoom region inset axes
doc_23641
Performs a single optimization step. Parameters closure (callable, optional) – A closure that reevaluates the model and returns the loss.
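The closure form is what optimizers such as LBFGS require, since they re-evaluate the loss several times per step; a minimal sketch with a toy model:
import torch

model = torch.nn.Linear(2, 1)
inputs, targets = torch.randn(8, 2), torch.randn(8, 1)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters())

def closure():
    optimizer.zero_grad()              # clear gradients from the previous evaluation
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    return loss

optimizer.step(closure)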
doc_23642
An abstract method for finding a spec for the specified module. If this is a top-level import, path will be None. Otherwise, this is a search for a subpackage or module and path will be the value of __path__ from the parent package. If a spec cannot be found, None is returned. When passed in, target is a module object that the finder may use to make a more educated guess about what spec to return. importlib.util.spec_from_loader() may be useful for implementing concrete MetaPathFinders. New in version 3.4.
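A hypothetical finder illustrating the contract (returning None passes the search on to the next finder on sys.meta_path); both module names are placeholders:
import importlib.abc
import importlib.util
import sys

class AliasFinder(importlib.abc.MetaPathFinder):
    """Redirects imports of 'oldname' to 'newname' (hypothetical names)."""

    def find_spec(self, fullname, path, target=None):
        if fullname != "oldname":
            return None                          # not ours; defer to other finders
        return importlib.util.find_spec("newname")

sys.meta_path.insert(0, AliasFinder())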
doc_23643
TransformerEncoder is a stack of N encoder layers. Parameters encoder_layer – an instance of the TransformerEncoderLayer() class (required). num_layers – the number of sub-encoder-layers in the encoder (required). norm – the layer normalization component (optional). Examples: >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8) >>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6) >>> src = torch.rand(10, 32, 512) >>> out = transformer_encoder(src) forward(src, mask=None, src_key_padding_mask=None) [source] Pass the input through the encoder layers in turn. Parameters src – the sequence to the encoder (required). mask – the mask for the src sequence (optional). src_key_padding_mask – the mask for the src keys per batch (optional). Shape: see the docs in Transformer class.
doc_23644
Initialize self. See help(type(self)) for accurate signature.
doc_23645
Return if another array is equivalent to this array. Equivalent means that both arrays have the same shape and dtype, and all values compare equal. Missing values in the same location are considered equal (in contrast with normal equality). Parameters other:ExtensionArray Array to compare to this Array. Returns boolean Whether the arrays are equivalent.
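For example, with a nullable integer array; note that ordinary == propagates missing values, while equals() treats aligned NAs as equal:
>>> import pandas as pd
>>> a = pd.array([1, 2, pd.NA], dtype="Int64")
>>> b = pd.array([1, 2, pd.NA], dtype="Int64")
>>> a.equals(b)
True
>>> (a == b)[2]
<NA>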
doc_23646
url URL of the resource retrieved, commonly used to determine if a redirect was followed. headers Returns the headers of the response in the form of an EmailMessage instance. status New in version 3.9. Status code returned by server. geturl() Deprecated since version 3.9: Deprecated in favor of url. info() Deprecated since version 3.9: Deprecated in favor of headers. code Deprecated since version 3.9: Deprecated in favor of status. getstatus() Deprecated since version 3.9: Deprecated in favor of status.
doc_23647
Close and skip to the end of the chunk. This does not close the underlying file.
doc_23648
See torch.isneginf()
doc_23649
Return a dictionary of all the properties of the artist.
doc_23650
Set multiple properties at once. Supported properties are:
agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha: scalar or None
animated: bool
axis_direction: unknown
axislabel_direction: {"+", "-"}
axisline_style: str or None
clip_box: Bbox
clip_on: bool
clip_path: Patch or (Path, Transform) or None
figure: Figure
gid: str
in_layout: bool
label: unknown
path_effects: AbstractPathEffect
picker: None or bool or float or callable
rasterized: bool
sketch_params: (scale: float, length: float, randomness: float)
snap: bool or None
ticklabel_direction: {"+", "-"}
transform: Transform
url: str
visible: bool
zorder: float
doc_23651
Computes a CRC (Cyclic Redundancy Check) checksum of data. The result is an unsigned 32-bit integer. If value is present, it is used as the starting value of the checksum; otherwise, a default value of 0 is used. Passing in value allows computing a running checksum over the concatenation of several inputs. The algorithm is not cryptographically strong, and should not be used for authentication or digital signatures. Since the algorithm is designed for use as a checksum algorithm, it is not suitable for use as a general hash algorithm. Changed in version 3.0: Always returns an unsigned value. To generate the same numeric value across all Python versions and platforms, use crc32(data) & 0xffffffff.
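For example, including a running checksum over two pieces:
>>> import zlib
>>> zlib.crc32(b"hello world")
222957957
>>> running = zlib.crc32(b"hello ")
>>> zlib.crc32(b"world", running) == zlib.crc32(b"hello world")
True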
doc_23652
See Migration guide for more details. tf.compat.v1.raw_ops.UpperBound tf.raw_ops.UpperBound( sorted_inputs, values, out_type=tf.dtypes.int32, name=None ) Each set of rows with the same index in (sorted_inputs, values) is treated independently. The resulting row is the equivalent of calling np.searchsorted(sorted_inputs, values, side='right'). The result is not a global index to the entire Tensor, but rather just the index in the last dimension. A 2-D example: sorted_sequence = [[0, 3, 9, 9, 10], [1, 2, 3, 4, 5]] values = [[2, 4, 9], [0, 2, 6]] result = UpperBound(sorted_sequence, values) result == [[1, 2, 4], [0, 2, 5]] Args sorted_inputs A Tensor. 2-D Tensor where each row is ordered. values A Tensor. Must have the same type as sorted_inputs. 2-D Tensor with the same numbers of rows as sorted_search_values. Contains the values that will be searched for in sorted_search_values. out_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name A name for the operation (optional). Returns A Tensor of type out_type.
doc_23653
tf.keras.layers.experimental.preprocessing.StringLookup( max_tokens=None, num_oov_indices=1, mask_token='', oov_token='[UNK]', vocabulary=None, encoding=None, invert=False, **kwargs ) This layer translates a set of arbitrary strings into an integer output via a table-based lookup, with optional out-of-vocabulary handling. If desired, the user can call this layer's adapt() method on a data set, which will analyze the data set, determine the frequency of individual string values, and create a vocabulary from them. This vocabulary can have unlimited size or be capped, depending on the configuration options for this layer; if there are more unique values in the input than the maximum vocabulary size, the most frequent terms will be used to create the vocabulary. Examples: Creating a lookup layer with a known vocabulary This example creates a lookup layer with a pre-existing vocabulary. vocab = ["a", "b", "c", "d"] data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) layer = StringLookup(vocabulary=vocab) layer(data) <tf.Tensor: shape=(2, 3), dtype=int64, numpy= array([[2, 4, 5], [5, 1, 3]])> Creating a lookup layer with an adapted vocabulary This example creates a lookup layer and generates the vocabulary by analyzing the dataset. data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) layer = StringLookup() layer.adapt(data) layer.get_vocabulary() ['', '[UNK]', 'd', 'z', 'c', 'b', 'a'] Note how the mask token '' and the OOV token [UNK] have been added to the vocabulary. The remaining tokens are sorted by frequency ('d', which has 2 occurrences, is first) then by inverse sort order. data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) layer = StringLookup() layer.adapt(data) layer(data) <tf.Tensor: shape=(2, 3), dtype=int64, numpy= array([[6, 4, 2], [2, 3, 5]])> Lookups with multiple OOV tokens. This example demonstrates how to use a lookup layer with multiple OOV tokens. When a layer is created with more than one OOV token, any OOV values are hashed into the number of OOV buckets, distributing OOV values in a deterministic fashion across the set. vocab = ["a", "b", "c", "d"] data = tf.constant([["a", "c", "d"], ["m", "z", "b"]]) layer = StringLookup(vocabulary=vocab, num_oov_indices=2) layer(data) <tf.Tensor: shape=(2, 3), dtype=int64, numpy= array([[3, 5, 6], [1, 2, 4]])> Note that the output for OOV value 'm' is 1, while the output for OOV value 'z' is 2. The in-vocab terms have their output index increased by 1 from earlier examples (a maps to 3, etc) in order to make space for the extra OOV value. Inverse lookup This example demonstrates how to map indices to strings using this layer. (You can also use adapt() with inverse=True, but for simplicity we'll pass the vocab in this example.) vocab = ["a", "b", "c", "d"] data = tf.constant([[1, 3, 4], [4, 5, 2]]) layer = StringLookup(vocabulary=vocab, invert=True) layer(data) <tf.Tensor: shape=(2, 3), dtype=string, numpy= array([[b'a', b'c', b'd'], [b'd', b'[UNK]', b'b']], dtype=object)> Note that the integer 5, which is out of the vocabulary space, returns an OOV token. Forward and inverse lookup pairs This example demonstrates how to use the vocabulary of a standard lookup layer to create an inverse lookup layer. 
vocab = ["a", "b", "c", "d"] data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) layer = StringLookup(vocabulary=vocab) i_layer = StringLookup(vocabulary=layer.get_vocabulary(), invert=True) int_data = layer(data) i_layer(int_data) <tf.Tensor: shape=(2, 3), dtype=string, numpy= array([[b'a', b'c', b'd'], [b'd', b'[UNK]', b'b']], dtype=object)> In this example, the input value 'z' resulted in an output of '[UNK]', since 1000 was not in the vocabulary - it got represented as an OOV, and all OOV values are returned as '[OOV}' in the inverse layer. Also, note that for the inverse to work, you must have already set the forward layer vocabulary either directly or via fit() before calling get_vocabulary(). Attributes max_tokens The maximum size of the vocabulary for this layer. If None, there is no cap on the size of the vocabulary. Note that this vocabulary includes the OOV and mask tokens, so the effective number of tokens is (max_tokens - num_oov_indices - (1 if mask_token else 0)) num_oov_indices The number of out-of-vocabulary tokens to use; defaults to If this value is more than 1, OOV inputs are hashed to determine their OOV value; if this value is 0, passing an OOV input will result in a '-1' being returned for that value in the output tensor. (Note that, because the value is -1 and not 0, this will allow you to effectively drop OOV values from categorical encodings.) mask_token A token that represents masked values, and which is mapped to index 0. Defaults to the empty string "". If set to None, no mask term will be added and the OOV tokens, if any, will be indexed from (0...num_oov_indices) instead of (1...num_oov_indices+1). oov_token The token representing an out-of-vocabulary value. Defaults to "[UNK]". vocabulary An optional list of vocabulary terms, or a path to a text file containing a vocabulary to load into this layer. The file should contain one token per line. If the list or file contains the same token multiple times, an error will be thrown. encoding The Python string encoding to use. Defaults to 'utf-8'. invert If true, this layer will map indices to vocabulary items instead of mapping vocabulary items to indices. Methods adapt View source adapt( data, reset_state=True ) Fits the state of the preprocessing layer to the dataset. Overrides the default adapt method to apply relevant preprocessing to the inputs before passing to the combiner. Arguments data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array. reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt. This must be True for this layer, which does not support repeated calls to adapt. get_vocabulary View source get_vocabulary() set_vocabulary View source set_vocabulary( vocab ) Sets vocabulary data for this layer with inverse=False. This method sets the vocabulary for this layer directly, instead of analyzing a dataset through 'adapt'. It should be used whenever the vocab information is already known. If vocabulary data is already present in the layer, this method will either replace it Arguments vocab An array of string tokens. Raises ValueError If there are too many inputs, the inputs do not match, or input data is missing. vocab_size View source vocab_size()
doc_23654
Transform data back to its original space. In other words, return an input X_original whose transform would be X. Parameters X : array-like, shape (n_samples, n_components) New data, where n_samples is the number of samples and n_components is the number of components. Returns X_original : array-like, shape (n_samples, n_features) Notes If whitening is enabled, inverse_transform will compute the exact inverse operation, which includes reversing whitening.
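A brief sketch of the round trip (the data here is random and purely illustrative):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.RandomState(0).randn(20, 5)
pca = PCA(n_components=2).fit(X)
X_new = pca.transform(X)               # shape (20, 2)
X_back = pca.inverse_transform(X_new)  # shape (20, 5), back in the original space
# X_back only approximates X, since three components were discarded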
doc_23655
Setting reset_sequences = True on a TransactionTestCase will make sure sequences are always reset before the test run:

class TestsThatDependsOnPrimaryKeySequences(TransactionTestCase):
    reset_sequences = True

    def test_animal_pk(self):
        lion = Animal.objects.create(name="lion", sound="roar")
        # lion.pk is guaranteed to always be 1
        self.assertEqual(lion.pk, 1)

Unless you are explicitly testing primary key sequence numbers, it is recommended that you do not hard code primary key values in tests. Using reset_sequences = True will slow down the test, since the primary key reset is a relatively expensive database operation.
doc_23656
ABSOLUTE_URL_OVERRIDES Default: {} (Empty dictionary) A dictionary mapping 'app_label.model_name' strings to functions that take a model object and return its URL. This is a way of inserting or overriding get_absolute_url() methods on a per-installation basis. Example:

ABSOLUTE_URL_OVERRIDES = {
    'blogs.blog': lambda o: "/blogs/%s/" % o.slug,
    'news.story': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug),
}

The model name used in this setting should be all lowercase, regardless of the case of the actual model class name.

ADMINS Default: [] (Empty list) A list of all the people who get code error notifications. When DEBUG=False and AdminEmailHandler is configured in LOGGING (done by default), Django emails these people the details of exceptions raised in the request/response cycle. Each item in the list should be a tuple of (Full name, email address). Example: [('John', 'john@example.com'), ('Mary', 'mary@example.com')]

ALLOWED_HOSTS Default: [] (Empty list) A list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent HTTP Host header attacks, which are possible even under many seemingly-safe web server configurations. Values in this list can be fully qualified names (e.g. 'www.example.com'), in which case they will be matched against the request's Host header exactly (case-insensitive, not including port). A value beginning with a period can be used as a subdomain wildcard: '.example.com' will match example.com, www.example.com, and any other subdomain of example.com. A value of '*' will match anything; in this case you are responsible for providing your own validation of the Host header (perhaps in a middleware; if so this middleware must be listed first in MIDDLEWARE). Django also allows the fully qualified domain name (FQDN) of any entries. Some browsers include a trailing dot in the Host header which Django strips when performing host validation. If the Host header (or X-Forwarded-Host if USE_X_FORWARDED_HOST is enabled) does not match any value in this list, the django.http.HttpRequest.get_host() method will raise SuspiciousOperation. When DEBUG is True and ALLOWED_HOSTS is empty, the host is validated against ['.localhost', '127.0.0.1', '[::1]']. ALLOWED_HOSTS is also checked when running tests. This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection.

APPEND_SLASH Default: True When set to True, if the request URL does not match any of the patterns in the URLconf and it doesn't end in a slash, an HTTP redirect is issued to the same URL with a slash appended. Note that the redirect may cause any data submitted in a POST request to be lost. The APPEND_SLASH setting is only used if CommonMiddleware is installed (see Middleware). See also PREPEND_WWW.

CACHES Default:

{
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    }
}

A dictionary containing the settings for all caches to be used with Django. It is a nested dictionary whose contents map cache aliases to a dictionary containing the options for an individual cache. The CACHES setting must configure a default cache; any number of additional caches may also be specified. If you are using a cache backend other than the local memory cache, or you need to define multiple caches, other options will be required. The following cache options are available. BACKEND Default: '' (Empty string) The cache backend to use.
The built-in cache backends are: 'django.core.cache.backends.db.DatabaseCache' 'django.core.cache.backends.dummy.DummyCache' 'django.core.cache.backends.filebased.FileBasedCache' 'django.core.cache.backends.locmem.LocMemCache' 'django.core.cache.backends.memcached.PyMemcacheCache' 'django.core.cache.backends.memcached.PyLibMCCache' 'django.core.cache.backends.redis.RedisCache' You can use a cache backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path of a cache backend class (i.e. mypackage.backends.whatever.WhateverCache). Changed in Django 3.2: The PyMemcacheCache backend was added. Changed in Django 4.0: The RedisCache backend was added. KEY_FUNCTION A string containing a dotted path to a function (or any callable) that defines how to compose a prefix, version and key into a final cache key. The default implementation is equivalent to the function: def make_key(key, key_prefix, version): return ':'.join([key_prefix, str(version), key]) You may use any key function you want, as long as it has the same argument signature. See the cache documentation for more information. KEY_PREFIX Default: '' (Empty string) A string that will be automatically included (prepended by default) to all cache keys used by the Django server. See the cache documentation for more information. LOCATION Default: '' (Empty string) The location of the cache to use. This might be the directory for a file system cache, a host and port for a memcache server, or an identifying name for a local memory cache. e.g.: CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache', 'LOCATION': '/var/tmp/django_cache', } } OPTIONS Default: None Extra parameters to pass to the cache backend. Available parameters vary depending on your cache backend. Some information on available parameters can be found in the cache arguments documentation. For more information, consult your backend module’s own documentation. TIMEOUT Default: 300 The number of seconds before a cache entry is considered stale. If the value of this setting is None, cache entries will not expire. A value of 0 causes keys to immediately expire (effectively “don’t cache”). VERSION Default: 1 The default version number for cache keys generated by the Django server. See the cache documentation for more information. CACHE_MIDDLEWARE_ALIAS Default: 'default' The cache connection to use for the cache middleware. CACHE_MIDDLEWARE_KEY_PREFIX Default: '' (Empty string) A string which will be prefixed to the cache keys generated by the cache middleware. This prefix is combined with the KEY_PREFIX setting; it does not replace it. See Django’s cache framework. CACHE_MIDDLEWARE_SECONDS Default: 600 The default number of seconds to cache a page for the cache middleware. See Django’s cache framework. CSRF_COOKIE_AGE Default: 31449600 (approximately 1 year, in seconds) The age of CSRF cookies, in seconds. The reason for setting a long-lived expiration time is to avoid problems in the case of a user closing a browser or bookmarking a page and then loading that page from a browser cache. Without persistent cookies, the form submission would fail in this case. Some browsers (specifically Internet Explorer) can disallow the use of persistent cookies or can have the indexes to the cookie jar corrupted on disk, thereby causing CSRF protection checks to (sometimes intermittently) fail. Change this setting to None to use session-based CSRF cookies, which keep the cookies in-memory instead of on persistent storage. 
CSRF_COOKIE_DOMAIN Default: None The domain to be used when setting the CSRF cookie. This can be useful for easily allowing cross-subdomain requests to be excluded from the normal cross site request forgery protection. It should be set to a string such as ".example.com" to allow a POST request from a form on one subdomain to be accepted by a view served from another subdomain. Please note that the presence of this setting does not imply that Django’s CSRF protection is safe from cross-subdomain attacks by default - please see the CSRF limitations section. CSRF_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the CSRF cookie. If this is set to True, client-side JavaScript will not be able to access the CSRF cookie. Designating the CSRF cookie as HttpOnly doesn’t offer any practical protection because CSRF is only to protect against cross-domain attacks. If an attacker can read the cookie via JavaScript, they’re already on the same domain as far as the browser knows, so they can do anything they like anyway. (XSS is a much bigger hole than CSRF.) Although the setting offers little practical benefit, it’s sometimes required by security auditors. If you enable this and need to send the value of the CSRF token with an AJAX request, your JavaScript must pull the value from a hidden CSRF token form input instead of from the cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. CSRF_COOKIE_NAME Default: 'csrftoken' The name of the cookie to use for the CSRF authentication token. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Cross Site Request Forgery protection. CSRF_COOKIE_PATH Default: '/' The path set on the CSRF cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own CSRF cookie. CSRF_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the CSRF cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. CSRF_COOKIE_SECURE Default: False Whether to use a secure cookie for the CSRF cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent with an HTTPS connection. CSRF_USE_SESSIONS Default: False Whether to store the CSRF token in the user’s session instead of in a cookie. It requires the use of django.contrib.sessions. Storing the CSRF token in a cookie (Django’s default) is safe, but storing it in the session is common practice in other web frameworks and therefore sometimes demanded by security auditors. Since the default error views require the CSRF token, SessionMiddleware must appear in MIDDLEWARE before any middleware that may raise an exception to trigger an error view (such as PermissionDenied) if you’re using CSRF_USE_SESSIONS. See Middleware ordering. CSRF_FAILURE_VIEW Default: 'django.views.csrf.csrf_failure' A dotted path to the view function to be used when an incoming request is rejected by the CSRF protection. The function should have this signature: def csrf_failure(request, reason=""): ... where reason is a short message (intended for developers or logging, not for end users) indicating the reason the request was rejected. It should return an HttpResponseForbidden. 
django.views.csrf.csrf_failure() accepts an additional template_name parameter that defaults to '403_csrf.html'. If a template with that name exists, it will be used to render the page. CSRF_HEADER_NAME Default: 'HTTP_X_CSRFTOKEN' The name of the request header used for CSRF authentication. As with other HTTP headers in request.META, the header name received from the server is normalized by converting all characters to uppercase, replacing any hyphens with underscores, and adding an 'HTTP_' prefix to the name. For example, if your client sends a 'X-XSRF-TOKEN' header, the setting should be 'HTTP_X_XSRF_TOKEN'. CSRF_TRUSTED_ORIGINS Default: [] (Empty list) A list of trusted origins for unsafe requests (e.g. POST). For requests that include the Origin header, Django’s CSRF protection requires that header match the origin present in the Host header. For a secure unsafe request that doesn’t include the Origin header, the request must have a Referer header that matches the origin present in the Host header. These checks prevent, for example, a POST request from subdomain.example.com from succeeding against api.example.com. If you need cross-origin unsafe requests, continuing the example, add 'https://subdomain.example.com' to this list (and/or http://... if requests originate from an insecure page). The setting also supports subdomains, so you could add 'https://*.example.com', for example, to allow access from all subdomains of example.com. Changed in Django 4.0: The values in older versions must only include the hostname (possibly with a leading dot) and not the scheme or an asterisk. Also, Origin header checking isn’t performed in older versions. DATABASES Default: {} (Empty dictionary) A dictionary containing the settings for all databases to be used with Django. It is a nested dictionary whose contents map a database alias to a dictionary containing the options for an individual database. The DATABASES setting must configure a default database; any number of additional databases may also be specified. The simplest possible settings file is for a single-database setup using SQLite. This can be configured using the following: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'mydatabase', } } When connecting to other database backends, such as MariaDB, MySQL, Oracle, or PostgreSQL, additional connection parameters will be required. See the ENGINE setting below on how to specify other database types. This example is for PostgreSQL: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'mydatabase', 'USER': 'mydatabaseuser', 'PASSWORD': 'mypassword', 'HOST': '127.0.0.1', 'PORT': '5432', } } The following inner options that may be required for more complex configurations are available: ATOMIC_REQUESTS Default: False Set this to True to wrap each view in a transaction on this database. See Tying transactions to HTTP requests. AUTOCOMMIT Default: True Set this to False if you want to disable Django’s transaction management and implement your own. ENGINE Default: '' (Empty string) The database backend to use. The built-in database backends are: 'django.db.backends.postgresql' 'django.db.backends.mysql' 'django.db.backends.sqlite3' 'django.db.backends.oracle' You can use a database backend that doesn’t ship with Django by setting ENGINE to a fully-qualified path (i.e. mypackage.backends.whatever). HOST Default: '' (Empty string) Which host to use when connecting to the database. An empty string means localhost. Not used with SQLite. 
If this value starts with a forward slash ('/') and you’re using MySQL, MySQL will connect via a Unix socket to the specified socket. For example: "HOST": '/var/run/mysql' If you’re using MySQL and this value doesn’t start with a forward slash, then this value is assumed to be the host. If you’re using PostgreSQL, by default (empty HOST), the connection to the database is done through UNIX domain sockets (‘local’ lines in pg_hba.conf). If your UNIX domain socket is not in the standard location, use the same value of unix_socket_directory from postgresql.conf. If you want to connect through TCP sockets, set HOST to ‘localhost’ or ‘127.0.0.1’ (‘host’ lines in pg_hba.conf). On Windows, you should always define HOST, as UNIX domain sockets are not available. NAME Default: '' (Empty string) The name of the database to use. For SQLite, it’s the full path to the database file. When specifying the path, always use forward slashes, even on Windows (e.g. C:/homes/user/mysite/sqlite3.db). CONN_MAX_AGE Default: 0 The lifetime of a database connection, as an integer of seconds. Use 0 to close database connections at the end of each request — Django’s historical behavior — and None for unlimited persistent connections. OPTIONS Default: {} (Empty dictionary) Extra parameters to use when connecting to the database. Available parameters vary depending on your database backend. Some information on available parameters can be found in the Database Backends documentation. For more information, consult your backend module’s own documentation. PASSWORD Default: '' (Empty string) The password to use when connecting to the database. Not used with SQLite. PORT Default: '' (Empty string) The port to use when connecting to the database. An empty string means the default port. Not used with SQLite. TIME_ZONE Default: None A string representing the time zone for this database connection or None. This inner option of the DATABASES setting accepts the same values as the general TIME_ZONE setting. When USE_TZ is True and this option is set, reading datetimes from the database returns aware datetimes in this time zone instead of UTC. When USE_TZ is False, it is an error to set this option. If the database backend doesn’t support time zones (e.g. SQLite, MySQL, Oracle), Django reads and writes datetimes in local time according to this option if it is set and in UTC if it isn’t. Changing the connection time zone changes how datetimes are read from and written to the database. If Django manages the database and you don’t have a strong reason to do otherwise, you should leave this option unset. It’s best to store datetimes in UTC because it avoids ambiguous or nonexistent datetimes during daylight saving time changes. Also, receiving datetimes in UTC keeps datetime arithmetic simple — there’s no need to consider potential offset changes over a DST transition. If you’re connecting to a third-party database that stores datetimes in a local time rather than UTC, then you must set this option to the appropriate time zone. Likewise, if Django manages the database but third-party systems connect to the same database and expect to find datetimes in local time, then you must set this option. If the database backend supports time zones (e.g. PostgreSQL), the TIME_ZONE option is very rarely needed. It can be changed at any time; the database takes care of converting datetimes to the desired time zone. 
Setting the time zone of the database connection may be useful for running raw SQL queries involving date/time functions provided by the database, such as date_trunc, because their results depend on the time zone. However, this has a downside: receiving all datetimes in local time makes datetime arithmetic more tricky — you must account for possible offset changes over DST transitions. Consider converting to local time explicitly with AT TIME ZONE in raw SQL queries instead of setting the TIME_ZONE option. DISABLE_SERVER_SIDE_CURSORS Default: False Set this to True if you want to disable the use of server-side cursors with QuerySet.iterator(). Transaction pooling and server-side cursors describes the use case. This is a PostgreSQL-specific setting. USER Default: '' (Empty string) The username to use when connecting to the database. Not used with SQLite. TEST Default: {} (Empty dictionary) A dictionary of settings for test databases; for more details about the creation and use of test databases, see The test database. Here’s an example with a test database configuration: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'USER': 'mydatabaseuser', 'NAME': 'mydatabase', 'TEST': { 'NAME': 'mytestdatabase', }, }, } The following keys in the TEST dictionary are available: CHARSET Default: None The character set encoding used to create the test database. The value of this string is passed directly through to the database, so its format is backend-specific. Supported by the PostgreSQL (postgresql) and MySQL (mysql) backends. COLLATION Default: None The collation order to use when creating the test database. This value is passed directly to the backend, so its format is backend-specific. Only supported for the mysql backend (see the MySQL manual for details). DEPENDENCIES Default: ['default'], for all databases other than default, which has no dependencies. The creation-order dependencies of the database. See the documentation on controlling the creation order of test databases for details. MIGRATE Default: True When set to False, migrations won’t run when creating the test database. This is similar to setting None as a value in MIGRATION_MODULES, but for all apps. MIRROR Default: None The alias of the database that this database should mirror during testing. This setting exists to allow for testing of primary/replica (referred to as master/slave by some databases) configurations of multiple databases. See the documentation on testing primary/replica configurations for details. NAME Default: None The name of database to use when running the test suite. If the default value (None) is used with the SQLite database engine, the tests will use a memory resident database. For all other database engines the test database will use the name 'test_' + DATABASE_NAME. See The test database. SERIALIZE Boolean value to control whether or not the default test runner serializes the database into an in-memory JSON string before running tests (used to restore the database state between tests if you don’t have transactions). You can set this to False to speed up creation time if you don’t have any test classes with serialized_rollback=True. Deprecated since version 4.0: This setting is deprecated as it can be inferred from the databases with the serialized_rollback option enabled. TEMPLATE This is a PostgreSQL-specific setting. The name of a template (e.g. 'template0') from which to create the test database. CREATE_DB Default: True This is an Oracle-specific setting. 
If it is set to False, the test tablespaces won’t be automatically created at the beginning of the tests or dropped at the end. CREATE_USER Default: True This is an Oracle-specific setting. If it is set to False, the test user won’t be automatically created at the beginning of the tests and dropped at the end. USER Default: None This is an Oracle-specific setting. The username to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will use 'test_' + USER. PASSWORD Default: None This is an Oracle-specific setting. The password to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will generate a random password. ORACLE_MANAGED_FILES Default: False This is an Oracle-specific setting. If set to True, Oracle Managed Files (OMF) tablespaces will be used. DATAFILE and DATAFILE_TMP will be ignored. TBLSPACE Default: None This is an Oracle-specific setting. The name of the tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER. TBLSPACE_TMP Default: None This is an Oracle-specific setting. The name of the temporary tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER + '_temp'. DATAFILE Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE. If not provided, Django will use TBLSPACE + '.dbf'. DATAFILE_TMP Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE_TMP. If not provided, Django will use TBLSPACE_TMP + '.dbf'. DATAFILE_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE is allowed to grow to. DATAFILE_TMP_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE_TMP is allowed to grow to. DATAFILE_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE. DATAFILE_TMP_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE_TMP. DATAFILE_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE is extended when more space is required. DATAFILE_TMP_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE_TMP is extended when more space is required. DATA_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size in bytes that a request body may be before a SuspiciousOperation (RequestDataTooBig) is raised. The check is done when accessing request.body or request.POST and is calculated against the total request size excluding any file upload data. You can set this to None to disable the check. Applications that are expected to receive unusually large form posts should tune this setting. The amount of request data is correlated to the amount of memory needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. See also FILE_UPLOAD_MAX_MEMORY_SIZE. DATA_UPLOAD_MAX_NUMBER_FIELDS Default: 1000 The maximum number of parameters that may be received via GET or POST before a SuspiciousOperation (TooManyFields) is raised. You can set this to None to disable the check. Applications that are expected to receive an unusually large number of form fields should tune this setting. 
The number of request parameters is correlated to the amount of time needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level.

DATABASE_ROUTERS Default: [] (Empty list) The list of routers that will be used to determine which database to use when performing a database query. See the documentation on automatic database routing in multi database configurations.

DATE_FORMAT Default: 'N j, Y' (e.g. Feb. 4, 2003) The default formatting to use for displaying date fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATETIME_FORMAT, TIME_FORMAT and SHORT_DATE_FORMAT.

DATE_INPUT_FORMATS Default:

[
    '%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y',  # '2006-10-25', '10/25/2006', '10/25/06'
    '%b %d %Y', '%b %d, %Y',             # 'Oct 25 2006', 'Oct 25, 2006'
    '%d %b %Y', '%d %b, %Y',             # '25 Oct 2006', '25 Oct, 2006'
    '%B %d %Y', '%B %d, %Y',             # 'October 25 2006', 'October 25, 2006'
    '%d %B %Y', '%d %B, %Y',             # '25 October 2006', '25 October, 2006'
]

A list of formats that will be accepted when inputting data on a date field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATETIME_INPUT_FORMATS and TIME_INPUT_FORMATS.

DATETIME_FORMAT Default: 'N j, Y, P' (e.g. Feb. 4, 2003, 4 p.m.) The default formatting to use for displaying datetime fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT, TIME_FORMAT and SHORT_DATETIME_FORMAT.

DATETIME_INPUT_FORMATS Default:

[
    '%Y-%m-%d %H:%M:%S',     # '2006-10-25 14:30:59'
    '%Y-%m-%d %H:%M:%S.%f',  # '2006-10-25 14:30:59.000200'
    '%Y-%m-%d %H:%M',        # '2006-10-25 14:30'
    '%m/%d/%Y %H:%M:%S',     # '10/25/2006 14:30:59'
    '%m/%d/%Y %H:%M:%S.%f',  # '10/25/2006 14:30:59.000200'
    '%m/%d/%Y %H:%M',        # '10/25/2006 14:30'
    '%m/%d/%y %H:%M:%S',     # '10/25/06 14:30:59'
    '%m/%d/%y %H:%M:%S.%f',  # '10/25/06 14:30:59.000200'
    '%m/%d/%y %H:%M',        # '10/25/06 14:30'
]

A list of formats that will be accepted when inputting data on a datetime field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. Date-only formats are not included as datetime fields will automatically try DATE_INPUT_FORMATS in last resort. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and TIME_INPUT_FORMATS.

DEBUG Default: False A boolean that turns on/off debug mode. Never deploy a site into production with DEBUG turned on. One of the main features of debug mode is the display of detailed error pages. If your app raises an exception when DEBUG is True, Django will display a detailed traceback, including a lot of metadata about your environment, such as all the currently defined Django settings (from settings.py).
As a security measure, Django will not include settings that might be sensitive, such as SECRET_KEY. Specifically, it will exclude any setting whose name includes any of the following: 'API' 'KEY' 'PASS' 'SECRET' 'SIGNATURE' 'TOKEN' Note that these are partial matches. 'PASS' will also match PASSWORD, just as 'TOKEN' will also match TOKENIZED and so on. Still, note that there are always going to be sections of your debug output that are inappropriate for public consumption. File paths, configuration options and the like all give attackers extra information about your server. It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server. Finally, if DEBUG is False, you also need to properly set the ALLOWED_HOSTS setting. Failing to do so will result in all requests being returned as “Bad Request (400)”. Note The default settings.py file created by django-admin startproject sets DEBUG = True for convenience. DEBUG_PROPAGATE_EXCEPTIONS Default: False If set to True, Django’s exception handling of view functions (handler500, or the debug view if DEBUG is True) and logging of 500 responses (django.request) is skipped and exceptions propagate upward. This can be useful for some test setups. It shouldn’t be used on a live site unless you want your web server (instead of Django) to generate “Internal Server Error” responses. In that case, make sure your server doesn’t show the stack trace or other sensitive information in the response. DECIMAL_SEPARATOR Default: '.' (Dot) Default decimal separator used when formatting decimal numbers. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. DEFAULT_AUTO_FIELD New in Django 3.2. Default: 'django.db.models.AutoField' Default primary key field type to use for models that don’t have a field with primary_key=True. Migrating auto-created through tables The value of DEFAULT_AUTO_FIELD will be respected when creating new auto-created through tables for many-to-many relationships. Unfortunately, the primary keys of existing auto-created through tables cannot currently be updated by the migrations framework. This means that if you switch the value of DEFAULT_AUTO_FIELD and then generate migrations, the primary keys of the related models will be updated, as will the foreign keys from the through table, but the primary key of the auto-created through table will not be migrated. In order to address this, you should add a RunSQL operation to your migrations to perform the required ALTER TABLE step. You can check the existing table name through sqlmigrate, dbshell, or with the field’s remote_field.through._meta.db_table property. Explicitly defined through models are already handled by the migrations system. Allowing automatic migrations for the primary key of existing auto-created through tables may be implemented at a later date. DEFAULT_CHARSET Default: 'utf-8' Default charset to use for all HttpResponse objects, if a MIME type isn’t manually specified. Used when constructing the Content-Type header. DEFAULT_EXCEPTION_REPORTER Default: 'django.views.debug.ExceptionReporter' Default exception reporter class to be used if none has been assigned to the HttpRequest instance yet. See Custom error reports. 
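A hedged sketch of pointing this setting at a custom reporter; the module path myproject.debug and the class name are hypothetical, and dropping the 'settings' key is just one illustrative customization:

# settings.py
DEFAULT_EXCEPTION_REPORTER = 'myproject.debug.CustomExceptionReporter'  # hypothetical path

# myproject/debug.py
from django.views.debug import ExceptionReporter

class CustomExceptionReporter(ExceptionReporter):
    def get_traceback_data(self):
        data = super().get_traceback_data()
        data.pop('settings', None)  # e.g. omit the settings dump from reports
        return data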
DEFAULT_EXCEPTION_REPORTER_FILTER Default: 'django.views.debug.SafeExceptionReporterFilter' Default exception reporter filter class to be used if none has been assigned to the HttpRequest instance yet. See Filtering error reports. DEFAULT_FILE_STORAGE Default: 'django.core.files.storage.FileSystemStorage' Default file storage class to be used for any file-related operations that don’t specify a particular storage system. See Managing files. DEFAULT_FROM_EMAIL Default: 'webmaster@localhost' Default email address to use for various automated correspondence from the site manager(s). This doesn’t include error messages sent to ADMINS and MANAGERS; for that, see SERVER_EMAIL. DEFAULT_INDEX_TABLESPACE Default: '' (Empty string) Default tablespace to use for indexes on fields that don’t specify one, if the backend supports it (see Tablespaces). DEFAULT_TABLESPACE Default: '' (Empty string) Default tablespace to use for models that don’t specify one, if the backend supports it (see Tablespaces). DISALLOWED_USER_AGENTS Default: [] (Empty list) List of compiled regular expression objects representing User-Agent strings that are not allowed to visit any page, systemwide. Use this for bots/crawlers. This is only used if CommonMiddleware is installed (see Middleware). EMAIL_BACKEND Default: 'django.core.mail.backends.smtp.EmailBackend' The backend to use for sending emails. For the list of available backends see Sending email. EMAIL_FILE_PATH Default: Not defined The directory used by the file email backend to store output files. EMAIL_HOST Default: 'localhost' The host to use for sending email. See also EMAIL_PORT. EMAIL_HOST_PASSWORD Default: '' (Empty string) Password to use for the SMTP server defined in EMAIL_HOST. This setting is used in conjunction with EMAIL_HOST_USER when authenticating to the SMTP server. If either of these settings is empty, Django won’t attempt authentication. See also EMAIL_HOST_USER. EMAIL_HOST_USER Default: '' (Empty string) Username to use for the SMTP server defined in EMAIL_HOST. If empty, Django won’t attempt authentication. See also EMAIL_HOST_PASSWORD. EMAIL_PORT Default: 25 Port to use for the SMTP server defined in EMAIL_HOST. EMAIL_SUBJECT_PREFIX Default: '[Django] ' Subject-line prefix for email messages sent with django.core.mail.mail_admins or django.core.mail.mail_managers. You’ll probably want to include the trailing space. EMAIL_USE_LOCALTIME Default: False Whether to send the SMTP Date header of email messages in the local time zone (True) or in UTC (False). EMAIL_USE_TLS Default: False Whether to use a TLS (secure) connection when talking to the SMTP server. This is used for explicit TLS connections, generally on port 587. If you are experiencing hanging connections, see the implicit TLS setting EMAIL_USE_SSL. EMAIL_USE_SSL Default: False Whether to use an implicit TLS (secure) connection when talking to the SMTP server. In most email documentation this type of TLS connection is referred to as SSL. It is generally used on port 465. If you are experiencing problems, see the explicit TLS setting EMAIL_USE_TLS. Note that EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True. EMAIL_SSL_CERTFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted certificate chain file to use for the SSL connection. 
EMAIL_SSL_KEYFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted private key file to use for the SSL connection. Note that setting EMAIL_SSL_CERTFILE and EMAIL_SSL_KEYFILE doesn’t result in any certificate checking. They’re passed to the underlying SSL connection. Please refer to the documentation of Python’s ssl.wrap_socket() function for details on how the certificate chain file and private key file are handled.

EMAIL_TIMEOUT Default: None Specifies a timeout in seconds for blocking operations like the connection attempt.

FILE_UPLOAD_HANDLERS Default:

[
    'django.core.files.uploadhandler.MemoryFileUploadHandler',
    'django.core.files.uploadhandler.TemporaryFileUploadHandler',
]

A list of handlers to use for uploading. Changing this setting allows complete customization (even replacement) of Django’s upload process. See Managing files for details.

FILE_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size (in bytes) that an upload will be before it gets streamed to the file system. See Managing files for details. See also DATA_UPLOAD_MAX_MEMORY_SIZE.

FILE_UPLOAD_DIRECTORY_PERMISSIONS Default: None The numeric mode to apply to directories created in the process of uploading files. This setting also determines the default permissions for collected static directories when using the collectstatic management command. See collectstatic for details on overriding it. This value mirrors the functionality and caveats of the FILE_UPLOAD_PERMISSIONS setting.

FILE_UPLOAD_PERMISSIONS Default: 0o644 The numeric mode (i.e. 0o644) to set newly uploaded files to. For more information about what these modes mean, see the documentation for os.chmod(). If None, you’ll get operating-system dependent behavior. On most platforms, temporary files will have a mode of 0o600, and files saved from memory will be saved using the system’s standard umask. For security reasons, these permissions aren’t applied to the temporary files that are stored in FILE_UPLOAD_TEMP_DIR. This setting also determines the default permissions for collected static files when using the collectstatic management command. See collectstatic for details on overriding it. Warning Always prefix the mode with 0o. If you’re not familiar with file modes, please note that the 0o prefix is very important: it indicates an octal number, which is the way that modes must be specified. If you try to use 644, you’ll get totally incorrect behavior.

FILE_UPLOAD_TEMP_DIR Default: None The directory to store data to (typically files larger than FILE_UPLOAD_MAX_MEMORY_SIZE) temporarily while uploading files. If None, Django will use the standard temporary directory for the operating system. For example, this will default to /tmp on *nix-style operating systems. See Managing files for details.

FIRST_DAY_OF_WEEK Default: 0 (Sunday) A number representing the first day of the week. This is especially useful when displaying a calendar. This value is only used when not using format internationalization, or when a format cannot be found for the current locale. The value must be an integer from 0 to 6, where 0 means Sunday, 1 means Monday and so on.

FIXTURE_DIRS Default: [] (Empty list) List of directories searched for fixture files, in addition to the fixtures directory of each application, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. See Providing data with fixtures and Fixture loading.
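For example, a minimal FIXTURE_DIRS sketch; BASE_DIR is assumed to be the pathlib.Path that startproject defines, and the second path is a placeholder:

FIXTURE_DIRS = [
    BASE_DIR / 'fixtures',            # project-wide fixtures
    '/usr/share/myproject/fixtures',  # placeholder absolute path
]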
FORCE_SCRIPT_NAME Default: None If not None, this will be used as the value of the SCRIPT_NAME environment variable in any HTTP request. This setting can be used to override the server-provided value of SCRIPT_NAME, which may be a rewritten version of the preferred value or not supplied at all. It is also used by django.setup() to set the URL resolver script prefix outside of the request/response cycle (e.g. in management commands and standalone scripts) to generate correct URLs when SCRIPT_NAME is not /.

FORM_RENDERER Default: 'django.forms.renderers.DjangoTemplates' The class that renders forms and form widgets. It must implement the low-level render API. Included form renderers are: 'django.forms.renderers.DjangoTemplates' and 'django.forms.renderers.Jinja2'.

FORMAT_MODULE_PATH Default: None A full Python path to a Python package that contains custom format definitions for project locales. If not None, Django will check for a formats.py file, under the directory named as the current locale, and will use the formats defined in this file. For example, if FORMAT_MODULE_PATH is set to mysite.formats, and current language is en (English), Django will expect a directory tree like:

mysite/
    formats/
        __init__.py
        en/
            __init__.py
            formats.py

You can also set this setting to a list of Python paths, for example:

FORMAT_MODULE_PATH = [
    'mysite.formats',
    'some_app.formats',
]

When Django searches for a certain format, it will go through all given Python paths until it finds a module that actually defines the given format. This means that formats defined in packages farther up in the list will take precedence over the same formats in packages farther down. Available formats are: DATE_FORMAT, DATE_INPUT_FORMATS, DATETIME_FORMAT, DATETIME_INPUT_FORMATS, DECIMAL_SEPARATOR, FIRST_DAY_OF_WEEK, MONTH_DAY_FORMAT, NUMBER_GROUPING, SHORT_DATE_FORMAT, SHORT_DATETIME_FORMAT, THOUSAND_SEPARATOR, TIME_FORMAT, TIME_INPUT_FORMATS and YEAR_MONTH_FORMAT.

IGNORABLE_404_URLS Default: [] (Empty list) List of compiled regular expression objects describing URLs that should be ignored when reporting HTTP 404 errors via email (see How to manage error reporting). Regular expressions are matched against request's full paths (including query string, if any). Use this if your site does not provide a commonly requested file such as favicon.ico or robots.txt. This is only used if BrokenLinkEmailsMiddleware is enabled (see Middleware).

INSTALLED_APPS Default: [] (Empty list) A list of strings designating all applications that are enabled in this Django installation. Each string should be a dotted Python path to: an application configuration class (preferred), or a package containing an application. Learn more about application configurations. Use the application registry for introspection: your code should never access INSTALLED_APPS directly; use django.apps.apps instead. Application names and labels must be unique in INSTALLED_APPS. Application names (the dotted Python path to the application package) must be unique; there is no way to include the same application twice, short of duplicating its code under another name. Application labels (by default the final part of the name) must be unique too. For example, you can’t include both django.contrib.auth and myproject.auth. However, you can relabel an application with a custom configuration that defines a different label. These rules apply regardless of whether INSTALLED_APPS references application configuration classes or application packages.
When several applications provide different versions of the same resource (template, static file, management command, translation), the application listed first in INSTALLED_APPS has precedence. INTERNAL_IPS Default: [] (Empty list) A list of IP addresses, as strings, that: Allow the debug() context processor to add some variables to the template context. Can use the admindocs bookmarklets even if not logged in as a staff user. Are marked as “internal” (as opposed to “EXTERNAL”) in AdminEmailHandler emails. LANGUAGE_CODE Default: 'en-us' A string representing the language code for this installation. This should be in standard language ID format. For example, U.S. English is "en-us". See also the list of language identifiers and Internationalization and localization. USE_I18N must be active for this setting to have any effect. It serves two purposes: If the locale middleware isn’t in use, it decides which translation is served to all users. If the locale middleware is active, it provides a fallback language in case the user’s preferred language can’t be determined or is not supported by the website. It also provides the fallback translation when a translation for a given literal doesn’t exist for the user’s preferred language. See How Django discovers language preference for more details. LANGUAGE_COOKIE_AGE Default: None (expires at browser close) The age of the language cookie, in seconds. LANGUAGE_COOKIE_DOMAIN Default: None The domain to use for the language cookie. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. Be cautious when updating this setting on a production site. If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies that have the old domain will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting) and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the language cookie. If this is set to True, client-side JavaScript will not be able to access the language cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. LANGUAGE_COOKIE_NAME Default: 'django_language' The name of the cookie to use for the language cookie. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Internationalization and localization. LANGUAGE_COOKIE_PATH Default: '/' The path set on the language cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths and each instance will only see its own language cookie. Be cautious when updating this setting on a production site. If you update this setting to use a deeper path than it previously used, existing user cookies that have the old path will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. 
The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting), and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one.

LANGUAGE_COOKIE_SAMESITE Default: None The value of the SameSite flag on the language cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite.

LANGUAGE_COOKIE_SECURE Default: False Whether to use a secure cookie for the language cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection.

LANGUAGES Default: A list of all available languages. This list is continually growing and including a copy here would inevitably become rapidly out of date. You can see the current list of translated languages by looking in django/conf/global_settings.py. The list is a list of two-tuples in the format (language code, language name) – for example, ('ja', 'Japanese'). This specifies which languages are available for language selection. See Internationalization and localization. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, you can mark the language names as translation strings using the gettext_lazy() function. Here’s a sample settings file:

from django.utils.translation import gettext_lazy as _

LANGUAGES = [
    ('de', _('German')),
    ('en', _('English')),
]

LANGUAGES_BIDI Default: A list of all language codes that are written right-to-left. You can see the current list of these languages by looking in django/conf/global_settings.py. The list contains language codes for languages that are written right-to-left. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, the list of bidirectional languages may contain language codes which are not enabled on a given site.

LOCALE_PATHS Default: [] (Empty list) A list of directories where Django looks for translation files. See How Django discovers translations. Example:

LOCALE_PATHS = [
    '/home/www/project/common_files/locale',
    '/var/local/translations/locale',
]

Django will look within each of these paths for the <locale_code>/LC_MESSAGES directories containing the actual translation files.

LOGGING Default: A logging configuration dictionary. A data structure containing configuration information. The contents of this data structure will be passed as the argument to the configuration method described in LOGGING_CONFIG. Among other things, the default logging configuration passes HTTP 500 server errors to an email log handler when DEBUG is False. See also Configuring logging. You can see the default logging configuration by looking in django/utils/log.py.

LOGGING_CONFIG Default: 'logging.config.dictConfig' A path to a callable that will be used to configure logging in the Django project. Points at an instance of Python’s dictConfig configuration method by default. If you set LOGGING_CONFIG to None, the logging configuration process will be skipped.

MANAGERS Default: [] (Empty list) A list in the same format as ADMINS that specifies who should get broken link notifications when BrokenLinkEmailsMiddleware is enabled.
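A common pattern, shown here as a hedged sketch with placeholder addresses, is to reuse the ADMINS list:

ADMINS = [('John', 'john@example.com')]
MANAGERS = ADMINS  # broken-link reports go to the same people as error reports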
MEDIA_ROOT Default: '' (Empty string) Absolute filesystem path to the directory that will hold user-uploaded files. Example: "/var/www/example.com/media/" See also MEDIA_URL. Warning MEDIA_ROOT and STATIC_ROOT must have different values. Before STATIC_ROOT was introduced, it was common to rely or fallback on MEDIA_ROOT to also serve static files; however, since this can have serious security implications, there is a validation check to prevent it. MEDIA_URL Default: '' (Empty string) URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set to a non-empty value. You will need to configure these files to be served in both development and production environments. If you want to use {{ MEDIA_URL }} in your templates, add 'django.template.context_processors.media' in the 'context_processors' option of TEMPLATES. Example: "http://media.example.com/" Warning There are security risks if you are accepting uploaded content from untrusted users! See the security guide’s topic on User-uploaded content for mitigation details. Warning MEDIA_URL and STATIC_URL must have different values. See MEDIA_ROOT for more details. Note If MEDIA_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. MIDDLEWARE Default: None A list of middleware to use. See Middleware. MIGRATION_MODULES Default: {} (Empty dictionary) A dictionary specifying the package where migration modules can be found on a per-app basis. The default value of this setting is an empty dictionary, but the default package name for migration modules is migrations. Example: {'blog': 'blog.db_migrations'} In this case, migrations pertaining to the blog app will be contained in the blog.db_migrations package. If you provide the app_label argument, makemigrations will automatically create the package if it doesn’t already exist. When you supply None as a value for an app, Django will consider the app as an app without migrations regardless of an existing migrations submodule. This can be used, for example, in a test settings file to skip migrations while testing (tables will still be created for the apps’ models). To disable migrations for all apps during tests, you can set the MIGRATE to False instead. If MIGRATION_MODULES is used in your general project settings, remember to use the migrate --run-syncdb option if you want to create tables for the app. MONTH_DAY_FORMAT Default: 'F j' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the month and day are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given day displays the day and month. Different locales have different formats. For example, U.S. English would say “January 1,” whereas Spanish might say “1 Enero.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and YEAR_MONTH_FORMAT. NUMBER_GROUPING Default: 0 Number of digits grouped together on the integer part of a number. Common use is to display a thousand separator. If this setting is 0, then no grouping will be applied to the number. 
If this setting is greater than 0, then THOUSAND_SEPARATOR will be used as the separator between those groups. Some locales use non-uniform digit grouping, e.g. 10,00,00,000 in en_IN. For this case, you can provide a sequence with the number of digit group sizes to be applied. The first number defines the size of the group preceding the decimal delimiter, and each number that follows defines the size of preceding groups. If the sequence is terminated with -1, no further grouping is performed. If the sequence terminates with a 0, the last group size is used for the remainder of the number. Example tuple for en_IN: NUMBER_GROUPING = (3, 2, 0) Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also DECIMAL_SEPARATOR, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. PREPEND_WWW Default: False Whether to prepend the “www.” subdomain to URLs that don’t have it. This is only used if CommonMiddleware is installed (see Middleware). See also APPEND_SLASH. ROOT_URLCONF Default: Not defined A string representing the full Python import path to your root URLconf, for example "mydjangoapps.urls". Can be overridden on a per-request basis by setting the attribute urlconf on the incoming HttpRequest object. See How Django processes a request for details. SECRET_KEY Default: '' (Empty string) A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value. django-admin startproject automatically adds a randomly-generated SECRET_KEY to each new project. Uses of the key shouldn’t assume that it’s text or bytes. Every use should go through force_str() or force_bytes() to convert it to the desired type. Django will refuse to start if SECRET_KEY is not set. Warning Keep this value secret. Running Django with a known SECRET_KEY defeats many of Django’s security protections, and can lead to privilege escalation and remote code execution vulnerabilities. The secret key is used for: All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash(). All messages if you are using CookieStorage or FallbackStorage. All PasswordResetView tokens. Any usage of cryptographic signing, unless a different key is provided. If you rotate your secret key, all of the above will be invalidated. Secret keys are not used for passwords of users and key rotation will not affect them. Note The default settings.py file created by django-admin startproject creates a unique SECRET_KEY for convenience. SECURE_CONTENT_TYPE_NOSNIFF Default: True If True, the SecurityMiddleware sets the X-Content-Type-Options: nosniff header on all responses that do not already have it. SECURE_CROSS_ORIGIN_OPENER_POLICY New in Django 4.0. Default: 'same-origin' Unless set to None, the SecurityMiddleware sets the Cross-Origin Opener Policy header on all responses that do not already have it to the value provided. SECURE_HSTS_INCLUDE_SUBDOMAINS Default: False If True, the SecurityMiddleware adds the includeSubDomains directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. Warning Setting this incorrectly can irreversibly (for the value of SECURE_HSTS_SECONDS) break your site. Read the HTTP Strict Transport Security documentation first. 
SECURE_HSTS_PRELOAD Default: False If True, the SecurityMiddleware adds the preload directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. SECURE_HSTS_SECONDS Default: 0 If set to a non-zero integer value, the SecurityMiddleware sets the HTTP Strict Transport Security header on all responses that do not already have it. Warning Setting this incorrectly can irreversibly (for some time) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_PROXY_SSL_HEADER Default: None A tuple representing an HTTP header/value combination that signifies a request is secure. This controls the behavior of the request object’s is_secure() method. By default, is_secure() determines if a request is secure by confirming that a requested URL uses https://. This method is important for Django’s CSRF protection, and it may be used by your own code or third-party apps. If your Django app is behind a proxy, though, the proxy may be “swallowing” whether the original request uses HTTPS or not. If there is a non-HTTPS connection between the proxy and Django then is_secure() would always return False – even for requests that were made via HTTPS by the end user. In contrast, if there is an HTTPS connection between the proxy and Django then is_secure() would always return True – even for requests that were made originally via HTTP. In this situation, configure your proxy to set a custom HTTP header that tells Django whether the request came in via HTTPS, and set SECURE_PROXY_SSL_HEADER so that Django knows what header to look for. Set a tuple with two elements – the name of the header to look for and the required value. For example: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') This tells Django to trust the X-Forwarded-Proto header that comes from our proxy, and any time its value is 'https', then the request is guaranteed to be secure (i.e., it originally came in via HTTPS). You should only set this setting if you control your proxy or have some other guarantee that it sets/strips this header appropriately. Note that the header needs to be in the format as used by request.META – all caps and likely starting with HTTP_. (Remember, Django automatically adds 'HTTP_' to the start of x-header names before making the header available in request.META.) Warning Modifying this setting can compromise your site’s security. Ensure you fully understand your setup before changing it. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your Django app is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to Django, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None and find another way of determining HTTPS, perhaps via custom middleware. SECURE_REDIRECT_EXEMPT Default: [] (Empty list) If a URL path matches a regular expression in this list, the request will not be redirected to HTTPS. The SecurityMiddleware strips leading slashes from URL paths, so patterns shouldn’t include them, e.g. SECURE_REDIRECT_EXEMPT = [r'^no-ssl/$', …]. If SECURE_SSL_REDIRECT is False, this setting has no effect. 
SECURE_REFERRER_POLICY Default: 'same-origin' If configured, the SecurityMiddleware sets the Referrer Policy header on all responses that do not already have it to the value provided. SECURE_SSL_HOST Default: None If a string (e.g. secure.example.com), all SSL redirects will be directed to this host rather than the originally-requested host (e.g. www.example.com). If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_SSL_REDIRECT Default: False If True, the SecurityMiddleware redirects all non-HTTPS requests to HTTPS (except for those URLs matching a regular expression listed in SECURE_REDIRECT_EXEMPT). Note If turning this to True causes infinite redirects, it probably means your site is running behind a proxy and can’t tell which requests are secure and which are not. Your proxy likely sets a header to indicate secure requests; you can correct the problem by finding out what that header is and configuring the SECURE_PROXY_SSL_HEADER setting accordingly. SERIALIZATION_MODULES Default: Not defined A dictionary of modules containing serializer definitions (provided as strings), keyed by a string identifier for that serialization type. For example, to define a YAML serializer, use: SERIALIZATION_MODULES = {'yaml': 'path.to.yaml_serializer'} SERVER_EMAIL Default: 'root@localhost' The email address that error messages come from, such as those sent to ADMINS and MANAGERS. Why are my emails sent from a different address? This address is used only for error messages. It is not the address that regular email messages sent with send_mail() come from; for that, see DEFAULT_FROM_EMAIL. SHORT_DATE_FORMAT Default: 'm/d/Y' (e.g. 12/31/2003) An available formatting that can be used for displaying date fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATETIME_FORMAT. SHORT_DATETIME_FORMAT Default: 'm/d/Y P' (e.g. 12/31/2003 4 p.m.) An available formatting that can be used for displaying datetime fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATE_FORMAT. SIGNING_BACKEND Default: 'django.core.signing.TimestampSigner' The backend used for signing cookies and other data. See also the Cryptographic signing documentation. SILENCED_SYSTEM_CHECKS Default: [] (Empty list) A list of identifiers of messages generated by the system check framework (i.e. ["models.W001"]) that you wish to permanently acknowledge and ignore. Silenced checks will not be output to the console. See also the System check framework documentation. TEMPLATES Default: [] (Empty list) A list containing the settings for all template engines to be used with Django. Each item of the list is a dictionary containing the options for an individual engine. Here’s a setup that tells the Django template engine to load templates from the templates subdirectory inside each installed application: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'APP_DIRS': True, }, ] The following options are available for all backends. BACKEND Default: Not defined The template backend to use. 
The built-in template backends are: 'django.template.backends.django.DjangoTemplates' 'django.template.backends.jinja2.Jinja2' You can use a template backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path (i.e. 'mypackage.whatever.Backend'). NAME Default: see below The alias for this particular template engine. It’s an identifier that allows selecting an engine for rendering. Aliases must be unique across all configured template engines. It defaults to the name of the module defining the engine class, i.e. the next to last piece of BACKEND, when it isn’t provided. For example if the backend is 'mypackage.whatever.Backend' then its default name is 'whatever'. DIRS Default: [] (Empty list) Directories where the engine should look for template source files, in search order. APP_DIRS Default: False Whether the engine should look for template source files inside installed applications. Note The default settings.py file created by django-admin startproject sets 'APP_DIRS': True. OPTIONS Default: {} (Empty dict) Extra parameters to pass to the template backend. Available parameters vary depending on the template backend. See DjangoTemplates and Jinja2 for the options of the built-in backends. TEST_RUNNER Default: 'django.test.runner.DiscoverRunner' The name of the class to use for starting the test suite. See Using different testing frameworks. TEST_NON_SERIALIZED_APPS Default: [] (Empty list) In order to restore the database state between tests for TransactionTestCases and database backends without transactions, Django will serialize the contents of all apps when it starts the test run so it can then reload from that copy before running tests that need it. This slows down the startup time of the test runner; if you have apps that you know don’t need this feature, you can add their full names in here (e.g. 'django.contrib.contenttypes') to exclude them from this serialization process. THOUSAND_SEPARATOR Default: ',' (Comma) Default thousand separator used when formatting numbers. This setting is used only when USE_THOUSAND_SEPARATOR is True and NUMBER_GROUPING is greater than 0. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, DECIMAL_SEPARATOR and USE_THOUSAND_SEPARATOR. TIME_FORMAT Default: 'P' (e.g. 4 p.m.) The default formatting to use for displaying time fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT and DATETIME_FORMAT. TIME_INPUT_FORMATS Default: [ '%H:%M:%S', # '14:30:59' '%H:%M:%S.%f', # '14:30:59.000200' '%H:%M', # '14:30' ] A list of formats that will be accepted when inputting data on a time field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and DATETIME_INPUT_FORMATS. TIME_ZONE Default: 'America/Chicago' A string representing the time zone for this installation. See the list of time zones. Note Since Django was first released with the TIME_ZONE set to 'America/Chicago', the global setting (used if nothing is defined in your project’s settings.py) remains 'America/Chicago' for backwards compatibility. New project templates default to 'UTC'. 
Note that this isn’t necessarily the time zone of the server. For example, one server may serve multiple Django-powered sites, each with a separate time zone setting. When USE_TZ is False, this is the time zone in which Django will store all datetimes. When USE_TZ is True, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms. On Unix environments (where time.tzset() is implemented), Django sets the os.environ['TZ'] variable to the time zone you specify in the TIME_ZONE setting. Thus, all your views and models will automatically operate in this time zone. However, Django won’t set the TZ environment variable if you’re using the manual configuration option as described in manually configuring settings. If Django doesn’t set the TZ environment variable, it’s up to you to ensure your processes are running in the correct environment. Note Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, TIME_ZONE must be set to match the system time zone. USE_DEPRECATED_PYTZ New in Django 4.0. Default: False A boolean that specifies whether to use pytz, rather than zoneinfo, as the default time zone implementation. Deprecated since version 4.0: This transitional setting is deprecated. Support for using pytz will be removed in Django 5.0. USE_I18N Default: True A boolean that specifies whether Django’s translation system should be enabled. This provides a way to turn it off, for performance. If this is set to False, Django will make some optimizations so as not to load the translation machinery. See also LANGUAGE_CODE, USE_L10N and USE_TZ. Note The default settings.py file created by django-admin startproject includes USE_I18N = True for convenience. USE_L10N Default: True A boolean that specifies if localized formatting of data will be enabled by default or not. If this is set to True, e.g. Django will display numbers and dates using the format of the current locale. See also LANGUAGE_CODE, USE_I18N and USE_TZ. Changed in Django 4.0: In older versions, the default value is False. Deprecated since version 4.0: This setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example Django will display numbers and dates using the format of the current locale. USE_THOUSAND_SEPARATOR Default: False A boolean that specifies whether to display numbers using a thousand separator. When set to True and USE_L10N is also True, Django will format numbers using the NUMBER_GROUPING and THOUSAND_SEPARATOR settings. These settings may also be dictated by the locale, which takes precedence. See also DECIMAL_SEPARATOR, NUMBER_GROUPING and THOUSAND_SEPARATOR. USE_TZ Default: False Note In Django 5.0, the default value will change from False to True. A boolean that specifies if datetimes will be timezone-aware by default or not. If this is set to True, Django will use timezone-aware datetimes internally. When USE_TZ is False, Django will use naive datetimes in local time, except when parsing ISO 8601 formatted strings, where timezone information will always be retained if present. See also TIME_ZONE, USE_I18N and USE_L10N. Note The default settings.py file created by django-admin startproject includes USE_TZ = True for convenience. USE_X_FORWARDED_HOST Default: False A boolean that specifies whether to use the X-Forwarded-Host header in preference to the Host header. This should only be enabled if a proxy which sets this header is in use. 
This setting takes priority over USE_X_FORWARDED_PORT. Per RFC 7239#section-5.3, the X-Forwarded-Host header can include the port number, in which case you shouldn’t use USE_X_FORWARDED_PORT. USE_X_FORWARDED_PORT Default: False A boolean that specifies whether to use the X-Forwarded-Port header in preference to the SERVER_PORT META variable. This should only be enabled if a proxy which sets this header is in use. USE_X_FORWARDED_HOST takes priority over this setting. WSGI_APPLICATION Default: None The full Python path of the WSGI application object that Django’s built-in servers (e.g. runserver) will use. The django-admin startproject management command will create a standard wsgi.py file with an application callable in it, and point this setting to that application. If not set, the return value of django.core.wsgi.get_wsgi_application() will be used. In this case, the behavior of runserver will be identical to previous Django versions. YEAR_MONTH_FORMAT Default: 'F Y' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the year and month are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given month displays the month and the year. Different locales have different formats. For example, U.S. English would say “January 2006,” whereas another locale might say “2006/January.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and MONTH_DAY_FORMAT. X_FRAME_OPTIONS Default: 'DENY' The default value for the X-Frame-Options header used by XFrameOptionsMiddleware. See the clickjacking protection documentation. Auth Settings for django.contrib.auth. AUTHENTICATION_BACKENDS Default: ['django.contrib.auth.backends.ModelBackend'] A list of authentication backend classes (as strings) to use when attempting to authenticate a user. See the authentication backends documentation for details. AUTH_USER_MODEL Default: 'auth.User' The model to use to represent a User. See Substituting a custom User model. Warning You cannot change the AUTH_USER_MODEL setting during the lifetime of a project (i.e. once you have made and migrated models that depend on it) without serious effort. It is intended to be set at the project start, and the model it refers to must be available in the first migration of the app that it lives in. See Substituting a custom User model for more details. LOGIN_REDIRECT_URL Default: '/accounts/profile/' The URL or named URL pattern where requests are redirected after login when the LoginView doesn’t get a next GET parameter. LOGIN_URL Default: '/accounts/login/' The URL or named URL pattern where requests are redirected for login when using the login_required() decorator, LoginRequiredMixin, or AccessMixin. LOGOUT_REDIRECT_URL Default: None The URL or named URL pattern where requests are redirected after logout if LogoutView doesn’t have a next_page attribute. If None, no redirect will be performed and the logout view will be rendered. PASSWORD_RESET_TIMEOUT Default: 259200 (3 days, in seconds) The number of seconds a password reset link is valid for. Used by the PasswordResetConfirmView. Note Reducing the value of this timeout doesn’t make any difference to the ability of an attacker to brute-force a password reset token. 
Tokens are designed to be safe from brute-forcing without any timeout. This timeout exists to protect against some unlikely attack scenarios, such as someone gaining access to email archives that may contain old, unused password reset tokens. PASSWORD_HASHERS See How Django stores passwords. Default: [ 'django.contrib.auth.hashers.PBKDF2PasswordHasher', 'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher', 'django.contrib.auth.hashers.Argon2PasswordHasher', 'django.contrib.auth.hashers.BCryptSHA256PasswordHasher', ] AUTH_PASSWORD_VALIDATORS Default: [] (Empty list) The list of validators that are used to check the strength of user’s passwords. See Password validation for more details. By default, no validation is performed and all passwords are accepted. Messages Settings for django.contrib.messages. MESSAGE_LEVEL Default: messages.INFO Sets the minimum message level that will be recorded by the messages framework. See message levels for more details. Important If you override MESSAGE_LEVEL in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_LEVEL = message_constants.DEBUG If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. MESSAGE_STORAGE Default: 'django.contrib.messages.storage.fallback.FallbackStorage' Controls where Django stores message data. Valid values are: 'django.contrib.messages.storage.fallback.FallbackStorage' 'django.contrib.messages.storage.session.SessionStorage' 'django.contrib.messages.storage.cookie.CookieStorage' See message storage backends for more details. The backends that use cookies – CookieStorage and FallbackStorage – use the value of SESSION_COOKIE_DOMAIN, SESSION_COOKIE_SECURE and SESSION_COOKIE_HTTPONLY when setting their cookies. MESSAGE_TAGS Default: { messages.DEBUG: 'debug', messages.INFO: 'info', messages.SUCCESS: 'success', messages.WARNING: 'warning', messages.ERROR: 'error', } This sets the mapping of message level to message tag, which is typically rendered as a CSS class in HTML. If you specify a value, it will extend the default. This means you only have to specify those values which you need to override. See Displaying messages above for more details. Important If you override MESSAGE_TAGS in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_TAGS = {message_constants.INFO: ''} If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. Sessions Settings for django.contrib.sessions. SESSION_CACHE_ALIAS Default: 'default' If you’re using cache-based session storage, this selects the cache to use. SESSION_COOKIE_AGE Default: 1209600 (2 weeks, in seconds) The age of session cookies, in seconds. SESSION_COOKIE_DOMAIN Default: None The domain to use for session cookies. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. To use cross-domain cookies with CSRF_USE_SESSIONS, you must include a leading dot (e.g. ".example.com") to accommodate the CSRF middleware’s referer checking. Be cautious when updating this setting on a production site. 
If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies will be set to the old domain. This may result in them being unable to log in as long as these cookies persist. This setting also affects cookies set by django.contrib.messages. SESSION_COOKIE_HTTPONLY Default: True Whether to use the HttpOnly flag on the session cookie. If this is set to True, client-side JavaScript will not be able to access the session cookie. HttpOnly is a flag included in a Set-Cookie HTTP response header. It’s part of the RFC 6265#section-4.1.2.6 standard for cookies and can be a useful way to mitigate the risk of a client-side script accessing the protected cookie data. This makes it less trivial for an attacker to escalate a cross-site scripting vulnerability into full hijacking of a user’s session. There aren’t many good reasons for turning this off. Your code shouldn’t read session cookies from JavaScript. SESSION_COOKIE_NAME Default: 'sessionid' The name of the cookie to use for sessions. This can be whatever you want (as long as it’s different from the other cookie names in your application). SESSION_COOKIE_PATH Default: '/' The path set on the session cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own session cookie. SESSION_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the session cookie. This flag prevents the cookie from being sent in cross-site requests, thus preventing CSRF attacks and making some methods of stealing the session cookie impossible. Possible values for the setting are: 'Strict': prevents the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user won’t be able to access the project. A bank website, however, most likely doesn’t want to allow any transactional pages to be linked from external sites, so the 'Strict' flag would be appropriate. 'Lax' (default): provides a balance between security and usability for websites that want to maintain a user’s logged-in session after the user arrives from an external link. In the GitHub scenario, the session cookie would be allowed when following a regular link from an external website and be blocked in CSRF-prone request methods (e.g. POST). 'None' (string): the session cookie will be sent with all same-site and cross-site requests. False: disables the flag. Note Modern browsers provide a more secure default policy for the SameSite flag and will assume Lax for cookies without an explicit value set. SESSION_COOKIE_SECURE Default: False Whether to use a secure cookie for the session cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. Leaving this setting off isn’t a good idea because an attacker could capture an unencrypted session cookie with a packet sniffer and use the cookie to hijack the user’s session. SESSION_ENGINE Default: 'django.contrib.sessions.backends.db' Controls where Django stores session data.
Included engines are: 'django.contrib.sessions.backends.db' 'django.contrib.sessions.backends.file' 'django.contrib.sessions.backends.cache' 'django.contrib.sessions.backends.cached_db' 'django.contrib.sessions.backends.signed_cookies' See Configuring the session engine for more details. SESSION_EXPIRE_AT_BROWSER_CLOSE Default: False Whether to expire the session when the user closes their browser. See Browser-length sessions vs. persistent sessions. SESSION_FILE_PATH Default: None If you’re using file-based session storage, this sets the directory in which Django will store session data. When the default value (None) is used, Django will use the standard temporary directory for the system. SESSION_SAVE_EVERY_REQUEST Default: False Whether to save the session data on every request. If this is False (default), then the session data will only be saved if it has been modified – that is, if any of its dictionary values have been assigned or deleted. Empty sessions won’t be created, even if this setting is active. SESSION_SERIALIZER Default: 'django.contrib.sessions.serializers.JSONSerializer' Full import path of a serializer class to use for serializing session data. Included serializers are: 'django.contrib.sessions.serializers.PickleSerializer' 'django.contrib.sessions.serializers.JSONSerializer' See Session serialization for details, including a warning regarding possible remote code execution when using PickleSerializer. Sites Settings for django.contrib.sites. SITE_ID Default: Not defined The ID, as an integer, of the current site in the django_site database table. This is used so that application data can hook into specific sites and a single database can manage content for multiple sites. Static Files Settings for django.contrib.staticfiles. STATIC_ROOT Default: None The absolute path to the directory where collectstatic will collect static files for deployment. Example: "/var/www/example.com/static/" If the staticfiles contrib app is enabled (as in the default project template), the collectstatic management command will collect static files into this directory. See the how-to on managing static files for more details about usage. Warning This should be an initially empty destination directory for collecting your static files from their permanent locations into one directory for ease of deployment; it is not a place to store your static files permanently. You should do that in directories that will be found by staticfiles’s finders, which by default are 'static/' app sub-directories and any directories you include in STATICFILES_DIRS. STATIC_URL Default: None URL to use when referring to static files located in STATIC_ROOT. Example: "static/" or "http://static.example.com/" If not None, this will be used as the base path for asset definitions (the Media class) and the staticfiles app. It must end in a slash if set to a non-empty value. You may need to configure these files to be served in development and will definitely need to do so in production. Note If STATIC_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. STATICFILES_DIRS Default: [] (Empty list) This setting defines the additional locations the staticfiles app will traverse if the FileSystemFinder finder is enabled, e.g. if you use the collectstatic or findstatic management command or use the static file serving view.
This should be set to a list of strings that contain full paths to your additional files directory(ies) e.g.: STATICFILES_DIRS = [ "/home/special.polls.com/polls/static", "/home/polls.com/polls/static", "/opt/webfiles/common", ] Note that these paths should use Unix-style forward slashes, even on Windows (e.g. "C:/Users/user/mysite/extra_static_content"). Prefixes (optional) In case you want to refer to files in one of the locations with an additional namespace, you can optionally provide a prefix as (prefix, path) tuples, e.g.: STATICFILES_DIRS = [ # ... ("downloads", "/opt/webfiles/stats"), ] For example, assuming you have STATIC_URL set to 'static/', the collectstatic management command would collect the “stats” files in a 'downloads' subdirectory of STATIC_ROOT. This would allow you to refer to the local file '/opt/webfiles/stats/polls_20101022.tar.gz' with '/static/downloads/polls_20101022.tar.gz' in your templates, e.g.: <a href="{% static 'downloads/polls_20101022.tar.gz' %}"> STATICFILES_STORAGE Default: 'django.contrib.staticfiles.storage.StaticFilesStorage' The file storage engine to use when collecting static files with the collectstatic management command. A ready-to-use instance of the storage backend defined in this setting can be found at django.contrib.staticfiles.storage.staticfiles_storage. For an example, see Serving static files from a cloud service or CDN. STATICFILES_FINDERS Default: [ 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ] The list of finder backends that know how to find static files in various locations. The default will find files stored in the STATICFILES_DIRS setting (using django.contrib.staticfiles.finders.FileSystemFinder) and in a static subdirectory of each app (using django.contrib.staticfiles.finders.AppDirectoriesFinder). If multiple files with the same name are present, the first file that is found will be used. One finder is disabled by default: django.contrib.staticfiles.finders.DefaultStorageFinder. If added to your STATICFILES_FINDERS setting, it will look for static files in the default file storage as defined by the DEFAULT_FILE_STORAGE setting. Note When using the AppDirectoriesFinder finder, make sure your apps can be found by staticfiles by adding the app to the INSTALLED_APPS setting of your site. Static file finders are currently considered a private interface, and this interface is thus undocumented. 
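As a sketch, the disabled finder described above could be enabled by extending the default list (this assumes the default file storage is configured appropriately for static files):

STATICFILES_FINDERS = [
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # Also search the default file storage (disabled by default):
    'django.contrib.staticfiles.finders.DefaultStorageFinder',
]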
Core Settings Topical Index
Cache: CACHES, CACHE_MIDDLEWARE_ALIAS, CACHE_MIDDLEWARE_KEY_PREFIX, CACHE_MIDDLEWARE_SECONDS
Database: DATABASES, DATABASE_ROUTERS, DEFAULT_INDEX_TABLESPACE, DEFAULT_TABLESPACE
Debugging: DEBUG, DEBUG_PROPAGATE_EXCEPTIONS
Email: ADMINS, DEFAULT_CHARSET, DEFAULT_FROM_EMAIL, EMAIL_BACKEND, EMAIL_FILE_PATH, EMAIL_HOST, EMAIL_HOST_PASSWORD, EMAIL_HOST_USER, EMAIL_PORT, EMAIL_SSL_CERTFILE, EMAIL_SSL_KEYFILE, EMAIL_SUBJECT_PREFIX, EMAIL_TIMEOUT, EMAIL_USE_LOCALTIME, EMAIL_USE_TLS, MANAGERS, SERVER_EMAIL
Error reporting: DEFAULT_EXCEPTION_REPORTER, DEFAULT_EXCEPTION_REPORTER_FILTER, IGNORABLE_404_URLS, MANAGERS, SILENCED_SYSTEM_CHECKS
File uploads: DEFAULT_FILE_STORAGE, FILE_UPLOAD_HANDLERS, FILE_UPLOAD_MAX_MEMORY_SIZE, FILE_UPLOAD_PERMISSIONS, FILE_UPLOAD_TEMP_DIR, MEDIA_ROOT, MEDIA_URL
Forms: FORM_RENDERER
Globalization (i18n/l10n): DATE_FORMAT, DATE_INPUT_FORMATS, DATETIME_FORMAT, DATETIME_INPUT_FORMATS, DECIMAL_SEPARATOR, FIRST_DAY_OF_WEEK, FORMAT_MODULE_PATH, LANGUAGE_CODE, LANGUAGE_COOKIE_AGE, LANGUAGE_COOKIE_DOMAIN, LANGUAGE_COOKIE_HTTPONLY, LANGUAGE_COOKIE_NAME, LANGUAGE_COOKIE_PATH, LANGUAGE_COOKIE_SAMESITE, LANGUAGE_COOKIE_SECURE, LANGUAGES, LANGUAGES_BIDI, LOCALE_PATHS, MONTH_DAY_FORMAT, NUMBER_GROUPING, SHORT_DATE_FORMAT, SHORT_DATETIME_FORMAT, THOUSAND_SEPARATOR, TIME_FORMAT, TIME_INPUT_FORMATS, TIME_ZONE, USE_I18N, USE_L10N, USE_THOUSAND_SEPARATOR, USE_TZ, YEAR_MONTH_FORMAT
HTTP: DATA_UPLOAD_MAX_MEMORY_SIZE, DATA_UPLOAD_MAX_NUMBER_FIELDS, DEFAULT_CHARSET, DISALLOWED_USER_AGENTS, FORCE_SCRIPT_NAME, INTERNAL_IPS, MIDDLEWARE
HTTP security: SECURE_CONTENT_TYPE_NOSNIFF, SECURE_CROSS_ORIGIN_OPENER_POLICY, SECURE_HSTS_INCLUDE_SUBDOMAINS, SECURE_HSTS_PRELOAD, SECURE_HSTS_SECONDS, SECURE_PROXY_SSL_HEADER, SECURE_REDIRECT_EXEMPT, SECURE_REFERRER_POLICY, SECURE_SSL_HOST, SECURE_SSL_REDIRECT, SIGNING_BACKEND, USE_X_FORWARDED_HOST, USE_X_FORWARDED_PORT, WSGI_APPLICATION
Logging: LOGGING, LOGGING_CONFIG
Models: ABSOLUTE_URL_OVERRIDES, FIXTURE_DIRS, INSTALLED_APPS
Security: CSRF_COOKIE_DOMAIN, CSRF_COOKIE_NAME, CSRF_COOKIE_PATH, CSRF_COOKIE_SAMESITE, CSRF_COOKIE_SECURE, CSRF_FAILURE_VIEW, CSRF_HEADER_NAME, CSRF_TRUSTED_ORIGINS, CSRF_USE_SESSIONS (Cross Site Request Forgery protection), SECRET_KEY, X_FRAME_OPTIONS
Serialization: DEFAULT_CHARSET, SERIALIZATION_MODULES
Templates: TEMPLATES
Testing: Database: TEST, TEST_NON_SERIALIZED_APPS, TEST_RUNNER
URLs: APPEND_SLASH, PREPEND_WWW, ROOT_URLCONF
doc_23657
Returns a clone of self with given hyperparameters theta. Parameters theta : ndarray of shape (n_dims,) The hyperparameters.
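A minimal sketch, assuming this entry describes the clone_with_theta() method of scikit-learn's Gaussian-process kernels, where theta holds the log-transformed hyperparameters:

import numpy as np
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale=1.0)
# theta is log-transformed, so pass log(2.0) to get length_scale=2.0
new_kernel = kernel.clone_with_theta(np.log([2.0]))
print(new_kernel)  # RBF(length_scale=2)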
doc_23658
Raised by built-in function next() and an iterator’s __next__() method to signal that there are no further items produced by the iterator. The exception object has a single attribute value, which is given as an argument when constructing the exception, and defaults to None. When a generator or coroutine function returns, a new StopIteration instance is raised, and the value returned by the function is used as the value parameter to the constructor of the exception. If generator code directly or indirectly raises StopIteration, it is converted into a RuntimeError (retaining the StopIteration as the new exception’s cause). Changed in version 3.3: Added value attribute and the ability for generator functions to use it to return a value. Changed in version 3.5: Introduced the RuntimeError transformation via from __future__ import generator_stop, see PEP 479. Changed in version 3.7: Enable PEP 479 for all code by default: a StopIteration error raised in a generator is transformed into a RuntimeError.
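A short example of the value attribute: when a generator returns, the returned value is carried by the StopIteration instance:

def gen():
    yield 1
    return 42  # becomes StopIteration.value

g = gen()
print(next(g))  # 1
try:
    next(g)
except StopIteration as exc:
    print(exc.value)  # 42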
doc_23659
Like Flask.before_first_request(). Such a function is executed before the first request to the application. Parameters: f (Callable[[], None]) Return type: Callable[[], None]
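A minimal sketch, assuming this entry describes Blueprint.before_app_first_request() (the blueprint name and setup work below are placeholders):

from flask import Blueprint

bp = Blueprint('example', __name__)

@bp.before_app_first_request
def load_config():
    # Runs once, before the first request the application handles.
    print('performing one-time setup')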
doc_23660
Bases: object __call__(*args, **kwargs) Call self as a function.
doc_23661
Set the label for the x-axis. Parameters xlabel : str The label text. labelpad : float, default: rcParams["axes.labelpad"] (default: 4.0) Spacing in points from the Axes bounding box including ticks and tick labels. If None, the previous value is left as is. loc : {'left', 'center', 'right'}, default: rcParams["xaxis.labellocation"] (default: 'center') The label position. This is a high-level alternative for passing parameters x and horizontalalignment. Other Parameters **kwargs : Text properties Text properties control the appearance of the label. See also text Documents the properties supported by Text.
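A short usage example (the data and label text are arbitrary):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
ax.set_xlabel('time (s)', labelpad=8, loc='right')
plt.show()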
doc_23662
signal.SIG_DFL This is one of two standard signal handling options; it will simply perform the default function for the signal. For example, on most systems the default action for SIGQUIT is to dump core and exit, while the default action for SIGCHLD is to simply ignore it. signal.SIG_IGN This is another standard signal handler, which will simply ignore the given signal. signal.SIGABRT Abort signal from abort(3). signal.SIGALRM Timer signal from alarm(2). Availability: Unix. signal.SIGBREAK Interrupt from keyboard (CTRL + BREAK). Availability: Windows. signal.SIGBUS Bus error (bad memory access). Availability: Unix. signal.SIGCHLD Child process stopped or terminated. Availability: Unix. signal.SIGCLD Alias to SIGCHLD. signal.SIGCONT Continue the process if it is currently stopped Availability: Unix. signal.SIGFPE Floating-point exception. For example, division by zero. See also ZeroDivisionError is raised when the second argument of a division or modulo operation is zero. signal.SIGHUP Hangup detected on controlling terminal or death of controlling process. Availability: Unix. signal.SIGILL Illegal instruction. signal.SIGINT Interrupt from keyboard (CTRL + C). Default action is to raise KeyboardInterrupt. signal.SIGKILL Kill signal. It cannot be caught, blocked, or ignored. Availability: Unix. signal.SIGPIPE Broken pipe: write to pipe with no readers. Default action is to ignore the signal. Availability: Unix. signal.SIGSEGV Segmentation fault: invalid memory reference. signal.SIGTERM Termination signal. signal.SIGUSR1 User-defined signal 1. Availability: Unix. signal.SIGUSR2 User-defined signal 2. Availability: Unix. signal.SIGWINCH Window resize signal. Availability: Unix. SIG* All the signal numbers are defined symbolically. For example, the hangup signal is defined as signal.SIGHUP; the variable names are identical to the names used in C programs, as found in <signal.h>. The Unix man page for ‘signal()’ lists the existing signals (on some systems this is signal(2), on others the list is in signal(7)). Note that not all systems define the same set of signal names; only those names defined by the system are defined by this module. signal.CTRL_C_EVENT The signal corresponding to the Ctrl+C keystroke event. This signal can only be used with os.kill(). Availability: Windows. New in version 3.2. signal.CTRL_BREAK_EVENT The signal corresponding to the Ctrl+Break keystroke event. This signal can only be used with os.kill(). Availability: Windows. New in version 3.2. signal.NSIG One more than the number of the highest signal number. signal.ITIMER_REAL Decrements interval timer in real time, and delivers SIGALRM upon expiration. signal.ITIMER_VIRTUAL Decrements interval timer only when the process is executing, and delivers SIGVTALRM upon expiration. signal.ITIMER_PROF Decrements interval timer both when the process executes and when the system is executing on behalf of the process. Coupled with ITIMER_VIRTUAL, this timer is usually used to profile the time spent by the application in user and kernel space. SIGPROF is delivered upon expiration. signal.SIG_BLOCK A possible value for the how parameter to pthread_sigmask() indicating that signals are to be blocked. New in version 3.3. signal.SIG_UNBLOCK A possible value for the how parameter to pthread_sigmask() indicating that signals are to be unblocked. New in version 3.3. signal.SIG_SETMASK A possible value for the how parameter to pthread_sigmask() indicating that the signal mask is to be replaced. New in version 3.3. 
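A minimal Unix-only sketch using the blocking constants above (pthread_sigmask() itself is documented later in this entry):

import signal

# Block SIGTERM around a critical section, then restore the old mask.
old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGTERM})
try:
    pass  # critical section
finally:
    signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)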
The signal module defines one exception: exception signal.ItimerError Raised to signal an error from the underlying setitimer() or getitimer() implementation. Expect this error if an invalid interval timer or a negative time is passed to setitimer(). This error is a subtype of OSError. New in version 3.3: This error used to be a subtype of IOError, which is now an alias of OSError. The signal module defines the following functions: signal.alarm(time) If time is non-zero, this function requests that a SIGALRM signal be sent to the process in time seconds. Any previously scheduled alarm is canceled (only one alarm can be scheduled at any time). The returned value is then the number of seconds before any previously set alarm was to have been delivered. If time is zero, no alarm is scheduled, and any scheduled alarm is canceled. If the return value is zero, no alarm is currently scheduled. Availability: Unix. See the man page alarm(2) for further information. signal.getsignal(signalnum) Return the current signal handler for the signal signalnum. The returned value may be a callable Python object, or one of the special values signal.SIG_IGN, signal.SIG_DFL or None. Here, signal.SIG_IGN means that the signal was previously ignored, signal.SIG_DFL means that the default way of handling the signal was previously in use, and None means that the previous signal handler was not installed from Python. signal.strsignal(signalnum) Return the system description of the signal signalnum, such as “Interrupt”, “Segmentation fault”, etc. Returns None if the signal is not recognized. New in version 3.8. signal.valid_signals() Return the set of valid signal numbers on this platform. This can be less than range(1, NSIG) if some signals are reserved by the system for internal use. New in version 3.8. signal.pause() Cause the process to sleep until a signal is received; the appropriate handler will then be called. Returns nothing. Availability: Unix. See the man page signal(2) for further information. See also sigwait(), sigwaitinfo(), sigtimedwait() and sigpending(). signal.raise_signal(signum) Sends a signal to the calling process. Returns nothing. New in version 3.8. signal.pidfd_send_signal(pidfd, sig, siginfo=None, flags=0) Send signal sig to the process referred to by file descriptor pidfd. Python does not currently support the siginfo parameter; it must be None. The flags argument is provided for future extensions; no flag values are currently defined. See the pidfd_send_signal(2) man page for more information. Availability: Linux 5.1+ New in version 3.9. signal.pthread_kill(thread_id, signalnum) Send the signal signalnum to the thread thread_id, another thread in the same process as the caller. The target thread can be executing any code (Python or not). However, if the target thread is executing the Python interpreter, the Python signal handlers will be executed by the main thread of the main interpreter. Therefore, the only point of sending a signal to a particular Python thread would be to force a running system call to fail with InterruptedError. Use threading.get_ident() or the ident attribute of threading.Thread objects to get a suitable value for thread_id. If signalnum is 0, then no signal is sent, but error checking is still performed; this can be used to check if the target thread is still running. Raises an auditing event signal.pthread_kill with arguments thread_id, signalnum. Availability: Unix. See the man page pthread_kill(3) for further information. See also os.kill(). New in version 3.3. 
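A short sketch exercising several of the introspection helpers above; the exact output varies by platform:

import signal

print(signal.getsignal(signal.SIGINT))   # Python's default SIGINT handler
print(signal.strsignal(signal.SIGINT))   # e.g. 'Interrupt'
print(len(signal.valid_signals()))       # platform-dependent count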
signal.pthread_sigmask(how, mask) Fetch and/or change the signal mask of the calling thread. The signal mask is the set of signals whose delivery is currently blocked for the caller. Return the old signal mask as a set of signals. The behavior of the call is dependent on the value of how, as follows. SIG_BLOCK: The set of blocked signals is the union of the current set and the mask argument. SIG_UNBLOCK: The signals in mask are removed from the current set of blocked signals. It is permissible to attempt to unblock a signal which is not blocked. SIG_SETMASK: The set of blocked signals is set to the mask argument. mask is a set of signal numbers (e.g. {signal.SIGINT, signal.SIGTERM}). Use valid_signals() for a full mask including all signals. For example, signal.pthread_sigmask(signal.SIG_BLOCK, []) reads the signal mask of the calling thread. SIGKILL and SIGSTOP cannot be blocked. Availability: Unix. See the man page sigprocmask(3) and pthread_sigmask(3) for further information. See also pause(), sigpending() and sigwait(). New in version 3.3. signal.setitimer(which, seconds, interval=0.0) Sets given interval timer (one of signal.ITIMER_REAL, signal.ITIMER_VIRTUAL or signal.ITIMER_PROF) specified by which to fire after seconds (float is accepted, different from alarm()) and after that every interval seconds (if interval is non-zero). The interval timer specified by which can be cleared by setting seconds to zero. When an interval timer fires, a signal is sent to the process. The signal sent is dependent on the timer being used; signal.ITIMER_REAL will deliver SIGALRM, signal.ITIMER_VIRTUAL sends SIGVTALRM, and signal.ITIMER_PROF will deliver SIGPROF. The old values are returned as a tuple: (delay, interval). Attempting to pass an invalid interval timer will cause an ItimerError. Availability: Unix. signal.getitimer(which) Returns current value of a given interval timer specified by which. Availability: Unix. signal.set_wakeup_fd(fd, *, warn_on_full_buffer=True) Set the wakeup file descriptor to fd. When a signal is received, the signal number is written as a single byte into the fd. This can be used by a library to wakeup a poll or select call, allowing the signal to be fully processed. The old wakeup fd is returned (or -1 if file descriptor wakeup was not enabled). If fd is -1, file descriptor wakeup is disabled. If not -1, fd must be non-blocking. It is up to the library to remove any bytes from fd before calling poll or select again. When threads are enabled, this function can only be called from the main thread of the main interpreter; attempting to call it from other threads will cause a ValueError exception to be raised. There are two common ways to use this function. In both approaches, you use the fd to wake up when a signal arrives, but then they differ in how they determine which signal or signals have arrived. In the first approach, we read the data out of the fd’s buffer, and the byte values give you the signal numbers. This is simple, but in rare cases it can run into a problem: generally the fd will have a limited amount of buffer space, and if too many signals arrive too quickly, then the buffer may become full, and some signals may be lost. If you use this approach, then you should set warn_on_full_buffer=True, which will at least cause a warning to be printed to stderr when signals are lost. In the second approach, we use the wakeup fd only for wakeups, and ignore the actual byte values. 
In this case, all we care about is whether the fd’s buffer is empty or non-empty; a full buffer doesn’t indicate a problem at all. If you use this approach, then you should set warn_on_full_buffer=False, so that your users are not confused by spurious warning messages. Changed in version 3.5: On Windows, the function now also supports socket handles. Changed in version 3.7: Added warn_on_full_buffer parameter. signal.siginterrupt(signalnum, flag) Change system call restart behaviour: if flag is False, system calls will be restarted when interrupted by signal signalnum, otherwise system calls will be interrupted. Returns nothing. Availability: Unix. See the man page siginterrupt(3) for further information. Note that installing a signal handler with signal() will reset the restart behaviour to interruptible by implicitly calling siginterrupt() with a true flag value for the given signal. signal.signal(signalnum, handler) Set the handler for signal signalnum to the function handler. handler can be a callable Python object taking two arguments (see below), or one of the special values signal.SIG_IGN or signal.SIG_DFL. The previous signal handler will be returned (see the description of getsignal() above). (See the Unix man page signal(2) for further information.) When threads are enabled, this function can only be called from the main thread of the main interpreter; attempting to call it from other threads will cause a ValueError exception to be raised. The handler is called with two arguments: the signal number and the current stack frame (None or a frame object; for a description of frame objects, see the description in the type hierarchy or see the attribute descriptions in the inspect module). On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, SIGTERM, or SIGBREAK. A ValueError will be raised in any other case. Note that not all systems define the same set of signal names; an AttributeError will be raised if a signal name is not defined as SIG* module level constant. signal.sigpending() Examine the set of signals that are pending for delivery to the calling thread (i.e., the signals which have been raised while blocked). Return the set of the pending signals. Availability: Unix. See the man page sigpending(2) for further information. See also pause(), pthread_sigmask() and sigwait(). New in version 3.3. signal.sigwait(sigset) Suspend execution of the calling thread until the delivery of one of the signals specified in the signal set sigset. The function accepts the signal (removes it from the pending list of signals), and returns the signal number. Availability: Unix. See the man page sigwait(3) for further information. See also pause(), pthread_sigmask(), sigpending(), sigwaitinfo() and sigtimedwait(). New in version 3.3. signal.sigwaitinfo(sigset) Suspend execution of the calling thread until the delivery of one of the signals specified in the signal set sigset. The function accepts the signal and removes it from the pending list of signals. If one of the signals in sigset is already pending for the calling thread, the function will return immediately with information about that signal. The signal handler is not called for the delivered signal. The function raises an InterruptedError if it is interrupted by a signal that is not in sigset. The return value is an object representing the data contained in the siginfo_t structure, namely: si_signo, si_code, si_errno, si_pid, si_uid, si_status, si_band. Availability: Unix. 
See the man page sigwaitinfo(2) for further information. See also pause(), sigwait() and sigtimedwait(). New in version 3.3. Changed in version 3.5: The function is now retried if interrupted by a signal not in sigset and the signal handler does not raise an exception (see PEP 475 for the rationale). signal.sigtimedwait(sigset, timeout) Like sigwaitinfo(), but takes an additional timeout argument specifying a timeout. If timeout is specified as 0, a poll is performed. Returns None if a timeout occurs. Availability: Unix. See the man page sigtimedwait(2) for further information. See also pause(), sigwait() and sigwaitinfo(). New in version 3.3. Changed in version 3.5: The function is now retried with the recomputed timeout if interrupted by a signal not in sigset and the signal handler does not raise an exception (see PEP 475 for the rationale). Example Here is a minimal example program. It uses the alarm() function to limit the time spent waiting to open a file; this is useful if the file is for a serial device that may not be turned on, which would normally cause the os.open() to hang indefinitely. The solution is to set a 5-second alarm before opening the file; if the operation takes too long, the alarm signal will be sent, and the handler raises an exception. import signal, os def handler(signum, frame): print('Signal handler called with signal', signum) raise OSError("Couldn't open device!") # Set the signal handler and a 5-second alarm signal.signal(signal.SIGALRM, handler) signal.alarm(5) # This open() may hang indefinitely fd = os.open('/dev/ttyS0', os.O_RDWR) signal.alarm(0) # Disable the alarm Note on SIGPIPE Piping output of your program to tools like head(1) will cause a SIGPIPE signal to be sent to your process when the receiver of its standard output closes early. This results in an exception like BrokenPipeError: [Errno 32] Broken pipe. To handle this case, wrap your entry point to catch this exception as follows: import os import sys def main(): try: # simulate large output (your code replaces this loop) for x in range(10000): print("y") # flush output here to force SIGPIPE to be triggered # while inside this try block. sys.stdout.flush() except BrokenPipeError: # Python flushes standard streams on exit; redirect remaining output # to devnull to avoid another BrokenPipeError at shutdown devnull = os.open(os.devnull, os.O_WRONLY) os.dup2(devnull, sys.stdout.fileno()) sys.exit(1) # Python exits with error code 1 on EPIPE if __name__ == '__main__': main() Do not set SIGPIPE’s disposition to SIG_DFL in order to avoid BrokenPipeError. Doing that would cause your program to exit unexpectedly also whenever any socket connection is interrupted while your program is still writing to it.
doc_23663
Convert a block of base64 data back to binary and return the binary data. More than one line may be passed at a time.
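A minimal sketch, assuming this entry describes binascii.a2b_base64():

import binascii

data = binascii.a2b_base64(b'SGVsbG8sIHdvcmxkIQ==\n')
print(data)  # b'Hello, world!'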
doc_23664
See Migration guide for more details. tf.compat.v1.math.special.bessel_j0 tf.math.special.bessel_j0( x, name=None ) Bessel function of the first kind of order 0. tf.math.special.bessel_j0([0.5, 1., 2., 4.]).numpy() array([ 0.93846981, 0.76519769, 0.22389078, -0.39714981], dtype=float32) Args x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64. name A name for the operation (optional). Returns A Tensor or SparseTensor, respectively. Has the same type as x. Scipy Compatibility Equivalent to scipy.special.j0
doc_23665
See Migration guide for more details. tf.compat.v1.signal.frame tf.signal.frame( signal, frame_length, frame_step, pad_end=False, pad_value=0, axis=-1, name=None ) Slides a window of size frame_length over signal's axis dimension with a stride of frame_step, replacing the axis dimension with [frames, frame_length] frames. If pad_end is True, window positions that are past the end of the axis dimension are padded with pad_value until the window moves fully past the end of the dimension. Otherwise, only window positions that fully overlap the axis dimension are produced. For example: # A batch size 3 tensor of 9152 audio samples. audio = tf.random.normal([3, 9152]) # Compute overlapping frames of length 512 with a step of 180 (frames overlap # by 332 samples). By default, only 49 frames are generated since a frame # with start position j*180 for j > 48 would overhang the end. frames = tf.signal.frame(audio, 512, 180) frames.shape.assert_is_compatible_with([3, 49, 512]) # When pad_end is enabled, the final two frames are kept (padded with zeros). frames = tf.signal.frame(audio, 512, 180, pad_end=True) frames.shape.assert_is_compatible_with([3, 51, 512]) If the dimension along axis is N, and pad_end=False, the number of frames can be computed by: num_frames = 1 + (N - frame_length) // frame_step If pad_end=True, the number of frames can be computed by: num_frames = -(-N // frame_step) # ceiling division Args signal A [..., samples, ...] Tensor. The rank and dimensions may be unknown. Rank must be at least 1. frame_length The frame length in samples. An integer or scalar Tensor. frame_step The frame hop size in samples. An integer or scalar Tensor. pad_end Whether to pad the end of signal with pad_value. pad_value An optional scalar Tensor to use where the input signal does not exist when pad_end is True. axis A scalar integer Tensor indicating the axis to frame. Defaults to the last axis. Supports negative values for indexing from the end. name An optional name for the operation. Returns A Tensor of frames with shape [..., num_frames, frame_length, ...]. Raises ValueError If frame_length, frame_step, pad_value, or axis are not scalar.
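Plugging the numbers from the example above into these formulas (a quick sanity check, not part of the API):

N, frame_length, frame_step = 9152, 512, 180
print(1 + (N - frame_length) // frame_step)  # 49 frames with pad_end=False
print(-(-N // frame_step))                   # 51 frames with pad_end=True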
doc_23666
The logical parent of the path: >>> p = PurePosixPath('/a/b/c/d') >>> p.parent PurePosixPath('/a/b/c') You cannot go past an anchor, or empty path: >>> p = PurePosixPath('/') >>> p.parent PurePosixPath('/') >>> p = PurePosixPath('.') >>> p.parent PurePosixPath('.') Note This is a purely lexical operation, hence the following behaviour: >>> p = PurePosixPath('foo/..') >>> p.parent PurePosixPath('foo') If you want to walk an arbitrary filesystem path upwards, it is recommended to first call Path.resolve() so as to resolve symlinks and eliminate “..” components.
doc_23667
Set an attribute value from a string.
doc_23668
See Migration guide for more details. tf.compat.v1.bitwise.left_shift tf.bitwise.left_shift( x, y, name=None ) If y is negative, or greater than or equal to the width of x in bits, the result is implementation defined. Example: import tensorflow as tf from tensorflow.python.ops import bitwise_ops import numpy as np dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64] for dtype in dtype_list: lhs = tf.constant([-1, -5, -3, -14], dtype=dtype) rhs = tf.constant([5, 0, 7, 11], dtype=dtype) left_shift_result = bitwise_ops.left_shift(lhs, rhs) print(left_shift_result) # This will print: # tf.Tensor([ -32 -5 -128 0], shape=(4,), dtype=int8) # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int16) # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int32) # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int64) lhs = np.array([-2, 64, 101, 32], dtype=np.int8) rhs = np.array([-1, -5, -3, -14], dtype=np.int8) bitwise_ops.left_shift(lhs, rhs) # <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2, 64, 101, 32], dtype=int8)> Args x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
doc_23669
Holds an instance of the class specified by the MessageClass class variable. This instance parses and manages the headers in the HTTP request. The parse_headers() function from http.client is used to parse the headers and it requires that the HTTP request provide a valid RFC 2822 style header.
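A short sketch of reading the parsed headers inside a request handler (server wiring omitted; the header name is just an example):

from http.server import BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # self.headers is the parsed message object described above
        ua = self.headers.get('User-Agent', 'unknown')
        self.send_response(200)
        self.end_headers()
        self.wfile.write(ua.encode('ascii', 'replace'))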
doc_23670
Bases: skimage.measure.fit.BaseModel Total least squares estimator for N-dimensional lines. In contrast to ordinary least squares line estimation, this estimator minimizes the orthogonal distances of points to the estimated line. Lines are defined by a point (origin) and a unit vector (direction) according to the following vector equation: X = origin + lambda * direction Examples >>> x = np.linspace(1, 2, 25) >>> y = 1.5 * x + 3 >>> lm = LineModelND() >>> lm.estimate(np.stack([x, y], axis=-1)) True >>> tuple(np.round(lm.params, 5)) (array([1.5 , 5.25]), array([0.5547 , 0.83205])) >>> res = lm.residuals(np.stack([x, y], axis=-1)) >>> np.abs(np.round(res, 9)) array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) >>> np.round(lm.predict_y(x[:5]), 3) array([4.5 , 4.562, 4.625, 4.688, 4.75 ]) >>> np.round(lm.predict_x(y[:5]), 3) array([1. , 1.042, 1.083, 1.125, 1.167]) Attributes paramstuple Line model parameters in the following order origin, direction. __init__() [source] Initialize self. See help(type(self)) for accurate signature. estimate(data) [source] Estimate line model from data. This minimizes the sum of shortest (orthogonal) distances from the given data points to the estimated line. Parameters data(N, dim) array N points in a space of dimensionality dim >= 2. Returns successbool True, if model estimation succeeds. predict(x, axis=0, params=None) [source] Predict intersection of the estimated line model with a hyperplane orthogonal to a given axis. Parameters x(n, 1) array Coordinates along an axis. axisint Axis orthogonal to the hyperplane intersecting the line. params(2, ) array, optional Optional custom parameter set in the form (origin, direction). Returns data(n, m) array Predicted coordinates. Raises ValueError If the line is parallel to the given axis. predict_x(y, params=None) [source] Predict x-coordinates for 2D lines using the estimated model. Alias for: predict(y, axis=1)[:, 0] Parameters yarray y-coordinates. params(2, ) array, optional Optional custom parameter set in the form (origin, direction). Returns xarray Predicted x-coordinates. predict_y(x, params=None) [source] Predict y-coordinates for 2D lines using the estimated model. Alias for: predict(x, axis=0)[:, 1] Parameters xarray x-coordinates. params(2, ) array, optional Optional custom parameter set in the form (origin, direction). Returns yarray Predicted y-coordinates. residuals(data, params=None) [source] Determine residuals of data to model. For each point, the shortest (orthogonal) distance to the line is returned. It is obtained by projecting the data onto the line. Parameters data(N, dim) array N points in a space of dimension dim. params(2, ) array, optional Optional custom parameter set in the form (origin, direction). Returns residuals(N, ) array Residual for each data point.
doc_23671
operator.invert(obj) operator.__inv__(obj) operator.__invert__(obj) Return the bitwise inverse of the number obj. This is equivalent to ~obj.
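A minimal sketch:

>>> import operator
>>> operator.invert(5)
-6
>>> ~5
-6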
doc_23672
See Migration guide for more details. tf.compat.v1.numpy_function tf.numpy_function( func, inp, Tout, name=None ) Given a Python function func, wrap this function as an operation in a TensorFlow function. func must take numpy arrays as its arguments and return numpy arrays as its outputs. The following example creates a TensorFlow graph with np.sinh() as an operation in the graph: def my_numpy_func(x): # x will be a numpy array with the contents of the input to the # tf.function return np.sinh(x) @tf.function(input_signature=[tf.TensorSpec(None, tf.float32)]) def tf_function(input): y = tf.numpy_function(my_numpy_func, [input], tf.float32) return y * y tf_function(tf.constant(1.)) <tf.Tensor: shape=(), dtype=float32, numpy=1.3810978> Comparison to tf.py_function: tf.py_function and tf.numpy_function are very similar, except that tf.numpy_function takes numpy arrays, and not tf.Tensors. If you want the function to contain tf.Tensors, and have any TensorFlow operations executed in the function be differentiable, please use tf.py_function. Note: The tf.numpy_function operation has the following known limitations: The body of the function (i.e. func) will not be serialized in a tf.SavedModel. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment. The operation must run in the same address space as the Python program that calls tf.numpy_function(). If you are using distributed TensorFlow, you must run a tf.distribute.Server in the same process as the program that calls tf.numpy_function, and you must pin the created operation to a device in that server (e.g. using with tf.device():). Since the function takes numpy arrays, you cannot take gradients through a numpy_function. If you require something that is differentiable, please consider using tf.py_function. The resulting function is assumed stateful and will never be optimized. Args func A Python function, which accepts numpy.ndarray objects as arguments and returns a list of numpy.ndarray objects (or a single numpy.ndarray). This function must accept as many arguments as there are tensors in inp, and these argument types will match the corresponding tf.Tensor objects in inp. The returned numpy.ndarrays must match the number and types defined by Tout. Important Note: Input and output numpy.ndarrays of func are not guaranteed to be copies. In some cases their underlying memory will be shared with the corresponding TensorFlow tensors. In-place modification or storing func input or return values in Python data structures without an explicit (np.)copy can have non-deterministic consequences. inp A list of tf.Tensor objects. Tout A list or tuple of tensorflow data types or a single tensorflow data type if there is only one, indicating what func returns. name (Optional) A name for the operation. Returns Single or list of tf.Tensor which func computes.
doc_23673
See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize tf.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize( input, filter, bias, min_input, max_input, min_filter, max_filter, min_freezed_output, max_freezed_output, summand, min_summand, max_summand, strides, padding, out_type=tf.dtypes.quint8, dilations=[1, 1, 1, 1], padding_list=[], name=None ) Args input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. filter A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. bias A Tensor. Must be one of the following types: float32, qint32. min_input A Tensor of type float32. max_input A Tensor of type float32. min_filter A Tensor of type float32. max_filter A Tensor of type float32. min_freezed_output A Tensor of type float32. max_freezed_output A Tensor of type float32. summand A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. min_summand A Tensor of type float32. max_summand A Tensor of type float32. strides A list of ints. padding A string from: "SAME", "VALID". out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.quint8. dilations An optional list of ints. Defaults to [1, 1, 1, 1]. padding_list An optional list of ints. Defaults to []. name A name for the operation (optional). Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor of type out_type. min_output A Tensor of type float32. max_output A Tensor of type float32.
doc_23674
Learn the vocabulary dictionary and return document-term matrix. This is equivalent to fit followed by transform, but more efficiently implemented. Parameters raw_documentsiterable An iterable which yields either str, unicode or file objects. Returns Xarray of shape (n_samples, n_features) Document-term matrix.
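The entry above does not name the vectorizer; as a representative sketch with CountVectorizer (which returns a sparse document-term matrix):

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> docs = ['the cat sat', 'the cat ran']
>>> X = CountVectorizer().fit_transform(docs)
>>> X.shape
(2, 4)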
doc_23675
Ensure that the function is synchronous for WSGI workers. Plain def functions are returned as-is. async def functions are wrapped to run and wait for the response. Override this method to change how the app runs async views. New in version 2.0. Parameters func (Callable) – Return type Callable
doc_23676
Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
doc_23677
The masked constant is a special case of MaskedArray, with a float datatype and a null shape. It is used to test whether a specific entry of a masked array is masked, or to mask one or several entries of a masked array: >>> x = ma.array([1, 2, 3], mask=[0, 1, 0]) >>> x[1] is ma.masked True >>> x[-1] = ma.masked >>> x masked_array(data=[1, --, --], mask=[False, True, True], fill_value=999999) numpy.ma.nomask Value indicating that a masked array has no invalid entry. nomask is used internally to speed up computations when the mask is not needed. It is represented internally as np.False_. numpy.ma.masked_print_options String used in lieu of missing data when a masked array is printed. By default, this string is '--'.
doc_23678
Count the number of vertices contained in the Bbox. Any vertices with a non-finite x or y value are ignored. Parameters verticesNx2 Numpy array.
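A short sketch, assuming this is matplotlib.transforms.Bbox (the non-finite vertex is ignored and the outside vertex is not counted):

>>> import numpy as np
>>> from matplotlib.transforms import Bbox
>>> bbox = Bbox.from_extents(0, 0, 1, 1)
>>> bbox.count_contains(np.array([[0.5, 0.5], [2.0, 2.0], [np.nan, 0.5]]))
1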
doc_23679
Alias for set_linewidth.
doc_23680
Copy the contents (no metadata) of the file named src to a file named dst and return dst in the most efficient way possible. src and dst are path-like objects or path names given as strings. dst must be the complete target file name; look at copy() for a copy that accepts a target directory path. If src and dst specify the same file, SameFileError is raised. The destination location must be writable; otherwise, an OSError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. If follow_symlinks is false and src is a symbolic link, a new symbolic link will be created instead of copying the file src points to. Raises an auditing event shutil.copyfile with arguments src, dst. Changed in version 3.3: IOError used to be raised instead of OSError. Added follow_symlinks argument. Now returns dst. Changed in version 3.4: Raise SameFileError instead of Error. Since the former is a subclass of the latter, this change is backward compatible. Changed in version 3.8: Platform-specific fast-copy syscalls may be used internally in order to copy the file more efficiently. See Platform-dependent efficient copy operations section.
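A minimal usage sketch (the file names are placeholders):

>>> import shutil
>>> shutil.copyfile('settings.cfg', 'settings.cfg.bak')
'settings.cfg.bak'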
doc_23681
Set multilinebaseline. If True, the baseline for multiline text is adjusted so that it is (approximately) center-aligned with single-line text. This is used e.g. by the legend implementation so that single-line labels are baseline-aligned, but multiline labels are "center"-aligned with them.
doc_23682
tf.compat.v1.make_template( name_, func_, create_scope_now_=False, unique_name_=None, custom_getter_=None, **kwargs ) This wraps func_ in a Template and partially evaluates it. Templates are functions that create variables the first time they are called and reuse them thereafter. In order for func_ to be compatible with a Template it must have the following properties: The function should create all trainable variables and any variables that should be reused by calling tf.compat.v1.get_variable. If a trainable variable is created using tf.Variable, then a ValueError will be thrown. Variables that are intended to be locals can be created by specifying tf.Variable(..., trainable=False). The function may use variable scopes and other templates internally to create and reuse variables, but it shouldn't use tf.compat.v1.global_variables to capture variables that are defined outside of the scope of the function. Internal scopes and variable names should not depend on any arguments that are not supplied to make_template. In general you will get a ValueError telling you that you are trying to reuse a variable that doesn't exist if you make a mistake. In the following example, both z and w will be scaled by the same y. It is important to note that if we didn't assign scalar_name and used a different name for z and w, a ValueError would be thrown because the variable couldn't be reused. def my_op(x, scalar_name): var1 = tf.compat.v1.get_variable(scalar_name, shape=[], initializer=tf.compat.v1.constant_initializer(1)) return x * var1 scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y') z = scale_by_y(input1) w = scale_by_y(input2) As a safeguard, the returned function will raise a ValueError after the first call if trainable variables are created by calling tf.Variable. If all of these are true, then two properties are enforced by the template: Calling the same template multiple times will share all non-local variables. Two different templates are guaranteed to be unique, unless you reenter the same variable scope as the initial definition of a template and redefine it. An example of this exception: def my_op(x, scalar_name): var1 = tf.compat.v1.get_variable(scalar_name, shape=[], initializer=tf.compat.v1.constant_initializer(1)) return x * var1 with tf.compat.v1.variable_scope('scope') as vs: scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y') z = scale_by_y(input1) w = scale_by_y(input2) # Creates a template that reuses the variables above. with tf.compat.v1.variable_scope(vs, reuse=True): scale_by_y2 = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y') z2 = scale_by_y2(input1) w2 = scale_by_y2(input2) Depending on the value of create_scope_now_, the full variable scope may be captured either at the time of first call or at the time of construction. If this option is set to True, then all Tensors created by repeated calls to the template will have an extra trailing _N+1 to their name, as the first time the scope is entered in the Template constructor no Tensors are created. Note: name_, func_ and create_scope_now_ have a trailing underscore to reduce the likelihood of collisions with kwargs. Args name_ A name for the scope created by this template. If necessary, the name will be made unique by appending _N to the name. func_ The function to wrap. create_scope_now_ Boolean controlling whether the scope should be created when the template is constructed or when the template is called.
Default is False, meaning the scope is created when the template is called. unique_name_ When used, it overrides name_ and is not made unique. If a template of the same scope/unique_name already exists and reuse is false, an error is raised. Defaults to None. custom_getter_ Optional custom getter for variables used in func_. See the tf.compat.v1.get_variable custom_getter documentation for more information. **kwargs Keyword arguments to apply to func_. Returns A function to encapsulate a set of variables which should be created once and reused. An enclosing scope will be created either when make_template is called or when the result is called, depending on the value of create_scope_now_. Regardless of the value, the first time the template is called it will enter the scope with no reuse, and call func_ to create variables, which are guaranteed to be unique. All subsequent calls will re-enter the scope and reuse those variables. Raises ValueError if name_ is None.
doc_23683
Call this whenever the mappable is changed to notify all the callbackSM listeners to the 'changed' signal.
doc_23684
Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle
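A minimal sketch of registering, triggering, and removing a hook:

import torch
import torch.nn as nn

def hook(module, grad_input, grad_output):
    # Inspect gradients here; returning None leaves grad_input unchanged.
    print('grad_output[0] norm:', grad_output[0].norm().item())

layer = nn.Linear(4, 2)
handle = layer.register_backward_hook(hook)
layer(torch.randn(3, 4)).sum().backward()  # computing gradients fires the hook
handle.remove()                            # detach the hook when done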
doc_23685
This is the standard MSI schema for MSI 2.0, with the tables variable providing a list of table definitions, and _Validation_records providing the data for MSI validation.
doc_23686
Scalar method identical to the corresponding array attribute. Please see ndarray.view.
doc_23687
Set the fontsize in points. If s is not given, reset to rcParams["legend.fontsize"] (default: 'medium').
doc_23688
[Deprecated] Notes: Deprecated since version 3.4.
doc_23689
Calculate the rolling weighted window mean. Parameters **kwargs Keyword arguments to configure the SciPy weighted window type. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.mean Aggregating mean for Series. pandas.DataFrame.mean Aggregating mean for DataFrame.
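A short sketch (win_type windows require SciPy; std here parameterizes the Gaussian window and is passed through **kwargs):

>>> import pandas as pd
>>> s = pd.Series([0.0, 1.0, 2.0, 3.0, 4.0])
>>> s.rolling(3, win_type='gaussian').mean(std=1)
0    NaN
1    NaN
2    1.0
3    2.0
4    3.0
dtype: float64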
doc_23690
Convert a polynomial to a Hermite series. Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Hermite series, ordered from lowest to highest degree. Parameters polarray_like 1-D array containing the polynomial coefficients Returns cndarray 1-D array containing the coefficients of the equivalent Hermite series. See also herme2poly Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. Examples >>> from numpy.polynomial.hermite_e import poly2herme >>> poly2herme(np.arange(4)) array([ 2., 10., 2., 3.])
doc_23691
tf.experimental.numpy.isinf( x ) Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.isinf.
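A minimal sketch (array-like inputs are converted for you):

import numpy as np
import tensorflow as tf

print(tf.experimental.numpy.isinf([np.inf, -np.inf, 1.0]))
# Elementwise result: [ True  True False]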
doc_23692
Compute Shi-Tomasi (Kanade-Tomasi) corner measure response image. This corner detector uses information from the auto-correlation matrix A: A = [(imx**2) (imx*imy)] = [Axx Axy] [(imx*imy) (imy**2)] [Axy Ayy] Where imx and imy are first derivatives, averaged with a gaussian filter. The corner measure is then defined as the smaller eigenvalue of A: ((Axx + Ayy) - sqrt((Axx - Ayy)**2 + 4 * Axy**2)) / 2 Parameters imagendarray Input image. sigmafloat, optional Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix. Returns responsendarray Shi-Tomasi response image. References 1 https://en.wikipedia.org/wiki/Corner_detection Examples >>> from skimage.feature import corner_shi_tomasi, corner_peaks >>> square = np.zeros([10, 10]) >>> square[2:8, 2:8] = 1 >>> square.astype(int) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) >>> corner_peaks(corner_shi_tomasi(square), min_distance=1) array([[2, 2], [2, 7], [7, 2], [7, 7]])
doc_23693
Object whose attributes correspond roughly to the members of the stat structure. It is used for the result of os.stat(), os.fstat() and os.lstat(). Attributes: st_mode File mode: file type and file mode bits (permissions). st_ino Platform dependent, but if non-zero, uniquely identifies the file for a given value of st_dev. Typically: the inode number on Unix, the file index on Windows st_dev Identifier of the device on which this file resides. st_nlink Number of hard links. st_uid User identifier of the file owner. st_gid Group identifier of the file owner. st_size Size of the file in bytes, if it is a regular file or a symbolic link. The size of a symbolic link is the length of the pathname it contains, without a terminating null byte. Timestamps: st_atime Time of most recent access expressed in seconds. st_mtime Time of most recent content modification expressed in seconds. st_ctime Platform dependent: the time of most recent metadata change on Unix, the time of creation on Windows, expressed in seconds. st_atime_ns Time of most recent access expressed in nanoseconds as an integer. st_mtime_ns Time of most recent content modification expressed in nanoseconds as an integer. st_ctime_ns Platform dependent: the time of most recent metadata change on Unix, the time of creation on Windows, expressed in nanoseconds as an integer. Note The exact meaning and resolution of the st_atime, st_mtime, and st_ctime attributes depend on the operating system and the file system. For example, on Windows systems using the FAT or FAT32 file systems, st_mtime has 2-second resolution, and st_atime has only 1-day resolution. See your operating system documentation for details. Similarly, although st_atime_ns, st_mtime_ns, and st_ctime_ns are always expressed in nanoseconds, many systems do not provide nanosecond precision. On systems that do provide nanosecond precision, the floating-point object used to store st_atime, st_mtime, and st_ctime cannot preserve all of it, and as such will be slightly inexact. If you need the exact timestamps you should always use st_atime_ns, st_mtime_ns, and st_ctime_ns. On some Unix systems (such as Linux), the following attributes may also be available: st_blocks Number of 512-byte blocks allocated for file. This may be smaller than st_size/512 when the file has holes. st_blksize “Preferred” blocksize for efficient file system I/O. Writing to a file in smaller chunks may cause an inefficient read-modify-rewrite. st_rdev Type of device if an inode device. st_flags User defined flags for file. On other Unix systems (such as FreeBSD), the following attributes may be available (but may be only filled out if root tries to use them): st_gen File generation number. st_birthtime Time of file creation. On Solaris and derivatives, the following attributes may also be available: st_fstype String that uniquely identifies the type of the filesystem that contains the file. On Mac OS systems, the following attributes may also be available: st_rsize Real size of the file. st_creator Creator of the file. st_type File type. On Windows systems, the following attributes are also available: st_file_attributes Windows file attributes: dwFileAttributes member of the BY_HANDLE_FILE_INFORMATION structure returned by GetFileInformationByHandle(). See the FILE_ATTRIBUTE_* constants in the stat module. st_reparse_tag When st_file_attributes has the FILE_ATTRIBUTE_REPARSE_POINT set, this field contains the tag identifying the type of reparse point. See the IO_REPARSE_TAG_* constants in the stat module. 
The standard module stat defines functions and constants that are useful for extracting information from a stat structure. (On Windows, some items are filled with dummy values.) For backward compatibility, a stat_result instance is also accessible as a tuple of at least 10 integers giving the most important (and portable) members of the stat structure, in the order st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime. More items may be added at the end by some implementations. For compatibility with older Python versions, accessing stat_result as a tuple always returns integers. New in version 3.3: Added the st_atime_ns, st_mtime_ns, and st_ctime_ns members. New in version 3.5: Added the st_file_attributes member on Windows. Changed in version 3.5: Windows now returns the file index as st_ino when available. New in version 3.7: Added the st_fstype member to Solaris/derivatives. New in version 3.8: Added the st_reparse_tag member on Windows. Changed in version 3.8: On Windows, the st_mode member now identifies special files as S_IFCHR, S_IFIFO or S_IFBLK as appropriate.
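A short usage sketch (the path is a placeholder):

import os, stat

st = os.stat('setup.py')
print(st.st_size)                # size in bytes
print(stat.S_ISREG(st.st_mode))  # True for a regular file
print(st.st_mtime_ns)            # exact modification time in nanoseconds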
doc_23694
class torch.utils.benchmark.Timer(stmt='pass', setup='pass', timer=<function timer>, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=<Language.PYTHON: 0>) [source] Helper class for measuring execution time of PyTorch statements. For a full tutorial on how to use this class, see: https://pytorch.org/tutorials/recipes/recipes/benchmark.html The PyTorch Timer is based on timeit.Timer (and in fact uses timeit.Timer internally), but with several key differences: Runtime aware: Timer will perform warmups (important as some elements of PyTorch are lazily initialized), set threadpool size so that comparisons are apples-to-apples, and synchronize asynchronous CUDA functions when necessary. Focus on replicates: When measuring code, and particularly complex kernels / models, run-to-run variation is a significant confounding factor. It is expected that all measurements should include replicates to quantify noise and allow median computation, which is more robust than mean. To that effect, this class deviates from the timeit API by conceptually merging timeit.Timer.repeat and timeit.Timer.autorange. (Exact algorithms are discussed in method docstrings.) The timeit method is replicated for cases where an adaptive strategy is not desired. Optional metadata: When defining a Timer, one can optionally specify label, sub_label, description, and env. (Defined later) These fields are included in the representation of the result object and by the Compare class to group and display results for comparison. Instruction counts In addition to wall times, Timer can run a statement under Callgrind and report instructions executed. Directly analogous to timeit.Timer constructor arguments: stmt, setup, timer, globals PyTorch Timer specific constructor arguments: label, sub_label, description, env, num_threads Parameters stmt – Code snippet to be run in a loop and timed. setup – Optional setup code. Used to define variables used in stmt. timer – Callable which returns the current time. If PyTorch was built without CUDA or there is no GPU present, this defaults to timeit.default_timer; otherwise it will synchronize CUDA before measuring the time. globals – A dict which defines the global variables when stmt is being executed. This is the other method for providing variables which stmt needs. label – String which summarizes stmt. For instance, if stmt is “torch.nn.functional.relu(torch.add(x, 1, out=out))” one might set label to “ReLU(x + 1)” to improve readability. sub_label – Provide supplemental information to disambiguate measurements with identical stmt or label. For instance, in our example above sub_label might be “float” or “int”, so that it is easy to differentiate: “ReLU(x + 1): (float)” ”ReLU(x + 1): (int)” when printing Measurements or summarizing using Compare. description – String to distinguish measurements with identical label and sub_label. The principal use of description is to signal to Compare the columns of data. For instance one might set it based on the input size to create a table of the form: | n=1 | n=4 | ... ------------- ... ReLU(x + 1): (float) | ... | ... | ... ReLU(x + 1): (int) | ... | ... | ... using Compare. It is also included when printing a Measurement. env – This tag indicates that otherwise identical tasks were run in different environments, and are therefore not equivalent, for instance when A/B testing a change to a kernel. Compare will treat Measurements with different env specification as distinct when merging replicate runs.
num_threads – The size of the PyTorch threadpool when executing stmt. Single threaded performance is important as both a key inference workload and a good indicator of intrinsic algorithmic efficiency, so the default is set to one. This is in contrast to the default PyTorch threadpool size which tries to utilize all cores. blocked_autorange(callback=None, min_run_time=0.2) [source] Measure many replicates while keeping timer overhead to a minimum. At a high level, blocked_autorange executes the following pseudo-code: `setup` total_time = 0 while total_time < min_run_time start = timer() for _ in range(block_size): `stmt` total_time += (timer() - start) Note the variable block_size in the inner loop. The choice of block size is important to measurement quality, and must balance two competing objectives: A small block size results in more replicates and generally better statistics. A large block size better amortizes the cost of timer invocation, and results in a less biased measurement. This is important because CUDA synchronization time is non-trivial (order single to low double digit microseconds) and would otherwise bias the measurement. blocked_autorange sets block_size by running a warmup period, increasing block size until timer overhead is less than 0.1% of the overall computation. This value is then used for the main measurement loop. Returns A Measurement object that contains measured runtimes and repetition counts, and can be used to compute statistics. (mean, median, etc.) collect_callgrind(number=100, collect_baseline=True) [source] Collect instruction counts using Callgrind. Unlike wall times, instruction counts are deterministic (modulo non-determinism in the program itself and small amounts of jitter from the Python interpreter.) This makes them ideal for detailed performance analysis. This method runs stmt in a separate process so that Valgrind can instrument the program. Performance is severely degraded due to the instrumentation, however this is ameliorated by the fact that a small number of iterations is generally sufficient to obtain good measurements. In order to use this method, valgrind, callgrind_control, and callgrind_annotate must be installed. Because there is a process boundary between the caller (this process) and the stmt execution, globals cannot contain arbitrary in-memory data structures. (Unlike timing methods) Instead, globals are restricted to builtins, nn.Modules, and TorchScripted functions/modules to reduce the surprise factor from serialization and subsequent deserialization. The GlobalsBridge class provides more detail on this subject. Take particular care with nn.Modules: they rely on pickle and you may need to add an import to setup for them to transfer properly. By default, a profile for an empty statement will be collected and cached to indicate how many instructions are from the Python loop which drives stmt. Returns A CallgrindStats object which provides instruction counts and some basic facilities for analyzing and manipulating results. timeit(number=1000000) [source] Mirrors the semantics of timeit.Timer.timeit(). Execute the main statement (stmt) number times. https://docs.python.org/3/library/timeit.html#timeit.Timer.timeit class torch.utils.benchmark.Measurement(number_per_run, raw_times, task_spec, metadata=None) [source] The result of a Timer measurement. This class stores one or more measurements of a given statement.
It is serializable and provides several convenience methods (including a detailed __repr__) for downstream consumers. static merge(measurements) [source] Convenience method for merging replicates. Merge will extrapolate times to number_per_run=1 and will not transfer any metadata. (Since it might differ between replicates) property significant_figures Approximate significant figure estimate. This property is intended to give a convenient way to estimate the precision of a measurement. It only uses the interquartile region to estimate statistics to try to mitigate skew from the tails, and uses a static z value of 1.645 since it is not expected to be used for small values of n, so z can approximate t. The significant figure estimation is used in conjunction with the trim_sigfig method to provide a more human interpretable data summary. __repr__ does not use this method; it simply displays raw values. Significant figure estimation is intended for Compare. class torch.utils.benchmark.CallgrindStats(task_spec, number_per_run, built_with_debug_symbols, baseline_inclusive_stats, baseline_exclusive_stats, stmt_inclusive_stats, stmt_exclusive_stats) [source] Top level container for Callgrind results collected by Timer. Manipulation is generally done using the FunctionCounts class, which is obtained by calling CallgrindStats.stats(…). Several convenience methods are provided as well; the most significant is CallgrindStats.as_standardized(). as_standardized() [source] Strip library names and some prefixes from function strings. When comparing two different sets of instruction counts, one stumbling block can be path prefixes. Callgrind includes the full filepath when reporting a function (as it should). However, this can cause issues when diffing profiles. If a key component such as Python or PyTorch was built in separate locations in the two profiles, this can result in something resembling: 23234231 /tmp/first_build_dir/thing.c:foo(...) 9823794 /tmp/first_build_dir/thing.c:bar(...) ... 53453 .../aten/src/Aten/...:function_that_actually_changed(...) ... -9823794 /tmp/second_build_dir/thing.c:bar(...) -23234231 /tmp/second_build_dir/thing.c:foo(...) Stripping prefixes can ameliorate this issue by regularizing the strings and causing better cancellation of equivalent call sites when diffing. counts(*, denoise=False) [source] Returns the total number of instructions executed. See FunctionCounts.denoise() for an explanation of the denoise arg. delta(other, inclusive=False, subtract_baselines=True) [source] Diff two sets of counts. One common reason to collect instruction counts is to determine the effect that a particular change will have on the number of instructions needed to perform some unit of work. If a change increases that number, the next logical question is “why”. This generally involves looking at what part of the code increased in instruction count. This function automates that process so that one can easily diff counts on both an inclusive and exclusive basis. The subtract_baselines argument allows one to disable baseline correction, though in most cases it shouldn’t matter as the baselines are expected to more or less cancel out. stats(inclusive=False) [source] Returns detailed function counts. Conceptually, the FunctionCounts returned can be thought of as a tuple of (count, path_and_function_name) tuples. inclusive matches the semantics of callgrind. If True, the counts include instructions executed by children.
inclusive=True is useful for identifying hot spots in code; inclusive=False is useful for reducing noise when diffing counts from two different runs. (See CallgrindStats.delta(…) for more details) class torch.utils.benchmark.FunctionCounts(_data, inclusive, _linewidth=None) [source] Container for manipulating Callgrind results. It supports: Addition and subtraction to combine or diff results. Tuple-like indexing. A denoise function which strips CPython calls which are known to be non-deterministic and quite noisy. Two higher order methods (filter and transform) for custom manipulation. denoise() [source] Remove known noisy instructions. Several instructions in the CPython interpreter are rather noisy. These instructions involve unicode to dictionary lookups which Python uses to map variable names. FunctionCounts is generally a content agnostic container, however this is sufficiently important for obtaining reliable results to warrant an exception. filter(filter_fn) [source] Keep only the elements where filter_fn applied to function name returns True. transform(map_fn) [source] Apply map_fn to all of the function names. This can be used to regularize function names (e.g. stripping irrelevant parts of the file path), coalesce entries by mapping multiple functions to the same name (in which case the counts are added together), etc.
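A minimal usage sketch tying the pieces above together (the statement and tensor size are arbitrary):

import torch
from torch.utils.benchmark import Timer

t = Timer(
    stmt='torch.nn.functional.relu(torch.add(x, 1))',
    globals={'torch': torch, 'x': torch.randn(1024)},
    label='ReLU(x + 1)',
    sub_label='float',
    num_threads=1,
)
m = t.blocked_autorange(min_run_time=0.5)
print(m.median)  # robust per-run time estimate, in seconds
print(m)         # full Measurement summary with metadata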
doc_23695
Return a boolean mask of the given shape, filled with False. This function returns a boolean ndarray with all entries False, that can be used in common mask manipulations. If a complex dtype is specified, the type of each field is converted to a boolean type. Parameters newshapetuple A tuple indicating the shape of the mask. dtype{None, dtype}, optional If None, use a MaskType instance. Otherwise, use a new datatype with the same fields as dtype, converted to boolean types. Returns resultndarray An ndarray of appropriate shape and dtype, filled with False. See also make_mask Create a boolean mask from an array. make_mask_descr Construct a dtype description list from a given dtype. Examples >>> import numpy.ma as ma >>> ma.make_mask_none((3,)) array([False, False, False]) Defining a more complex dtype. >>> dtype = np.dtype({'names':['foo', 'bar'], ... 'formats':[np.float32, np.int64]}) >>> dtype dtype([('foo', '<f4'), ('bar', '<i8')]) >>> ma.make_mask_none((3,), dtype=dtype) array([(False, False), (False, False), (False, False)], dtype=[('foo', '|b1'), ('bar', '|b1')])
doc_23696
See Migration guide for more details. tf.compat.v1.image.image_gradients tf.image.image_gradients( image ) Both output tensors have the same shape as the input: [batch_size, h, w, d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in location (x, y). That means that dy will always have zeros in the last row, and dx will always have zeros in the last column. Usage Example: BATCH_SIZE = 1 IMAGE_HEIGHT = 5 IMAGE_WIDTH = 5 CHANNELS = 1 image = tf.reshape(tf.range(IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS, delta=1, dtype=tf.float32), shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS)) dy, dx = tf.image.image_gradients(image) print(image[0, :,:,0]) tf.Tensor( [[ 0. 1. 2. 3. 4.] [ 5. 6. 7. 8. 9.] [10. 11. 12. 13. 14.] [15. 16. 17. 18. 19.] [20. 21. 22. 23. 24.]], shape=(5, 5), dtype=float32) print(dy[0, :,:,0]) tf.Tensor( [[5. 5. 5. 5. 5.] [5. 5. 5. 5. 5.] [5. 5. 5. 5. 5.] [5. 5. 5. 5. 5.] [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32) print(dx[0, :,:,0]) tf.Tensor( [[1. 1. 1. 1. 0.] [1. 1. 1. 1. 0.] [1. 1. 1. 1. 0.] [1. 1. 1. 1. 0.] [1. 1. 1. 1. 0.]], shape=(5, 5), dtype=float32) Arguments image Tensor with shape [batch_size, h, w, d]. Returns Pair of tensors (dy, dx) holding the vertical and horizontal image gradients (1-step finite difference). Raises ValueError If image is not a 4D tensor.
doc_23697
See Migration guide for more details. tf.compat.v1.raw_ops.LRNGrad tf.raw_ops.LRNGrad( input_grads, input_image, output_image, depth_radius=5, bias=1, alpha=1, beta=0.5, name=None ) Args input_grads A Tensor. Must be one of the following types: half, bfloat16, float32. 4-D with shape [batch, height, width, channels]. input_image A Tensor. Must have the same type as input_grads. 4-D with shape [batch, height, width, channels]. output_image A Tensor. Must have the same type as input_grads. 4-D with shape [batch, height, width, channels]. depth_radius An optional int. Defaults to 5. A depth radius. bias An optional float. Defaults to 1. An offset (usually > 0 to avoid dividing by 0). alpha An optional float. Defaults to 1. A scale factor, usually positive. beta An optional float. Defaults to 0.5. An exponent. name A name for the operation (optional). Returns A Tensor. Has the same type as input_grads.
doc_23698
class sklearn.preprocessing.KBinsDiscretizer(n_bins=5, *, encode='onehot', strategy='quantile', dtype=None) [source] Bin continuous data into intervals. Read more in the User Guide. New in version 0.20. Parameters n_binsint or array-like of shape (n_features,), default=5 The number of bins to produce. Raises ValueError if n_bins < 2. encode{‘onehot’, ‘onehot-dense’, ‘ordinal’}, default=’onehot’ Method used to encode the transformed result. onehot Encode the transformed result with one-hot encoding and return a sparse matrix. Ignored features are always stacked to the right. onehot-dense Encode the transformed result with one-hot encoding and return a dense array. Ignored features are always stacked to the right. ordinal Return the bin identifier encoded as an integer value. strategy{‘uniform’, ‘quantile’, ‘kmeans’}, default=’quantile’ Strategy used to define the widths of the bins. uniform All bins in each feature have identical widths. quantile All bins in each feature have the same number of points. kmeans Values in each bin have the same nearest center of a 1D k-means cluster. dtype{np.float32, np.float64}, default=None The desired data-type for the output. If None, output dtype is consistent with input dtype. Only np.float32 and np.float64 are supported. New in version 0.24. Attributes n_bins_ndarray of shape (n_features,), dtype=np.int_ Number of bins per feature. Bins whose widths are too small (i.e., <= 1e-8) are removed with a warning. bin_edges_ndarray of ndarray of shape (n_features,) The edges of each bin. Contains arrays of varying shapes (n_bins_,). Ignored features will have empty arrays. See also Binarizer Class used to bin values as 0 or 1 based on a parameter threshold. Notes In bin edges for feature i, the first and last values are used only for inverse_transform. During transform, bin edges are extended to: np.concatenate([-np.inf, bin_edges_[i][1:-1], np.inf]) You can combine KBinsDiscretizer with ColumnTransformer if you only want to preprocess part of the features. KBinsDiscretizer might produce constant features (e.g., when encode = 'onehot' and certain bins do not contain any data). These features can be removed with feature selection algorithms (e.g., VarianceThreshold). Examples >>> X = [[-2, 1, -4, -1], ... [-1, 2, -3, -0.5], ... [ 0, 3, -2, 0.5], ... [ 1, 4, -1, 2]] >>> est = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='uniform') >>> est.fit(X) KBinsDiscretizer(...) >>> Xt = est.transform(X) >>> Xt array([[ 0., 0., 0., 0.], [ 1., 1., 1., 0.], [ 2., 2., 2., 1.], [ 2., 2., 2., 2.]]) Sometimes it may be useful to convert the data back into the original feature space. The inverse_transform function converts the binned data into the original feature space. Each value will be equal to the mean of the two bin edges. >>> est.bin_edges_[0] array([-2., -1., 0., 1.]) >>> est.inverse_transform(Xt) array([[-1.5, 1.5, -3.5, -0.5], [-0.5, 2.5, -2.5, -0.5], [ 0.5, 3.5, -1.5, 0.5], [ 0.5, 3.5, -1.5, 1.5]]) Methods fit(X[, y]) Fit the estimator. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. inverse_transform(Xt) Transform discretized data back to original feature space. set_params(**params) Set the parameters of this estimator. transform(X) Discretize the data. fit(X, y=None) [source] Fit the estimator. Parameters Xarray-like of shape (n_samples, n_features) Data to be discretized. yNone Ignored. This parameter exists only for compatibility with Pipeline.
Returns self fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. inverse_transform(Xt) [source] Transform discretized data back to original feature space. Note that this function does not regenerate the original data due to discretization rounding. Parameters Xtarray-like of shape (n_samples, n_features) Transformed data in the binned space. Returns Xinvndarray, dtype={np.float32, np.float64} Data in the original feature space. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Discretize the data. Parameters Xarray-like of shape (n_samples, n_features) Data to be discretized. Returns Xt{ndarray, sparse matrix}, dtype={np.float32, np.float64} Data in the binned space. Will be a sparse matrix if self.encode='onehot' and ndarray otherwise. Examples using sklearn.preprocessing.KBinsDiscretizer Poisson regression and non-normal loss Tweedie regression on insurance claims Using KBinsDiscretizer to discretize continuous features Demonstrating the different strategies of KBinsDiscretizer Feature discretization
doc_23699
Called for references to external entities. base is the current base, as set by a previous call to SetBase(). The public and system identifiers, systemId and publicId, are strings if given; if the public identifier is not given, publicId will be None. The context value is opaque and should only be used as described below. For external entities to be parsed, this handler must be implemented. It is responsible for creating the sub-parser using ExternalEntityParserCreate(context), initializing it with the appropriate callbacks, and parsing the entity. This handler should return an integer; if it returns 0, the parser will raise an XML_ERROR_EXTERNAL_ENTITY_HANDLING error, otherwise parsing will continue. If this handler is not provided, external entities are reported by the DefaultHandler callback, if provided.
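A hedged sketch of the handler shape (the entity is resolved to empty content here; real code would fetch and parse the resource named by systemId):

import xml.parsers.expat

parser = xml.parsers.expat.ParserCreate()

def external_entity_ref(context, base, systemId, publicId):
    # Create the sub-parser for this reference, parse the entity's
    # (here empty) replacement text, and report success.
    sub = parser.ExternalEntityParserCreate(context)
    sub.Parse(b'', True)
    return 1  # returning 0 would raise XML_ERROR_EXTERNAL_ENTITY_HANDLING

parser.ExternalEntityRefHandler = external_entity_ref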