doc_29400
File-based animated gif writer. Frames are written to temporary files on disk and then stitched together at the end. __init__(*args, **kwargs)[source] Methods __init__(*args, **kwargs) bin_path() Return the binary path to the commandline tool used by a specific subclass. cleanup() [Deprecated] finish() Finish any processing for writing the movie. grab_frame(**savefig_kwargs) Grab the image information from the figure and save as a movie frame. isAvailable() Return whether a MovieWriter subclass is actually available. saving(fig, outfile, dpi, *args, **kwargs) Context manager to facilitate writing the movie file. setup(fig, outfile[, dpi, frame_prefix]) Setup for writing the movie file. Attributes delay frame_format Format (png, jpeg, etc.) to use for saving the frames, which can be decided by the individual subclasses. frame_size A tuple (width, height) in pixels of a movie frame. output_args supported_formats supported_formats=['png', 'jpeg', 'tiff', 'raw', 'rgba']
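For illustration only (not part of the original entry), a minimal sketch of the usual file-based writer workflow. It assumes a concrete subclass such as matplotlib.animation.FFMpegFileWriter (which requires an ffmpeg installation) and an existing figure; the saving() context manager wraps setup() and finish().
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FFMpegFileWriter  # any FileMovieWriter subclass

fig, ax = plt.subplots()
line, = ax.plot([], [])
writer = FFMpegFileWriter(fps=15)

# saving() calls setup()/finish(); grab_frame() writes one temporary frame file to disk.
with writer.saving(fig, "movie.mp4", dpi=100):
    for i in range(20):
        x = np.linspace(0, 2 * np.pi, 100)
        line.set_data(x, np.sin(x + i / 5))
        ax.relim()
        ax.autoscale_view()
        writer.grab_frame()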
doc_29401
Return the clip path.
doc_29402
Return the Euclidean distance between two points p and q, each given as a sequence (or iterable) of coordinates. The two points must have the same dimension. Roughly equivalent to: sqrt(sum((px - qx) ** 2.0 for px, qx in zip(p, q))) New in version 3.8.
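An illustrative example (not part of the original entry), using Python 3.8+:
>>> import math
>>> math.dist((1.0, 2.0), (4.0, 6.0))
5.0
>>> math.dist([0, 0, 0], [1, 1, 1])
1.7320508075688772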
doc_29403
Add a director as parent.
doc_29404
The view function that would be used to serve the URL
doc_29405
The metadata of this band. The functionality is identical to GDALRaster.metadata.
doc_29406
tf.losses.MeanSquaredLogarithmicError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MeanSquaredLogarithmicError tf.keras.losses.MeanSquaredLogarithmicError( reduction=losses_utils.ReductionV2.AUTO, name='mean_squared_logarithmic_error' ) loss = square(log(y_true + 1.) - log(y_pred + 1.)) Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[1., 1.], [1., 0.]] # Using 'auto'/'sum_over_batch_size' reduction type. msle = tf.keras.losses.MeanSquaredLogarithmicError() msle(y_true, y_pred).numpy() 0.240 # Calling with 'sample_weight'. msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 0.120 # Using 'sum' reduction type. msle = tf.keras.losses.MeanSquaredLogarithmicError( reduction=tf.keras.losses.Reduction.SUM) msle(y_true, y_pred).numpy() 0.480 # Using 'none' reduction type. msle = tf.keras.losses.MeanSquaredLogarithmicError( reduction=tf.keras.losses.Reduction.NONE) msle(y_true, y_pred).numpy() array([0.240, 0.240], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredLogarithmicError()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'mean_squared_logarithmic_error'. Methods from_config @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config get_config() Returns the config dictionary for a Loss instance. __call__ __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
doc_29407
Optional type. Optional[X] is equivalent to Union[X, None]. Note that this is not the same concept as an optional argument, which is one that has a default. An optional argument with a default does not require the Optional qualifier on its type annotation just because it is optional. For example: def foo(arg: int = 0) -> None: ... On the other hand, if an explicit value of None is allowed, the use of Optional is appropriate, whether the argument is optional or not. For example: def foo(arg: Optional[int] = None) -> None: ...
doc_29408
pygame object for storing rectangular coordinates Rect(left, top, width, height) -> Rect Rect((left, top), (width, height)) -> Rect Rect(object) -> Rect Pygame uses Rect objects to store and manipulate rectangular areas. A Rect can be created from a combination of left, top, width, and height values. Rects can also be created from python objects that are already a Rect or have an attribute named "rect". Any pygame function that requires a Rect argument also accepts any of these values to construct a Rect. This makes it easier to create Rects on the fly as arguments to functions. The Rect functions that change the position or size of a Rect return a new copy of the Rect with the affected changes. The original Rect is not modified. Some methods have an alternate "in-place" version that returns None but affects the original Rect. These "in-place" methods are denoted with the "ip" suffix. The Rect object has several virtual attributes which can be used to move and align the Rect: x,y top, left, bottom, right topleft, bottomleft, topright, bottomright midtop, midleft, midbottom, midright center, centerx, centery size, width, height w,h All of these attributes can be assigned to: rect1.right = 10 rect2.center = (20,30) Assigning to size, width or height changes the dimensions of the rectangle; all other assignments move the rectangle without resizing it. Notice that some attributes are integers and others are pairs of integers. If a Rect has a nonzero width or height, it will return True for a nonzero test. Some methods return a Rect with 0 size to represent an invalid rectangle. A Rect with a 0 size will not collide when using collision detection methods (e.g. collidepoint(), colliderect(), etc.). The coordinates for Rect objects are all integers. The size values can be programmed to have negative values, but these are considered illegal Rects for most operations. There are several collision tests between other rectangles. Most python containers can be searched for collisions against a single Rect. The area covered by a Rect does not include the right- and bottom-most edge of pixels. If one Rect's bottom border is another Rect's top border (i.e., rect1.bottom=rect2.top), the two meet exactly on the screen but do not overlap, and rect1.colliderect(rect2) returns false. New in pygame 1.9.2: The Rect class can be subclassed. Methods such as copy() and move() will recognize this and return instances of the subclass. However, the subclass's __init__() method is not called, and __new__() is assumed to take no arguments. So these methods should be overridden if any extra attributes need to be copied. copy() copy the rectangle copy() -> Rect Returns a new rectangle having the same position and size as the original. New in pygame 1.9 move() moves the rectangle move(x, y) -> Rect Returns a new rectangle that is moved by the given offset. The x and y arguments can be any integer value, positive or negative. move_ip() moves the rectangle, in place move_ip(x, y) -> None Same as the Rect.move() method, but operates in place. inflate() grow or shrink the rectangle size inflate(x, y) -> Rect Returns a new rectangle with the size changed by the given offset. The rectangle remains centered around its current center. Negative values will shrink the rectangle. Note, uses integers, if the offset given is too small(< 2 > -2), center will be off. inflate_ip() grow or shrink the rectangle size, in place inflate_ip(x, y) -> None Same as the Rect.inflate() method, but operates in place. 
update() sets the position and size of the rectangle update(left, top, width, height) -> None update((left, top), (width, height)) -> None update(object) -> None Sets the position and size of the rectangle, in place. See parameters for pygame.Rect() for the parameters of this function. New in pygame 2.0.1. clamp() moves the rectangle inside another clamp(Rect) -> Rect Returns a new rectangle that is moved to be completely inside the argument Rect. If the rectangle is too large to fit inside, it is centered inside the argument Rect, but its size is not changed. clamp_ip() moves the rectangle inside another, in place clamp_ip(Rect) -> None Same as the Rect.clamp() method, but operates in place. clip() crops a rectangle inside another clip(Rect) -> Rect Returns a new rectangle that is cropped to be completely inside the argument Rect. If the two rectangles do not overlap to begin with, a Rect with 0 size is returned. clipline() crops a line inside a rectangle clipline(x1, y1, x2, y2) -> ((cx1, cy1), (cx2, cy2)) clipline(x1, y1, x2, y2) -> () clipline((x1, y1), (x2, y2)) -> ((cx1, cy1), (cx2, cy2)) clipline((x1, y1), (x2, y2)) -> () clipline((x1, y1, x2, y2)) -> ((cx1, cy1), (cx2, cy2)) clipline((x1, y1, x2, y2)) -> () clipline(((x1, y1), (x2, y2))) -> ((cx1, cy1), (cx2, cy2)) clipline(((x1, y1), (x2, y2))) -> () Returns the coordinates of a line that is cropped to be completely inside the rectangle. If the line does not overlap the rectangle, then an empty tuple is returned. The line to crop can be any of the following formats (floats can be used in place of ints, but they will be truncated): four ints 2 lists/tuples/Vector2s of 2 ints a list/tuple of four ints a list/tuple of 2 lists/tuples/Vector2s of 2 ints Returns: a tuple with the coordinates of the given line cropped to be completely inside the rectangle is returned, if the given line does not overlap the rectangle, an empty tuple is returned Return type: tuple(tuple(int, int), tuple(int, int)) or () Raises: TypeError -- if the line coordinates are not given as one of the above described line formats Note This method can be used for collision detection between a rect and a line. See example code below. Note The rect.bottom and rect.right attributes of a pygame.Rect always lie one pixel outside of its actual border. # Example using clipline(). clipped_line = rect.clipline(line) if clipped_line: # If clipped_line is not an empty tuple then the line # collides/overlaps with the rect. The returned value contains # the endpoints of the clipped line. start, end = clipped_line x1, y1 = start x2, y2 = end else: print("No clipping. The line is fully outside the rect.") New in pygame 2.0.0. union() joins two rectangles into one union(Rect) -> Rect Returns a new rectangle that completely covers the area of the two provided rectangles. There may be area inside the new Rect that is not covered by the originals. union_ip() joins two rectangles into one, in place union_ip(Rect) -> None Same as the Rect.union() method, but operates in place. unionall() the union of many rectangles unionall(Rect_sequence) -> Rect Returns the union of one rectangle with a sequence of many rectangles. unionall_ip() the union of many rectangles, in place unionall_ip(Rect_sequence) -> None The same as the Rect.unionall() method, but operates in place. fit() resize and move a rectangle with aspect ratio fit(Rect) -> Rect Returns a new rectangle that is moved and resized to fit another. 
The aspect ratio of the original Rect is preserved, so the new rectangle may be smaller than the target in either width or height. normalize() correct negative sizes normalize() -> None This will flip the width or height of a rectangle if it has a negative size. The rectangle will remain in the same place, with only the sides swapped. contains() test if one rectangle is inside another contains(Rect) -> bool Returns true when the argument is completely inside the Rect. collidepoint() test if a point is inside a rectangle collidepoint(x, y) -> bool collidepoint((x,y)) -> bool Returns true if the given point is inside the rectangle. A point along the right or bottom edge is not considered to be inside the rectangle. Note For collision detection between a rect and a line the clipline() method can be used. colliderect() test if two rectangles overlap colliderect(Rect) -> bool Returns true if any portion of either rectangle overlap (except the top+bottom or left+right edges). Note For collision detection between a rect and a line the clipline() method can be used. collidelist() test if one rectangle in a list intersects collidelist(list) -> index Test whether the rectangle collides with any in a sequence of rectangles. The index of the first collision found is returned. If no collisions are found an index of -1 is returned. collidelistall() test if all rectangles in a list intersect collidelistall(list) -> indices Returns a list of all the indices that contain rectangles that collide with the Rect. If no intersecting rectangles are found, an empty list is returned. collidedict() test if one rectangle in a dictionary intersects collidedict(dict) -> (key, value) collidedict(dict) -> None collidedict(dict, use_values=0) -> (key, value) collidedict(dict, use_values=0) -> None Returns the first key and value pair that intersects with the calling Rect object. If no collisions are found, None is returned. If use_values is 0 (default) then the dict's keys will be used in the collision detection, otherwise the dict's values will be used. Note Rect objects cannot be used as keys in a dictionary (they are not hashable), so they must be converted to a tuple/list. e.g. rect.collidedict({tuple(key_rect) : value}) collidedictall() test if all rectangles in a dictionary intersect collidedictall(dict) -> [(key, value), ...] collidedictall(dict, use_values=0) -> [(key, value), ...] Returns a list of all the key and value pairs that intersect with the calling Rect object. If no collisions are found an empty list is returned. If use_values is 0 (default) then the dict's keys will be used in the collision detection, otherwise the dict's values will be used. Note Rect objects cannot be used as keys in a dictionary (they are not hashable), so they must be converted to a tuple/list. e.g. rect.collidedictall({tuple(key_rect) : value})
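An illustrative sketch (not part of the original docs) of basic Rect construction, virtual-attribute assignment, and the collision methods described above; the rectangle sizes are arbitrary:
import pygame

player = pygame.Rect(10, 20, 32, 32)     # left, top, width, height
wall = pygame.Rect((0, 0), (100, 10))    # (left, top), (width, height)
player.center = (50, 50)                 # assigning a virtual attribute moves the rect
moved = player.move(5, 0)                # returns a new Rect; player itself is unchanged
if moved.colliderect(wall):
    print("hit the wall")
if wall.collidepoint(50, 5):
    print("point is inside the wall rect")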
doc_29409
Integer representation of the values. Returns ndarray An ndarray with int64 dtype.
doc_29410
Return a short string version of the tick value. Defaults to the position-independent long value.
doc_29411
Return the number of non-overlapping occurrences of substring sub in the range [start, end]. Optional arguments start and end are interpreted as in slice notation.
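For example (illustrative, not part of the original entry):
>>> "banana".count("an")
2
>>> "banana".count("an", 2)     # start counting at index 2
1
>>> "banana".count("na", 0, 4)  # only within s[0:4]
1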
doc_29412
C-Support Vector Classification. The implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples. For large datasets consider using LinearSVC or SGDClassifier instead, possibly after a Nystroem transformer. The multiclass support is handled according to a one-vs-one scheme. For details on the precise mathematical formulation of the provided kernel functions and how gamma, coef0 and degree affect each other, see the corresponding section in the narrative documentation: Kernel functions. Read more in the User Guide. Parameters Cfloat, default=1.0 Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty. kernel{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’ Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples). degreeint, default=3 Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. gamma{‘scale’, ‘auto’} or float, default=’scale’ Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’. if gamma='scale' (default) is passed then it uses 1 / (n_features * X.var()) as value of gamma, if ‘auto’, uses 1 / n_features. Changed in version 0.22: The default value of gamma changed from ‘auto’ to ‘scale’. coef0float, default=0.0 Independent term in kernel function. It is only significant in ‘poly’ and ‘sigmoid’. shrinkingbool, default=True Whether to use the shrinking heuristic. See the User Guide. probabilitybool, default=False Whether to enable probability estimates. This must be enabled prior to calling fit, will slow down that method as it internally uses 5-fold cross-validation, and predict_proba may be inconsistent with predict. Read more in the User Guide. tolfloat, default=1e-3 Tolerance for stopping criterion. cache_sizefloat, default=200 Specify the size of the kernel cache (in MB). class_weightdict or ‘balanced’, default=None Set the parameter C of class i to class_weight[i]*C for SVC. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)) verbosebool, default=False Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context. max_iterint, default=-1 Hard limit on iterations within solver, or -1 for no limit. decision_function_shape{‘ovo’, ‘ovr’}, default=’ovr’ Whether to return a one-vs-rest (‘ovr’) decision function of shape (n_samples, n_classes) as all other classifiers, or the original one-vs-one (‘ovo’) decision function of libsvm which has shape (n_samples, n_classes * (n_classes - 1) / 2). However, one-vs-one (‘ovo’) is always used as multi-class strategy. The parameter is ignored for binary classification. Changed in version 0.19: decision_function_shape is ‘ovr’ by default. New in version 0.17: decision_function_shape=’ovr’ is recommended. Changed in version 0.17: Deprecated decision_function_shape=’ovo’ and None. 
break_tiesbool, default=False If true, decision_function_shape='ovr', and number of classes > 2, predict will break ties according to the confidence values of decision_function; otherwise the first class among the tied classes is returned. Please note that breaking ties comes at a relatively high computational cost compared to a simple predict. New in version 0.22. random_stateint, RandomState instance or None, default=None Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes class_weight_ndarray of shape (n_classes,) Multipliers of parameter C for each class. Computed based on the class_weight parameter. classes_ndarray of shape (n_classes,) The classes labels. coef_ndarray of shape (n_classes * (n_classes - 1) / 2, n_features) Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is a readonly property derived from dual_coef_ and support_vectors_. dual_coef_ndarray of shape (n_classes -1, n_SV) Dual coefficients of the support vector in the decision function (see Mathematical formulation), multiplied by their targets. For multiclass, coefficient for all 1-vs-1 classifiers. The layout of the coefficients in the multiclass case is somewhat non-trivial. See the multi-class section of the User Guide for details. fit_status_int 0 if correctly fitted, 1 otherwise (will raise warning) intercept_ndarray of shape (n_classes * (n_classes - 1) / 2,) Constants in decision function. support_ndarray of shape (n_SV) Indices of support vectors. support_vectors_ndarray of shape (n_SV, n_features) Support vectors. n_support_ndarray of shape (n_classes,), dtype=int32 Number of support vectors for each class. probA_ndarray of shape (n_classes * (n_classes - 1) / 2) probB_ndarray of shape (n_classes * (n_classes - 1) / 2) If probability=True, it corresponds to the parameters learned in Platt scaling to produce probability estimates from decision values. If probability=False, it’s an empty array. Platt scaling uses the logistic function 1 / (1 + exp(decision_value * probA_ + probB_)) where probA_ and probB_ are learned from the dataset [2]. For more information on the multiclass case and training procedure see section 8 of [1]. shape_fit_tuple of int of shape (n_dimensions_of_X,) Array dimensions of training vector X. See also SVR Support Vector Machine for Regression implemented using libsvm. LinearSVC Scalable Linear Support Vector Machine for classification implemented using liblinear. Check the See Also section of LinearSVC for more comparison element. References 1 LIBSVM: A Library for Support Vector Machines 2 Platt, John (1999). “Probabilistic outputs for support vector machines and comparison to regularizedlikelihood methods.” Examples >>> import numpy as np >>> from sklearn.pipeline import make_pipeline >>> from sklearn.preprocessing import StandardScaler >>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]]) >>> y = np.array([1, 1, 2, 2]) >>> from sklearn.svm import SVC >>> clf = make_pipeline(StandardScaler(), SVC(gamma='auto')) >>> clf.fit(X, y) Pipeline(steps=[('standardscaler', StandardScaler()), ('svc', SVC(gamma='auto'))]) >>> print(clf.predict([[-0.8, -1]])) [1] Methods decision_function(X) Evaluates the decision function for the samples in X. fit(X, y[, sample_weight]) Fit the SVM model according to the given training data. 
get_params([deep]) Get parameters for this estimator. predict(X) Perform classification on samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. decision_function(X) [source] Evaluates the decision function for the samples in X. Parameters Xarray-like of shape (n_samples, n_features) Returns Xndarray of shape (n_samples, n_classes * (n_classes-1) / 2) Returns the decision function of the sample for each class in the model. If decision_function_shape=’ovr’, the shape is (n_samples, n_classes). Notes If decision_function_shape=’ovo’, the function values are proportional to the distance of the samples X to the separating hyperplane. If the exact distances are required, divide the function values by the norm of the weight vector (coef_). See also this question for further details. If decision_function_shape=’ovr’, the decision function is a monotonic transformation of ovo decision function. fit(X, y, sample_weight=None) [source] Fit the SVM model according to the given training data. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel=”precomputed”, the expected shape of X is (n_samples, n_samples). yarray-like of shape (n_samples,) Target values (class labels in classification, real numbers in regression). sample_weightarray-like of shape (n_samples,), default=None Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points. Returns selfobject Notes If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr_matrix, X and/or y may be copied. If X is a dense array, then the other methods will not support sparse matrices as input. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Perform classification on samples in X. For an one-class model, +1 or -1 is returned. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples_test, n_samples_train) For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns y_predndarray of shape (n_samples,) Class labels for samples in X. property predict_log_proba Compute log probabilities of possible outcomes for samples in X. The model need to have probability information computed at training time: fit with attribute probability set to True. Parameters Xarray-like of shape (n_samples, n_features) or (n_samples_test, n_samples_train) For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns Tndarray of shape (n_samples, n_classes) Returns the log-probabilities of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. Notes The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets. property predict_proba Compute probabilities of possible outcomes for samples in X. 
The model need to have probability information computed at training time: fit with attribute probability set to True. Parameters Xarray-like of shape (n_samples, n_features) For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns Tndarray of shape (n_samples, n_classes) Returns the probability of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. Notes The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
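As a complement to the fit/predict example above, a hedged sketch of probability estimates: probability must be enabled before calling fit, and since the estimates come from internal cross-validation the exact values vary, so only the output shape is shown here.
>>> import numpy as np
>>> from sklearn.svm import SVC
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> clf = SVC(gamma='auto', probability=True, random_state=0)
>>> clf.fit(X, y)
SVC(gamma='auto', probability=True, random_state=0)
>>> clf.predict_proba([[-0.8, -1]]).shape   # one row per sample, one column per class
(1, 2)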
doc_29413
Sticky bit. When this bit is set on a directory it means that a file in that directory can be renamed or deleted only by the owner of the file, by the owner of the directory, or by a privileged process.
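An illustrative check (not part of the original entry), assuming this describes the standard library's stat.S_ISVTX constant and a Unix-like system where /tmp typically carries the sticky bit:
import os
import stat

mode = os.stat("/tmp").st_mode
if mode & stat.S_ISVTX:
    print("/tmp has the sticky bit set")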
doc_29414
Returns the average of the array elements along given axis. Masked entries are ignored, and result elements which are not finite will be masked. Refer to numpy.mean for full documentation. See also numpy.ndarray.mean corresponding function for ndarrays numpy.mean Equivalent function numpy.ma.average Weighted average. Examples >>> a = np.ma.array([1,2,3], mask=[False, False, True]) >>> a masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> a.mean() 1.5
doc_29415
See Migration guide for more details. tf.compat.v1.raw_ops.Unbatch tf.raw_ops.Unbatch( batched_tensor, batch_index, id, timeout_micros, container='', shared_name='', name=None ) An instance of Unbatch either receives an empty batched_tensor, in which case it asynchronously waits until the values become available from a concurrently running instance of Unbatch with the same container and shared_name, or receives a non-empty batched_tensor in which case it finalizes all other concurrently running instances and outputs its own element from the batch. batched_tensor: The possibly transformed output of Batch. The size of the first dimension should remain unchanged by the transformations for the operation to work. batch_index: The matching batch_index obtained from Batch. id: The id scalar emitted by Batch. unbatched_tensor: The Tensor corresponding to this execution. timeout_micros: Maximum amount of time (in microseconds) to wait to receive the batched input tensor associated with a given invocation of the op. container: Container to control resource sharing. shared_name: Instances of Unbatch with the same container and shared_name are assumed to possibly belong to the same batch. If left empty, the op name will be used as the shared name. Args batched_tensor A Tensor. batch_index A Tensor of type int64. id A Tensor of type int64. timeout_micros An int. container An optional string. Defaults to "". shared_name An optional string. Defaults to "". name A name for the operation (optional). Returns A Tensor. Has the same type as batched_tensor.
doc_29416
initialize all imported pygame modules init() -> (numpass, numfail) Initialize all imported pygame modules. No exceptions will be raised if a module fails, but the total number of successful and failed inits will be returned as a tuple. You can always initialize individual modules manually, but pygame.init() is a convenient way to get everything started. The init() functions for individual modules will raise exceptions when they fail. You may want to initialize the different modules separately to speed up your program or to not use modules your game does not require. It is safe to call this init() more than once as repeated calls will have no effect. This is true even if you have called pygame.quit() to uninitialize all the modules.
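A short illustrative snippet (not part of the original docs):
import pygame

numpass, numfail = pygame.init()
print(f"{numpass} modules initialized, {numfail} failed")
# Individual modules can also be initialized directly and will raise on failure:
pygame.font.init()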
doc_29417
Return the value of the (natural) exponential function e**x at the given number. The result is correctly rounded using the ROUND_HALF_EVEN rounding mode. >>> Decimal(1).exp() Decimal('2.718281828459045235360287471') >>> Decimal(321).exp() Decimal('2.561702493119680037517373933E+139')
doc_29418
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (dirpath, dirnames, filenames). dirpath is a string, the path to the directory. dirnames is a list of the names of the subdirectories in dirpath (excluding '.' and '..'). filenames is a list of the names of the non-directory files in dirpath. Note that the names in the lists contain no path components. To get a full path (which begins with top) to a file or directory in dirpath, do os.path.join(dirpath, name). Whether or not the lists are sorted depends on the file system. If a file is removed from or added to the dirpath directory during generating the lists, whether a name for that file be included is unspecified. If optional argument topdown is True or not specified, the triple for a directory is generated before the triples for any of its subdirectories (directories are generated top-down). If topdown is False, the triple for a directory is generated after the triples for all of its subdirectories (directories are generated bottom-up). No matter the value of topdown, the list of subdirectories is retrieved before the tuples for the directory and its subdirectories are generated. When topdown is True, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search, impose a specific order of visiting, or even to inform walk() about directories the caller creates or renames before it resumes walk() again. Modifying dirnames when topdown is False has no effect on the behavior of the walk, because in bottom-up mode the directories in dirnames are generated before dirpath itself is generated. By default, errors from the scandir() call are ignored. If optional argument onerror is specified, it should be a function; it will be called with one argument, an OSError instance. It can report the error to continue with the walk, or raise the exception to abort the walk. Note that the filename is available as the filename attribute of the exception object. By default, walk() will not walk down into symbolic links that resolve to directories. Set followlinks to True to visit directories pointed to by symlinks, on systems that support them. Note Be aware that setting followlinks to True can lead to infinite recursion if a link points to a parent directory of itself. walk() does not keep track of the directories it visited already. Note If you pass a relative pathname, don’t change the current working directory between resumptions of walk(). walk() never changes the current directory, and assumes that its caller doesn’t either. 
This example displays the number of bytes taken by non-directory files in each directory under the starting directory, except that it doesn’t look under any CVS subdirectory: import os from os.path import join, getsize for root, dirs, files in os.walk('python/Lib/email'): print(root, "consumes", end=" ") print(sum(getsize(join(root, name)) for name in files), end=" ") print("bytes in", len(files), "non-directory files") if 'CVS' in dirs: dirs.remove('CVS') # don't visit CVS directories In the next example (simple implementation of shutil.rmtree()), walking the tree bottom-up is essential, rmdir() doesn’t allow deleting a directory before the directory is empty: # Delete everything reachable from the directory named in "top", # assuming there are no symbolic links. # CAUTION: This is dangerous! For example, if top == '/', it # could delete all your disk files. import os for root, dirs, files in os.walk(top, topdown=False): for name in files: os.remove(os.path.join(root, name)) for name in dirs: os.rmdir(os.path.join(root, name)) Raises an auditing event os.walk with arguments top, topdown, onerror, followlinks. Changed in version 3.5: This function now calls os.scandir() instead of os.listdir(), making it faster by reducing the number of calls to os.stat(). Changed in version 3.6: Accepts a path-like object.
doc_29419
Return the Transform instance used by this artist.
doc_29420
Return the transpose, which is by definition self. Returns: an object of the same type as the caller.
doc_29421
Mask rows and/or columns of a 2D array that contain masked values. Mask whole rows and/or columns of a 2D array that contain masked values. The masking behavior is selected using the axis parameter. If axis is None, rows and columns are masked. If axis is 0, only rows are masked. If axis is 1 or -1, only columns are masked. Parameters aarray_like, MaskedArray The array to mask. If not a MaskedArray instance (or if no array elements are masked). The result is a MaskedArray with mask set to nomask (False). Must be a 2D array. axisint, optional Axis along which to perform the operation. If None, applies to a flattened version of the array. Returns aMaskedArray A modified version of the input array, masked depending on the value of the axis parameter. Raises NotImplementedError If input array a is not 2D. See also mask_rows Mask rows of a 2D array that contain masked values. mask_cols Mask cols of a 2D array that contain masked values. masked_where Mask where a condition is met. Notes The input array’s mask is modified by this function. Examples >>> import numpy.ma as ma >>> a = np.zeros((3, 3), dtype=int) >>> a[1, 1] = 1 >>> a array([[0, 0, 0], [0, 1, 0], [0, 0, 0]]) >>> a = ma.masked_equal(a, 1) >>> a masked_array( data=[[0, 0, 0], [0, --, 0], [0, 0, 0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1) >>> ma.mask_rowcols(a) masked_array( data=[[0, --, 0], [--, --, --], [0, --, 0]], mask=[[False, True, False], [ True, True, True], [False, True, False]], fill_value=1)
doc_29422
Sets the data buffer unpack position to position. You should be careful about using get_position() and set_position().
doc_29423
The Click command group for registering CLI commands for this object. The commands are available from the flask command once the application has been discovered and blueprints have been registered.
doc_29424
See Migration guide for more details. tf.compat.v1.initializers.tables_initializer tf.compat.v1.tables_initializer( name='init_all_tables' ) See the Low Level Intro guide, for an example of usage. Args name Optional name for the initialization op. Returns An Op that initializes all tables. Note that if there are not tables the returned Op is a NoOp.
doc_29425
Raised when a future operation exceeds the given timeout.
doc_29426
Return symmetric conditional entropies associated with the VI. [1] The variation of information is defined as VI(X,Y) = H(X|Y) + H(Y|X). If X is the ground-truth segmentation, then H(X|Y) can be interpreted as the amount of under-segmentation and H(Y|X) as the amount of over-segmentation. In other words, a perfect over-segmentation will have H(X|Y)=0 and a perfect under-segmentation will have H(Y|X)=0. Parameters image0, image1ndarray of int Label images / segmentations, must have same shape. tablescipy.sparse array in csr format, optional A contingency table built with skimage.evaluate.contingency_table. If None, it will be computed with skimage.evaluate.contingency_table. If given, the entropies will be computed from this table and any images will be ignored. ignore_labelssequence of int, optional Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score. Returns vindarray of float, shape (2,) The conditional entropies of image1|image0 and image0|image1. References 1 Marina Meilă (2007), Comparing clusterings—an information based distance, Journal of Multivariate Analysis, Volume 98, Issue 5, Pages 873-895, ISSN 0047-259X, DOI:10.1016/j.jmva.2006.11.013.
doc_29427
Keymap to associate with this tool. list[str]: List of keys that will trigger this tool when a keypress event is emitted on self.figure.canvas.
doc_29428
Allows control over sharing of browsing context group with cross-origin documents. Values must be a member of the werkzeug.http.COOP enum.
doc_29429
Returns the state of the optimizer as a dict. It contains two entries: state - a dict holding current optimization state. Its content differs between optimizer classes. param_groups - a dict containing all parameter groups
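An illustrative sketch (not part of the original entry), assuming a small PyTorch model; a common use of state_dict() is checkpointing the optimizer alongside the model:
import torch

model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

sd = opt.state_dict()
print(sd.keys())           # dict_keys(['state', 'param_groups'])
print(sd['param_groups'])  # hyperparameters and the parameter ids in each group
torch.save(sd, "opt.pt")   # save for later restoration via opt.load_state_dict(...)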
doc_29430
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters tf.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters( num_shards, shard_id, table_id=-1, table_name='', config='', name=None ) An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint. Args num_shards An int. shard_id An int. table_id An optional int. Defaults to -1. table_name An optional string. Defaults to "". config An optional string. Defaults to "". name A name for the operation (optional). Returns A tuple of Tensor objects (parameters, ms, mom, mg). parameters A Tensor of type float32. ms A Tensor of type float32. mom A Tensor of type float32. mg A Tensor of type float32.
doc_29431
See Migration guide for more details. tf.compat.v1.keras.utils.register_keras_serializable tf.keras.utils.register_keras_serializable( package='Custom', name=None ) This decorator injects the decorated class or function into the Keras custom object dictionary, so that it can be serialized and deserialized without needing an entry in the user-provided custom object dict. It also injects a function that Keras will call to get the object's serializable string key. Note that to be serialized and deserialized, classes must implement the get_config() method. Functions do not have this requirement. The object will be registered under the key 'package>name', where name defaults to the object name if not passed. Arguments package The package that this class belongs to. name The name to serialize this class under in this package. If None, the class' name will be used. Returns A decorator that registers the decorated class with the passed names.
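An illustrative sketch (not part of the original docs); the package and class names are made up for the example, and get_config() is implemented because registered classes must support it:
import tensorflow as tf

@tf.keras.utils.register_keras_serializable(package='MyPackage')
class ScaledDense(tf.keras.layers.Layer):
    def __init__(self, units=8, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.dense = tf.keras.layers.Dense(units)

    def call(self, inputs):
        return 2.0 * self.dense(inputs)

    def get_config(self):
        # Required so the class can be serialized and deserialized.
        config = super().get_config()
        config.update({'units': self.units})
        return config

# The class is now registered under the key 'MyPackage>ScaledDense'.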
doc_29432
Returns True if the type of element is a scalar type. Parameters elementany Input argument, can be of any type and shape. Returns valbool True if element is a scalar type, False if it is not. See also ndim Get the number of dimensions of an array Notes If you need a stricter way to identify a numerical scalar, use isinstance(x, numbers.Number), as that returns False for most non-numerical elements such as strings. In most cases np.ndim(x) == 0 should be used instead of this function, as that will also return true for 0d arrays. This is how numpy overloads functions in the style of the dx arguments to gradient and the bins argument to histogram. Some key differences (columns: x, isscalar(x), np.ndim(x) == 0):
PEP 3141 numeric objects (including builtins): True, True
builtin string and buffer objects: True, True
other builtin objects, like pathlib.Path, Exception, the result of re.compile: False, True
third-party objects like matplotlib.figure.Figure: False, True
zero-dimensional numpy arrays: False, True
other numpy arrays: False, False
list, tuple, and other sequence objects: False, False
Examples >>> np.isscalar(3.1) True >>> np.isscalar(np.array(3.1)) False >>> np.isscalar([3.1]) False >>> np.isscalar(False) True >>> np.isscalar('numpy') True NumPy supports PEP 3141 numbers: >>> from fractions import Fraction >>> np.isscalar(Fraction(5, 17)) True >>> from numbers import Number >>> np.isscalar(Number()) True
doc_29433
For scoped addresses as defined by RFC 4007, this property identifies the particular zone of the address’s scope that the address belongs to, as a string. When no scope zone is specified, this property will be None.
doc_29434
Set the value array from array-like A. Parameters Aarray-like or None The values that are mapped to colors. The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the value array A.
doc_29435
See Migration guide for more details. tf.compat.v1.keras.constraints.deserialize tf.keras.constraints.deserialize( config, custom_objects=None )
doc_29436
Call self as a function.
doc_29437
Passive Aggressive Classifier Read more in the User Guide. Parameters Cfloat, default=1.0 Maximum step size (regularization). Defaults to 1.0. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. max_iterint, default=1000 The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19. tolfloat or None, default=1e-3 The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol). New in version 0.19. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a stratified fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20. validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20. n_iter_no_changeint, default=5 Number of iterations with no improvement to wait before early stopping. New in version 0.20. shufflebool, default=True Whether or not the training data should be shuffled after each epoch. verboseinteger, default=0 The verbosity level. lossstring, default=”hinge” The loss function to be used: hinge: equivalent to PA-I in the reference paper. squared_hinge: equivalent to PA-II in the reference paper. n_jobsint or None, default=None The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. random_stateint, RandomState instance, default=None Used to shuffle the training data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary. warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. class_weightdict, {class_label: weight} or “balanced” or None, default=None Preset for the class_weight fit parameter. Weights associated with classes. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)) New in version 0.17: parameter class_weight to automatically weight samples. averagebool or int, default=False When set to True, computes the averaged SGD weights and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. New in version 0.19: parameter average to use weights averaging in SGD Attributes coef_array, shape = [1, n_features] if n_classes == 2 else [n_classes, n_features] Weights assigned to the features. intercept_array, shape = [1] if n_classes == 2 else [n_classes] Constants in decision function.
n_iter_int The actual number of iterations to reach the stopping criterion. For multiclass fits, it is the maximum over every binary fit. classes_array of shape (n_classes,) The unique classes labels. t_int Number of weight updates performed during training. Same as (n_iter_ * n_samples). loss_function_callable Loss function used by the algorithm. See also SGDClassifier Perceptron References Online Passive-Aggressive Algorithms <http://jmlr.csail.mit.edu/papers/volume7/crammer06a/crammer06a.pdf> K. Crammer, O. Dekel, J. Keshat, S. Shalev-Shwartz, Y. Singer - JMLR (2006) Examples >>> from sklearn.linear_model import PassiveAggressiveClassifier >>> from sklearn.datasets import make_classification >>> X, y = make_classification(n_features=4, random_state=0) >>> clf = PassiveAggressiveClassifier(max_iter=1000, random_state=0, ... tol=1e-3) >>> clf.fit(X, y) PassiveAggressiveClassifier(random_state=0) >>> print(clf.coef_) [[0.26642044 0.45070924 0.67251877 0.64185414]] >>> print(clf.intercept_) [1.84127814] >>> print(clf.predict([[0, 0, 0, 0]])) [1] Methods decision_function(X) Predict confidence scores for samples. densify() Convert coefficient matrix to dense array format. fit(X, y[, coef_init, intercept_init]) Fit linear model with Passive Aggressive algorithm. get_params([deep]) Get parameters for this estimator. partial_fit(X, y[, classes]) Fit linear model with Passive Aggressive algorithm. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**kwargs) Set and validate the parameters of estimator. sparsify() Convert coefficient matrix to sparse format. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, coef_init=None, intercept_init=None) [source] Fit linear model with Passive Aggressive algorithm. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data ynumpy array of shape [n_samples] Target values coef_initarray, shape = [n_classes,n_features] The initial coefficients to warm-start the optimization. intercept_initarray, shape = [n_classes] The initial intercept to warm-start the optimization. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. partial_fit(X, y, classes=None) [source] Fit linear model with Passive Aggressive algorithm. 
Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Subset of the training data ynumpy array of shape [n_samples] Subset of the target values classesarray, shape = [n_classes] Classes across all calls to partial_fit. Can be obtained by via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes. Returns selfreturns an instance of self. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance. sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
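An illustrative sketch (not part of the original docs) of incremental training with partial_fit; the classes argument is required on the first call and may be omitted afterwards, and the dataset here is synthetic:
>>> import numpy as np
>>> from sklearn.linear_model import PassiveAggressiveClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=200, n_features=4, random_state=0)
>>> clf = PassiveAggressiveClassifier(random_state=0)
>>> clf.partial_fit(X[:100], y[:100], classes=np.unique(y))
PassiveAggressiveClassifier(random_state=0)
>>> clf.partial_fit(X[100:], y[100:])   # classes can be omitted on later calls
PassiveAggressiveClassifier(random_state=0)
>>> clf.predict(X[:2]).shape
(2,)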
doc_29438
See Migration guide for more details. tf.compat.v1.raw_ops.Erf tf.raw_ops.Erf( x, name=None ) Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
doc_29439
Abstract base class for classes implementing mesh refinement. A TriRefiner encapsulates a Triangulation object and provides tools for mesh refinement and interpolation. Derived classes must implement: refine_triangulation(return_tri_index=False, **kwargs) , where the optional keyword arguments kwargs are defined in each TriRefiner concrete implementation, and which returns: a refined triangulation, optionally (depending on return_tri_index), for each point of the refined triangulation: the index of the initial triangulation triangle to which it belongs. refine_field(z, triinterpolator=None, **kwargs), where: z array of field values (to refine) defined at the base triangulation nodes, triinterpolator is an optional TriInterpolator, the other optional keyword arguments kwargs are defined in each TriRefiner concrete implementation; and which returns (as a tuple) a refined triangular mesh and the interpolated values of the field at the refined triangulation nodes.
doc_29440
Each non-abstract Model class must have a Manager instance added to it. Django ensures that in your model class you have at least a default Manager specified. If you don’t add your own Manager, Django will add an attribute objects containing default Manager instance. If you add your own Manager instance attribute, the default one does not appear. Consider the following example: from django.db import models class Person(models.Model): # Add manager with another name people = models.Manager() For more details on model managers see Managers and Retrieving objects.
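For illustration (not part of the original entry), with the model above queries go through the renamed manager rather than the default one:
Person.people.all()     # works: 'people' is the model's manager
Person.objects.all()    # AttributeError: the default 'objects' manager was not added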
doc_29441
Integrate. Return a series instance that is the definite integral of the current series. Parameters mnon-negative int The number of integrations to perform. karray_like Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must less than or equal to m in length and any missing values are set to zero. lbndScalar The lower bound of the definite integral. Returns new_seriesseries A new series representing the integral. The domain is the same as the domain of the integrated series.
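An illustrative sketch (not part of the original entry), using the Polynomial series class; only the coefficient arrays are shown since the full repr varies between NumPy versions:
>>> from numpy.polynomial import Polynomial
>>> p = Polynomial([1, 2, 3])              # 1 + 2x + 3x**2
>>> p.integ().coef                         # one integration, constant defaults to 0
array([0., 1., 1., 1.])
>>> p.integ(m=1, k=[5], lbnd=0).coef       # integration constant 5, lower bound 0
array([5., 1., 1., 1.])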
doc_29442
Index Attribute Meaning 0 gr_name the name of the group 1 gr_passwd the (encrypted) group password; often empty 2 gr_gid the numerical group ID 3 gr_mem all the group member’s user names The gid is an integer, name and password are strings, and the member list is a list of strings. (Note that most users are not explicitly listed as members of the group they are in according to the password database. Check both databases to get complete membership information. Also note that a gr_name that starts with a + or - is likely to be a YP/NIS reference and may not be accessible via getgrnam() or getgrgid().) It defines the following items: grp.getgrgid(gid) Return the group database entry for the given numeric group ID. KeyError is raised if the entry asked for cannot be found. Deprecated since version 3.6: Since Python 3.6 the support of non-integer arguments like floats or strings in getgrgid() is deprecated. grp.getgrnam(name) Return the group database entry for the given group name. KeyError is raised if the entry asked for cannot be found. grp.getgrall() Return a list of all available group entries, in arbitrary order. See also Module pwd An interface to the user database, similar to this. Module spwd An interface to the shadow password database, similar to this.
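An illustrative sketch (not part of the original entry); the grp module is Unix-only, and the actual group names, IDs, and member lists depend on the system:
import grp

g = grp.getgrgid(0)                       # usually 'root' or 'wheel'
print(g.gr_name, g.gr_gid, g.gr_mem)
print(grp.getgrnam(g.gr_name).gr_gid)     # round-trips to the same gid
print(len(grp.getgrall()), "group entries")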
doc_29443
Returns a copy of the calling offset object with n=1 and all other attributes equal.
doc_29444
Define the picking behavior of the artist. Parameters pickerNone or bool or float or callable This can be one of the following: None: Picking is disabled for this artist (default). A boolean: If True then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event A function: If picker is callable, it is a user supplied function which determines whether the artist is hit by the mouse event: hit, props = picker(artist, mouseevent) to determine the hit test. if the mouse event is over the artist, return hit=True and props is a dictionary of properties you want added to the PickEvent attributes.
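An illustrative sketch (not part of the original docs) using a float tolerance picker together with a pick-event handler; the data here is random and the 5-point tolerance is arbitrary:
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot(np.random.rand(10), 'o')
line.set_picker(5)   # fire a pick event when the mouse is within 5 points of the data

def on_pick(event):
    # For Line2D artists, event.ind holds the indices of the points within tolerance.
    print("picked points:", event.ind)

fig.canvas.mpl_connect('pick_event', on_pick)
plt.show()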
doc_29445
Transform functor that applies a sequence of transforms tseq component-wise to each submatrix at dim, in a way compatible with torch.stack(). Example: x = torch.stack([torch.arange(1., 11.), torch.arange(1., 11.)], dim=1) t = StackTransform([ExpTransform(), identity_transform], dim=1) y = t(x)
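A self-contained version of the example above (imports spelled out; values illustrative):

import torch
from torch.distributions.transforms import ExpTransform, StackTransform, identity_transform

x = torch.stack([torch.arange(1., 11.), torch.arange(1., 11.)], dim=1)   # shape (10, 2)
t = StackTransform([ExpTransform(), identity_transform], dim=1)
y = t(x)          # column 0 is exponentiated, column 1 passes through unchanged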
doc_29446
The dictionary of converters. This can be modified after the class was created, but will only affect rules added after the modification. If the rules are defined with the list passed to the class, the converters parameter to the constructor has to be used instead.
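A hedged sketch of both registration styles; the ListConverter class and URL rules are illustrative.

from werkzeug.routing import BaseConverter, Map, Rule

class ListConverter(BaseConverter):
    def to_python(self, value):
        return value.split('+')

    def to_url(self, value):
        return '+'.join(super().to_url(v) for v in value)

# Style 1: pass converters to the constructor (needed when the rules are given in the list).
url_map = Map([Rule('/tags/<list:tags>', endpoint='tags')],
              converters={'list': ListConverter})

# Style 2: modify the dictionary afterwards; this only affects rules added later.
url_map.converters['csv'] = ListConverter
url_map.add(Rule('/columns/<csv:names>', endpoint='columns'))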
doc_29447
Extract DAISY feature descriptors densely for the given image. DAISY is a feature descriptor similar to SIFT formulated in a way that allows for fast dense extraction. Typically, this is practical for bag-of-features image representations. The implementation follows Tola et al. [1] but deviates on the following points: Histogram bin contributions are smoothed with a circular Gaussian window over the tonal range (the angular range). The sigma values of the spatial Gaussian smoothing in this code do not match the sigma values in the original code by Tola et al. [2]. In their code, spatial smoothing is applied to both the input image and the center histogram. However, this smoothing is not documented in [1] and, therefore, it is omitted. Parameters image (M, N) array Input image (grayscale). step int, optional Distance between descriptor sampling points. radius int, optional Radius (in pixels) of the outermost ring. rings int, optional Number of rings. histograms int, optional Number of histograms sampled per ring. orientations int, optional Number of orientations (bins) per histogram. normalization [ ‘l1’ | ‘l2’ | ‘daisy’ | ‘off’ ], optional How to normalize the descriptors: ‘l1’: L1-normalization of each descriptor. ‘l2’: L2-normalization of each descriptor. ‘daisy’: L2-normalization of individual histograms. ‘off’: Disable normalization. sigmas 1D array of float, optional Standard deviation of spatial Gaussian smoothing for the center histogram and for each ring of histograms. The array of sigmas should be sorted from the center and out, i.e. the first sigma value defines the spatial smoothing of the center histogram and the last sigma value defines the spatial smoothing of the outermost ring. Specifying sigmas overrides the following parameter. rings = len(sigmas) - 1 ring_radii 1D array of int, optional Radius (in pixels) for each ring. Specifying ring_radii overrides the following two parameters. rings = len(ring_radii) radius = ring_radii[-1] If both sigmas and ring_radii are given, they must satisfy the following predicate since no radius is needed for the center histogram. len(ring_radii) == len(sigmas) + 1 visualize bool, optional Generate a visualization of the DAISY descriptors. Returns descs array Grid of DAISY descriptors for the given image as an array of dimensionality (P, Q, R) where P = ceil((M - radius*2) / step) Q = ceil((N - radius*2) / step) R = (rings * histograms + 1) * orientations descs_img (M, N, 3) array (only if visualize==True) Visualization of the DAISY descriptors. References 1 Tola et al. “Daisy: An efficient dense descriptor applied to wide-baseline stereo.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 32.5 (2010): 815-830. 2 http://cvlab.epfl.ch/software/daisy
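A typical usage pattern; the parameter values are illustrative.

from skimage import data
from skimage.feature import daisy

img = data.camera()                                   # sample grayscale image
descs, descs_img = daisy(img, step=180, radius=58, rings=2,
                         histograms=6, orientations=8, visualize=True)
# descs has shape (P, Q, R) as described above; descs_img is the visualization image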
doc_29448
'blogs.blog': lambda o: "/blogs/%s/" % o.slug, 'news.story': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug), } The model name used in this setting should be all lowercase, regardless of the case of the actual model class name. ADMINS Default: [] (Empty list) A list of all the people who get code error notifications. When DEBUG=False and AdminEmailHandler is configured in LOGGING (done by default), Django emails these people the details of exceptions raised in the request/response cycle. Each item in the list should be a tuple of (Full name, email address). Example: [('John', 'john@example.com'), ('Mary', 'mary@example.com')] ALLOWED_HOSTS Default: [] (Empty list) A list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent HTTP Host header attacks, which are possible even under many seemingly-safe web server configurations. Values in this list can be fully qualified names (e.g. 'www.example.com'), in which case they will be matched against the request’s Host header exactly (case-insensitive, not including port). A value beginning with a period can be used as a subdomain wildcard: '.example.com' will match example.com, www.example.com, and any other subdomain of example.com. A value of '*' will match anything; in this case you are responsible to provide your own validation of the Host header (perhaps in a middleware; if so this middleware must be listed first in MIDDLEWARE). Django also allows the fully qualified domain name (FQDN) of any entries. Some browsers include a trailing dot in the Host header which Django strips when performing host validation. If the Host header (or X-Forwarded-Host if USE_X_FORWARDED_HOST is enabled) does not match any value in this list, the django.http.HttpRequest.get_host() method will raise SuspiciousOperation. When DEBUG is True and ALLOWED_HOSTS is empty, the host is validated against ['.localhost', '127.0.0.1', '[::1]']. ALLOWED_HOSTS is also checked when running tests. This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection. APPEND_SLASH Default: True When set to True, if the request URL does not match any of the patterns in the URLconf and it doesn’t end in a slash, an HTTP redirect is issued to the same URL with a slash appended. Note that the redirect may cause any data submitted in a POST request to be lost. The APPEND_SLASH setting is only used if CommonMiddleware is installed (see Middleware). See also PREPEND_WWW. CACHES Default: { 'default': { 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache', } } A dictionary containing the settings for all caches to be used with Django. It is a nested dictionary whose contents maps cache aliases to a dictionary containing the options for an individual cache. The CACHES setting must configure a default cache; any number of additional caches may also be specified. If you are using a cache backend other than the local memory cache, or you need to define multiple caches, other options will be required. The following cache options are available. BACKEND Default: '' (Empty string) The cache backend to use. 
The built-in cache backends are: 'django.core.cache.backends.db.DatabaseCache' 'django.core.cache.backends.dummy.DummyCache' 'django.core.cache.backends.filebased.FileBasedCache' 'django.core.cache.backends.locmem.LocMemCache' 'django.core.cache.backends.memcached.PyMemcacheCache' 'django.core.cache.backends.memcached.PyLibMCCache' 'django.core.cache.backends.redis.RedisCache' You can use a cache backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path of a cache backend class (i.e. mypackage.backends.whatever.WhateverCache). Changed in Django 3.2: The PyMemcacheCache backend was added. Changed in Django 4.0: The RedisCache backend was added. KEY_FUNCTION A string containing a dotted path to a function (or any callable) that defines how to compose a prefix, version and key into a final cache key. The default implementation is equivalent to the function: def make_key(key, key_prefix, version): return ':'.join([key_prefix, str(version), key]) You may use any key function you want, as long as it has the same argument signature. See the cache documentation for more information. KEY_PREFIX Default: '' (Empty string) A string that will be automatically included (prepended by default) to all cache keys used by the Django server. See the cache documentation for more information. LOCATION Default: '' (Empty string) The location of the cache to use. This might be the directory for a file system cache, a host and port for a memcache server, or an identifying name for a local memory cache. e.g.: CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache', 'LOCATION': '/var/tmp/django_cache', } } OPTIONS Default: None Extra parameters to pass to the cache backend. Available parameters vary depending on your cache backend. Some information on available parameters can be found in the cache arguments documentation. For more information, consult your backend module’s own documentation. TIMEOUT Default: 300 The number of seconds before a cache entry is considered stale. If the value of this setting is None, cache entries will not expire. A value of 0 causes keys to immediately expire (effectively “don’t cache”). VERSION Default: 1 The default version number for cache keys generated by the Django server. See the cache documentation for more information. CACHE_MIDDLEWARE_ALIAS Default: 'default' The cache connection to use for the cache middleware. CACHE_MIDDLEWARE_KEY_PREFIX Default: '' (Empty string) A string which will be prefixed to the cache keys generated by the cache middleware. This prefix is combined with the KEY_PREFIX setting; it does not replace it. See Django’s cache framework. CACHE_MIDDLEWARE_SECONDS Default: 600 The default number of seconds to cache a page for the cache middleware. See Django’s cache framework. CSRF_COOKIE_AGE Default: 31449600 (approximately 1 year, in seconds) The age of CSRF cookies, in seconds. The reason for setting a long-lived expiration time is to avoid problems in the case of a user closing a browser or bookmarking a page and then loading that page from a browser cache. Without persistent cookies, the form submission would fail in this case. Some browsers (specifically Internet Explorer) can disallow the use of persistent cookies or can have the indexes to the cookie jar corrupted on disk, thereby causing CSRF protection checks to (sometimes intermittently) fail. Change this setting to None to use session-based CSRF cookies, which keep the cookies in-memory instead of on persistent storage. 
CSRF_COOKIE_DOMAIN Default: None The domain to be used when setting the CSRF cookie. This can be useful for easily allowing cross-subdomain requests to be excluded from the normal cross site request forgery protection. It should be set to a string such as ".example.com" to allow a POST request from a form on one subdomain to be accepted by a view served from another subdomain. Please note that the presence of this setting does not imply that Django’s CSRF protection is safe from cross-subdomain attacks by default - please see the CSRF limitations section. CSRF_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the CSRF cookie. If this is set to True, client-side JavaScript will not be able to access the CSRF cookie. Designating the CSRF cookie as HttpOnly doesn’t offer any practical protection because CSRF is only to protect against cross-domain attacks. If an attacker can read the cookie via JavaScript, they’re already on the same domain as far as the browser knows, so they can do anything they like anyway. (XSS is a much bigger hole than CSRF.) Although the setting offers little practical benefit, it’s sometimes required by security auditors. If you enable this and need to send the value of the CSRF token with an AJAX request, your JavaScript must pull the value from a hidden CSRF token form input instead of from the cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. CSRF_COOKIE_NAME Default: 'csrftoken' The name of the cookie to use for the CSRF authentication token. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Cross Site Request Forgery protection. CSRF_COOKIE_PATH Default: '/' The path set on the CSRF cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own CSRF cookie. CSRF_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the CSRF cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. CSRF_COOKIE_SECURE Default: False Whether to use a secure cookie for the CSRF cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent with an HTTPS connection. CSRF_USE_SESSIONS Default: False Whether to store the CSRF token in the user’s session instead of in a cookie. It requires the use of django.contrib.sessions. Storing the CSRF token in a cookie (Django’s default) is safe, but storing it in the session is common practice in other web frameworks and therefore sometimes demanded by security auditors. Since the default error views require the CSRF token, SessionMiddleware must appear in MIDDLEWARE before any middleware that may raise an exception to trigger an error view (such as PermissionDenied) if you’re using CSRF_USE_SESSIONS. See Middleware ordering. CSRF_FAILURE_VIEW Default: 'django.views.csrf.csrf_failure' A dotted path to the view function to be used when an incoming request is rejected by the CSRF protection. The function should have this signature: def csrf_failure(request, reason=""): ... where reason is a short message (intended for developers or logging, not for end users) indicating the reason the request was rejected. It should return an HttpResponseForbidden. 
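A hedged sketch of wiring up a custom failure view; the module path and template name are placeholders.

# settings.py
CSRF_FAILURE_VIEW = 'myproject.views.csrf_failure'

# myproject/views.py
from django.http import HttpResponseForbidden
from django.template.loader import render_to_string

def csrf_failure(request, reason=""):
    body = render_to_string('403_csrf.html', {'reason': reason}, request=request)
    return HttpResponseForbidden(body)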
django.views.csrf.csrf_failure() accepts an additional template_name parameter that defaults to '403_csrf.html'. If a template with that name exists, it will be used to render the page. CSRF_HEADER_NAME Default: 'HTTP_X_CSRFTOKEN' The name of the request header used for CSRF authentication. As with other HTTP headers in request.META, the header name received from the server is normalized by converting all characters to uppercase, replacing any hyphens with underscores, and adding an 'HTTP_' prefix to the name. For example, if your client sends a 'X-XSRF-TOKEN' header, the setting should be 'HTTP_X_XSRF_TOKEN'. CSRF_TRUSTED_ORIGINS Default: [] (Empty list) A list of trusted origins for unsafe requests (e.g. POST). For requests that include the Origin header, Django’s CSRF protection requires that header match the origin present in the Host header. For a secure unsafe request that doesn’t include the Origin header, the request must have a Referer header that matches the origin present in the Host header. These checks prevent, for example, a POST request from subdomain.example.com from succeeding against api.example.com. If you need cross-origin unsafe requests, continuing the example, add 'https://subdomain.example.com' to this list (and/or http://... if requests originate from an insecure page). The setting also supports subdomains, so you could add 'https://*.example.com', for example, to allow access from all subdomains of example.com. Changed in Django 4.0: The values in older versions must only include the hostname (possibly with a leading dot) and not the scheme or an asterisk. Also, Origin header checking isn’t performed in older versions. DATABASES Default: {} (Empty dictionary) A dictionary containing the settings for all databases to be used with Django. It is a nested dictionary whose contents map a database alias to a dictionary containing the options for an individual database. The DATABASES setting must configure a default database; any number of additional databases may also be specified. The simplest possible settings file is for a single-database setup using SQLite. This can be configured using the following: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'mydatabase', } } When connecting to other database backends, such as MariaDB, MySQL, Oracle, or PostgreSQL, additional connection parameters will be required. See the ENGINE setting below on how to specify other database types. This example is for PostgreSQL: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'mydatabase', 'USER': 'mydatabaseuser', 'PASSWORD': 'mypassword', 'HOST': '127.0.0.1', 'PORT': '5432', } } The following inner options that may be required for more complex configurations are available: ATOMIC_REQUESTS Default: False Set this to True to wrap each view in a transaction on this database. See Tying transactions to HTTP requests. AUTOCOMMIT Default: True Set this to False if you want to disable Django’s transaction management and implement your own. ENGINE Default: '' (Empty string) The database backend to use. The built-in database backends are: 'django.db.backends.postgresql' 'django.db.backends.mysql' 'django.db.backends.sqlite3' 'django.db.backends.oracle' You can use a database backend that doesn’t ship with Django by setting ENGINE to a fully-qualified path (i.e. mypackage.backends.whatever). HOST Default: '' (Empty string) Which host to use when connecting to the database. An empty string means localhost. Not used with SQLite. 
If this value starts with a forward slash ('/') and you’re using MySQL, MySQL will connect via a Unix socket to the specified socket. For example: "HOST": '/var/run/mysql' If you’re using MySQL and this value doesn’t start with a forward slash, then this value is assumed to be the host. If you’re using PostgreSQL, by default (empty HOST), the connection to the database is done through UNIX domain sockets (‘local’ lines in pg_hba.conf). If your UNIX domain socket is not in the standard location, use the same value of unix_socket_directory from postgresql.conf. If you want to connect through TCP sockets, set HOST to ‘localhost’ or ‘127.0.0.1’ (‘host’ lines in pg_hba.conf). On Windows, you should always define HOST, as UNIX domain sockets are not available. NAME Default: '' (Empty string) The name of the database to use. For SQLite, it’s the full path to the database file. When specifying the path, always use forward slashes, even on Windows (e.g. C:/homes/user/mysite/sqlite3.db). CONN_MAX_AGE Default: 0 The lifetime of a database connection, as an integer of seconds. Use 0 to close database connections at the end of each request — Django’s historical behavior — and None for unlimited persistent connections. OPTIONS Default: {} (Empty dictionary) Extra parameters to use when connecting to the database. Available parameters vary depending on your database backend. Some information on available parameters can be found in the Database Backends documentation. For more information, consult your backend module’s own documentation. PASSWORD Default: '' (Empty string) The password to use when connecting to the database. Not used with SQLite. PORT Default: '' (Empty string) The port to use when connecting to the database. An empty string means the default port. Not used with SQLite. TIME_ZONE Default: None A string representing the time zone for this database connection or None. This inner option of the DATABASES setting accepts the same values as the general TIME_ZONE setting. When USE_TZ is True and this option is set, reading datetimes from the database returns aware datetimes in this time zone instead of UTC. When USE_TZ is False, it is an error to set this option. If the database backend doesn’t support time zones (e.g. SQLite, MySQL, Oracle), Django reads and writes datetimes in local time according to this option if it is set and in UTC if it isn’t. Changing the connection time zone changes how datetimes are read from and written to the database. If Django manages the database and you don’t have a strong reason to do otherwise, you should leave this option unset. It’s best to store datetimes in UTC because it avoids ambiguous or nonexistent datetimes during daylight saving time changes. Also, receiving datetimes in UTC keeps datetime arithmetic simple — there’s no need to consider potential offset changes over a DST transition. If you’re connecting to a third-party database that stores datetimes in a local time rather than UTC, then you must set this option to the appropriate time zone. Likewise, if Django manages the database but third-party systems connect to the same database and expect to find datetimes in local time, then you must set this option. If the database backend supports time zones (e.g. PostgreSQL), the TIME_ZONE option is very rarely needed. It can be changed at any time; the database takes care of converting datetimes to the desired time zone. 
Setting the time zone of the database connection may be useful for running raw SQL queries involving date/time functions provided by the database, such as date_trunc, because their results depend on the time zone. However, this has a downside: receiving all datetimes in local time makes datetime arithmetic more tricky — you must account for possible offset changes over DST transitions. Consider converting to local time explicitly with AT TIME ZONE in raw SQL queries instead of setting the TIME_ZONE option. DISABLE_SERVER_SIDE_CURSORS Default: False Set this to True if you want to disable the use of server-side cursors with QuerySet.iterator(). Transaction pooling and server-side cursors describes the use case. This is a PostgreSQL-specific setting. USER Default: '' (Empty string) The username to use when connecting to the database. Not used with SQLite. TEST Default: {} (Empty dictionary) A dictionary of settings for test databases; for more details about the creation and use of test databases, see The test database. Here’s an example with a test database configuration: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'USER': 'mydatabaseuser', 'NAME': 'mydatabase', 'TEST': { 'NAME': 'mytestdatabase', }, }, } The following keys in the TEST dictionary are available: CHARSET Default: None The character set encoding used to create the test database. The value of this string is passed directly through to the database, so its format is backend-specific. Supported by the PostgreSQL (postgresql) and MySQL (mysql) backends. COLLATION Default: None The collation order to use when creating the test database. This value is passed directly to the backend, so its format is backend-specific. Only supported for the mysql backend (see the MySQL manual for details). DEPENDENCIES Default: ['default'], for all databases other than default, which has no dependencies. The creation-order dependencies of the database. See the documentation on controlling the creation order of test databases for details. MIGRATE Default: True When set to False, migrations won’t run when creating the test database. This is similar to setting None as a value in MIGRATION_MODULES, but for all apps. MIRROR Default: None The alias of the database that this database should mirror during testing. This setting exists to allow for testing of primary/replica (referred to as master/slave by some databases) configurations of multiple databases. See the documentation on testing primary/replica configurations for details. NAME Default: None The name of database to use when running the test suite. If the default value (None) is used with the SQLite database engine, the tests will use a memory resident database. For all other database engines the test database will use the name 'test_' + DATABASE_NAME. See The test database. SERIALIZE Boolean value to control whether or not the default test runner serializes the database into an in-memory JSON string before running tests (used to restore the database state between tests if you don’t have transactions). You can set this to False to speed up creation time if you don’t have any test classes with serialized_rollback=True. Deprecated since version 4.0: This setting is deprecated as it can be inferred from the databases with the serialized_rollback option enabled. TEMPLATE This is a PostgreSQL-specific setting. The name of a template (e.g. 'template0') from which to create the test database. CREATE_DB Default: True This is an Oracle-specific setting. 
If it is set to False, the test tablespaces won’t be automatically created at the beginning of the tests or dropped at the end. CREATE_USER Default: True This is an Oracle-specific setting. If it is set to False, the test user won’t be automatically created at the beginning of the tests and dropped at the end. USER Default: None This is an Oracle-specific setting. The username to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will use 'test_' + USER. PASSWORD Default: None This is an Oracle-specific setting. The password to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will generate a random password. ORACLE_MANAGED_FILES Default: False This is an Oracle-specific setting. If set to True, Oracle Managed Files (OMF) tablespaces will be used. DATAFILE and DATAFILE_TMP will be ignored. TBLSPACE Default: None This is an Oracle-specific setting. The name of the tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER. TBLSPACE_TMP Default: None This is an Oracle-specific setting. The name of the temporary tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER + '_temp'. DATAFILE Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE. If not provided, Django will use TBLSPACE + '.dbf'. DATAFILE_TMP Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE_TMP. If not provided, Django will use TBLSPACE_TMP + '.dbf'. DATAFILE_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE is allowed to grow to. DATAFILE_TMP_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE_TMP is allowed to grow to. DATAFILE_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE. DATAFILE_TMP_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE_TMP. DATAFILE_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE is extended when more space is required. DATAFILE_TMP_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE_TMP is extended when more space is required. DATA_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size in bytes that a request body may be before a SuspiciousOperation (RequestDataTooBig) is raised. The check is done when accessing request.body or request.POST and is calculated against the total request size excluding any file upload data. You can set this to None to disable the check. Applications that are expected to receive unusually large form posts should tune this setting. The amount of request data is correlated to the amount of memory needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. See also FILE_UPLOAD_MAX_MEMORY_SIZE. DATA_UPLOAD_MAX_NUMBER_FIELDS Default: 1000 The maximum number of parameters that may be received via GET or POST before a SuspiciousOperation (TooManyFields) is raised. You can set this to None to disable the check. Applications that are expected to receive an unusually large number of form fields should tune this setting. 
The number of request parameters is correlated to the amount of time needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. DATABASE_ROUTERS Default: [] (Empty list) The list of routers that will be used to determine which database to use when performing a database query. See the documentation on automatic database routing in multi database configurations. DATE_FORMAT Default: 'N j, Y' (e.g. Feb. 4, 2003) The default formatting to use for displaying date fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATETIME_FORMAT, TIME_FORMAT and SHORT_DATE_FORMAT. DATE_INPUT_FORMATS Default: [ '%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y', # '2006-10-25', '10/25/2006', '10/25/06' '%b %d %Y', '%b %d, %Y', # 'Oct 25 2006', 'Oct 25, 2006' '%d %b %Y', '%d %b, %Y', # '25 Oct 2006', '25 Oct, 2006' '%B %d %Y', '%B %d, %Y', # 'October 25 2006', 'October 25, 2006' '%d %B %Y', '%d %B, %Y', # '25 October 2006', '25 October, 2006' ] A list of formats that will be accepted when inputting data on a date field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATETIME_INPUT_FORMATS and TIME_INPUT_FORMATS. DATETIME_FORMAT Default: 'N j, Y, P' (e.g. Feb. 4, 2003, 4 p.m.) The default formatting to use for displaying datetime fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT, TIME_FORMAT and SHORT_DATETIME_FORMAT. DATETIME_INPUT_FORMATS Default: [ '%Y-%m-%d %H:%M:%S', # '2006-10-25 14:30:59' '%Y-%m-%d %H:%M:%S.%f', # '2006-10-25 14:30:59.000200' '%Y-%m-%d %H:%M', # '2006-10-25 14:30' '%m/%d/%Y %H:%M:%S', # '10/25/2006 14:30:59' '%m/%d/%Y %H:%M:%S.%f', # '10/25/2006 14:30:59.000200' '%m/%d/%Y %H:%M', # '10/25/2006 14:30' '%m/%d/%y %H:%M:%S', # '10/25/06 14:30:59' '%m/%d/%y %H:%M:%S.%f', # '10/25/06 14:30:59.000200' '%m/%d/%y %H:%M', # '10/25/06 14:30' ] A list of formats that will be accepted when inputting data on a datetime field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. Date-only formats are not included as datetime fields will automatically try DATE_INPUT_FORMATS in last resort. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and TIME_INPUT_FORMATS. DEBUG Default: False A boolean that turns on/off debug mode. Never deploy a site into production with DEBUG turned on. One of the main features of debug mode is the display of detailed error pages. If your app raises an exception when DEBUG is True, Django will display a detailed traceback, including a lot of metadata about your environment, such as all the currently defined Django settings (from settings.py). 
As a security measure, Django will not include settings that might be sensitive, such as SECRET_KEY. Specifically, it will exclude any setting whose name includes any of the following: 'API' 'KEY' 'PASS' 'SECRET' 'SIGNATURE' 'TOKEN' Note that these are partial matches. 'PASS' will also match PASSWORD, just as 'TOKEN' will also match TOKENIZED and so on. Still, note that there are always going to be sections of your debug output that are inappropriate for public consumption. File paths, configuration options and the like all give attackers extra information about your server. It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server. Finally, if DEBUG is False, you also need to properly set the ALLOWED_HOSTS setting. Failing to do so will result in all requests being returned as “Bad Request (400)”. Note The default settings.py file created by django-admin startproject sets DEBUG = True for convenience. DEBUG_PROPAGATE_EXCEPTIONS Default: False If set to True, Django’s exception handling of view functions (handler500, or the debug view if DEBUG is True) and logging of 500 responses (django.request) is skipped and exceptions propagate upward. This can be useful for some test setups. It shouldn’t be used on a live site unless you want your web server (instead of Django) to generate “Internal Server Error” responses. In that case, make sure your server doesn’t show the stack trace or other sensitive information in the response. DECIMAL_SEPARATOR Default: '.' (Dot) Default decimal separator used when formatting decimal numbers. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. DEFAULT_AUTO_FIELD New in Django 3.2. Default: 'django.db.models.AutoField' Default primary key field type to use for models that don’t have a field with primary_key=True. Migrating auto-created through tables The value of DEFAULT_AUTO_FIELD will be respected when creating new auto-created through tables for many-to-many relationships. Unfortunately, the primary keys of existing auto-created through tables cannot currently be updated by the migrations framework. This means that if you switch the value of DEFAULT_AUTO_FIELD and then generate migrations, the primary keys of the related models will be updated, as will the foreign keys from the through table, but the primary key of the auto-created through table will not be migrated. In order to address this, you should add a RunSQL operation to your migrations to perform the required ALTER TABLE step. You can check the existing table name through sqlmigrate, dbshell, or with the field’s remote_field.through._meta.db_table property. Explicitly defined through models are already handled by the migrations system. Allowing automatic migrations for the primary key of existing auto-created through tables may be implemented at a later date. DEFAULT_CHARSET Default: 'utf-8' Default charset to use for all HttpResponse objects, if a MIME type isn’t manually specified. Used when constructing the Content-Type header. DEFAULT_EXCEPTION_REPORTER Default: 'django.views.debug.ExceptionReporter' Default exception reporter class to be used if none has been assigned to the HttpRequest instance yet. See Custom error reports. 
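For the DEFAULT_AUTO_FIELD setting described above, an illustrative settings.py line opting the whole project into 64-bit auto-created primary keys:

DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'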
DEFAULT_EXCEPTION_REPORTER_FILTER Default: 'django.views.debug.SafeExceptionReporterFilter' Default exception reporter filter class to be used if none has been assigned to the HttpRequest instance yet. See Filtering error reports. DEFAULT_FILE_STORAGE Default: 'django.core.files.storage.FileSystemStorage' Default file storage class to be used for any file-related operations that don’t specify a particular storage system. See Managing files. DEFAULT_FROM_EMAIL Default: 'webmaster@localhost' Default email address to use for various automated correspondence from the site manager(s). This doesn’t include error messages sent to ADMINS and MANAGERS; for that, see SERVER_EMAIL. DEFAULT_INDEX_TABLESPACE Default: '' (Empty string) Default tablespace to use for indexes on fields that don’t specify one, if the backend supports it (see Tablespaces). DEFAULT_TABLESPACE Default: '' (Empty string) Default tablespace to use for models that don’t specify one, if the backend supports it (see Tablespaces). DISALLOWED_USER_AGENTS Default: [] (Empty list) List of compiled regular expression objects representing User-Agent strings that are not allowed to visit any page, systemwide. Use this for bots/crawlers. This is only used if CommonMiddleware is installed (see Middleware). EMAIL_BACKEND Default: 'django.core.mail.backends.smtp.EmailBackend' The backend to use for sending emails. For the list of available backends see Sending email. EMAIL_FILE_PATH Default: Not defined The directory used by the file email backend to store output files. EMAIL_HOST Default: 'localhost' The host to use for sending email. See also EMAIL_PORT. EMAIL_HOST_PASSWORD Default: '' (Empty string) Password to use for the SMTP server defined in EMAIL_HOST. This setting is used in conjunction with EMAIL_HOST_USER when authenticating to the SMTP server. If either of these settings is empty, Django won’t attempt authentication. See also EMAIL_HOST_USER. EMAIL_HOST_USER Default: '' (Empty string) Username to use for the SMTP server defined in EMAIL_HOST. If empty, Django won’t attempt authentication. See also EMAIL_HOST_PASSWORD. EMAIL_PORT Default: 25 Port to use for the SMTP server defined in EMAIL_HOST. EMAIL_SUBJECT_PREFIX Default: '[Django] ' Subject-line prefix for email messages sent with django.core.mail.mail_admins or django.core.mail.mail_managers. You’ll probably want to include the trailing space. EMAIL_USE_LOCALTIME Default: False Whether to send the SMTP Date header of email messages in the local time zone (True) or in UTC (False). EMAIL_USE_TLS Default: False Whether to use a TLS (secure) connection when talking to the SMTP server. This is used for explicit TLS connections, generally on port 587. If you are experiencing hanging connections, see the implicit TLS setting EMAIL_USE_SSL. EMAIL_USE_SSL Default: False Whether to use an implicit TLS (secure) connection when talking to the SMTP server. In most email documentation this type of TLS connection is referred to as SSL. It is generally used on port 465. If you are experiencing problems, see the explicit TLS setting EMAIL_USE_TLS. Note that EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True. EMAIL_SSL_CERTFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted certificate chain file to use for the SSL connection. 
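The email settings above combine roughly as follows for an explicit-TLS SMTP setup; the host and credentials are placeholders.

EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = 'mailer@example.com'
EMAIL_HOST_PASSWORD = 'app-specific-password'
DEFAULT_FROM_EMAIL = 'webmaster@example.com'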
EMAIL_SSL_KEYFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted private key file to use for the SSL connection. Note that setting EMAIL_SSL_CERTFILE and EMAIL_SSL_KEYFILE doesn’t result in any certificate checking. They’re passed to the underlying SSL connection. Please refer to the documentation of Python’s ssl.wrap_socket() function for details on how the certificate chain file and private key file are handled. EMAIL_TIMEOUT Default: None Specifies a timeout in seconds for blocking operations like the connection attempt. FILE_UPLOAD_HANDLERS Default: [ 'django.core.files.uploadhandler.MemoryFileUploadHandler', 'django.core.files.uploadhandler.TemporaryFileUploadHandler', ] A list of handlers to use for uploading. Changing this setting allows complete customization – even replacement – of Django’s upload process. See Managing files for details. FILE_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size (in bytes) that an upload will be before it gets streamed to the file system. See Managing files for details. See also DATA_UPLOAD_MAX_MEMORY_SIZE. FILE_UPLOAD_DIRECTORY_PERMISSIONS Default: None The numeric mode to apply to directories created in the process of uploading files. This setting also determines the default permissions for collected static directories when using the collectstatic management command. See collectstatic for details on overriding it. This value mirrors the functionality and caveats of the FILE_UPLOAD_PERMISSIONS setting. FILE_UPLOAD_PERMISSIONS Default: 0o644 The numeric mode (i.e. 0o644) to set newly uploaded files to. For more information about what these modes mean, see the documentation for os.chmod(). If None, you’ll get operating-system dependent behavior. On most platforms, temporary files will have a mode of 0o600, and files saved from memory will be saved using the system’s standard umask. For security reasons, these permissions aren’t applied to the temporary files that are stored in FILE_UPLOAD_TEMP_DIR. This setting also determines the default permissions for collected static files when using the collectstatic management command. See collectstatic for details on overriding it. Warning Always prefix the mode with 0o . If you’re not familiar with file modes, please note that the 0o prefix is very important: it indicates an octal number, which is the way that modes must be specified. If you try to use 644, you’ll get totally incorrect behavior. FILE_UPLOAD_TEMP_DIR Default: None The directory to store data to (typically files larger than FILE_UPLOAD_MAX_MEMORY_SIZE) temporarily while uploading files. If None, Django will use the standard temporary directory for the operating system. For example, this will default to /tmp on *nix-style operating systems. See Managing files for details. FIRST_DAY_OF_WEEK Default: 0 (Sunday) A number representing the first day of the week. This is especially useful when displaying a calendar. This value is only used when not using format internationalization, or when a format cannot be found for the current locale. The value must be an integer from 0 to 6, where 0 means Sunday, 1 means Monday and so on. FIXTURE_DIRS Default: [] (Empty list) List of directories searched for fixture files, in addition to the fixtures directory of each application, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. See Providing data with fixtures and Fixture loading. 
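Illustrative values for the upload and fixture settings just described:

FILE_UPLOAD_MAX_MEMORY_SIZE = 5 * 1024 * 1024     # stream uploads above 5 MB to disk
FILE_UPLOAD_PERMISSIONS = 0o640                   # note the mandatory 0o octal prefix
FILE_UPLOAD_TEMP_DIR = '/var/tmp/django_uploads'  # placeholder path
FIXTURE_DIRS = ['/home/www/project/fixtures']     # placeholder path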
FORCE_SCRIPT_NAME Default: None If not None, this will be used as the value of the SCRIPT_NAME environment variable in any HTTP request. This setting can be used to override the server-provided value of SCRIPT_NAME, which may be a rewritten version of the preferred value or not supplied at all. It is also used by django.setup() to set the URL resolver script prefix outside of the request/response cycle (e.g. in management commands and standalone scripts) to generate correct URLs when SCRIPT_NAME is not /. FORM_RENDERER Default: 'django.forms.renderers.DjangoTemplates' The class that renders forms and form widgets. It must implement the low-level render API. Included form renderers are: 'django.forms.renderers.DjangoTemplates' 'django.forms.renderers.Jinja2' FORMAT_MODULE_PATH Default: None A full Python path to a Python package that contains custom format definitions for project locales. If not None, Django will check for a formats.py file, under the directory named as the current locale, and will use the formats defined in this file. For example, if FORMAT_MODULE_PATH is set to mysite.formats, and current language is en (English), Django will expect a directory tree like: mysite/ formats/ __init__.py en/ __init__.py formats.py You can also set this setting to a list of Python paths, for example: FORMAT_MODULE_PATH = [ 'mysite.formats', 'some_app.formats', ] When Django searches for a certain format, it will go through all given Python paths until it finds a module that actually defines the given format. This means that formats defined in packages farther up in the list will take precedence over the same formats in packages farther down. Available formats are: DATE_FORMAT DATE_INPUT_FORMATS DATETIME_FORMAT, DATETIME_INPUT_FORMATS DECIMAL_SEPARATOR FIRST_DAY_OF_WEEK MONTH_DAY_FORMAT NUMBER_GROUPING SHORT_DATE_FORMAT SHORT_DATETIME_FORMAT THOUSAND_SEPARATOR TIME_FORMAT TIME_INPUT_FORMATS YEAR_MONTH_FORMAT IGNORABLE_404_URLS Default: [] (Empty list) List of compiled regular expression objects describing URLs that should be ignored when reporting HTTP 404 errors via email (see How to manage error reporting). Regular expressions are matched against request's full paths (including query string, if any). Use this if your site does not provide a commonly requested file such as favicon.ico or robots.txt. This is only used if BrokenLinkEmailsMiddleware is enabled (see Middleware). INSTALLED_APPS Default: [] (Empty list) A list of strings designating all applications that are enabled in this Django installation. Each string should be a dotted Python path to: an application configuration class (preferred), or a package containing an application. Learn more about application configurations. Use the application registry for introspection Your code should never access INSTALLED_APPS directly. Use django.apps.apps instead. Application names and labels must be unique in INSTALLED_APPS Application names — the dotted Python path to the application package — must be unique. There is no way to include the same application twice, short of duplicating its code under another name. Application labels — by default the final part of the name — must be unique too. For example, you can’t include both django.contrib.auth and myproject.auth. However, you can relabel an application with a custom configuration that defines a different label. These rules apply regardless of whether INSTALLED_APPS references application configuration classes or application packages. 
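A typical INSTALLED_APPS list for illustration; the last entry is a hypothetical project app referenced by its application configuration class. See the precedence note that follows.

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'blog.apps.BlogConfig',
]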
When several applications provide different versions of the same resource (template, static file, management command, translation), the application listed first in INSTALLED_APPS has precedence. INTERNAL_IPS Default: [] (Empty list) A list of IP addresses, as strings, that: Allow the debug() context processor to add some variables to the template context. Can use the admindocs bookmarklets even if not logged in as a staff user. Are marked as “internal” (as opposed to “EXTERNAL”) in AdminEmailHandler emails. LANGUAGE_CODE Default: 'en-us' A string representing the language code for this installation. This should be in standard language ID format. For example, U.S. English is "en-us". See also the list of language identifiers and Internationalization and localization. USE_I18N must be active for this setting to have any effect. It serves two purposes: If the locale middleware isn’t in use, it decides which translation is served to all users. If the locale middleware is active, it provides a fallback language in case the user’s preferred language can’t be determined or is not supported by the website. It also provides the fallback translation when a translation for a given literal doesn’t exist for the user’s preferred language. See How Django discovers language preference for more details. LANGUAGE_COOKIE_AGE Default: None (expires at browser close) The age of the language cookie, in seconds. LANGUAGE_COOKIE_DOMAIN Default: None The domain to use for the language cookie. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. Be cautious when updating this setting on a production site. If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies that have the old domain will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting) and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the language cookie. If this is set to True, client-side JavaScript will not be able to access the language cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. LANGUAGE_COOKIE_NAME Default: 'django_language' The name of the cookie to use for the language cookie. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Internationalization and localization. LANGUAGE_COOKIE_PATH Default: '/' The path set on the language cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths and each instance will only see its own language cookie. Be cautious when updating this setting on a production site. If you update this setting to use a deeper path than it previously used, existing user cookies that have the old path will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. 
The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting), and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_SAMESITE Default: None The value of the SameSite flag on the language cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. LANGUAGE_COOKIE_SECURE Default: False Whether to use a secure cookie for the language cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. LANGUAGES Default: A list of all available languages. This list is continually growing and including a copy here would inevitably become rapidly out of date. You can see the current list of translated languages by looking in django/conf/global_settings.py. The list is a list of two-tuples in the format (language code, language name) – for example, ('ja', 'Japanese'). This specifies which languages are available for language selection. See Internationalization and localization. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, you can mark the language names as translation strings using the gettext_lazy() function. Here’s a sample settings file: from django.utils.translation import gettext_lazy as _ LANGUAGES = [ ('de', _('German')), ('en', _('English')), ] LANGUAGES_BIDI Default: A list of all language codes that are written right-to-left. You can see the current list of these languages by looking in django/conf/global_settings.py. The list contains language codes for languages that are written right-to-left. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, the list of bidirectional languages may contain language codes which are not enabled on a given site. LOCALE_PATHS Default: [] (Empty list) A list of directories where Django looks for translation files. See How Django discovers translations. Example: LOCALE_PATHS = [ '/home/www/project/common_files/locale', '/var/local/translations/locale', ] Django will look within each of these paths for the <locale_code>/LC_MESSAGES directories containing the actual translation files. LOGGING Default: A logging configuration dictionary. A data structure containing configuration information. The contents of this data structure will be passed as the argument to the configuration method described in LOGGING_CONFIG. Among other things, the default logging configuration passes HTTP 500 server errors to an email log handler when DEBUG is False. See also Configuring logging. You can see the default logging configuration by looking in django/utils/log.py. LOGGING_CONFIG Default: 'logging.config.dictConfig' A path to a callable that will be used to configure logging in the Django project. Points at an instance of Python’s dictConfig configuration method by default. If you set LOGGING_CONFIG to None, the logging configuration process will be skipped. MANAGERS Default: [] (Empty list) A list in the same format as ADMINS that specifies who should get broken link notifications when BrokenLinkEmailsMiddleware is enabled.
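A minimal, hedged LOGGING sketch in the dictConfig format expected by the default LOGGING_CONFIG; it simply sends INFO-and-above records to the console.

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
}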
MEDIA_ROOT Default: '' (Empty string) Absolute filesystem path to the directory that will hold user-uploaded files. Example: "/var/www/example.com/media/" See also MEDIA_URL. Warning MEDIA_ROOT and STATIC_ROOT must have different values. Before STATIC_ROOT was introduced, it was common to rely or fallback on MEDIA_ROOT to also serve static files; however, since this can have serious security implications, there is a validation check to prevent it. MEDIA_URL Default: '' (Empty string) URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set to a non-empty value. You will need to configure these files to be served in both development and production environments. If you want to use {{ MEDIA_URL }} in your templates, add 'django.template.context_processors.media' in the 'context_processors' option of TEMPLATES. Example: "http://media.example.com/" Warning There are security risks if you are accepting uploaded content from untrusted users! See the security guide’s topic on User-uploaded content for mitigation details. Warning MEDIA_URL and STATIC_URL must have different values. See MEDIA_ROOT for more details. Note If MEDIA_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. MIDDLEWARE Default: None A list of middleware to use. See Middleware. MIGRATION_MODULES Default: {} (Empty dictionary) A dictionary specifying the package where migration modules can be found on a per-app basis. The default value of this setting is an empty dictionary, but the default package name for migration modules is migrations. Example: {'blog': 'blog.db_migrations'} In this case, migrations pertaining to the blog app will be contained in the blog.db_migrations package. If you provide the app_label argument, makemigrations will automatically create the package if it doesn’t already exist. When you supply None as a value for an app, Django will consider the app as an app without migrations regardless of an existing migrations submodule. This can be used, for example, in a test settings file to skip migrations while testing (tables will still be created for the apps’ models). To disable migrations for all apps during tests, you can set the MIGRATE to False instead. If MIGRATION_MODULES is used in your general project settings, remember to use the migrate --run-syncdb option if you want to create tables for the app. MONTH_DAY_FORMAT Default: 'F j' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the month and day are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given day displays the day and month. Different locales have different formats. For example, U.S. English would say “January 1,” whereas Spanish might say “1 Enero.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and YEAR_MONTH_FORMAT. NUMBER_GROUPING Default: 0 Number of digits grouped together on the integer part of a number. Common use is to display a thousand separator. If this setting is 0, then no grouping will be applied to the number. 
If this setting is greater than 0, then THOUSAND_SEPARATOR will be used as the separator between those groups. Some locales use non-uniform digit grouping, e.g. 10,00,00,000 in en_IN. For this case, you can provide a sequence with the number of digit group sizes to be applied. The first number defines the size of the group preceding the decimal delimiter, and each number that follows defines the size of preceding groups. If the sequence is terminated with -1, no further grouping is performed. If the sequence terminates with a 0, the last group size is used for the remainder of the number. Example tuple for en_IN: NUMBER_GROUPING = (3, 2, 0) Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also DECIMAL_SEPARATOR, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. PREPEND_WWW Default: False Whether to prepend the “www.” subdomain to URLs that don’t have it. This is only used if CommonMiddleware is installed (see Middleware). See also APPEND_SLASH. ROOT_URLCONF Default: Not defined A string representing the full Python import path to your root URLconf, for example "mydjangoapps.urls". Can be overridden on a per-request basis by setting the attribute urlconf on the incoming HttpRequest object. See How Django processes a request for details. SECRET_KEY Default: '' (Empty string) A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value. django-admin startproject automatically adds a randomly-generated SECRET_KEY to each new project. Uses of the key shouldn’t assume that it’s text or bytes. Every use should go through force_str() or force_bytes() to convert it to the desired type. Django will refuse to start if SECRET_KEY is not set. Warning Keep this value secret. Running Django with a known SECRET_KEY defeats many of Django’s security protections, and can lead to privilege escalation and remote code execution vulnerabilities. The secret key is used for: All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash(). All messages if you are using CookieStorage or FallbackStorage. All PasswordResetView tokens. Any usage of cryptographic signing, unless a different key is provided. If you rotate your secret key, all of the above will be invalidated. Secret keys are not used for passwords of users and key rotation will not affect them. Note The default settings.py file created by django-admin startproject creates a unique SECRET_KEY for convenience. SECURE_CONTENT_TYPE_NOSNIFF Default: True If True, the SecurityMiddleware sets the X-Content-Type-Options: nosniff header on all responses that do not already have it. SECURE_CROSS_ORIGIN_OPENER_POLICY New in Django 4.0. Default: 'same-origin' Unless set to None, the SecurityMiddleware sets the Cross-Origin Opener Policy header on all responses that do not already have it to the value provided. SECURE_HSTS_INCLUDE_SUBDOMAINS Default: False If True, the SecurityMiddleware adds the includeSubDomains directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. Warning Setting this incorrectly can irreversibly (for the value of SECURE_HSTS_SECONDS) break your site. Read the HTTP Strict Transport Security documentation first. 
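For the SECRET_KEY setting described above, a hedged pattern for keeping the key out of source control: read it from the environment in production and fall back to a freshly generated key only for local development (the environment variable name is a placeholder).

import os
from django.core.management.utils import get_random_secret_key

SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', get_random_secret_key())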
SECURE_HSTS_PRELOAD Default: False If True, the SecurityMiddleware adds the preload directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. SECURE_HSTS_SECONDS Default: 0 If set to a non-zero integer value, the SecurityMiddleware sets the HTTP Strict Transport Security header on all responses that do not already have it. Warning Setting this incorrectly can irreversibly (for some time) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_PROXY_SSL_HEADER Default: None A tuple representing an HTTP header/value combination that signifies a request is secure. This controls the behavior of the request object’s is_secure() method. By default, is_secure() determines if a request is secure by confirming that a requested URL uses https://. This method is important for Django’s CSRF protection, and it may be used by your own code or third-party apps. If your Django app is behind a proxy, though, the proxy may be “swallowing” whether the original request uses HTTPS or not. If there is a non-HTTPS connection between the proxy and Django then is_secure() would always return False – even for requests that were made via HTTPS by the end user. In contrast, if there is an HTTPS connection between the proxy and Django then is_secure() would always return True – even for requests that were made originally via HTTP. In this situation, configure your proxy to set a custom HTTP header that tells Django whether the request came in via HTTPS, and set SECURE_PROXY_SSL_HEADER so that Django knows what header to look for. Set a tuple with two elements – the name of the header to look for and the required value. For example: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') This tells Django to trust the X-Forwarded-Proto header that comes from our proxy, and any time its value is 'https', then the request is guaranteed to be secure (i.e., it originally came in via HTTPS). You should only set this setting if you control your proxy or have some other guarantee that it sets/strips this header appropriately. Note that the header needs to be in the format as used by request.META – all caps and likely starting with HTTP_. (Remember, Django automatically adds 'HTTP_' to the start of x-header names before making the header available in request.META.) Warning Modifying this setting can compromise your site’s security. Ensure you fully understand your setup before changing it. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your Django app is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to Django, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None and find another way of determining HTTPS, perhaps via custom middleware. SECURE_REDIRECT_EXEMPT Default: [] (Empty list) If a URL path matches a regular expression in this list, the request will not be redirected to HTTPS. The SecurityMiddleware strips leading slashes from URL paths, so patterns shouldn’t include them, e.g. SECURE_REDIRECT_EXEMPT = [r'^no-ssl/$', …]. If SECURE_SSL_REDIRECT is False, this setting has no effect. 
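Pulling the transport-security settings above together, a hedged settings.py sketch might look as follows. The one-hour HSTS value, the exempt path and the header name are illustrative choices only, and SECURE_SSL_REDIRECT (described below) is usually enabled alongside them.

# Only meaningful once the whole site is reliably served over HTTPS.
SECURE_HSTS_SECONDS = 3600                 # start small, then increase
SECURE_HSTS_INCLUDE_SUBDOMAINS = True      # only if every subdomain uses HTTPS
SECURE_HSTS_PRELOAD = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")  # only behind a trusted proxy
SECURE_REDIRECT_EXEMPT = [r"^healthz/$"]   # hypothetical health-check path left on plain HTTP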
SECURE_REFERRER_POLICY Default: 'same-origin' If configured, the SecurityMiddleware sets the Referrer Policy header on all responses that do not already have it to the value provided. SECURE_SSL_HOST Default: None If a string (e.g. secure.example.com), all SSL redirects will be directed to this host rather than the originally-requested host (e.g. www.example.com). If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_SSL_REDIRECT Default: False If True, the SecurityMiddleware redirects all non-HTTPS requests to HTTPS (except for those URLs matching a regular expression listed in SECURE_REDIRECT_EXEMPT). Note If turning this to True causes infinite redirects, it probably means your site is running behind a proxy and can’t tell which requests are secure and which are not. Your proxy likely sets a header to indicate secure requests; you can correct the problem by finding out what that header is and configuring the SECURE_PROXY_SSL_HEADER setting accordingly. SERIALIZATION_MODULES Default: Not defined A dictionary of modules containing serializer definitions (provided as strings), keyed by a string identifier for that serialization type. For example, to define a YAML serializer, use: SERIALIZATION_MODULES = {'yaml': 'path.to.yaml_serializer'} SERVER_EMAIL Default: 'root@localhost' The email address that error messages come from, such as those sent to ADMINS and MANAGERS. Why are my emails sent from a different address? This address is used only for error messages. It is not the address that regular email messages sent with send_mail() come from; for that, see DEFAULT_FROM_EMAIL. SHORT_DATE_FORMAT Default: 'm/d/Y' (e.g. 12/31/2003) An available formatting that can be used for displaying date fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATETIME_FORMAT. SHORT_DATETIME_FORMAT Default: 'm/d/Y P' (e.g. 12/31/2003 4 p.m.) An available formatting that can be used for displaying datetime fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATE_FORMAT. SIGNING_BACKEND Default: 'django.core.signing.TimestampSigner' The backend used for signing cookies and other data. See also the Cryptographic signing documentation. SILENCED_SYSTEM_CHECKS Default: [] (Empty list) A list of identifiers of messages generated by the system check framework (i.e. ["models.W001"]) that you wish to permanently acknowledge and ignore. Silenced checks will not be output to the console. See also the System check framework documentation. TEMPLATES Default: [] (Empty list) A list containing the settings for all template engines to be used with Django. Each item of the list is a dictionary containing the options for an individual engine. Here’s a setup that tells the Django template engine to load templates from the templates subdirectory inside each installed application: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'APP_DIRS': True, }, ] The following options are available for all backends. BACKEND Default: Not defined The template backend to use. 
The built-in template backends are: 'django.template.backends.django.DjangoTemplates' 'django.template.backends.jinja2.Jinja2' You can use a template backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path (i.e. 'mypackage.whatever.Backend'). NAME Default: see below The alias for this particular template engine. It’s an identifier that allows selecting an engine for rendering. Aliases must be unique across all configured template engines. It defaults to the name of the module defining the engine class, i.e. the next to last piece of BACKEND, when it isn’t provided. For example if the backend is 'mypackage.whatever.Backend' then its default name is 'whatever'. DIRS Default: [] (Empty list) Directories where the engine should look for template source files, in search order. APP_DIRS Default: False Whether the engine should look for template source files inside installed applications. Note The default settings.py file created by django-admin startproject sets 'APP_DIRS': True. OPTIONS Default: {} (Empty dict) Extra parameters to pass to the template backend. Available parameters vary depending on the template backend. See DjangoTemplates and Jinja2 for the options of the built-in backends. TEST_RUNNER Default: 'django.test.runner.DiscoverRunner' The name of the class to use for starting the test suite. See Using different testing frameworks. TEST_NON_SERIALIZED_APPS Default: [] (Empty list) In order to restore the database state between tests for TransactionTestCases and database backends without transactions, Django will serialize the contents of all apps when it starts the test run so it can then reload from that copy before running tests that need it. This slows down the startup time of the test runner; if you have apps that you know don’t need this feature, you can add their full names in here (e.g. 'django.contrib.contenttypes') to exclude them from this serialization process. THOUSAND_SEPARATOR Default: ',' (Comma) Default thousand separator used when formatting numbers. This setting is used only when USE_THOUSAND_SEPARATOR is True and NUMBER_GROUPING is greater than 0. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, DECIMAL_SEPARATOR and USE_THOUSAND_SEPARATOR. TIME_FORMAT Default: 'P' (e.g. 4 p.m.) The default formatting to use for displaying time fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT and DATETIME_FORMAT. TIME_INPUT_FORMATS Default: [ '%H:%M:%S', # '14:30:59' '%H:%M:%S.%f', # '14:30:59.000200' '%H:%M', # '14:30' ] A list of formats that will be accepted when inputting data on a time field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and DATETIME_INPUT_FORMATS. TIME_ZONE Default: 'America/Chicago' A string representing the time zone for this installation. See the list of time zones. Note Since Django was first released with the TIME_ZONE set to 'America/Chicago', the global setting (used if nothing is defined in your project’s settings.py) remains 'America/Chicago' for backwards compatibility. New project templates default to 'UTC'. 
Note that this isn’t necessarily the time zone of the server. For example, one server may serve multiple Django-powered sites, each with a separate time zone setting. When USE_TZ is False, this is the time zone in which Django will store all datetimes. When USE_TZ is True, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms. On Unix environments (where time.tzset() is implemented), Django sets the os.environ['TZ'] variable to the time zone you specify in the TIME_ZONE setting. Thus, all your views and models will automatically operate in this time zone. However, Django won’t set the TZ environment variable if you’re using the manual configuration option as described in manually configuring settings. If Django doesn’t set the TZ environment variable, it’s up to you to ensure your processes are running in the correct environment. Note Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, TIME_ZONE must be set to match the system time zone. USE_DEPRECATED_PYTZ New in Django 4.0. Default: False A boolean that specifies whether to use pytz, rather than zoneinfo, as the default time zone implementation. Deprecated since version 4.0: This transitional setting is deprecated. Support for using pytz will be removed in Django 5.0. USE_I18N Default: True A boolean that specifies whether Django’s translation system should be enabled. This provides a way to turn it off, for performance. If this is set to False, Django will make some optimizations so as not to load the translation machinery. See also LANGUAGE_CODE, USE_L10N and USE_TZ. Note The default settings.py file created by django-admin startproject includes USE_I18N = True for convenience. USE_L10N Default: True A boolean that specifies if localized formatting of data will be enabled by default or not. If this is set to True, e.g. Django will display numbers and dates using the format of the current locale. See also LANGUAGE_CODE, USE_I18N and USE_TZ. Changed in Django 4.0: In older versions, the default value is False. Deprecated since version 4.0: This setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example Django will display numbers and dates using the format of the current locale. USE_THOUSAND_SEPARATOR Default: False A boolean that specifies whether to display numbers using a thousand separator. When set to True and USE_L10N is also True, Django will format numbers using the NUMBER_GROUPING and THOUSAND_SEPARATOR settings. These settings may also be dictated by the locale, which takes precedence. See also DECIMAL_SEPARATOR, NUMBER_GROUPING and THOUSAND_SEPARATOR. USE_TZ Default: False Note In Django 5.0, the default value will change from False to True. A boolean that specifies if datetimes will be timezone-aware by default or not. If this is set to True, Django will use timezone-aware datetimes internally. When USE_TZ is False, Django will use naive datetimes in local time, except when parsing ISO 8601 formatted strings, where timezone information will always be retained if present. See also TIME_ZONE, USE_I18N and USE_L10N. Note The default settings.py file created by django-admin startproject includes USE_TZ = True for convenience. USE_X_FORWARDED_HOST Default: False A boolean that specifies whether to use the X-Forwarded-Host header in preference to the Host header. This should only be enabled if a proxy which sets this header is in use. 
This setting takes priority over USE_X_FORWARDED_PORT. Per RFC 7239#section-5.3, the X-Forwarded-Host header can include the port number, in which case you shouldn’t use USE_X_FORWARDED_PORT. USE_X_FORWARDED_PORT Default: False A boolean that specifies whether to use the X-Forwarded-Port header in preference to the SERVER_PORT META variable. This should only be enabled if a proxy which sets this header is in use. USE_X_FORWARDED_HOST takes priority over this setting. WSGI_APPLICATION Default: None The full Python path of the WSGI application object that Django’s built-in servers (e.g. runserver) will use. The django-admin startproject management command will create a standard wsgi.py file with an application callable in it, and point this setting to that application. If not set, the return value of django.core.wsgi.get_wsgi_application() will be used. In this case, the behavior of runserver will be identical to previous Django versions. YEAR_MONTH_FORMAT Default: 'F Y' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the year and month are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given month displays the month and the year. Different locales have different formats. For example, U.S. English would say “January 2006,” whereas another locale might say “2006/January.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and MONTH_DAY_FORMAT. X_FRAME_OPTIONS Default: 'DENY' The default value for the X-Frame-Options header used by XFrameOptionsMiddleware. See the clickjacking protection documentation. Auth Settings for django.contrib.auth. AUTHENTICATION_BACKENDS Default: ['django.contrib.auth.backends.ModelBackend'] A list of authentication backend classes (as strings) to use when attempting to authenticate a user. See the authentication backends documentation for details. AUTH_USER_MODEL Default: 'auth.User' The model to use to represent a User. See Substituting a custom User model. Warning You cannot change the AUTH_USER_MODEL setting during the lifetime of a project (i.e. once you have made and migrated models that depend on it) without serious effort. It is intended to be set at the project start, and the model it refers to must be available in the first migration of the app that it lives in. See Substituting a custom User model for more details. LOGIN_REDIRECT_URL Default: '/accounts/profile/' The URL or named URL pattern where requests are redirected after login when the LoginView doesn’t get a next GET parameter. LOGIN_URL Default: '/accounts/login/' The URL or named URL pattern where requests are redirected for login when using the login_required() decorator, LoginRequiredMixin, or AccessMixin. LOGOUT_REDIRECT_URL Default: None The URL or named URL pattern where requests are redirected after logout if LogoutView doesn’t have a next_page attribute. If None, no redirect will be performed and the logout view will be rendered. PASSWORD_RESET_TIMEOUT Default: 259200 (3 days, in seconds) The number of seconds a password reset link is valid for. Used by the PasswordResetConfirmView. Note Reducing the value of this timeout doesn’t make any difference to the ability of an attacker to brute-force a password reset token. 
Tokens are designed to be safe from brute-forcing without any timeout. This timeout exists to protect against some unlikely attack scenarios, such as someone gaining access to email archives that may contain old, unused password reset tokens. PASSWORD_HASHERS See How Django stores passwords. Default: [ 'django.contrib.auth.hashers.PBKDF2PasswordHasher', 'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher', 'django.contrib.auth.hashers.Argon2PasswordHasher', 'django.contrib.auth.hashers.BCryptSHA256PasswordHasher', ] AUTH_PASSWORD_VALIDATORS Default: [] (Empty list) The list of validators that are used to check the strength of user’s passwords. See Password validation for more details. By default, no validation is performed and all passwords are accepted. Messages Settings for django.contrib.messages. MESSAGE_LEVEL Default: messages.INFO Sets the minimum message level that will be recorded by the messages framework. See message levels for more details. Important If you override MESSAGE_LEVEL in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_LEVEL = message_constants.DEBUG If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. MESSAGE_STORAGE Default: 'django.contrib.messages.storage.fallback.FallbackStorage' Controls where Django stores message data. Valid values are: 'django.contrib.messages.storage.fallback.FallbackStorage' 'django.contrib.messages.storage.session.SessionStorage' 'django.contrib.messages.storage.cookie.CookieStorage' See message storage backends for more details. The backends that use cookies – CookieStorage and FallbackStorage – use the value of SESSION_COOKIE_DOMAIN, SESSION_COOKIE_SECURE and SESSION_COOKIE_HTTPONLY when setting their cookies. MESSAGE_TAGS Default: { messages.DEBUG: 'debug', messages.INFO: 'info', messages.SUCCESS: 'success', messages.WARNING: 'warning', messages.ERROR: 'error', } This sets the mapping of message level to message tag, which is typically rendered as a CSS class in HTML. If you specify a value, it will extend the default. This means you only have to specify those values which you need to override. See Displaying messages above for more details. Important If you override MESSAGE_TAGS in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_TAGS = {message_constants.INFO: ''} If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. Sessions Settings for django.contrib.sessions. SESSION_CACHE_ALIAS Default: 'default' If you’re using cache-based session storage, this selects the cache to use. SESSION_COOKIE_AGE Default: 1209600 (2 weeks, in seconds) The age of session cookies, in seconds. SESSION_COOKIE_DOMAIN Default: None The domain to use for session cookies. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. To use cross-domain cookies with CSRF_USE_SESSIONS, you must include a leading dot (e.g. ".example.com") to accommodate the CSRF middleware’s referer checking. Be cautious when updating this setting on a production site. 
If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies will be set to the old domain. This may result in them being unable to log in as long as these cookies persist. This setting also affects cookies set by django.contrib.messages. SESSION_COOKIE_HTTPONLY Default: True Whether to use HttpOnly flag on the session cookie. If this is set to True, client-side JavaScript will not be able to access the session cookie. HttpOnly is a flag included in a Set-Cookie HTTP response header. It’s part of the RFC 6265#section-4.1.2.6 standard for cookies and can be a useful way to mitigate the risk of a client-side script accessing the protected cookie data. This makes it less trivial for an attacker to escalate a cross-site scripting vulnerability into full hijacking of a user’s session. There aren’t many good reasons for turning this off. Your code shouldn’t read session cookies from JavaScript. SESSION_COOKIE_NAME Default: 'sessionid' The name of the cookie to use for sessions. This can be whatever you want (as long as it’s different from the other cookie names in your application). SESSION_COOKIE_PATH Default: '/' The path set on the session cookie. This should either match the URL path of your Django installation or be parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own session cookie. SESSION_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the session cookie. This flag prevents the cookie from being sent in cross-site requests thus preventing CSRF attacks and making some methods of stealing session cookie impossible. Possible values for the setting are: 'Strict': prevents the cookie from being sent by the browser to the target site in all cross-site browsing context, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user won’t be able to access the project. A bank website, however, most likely doesn’t want to allow any transactional pages to be linked from external sites so the 'Strict' flag would be appropriate. 'Lax' (default): provides a balance between security and usability for websites that want to maintain user’s logged-in session after the user arrives from an external link. In the GitHub scenario, the session cookie would be allowed when following a regular link from an external website and be blocked in CSRF-prone request methods (e.g. POST). 'None' (string): the session cookie will be sent with all same-site and cross-site requests. False: disables the flag. Note Modern browsers provide a more secure default policy for the SameSite flag and will assume Lax for cookies without an explicit value set. SESSION_COOKIE_SECURE Default: False Whether to use a secure cookie for the session cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. Leaving this setting off isn’t a good idea because an attacker could capture an unencrypted session cookie with a packet sniffer and use the cookie to hijack the user’s session. SESSION_ENGINE Default: 'django.contrib.sessions.backends.db' Controls where Django stores session data. 
Included engines are: 'django.contrib.sessions.backends.db' 'django.contrib.sessions.backends.file' 'django.contrib.sessions.backends.cache' 'django.contrib.sessions.backends.cached_db' 'django.contrib.sessions.backends.signed_cookies' See Configuring the session engine for more details. SESSION_EXPIRE_AT_BROWSER_CLOSE Default: False Whether to expire the session when the user closes their browser. See Browser-length sessions vs. persistent sessions. SESSION_FILE_PATH Default: None If you’re using file-based session storage, this sets the directory in which Django will store session data. When the default value (None) is used, Django will use the standard temporary directory for the system. SESSION_SAVE_EVERY_REQUEST Default: False Whether to save the session data on every request. If this is False (default), then the session data will only be saved if it has been modified – that is, if any of its dictionary values have been assigned or deleted. Empty sessions won’t be created, even if this setting is active. SESSION_SERIALIZER Default: 'django.contrib.sessions.serializers.JSONSerializer' Full import path of a serializer class to use for serializing session data. Included serializers are: 'django.contrib.sessions.serializers.PickleSerializer' 'django.contrib.sessions.serializers.JSONSerializer' See Session serialization for details, including a warning regarding possible remote code execution when using PickleSerializer. Sites Settings for django.contrib.sites. SITE_ID Default: Not defined The ID, as an integer, of the current site in the django_site database table. This is used so that application data can hook into specific sites and a single database can manage content for multiple sites. Static Files Settings for django.contrib.staticfiles. STATIC_ROOT Default: None The absolute path to the directory where collectstatic will collect static files for deployment. Example: "/var/www/example.com/static/" If the staticfiles contrib app is enabled (as in the default project template), the collectstatic management command will collect static files into this directory. See the how-to on managing static files for more details about usage. Warning This should be an initially empty destination directory for collecting your static files from their permanent locations into one directory for ease of deployment; it is not a place to store your static files permanently. You should do that in directories that will be found by staticfiles’s finders (which, by default, are 'static/' app sub-directories and any directories you include in STATICFILES_DIRS). STATIC_URL Default: None URL to use when referring to static files located in STATIC_ROOT. Example: "static/" or "http://static.example.com/" If not None, this will be used as the base path for asset definitions (the Media class) and the staticfiles app. It must end in a slash if set to a non-empty value. You may need to configure these files to be served in development and will definitely need to do so in production. Note If STATIC_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. STATICFILES_DIRS Default: [] (Empty list) This setting defines the additional locations the staticfiles app will traverse if the FileSystemFinder finder is enabled, e.g. if you use the collectstatic or findstatic management command or use the static file serving view.
This should be set to a list of strings that contain full paths to your additional files directory(ies) e.g.: STATICFILES_DIRS = [ "/home/special.polls.com/polls/static", "/home/polls.com/polls/static", "/opt/webfiles/common", ] Note that these paths should use Unix-style forward slashes, even on Windows (e.g. "C:/Users/user/mysite/extra_static_content"). Prefixes (optional) In case you want to refer to files in one of the locations with an additional namespace, you can optionally provide a prefix as (prefix, path) tuples, e.g.: STATICFILES_DIRS = [ # ... ("downloads", "/opt/webfiles/stats"), ] For example, assuming you have STATIC_URL set to 'static/', the collectstatic management command would collect the “stats” files in a 'downloads' subdirectory of STATIC_ROOT. This would allow you to refer to the local file '/opt/webfiles/stats/polls_20101022.tar.gz' with '/static/downloads/polls_20101022.tar.gz' in your templates, e.g.: <a href="{% static 'downloads/polls_20101022.tar.gz' %}"> STATICFILES_STORAGE Default: 'django.contrib.staticfiles.storage.StaticFilesStorage' The file storage engine to use when collecting static files with the collectstatic management command. A ready-to-use instance of the storage backend defined in this setting can be found at django.contrib.staticfiles.storage.staticfiles_storage. For an example, see Serving static files from a cloud service or CDN. STATICFILES_FINDERS Default: [ 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ] The list of finder backends that know how to find static files in various locations. The default will find files stored in the STATICFILES_DIRS setting (using django.contrib.staticfiles.finders.FileSystemFinder) and in a static subdirectory of each app (using django.contrib.staticfiles.finders.AppDirectoriesFinder). If multiple files with the same name are present, the first file that is found will be used. One finder is disabled by default: django.contrib.staticfiles.finders.DefaultStorageFinder. If added to your STATICFILES_FINDERS setting, it will look for static files in the default file storage as defined by the DEFAULT_FILE_STORAGE setting. Note When using the AppDirectoriesFinder finder, make sure your apps can be found by staticfiles by adding the app to the INSTALLED_APPS setting of your site. Static file finders are currently considered a private interface, and this interface is thus undocumented. 
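To tie the media and static-file settings above together, here is a hedged settings.py sketch. BASE_DIR is the project-root Path that django-admin startproject defines; the directory names and URL prefixes are assumptions for the example, not requirements.

from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

# User uploads (MEDIA_ROOT / MEDIA_URL above); must differ from the static locations.
MEDIA_ROOT = BASE_DIR / "media"
MEDIA_URL = "media/"

# Deployment target for collectstatic, plus one extra project-wide source directory.
STATIC_URL = "static/"
STATIC_ROOT = BASE_DIR / "staticfiles"
STATICFILES_DIRS = [BASE_DIR / "assets"]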
Core Settings Topical Index Cache CACHES CACHE_MIDDLEWARE_ALIAS CACHE_MIDDLEWARE_KEY_PREFIX CACHE_MIDDLEWARE_SECONDS Database DATABASES DATABASE_ROUTERS DEFAULT_INDEX_TABLESPACE DEFAULT_TABLESPACE Debugging DEBUG DEBUG_PROPAGATE_EXCEPTIONS Email ADMINS DEFAULT_CHARSET DEFAULT_FROM_EMAIL EMAIL_BACKEND EMAIL_FILE_PATH EMAIL_HOST EMAIL_HOST_PASSWORD EMAIL_HOST_USER EMAIL_PORT EMAIL_SSL_CERTFILE EMAIL_SSL_KEYFILE EMAIL_SUBJECT_PREFIX EMAIL_TIMEOUT EMAIL_USE_LOCALTIME EMAIL_USE_TLS MANAGERS SERVER_EMAIL Error reporting DEFAULT_EXCEPTION_REPORTER DEFAULT_EXCEPTION_REPORTER_FILTER IGNORABLE_404_URLS MANAGERS SILENCED_SYSTEM_CHECKS File uploads DEFAULT_FILE_STORAGE FILE_UPLOAD_HANDLERS FILE_UPLOAD_MAX_MEMORY_SIZE FILE_UPLOAD_PERMISSIONS FILE_UPLOAD_TEMP_DIR MEDIA_ROOT MEDIA_URL Forms FORM_RENDERER Globalization (i18n/l10n) DATE_FORMAT DATE_INPUT_FORMATS DATETIME_FORMAT DATETIME_INPUT_FORMATS DECIMAL_SEPARATOR FIRST_DAY_OF_WEEK FORMAT_MODULE_PATH LANGUAGE_CODE LANGUAGE_COOKIE_AGE LANGUAGE_COOKIE_DOMAIN LANGUAGE_COOKIE_HTTPONLY LANGUAGE_COOKIE_NAME LANGUAGE_COOKIE_PATH LANGUAGE_COOKIE_SAMESITE LANGUAGE_COOKIE_SECURE LANGUAGES LANGUAGES_BIDI LOCALE_PATHS MONTH_DAY_FORMAT NUMBER_GROUPING SHORT_DATE_FORMAT SHORT_DATETIME_FORMAT THOUSAND_SEPARATOR TIME_FORMAT TIME_INPUT_FORMATS TIME_ZONE USE_I18N USE_L10N USE_THOUSAND_SEPARATOR USE_TZ YEAR_MONTH_FORMAT HTTP DATA_UPLOAD_MAX_MEMORY_SIZE DATA_UPLOAD_MAX_NUMBER_FIELDS DEFAULT_CHARSET DISALLOWED_USER_AGENTS FORCE_SCRIPT_NAME INTERNAL_IPS MIDDLEWARE Security SECURE_CONTENT_TYPE_NOSNIFF SECURE_CROSS_ORIGIN_OPENER_POLICY SECURE_HSTS_INCLUDE_SUBDOMAINS SECURE_HSTS_PRELOAD SECURE_HSTS_SECONDS SECURE_PROXY_SSL_HEADER SECURE_REDIRECT_EXEMPT SECURE_REFERRER_POLICY SECURE_SSL_HOST SECURE_SSL_REDIRECT SIGNING_BACKEND USE_X_FORWARDED_HOST USE_X_FORWARDED_PORT WSGI_APPLICATION Logging LOGGING LOGGING_CONFIG Models ABSOLUTE_URL_OVERRIDES FIXTURE_DIRS INSTALLED_APPS Security Cross Site Request Forgery Protection CSRF_COOKIE_DOMAIN CSRF_COOKIE_NAME CSRF_COOKIE_PATH CSRF_COOKIE_SAMESITE CSRF_COOKIE_SECURE CSRF_FAILURE_VIEW CSRF_HEADER_NAME CSRF_TRUSTED_ORIGINS CSRF_USE_SESSIONS SECRET_KEY X_FRAME_OPTIONS Serialization DEFAULT_CHARSET SERIALIZATION_MODULES Templates TEMPLATES Testing Database: TEST TEST_NON_SERIALIZED_APPS TEST_RUNNER URLs APPEND_SLASH PREPEND_WWW ROOT_URLCONF
doc_29449
Convert strings in the Series/Index to be casefolded. New in version 0.25.0. Equivalent to str.casefold(). Returns Series or Index of object See also Series.str.lower Converts all characters to lowercase. Series.str.upper Converts all characters to uppercase. Series.str.title Converts first character of each word to uppercase and remaining to lowercase. Series.str.capitalize Converts first character to uppercase and remaining to lowercase. Series.str.swapcase Converts uppercase to lowercase and lowercase to uppercase. Series.str.casefold Removes all case distinctions in the string. Examples >>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe']) >>> s 0 lower 1 CAPITALS 2 this is a sentence 3 SwApCaSe dtype: object >>> s.str.lower() 0 lower 1 capitals 2 this is a sentence 3 swapcase dtype: object >>> s.str.upper() 0 LOWER 1 CAPITALS 2 THIS IS A SENTENCE 3 SWAPCASE dtype: object >>> s.str.title() 0 Lower 1 Capitals 2 This Is A Sentence 3 Swapcase dtype: object >>> s.str.capitalize() 0 Lower 1 Capitals 2 This is a sentence 3 Swapcase dtype: object >>> s.str.swapcase() 0 LOWER 1 capitals 2 THIS IS A SENTENCE 3 sWaPcAsE dtype: object
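The shared examples above exercise the related case methods but not casefold() itself; a small sketch follows (note how the German eszett is folded to 'ss', which lower() would leave unchanged):

>>> s = pd.Series(['groß', 'SwApCaSe'])
>>> s.str.casefold()
0       gross
1    swapcase
dtype: object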
doc_29450
class smtplib.SMTP(host='', port=0, local_hostname=None, [timeout, ]source_address=None) An SMTP instance encapsulates an SMTP connection. It has methods that support a full repertoire of SMTP and ESMTP operations. If the optional host and port parameters are given, the SMTP connect() method is called with those parameters during initialization. If specified, local_hostname is used as the FQDN of the local host in the HELO/EHLO command. Otherwise, the local hostname is found using socket.getfqdn(). If the connect() call returns anything other than a success code, an SMTPConnectError is raised. The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). If the timeout expires, socket.timeout is raised. The optional source_address parameter allows binding to some specific source address in a machine with multiple network interfaces, and/or to some specific source TCP port. It takes a 2-tuple (host, port), for the socket to bind to as its source address before connecting. If omitted (or if host or port are '' and/or 0 respectively) the OS default behavior will be used. For normal use, you should only require the initialization/connect, sendmail(), and SMTP.quit() methods. An example is included below. The SMTP class supports the with statement. When used like this, the SMTP QUIT command is issued automatically when the with statement exits. E.g.: >>> from smtplib import SMTP >>> with SMTP("domain.org") as smtp: ... smtp.noop() ... (250, b'Ok') >>> All commands will raise an auditing event smtplib.SMTP.send with arguments self and data, where data is the bytes about to be sent to the remote host. Changed in version 3.3: Support for the with statement was added. Changed in version 3.3: source_address argument was added. New in version 3.5: The SMTPUTF8 extension (RFC 6531) is now supported. Changed in version 3.9: If the timeout parameter is set to be zero, it will raise a ValueError to prevent the creation of a non-blocking socket class smtplib.SMTP_SSL(host='', port=0, local_hostname=None, keyfile=None, certfile=None, [timeout, ]context=None, source_address=None) An SMTP_SSL instance behaves exactly the same as instances of SMTP. SMTP_SSL should be used for situations where SSL is required from the beginning of the connection and using starttls() is not appropriate. If host is not specified, the local host is used. If port is zero, the standard SMTP-over-SSL port (465) is used. The optional arguments local_hostname, timeout and source_address have the same meaning as they do in the SMTP class. context, also optional, can contain a SSLContext and allows configuring various aspects of the secure connection. Please read Security considerations for best practices. keyfile and certfile are a legacy alternative to context, and can point to a PEM formatted private key and certificate chain file for the SSL connection. Changed in version 3.3: context was added. Changed in version 3.3: source_address argument was added. Changed in version 3.4: The class now supports hostname check with ssl.SSLContext.check_hostname and Server Name Indication (see ssl.HAS_SNI). Deprecated since version 3.6: keyfile and certfile are deprecated in favor of context. Please use ssl.SSLContext.load_cert_chain() instead, or let ssl.create_default_context() select the system’s trusted CA certificates for you. 
Changed in version 3.9: If the timeout parameter is set to be zero, it will raise a ValueError to prevent the creation of a non-blocking socket class smtplib.LMTP(host='', port=LMTP_PORT, local_hostname=None, source_address=None[, timeout]) The LMTP protocol, which is very similar to ESMTP, is heavily based on the standard SMTP client. It’s common to use Unix sockets for LMTP, so our connect() method must support that as well as a regular host:port server. The optional arguments local_hostname and source_address have the same meaning as they do in the SMTP class. To specify a Unix socket, you must use an absolute path for host, starting with a ‘/’. Authentication is supported, using the regular SMTP mechanism. When using a Unix socket, LMTP servers generally don’t support or require any authentication, but your mileage might vary. Changed in version 3.9: The optional timeout parameter was added. A nice selection of exceptions is defined as well: exception smtplib.SMTPException Subclass of OSError that is the base exception class for all the other exceptions provided by this module. Changed in version 3.4: SMTPException became subclass of OSError exception smtplib.SMTPServerDisconnected This exception is raised when the server unexpectedly disconnects, or when an attempt is made to use the SMTP instance before connecting it to a server. exception smtplib.SMTPResponseException Base class for all exceptions that include an SMTP error code. These exceptions are generated in some instances when the SMTP server returns an error code. The error code is stored in the smtp_code attribute of the error, and the smtp_error attribute is set to the error message. exception smtplib.SMTPSenderRefused Sender address refused. In addition to the attributes set on all SMTPResponseException exceptions, this sets ‘sender’ to the string that the SMTP server refused. exception smtplib.SMTPRecipientsRefused All recipient addresses refused. The errors for each recipient are accessible through the attribute recipients, which is a dictionary of exactly the same sort as SMTP.sendmail() returns. exception smtplib.SMTPDataError The SMTP server refused to accept the message data. exception smtplib.SMTPConnectError Error occurred during establishment of a connection with the server. exception smtplib.SMTPHeloError The server refused our HELO message. exception smtplib.SMTPNotSupportedError The command or option attempted is not supported by the server. New in version 3.5. exception smtplib.SMTPAuthenticationError SMTP authentication went wrong. Most probably the server didn’t accept the username/password combination provided. See also RFC 821 - Simple Mail Transfer Protocol Protocol definition for SMTP. This document covers the model, operating procedure, and protocol details for SMTP. RFC 1869 - SMTP Service Extensions Definition of the ESMTP extensions for SMTP. This describes a framework for extending SMTP with new commands, supporting dynamic discovery of the commands provided by the server, and defines a few additional commands. SMTP Objects An SMTP instance has the following methods: SMTP.set_debuglevel(level) Set the debug output level. A value of 1 or True for level results in debug messages for connection and for all messages sent to and received from the server. A value of 2 for level results in these messages being timestamped. Changed in version 3.5: Added debuglevel 2. SMTP.docmd(cmd, args='') Send a command cmd to the server. The optional argument args is simply concatenated to the command, separated by a space.
This returns a 2-tuple composed of a numeric response code and the actual response line (multiline responses are joined into one long line.) In normal operation it should not be necessary to call this method explicitly. It is used to implement other methods and may be useful for testing private extensions. If the connection to the server is lost while waiting for the reply, SMTPServerDisconnected will be raised. SMTP.connect(host='localhost', port=0) Connect to a host on a given port. The defaults are to connect to the local host at the standard SMTP port (25). If the hostname ends with a colon (':') followed by a number, that suffix will be stripped off and the number interpreted as the port number to use. This method is automatically invoked by the constructor if a host is specified during instantiation. Returns a 2-tuple of the response code and message sent by the server in its connection response. Raises an auditing event smtplib.connect with arguments self, host, port. SMTP.helo(name='') Identify yourself to the SMTP server using HELO. The hostname argument defaults to the fully qualified domain name of the local host. The message returned by the server is stored as the helo_resp attribute of the object. In normal operation it should not be necessary to call this method explicitly. It will be implicitly called by the sendmail() when necessary. SMTP.ehlo(name='') Identify yourself to an ESMTP server using EHLO. The hostname argument defaults to the fully qualified domain name of the local host. Examine the response for ESMTP option and store them for use by has_extn(). Also sets several informational attributes: the message returned by the server is stored as the ehlo_resp attribute, does_esmtp is set to true or false depending on whether the server supports ESMTP, and esmtp_features will be a dictionary containing the names of the SMTP service extensions this server supports, and their parameters (if any). Unless you wish to use has_extn() before sending mail, it should not be necessary to call this method explicitly. It will be implicitly called by sendmail() when necessary. SMTP.ehlo_or_helo_if_needed() This method calls ehlo() and/or helo() if there has been no previous EHLO or HELO command this session. It tries ESMTP EHLO first. SMTPHeloError The server didn’t reply properly to the HELO greeting. SMTP.has_extn(name) Return True if name is in the set of SMTP service extensions returned by the server, False otherwise. Case is ignored. SMTP.verify(address) Check the validity of an address on this server using SMTP VRFY. Returns a tuple consisting of code 250 and a full RFC 822 address (including human name) if the user address is valid. Otherwise returns an SMTP error code of 400 or greater and an error string. Note Many sites disable SMTP VRFY in order to foil spammers. SMTP.login(user, password, *, initial_response_ok=True) Log in on an SMTP server that requires authentication. The arguments are the username and the password to authenticate with. If there has been no previous EHLO or HELO command this session, this method tries ESMTP EHLO first. This method will return normally if the authentication was successful, or may raise the following exceptions: SMTPHeloError The server didn’t reply properly to the HELO greeting. SMTPAuthenticationError The server didn’t accept the username/password combination. SMTPNotSupportedError The AUTH command is not supported by the server. SMTPException No suitable authentication method was found. 
Each of the authentication methods supported by smtplib are tried in turn if they are advertised as supported by the server. See auth() for a list of supported authentication methods. initial_response_ok is passed through to auth(). Optional keyword argument initial_response_ok specifies whether, for authentication methods that support it, an “initial response” as specified in RFC 4954 can be sent along with the AUTH command, rather than requiring a challenge/response. Changed in version 3.5: SMTPNotSupportedError may be raised, and the initial_response_ok parameter was added. SMTP.auth(mechanism, authobject, *, initial_response_ok=True) Issue an SMTP AUTH command for the specified authentication mechanism, and handle the challenge response via authobject. mechanism specifies which authentication mechanism is to be used as argument to the AUTH command; the valid values are those listed in the auth element of esmtp_features. authobject must be a callable object taking an optional single argument: data = authobject(challenge=None) If optional keyword argument initial_response_ok is true, authobject() will be called first with no argument. It can return the RFC 4954 “initial response” ASCII str which will be encoded and sent with the AUTH command as below. If the authobject() does not support an initial response (e.g. because it requires a challenge), it should return None when called with challenge=None. If initial_response_ok is false, then authobject() will not be called first with None. If the initial response check returns None, or if initial_response_ok is false, authobject() will be called to process the server’s challenge response; the challenge argument it is passed will be a bytes. It should return ASCII str data that will be base64 encoded and sent to the server. The SMTP class provides authobjects for the CRAM-MD5, PLAIN, and LOGIN mechanisms; they are named SMTP.auth_cram_md5, SMTP.auth_plain, and SMTP.auth_login respectively. They all require that the user and password properties of the SMTP instance are set to appropriate values. User code does not normally need to call auth directly, but can instead call the login() method, which will try each of the above mechanisms in turn, in the order listed. auth is exposed to facilitate the implementation of authentication methods not (or not yet) supported directly by smtplib. New in version 3.5. SMTP.starttls(keyfile=None, certfile=None, context=None) Put the SMTP connection in TLS (Transport Layer Security) mode. All SMTP commands that follow will be encrypted. You should then call ehlo() again. If keyfile and certfile are provided, they are used to create an ssl.SSLContext. Optional context parameter is an ssl.SSLContext object; This is an alternative to using a keyfile and a certfile and if specified both keyfile and certfile should be None. If there has been no previous EHLO or HELO command this session, this method tries ESMTP EHLO first. Deprecated since version 3.6: keyfile and certfile are deprecated in favor of context. Please use ssl.SSLContext.load_cert_chain() instead, or let ssl.create_default_context() select the system’s trusted CA certificates for you. SMTPHeloError The server didn’t reply properly to the HELO greeting. SMTPNotSupportedError The server does not support the STARTTLS extension. RuntimeError SSL/TLS support is not available to your Python interpreter. Changed in version 3.3: context was added. 
Changed in version 3.4: The method now supports hostname check with SSLContext.check_hostname and Server Name Indicator (see HAS_SNI). Changed in version 3.5: The error raised for lack of STARTTLS support is now the SMTPNotSupportedError subclass instead of the base SMTPException. SMTP.sendmail(from_addr, to_addrs, msg, mail_options=(), rcpt_options=()) Send mail. The required arguments are an RFC 822 from-address string, a list of RFC 822 to-address strings (a bare string will be treated as a list with 1 address), and a message string. The caller may pass a list of ESMTP options (such as 8bitmime) to be used in MAIL FROM commands as mail_options. ESMTP options (such as DSN commands) that should be used with all RCPT commands can be passed as rcpt_options. (If you need to use different ESMTP options to different recipients you have to use the low-level methods such as mail(), rcpt() and data() to send the message.) Note The from_addr and to_addrs parameters are used to construct the message envelope used by the transport agents. sendmail does not modify the message headers in any way. msg may be a string containing characters in the ASCII range, or a byte string. A string is encoded to bytes using the ascii codec, and lone \r and \n characters are converted to \r\n characters. A byte string is not modified. If there has been no previous EHLO or HELO command this session, this method tries ESMTP EHLO first. If the server does ESMTP, message size and each of the specified options will be passed to it (if the option is in the feature set the server advertises). If EHLO fails, HELO will be tried and ESMTP options suppressed. This method will return normally if the mail is accepted for at least one recipient. Otherwise it will raise an exception. That is, if this method does not raise an exception, then someone should get your mail. If this method does not raise an exception, it returns a dictionary, with one entry for each recipient that was refused. Each entry contains a tuple of the SMTP error code and the accompanying error message sent by the server. If SMTPUTF8 is included in mail_options, and the server supports it, from_addr and to_addrs may contain non-ASCII characters. This method may raise the following exceptions: SMTPRecipientsRefused All recipients were refused. Nobody got the mail. The recipients attribute of the exception object is a dictionary with information about the refused recipients (like the one returned when at least one recipient was accepted). SMTPHeloError The server didn’t reply properly to the HELO greeting. SMTPSenderRefused The server didn’t accept the from_addr. SMTPDataError The server replied with an unexpected error code (other than a refusal of a recipient). SMTPNotSupportedError SMTPUTF8 was given in the mail_options but is not supported by the server. Unless otherwise noted, the connection will be open even after an exception is raised. Changed in version 3.2: msg may be a byte string. Changed in version 3.5: SMTPUTF8 support added, and SMTPNotSupportedError may be raised if SMTPUTF8 is specified but the server does not support it. SMTP.send_message(msg, from_addr=None, to_addrs=None, mail_options=(), rcpt_options=()) This is a convenience method for calling sendmail() with the message represented by an email.message.Message object. The arguments have the same meaning as for sendmail(), except that msg is a Message object. 
If from_addr is None or to_addrs is None, send_message fills those arguments with addresses extracted from the headers of msg as specified in RFC 5322: from_addr is set to the Sender field if it is present, and otherwise to the From field. to_addrs combines the values (if any) of the To, Cc, and Bcc fields from msg. If exactly one set of Resent-* headers appear in the message, the regular headers are ignored and the Resent-* headers are used instead. If the message contains more than one set of Resent-* headers, a ValueError is raised, since there is no way to unambiguously detect the most recent set of Resent- headers. send_message serializes msg using BytesGenerator with \r\n as the linesep, and calls sendmail() to transmit the resulting message. Regardless of the values of from_addr and to_addrs, send_message does not transmit any Bcc or Resent-Bcc headers that may appear in msg. If any of the addresses in from_addr and to_addrs contain non-ASCII characters and the server does not advertise SMTPUTF8 support, an SMTPNotSupported error is raised. Otherwise the Message is serialized with a clone of its policy with the utf8 attribute set to True, and SMTPUTF8 and BODY=8BITMIME are added to mail_options. New in version 3.2. New in version 3.5: Support for internationalized addresses (SMTPUTF8). SMTP.quit() Terminate the SMTP session and close the connection. Return the result of the SMTP QUIT command. Low-level methods corresponding to the standard SMTP/ESMTP commands HELP, RSET, NOOP, MAIL, RCPT, and DATA are also supported. Normally these do not need to be called directly, so they are not documented here. For details, consult the module code. SMTP Example This example prompts the user for addresses needed in the message envelope (‘To’ and ‘From’ addresses), and the message to be delivered. Note that the headers to be included with the message must be included in the message as entered; this example doesn’t do any processing of the RFC 822 headers. In particular, the ‘To’ and ‘From’ addresses must be included in the message headers explicitly. import smtplib def prompt(prompt): return input(prompt).strip() fromaddr = prompt("From: ") toaddrs = prompt("To: ").split() print("Enter message, end with ^D (Unix) or ^Z (Windows):") # Add the From: and To: headers at the start! msg = ("From: %s\r\nTo: %s\r\n\r\n" % (fromaddr, ", ".join(toaddrs))) while True: try: line = input() except EOFError: break if not line: break msg = msg + line print("Message length is", len(msg)) server = smtplib.SMTP('localhost') server.set_debuglevel(1) server.sendmail(fromaddr, toaddrs, msg) server.quit() Note In general, you will want to use the email package’s features to construct an email message, which you can then send via send_message(); see email: Examples.
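As a complement to the sendmail() example above, the following hedged sketch shows the send_message() path with an email.message.EmailMessage; the host name and addresses are placeholders:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.org"
msg["To"] = "you@example.org"
msg["Subject"] = "Hello"
msg.set_content("Sent with SMTP.send_message().")

# The with block issues the QUIT command automatically on exit.
with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)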
doc_29451
See Migration guide for more details. tf.compat.v1.raw_ops.QuantizeAndDequantize tf.raw_ops.QuantizeAndDequantize( input, signed_input=True, num_bits=8, range_given=False, input_min=0, input_max=0, name=None ) Args input A Tensor. Must be one of the following types: bfloat16, half, float32, float64. signed_input An optional bool. Defaults to True. num_bits An optional int. Defaults to 8. range_given An optional bool. Defaults to False. input_min An optional float. Defaults to 0. input_max An optional float. Defaults to 0. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
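A minimal call sketch based on the signature above; the input values are arbitrary, and production code would more commonly reach this behaviour through a higher-level quantization API than through tf.raw_ops directly:

import tensorflow as tf

x = tf.constant([-1.0, -0.4, 0.0, 0.3, 0.9], dtype=tf.float32)
y = tf.raw_ops.QuantizeAndDequantize(
    input=x, signed_input=True, num_bits=8, range_given=False)
# y has the same dtype and shape as x, with values snapped to the 8-bit grid.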
doc_29452
See Migration guide for more details. tf.compat.v1.raw_ops.UnravelIndex tf.raw_ops.UnravelIndex( indices, dims, name=None ) Example: y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3]) # 'dims' represent a hypothetical (3, 3) tensor of indices: # [[0, 1, *2*], # [3, 4, *5*], # [6, *7*, 8]] # For each entry from 'indices', this operation returns # its coordinates (marked with '*'), such as # 2 ==> (0, 2) # 5 ==> (1, 2) # 7 ==> (2, 1) y ==> [[0, 1, 2], [2, 2, 1]] Args indices A Tensor. Must be one of the following types: int32, int64. An 0-D or 1-D int Tensor whose elements are indices into the flattened version of an array of dimensions dims. dims A Tensor. Must have the same type as indices. An 1-D int Tensor. The shape of the array to use for unraveling indices. name A name for the operation (optional). Returns A Tensor. Has the same type as indices. Numpy Compatibility Equivalent to np.unravel_index
doc_29453
Parses a range header into a ContentRange object or None if parsing is not possible. Changelog New in version 0.7. Parameters value (Optional[str]) – a content range header to be parsed. on_update (Optional[Callable[[werkzeug.datastructures.ContentRange], None]]) – an optional callable that is called every time a value on the ContentRange object is changed. Return type Optional[werkzeug.datastructures.ContentRange]
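A short usage sketch, assuming the function is imported from werkzeug.http (the printed values are illustrative; Werkzeug stores the stop bound exclusively):

from werkzeug.http import parse_content_range_header

rng = parse_content_range_header("bytes 0-499/1234")
if rng is not None:
    print(rng.units, rng.start, rng.stop, rng.length)  # e.g. bytes 0 500 1234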
doc_29454
Object that is less than anything (except itself). Used to test mixed type comparison.
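A sketch of how such a sentinel could be written; this is an illustration of the idea, not the library's actual implementation:

import functools

@functools.total_ordering
class Smallest:
    """Compares less than any other object and equal only to itself."""
    def __eq__(self, other):
        return self is other
    def __lt__(self, other):
        return self is not other

assert Smallest() < 0 and Smallest() < "anything"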
doc_29455
Return the Transform instance used by this artist.
doc_29456
Return True if the call was successfully cancelled or finished running.
doc_29457
Scalar method identical to the corresponding array attribute. Please see ndarray.choose.
doc_29458
Parameters Xfloat or int, ndarray or scalar The data value(s) to convert to RGBA. For floats, X should be in the interval [0.0, 1.0] to return the RGBA values X*100 percent along the Colormap line. For integers, X should be in the interval [0, Colormap.N) to return RGBA values indexed from the Colormap with index X. alphafloat or array-like or None Alpha must be a scalar between 0 and 1, a sequence of such floats with shape matching X, or None. bytesbool If False (default), the returned RGBA values will be floats in the interval [0, 1] otherwise they will be uint8s in the interval [0, 255]. Returns Tuple of RGBA values if X is scalar, otherwise an array of RGBA values with a shape of X.shape + (4, ).
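As a quick illustration of this call signature (the colormap name and sample values are arbitrary):

import numpy as np
import matplotlib.pyplot as plt

cmap = plt.get_cmap("viridis")
print(cmap(0.5))                                     # one RGBA tuple of floats in [0, 1]
print(cmap(np.array([0.0, 0.5, 1.0]), bytes=True))   # (3, 4) uint8 array in [0, 255]
print(cmap(10))                                      # an int indexes the lookup table directly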
doc_29459
Interpret a buffer as a 1-dimensional array. Parameters bufferbuffer_like An object that exposes the buffer interface. dtypedata-type, optional Data-type of the returned array; default: float. countint, optional Number of items to read. -1 means all data in the buffer. offsetint, optional Start reading the buffer from this offset (in bytes); default: 0. likearray_like Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as like supports the __array_function__ protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns outndarray Notes If the buffer has data that is not in machine byte-order, this should be specified as part of the data-type, e.g.: >>> dt = np.dtype(int) >>> dt = dt.newbyteorder('>') >>> np.frombuffer(buf, dtype=dt) The data of the resulting array will not be byteswapped, but will be interpreted correctly. Examples >>> s = b'hello world' >>> np.frombuffer(s, dtype='S1', count=5, offset=6) array([b'w', b'o', b'r', b'l', b'd'], dtype='|S1') >>> np.frombuffer(b'\x01\x02', dtype=np.uint8) array([1, 2], dtype=uint8) >>> np.frombuffer(b'\x01\x02\x03\x04\x05', dtype=np.uint8, count=3) array([1, 2, 3], dtype=uint8)
doc_29460
re.MULTILINE When specified, the pattern character '^' matches at the beginning of the string and at the beginning of each line (immediately following each newline); and the pattern character '$' matches at the end of the string and at the end of each line (immediately preceding each newline). By default, '^' matches only at the beginning of the string, and '$' only at the end of the string and immediately before the newline (if any) at the end of the string. Corresponds to the inline flag (?m).
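For example, anchoring a pattern at '^' shows the effect of the flag: import re

text = "first line\nsecond line"
re.findall(r"^\w+", text)                 # ['first']: '^' matches only at the start
re.findall(r"^\w+", text, re.MULTILINE)   # ['first', 'second']: '^' also matches after each newline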
doc_29461
Call self as a function.
doc_29462
Generate C preprocessor definitions and include headers of a CPU feature. Parameters ‘feature_name’: str CPU feature name in uppercase. ‘tabs’: int if > 0, align the generated strings to the right depending on the number of tabs. Returns str, the generated C preprocessor definitions Examples >>> self.feature_c_preprocessor("SSE3") /** SSE3 **/ #define NPY_HAVE_SSE3 1 #include <pmmintrin.h>
doc_29463
Follow RFC 2965 rules on unverifiable transactions (usually, an unverifiable transaction is one resulting from a redirect or a request for an image hosted on another site). If this is false, cookies are never blocked on the basis of verifiability.
doc_29464
Return name of a marker XObject representing the given path.
doc_29465
sklearn.metrics.precision_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the precision. The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The best value is 1 and the worst value is 0. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values. y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier. labelsarray-like, default=None The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Changed in version 0.17: Parameter labels improved for multiclass problem. pos_labelstr or int, default=1 The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only. average{‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} default=’binary’ This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: 'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary. 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall. 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score). sample_weightarray-like of shape (n_samples,), default=None Sample weights. zero_division“warn”, 0 or 1, default=”warn” Sets the value to return when there is a zero division. If set to “warn”, this acts as 0, but warnings are also raised. Returns precisionfloat (if average is not None) or array of float of shape (n_unique_labels,) Precision of the positive class in binary classification or weighted average of the precision of each class for the multiclass task. See also precision_recall_fscore_support, multilabel_confusion_matrix Notes When true positive + false positive == 0, precision returns 0 and raises UndefinedMetricWarning. This behavior can be modified with zero_division. Examples >>> from sklearn.metrics import precision_score >>> y_true = [0, 1, 2, 0, 1, 2] >>> y_pred = [0, 2, 1, 0, 0, 1] >>> precision_score(y_true, y_pred, average='macro') 0.22... >>> precision_score(y_true, y_pred, average='micro') 0.33... >>> precision_score(y_true, y_pred, average='weighted') 0.22... 
>>> precision_score(y_true, y_pred, average=None) array([0.66..., 0. , 0. ]) >>> y_pred = [0, 0, 0, 0, 0, 0] >>> precision_score(y_true, y_pred, average=None) array([0.33..., 0. , 0. ]) >>> precision_score(y_true, y_pred, average=None, zero_division=1) array([0.33..., 1. , 1. ]) Examples using sklearn.metrics.precision_score Probability Calibration curves Precision-Recall
doc_29466
Return the font style. Values are: 'normal', 'italic' or 'oblique'.
doc_29467
See Migration guide for more details. tf.compat.v1.raw_ops.Fill tf.raw_ops.Fill( dims, value, name=None ) This operation creates a tensor of shape dims and fills it with value. For example: # Output tensor has shape [2, 3]. fill([2, 3], 9) ==> [[9, 9, 9] [9, 9, 9]] tf.fill differs from tf.constant in a few ways: tf.fill only supports scalar contents, whereas tf.constant supports Tensor values. tf.fill creates an Op in the computation graph that constructs the actual Tensor value at runtime. This is in contrast to tf.constant which embeds the entire Tensor into the graph with a Const node. Because tf.fill evaluates at graph runtime, it supports dynamic shapes based on other runtime Tensors, unlike tf.constant. Args dims A Tensor. Must be one of the following types: int32, int64. 1-D. Represents the shape of the output tensor. value A Tensor. 0-D (scalar). Value to fill the returned tensor. name A name for the operation (optional). Returns A Tensor. Has the same type as value. Numpy Compatibility Equivalent to np.full
doc_29468
""" May be applied as a `default=...` value on a serializer field. Returns the current user. """ requires_context = True def __call__(self, serializer_field): return serializer_field.context['request'].user When serializing the instance, default will be used if the object attribute or dictionary key is not present in the instance. Note that setting a default value implies that the field is not required. Including both the default and required keyword arguments is invalid and will raise an error. allow_null Normally an error will be raised if None is passed to a serializer field. Set this keyword argument to True if None should be considered a valid value. Note that, without an explicit default, setting this argument to True will imply a default value of null for serialization output, but does not imply a default for input deserialization. Defaults to False source The name of the attribute that will be used to populate the field. May be a method that only takes a self argument, such as URLField(source='get_absolute_url'), or may use dotted notation to traverse attributes, such as EmailField(source='user.email'). When serializing fields with dotted notation, it may be necessary to provide a default value if any object is not present or is empty during attribute traversal. The value source='*' has a special meaning, and is used to indicate that the entire object should be passed through to the field. This can be useful for creating nested representations, or for fields which require access to the complete object in order to determine the output representation. Defaults to the name of the field. validators A list of validator functions which should be applied to the incoming field input, and which either raise a validation error or simply return. Validator functions should typically raise serializers.ValidationError, but Django's built-in ValidationError is also supported for compatibility with validators defined in the Django codebase or third party Django packages. error_messages A dictionary of error codes to error messages. label A short text string that may be used as the name of the field in HTML form fields or other descriptive elements. help_text A text string that may be used as a description of the field in HTML form fields or other descriptive elements. initial A value that should be used for pre-populating the value of HTML form fields. You may pass a callable to it, just as you may do with any regular Django Field: import datetime from rest_framework import serializers class ExampleSerializer(serializers.Serializer): day = serializers.DateField(initial=datetime.date.today) style A dictionary of key-value pairs that can be used to control how renderers should render the field. Two examples here are 'input_type' and 'base_template': # Use <input type="password"> for the input. password = serializers.CharField( style={'input_type': 'password'} ) # Use a radio input instead of a select input. color_channel = serializers.ChoiceField( choices=['red', 'green', 'blue'], style={'base_template': 'radio.html'} ) For more details see the HTML & Forms documentation. Boolean fields BooleanField A boolean representation. When using HTML encoded form input be aware that omitting a value will always be treated as setting a field to False, even if it has a default=True option specified. This is because HTML checkbox inputs represent the unchecked state by omitting the value, so REST framework treats omission as if it is an empty checkbox input. 
Note that Django 2.1 removed the blank kwarg from models.BooleanField. Prior to Django 2.1 models.BooleanField fields were always blank=True. Thus since Django 2.1 default serializers.BooleanField instances will be generated without the required kwarg (i.e. equivalent to required=True) whereas with previous versions of Django, default BooleanField instances will be generated with a required=False option. If you want to control this behaviour manually, explicitly declare the BooleanField on the serializer class, or use the extra_kwargs option to set the required flag. Corresponds to django.db.models.fields.BooleanField. Signature: BooleanField() NullBooleanField A boolean representation that also accepts None as a valid value. Corresponds to django.db.models.fields.NullBooleanField. Signature: NullBooleanField() String fields CharField A text representation. Optionally validates the text to be shorter than max_length and longer than min_length. Corresponds to django.db.models.fields.CharField or django.db.models.fields.TextField. Signature: CharField(max_length=None, min_length=None, allow_blank=False, trim_whitespace=True) max_length - Validates that the input contains no more than this number of characters. min_length - Validates that the input contains no fewer than this number of characters. allow_blank - If set to True then the empty string should be considered a valid value. If set to False then the empty string is considered invalid and will raise a validation error. Defaults to False. trim_whitespace - If set to True then leading and trailing whitespace is trimmed. Defaults to True. The allow_null option is also available for string fields, although its usage is discouraged in favor of allow_blank. It is valid to set both allow_blank=True and allow_null=True, but doing so means that there will be two differing types of empty value permissible for string representations, which can lead to data inconsistencies and subtle application bugs. EmailField A text representation, validates the text to be a valid e-mail address. Corresponds to django.db.models.fields.EmailField Signature: EmailField(max_length=None, min_length=None, allow_blank=False) RegexField A text representation, that validates the given value matches against a certain regular expression. Corresponds to django.forms.fields.RegexField. Signature: RegexField(regex, max_length=None, min_length=None, allow_blank=False) The mandatory regex argument may either be a string, or a compiled python regular expression object. Uses Django's django.core.validators.RegexValidator for validation. SlugField A RegexField that validates the input against the pattern [a-zA-Z0-9_-]+. Corresponds to django.db.models.fields.SlugField. Signature: SlugField(max_length=50, min_length=None, allow_blank=False) URLField A RegexField that validates the input against a URL matching pattern. Expects fully qualified URLs of the form http://<host>/<path>. Corresponds to django.db.models.fields.URLField. Uses Django's django.core.validators.URLValidator for validation. Signature: URLField(max_length=200, min_length=None, allow_blank=False) UUIDField A field that ensures the input is a valid UUID string. The to_internal_value method will return a uuid.UUID instance. 
On output the field will return a string in the canonical hyphenated format, for example: "de305d54-75b4-431b-adb2-eb6b9e546013" Signature: UUIDField(format='hex_verbose') format: Determines the representation format of the uuid value 'hex_verbose' - The canonical hex representation, including hyphens: "5ce0e9a5-5ffa-654b-cee0-1238041fb31a" 'hex' - The compact hex representation of the UUID, not including hyphens: "5ce0e9a55ffa654bcee01238041fb31a" 'int' - A 128 bit integer representation of the UUID: "123456789012312313134124512351145145114" 'urn' - RFC 4122 URN representation of the UUID: "urn:uuid:5ce0e9a5-5ffa-654b-cee0-1238041fb31a" Changing the format parameters only affects representation values. All formats are accepted by to_internal_value FilePathField A field whose choices are limited to the filenames in a certain directory on the filesystem Corresponds to django.forms.fields.FilePathField. Signature: FilePathField(path, match=None, recursive=False, allow_files=True, allow_folders=False, required=None, **kwargs) path - The absolute filesystem path to a directory from which this FilePathField should get its choice. match - A regular expression, as a string, that FilePathField will use to filter filenames. recursive - Specifies whether all subdirectories of path should be included. Default is False. allow_files - Specifies whether files in the specified location should be included. Default is True. Either this or allow_folders must be True. allow_folders - Specifies whether folders in the specified location should be included. Default is False. Either this or allow_files must be True. IPAddressField A field that ensures the input is a valid IPv4 or IPv6 string. Corresponds to django.forms.fields.IPAddressField and django.forms.fields.GenericIPAddressField. Signature: IPAddressField(protocol='both', unpack_ipv4=False, **options) protocol Limits valid inputs to the specified protocol. Accepted values are 'both' (default), 'IPv4' or 'IPv6'. Matching is case insensitive. unpack_ipv4 Unpacks IPv4 mapped addresses like ::ffff:192.0.2.1. If this option is enabled that address would be unpacked to 192.0.2.1. Default is disabled. Can only be used when protocol is set to 'both'. Numeric fields IntegerField An integer representation. Corresponds to django.db.models.fields.IntegerField, django.db.models.fields.SmallIntegerField, django.db.models.fields.PositiveIntegerField and django.db.models.fields.PositiveSmallIntegerField. Signature: IntegerField(max_value=None, min_value=None) max_value Validate that the number provided is no greater than this value. min_value Validate that the number provided is no less than this value. FloatField A floating point representation. Corresponds to django.db.models.fields.FloatField. Signature: FloatField(max_value=None, min_value=None) max_value Validate that the number provided is no greater than this value. min_value Validate that the number provided is no less than this value. DecimalField A decimal representation, represented in Python by a Decimal instance. Corresponds to django.db.models.fields.DecimalField. Signature: DecimalField(max_digits, decimal_places, coerce_to_string=None, max_value=None, min_value=None) max_digits The maximum number of digits allowed in the number. It must be either None or an integer greater than or equal to decimal_places. decimal_places The number of decimal places to store with the number. 
coerce_to_string Set to True if string values should be returned for the representation, or False if Decimal objects should be returned. Defaults to the same value as the COERCE_DECIMAL_TO_STRING settings key, which will be True unless overridden. If Decimal objects are returned by the serializer, then the final output format will be determined by the renderer. Note that setting localize will force the value to True. max_value Validate that the number provided is no greater than this value. min_value Validate that the number provided is no less than this value. localize Set to True to enable localization of input and output based on the current locale. This will also force coerce_to_string to True. Defaults to False. Note that data formatting is enabled if you have set USE_L10N=True in your settings file. rounding Sets the rounding mode used when quantising to the configured precision. Valid values are decimal module rounding modes. Defaults to None. Example usage To validate numbers up to 999 with a resolution of 2 decimal places, you would use: serializers.DecimalField(max_digits=5, decimal_places=2) And to validate numbers up to anything less than one billion with a resolution of 10 decimal places: serializers.DecimalField(max_digits=19, decimal_places=10) This field also takes an optional argument, coerce_to_string. If set to True the representation will be output as a string. If set to False the representation will be left as a Decimal instance and the final representation will be determined by the renderer. If unset, this will default to the same value as the COERCE_DECIMAL_TO_STRING setting, which is True unless set otherwise. Date and time fields DateTimeField A date and time representation. Corresponds to django.db.models.fields.DateTimeField. Signature: DateTimeField(format=api_settings.DATETIME_FORMAT, input_formats=None, default_timezone=None) format - A string representing the output format. If not specified, this defaults to the same value as the DATETIME_FORMAT settings key, which will be 'iso-8601' unless set. Setting to a format string indicates that to_representation return values should be coerced to string output. Format strings are described below. Setting this value to None indicates that Python datetime objects should be returned by to_representation. In this case the datetime encoding will be determined by the renderer. input_formats - A list of strings representing the input formats which may be used to parse the date. If not specified, the DATETIME_INPUT_FORMATS setting will be used, which defaults to ['iso-8601']. default_timezone - A pytz.timezone representing the timezone. If not specified and the USE_TZ setting is enabled, this defaults to the current timezone. If USE_TZ is disabled, then datetime objects will be naive. DateTimeField format strings. Format strings may either be Python strftime formats which explicitly specify the format, or the special string 'iso-8601', which indicates that ISO 8601 style datetimes should be used. (eg '2013-01-29T12:34:56.000000Z') When a value of None is used for the format datetime objects will be returned by to_representation and the final output representation will determined by the renderer class. auto_now and auto_now_add model fields. When using ModelSerializer or HyperlinkedModelSerializer, note that any model fields with auto_now=True or auto_now_add=True will use serializer fields that are read_only=True by default. 
If you want to override this behavior, you'll need to declare the DateTimeField explicitly on the serializer. For example: class CommentSerializer(serializers.ModelSerializer): created = serializers.DateTimeField() class Meta: model = Comment DateField A date representation. Corresponds to django.db.models.fields.DateField Signature: DateField(format=api_settings.DATE_FORMAT, input_formats=None) format - A string representing the output format. If not specified, this defaults to the same value as the DATE_FORMAT settings key, which will be 'iso-8601' unless set. Setting to a format string indicates that to_representation return values should be coerced to string output. Format strings are described below. Setting this value to None indicates that Python date objects should be returned by to_representation. In this case the date encoding will be determined by the renderer. input_formats - A list of strings representing the input formats which may be used to parse the date. If not specified, the DATE_INPUT_FORMATS setting will be used, which defaults to ['iso-8601']. DateField format strings Format strings may either be Python strftime formats which explicitly specify the format, or the special string 'iso-8601', which indicates that ISO 8601 style dates should be used. (eg '2013-01-29') TimeField A time representation. Corresponds to django.db.models.fields.TimeField Signature: TimeField(format=api_settings.TIME_FORMAT, input_formats=None) format - A string representing the output format. If not specified, this defaults to the same value as the TIME_FORMAT settings key, which will be 'iso-8601' unless set. Setting to a format string indicates that to_representation return values should be coerced to string output. Format strings are described below. Setting this value to None indicates that Python time objects should be returned by to_representation. In this case the time encoding will be determined by the renderer. input_formats - A list of strings representing the input formats which may be used to parse the date. If not specified, the TIME_INPUT_FORMATS setting will be used, which defaults to ['iso-8601']. TimeField format strings Format strings may either be Python strftime formats which explicitly specify the format, or the special string 'iso-8601', which indicates that ISO 8601 style times should be used. (eg '12:34:56.000000') DurationField A Duration representation. Corresponds to django.db.models.fields.DurationField The validated_data for these fields will contain a datetime.timedelta instance. The representation is a string following this format '[DD] [HH:[MM:]]ss[.uuuuuu]'. Signature: DurationField(max_value=None, min_value=None) max_value Validate that the duration provided is no greater than this value. min_value Validate that the duration provided is no less than this value. Choice selection fields ChoiceField A field that can accept a value out of a limited set of choices. Used by ModelSerializer to automatically generate fields if the corresponding model field includes a choices=… argument. Signature: ChoiceField(choices) choices - A list of valid values, or a list of (key, display_name) tuples. allow_blank - If set to True then the empty string should be considered a valid value. If set to False then the empty string is considered invalid and will raise a validation error. Defaults to False. html_cutoff - If set this will be the maximum number of choices that will be displayed by a HTML select drop down. 
Can be used to ensure that automatically generated ChoiceFields with very large possible selections do not prevent a template from rendering. Defaults to None. html_cutoff_text - If set this will display a textual indicator if the maximum number of items have been cutoff in an HTML select drop down. Defaults to "More than {count} items…" Both the allow_blank and allow_null are valid options on ChoiceField, although it is highly recommended that you only use one and not both. allow_blank should be preferred for textual choices, and allow_null should be preferred for numeric or other non-textual choices. MultipleChoiceField A field that can accept a set of zero, one or many values, chosen from a limited set of choices. Takes a single mandatory argument. to_internal_value returns a set containing the selected values. Signature: MultipleChoiceField(choices) choices - A list of valid values, or a list of (key, display_name) tuples. allow_blank - If set to True then the empty string should be considered a valid value. If set to False then the empty string is considered invalid and will raise a validation error. Defaults to False. html_cutoff - If set this will be the maximum number of choices that will be displayed by a HTML select drop down. Can be used to ensure that automatically generated ChoiceFields with very large possible selections do not prevent a template from rendering. Defaults to None. html_cutoff_text - If set this will display a textual indicator if the maximum number of items have been cutoff in an HTML select drop down. Defaults to "More than {count} items…" As with ChoiceField, both the allow_blank and allow_null options are valid, although it is highly recommended that you only use one and not both. allow_blank should be preferred for textual choices, and allow_null should be preferred for numeric or other non-textual choices. File upload fields Parsers and file uploads. The FileField and ImageField classes are only suitable for use with MultiPartParser or FileUploadParser. Most parsers, such as e.g. JSON don't support file uploads. Django's regular FILE_UPLOAD_HANDLERS are used for handling uploaded files. FileField A file representation. Performs Django's standard FileField validation. Corresponds to django.forms.fields.FileField. Signature: FileField(max_length=None, allow_empty_file=False, use_url=UPLOADED_FILES_USE_URL) max_length - Designates the maximum length for the file name. allow_empty_file - Designates if empty files are allowed. use_url - If set to True then URL string values will be used for the output representation. If set to False then filename string values will be used for the output representation. Defaults to the value of the UPLOADED_FILES_USE_URL settings key, which is True unless set otherwise. ImageField An image representation. Validates the uploaded file content as matching a known image format. Corresponds to django.forms.fields.ImageField. Signature: ImageField(max_length=None, allow_empty_file=False, use_url=UPLOADED_FILES_USE_URL) max_length - Designates the maximum length for the file name. allow_empty_file - Designates if empty files are allowed. use_url - If set to True then URL string values will be used for the output representation. If set to False then filename string values will be used for the output representation. Defaults to the value of the UPLOADED_FILES_USE_URL settings key, which is True unless set otherwise. Requires either the Pillow package or PIL package. 
The Pillow package is recommended, as PIL is no longer actively maintained. Composite fields ListField A field class that validates a list of objects. Signature: ListField(child=<A_FIELD_INSTANCE>, allow_empty=True, min_length=None, max_length=None) child - A field instance that should be used for validating the objects in the list. If this argument is not provided then objects in the list will not be validated. allow_empty - Designates if empty lists are allowed. min_length - Validates that the list contains no fewer than this number of elements. max_length - Validates that the list contains no more than this number of elements. For example, to validate a list of integers you might use something like the following: scores = serializers.ListField( child=serializers.IntegerField(min_value=0, max_value=100) ) The ListField class also supports a declarative style that allows you to write reusable list field classes. class StringListField(serializers.ListField): child = serializers.CharField() We can now reuse our custom StringListField class throughout our application, without having to provide a child argument to it. DictField A field class that validates a dictionary of objects. The keys in DictField are always assumed to be string values. Signature: DictField(child=<A_FIELD_INSTANCE>, allow_empty=True) child - A field instance that should be used for validating the values in the dictionary. If this argument is not provided then values in the mapping will not be validated. allow_empty - Designates if empty dictionaries are allowed. For example, to create a field that validates a mapping of strings to strings, you would write something like this: document = DictField(child=CharField()) You can also use the declarative style, as with ListField. For example: class DocumentField(DictField): child = CharField() HStoreField A preconfigured DictField that is compatible with Django's postgres HStoreField. Signature: HStoreField(child=<A_FIELD_INSTANCE>, allow_empty=True) child - A field instance that is used for validating the values in the dictionary. The default child field accepts both empty strings and null values. allow_empty - Designates if empty dictionaries are allowed. Note that the child field must be an instance of CharField, as the hstore extension stores values as strings. JSONField A field class that validates that the incoming data structure consists of valid JSON primitives. In its alternate binary mode, it will represent and validate JSON-encoded binary strings. Signature: JSONField(binary, encoder) binary - If set to True then the field will output and validate a JSON encoded string, rather than a primitive data structure. Defaults to False. encoder - Use this JSON encoder to serialize input object. Defaults to None. Miscellaneous fields ReadOnlyField A field class that simply returns the value of the field without modification. This field is used by default with ModelSerializer when including field names that relate to an attribute rather than a model field. Signature: ReadOnlyField() For example, if has_expired was a property on the Account model, then the following serializer would automatically generate it as a ReadOnlyField: class AccountSerializer(serializers.ModelSerializer): class Meta: model = Account fields = ['id', 'account_name', 'has_expired'] HiddenField A field class that does not take a value based on user input, but instead takes its value from a default value or callable. 
Signature: HiddenField() For example, to include a field that always provides the current time as part of the serializer validated data, you would use the following: modified = serializers.HiddenField(default=timezone.now) The HiddenField class is usually only needed if you have some validation that needs to run based on some pre-provided field values, but you do not want to expose all of those fields to the end user. For further examples on HiddenField see the validators documentation. ModelField A generic field that can be tied to any arbitrary model field. The ModelField class delegates the task of serialization/deserialization to its associated model field. This field can be used to create serializer fields for custom model fields, without having to create a new custom serializer field. This field is used by ModelSerializer to correspond to custom model field classes. Signature: ModelField(model_field=<Django ModelField instance>) The ModelField class is generally intended for internal use, but can be used by your API if needed. In order to properly instantiate a ModelField, it must be passed a field that is attached to an instantiated model. For example: ModelField(model_field=MyModel()._meta.get_field('custom_field')) SerializerMethodField This is a read-only field. It gets its value by calling a method on the serializer class it is attached to. It can be used to add any sort of data to the serialized representation of your object. Signature: SerializerMethodField(method_name=None) method_name - The name of the method on the serializer to be called. If not included this defaults to get_<field_name>. The serializer method referred to by the method_name argument should accept a single argument (in addition to self), which is the object being serialized. It should return whatever you want to be included in the serialized representation of the object. For example: from django.contrib.auth.models import User from django.utils.timezone import now from rest_framework import serializers class UserSerializer(serializers.ModelSerializer): days_since_joined = serializers.SerializerMethodField() class Meta: model = User fields = '__all__' def get_days_since_joined(self, obj): return (now() - obj.date_joined).days Custom fields If you want to create a custom field, you'll need to subclass Field and then override either one or both of the .to_representation() and .to_internal_value() methods. These two methods are used to convert between the initial datatype, and a primitive, serializable datatype. Primitive datatypes will typically be any of a number, string, boolean, date/time/datetime or None. They may also be any list or dictionary like object that only contains other primitive objects. Other types might be supported, depending on the renderer that you are using. The .to_representation() method is called to convert the initial datatype into a primitive, serializable datatype. The .to_internal_value() method is called to restore a primitive datatype into its internal python representation. This method should raise a serializers.ValidationError if the data is invalid. Examples A Basic Custom Field Let's look at an example of serializing a class that represents an RGB color value: class Color: """ A color represented in the RGB colorspace. 
""" def __init__(self, red, green, blue): assert(red >= 0 and green >= 0 and blue >= 0) assert(red < 256 and green < 256 and blue < 256) self.red, self.green, self.blue = red, green, blue class ColorField(serializers.Field): """ Color objects are serialized into 'rgb(#, #, #)' notation. """ def to_representation(self, value): return "rgb(%d, %d, %d)" % (value.red, value.green, value.blue) def to_internal_value(self, data): data = data.strip('rgb(').rstrip(')') red, green, blue = [int(col) for col in data.split(',')] return Color(red, green, blue) By default field values are treated as mapping to an attribute on the object. If you need to customize how the field value is accessed and set you need to override .get_attribute() and/or .get_value(). As an example, let's create a field that can be used to represent the class name of the object being serialized: class ClassNameField(serializers.Field): def get_attribute(self, instance): # We pass the object instance onto `to_representation`, # not just the field attribute. return instance def to_representation(self, value): """ Serialize the value's class name. """ return value.__class__.__name__ Raising validation errors Our ColorField class above currently does not perform any data validation. To indicate invalid data, we should raise a serializers.ValidationError, like so: def to_internal_value(self, data): if not isinstance(data, str): msg = 'Incorrect type. Expected a string, but got %s' raise ValidationError(msg % type(data).__name__) if not re.match(r'^rgb\([0-9]+,[0-9]+,[0-9]+\)$', data): raise ValidationError('Incorrect format. Expected `rgb(#,#,#)`.') data = data.strip('rgb(').rstrip(')') red, green, blue = [int(col) for col in data.split(',')] if any([col > 255 or col < 0 for col in (red, green, blue)]): raise ValidationError('Value out of range. Must be between 0 and 255.') return Color(red, green, blue) The .fail() method is a shortcut for raising ValidationError that takes a message string from the error_messages dictionary. For example: default_error_messages = { 'incorrect_type': 'Incorrect type. Expected a string, but got {input_type}', 'incorrect_format': 'Incorrect format. Expected `rgb(#,#,#)`.', 'out_of_range': 'Value out of range. Must be between 0 and 255.' } def to_internal_value(self, data): if not isinstance(data, str): self.fail('incorrect_type', input_type=type(data).__name__) if not re.match(r'^rgb\([0-9]+,[0-9]+,[0-9]+\)$', data): self.fail('incorrect_format') data = data.strip('rgb(').rstrip(')') red, green, blue = [int(col) for col in data.split(',')] if any([col > 255 or col < 0 for col in (red, green, blue)]): self.fail('out_of_range') return Color(red, green, blue) This style keeps your error messages cleaner and more separated from your code, and should be preferred. Using source='*' Here we'll take an example of a flat DataPoint model with x_coordinate and y_coordinate attributes. 
class DataPoint(models.Model): label = models.CharField(max_length=50) x_coordinate = models.SmallIntegerField() y_coordinate = models.SmallIntegerField() Using a custom field and source='*' we can provide a nested representation of the coordinate pair: class CoordinateField(serializers.Field): def to_representation(self, value): ret = { "x": value.x_coordinate, "y": value.y_coordinate } return ret def to_internal_value(self, data): ret = { "x_coordinate": data["x"], "y_coordinate": data["y"], } return ret class DataPointSerializer(serializers.ModelSerializer): coordinates = CoordinateField(source='*') class Meta: model = DataPoint fields = ['label', 'coordinates'] Note that this example doesn't handle validation. Partly for that reason, in a real project, the coordinate nesting might be better handled with a nested serializer using source='*', with two IntegerField instances, each with their own source pointing to the relevant field. The key points from the example, though, are: to_representation is passed the entire DataPoint object and must map from that to the desired output. >>> instance = DataPoint(label='Example', x_coordinate=1, y_coordinate=2) >>> out_serializer = DataPointSerializer(instance) >>> out_serializer.data ReturnDict([('label', 'Example'), ('coordinates', {'x': 1, 'y': 2})]) Unless our field is to be read-only, to_internal_value must map back to a dict suitable for updating our target object. With source='*', the return from to_internal_value will update the root validated data dictionary, rather than a single key. >>> data = { ... "label": "Second Example", ... "coordinates": { ... "x": 3, ... "y": 4, ... } ... } >>> in_serializer = DataPointSerializer(data=data) >>> in_serializer.is_valid() True >>> in_serializer.validated_data OrderedDict([('label', 'Second Example'), ('y_coordinate', 4), ('x_coordinate', 3)]) For completeness lets do the same thing again but with the nested serializer approach suggested above: class NestedCoordinateSerializer(serializers.Serializer): x = serializers.IntegerField(source='x_coordinate') y = serializers.IntegerField(source='y_coordinate') class DataPointSerializer(serializers.ModelSerializer): coordinates = NestedCoordinateSerializer(source='*') class Meta: model = DataPoint fields = ['label', 'coordinates'] Here the mapping between the target and source attribute pairs (x and x_coordinate, y and y_coordinate) is handled in the IntegerField declarations. It's our NestedCoordinateSerializer that takes source='*'. Our new DataPointSerializer exhibits the same behaviour as the custom field approach. Serializing: >>> out_serializer = DataPointSerializer(instance) >>> out_serializer.data ReturnDict([('label', 'testing'), ('coordinates', OrderedDict([('x', 1), ('y', 2)]))]) Deserializing: >>> in_serializer = DataPointSerializer(data=data) >>> in_serializer.is_valid() True >>> in_serializer.validated_data OrderedDict([('label', 'still testing'), ('x_coordinate', 3), ('y_coordinate', 4)]) But we also get the built-in validation for free: >>> invalid_data = { ... "label": "still testing", ... "coordinates": { ... "x": 'a', ... "y": 'b', ... } ... } >>> invalid_serializer = DataPointSerializer(data=invalid_data) >>> invalid_serializer.is_valid() False >>> invalid_serializer.errors ReturnDict([('coordinates', {'x': ['A valid integer is required.'], 'y': ['A valid integer is required.']})]) For this reason, the nested serializer approach would be the first to try. 
You would use the custom field approach when the nested serializer becomes infeasible or overly complex. Third party packages The following third party packages are also available. DRF Compound Fields The drf-compound-fields package provides "compound" serializer fields, such as lists of simple values, which can be described by other fields rather than serializers with the many=True option. Also provided are fields for typed dictionaries and values that can be either a specific type or a list of items of that type. DRF Extra Fields The drf-extra-fields package provides extra serializer fields for REST framework, including Base64ImageField and PointField classes. djangorestframework-recursive The djangorestframework-recursive package provides a RecursiveField for serializing and deserializing recursive structures. django-rest-framework-gis The django-rest-framework-gis package provides geographic addons for django rest framework like a GeometryField field and a GeoJSON serializer. django-rest-framework-hstore The django-rest-framework-hstore package provides an HStoreField to support django-hstore DictionaryField model field.
doc_29469
Returns the valid methods that match for a given path. Changelog New in version 0.7. Parameters path_info (Optional[str]) – Return type Iterable[str]
doc_29470
Return a context manager for temporarily changing rcParams. Parameters rcdict The rcParams to temporarily set. fnamestr or path-like A file with Matplotlib rc settings. If both fname and rc are given, settings from rc take precedence. See also The matplotlibrc file Examples Passing explicit values via a dict: with mpl.rc_context({'interactive': False}): fig, ax = plt.subplots() ax.plot(range(3), range(3)) fig.savefig('example.png') plt.close(fig) Loading settings from a file: with mpl.rc_context(fname='print.rc'): plt.plot(x, y) # uses 'print.rc' Examples using matplotlib.pyplot.rc_context Style sheets reference Matplotlib logo
doc_29471
Describes 52-53 week fiscal year. This is also known as a 4-4-5 calendar. It is used by companies that desire that their fiscal year always end on the same day of the week. It is a method of managing accounting periods. It is a common calendar structure for some industries, such as retail, manufacturing and parking industry. For more information see: https://en.wikipedia.org/wiki/4-4-5_calendar The year may either: end on the last X day of the Y month. end on the last X day closest to the last day of the Y month. X is a specific day of the week. Y is a certain month of the year Parameters n:int weekday:int {0, 1, …, 6}, default 0 A specific integer for the day of the week. 0 is Monday 1 is Tuesday 2 is Wednesday 3 is Thursday 4 is Friday 5 is Saturday 6 is Sunday. startingMonth:int {1, 2, … 12}, default 1 The month in which the fiscal year ends. variation:str, default “nearest” Method of employing 4-4-5 calendar. There are two options: “nearest” means year end is weekday closest to last day of month in year. “last” means year end is final weekday of the final month in fiscal year. Attributes base Returns a copy of the calling offset object with n=1 and all other attributes equal. freqstr kwds n name nanos normalize rule_code startingMonth variation weekday Methods __call__(*args, **kwargs) Call self as a function. rollback Roll provided date backward to next offset only if not on offset. rollforward Roll provided date forward to next offset only if not on offset. apply apply_index copy get_rule_code_suffix get_year_end isAnchored is_anchored is_month_end is_month_start is_on_offset is_quarter_end is_quarter_start is_year_end is_year_start onOffset
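A small sketch of constructing the offset described above; the resulting anchor date is not asserted here: import pandas as pd
from pandas.tseries.offsets import FY5253

# Fiscal years ending on the Saturday (weekday=5) nearest to the last day of January.
offset = FY5253(startingMonth=1, weekday=5, variation="nearest")
pd.Timestamp("2022-06-01") + offset   # rolls forward to the next fiscal year end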
doc_29472
See Migration guide for more details. tf.compat.v1.estimator.NanTensorHook, tf.compat.v1.train.NanTensorHook tf.estimator.NanTensorHook( loss_tensor, fail_on_nan_loss=True ) Can either fail with exception or just stop training. Args loss_tensor Tensor, the loss tensor. fail_on_nan_loss bool, whether to raise exception when loss is NaN. Methods after_create_session View source after_create_session( session, coord ) Called when new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences with the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to the graph. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session. Args session A TensorFlow Session that has been created. coord A Coordinator object which keeps track of all threads. after_run View source after_run( run_context, run_values ) Called after each call to run(). The run_values argument contains results of requested ops/tensors by before_run(). The run_context argument is the same one send to before_run call. run_context.request_stop() can be called to stop the iteration. If session.run() raises any exceptions then after_run() is not called. Args run_context A SessionRunContext object. run_values A SessionRunValues object. before_run View source before_run( run_context ) Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors, the TensorFlow Session. At this point graph is finalized and you can not add ops. Args run_context A SessionRunContext object. Returns None or a SessionRunArgs object. begin View source begin() Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks can not modify the graph anymore. Second call of begin() on the same graph, should not change the graph. end View source end( session ) Called at the end of session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called. Args session A TensorFlow Session that will be soon closed.
doc_29473
Return the complex conjugate, element-wise. The complex conjugate of a complex number is obtained by changing the sign of its imaginary part. Parameters xarray_like Input value. outndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. wherearray_like, optional This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs For other keyword-only arguments, see the ufunc docs. Returns yndarray The complex conjugate of x, with same dtype as y. This is a scalar if x is a scalar. Notes conj is an alias for conjugate: >>> np.conj is np.conjugate True Examples >>> np.conjugate(1+2j) (1-2j) >>> x = np.eye(2) + 1j * np.eye(2) >>> np.conjugate(x) array([[ 1.-1.j, 0.-0.j], [ 0.-0.j, 1.-1.j]])
doc_29474
Create a spreadsheet-style pivot table as a DataFrame. The levels in the pivot table will be stored in MultiIndex objects (hierarchical indexes) on the index and columns of the result DataFrame. Parameters data:DataFrame values:column to aggregate, optional index:column, Grouper, array, or list of the previous If an array is passed, it must be the same length as the data. The list can contain any of the other types (except list). Keys to group by on the pivot table index. If an array is passed, it is being used as the same manner as column values. columns:column, Grouper, array, or list of the previous If an array is passed, it must be the same length as the data. The list can contain any of the other types (except list). Keys to group by on the pivot table column. If an array is passed, it is being used as the same manner as column values. aggfunc:function, list of functions, dict, default numpy.mean If list of functions passed, the resulting pivot table will have hierarchical columns whose top level are the function names (inferred from the function objects themselves) If dict is passed, the key is column to aggregate and value is function or list of functions. fill_value:scalar, default None Value to replace missing values with (in the resulting pivot table, after aggregation). margins:bool, default False Add all row / columns (e.g. for subtotal / grand totals). dropna:bool, default True Do not include columns whose entries are all NaN. margins_name:str, default ‘All’ Name of the row / column that will contain the totals when margins is True. observed:bool, default False This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers. Changed in version 0.25.0. sort:bool, default True Specifies if the result should be sorted. New in version 1.3.0. Returns DataFrame An Excel style pivot table. See also DataFrame.pivot Pivot without aggregation that can handle non-numeric data. DataFrame.melt Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. wide_to_long Wide panel to long format. Less flexible but more user-friendly than melt. Examples >>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo", ... "bar", "bar", "bar", "bar"], ... "B": ["one", "one", "one", "two", "two", ... "one", "one", "two", "two"], ... "C": ["small", "large", "large", "small", ... "small", "large", "small", "small", ... "large"], ... "D": [1, 2, 2, 3, 3, 4, 5, 6, 7], ... "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]}) >>> df A B C D E 0 foo one small 1 2 1 foo one large 2 4 2 foo one large 2 5 3 foo two small 3 5 4 foo two small 3 6 5 bar one large 4 6 6 bar one small 5 8 7 bar two small 6 9 8 bar two large 7 9 This first example aggregates values by taking the sum. >>> table = pd.pivot_table(df, values='D', index=['A', 'B'], ... columns=['C'], aggfunc=np.sum) >>> table C large small A B bar one 4.0 5.0 two 7.0 6.0 foo one 4.0 1.0 two NaN 6.0 We can also fill missing values using the fill_value parameter. >>> table = pd.pivot_table(df, values='D', index=['A', 'B'], ... columns=['C'], aggfunc=np.sum, fill_value=0) >>> table C large small A B bar one 4 5 two 7 6 foo one 4 1 two 0 6 The next example aggregates by taking the mean across multiple columns. >>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'], ... aggfunc={'D': np.mean, ... 
'E': np.mean}) >>> table D E A C bar large 5.500000 7.500000 small 5.500000 8.500000 foo large 2.000000 4.500000 small 2.333333 4.333333 We can also calculate multiple types of aggregations for any given value column. >>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'], ... aggfunc={'D': np.mean, ... 'E': [min, max, np.mean]}) >>> table D E mean max mean min A C bar large 5.500000 9 7.500000 6 small 5.500000 9 8.500000 8 foo large 2.000000 5 4.500000 4 small 2.333333 6 4.333333 2
doc_29475
Set a new protocol. Switching protocol should only be done when both protocols are documented to support the switch.
doc_29476
Fit the SVM model according to the given training data. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel=”precomputed”, the expected shape of X is (n_samples, n_samples). yarray-like of shape (n_samples,) Target values (class labels in classification, real numbers in regression). sample_weightarray-like of shape (n_samples,), default=None Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points. Returns selfobject Notes If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr_matrix, X and/or y may be copied. If X is a dense array, then the other methods will not support sparse matrices as input.
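A minimal sketch of fitting on a toy dataset, assuming the SVC classifier; the data are illustrative: from sklearn.svm import SVC

X = [[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]   # training vectors (n_samples, n_features)
y = [0, 1, 1, 0]                                        # class labels
clf = SVC(kernel="rbf").fit(X, y)
clf.predict([[0.9, 0.8]])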
doc_29477
Decorator to indicate that annotations are not type hints. This works as class or function decorator. With a class, it applies recursively to all methods defined in that class (but not to methods defined in its superclasses or subclasses). This mutates the function(s) in place.
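For example, applied to a function whose annotations are documentation rather than types (names are illustrative): from typing import no_type_check

@no_type_check
def parse(value: "raw wire format") -> "decoded record":
    # The annotations above are ignored by type checkers because of the decorator.
    return value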
doc_29478
tf.optimizers.schedules.serialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.schedules.serialize tf.keras.optimizers.schedules.serialize( learning_rate_schedule )
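A minimal sketch of round-tripping a schedule through serialize/deserialize; ExponentialDecay is used purely for illustration: import tensorflow as tf

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.96)
config = tf.keras.optimizers.schedules.serialize(schedule)    # plain, JSON-serializable dict
restored = tf.keras.optimizers.schedules.deserialize(config)  # equivalent schedule object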
doc_29479
Calls str.decode element-wise. The set of available codecs comes from the Python standard library, and may be extended at runtime. For more information, see the codecs module. Parameters aarray_like of str or unicode encodingstr, optional The name of an encoding errorsstr, optional Specifies how to handle encoding errors Returns outndarray See also str.decode Notes The type of the result will depend on the encoding specified. Examples >>> c = np.array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@', b'\x81\x82\xc2\xc1\xc2\x82\x81']) >>> c array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@', b'\x81\x82\xc2\xc1\xc2\x82\x81'], dtype='|S7') >>> np.char.decode(c, encoding='cp037') array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7')
doc_29480
Return the data associated with pathname. Raise OSError if the file wasn’t found. Changed in version 3.3: IOError used to be raised instead of OSError.
doc_29481
Adds a function to the internal list of functions that should be called as part of closing down the response. Since 0.7 this function also returns the function that was passed so that this can be used as a decorator. Changelog New in version 0.6. Parameters func (Callable[[], Any]) – Return type Callable[[], Any]
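A minimal sketch using the decorator form on a werkzeug Response (assuming the method is exposed there, as in werkzeug.wrappers.Response): from werkzeug.wrappers import Response

resp = Response("payload")

@resp.call_on_close
def _cleanup():
    # Runs when the response is closed, e.g. to release a file handle.
    pass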
doc_29482
Return true if the lock is acquired.
doc_29483
Retrieve a file or directory listing in the encoding specified by the encoding parameter at initialization. cmd should be an appropriate RETR command (see retrbinary()) or a command such as LIST or NLST (usually just the string 'LIST'). LIST retrieves a list of files and information about those files. NLST retrieves a list of file names. The callback function is called for each line with a string argument containing the line with the trailing CRLF stripped. The default callback prints the line to sys.stdout.
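A short sketch; the host and anonymous login are placeholders, not part of the description above: from ftplib import FTP

ftp = FTP("ftp.example.com")                     # placeholder host
ftp.login()                                      # anonymous login
entries = []
ftp.retrlines("LIST", callback=entries.append)   # collect listing lines instead of printing
ftp.quit()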
doc_29484
Returns a copy of the calling offset object with n=1 and all other attributes equal.
doc_29485
Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis. Cyclical learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training. This class has three built-in policies, as put forth in the paper: “triangular”: A basic triangular cycle without amplitude scaling. “triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle. “exp_range”: A cycle that scales initial amplitude by gamma**(cycle iterations) at each cycle iteration. This implementation was adapted from the github repo: bckenstler/CLR Parameters optimizer (Optimizer) – Wrapped optimizer. base_lr (float or list) – Initial learning rate which is the lower boundary in the cycle for each parameter group. max_lr (float or list) – Upper learning rate boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_lr - base_lr). The lr at any cycle is the sum of base_lr and some scaling of the amplitude; therefore max_lr may not actually be reached depending on scaling function. step_size_up (int) – Number of training iterations in the increasing half of a cycle. Default: 2000 step_size_down (int) – Number of training iterations in the decreasing half of a cycle. If step_size_down is None, it is set to step_size_up. Default: None mode (str) – One of {triangular, triangular2, exp_range}. Values correspond to policies detailed above. If scale_fn is not None, this argument is ignored. Default: ‘triangular’ gamma (float) – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations) Default: 1.0 scale_fn (function) – Custom scaling policy defined by a single argument lambda function, where 0 <= scale_fn(x) <= 1 for all x >= 0. If specified, then ‘mode’ is ignored. Default: None scale_mode (str) – {‘cycle’, ‘iterations’}. Defines whether scale_fn is evaluated on cycle number or cycle iterations (training iterations since start of cycle). Default: ‘cycle’ cycle_momentum (bool) – If True, momentum is cycled inversely to learning rate between ‘base_momentum’ and ‘max_momentum’. Default: True base_momentum (float or list) – Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is ‘base_momentum’ and learning rate is ‘max_lr’. Default: 0.8 max_momentum (float or list) – Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_momentum - base_momentum). The momentum at any cycle is the difference of max_momentum and some scaling of the amplitude; therefore base_momentum may not actually be reached depending on scaling function. Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is ‘max_momentum’ and learning rate is ‘base_lr’ Default: 0.9 last_epoch (int) – The index of the last batch. This parameter is used when resuming a training job. Since step() should be invoked after each batch instead of after each epoch, this number represents the total number of batches computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning.
Default: -1 verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) >>> scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1) >>> data_loader = torch.utils.data.DataLoader(...) >>> for epoch in range(10): >>> for batch in data_loader: >>> train_batch(...) >>> scheduler.step() get_lr() [source] Calculates the learning rate at batch index. This function treats self.last_epoch as the last batch index. If self.cycle_momentum is True, this function has a side effect of updating the optimizer’s momentum.
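A minimal sketch (not part of the original documentation) of the scale_fn / scale_mode path; it assumes an SGD optimizer with momentum as in the example above, and the decay constant and step size below are illustrative values only:
>>> # custom per-cycle amplitude decay; when scale_fn is given, 'mode' is ignored
>>> scale = lambda cycle: 0.99 ** cycle
>>> scheduler = torch.optim.lr_scheduler.CyclicLR(
>>>     optimizer, base_lr=0.001, max_lr=0.1,
>>>     step_size_up=500, scale_fn=scale, scale_mode='cycle')
>>> for batch in data_loader:
>>>     train_batch(...)
>>>     scheduler.step()  # called after every batch, as required by the policy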
doc_29486
Alias for self._offset.
doc_29487
Align the ylabels of subplots in the same subplot column if label alignment is being done automatically (i.e. the label position is not manually set). Alignment persists for draw events after this is called. If a label is on the left, it is aligned with labels on Axes that also have their label on the left and that have the same left-most subplot column. If the label is on the right, it is aligned with labels on Axes with the same right-most column. Parameters axslist of Axes Optional list (or ndarray) of Axes to align the ylabels. Default is to align all Axes on the figure. See also matplotlib.figure.Figure.align_xlabels matplotlib.figure.Figure.align_labels Notes This assumes that axs are from the same GridSpec, so that their SubplotSpec positions correspond to figure positions. Examples Example with large yticks labels: fig, axs = plt.subplots(2, 1) axs[0].plot(np.arange(0, 1000, 50)) axs[0].set_ylabel('YLabel 0') axs[1].set_ylabel('YLabel 1') fig.align_ylabels()
doc_29488
Set parameters within this locator.
doc_29489
tf.compat.v1.layers.batch_normalization( inputs, axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, training=False, trainable=True, name=None, reuse=None, renorm=False, renorm_clipping=None, renorm_momentum=0.99, fused=None, virtual_batch_size=None, adjustment=None ) Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be executed alongside the train_op. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example: x_norm = tf.compat.v1.layers.batch_normalization(x, training=training) # ... update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS) train_op = optimizer.minimize(loss) train_op = tf.group([train_op, update_ops]) Arguments inputs Tensor input. axis An int, the axis that should be normalized (typically the features axis). For instance, after a Convolution2D layer with data_format="channels_first", set axis=1 in BatchNormalization. momentum Momentum for the moving average. epsilon Small float added to variance to avoid dividing by zero. center If True, add offset of beta to normalized tensor. If False, beta is ignored. scale If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling can be done by the next layer. beta_initializer Initializer for the beta weight. gamma_initializer Initializer for the gamma weight. moving_mean_initializer Initializer for the moving mean. moving_variance_initializer Initializer for the moving variance. beta_regularizer Optional regularizer for the beta weight. gamma_regularizer Optional regularizer for the gamma weight. beta_constraint An optional projection function to be applied to the beta weight after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. gamma_constraint An optional projection function to be applied to the gamma weight after being updated by an Optimizer. training Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). NOTE: make sure to set this parameter correctly, or else your training/inference will not work properly. trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). name String, the name of the layer. reuse Boolean, whether to reuse the weights of a previous layer by the same name. renorm Whether to use Batch Renormalization (Ioffe, 2017). This adds extra variables during training. The inference is the same for either value of this parameter. renorm_clipping A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. 
The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively. renorm_momentum Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference. fused if None or True, use a faster, fused implementation if possible. If False, use the system recommended implementation. virtual_batch_size An int. By default, virtual_batch_size is None, which means batch normalization is performed across the whole batch. When virtual_batch_size is not None, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution. adjustment A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1)) will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If None, no adjustment is applied. Cannot be specified if virtual_batch_size is specified. Returns Output tensor. Raises ValueError if eager execution is enabled. References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf) Batch Renormalization - Towards Reducing Minibatch Dependence in Batch-Normalized Models: Ioffe, 2017 (pdf)
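As a hedged sketch (not from the original documentation) of the training switch and the "Ghost Batch Normalization" option described above, assuming TF1-style graph mode; the sub-batch size of 32 is illustrative and must divide the real batch size:
is_training = tf.compat.v1.placeholder_with_default(False, shape=(), name='is_training')
x_norm = tf.compat.v1.layers.batch_normalization(
    x, training=is_training,      # batch statistics while training, moving statistics otherwise
    virtual_batch_size=32)        # normalize 32-sample sub-batches with shared gamma/beta
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):   # run moving-average updates with each train step
    train_op = optimizer.minimize(loss)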
doc_29490
A datetime of the user’s last login.
doc_29491
Alias for set_antialiased.
doc_29492
mmap.MADV_RANDOM mmap.MADV_SEQUENTIAL mmap.MADV_WILLNEED mmap.MADV_DONTNEED mmap.MADV_REMOVE mmap.MADV_DONTFORK mmap.MADV_DOFORK mmap.MADV_HWPOISON mmap.MADV_MERGEABLE mmap.MADV_UNMERGEABLE mmap.MADV_SOFT_OFFLINE mmap.MADV_HUGEPAGE mmap.MADV_NOHUGEPAGE mmap.MADV_DONTDUMP mmap.MADV_DODUMP mmap.MADV_FREE mmap.MADV_NOSYNC mmap.MADV_AUTOSYNC mmap.MADV_NOCORE mmap.MADV_CORE mmap.MADV_PROTECT These options can be passed to mmap.madvise(). Not every option will be present on every system. Availability: Systems with the madvise() system call. New in version 3.8.
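A short illustration (not part of the original documentation) of passing one of these options to mmap.madvise(); the file name is hypothetical, and individual MADV_* flags may be absent on a given platform:
import mmap

with open("data.bin", "rb") as f:                      # hypothetical input file
    mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    mm.madvise(mmap.MADV_SEQUENTIAL)                   # hint: pages will be read in order
    data = mm.read()
    if hasattr(mmap, "MADV_DONTNEED"):                 # availability is platform-dependent
        mm.madvise(mmap.MADV_DONTNEED)                 # hint: pages are no longer needed
    mm.close()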
doc_29493
Returns a list of all known themes.
doc_29494
Returns the underlying storage.
doc_29495
Least squares fit to data. Return a series instance that is the least squares fit to the data y sampled at x. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning. Parameters xarray_like, shape (M,) x-coordinates of the M sample points (x[i], y[i]). yarray_like, shape (M,) y-coordinates of the M sample points (x[i], y[i]). degint or 1-D array_like Degree(s) of the fitting polynomials. If deg is a single integer all terms up to and including the deg’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. domain{None, [beg, end], []}, optional Domain to use for the returned series. If None, then a minimal domain that covers the points x is chosen. If [] the class domain is used. The default value was the class domain in NumPy 1.4 and None in later versions. The [] option was added in numpy 1.5.0. rcondfloat, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. fullbool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. warray_like, shape (M,), optional Weights. If not None, the weight w[i] applies to the unsquared residual y[i] - y_hat[i] at x[i]. Ideally the weights are chosen so that the errors of the products w[i]*y[i] all have the same variance. When using inverse-variance weighting, use w[i] = 1/sigma(y[i]). The default value is None. New in version 1.5.0. window{[beg, end]}, optional Window to use for the returned series. The default value is the default class domain New in version 1.6.0. Returns new_seriesseries A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do new_series.convert().coef. [resid, rank, sv, rcond]list These values are only returned if full == True resid – sum of squared residuals of the least squares fit rank – the numerical rank of the scaled Vandermonde matrix sv – singular values of the scaled Vandermonde matrix rcond – value of rcond. For more details, see linalg.lstsq.
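For illustration only (this classmethod is shared by the polynomial series classes; Chebyshev is picked here as one assumed example), a minimal fit might look like:
>>> import numpy as np
>>> from numpy.polynomial import Chebyshev
>>> x = np.linspace(-1, 1, 50)
>>> y = np.sin(3 * x) + 0.05 * np.random.randn(50)   # noisy samples
>>> c = Chebyshev.fit(x, y, deg=5)                    # domain defaults to one covering x
>>> c(0.5)                                            # evaluate the fitted series
>>> c.convert().coef                                  # coefficients in the unscaled basis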
doc_29496
See Migration guide for more details. tf.compat.v1.raw_ops.ExperimentalDatasetToTFRecord tf.raw_ops.ExperimentalDatasetToTFRecord( input_dataset, filename, compression_type, name=None ) Args input_dataset A Tensor of type variant. A variant tensor representing the dataset to write. filename A Tensor of type string. A scalar string tensor representing the filename to use. compression_type A Tensor of type string. A scalar string tensor containing either (i) the empty string (no compression), (ii) "ZLIB", or (iii) "GZIP". name A name for the operation (optional). Returns The created Operation.
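This raw op is normally reached through a higher-level writer rather than called directly; the sketch below (an assumption, not taken from this page) uses tf.data.experimental.TFRecordWriter on a dataset of scalar strings:
import tensorflow as tf

# Elements must be scalar strings, e.g. already-serialized tf.train.Example protos.
ds = tf.data.Dataset.from_tensor_slices([b"record-1", b"record-2", b"record-3"])
writer = tf.data.experimental.TFRecordWriter("out.tfrecord", compression_type="GZIP")
writer.write(ds)   # writes every element of the dataset to the TFRecord file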
doc_29497
Group Series using a mapper or by a Series of columns. A groupby operation involves some combination of splitting the object, applying a function, and combining the results. This can be used to group large amounts of data and compute operations on these groups. Parameters by:mapping, function, label, or list of labels Used to determine the groups for the groupby. If by is a function, it’s called on each value of the object’s index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series’ values are first aligned; see .align() method). If a list or ndarray of length equal to the selected axis is passed (see the groupby user guide), the values are used as-is to determine the groups. A label or list of labels may be passed to group by the columns in self. Notice that a tuple is interpreted as a (single) key. axis:{0 or ‘index’, 1 or ‘columns’}, default 0 Split along rows (0) or columns (1). level:int, level name, or sequence of such, default None If the axis is a MultiIndex (hierarchical), group by a particular level or levels. as_index:bool, default True For aggregated output, return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively “SQL-style” grouped output. sort:bool, default True Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group. group_keys:bool, default True When calling apply, add group keys to index to identify pieces. squeeze:bool, default False Reduce the dimensionality of the return type if possible, otherwise return a consistent type. Deprecated since version 1.1.0. observed:bool, default False This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers. dropna:bool, default True If True, and if group keys contain NA values, NA values together with row/column will be dropped. If False, NA values will also be treated as the key in groups. New in version 1.1.0. Returns SeriesGroupBy Returns a groupby object that contains information about the groups. See also resample Convenience method for frequency conversion and resampling of time series. Notes See the user guide for more detailed usage and examples, including splitting an object into groups, iterating through groups, selecting a group, aggregation, and more. Examples >>> ser = pd.Series([390., 350., 30., 20.], ... index=['Falcon', 'Falcon', 'Parrot', 'Parrot'], name="Max Speed") >>> ser Falcon 390.0 Falcon 350.0 Parrot 30.0 Parrot 20.0 Name: Max Speed, dtype: float64 >>> ser.groupby(["a", "b", "a", "b"]).mean() a 210.0 b 185.0 Name: Max Speed, dtype: float64 >>> ser.groupby(level=0).mean() Falcon 370.0 Parrot 25.0 Name: Max Speed, dtype: float64 >>> ser.groupby(ser > 100).mean() Max Speed False 25.0 True 370.0 Name: Max Speed, dtype: float64 Grouping by Indexes We can groupby different levels of a hierarchical index using the level parameter: >>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'], ... 
['Captive', 'Wild', 'Captive', 'Wild']] >>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type')) >>> ser = pd.Series([390., 350., 30., 20.], index=index, name="Max Speed") >>> ser Animal Type Falcon Captive 390.0 Wild 350.0 Parrot Captive 30.0 Wild 20.0 Name: Max Speed, dtype: float64 >>> ser.groupby(level=0).mean() Animal Falcon 370.0 Parrot 25.0 Name: Max Speed, dtype: float64 >>> ser.groupby(level="Type").mean() Type Captive 210.0 Wild 185.0 Name: Max Speed, dtype: float64 We can also choose to include NA in group keys or not by defining dropna parameter, the default setting is True. >>> ser = pd.Series([1, 2, 3, 3], index=["a", 'a', 'b', np.nan]) >>> ser.groupby(level=0).sum() a 3 b 3 dtype: int64 >>> ser.groupby(level=0, dropna=False).sum() a 3 b 3 NaN 3 dtype: int64 >>> arrays = ['Falcon', 'Falcon', 'Parrot', 'Parrot'] >>> ser = pd.Series([390., 350., 30., 20.], index=arrays, name="Max Speed") >>> ser.groupby(["a", "b", "a", np.nan]).mean() a 210.0 b 350.0 Name: Max Speed, dtype: float64 >>> ser.groupby(["a", "b", "a", np.nan], dropna=False).mean() a 210.0 b 350.0 NaN 20.0 Name: Max Speed, dtype: float64
doc_29498
A list of formats used to attempt to convert a string to a valid datetime.datetime object, in addition to ISO 8601 formats.
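This reads like the input_formats option of a Django forms DateTimeField; assuming that is the context, a hedged example (the form class and format string are illustrative) would be:
from django import forms

class EventForm(forms.Form):
    # '%d/%m/%Y %H:%M' is tried in addition to ISO 8601 input such as '2023-05-01 14:30'
    starts_at = forms.DateTimeField(input_formats=["%d/%m/%Y %H:%M"])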
doc_29499
Applies a 2D max pooling over an input signal composed of several input planes. See MaxPool2d for details.
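A minimal usage sketch (not part of the original documentation); the tensor shape and pooling parameters are arbitrary:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 3, 32, 32)              # (batch, channels, height, width)
>>> F.max_pool2d(x, kernel_size=2, stride=2).shape
torch.Size([1, 3, 16, 16])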