Dataset column schema: _id (string, 5–9 characters), text (string, 5–385k characters), title (string, 1 class).
doc_1700
Always returns None.
doc_1701
Set the CookiePolicy instance to be used.
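This description appears to match http.cookiejar.CookieJar.set_policy. A minimal hedged sketch of swapping in a stricter policy (the allowed domain is purely illustrative):

import http.cookiejar

jar = http.cookiejar.CookieJar()
# Replace the default policy with one that only accepts cookies from example.com.
policy = http.cookiejar.DefaultCookiePolicy(allowed_domains=["example.com"])
jar.set_policy(policy)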
doc_1702
See Migration guide for more details. tf.compat.v1.raw_ops.AssignSub tf.raw_ops.AssignSub( ref, value, use_locking=False, name=None ) This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. value A Tensor. Must have the same type as ref. The value to be subtracted from the variable. use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
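A minimal sketch of the op in TF1-style graph mode, where ref comes from a reference-type Variable node; exact variable handling can differ between TF versions, so treat this as an illustration only:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# use_resource=False requests a legacy ref-type variable, as the raw op expects.
ref = tf.compat.v1.Variable(10.0, use_resource=False)
updated = tf.raw_ops.AssignSub(ref=ref, value=3.0)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(updated))  # 7.0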
doc_1703
Return the decision path in the forest. New in version 0.18. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns indicatorsparse matrix of shape (n_samples, n_nodes) Return a node indicator matrix where non-zero elements indicate that the samples go through the nodes. The matrix is of CSR format. n_nodes_ptrndarray of shape (n_estimators + 1,) The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] give the indicator value for the i-th estimator.
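A short hedged sketch using a random forest classifier (this method is exposed on forest estimators such as RandomForestClassifier; the data is synthetic):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=50, n_features=4, random_state=0)
forest = RandomForestClassifier(n_estimators=3, random_state=0).fit(X, y)

indicator, n_nodes_ptr = forest.decision_path(X)
# Columns n_nodes_ptr[i]:n_nodes_ptr[i+1] belong to the i-th tree in the forest.
first_tree_nodes = indicator[:, n_nodes_ptr[0]:n_nodes_ptr[1]]
print(indicator.shape, first_tree_nodes.shape)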
doc_1704
Helps style a DataFrame or Series according to the data with HTML and CSS. Parameters data:Series or DataFrame Data to be styled - either a Series or DataFrame. precision:int, optional Precision to round floats to. If not given defaults to pandas.options.styler.format.precision. Changed in version 1.4.0. table_styles:list-like, default None List of {selector: (attr, value)} dicts; see Notes. uuid:str, default None A unique identifier to avoid CSS collisions; generated automatically. caption:str, tuple, default None String caption to attach to the table. Tuple only used for LaTeX dual captions. table_attributes:str, default None Items that show up in the opening <table> tag in addition to automatic (by default) id. cell_ids:bool, default True If True, each cell will have an id attribute in their HTML tag. The id takes the form T_<uuid>_row<num_row>_col<num_col> where <uuid> is the unique identifier, <num_row> is the row number and <num_col> is the column number. na_rep:str, optional Representation for missing values. If na_rep is None, no special formatting is applied, and falls back to pandas.options.styler.format.na_rep. New in version 1.0.0. uuid_len:int, default 5 If uuid is not specified, the length of the uuid to randomly generate expressed in hex characters, in range [0, 32]. New in version 1.2.0. decimal:str, optional Character used as decimal separator for floats, complex and integers. If not given uses pandas.options.styler.format.decimal. New in version 1.3.0. thousands:str, optional, default None Character used as thousands separator for floats, complex and integers. If not given uses pandas.options.styler.format.thousands. New in version 1.3.0. escape:str, optional Use ‘html’ to replace the characters &, <, >, ', and " in cell display string with HTML-safe sequences. Use ‘latex’ to replace the characters &, %, $, #, _, {, }, ~, ^, and \ in the cell display string with LaTeX-safe sequences. If not given uses pandas.options.styler.format.escape. New in version 1.3.0. formatter:str, callable, dict, optional Object to define how values are displayed. See Styler.format. If not given uses pandas.options.styler.format.formatter. New in version 1.4.0. See also DataFrame.style Return a Styler object containing methods for building a styled HTML representation for the DataFrame. Notes Most styling will be done by passing style functions into Styler.apply or Styler.applymap. Style functions should return values with strings containing CSS 'attr: value' that will be applied to the indicated cells. If using in the Jupyter notebook, Styler has defined a _repr_html_ to automatically render itself. Otherwise call Styler.to_html to get the generated HTML. CSS classes are attached to the generated HTML Index and Column names include index_name and level<k> where k is its level in a MultiIndex Index label cells include row_heading row<n> where n is the numeric position of the row level<k> where k is the level in a MultiIndex Column label cells include * col_heading * col<n> where n is the numeric position of the column * level<k> where k is the level in a MultiIndex Blank cells include blank Data cells include data Trimmed cells include col_trim or row_trim. Any, or all, or these classes can be renamed by using the css_class_names argument in Styler.set_table_classes, giving a value such as {“row”: “MY_ROW_CLASS”, “col_trim”: “”, “row_trim”: “”}. 
Attributes env (Jinja2 jinja2.Environment) template_html (Jinja2 Template) template_html_table (Jinja2 Template) template_html_style (Jinja2 Template) template_latex (Jinja2 Template) loader (Jinja2 Loader) Methods apply(func[, axis, subset]) Apply a CSS-styling function column-wise, row-wise, or table-wise. apply_index(func[, axis, level]) Apply a CSS-styling function to the index or column headers, level-wise. applymap(func[, subset]) Apply a CSS-styling function elementwise. applymap_index(func[, axis, level]) Apply a CSS-styling function to the index or column headers, elementwise. background_gradient([cmap, low, high, axis, ...]) Color the background in a gradient style. bar([subset, axis, color, cmap, width, ...]) Draw bar chart in the cell backgrounds. clear() Reset the Styler, removing any previously applied styles. export() Export the styles applied to the current Styler. format([formatter, subset, na_rep, ...]) Format the text display value of cells. format_index([formatter, axis, level, ...]) Format the text display value of index labels or column headers. from_custom_template(searchpath[, ...]) Factory function for creating a subclass of Styler. hide([subset, axis, level, names]) Hide the entire index / column headers, or specific rows / columns from display. hide_columns([subset, level, names]) Hide the column headers or specific keys in the columns from rendering. hide_index([subset, level, names]) (DEPRECATED) Hide the entire index, or specific keys in the index from rendering. highlight_between([subset, color, axis, ...]) Highlight a defined range with a style. highlight_max([subset, color, axis, props]) Highlight the maximum with a style. highlight_min([subset, color, axis, props]) Highlight the minimum with a style. highlight_null([null_color, subset, props]) Highlight missing values with a style. highlight_quantile([subset, color, axis, ...]) Highlight values defined by a quantile with a style. pipe(func, *args, **kwargs) Apply func(self, *args, **kwargs), and return the result. render([sparse_index, sparse_columns]) (DEPRECATED) Render the Styler including all applied styles to HTML. set_caption(caption) Set the text added to a <caption> HTML element. set_na_rep(na_rep) (DEPRECATED) Set the missing data representation on a Styler. set_precision(precision) (DEPRECATED) Set the precision used to display values. set_properties([subset]) Set defined CSS-properties to each <td> HTML element within the given subset. set_sticky([axis, pixel_size, levels]) Add CSS to permanently display the index or column headers in a scrolling frame. set_table_attributes(attributes) Set the table attributes added to the <table> HTML element. set_table_styles([table_styles, axis, ...]) Set the table styles included within the <style> HTML element. set_td_classes(classes) Set the DataFrame of strings added to the class attribute of <td> HTML elements. set_tooltips(ttips[, props, css_class]) Set the DataFrame of strings on Styler generating :hover tooltips. set_uuid(uuid) Set the uuid applied to id attributes of HTML elements. text_gradient([cmap, low, high, axis, ...]) Color the text in a gradient style. to_excel(excel_writer[, sheet_name, na_rep, ...]) Write Styler to an Excel sheet. to_html([buf, table_uuid, table_attributes, ...]) Write Styler to a file, buffer or string in HTML-CSS format. to_latex([buf, column_format, position, ...]) Write Styler to a file, buffer or string in LaTeX format. use(styles) Set the styles on the current Styler. 
where(cond, value[, other, subset]) (DEPRECATED) Apply CSS-styles based on a conditional function elementwise.
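A small hedged sketch of constructing a Styler through DataFrame.style and applying an elementwise CSS rule, as described above:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(4, 3), columns=["a", "b", "c"])

# Elementwise styling: return a CSS 'attr: value' string (or None) per cell.
styler = df.style.applymap(lambda v: "color: red;" if v < 0 else None)
styler = styler.set_caption("Negative values shown in red")
html = styler.to_html()  # outside Jupyter, render explicitly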
doc_1705
os.O_DIRECT os.O_DIRECTORY os.O_NOFOLLOW os.O_NOATIME os.O_PATH os.O_TMPFILE os.O_SHLOCK os.O_EXLOCK The above constants are extensions and not present if they are not defined by the C library. Changed in version 3.4: Add O_PATH on systems that support it. Add O_TMPFILE, only available on Linux Kernel 3.11 or newer.
doc_1706
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
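A brief sketch of the score method on a fitted regressor (LinearRegression is used here only for illustration; the numbers are made up):

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.1, 5.9, 8.2])

reg = LinearRegression().fit(X, y)
print(reg.score(X, y))  # R^2 of reg.predict(X) w.r.t. y, close to 1.0 here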
doc_1707
See Migration guide for more details. tf.compat.v1.nn.depthwise_conv2d_backprop_filter, tf.compat.v1.nn.depthwise_conv2d_native_backprop_filter tf.nn.depthwise_conv2d_backprop_filter( input, filter_sizes, out_backprop, strides, padding, data_format='NHWC', dilations=[1, 1, 1, 1], name=None ) Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 4-D with shape based on data_format. For example, if data_format is 'NHWC' then input is a 4-D [batch, in_height, in_width, in_channels] tensor. filter_sizes A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, depthwise_multiplier] tensor. out_backprop A Tensor. Must have the same type as input. 4-D with shape based on data_format. For example, if data_format is 'NHWC' then out_backprop shape is [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution. strides A list of ints. The stride of the sliding window for each dimension of the input of the convolution. padding Controls how to pad the image before applying the convolution. Can be the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width]. dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
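A minimal hedged sketch computing the filter gradient for a 3-channel NHWC input with a depthwise multiplier of 1 (all shapes are illustrative):

import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])              # NHWC input
out_backprop = tf.random.normal([1, 8, 8, 3])   # gradient w.r.t. the conv output

grad_filter = tf.nn.depthwise_conv2d_backprop_filter(
    input=x,
    filter_sizes=[3, 3, 3, 1],                  # [height, width, in_channels, multiplier]
    out_backprop=out_backprop,
    strides=[1, 1, 1, 1],
    padding="SAME",
)
print(grad_filter.shape)  # (3, 3, 3, 1)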
doc_1708
Return a new NNTP object, representing a connection to the NNTP server running on host host, listening at port port. An optional timeout can be specified for the socket connection. If the optional user and password are provided, or if suitable credentials are present in ~/.netrc and the optional flag usenetrc is true, the AUTHINFO USER and AUTHINFO PASS commands are used to identify and authenticate the user to the server. If the optional flag readermode is true, then a mode reader command is sent before authentication is performed. Reader mode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call reader-specific commands, such as group. If you get unexpected NNTPPermanentErrors, you might need to set readermode. The NNTP class supports the with statement to unconditionally consume OSError exceptions and to close the NNTP connection when done, e.g.: >>> from nntplib import NNTP >>> with NNTP('news.gmane.io') as n: ... n.group('gmane.comp.python.committers') ... ('211 1755 1 1755 gmane.comp.python.committers', 1755, 1, 1755, 'gmane.comp.python.committers') >>> Raises an auditing event nntplib.connect with arguments self, host, port. All commands will raise an auditing event nntplib.putline with arguments self and line, where line is the bytes about to be sent to the remote host. Changed in version 3.2: usenetrc is now False by default. Changed in version 3.3: Support for the with statement was added. Changed in version 3.9: If the timeout parameter is set to be zero, it will raise a ValueError to prevent the creation of a non-blocking socket.
doc_1709
Map value to the interval [0, 1]. The clip argument is unused.
doc_1710
Raise the SystemExit exception. When not caught, this will cause the thread to exit silently.
doc_1711
Inline, using the role: This text uses inline math: :mathmpl:`\alpha > \beta`. which produces the same sentence with the formula rendered as an image (rendered output not shown here). Standalone, using the directive: Here is some standalone math: .. mathmpl:: \alpha > \beta which produces the standalone rendered equation (rendered output not shown here). Options The mathmpl role and directive both support the following options: fontsetstr, default: 'cm' The font set to use when displaying math. See rcParams["mathtext.fontset"] (default: 'dejavusans'). fontsizefloat The font size, in points. Defaults to the value from the extension configuration option defined below. Configuration options The mathmpl extension has the following configuration options: mathmpl_fontsizefloat, default: 10.0 Default font size, in points. mathmpl_srcsetlist of str, default: [] Additional image sizes to generate when embedding in HTML, to support responsive resolution images. The list should contain additional x-descriptors ('1.5x', '2x', etc.) to generate (1x is the default and always included.) classmatplotlib.sphinxext.mathmpl.MathDirective(name, arguments, options, content, lineno, content_offset, block_text, state, state_machine)[source] The .. mathmpl:: directive, as documented in the module's docstring.
doc_1712
Set the normalization instance. Parameters normNormalize or None Notes If there are any colorbars using the mappable for this norm, setting the norm of the mappable will reset the norm, locator, and formatters on the colorbar to default.
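This appears to describe matplotlib.cm.ScalarMappable.set_norm; a short sketch under that assumption (LogNorm and the random data are illustrative):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(10, 10) + 0.01)

# Swap in a logarithmic normalization; any attached colorbar would be reset.
im.set_norm(LogNorm(vmin=0.01, vmax=1.0))
fig.colorbar(im, ax=ax)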
doc_1713
Divide one Laguerre series by another. Returns the quotient-with-remainder of two Laguerre series c1 / c2. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series P_0 + 2*P_1 + 3*P_2. Parameters c1, c2array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns [quo, rem]ndarrays Of Laguerre series coefficients representing the quotient and remainder. See also lagadd, lagsub, lagmulx, lagmul, lagpow Notes In general, the (polynomial) division of one Laguerre series by another results in quotient and remainder terms that are not in the Laguerre polynomial basis set. Thus, to express these results as a Laguerre series, it is necessary to “reproject” the results onto the Laguerre basis set, which may produce “unintuitive” (but correct) results; see Examples section below. Examples >>> from numpy.polynomial.laguerre import lagdiv >>> lagdiv([ 8., -13., 38., -51., 36.], [0, 1, 2]) (array([1., 2., 3.]), array([0.])) >>> lagdiv([ 9., -12., 38., -51., 36.], [0, 1, 2]) (array([1., 2., 3.]), array([1., 1.]))
doc_1714
Bases: matplotlib.offsetbox.AnchoredOffsetbox Draw two perpendicular arrows to indicate directions. Parameters transformmatplotlib.transforms.Transform The transformation object for the coordinate system in use, i.e., matplotlib.axes.Axes.transAxes. label_x, label_ystr Label text for the x and y arrows lengthfloat, default: 0.15 Length of the arrow, given in coordinates of transform. fontsizefloat, default: 0.08 Size of label strings, given in coordinates of transform. locstr, default: 'upper left' Location of this ellipse. Valid locations are 'upper left', 'upper center', 'upper right', 'center left', 'center', 'center right', 'lower left', 'lower center, 'lower right'. For backward compatibility, numeric values are accepted as well. See the parameter loc of Legend for details. anglefloat, default: 0 The angle of the arrows in degrees. aspect_ratiofloat, default: 1 The ratio of the length of arrow_x and arrow_y. Negative numbers can be used to change the direction. padfloat, default: 0.4 Padding around the labels and arrows, in fraction of the font size. borderpadfloat, default: 0.4 Border padding, in fraction of the font size. frameonbool, default: False If True, draw a box around the arrows and labels. colorstr, default: 'white' Color for the arrows and labels. alphafloat, default: 1 Alpha values of the arrows and labels sep_x, sep_yfloat, default: 0.01 and 0 respectively Separation between the arrows and labels in coordinates of transform. fontpropertiesmatplotlib.font_manager.FontProperties, optional Font properties for the label text. back_lengthfloat, default: 0.15 Fraction of the arrow behind the arrow crossing. head_widthfloat, default: 10 Width of arrow head, sent to ArrowStyle. head_lengthfloat, default: 15 Length of arrow head, sent to ArrowStyle. tail_widthfloat, default: 2 Width of arrow tail, sent to ArrowStyle. text_props, arrow_propsdict Properties of the text and arrows, passed to textpath.TextPath and patches.FancyArrowPatch. **kwargs Keyword arguments forwarded to AnchoredOffsetbox. Notes If prop is passed as a keyword argument, but fontproperties is not, then prop is be assumed to be the intended fontproperties. Using both prop and fontproperties is not supported. Examples >>> import matplotlib.pyplot as plt >>> import numpy as np >>> from mpl_toolkits.axes_grid1.anchored_artists import ( ... AnchoredDirectionArrows) >>> fig, ax = plt.subplots() >>> ax.imshow(np.random.random((10, 10))) >>> arrows = AnchoredDirectionArrows(ax.transAxes, '111', '110') >>> ax.add_artist(arrows) >>> fig.show() Using several of the optional parameters, creating downward pointing arrow and high contrast text labels. >>> import matplotlib.font_manager as fm >>> fontprops = fm.FontProperties(family='monospace') >>> arrows = AnchoredDirectionArrows(ax.transAxes, 'East', 'South', ... loc='lower left', color='k', ... aspect_ratio=-1, sep_x=0.02, ... sep_y=-0.01, ... text_props={'ec':'w', 'fc':'k'}, ... fontproperties=fontprops) Attributes arrow_x, arrow_ymatplotlib.patches.FancyArrowPatch Arrow x and y text_path_x, text_path_ymatplotlib.textpath.TextPath Path for arrow labels p_x, p_ymatplotlib.patches.PathPatch Patch for arrow labels boxmatplotlib.offsetbox.AuxTransformBox Container for the arrows and labels. 
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, bbox_to_anchor=<UNSET>, child=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, height=<UNSET>, in_layout=<UNSET>, label=<UNSET>, offset=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source] Set multiple properties at once. Supported properties are Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None animated bool bbox_to_anchor unknown child unknown clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None figure Figure gid str height float in_layout bool label object offset (float, float) or callable path_effects AbstractPathEffect picker None or bool or float or callable rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None transform Transform url str visible bool width float zorder float Examples using mpl_toolkits.axes_grid1.anchored_artists.AnchoredDirectionArrows Anchored Direction Arrow
doc_1715
Calls super’s load_module(). Deprecated since version 3.4: Use Loader.exec_module() instead.
doc_1716
An array that represents the abbreviated months of the year in the current locale. This follows the normal convention of January being month number 1, so it has a length of 13 and month_abbr[0] is the empty string.
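For example:

import calendar

print(len(calendar.month_abbr))      # 13
print(repr(calendar.month_abbr[0]))  # '' -- placeholder so January is index 1
print(calendar.month_abbr[1])        # 'Jan' in an English locale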
doc_1717
Return the current hatching pattern.
doc_1718
Download all datasets for use with scikit-image offline. Scikit-image datasets are no longer shipped with the library by default. This allows us to use higher quality datasets, while keeping the library download size small. This function requires the installation of an optional dependency, pooch, to download the full dataset. Follow the installation instructions found at https://scikit-image.org/docs/stable/install.html Call this function to download all sample images, making them available offline on your machine. Parameters directory: path-like, optional The directory where the dataset should be stored. Raises ModuleNotFoundError: If pooch is not installed, this error will be raised. Notes scikit-image will only search for images stored in the default directory. Only specify the directory if you wish to download the images to your own folder for a particular reason. You can access the location of the default data directory by inspecting the variable skimage.data.data_dir.
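A hedged usage sketch (requires the optional pooch dependency; the custom directory path is illustrative):

import skimage.data

# Download every sample image into the default data directory.
skimage.data.download_all()

# Or download to a custom folder; note that images there are not
# searched automatically by scikit-image.
# skimage.data.download_all(directory="/tmp/skimage-data")

print(skimage.data.data_dir)  # location of the default data directory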
doc_1719
tf.compat.v1.estimator.DNNEstimator( head, hidden_units, feature_columns, model_dir=None, optimizer='Adagrad', activation_fn=tf.nn.relu, dropout=None, input_layer_partitioner=None, config=None, warm_start_from=None, batch_norm=False ) Example: sparse_feature_a = sparse_column_with_hash_bucket(...) sparse_feature_b = sparse_column_with_hash_bucket(...) sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a, ...) sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b, ...) estimator = tf.estimator.DNNEstimator( head=tf.estimator.MultiLabelHead(n_classes=3), feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], hidden_units=[1024, 512, 256]) # Or estimator using the ProximalAdagradOptimizer optimizer with # regularization. estimator = tf.estimator.DNNEstimator( head=tf.estimator.MultiLabelHead(n_classes=3), feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], hidden_units=[1024, 512, 256], optimizer=tf.compat.v1.train.ProximalAdagradOptimizer( learning_rate=0.1, l1_regularization_strength=0.001 )) # Or estimator using an optimizer with a learning rate decay. estimator = tf.estimator.DNNEstimator( head=tf.estimator.MultiLabelHead(n_classes=3), feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], hidden_units=[1024, 512, 256], optimizer=lambda: tf.keras.optimizers.Adam( learning_rate=tf.compat.v1.train.exponential_decay( learning_rate=0.1, global_step=tf.compat.v1.train.get_global_step(), decay_steps=10000, decay_rate=0.96)) # Or estimator with warm-starting from a previous checkpoint. estimator = tf.estimator.DNNEstimator( head=tf.estimator.MultiLabelHead(n_classes=3), feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], hidden_units=[1024, 512, 256], warm_start_from="/path/to/checkpoint/dir") # Input builders def input_fn_train: # Returns tf.data.Dataset of (x, y) tuple where y represents label's class # index. pass def input_fn_eval: # Returns tf.data.Dataset of (x, y) tuple where y represents label's class # index. pass def input_fn_predict: # Returns tf.data.Dataset of (x, None) tuple. pass estimator.train(input_fn=input_fn_train) metrics = estimator.evaluate(input_fn=input_fn_eval) predictions = estimator.predict(input_fn=input_fn_predict) Input of train and evaluate should have following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor. for each column in feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor. Loss and predicted output are determined by the specified head. Args model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies if this is training, evaluation or prediction. 
See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in params parameter. This allows to configure Estimators from hyper parameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be same. If both are None, a temporary directory will be used. config estimator.RunConfig configuration object. params dict of hyper parameters that will be passed into model_fn. Keys are names of parameters, values are basic python types. warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged. Raises ValueError parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A string which is the path of directory contains evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. 
Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0. experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, a timestamped export directory below export_dir_base, and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. 
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. Returns The path to the exported directory as a bytes object. Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating with mode will be exported. Note that this feature is experimental. Returns The path to the exported directory as a bytes object. 
Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet. 
latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None. 
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
doc_1720
This class is a dictionary-like object whose keys are strings and whose values are Morsel instances. Note that upon setting a key to a value, the value is first converted to a Morsel containing the key and the value. If input is given, it is passed to the load() method.
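This appears to describe http.cookies.BaseCookie / SimpleCookie; a short sketch of the Morsel wrapping behaviour:

from http.cookies import SimpleCookie

jar = SimpleCookie()
jar["session"] = "abc123"        # the value is wrapped in a Morsel automatically
jar["session"]["path"] = "/"

print(type(jar["session"]))      # <class 'http.cookies.Morsel'>
print(jar.output())              # Set-Cookie: session=abc123; Path=/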
doc_1721
Create a Timer instance with the given statement, setup code and timer function and run its repeat() method with the given repeat count and number of executions. The optional globals argument specifies a namespace in which to execute the code. Changed in version 3.5: The optional globals parameter was added. Changed in version 3.7: Default value of repeat changed from 3 to 5.
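This describes the module-level timeit.repeat() convenience function; a short sketch:

import timeit

times = timeit.repeat(
    stmt="sorted(data)",
    setup="data = list(range(1000))[::-1]",
    repeat=5,      # default changed from 3 to 5 in Python 3.7
    number=1000,
)
print(min(times))  # best of the five measurements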
doc_1722
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
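A brief sketch with a nested Pipeline (the components are illustrative):

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])

# Nested parameters use the <component>__<parameter> form.
pipe.set_params(clf__C=0.5, scale__with_mean=False)
print(pipe.get_params()["clf__C"])  # 0.5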
doc_1723
Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters bbool
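A minimal sketch of the manual-draw pattern this enables (a full blitting loop would also save and restore the background region; see the blitting guide):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
(line,) = ax.plot([0, 1], [0, 1])

line.set_animated(True)   # excluded from regular figure draws
fig.canvas.draw()         # background is rendered without the line
ax.draw_artist(line)      # draw the animated artist explicitly
fig.canvas.blit(ax.bbox)  # push just that region to the screen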
doc_1724
Remove named header from the request instance (both from regular and unredirected headers). New in version 3.4.
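This appears to be urllib.request.Request.remove_header; a short sketch (the header value is a hypothetical placeholder):

import urllib.request

req = urllib.request.Request(
    "https://example.com", headers={"Authorization": "Bearer TOKEN"}
)
req.remove_header("Authorization")      # added in Python 3.4
print(req.has_header("Authorization"))  # False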
doc_1725
Parse an HTTP WWW-Authenticate header into a WWWAuthenticate object. Parameters value (Optional[str]) – a WWW-Authenticate header to parse. on_update (Optional[Callable[[werkzeug.datastructures.WWWAuthenticate], None]]) – an optional callable that is called every time a value on the WWWAuthenticate object is changed. Returns a WWWAuthenticate object. Return type werkzeug.datastructures.WWWAuthenticate
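Assuming this is werkzeug.http.parse_www_authenticate_header, a minimal sketch:

from werkzeug.http import parse_www_authenticate_header

auth = parse_www_authenticate_header('Basic realm="login required"')
print(auth.type)   # 'basic'
print(auth.realm)  # 'login required'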
doc_1726
Broadcasts picklable objects in object_list to the whole group. Similar to broadcast(), but Python objects can be passed in. Note that all objects in object_list must be picklable in order to be broadcasted. Parameters object_list (List[Any]) – List of input objects to broadcast. Each object must be picklable. Only objects on the src rank will be broadcast, but each rank must provide lists of equal sizes. src (int) – Source rank from which to broadcast object_list. group – (ProcessGroup, optional): The process group to work on. If None, the default process group will be used. Default is None. Returns None. If rank is part of the group, object_list will contain the broadcasted objects from src rank. Note For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. In this case, the device used is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is set so that each rank has an individual GPU, via torch.cuda.set_device(). Note Note that this API differs slightly from the all_gather() collective since it does not provide an async_op handle and thus will be a blocking call. Warning broadcast_object_list() uses the pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Example:: >>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> if dist.get_rank() == 0: >>> # Assumes world_size of 3. >>> objects = ["foo", 12, {1: 2}] # any picklable object >>> else: >>> objects = [None, None, None] >>> dist.broadcast_object_list(objects, src=0) >>> broadcast_objects ['foo', 12, {1: 2}]
doc_1727
Like CHECKED_HASH, the .pyc file includes a hash of the source file content. However, Python will at runtime assume the .pyc file is up to date and not validate the .pyc against the source file at all. This option is useful when the .pycs are kept up to date by some system external to Python like a build system.
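A hedged sketch of requesting this invalidation mode through py_compile (the source filename is illustrative):

import py_compile

# Emit a hash-based .pyc that the interpreter trusts without re-checking
# it against the source file (UNCHECKED_HASH).
py_compile.compile(
    "module.py",
    invalidation_mode=py_compile.PycInvalidationMode.UNCHECKED_HASH,
)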
doc_1728
tf.logical_not Compat aliases for migration See Migration guide for more details. tf.compat.v1.logical_not, tf.compat.v1.math.logical_not tf.math.logical_not( x, name=None ) Example: tf.math.logical_not(tf.constant([True, False])) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([False, True])> Args x A Tensor of type bool. A Tensor of type bool. name A name for the operation (optional). Returns A Tensor of type bool.
doc_1729
Return a token. If tokens have been stacked using push_token(), pop a token off the stack. Otherwise, read one from the input stream. If reading encounters an immediate end-of-file, eof is returned (the empty string ('') in non-POSIX mode, and None in POSIX mode).
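A short sketch in POSIX mode, where end-of-input is signalled by None:

import shlex

lex = shlex.shlex("alpha 'beta gamma'", posix=True)
print(lex.get_token())   # 'alpha'
print(lex.get_token())   # 'beta gamma'  (quotes consumed)
print(lex.get_token())   # None -- POSIX-mode end-of-file marker

lex.push_token("again")
print(lex.get_token())   # 'again' -- popped from the pushback stack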
doc_1730
Convert tz-aware axis to target time zone. Parameters tz:str or tzinfo object axis:the axis to convert level:int, str, default None If axis is a MultiIndex, convert a specific level. Otherwise must be None. copy:bool, default True Also make a copy of the underlying data. Returns {klass} Object with time zone converted axis. Raises TypeError If the axis is tz-naive.
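For example, with a tz-aware DatetimeIndex:

import pandas as pd

idx = pd.date_range("2023-01-01", periods=3, freq="H", tz="UTC")
s = pd.Series(range(3), index=idx)

converted = s.tz_convert("US/Eastern")
print(converted.index.tz)   # US/Eastern

# A tz-naive axis raises TypeError:
# pd.Series(range(3)).tz_convert("UTC")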
doc_1731
Create a SaveAs dialog and return a file object opened in write-only mode.
doc_1732
Parses a range header into a Range object. If the header is missing or malformed None is returned. ranges is a list of (start, stop) tuples where the ranges are non-inclusive. Changelog New in version 0.7. Parameters value (Optional[str]) – make_inclusive (bool) – Return type Optional[werkzeug.datastructures.Range]
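Assuming this is werkzeug.http.parse_range_header, a brief sketch:

from werkzeug.http import parse_range_header

rng = parse_range_header("bytes=0-499")
print(rng.units)    # 'bytes'
print(rng.ranges)   # [(0, 500)] -- stop is non-inclusive

print(parse_range_header("garbage"))  # None for malformed input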
doc_1733
Compute Lasso path with coordinate descent The Lasso optimization function varies for mono and multi-outputs. For mono-output tasks it is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of norm of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3 n_alphasint, default=100 Number of alphas along the regularization path alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. return_n_iterbool, default=False whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). **paramskwargs keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See also lars_path Lasso LassoLars LassoCV LassoLarsCV sklearn.decomposition.sparse_encode Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Note that in certain cases, the Lars solver may be significantly faster to implement this functionality. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path Examples Comparing lasso_path and lars_path with interpolation: >>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T >>> y = np.array([1, 2, 3.1]) >>> # Use lasso_path to compute a coefficient path >>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5]) >>> print(coef_path) [[0. 0. 0.46874778] [0.2159048 0.4425765 0.23689075]] >>> # Now use lars_path and 1D linear interpolation to compute the >>> # same path >>> from sklearn.linear_model import lars_path >>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso') >>> from scipy import interpolate >>> coef_path_continuous = interpolate.interp1d(alphas[::-1], ... 
coef_path_lars[:, ::-1]) >>> print(coef_path_continuous([5., 1., .5])) [[0. 0. 0.46915237] [0.2159048 0.4425765 0.23668876]]
doc_1734
Concatenate pandas objects along a particular axis with optional set logic along the other axes. Can also add a layer of hierarchical indexing on the concatenation axis, which may be useful if the labels are the same (or overlapping) on the passed axis number. Parameters objs:a sequence or mapping of Series or DataFrame objects If a mapping is passed, the sorted keys will be used as the keys argument, unless it is passed, in which case the values will be selected (see below). Any None objects will be dropped silently unless they are all None in which case a ValueError will be raised. axis:{0/’index’, 1/’columns’}, default 0 The axis to concatenate along. join:{‘inner’, ‘outer’}, default ‘outer’ How to handle indexes on other axis (or axes). ignore_index:bool, default False If True, do not use the index values along the concatenation axis. The resulting axis will be labeled 0, …, n - 1. This is useful if you are concatenating objects where the concatenation axis does not have meaningful indexing information. Note the index values on the other axes are still respected in the join. keys:sequence, default None If multiple levels passed, should contain tuples. Construct hierarchical index using the passed keys as the outermost level. levels:list of sequences, default None Specific levels (unique values) to use for constructing a MultiIndex. Otherwise they will be inferred from the keys. names:list, default None Names for the levels in the resulting hierarchical index. verify_integrity:bool, default False Check whether the new concatenated axis contains duplicates. This can be very expensive relative to the actual data concatenation. sort:bool, default False Sort non-concatenation axis if it is not already aligned when join is ‘outer’. This has no effect when join='inner', which already preserves the order of the non-concatenation axis. Changed in version 1.0.0: Changed to not sort by default. copy:bool, default True If False, do not copy data unnecessarily. Returns object, type of objs When concatenating all Series along the index (axis=0), a Series is returned. When objs contains at least one DataFrame, a DataFrame is returned. When concatenating along the columns (axis=1), a DataFrame is returned. See also Series.append Concatenate Series. DataFrame.append Concatenate DataFrames. DataFrame.join Join DataFrames using indexes. DataFrame.merge Merge DataFrames by indexes or columns. Notes The keys, levels, and names arguments are all optional. A walkthrough of how this method fits in with other tools for combining pandas objects can be found here. Examples Combine two Series. >>> s1 = pd.Series(['a', 'b']) >>> s2 = pd.Series(['c', 'd']) >>> pd.concat([s1, s2]) 0 a 1 b 0 c 1 d dtype: object Clear the existing index and reset it in the result by setting the ignore_index option to True. >>> pd.concat([s1, s2], ignore_index=True) 0 a 1 b 2 c 3 d dtype: object Add a hierarchical index at the outermost level of the data with the keys option. >>> pd.concat([s1, s2], keys=['s1', 's2']) s1 0 a 1 b s2 0 c 1 d dtype: object Label the index keys you create with the names option. >>> pd.concat([s1, s2], keys=['s1', 's2'], ... names=['Series name', 'Row ID']) Series name Row ID s1 0 a 1 b s2 0 c 1 d dtype: object Combine two DataFrame objects with identical columns. >>> df1 = pd.DataFrame([['a', 1], ['b', 2]], ... columns=['letter', 'number']) >>> df1 letter number 0 a 1 1 b 2 >>> df2 = pd.DataFrame([['c', 3], ['d', 4]], ... 
columns=['letter', 'number']) >>> df2 letter number 0 c 3 1 d 4 >>> pd.concat([df1, df2]) letter number 0 a 1 1 b 2 0 c 3 1 d 4 Combine DataFrame objects with overlapping columns and return everything. Columns outside the intersection will be filled with NaN values. >>> df3 = pd.DataFrame([['c', 3, 'cat'], ['d', 4, 'dog']], ... columns=['letter', 'number', 'animal']) >>> df3 letter number animal 0 c 3 cat 1 d 4 dog >>> pd.concat([df1, df3], sort=False) letter number animal 0 a 1 NaN 1 b 2 NaN 0 c 3 cat 1 d 4 dog Combine DataFrame objects with overlapping columns and return only those that are shared by passing inner to the join keyword argument. >>> pd.concat([df1, df3], join="inner") letter number 0 a 1 1 b 2 0 c 3 1 d 4 Combine DataFrame objects horizontally along the x axis by passing in axis=1. >>> df4 = pd.DataFrame([['bird', 'polly'], ['monkey', 'george']], ... columns=['animal', 'name']) >>> pd.concat([df1, df4], axis=1) letter number animal name 0 a 1 bird polly 1 b 2 monkey george Prevent the result from including duplicate index values with the verify_integrity option. >>> df5 = pd.DataFrame([1], index=['a']) >>> df5 0 a 1 >>> df6 = pd.DataFrame([2], index=['a']) >>> df6 0 a 2 >>> pd.concat([df5, df6], verify_integrity=True) Traceback (most recent call last): ... ValueError: Indexes have overlapping values: ['a']
doc_1735
Indicates unpack completion. Raises an Error exception if all of the data has not been unpacked.
doc_1736
See Migration guide for more details. tf.compat.v1.raw_ops.LogicalAnd tf.raw_ops.LogicalAnd( x, y, name=None ) Note: LogicalAnd supports broadcasting. More about broadcasting here Args x A Tensor of type bool. y A Tensor of type bool. name A name for the operation (optional). Returns A Tensor of type bool.
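A minimal usage sketch (not part of the original op documentation; like other raw ops, the inputs are passed by keyword): x = tf.constant([True, True, False, False]) y = tf.constant([True, False, True, False]) tf.raw_ops.LogicalAnd(x=x, y=y) # elementwise AND: [True, False, False, False]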
doc_1737
Create an invalid file descriptor by opening and closing a temporary file, and returning its descriptor.
doc_1738
Return a month’s calendar as an HTML table. If withyear is true the year will be included in the header, otherwise just the month name will be used.
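A brief usage sketch, assuming this describes calendar.HTMLCalendar.formatmonth (the returned HTML string is not shown here): >>> import calendar >>> cal = calendar.HTMLCalendar() >>> html = cal.formatmonth(2022, 7) # table header reads "July 2022" >>> html_no_year = cal.formatmonth(2022, 7, withyear=False) # header reads "July"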
doc_1739
Set a.flat[n] = values[n] for all n in indices. Refer to numpy.put for full documentation. See also numpy.put equivalent function
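For example, a short sketch mirroring the numpy.put documentation: >>> import numpy as np >>> a = np.arange(5) >>> a.put([0, 2], [-44, -55]) # in-place; returns None >>> a array([-44, 1, -55, 3, 4])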
doc_1740
Clips gradient norm of an iterable of parameters. The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place. Parameters parameters (Iterable[Tensor] or Tensor) – an iterable of Tensors or a single Tensor that will have gradients normalized max_norm (float or int) – max norm of the gradients norm_type (float or int) – type of the used p-norm. Can be 'inf' for infinity norm. Returns Total norm of the parameters (viewed as a single vector).
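A minimal sketch of typical use between backward() and the optimizer step (the model and data here are illustrative, not part of the original documentation): >>> import torch >>> from torch.nn.utils import clip_grad_norm_ >>> model = torch.nn.Linear(10, 2) >>> loss = model(torch.randn(4, 10)).sum() >>> loss.backward() >>> total_norm = clip_grad_norm_(model.parameters(), max_norm=1.0) # gradients are rescaled in-place so their combined norm is at most 1.0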
doc_1741
Prints the formatted representation of object followed by a newline. If sort_dicts is false (the default), dictionaries will be displayed with their keys in insertion order, otherwise the dict keys will be sorted. args and kwargs will be passed to pprint() as formatting parameters. New in version 3.8.
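For instance, a small sketch showing the effect of sort_dicts: >>> from pprint import pp >>> pp({'b': 2, 'a': 1}) # insertion order preserved {'b': 2, 'a': 1} >>> pp({'b': 2, 'a': 1}, sort_dicts=True) {'a': 1, 'b': 2}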
doc_1742
Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. Returns two objects, a 1-D array containing the eigenvalues of a, and a 2-D square array or matrix (depending on the input type) of the corresponding eigenvectors (in columns). Parameters a(…, M, M) array Hermitian or real symmetric matrices whose eigenvalues and eigenvectors are to be computed. UPLO{‘L’, ‘U’}, optional Specifies whether the calculation is done with the lower triangular part of a (‘L’, default) or the upper triangular part (‘U’). Irrespective of this value only the real parts of the diagonal will be considered in the computation to preserve the notion of a Hermitian matrix. It therefore follows that the imaginary part of the diagonal will always be treated as zero. Returns w(…, M) ndarray The eigenvalues in ascending order, each repeated according to its multiplicity. v{(…, M, M) ndarray, (…, M, M) matrix} The column v[:, i] is the normalized eigenvector corresponding to the eigenvalue w[i]. Will return a matrix object if a is a matrix object. Raises LinAlgError If the eigenvalue computation does not converge. See also eigvalsh eigenvalues of real symmetric or complex Hermitian (conjugate symmetric) arrays. eig eigenvalues and right eigenvectors for non-symmetric arrays. eigvals eigenvalues of non-symmetric arrays. scipy.linalg.eigh Similar function in SciPy (but also solves the generalized eigenvalue problem). Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. The eigenvalues/eigenvectors are computed using LAPACK routines _syevd, _heevd. The eigenvalues of real symmetric or complex Hermitian matrices are always real. [1] The array v of (column) eigenvectors is unitary and a, w, and v satisfy the equations dot(a, v[:, i]) = w[i] * v[:, i]. References 1 G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 222. Examples >>> from numpy import linalg as LA >>> a = np.array([[1, -2j], [2j, 5]]) >>> a array([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> w, v = LA.eigh(a) >>> w; v array([0.17157288, 5.82842712]) array([[-0.92387953+0.j , -0.38268343+0.j ], # may vary [ 0. +0.38268343j, 0. -0.92387953j]]) >>> np.dot(a, v[:, 0]) - w[0] * v[:, 0] # verify 1st e-val/vec pair array([5.55111512e-17+0.0000000e+00j, 0.00000000e+00+1.2490009e-16j]) >>> np.dot(a, v[:, 1]) - w[1] * v[:, 1] # verify 2nd e-val/vec pair array([0.+0.j, 0.+0.j]) >>> A = np.matrix(a) # what happens if input is a matrix object >>> A matrix([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> w, v = LA.eigh(A) >>> w; v array([0.17157288, 5.82842712]) matrix([[-0.92387953+0.j , -0.38268343+0.j ], # may vary [ 0. +0.38268343j, 0. -0.92387953j]]) >>> # demonstrate the treatment of the imaginary part of the diagonal >>> a = np.array([[5+2j, 9-2j], [0+2j, 2-1j]]) >>> a array([[5.+2.j, 9.-2.j], [0.+2.j, 2.-1.j]]) >>> # with UPLO='L' this is numerically equivalent to using LA.eig() with: >>> b = np.array([[5.+0.j, 0.-2.j], [0.+2.j, 2.-0.j]]) >>> b array([[5.+0.j, 0.-2.j], [0.+2.j, 2.+0.j]]) >>> wa, va = LA.eigh(a) >>> wb, vb = LA.eig(b) >>> wa; wb array([1., 6.]) array([6.+0.j, 1.+0.j]) >>> va; vb array([[-0.4472136 +0.j , -0.89442719+0.j ], # may vary [ 0. +0.89442719j, 0. -0.4472136j ]]) array([[ 0.89442719+0.j , -0. +0.4472136j], [-0. +0.4472136j, 0.89442719+0.j ]])
doc_1743
sklearn.utils.extmath.safe_sparse_dot(a, b, *, dense_output=False) [source] Dot product that handles the sparse matrix case correctly. Parameters a{ndarray, sparse matrix} b{ndarray, sparse matrix} dense_outputbool, default=False When False, a and b both being sparse will yield sparse output. When True, output will always be a dense array. Returns dot_product{ndarray, sparse matrix} Sparse if a and b are sparse and dense_output=False.
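A short sketch of mixed sparse/dense use (the shapes and density are arbitrary): >>> import numpy as np >>> from scipy import sparse >>> from sklearn.utils.extmath import safe_sparse_dot >>> a = sparse.random(3, 4, density=0.5, format='csr', random_state=0) >>> b = np.ones((4, 2)) >>> safe_sparse_dot(a, b).shape # dense result, since only a is sparse (3, 2) >>> safe_sparse_dot(a, sparse.eye(4, format='csr')).shape # stays sparse unless dense_output=True (3, 4)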
doc_1744
Generate the “Friedman #1” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are independent features uniformly distributed on the interval [0, 1]. The output y is created according to the formula: y(X) = 10 * sin(pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4] + noise * N(0, 1). Out of the n_features features, only 5 are actually used to compute y. The remaining features are independent of y. The number of features has to be >= 5. Read more in the User Guide. Parameters n_samplesint, default=100 The number of samples. n_featuresint, default=10 The number of features. Should be at least 5. noisefloat, default=0.0 The standard deviation of the gaussian noise applied to the output. random_stateint, RandomState instance or None, default=None Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns Xndarray of shape (n_samples, n_features) The input samples. yndarray of shape (n_samples,) The output values. References 1 J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991. 2 L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996.
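For example, a minimal sketch (the parameter values are arbitrary): >>> from sklearn.datasets import make_friedman1 >>> X, y = make_friedman1(n_samples=200, n_features=10, noise=0.5, random_state=0) >>> X.shape, y.shape ((200, 10), (200,))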
doc_1745
Return the Colormap instance.
doc_1746
class socketserver.UnixDatagramServer(server_address, RequestHandlerClass, bind_and_activate=True) These more infrequently used classes are similar to the TCP and UDP classes, but use Unix domain sockets; they’re not available on non-Unix platforms. The parameters are the same as for TCPServer.
doc_1747
See Migration guide for more details. tf.compat.v1.keras.activations.tanh tf.keras.activations.tanh( x ) For example: a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32) b = tf.keras.activations.tanh(a) b.numpy() array([-0.9950547, -0.7615942, 0., 0.7615942, 0.9950547], dtype=float32) Arguments x Input tensor. Returns Tensor of same shape and dtype of input x, with tanh activation: tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x))).
doc_1748
True if this transform is separable in the x- and y- dimensions.
doc_1749
Return x, y values at equally spaced points in domain. Returns the x, y values at n linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid. New in version 1.5.0. Parameters nint, optional Number of point pairs to return. The default value is 100. domain{None, array_like}, optional If not None, the specified domain is used instead of that of the calling instance. It should be of the form [beg,end]. The default is None, in which case the class domain is used. Returns x, yndarray x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x.
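For example, using a numpy.polynomial.Polynomial instance (a small sketch; the other convenience classes expose the same method): >>> from numpy.polynomial import Polynomial >>> p = Polynomial([0, 1]) # p(x) = x on the default domain [-1, 1] >>> x, y = p.linspace(5) >>> x array([-1. , -0.5, 0. , 0.5, 1. ]) >>> y array([-1. , -0.5, 0. , 0.5, 1. ])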
doc_1750
Set printing options. These options determine the way floating point numbers, arrays and other NumPy objects are displayed. Parameters precisionint or None, optional Number of digits of precision for floating point output (default 8). May be None if floatmode is not fixed, to print as many digits as necessary to uniquely specify the value. thresholdint, optional Total number of array elements which trigger summarization rather than full repr (default 1000). To always use the full repr without summarization, pass sys.maxsize. edgeitemsint, optional Number of array items in summary at beginning and end of each dimension (default 3). linewidthint, optional The number of characters per line for the purpose of inserting line breaks (default 75). suppressbool, optional If True, always print floating point numbers using fixed point notation, in which case numbers equal to zero in the current precision will print as zero. If False, then scientific notation is used when absolute value of the smallest number is < 1e-4 or the ratio of the maximum absolute value to the minimum is > 1e3. The default is False. nanstrstr, optional String representation of floating point not-a-number (default nan). infstrstr, optional String representation of floating point infinity (default inf). signstring, either ‘-’, ‘+’, or ‘ ‘, optional Controls printing of the sign of floating-point types. If ‘+’, always print the sign of positive values. If ‘ ‘, always prints a space (whitespace character) in the sign position of positive values. If ‘-’, omit the sign character of positive values. (default ‘-‘) formatterdict of callables, optional If not None, the keys should indicate the type(s) that the respective formatting function applies to. Callables should return a string. Types that are not specified (by their corresponding keys) are handled by the default formatters. Individual types for which a formatter can be set are: ‘bool’ ‘int’ ‘timedelta’ : a numpy.timedelta64 ‘datetime’ : a numpy.datetime64 ‘float’ ‘longfloat’ : 128-bit floats ‘complexfloat’ ‘longcomplexfloat’ : composed of two 128-bit floats ‘numpystr’ : types numpy.string_ and numpy.unicode_ ‘object’ : np.object_ arrays Other keys that can be used to set a group of types at once are: ‘all’ : sets all types ‘int_kind’ : sets ‘int’ ‘float_kind’ : sets ‘float’ and ‘longfloat’ ‘complex_kind’ : sets ‘complexfloat’ and ‘longcomplexfloat’ ‘str_kind’ : sets ‘numpystr’ floatmodestr, optional Controls the interpretation of the precision option for floating-point types. Can take the following values (default maxprec_equal): ‘fixed’: Always print exactly precision fractional digits, even if this would print more or fewer digits than necessary to specify the value uniquely. ‘unique’: Print the minimum number of fractional digits necessary to represent each value uniquely. Different elements may have a different number of digits. The value of the precision option is ignored. ‘maxprec’: Print at most precision fractional digits, but if an element can be uniquely represented with fewer digits only print it with that many. ‘maxprec_equal’: Print at most precision fractional digits, but if every element in the array can be uniquely represented with an equal number of fewer digits, use that many digits for all elements. legacystring or False, optional If set to the string ‘1.13’ enables 1.13 legacy printing mode. This approximates numpy 1.13 print output by including a space in the sign position of floats and different behavior for 0d arrays. 
This also enables 1.21 legacy printing mode (described below). If set to the string ‘1.21’ enables 1.21 legacy printing mode. This approximates numpy 1.21 print output of complex structured dtypes by not inserting spaces after commas that separate fields and after colons. If set to False, disables legacy mode. Unrecognized strings will be ignored with a warning for forward compatibility. New in version 1.14.0. Changed in version 1.22.0. See also get_printoptions, printoptions, set_string_function, array2string Notes formatter is always reset with a call to set_printoptions. Use printoptions as a context manager to set the values temporarily. Examples Floating point precision can be set: >>> np.set_printoptions(precision=4) >>> np.array([1.123456789]) [1.1235] Long arrays can be summarised: >>> np.set_printoptions(threshold=5) >>> np.arange(10) array([0, 1, 2, ..., 7, 8, 9]) Small results can be suppressed: >>> eps = np.finfo(float).eps >>> x = np.arange(4.) >>> x**2 - (x + eps)**2 array([-4.9304e-32, -4.4409e-16, 0.0000e+00, 0.0000e+00]) >>> np.set_printoptions(suppress=True) >>> x**2 - (x + eps)**2 array([-0., -0., 0., 0.]) A custom formatter can be used to display array elements as desired: >>> np.set_printoptions(formatter={'all':lambda x: 'int: '+str(-x)}) >>> x = np.arange(3) >>> x array([int: 0, int: -1, int: -2]) >>> np.set_printoptions() # formatter gets reset >>> x array([0, 1, 2]) To put back the default options, you can use: >>> np.set_printoptions(edgeitems=3, infstr='inf', ... linewidth=75, nanstr='nan', precision=8, ... suppress=False, threshold=1000, formatter=None) Also to temporarily override options, use printoptions as a context manager: >>> with np.printoptions(precision=2, suppress=True, threshold=5): ... np.linspace(0, 10, 10) array([ 0. , 1.11, 2.22, ..., 7.78, 8.89, 10. ])
doc_1751
Apply, test or remove a POSIX lock on an open file descriptor. fd is an open file descriptor. cmd specifies the command to use - one of F_LOCK, F_TLOCK, F_ULOCK or F_TEST. len specifies the section of the file to lock. Raises an auditing event os.lockf with arguments fd, cmd, len. Availability: Unix. New in version 3.3.
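A hedged sketch of taking an exclusive lock around a write (the file name is hypothetical, and the behaviour is Unix-only): >>> import os >>> fd = os.open('counter.dat', os.O_RDWR | os.O_CREAT) >>> os.lockf(fd, os.F_LOCK, 0) # lock from the current offset to end of file >>> n = os.write(fd, b'1') >>> os.close(fd) # closing the descriptor also releases the lock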
doc_1752
Returns an iterator over all modules in the network. Yields Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2) >>> net = nn.Sequential(l, l) >>> for idx, m in enumerate(net.modules()): print(idx, '->', m) 0 -> Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) ) 1 -> Linear(in_features=2, out_features=2, bias=True)
doc_1753
Linear regression model that is robust to outliers. The Huber Regressor optimizes the squared loss for the samples where |(y - X'w) / sigma| < epsilon and the absolute loss for the samples where |(y - X'w) / sigma| > epsilon, where w and sigma are parameters to be optimized. The parameter sigma makes sure that if y is scaled up or down by a certain factor, one does not need to rescale epsilon to achieve the same robustness. Note that this does not take into account the fact that the different features of X may be of different scales. This makes sure that the loss function is not heavily influenced by the outliers while not completely ignoring their effect. Read more in the User Guide New in version 0.18. Parameters epsilonfloat, greater than 1.0, default=1.35 The parameter epsilon controls the number of samples that should be classified as outliers. The smaller the epsilon, the more robust it is to outliers. max_iterint, default=100 Maximum number of iterations that scipy.optimize.minimize(method="L-BFGS-B") should run for. alphafloat, default=0.0001 Regularization parameter. warm_startbool, default=False This is useful if the stored attributes of a previously used model has to be reused. If set to False, then the coefficients will be rewritten for every call to fit. See the Glossary. fit_interceptbool, default=True Whether or not to fit the intercept. This can be set to False if the data is already centered around the origin. tolfloat, default=1e-05 The iteration will stop when max{|proj g_i | i = 1, ..., n} <= tol where pg_i is the i-th component of the projected gradient. Attributes coef_array, shape (n_features,) Features got by optimizing the Huber loss. intercept_float Bias. scale_float The value by which |y - X'w - c| is scaled down. n_iter_int Number of iterations that scipy.optimize.minimize(method="L-BFGS-B") has run for. Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed max_iter. n_iter_ will now report at most max_iter. outliers_array, shape (n_samples,) A boolean mask which is set to True where the samples are identified as outliers. References 1 Peter J. Huber, Elvezio M. Ronchetti, Robust Statistics Concomitant scale estimates, pg 172 2 Art B. Owen (2006), A robust hybrid of lasso and ridge regression. https://statweb.stanford.edu/~owen/reports/hhu.pdf Examples >>> import numpy as np >>> from sklearn.linear_model import HuberRegressor, LinearRegression >>> from sklearn.datasets import make_regression >>> rng = np.random.RandomState(0) >>> X, y, coef = make_regression( ... n_samples=200, n_features=2, noise=4.0, coef=True, random_state=0) >>> X[:4] = rng.uniform(10, 20, (4, 2)) >>> y[:4] = rng.uniform(10, 20, 4) >>> huber = HuberRegressor().fit(X, y) >>> huber.score(X, y) -7.284... >>> huber.predict(X[:1,]) array([806.7200...]) >>> linear = LinearRegression().fit(X, y) >>> print("True coefficients:", coef) True coefficients: [20.4923... 34.1698...] >>> print("Huber coefficients:", huber.coef_) Huber coefficients: [17.7906... 31.0106...] >>> print("Linear Regression coefficients:", linear.coef_) Linear Regression coefficients: [-1.9221... 7.0226...] Methods fit(X, y[, sample_weight]) Fit the model according to the given training data. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. 
fit(X, y, sample_weight=None) [source] Fit the model according to the given training data. Parameters Xarray-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yarray-like, shape (n_samples,) Target vector relative to X. sample_weightarray-like, shape (n_samples,) Weight given to each sample. Returns selfobject get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
doc_1754
Deserialize s (a str, bytes or bytearray instance containing a JSON document) to a Python object using this conversion table. The other arguments have the same meaning as in load(). If the data being deserialized is not a valid JSON document, a JSONDecodeError will be raised. Changed in version 3.6: s can now be of type bytes or bytearray. The input encoding should be UTF-8, UTF-16 or UTF-32. Changed in version 3.9: The keyword argument encoding has been removed.
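For instance, a small sketch with str and bytes inputs: >>> import json >>> json.loads('{"name": "example", "count": 3}') {'name': 'example', 'count': 3} >>> json.loads(b'[1, 2, 3]') # bytes are accepted since 3.6 (UTF-8, UTF-16 or UTF-32) [1, 2, 3]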
doc_1755
sys.last_value sys.last_traceback These three variables are not always defined; they are set when an exception is not handled and the interpreter prints an error message and a stack traceback. Their intended use is to allow an interactive user to import a debugger module and engage in post-mortem debugging without having to re-execute the command that caused the error. (Typical use is import pdb; pdb.pm() to enter the post-mortem debugger; see pdb module for more information.) The meaning of the variables is the same as that of the return values from exc_info() above.
doc_1756
Apply only the non-affine part of this transformation. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally equivalent to transform(values). In affine transformations, this is always a no-op. Parameters valuesarray The input values as NumPy array of length input_dims or shape (N x input_dims). Returns array The output values as NumPy array of length output_dims or shape (N x output_dims), depending on the input.
doc_1757
Bases: mpl_toolkits.axisartist.angle_helper.FormatterDMS __call__(direction, factor, values)[source] Call self as a function. deg_mark='^\\mathrm{h}' fmt_d='$%d^\\mathrm{h}$' fmt_d_m='$%s%d^\\mathrm{h}\\,%02d^\\mathrm{m}$' fmt_d_m_partial='$%s%d^\\mathrm{h}\\,%02d^\\mathrm{m}\\,' fmt_d_ms='$%s%d^\\mathrm{h}\\,%02d.%s^\\mathrm{m}$' fmt_ds='$%d.%s^\\mathrm{h}$' fmt_s_partial='%02d^\\mathrm{s}$' fmt_ss_partial='%02d.%s^\\mathrm{s}$' min_mark='^\\mathrm{m}' sec_mark='^\\mathrm{s}' Examples using mpl_toolkits.axisartist.angle_helper.FormatterHMS mpl_toolkits.axisartist.floating_axes features
doc_1758
Set multiple properties at once. Supported properties are Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None animated bool clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None figure Figure gid str height float in_layout bool label object offset (float, float) or callable path_effects AbstractPathEffect picker None or bool or float or callable rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None transform Transform url str visible bool width float zorder float
doc_1759
Fit linear model. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values. Will be cast to X’s dtype if necessary sample_weightarray-like of shape (n_samples,), default=None Individual weights for each sample New in version 0.17: parameter sample_weight support to LinearRegression. Returns selfreturns an instance of self.
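A minimal sketch of fitting with per-sample weights (the data is illustrative, not from the original documentation): >>> import numpy as np >>> from sklearn.linear_model import LinearRegression >>> X = np.array([[1.0], [2.0], [3.0]]) >>> y = np.array([2.0, 4.0, 6.0]) >>> reg = LinearRegression().fit(X, y, sample_weight=[1.0, 1.0, 2.0]) >>> reg.coef_ array([2.])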
doc_1760
Make two interleaving half circles. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters n_samplesint or tuple of shape (2,), dtype=int, default=100 If int, the total number of points generated. If two-element tuple, number of points in each of two moons. Changed in version 0.23: Added two-element tuple. shufflebool, default=True Whether to shuffle the samples. noisefloat, default=None Standard deviation of Gaussian noise added to the data. random_stateint, RandomState instance or None, default=None Determines random number generation for dataset shuffling and noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns Xndarray of shape (n_samples, 2) The generated samples. yndarray of shape (n_samples,) The integer labels (0 or 1) for class membership of each sample.
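For example, a brief sketch using the tuple form of n_samples: >>> from sklearn.datasets import make_moons >>> X, y = make_moons(n_samples=(60, 40), noise=0.1, random_state=0) >>> X.shape, y.shape ((100, 2), (100,)) >>> sorted(set(y)) [0, 1]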
doc_1761
calculates the cross- or vector-product cross(Vector3) -> Vector3 calculates the cross-product.
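A short sketch with unit vectors (components are shown rather than the exact repr): >>> from pygame.math import Vector3 >>> v = Vector3(1, 0, 0).cross(Vector3(0, 1, 0)) >>> (v.x, v.y, v.z) (0.0, 0.0, 1.0)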
doc_1762
Initialize the joystick module. init() -> None This function is called automatically by pygame.init(). It initializes the joystick module. The module must be initialized before any other functions will work. It is safe to call this function more than once.
doc_1763
See Migration guide for more details. tf.compat.v1.raw_ops.RandomShuffleQueue tf.raw_ops.RandomShuffleQueue( component_types, shapes=[], capacity=-1, min_after_dequeue=0, seed=0, seed2=0, container='', shared_name='', name=None ) Args component_types A list of tf.DTypes that has length >= 1. The type of each component in a value. shapes An optional list of shapes (each a tf.TensorShape or list of ints). Defaults to []. The shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. If the length of this attr is 0, the shapes of queue elements are not constrained, and only one element may be dequeued at a time. capacity An optional int. Defaults to -1. The upper bound on the number of elements in this queue. Negative numbers mean no limit. min_after_dequeue An optional int. Defaults to 0. Dequeue will block unless there would be this many elements after the dequeue or the queue is closed. This ensures a minimum level of mixing of elements. seed An optional int. Defaults to 0. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, a random seed is used. seed2 An optional int. Defaults to 0. A second seed to avoid seed collision. container An optional string. Defaults to "". If non-empty, this queue is placed in the given container. Otherwise, a default container is used. shared_name An optional string. Defaults to "". If non-empty, this queue will be shared under the given name across multiple sessions. name A name for the operation (optional). Returns A Tensor of type mutable string.
doc_1764
Raised when there is an authentication error.
doc_1765
Set the scrolling region from line top to line bottom. All scrolling actions will take place in this region.
doc_1766
Run the specified WSGI application, app.
doc_1767
The original URL passed to the constructor. Changed in version 3.4. Request.full_url is a property with setter, getter and a deleter. Getting full_url returns the original request URL with the fragment, if it was present.
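A brief sketch (the URLs are illustrative): >>> from urllib.request import Request >>> req = Request('https://example.org/page#section-2') >>> req.full_url # getter keeps the fragment 'https://example.org/page#section-2' >>> req.full_url = 'https://example.org/other' >>> req.full_url 'https://example.org/other'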
doc_1768
DateOffset increments between calendar year begin dates. Attributes base Returns a copy of the calling offset object with n=1 and all other attributes equal. freqstr kwds month n name nanos normalize rule_code Methods __call__(*args, **kwargs) Call self as a function. rollback Roll provided date backward to next offset only if not on offset. rollforward Roll provided date forward to next offset only if not on offset. apply apply_index copy isAnchored is_anchored is_month_end is_month_start is_on_offset is_quarter_end is_quarter_start is_year_end is_year_start onOffset
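A small sketch, assuming the standard pandas.tseries.offsets.YearBegin offset (month=7 anchors the "year" to begin in July): >>> import pandas as pd >>> ts = pd.Timestamp('2022-06-15') >>> ts + pd.offsets.YearBegin() Timestamp('2023-01-01 00:00:00') >>> ts + pd.offsets.YearBegin(month=7) Timestamp('2022-07-01 00:00:00')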
doc_1769
Retrieve the currently selected item.
doc_1770
Set the artist transform. Parameters tTransform
doc_1771
The timezone.
doc_1772
Number of days for each element.
doc_1773
Adjustment factor for the underline position underline_adjustment -> float Gets or sets a factor which, when positive, is multiplied with the font's underline offset to adjust the underline position. A negative value turns an underline into a strike-through or overline. It is multiplied with the ascender. Accepted values range between -2.0 and 2.0 inclusive. A value of 0.5 closely matches Tango underlining. A value of 1.0 mimics pygame.font.Font underlining.
doc_1774
Returns a datetime of the last accessed time of the file. For storage systems unable to return the last accessed time this will raise NotImplementedError. If USE_TZ is True, returns an aware datetime, otherwise returns a naive datetime in the local timezone.
doc_1775
tf.distribute.MirroredStrategy( devices=None, cross_device_ops=None ) This strategy is typically used for training on one machine with multiple GPUs. For TPUs, use tf.distribute.TPUStrategy. To use MirroredStrategy with multiple workers, please refer to tf.distribute.experimental.MultiWorkerMirroredStrategy. For example, a variable created under a MirroredStrategy is a MirroredVariable. If no devices are specified in the constructor argument of the strategy then it will use all the available GPUs. If no GPUs are found, it will use the available CPUs. Note that TensorFlow treats all CPUs on a machine as a single device, and uses threads internally for parallelism. strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) with strategy.scope(): x = tf.Variable(1.) x MirroredVariable:{ 0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable ... shape=() dtype=float32, numpy=1.0> } While using distribution strategies, all the variable creation should be done within the strategy's scope. This will replicate the variables across all the replicas and keep them in sync using an all-reduce algorithm. Variables created inside a MirroredStrategy which is wrapped with a tf.function are still MirroredVariables. x = [] @tf.function # Wrap the function with tf.function. def create_variable(): if not x: x.append(tf.Variable(1.)) return x[0] strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) with strategy.scope(): _ = create_variable() print(x[0]) MirroredVariable:{ 0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable ... shape=() dtype=float32, numpy=1.0> } experimental_distribute_dataset can be used to distribute the dataset across the replicas when writing your own training loop. If you are using .fit and .compile methods available in tf.keras, then tf.keras will handle the distribution for you. For example: my_strategy = tf.distribute.MirroredStrategy() with my_strategy.scope(): @tf.function def distribute_train_epoch(dataset): def replica_fn(input): # process input and return result return result total_result = 0 for x in dataset: per_replica_result = my_strategy.run(replica_fn, args=(x,)) total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_result, axis=None) return total_result dist_dataset = my_strategy.experimental_distribute_dataset(dataset) for _ in range(EPOCHS): train_result = distribute_train_epoch(dist_dataset) Args devices a list of device strings such as ['/gpu:0', '/gpu:1']. If None, all available GPUs are used. If no GPUs are found, CPU is used. cross_device_ops optional, a descedant of CrossDeviceOps. If this is not set, NcclAllReduce() will be used by default. One would customize this if NCCL isn't available or if a special implementation that exploits the particular hardware is available. Attributes cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. 
Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example, os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"], 'ps': ["localhost:34567"] }, 'task': {'type': 'worker', 'index': 0} }) # This implicitly uses TF_CONFIG for the cluster and current task info. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ... if strategy.cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. Since we set this # as a worker above, this block will run on this particular instance. elif strategy.cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. Since we # set this as a worker above, this block will not run on this particular # instance. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring. extended tf.distribute.StrategyExtended with additional methods. num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source distribute_datasets_from_function( dataset_fn, options=None ) Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take an tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size. 
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs. Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed. For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section. Args dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset. options tf.distribute.InputOptions used to control options on how this dataset is distributed. Returns A tf.distribute.DistributedDataset. experimental_distribute_dataset View source experimental_distribute_dataset( dataset, options=None ) Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example: global_batch_size = 2 # Passing the devices is optional. strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) # Create a dataset dataset = tf.data.Dataset.range(4).batch(global_batch_size) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def replica_fn(input): return input*2 result = [] # Iterate over the `tf.distribute.DistributedDataset` for x in dist_dataset: # process dataset elements result.append(strategy.run(replica_fn, args=(x,))) print(result) [PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])> }, PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])> }] Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding contains autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e.
when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding. Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF. By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation which is buffer_size is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you. Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs. Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed. For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section. Args dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above. options tf.distribute.InputOptions used to control options on how this dataset is distributed. Returns A tf.distribute.DistributedDataset. experimental_distribute_values_from_function View source experimental_distribute_values_from_function( value_fn ) Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets. Args value_fn The function to run to generate values.
It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. Returns A tf.distribute.DistributedValues containing a value for each replica. Example usage: Return constant value per replica: strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) Distribute values in array based on replica_id: strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) array_value = np.array([3., 2., 1.]) def value_fn(ctx): return array_value[ctx.replica_id_in_sync_group] distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (3.0, 2.0) Specify values using num_replicas_in_sync: strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return ctx.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) Place values on devices and distribute: strategy = tf.distribute.TPUStrategy() worker_devices = strategy.extended.worker_devices multiple_values = [] for i in range(strategy.num_replicas_in_sync): with tf.device(worker_devices[i]): multiple_values.append(tf.constant(1.0)) def value_fn(ctx): return multiple_values[ctx.replica_id_in_sync_group] distributed_values = strategy. experimental_distribute_values_from_function( value_fn) experimental_local_results View source experimental_local_results( value ) Returns the list of all local per-replica values contained in value. Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker. Args value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope. Returns A tuple of values contained in value. If value represents a single value, this returns (value,). gather View source gather( value, axis ) Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather. Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. 
For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape. Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them. strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)> Consider the following example for more combinations: strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)> Args value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices. axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). Returns A Tensor that's the concatenation of value across replicas along axis dimension. reduce View source reduce( reduce_op, value, axis ) Reduce value across replicas and return result on current device. 
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) total = strategy.reduce("SUM", per_replica_result, axis=None) total <tf.Tensor: shape=(), dtype=int32, numpy=1> To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) # Check devices on which per replica result is: strategy.experimental_local_results(per_replica_result)[0].device # /job:localhost/replica:0/task:0/device:GPU:0 strategy.experimental_local_results(per_replica_result)[1].device # /job:localhost/replica:0/task:0/device:GPU:1 total = strategy.reduce("SUM", per_replica_result, axis=None) # Check device on which reduced result is: total.device # /job:localhost/replica:0/task:0/device:CPU:0 This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing. Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi client MultiWorkerMirroredStrategy, this is CPU of each worker. There are a number of different tf.distribute APIs for reducing values across replicas: tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients. tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None) Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0) If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. 
Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4. Args reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy. axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension). Returns A Tensor. run View source run( fn, args=(), kwargs=None, options=None ) Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that correspond to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a Tensorflow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular python code. Example usage: Constant tensor input. strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) tensor_input = tf.constant(3.0) @tf.function def replica_fn(input): return input*2.0 result = strategy.run(replica_fn, args=(tensor_input,)) result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>, 1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0> } DistributedValues input. strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def run(): def value_fn(value_context): return value_context.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn2(input): return input*2 return strategy.run(replica_fn2, args=(distributed_values,)) result = run() result <tf.Tensor: shape=(), dtype=int32, numpy=4> Use tf.distribute.ReplicaContext to allreduce values. 
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"]) @tf.function def run(): def value_fn(value_context): return tf.constant(value_context.replica_id_in_sync_group) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn(input): return tf.distribute.get_replica_context().all_reduce("sum", input) return strategy.run(replica_fn, args=(distributed_values,)) result = run() result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=int32, numpy=1>, 1: <tf.Tensor: shape=(), dtype=int32, numpy=1> } Args fn The function to run on each replica. args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues. kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues. options An optional instance of tf.distribute.RunOptions specifying the options to run fn. Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica). scope View source scope() Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows: strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # Variable created inside scope: with strategy.scope(): mirrored_variable = tf.Variable(1.) mirrored_variable MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0> } # Variable created outside scope: regular_variable = tf.Variable(1.) regular_variable <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> What happens when Strategy.scope is entered? strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMiroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMiroredStrategy, a default device scope of "/CPU:0" is entered on each worker. Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial. What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. 
This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) which need to be called in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See the detailed example in the distributed Keras tutorial. Note that simply calling model(...) is not impacted - only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables. Returns A context manager.
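To tie scope, run and reduce together, here is a minimal sketch of a custom training step. It is illustrative only: the two-GPU device list, the toy model, the synthetic data and the learning rate are all assumptions, not part of the API reference above.

import tensorflow as tf

# Assumes two visible GPUs; adjust the device list to your machine.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

with strategy.scope():
    # Variables (model weights, optimizer slots) created inside the scope
    # become distributed (mirrored) variables.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(0.01)

# Synthetic data; the global batch size in this sketch is 4.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([8, 4]), tf.random.normal([8, 1]))).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(dist_inputs):
    def step_fn(inputs):
        x, y = inputs
        with tf.GradientTape() as tape:
            # Scale by the global batch size (4 here) so that summing the
            # per-replica losses below yields a per-example mean.
            loss = tf.reduce_sum(tf.square(model(x) - y)) / 4.0
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    per_replica_losses = strategy.run(step_fn, args=(dist_inputs,))
    # Aggregate the per-replica losses into one scalar for reporting.
    return strategy.reduce(tf.distribute.ReduceOp.SUM,
                           per_replica_losses, axis=None)

for batch in dist_dataset:
    print(train_step(batch).numpy())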
doc_1776
Extra permissions to enter into the permissions table when creating this object. Add, change, delete, and view permissions are automatically created for each model. This example specifies an extra permission, can_deliver_pizzas: permissions = [('can_deliver_pizzas', 'Can deliver pizzas')] This is a list or tuple of 2-tuples in the format (permission_code, human_readable_permission_name).
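As a brief illustration, the option is declared on the model's Meta class; the model and field here are hypothetical and only the permissions entry comes from the example above.

from django.db import models

class Pizza(models.Model):  # hypothetical model
    name = models.CharField(max_length=100)

    class Meta:
        permissions = [
            ("can_deliver_pizzas", "Can deliver pizzas"),
        ]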
doc_1777
Remove a click and the associated event from the list of clicks. Defaults to the last click.
doc_1778
See Migration guide for more details. tf.compat.v1.raw_ops.QueueEnqueueMany tf.raw_ops.QueueEnqueueMany( handle, components, timeout_ms=-1, name=None ) This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tuple components must have the same size in the 0th dimension. The components input has k elements, which correspond to the components of tuples stored in the given queue. N.B. If the queue is full, this operation will block until the given elements have been enqueued (or 'timeout_ms' elapses, if specified). Args handle A Tensor of type mutable string. The handle to a queue. components A list of Tensor objects. One or more tensors from which the enqueued tensors should be taken. timeout_ms An optional int. Defaults to -1. If the queue is too full, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet. name A name for the operation (optional). Returns The created Operation.
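The raw op is rarely called directly, since it needs a queue handle. As a hedged sketch of the same slicing behaviour, the higher-level tf.queue.FIFOQueue wrapper (which is backed by an enqueue-many op of this kind) can illustrate it; the capacity and values below are arbitrary.

import tensorflow as tf

# Each component tensor is sliced along dimension 0 into separate elements.
q = tf.queue.FIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[]])
q.enqueue_many([tf.constant([1, 2, 3, 4])])  # enqueues 4 scalar elements

print(q.dequeue().numpy())        # 1
print(q.dequeue_many(3).numpy())  # [2 3 4]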
doc_1779
codecs.encode(obj, encoding='utf-8', errors='strict') Encodes obj using the codec registered for encoding. Errors may be given to set the desired error handling scheme. The default error handler is 'strict' meaning that encoding errors raise ValueError (or a more codec specific subclass, such as UnicodeEncodeError). Refer to Codec Base Classes for more information on codec error handling. codecs.decode(obj, encoding='utf-8', errors='strict') Decodes obj using the codec registered for encoding. Errors may be given to set the desired error handling scheme. The default error handler is 'strict' meaning that decoding errors raise ValueError (or a more codec specific subclass, such as UnicodeDecodeError). Refer to Codec Base Classes for more information on codec error handling. The full details for each codec can also be looked up directly: codecs.lookup(encoding) Looks up the codec info in the Python codec registry and returns a CodecInfo object as defined below. Encodings are first looked up in the registry’s cache. If not found, the list of registered search functions is scanned. If no CodecInfo object is found, a LookupError is raised. Otherwise, the CodecInfo object is stored in the cache and returned to the caller. class codecs.CodecInfo(encode, decode, streamreader=None, streamwriter=None, incrementalencoder=None, incrementaldecoder=None, name=None) Codec details when looking up the codec registry. The constructor arguments are stored in attributes of the same name: name The name of the encoding. encode decode The stateless encoding and decoding functions. These must be functions or methods which have the same interface as the encode() and decode() methods of Codec instances (see Codec Interface). The functions or methods are expected to work in a stateless mode. incrementalencoder incrementaldecoder Incremental encoder and decoder classes or factory functions. These have to provide the interface defined by the base classes IncrementalEncoder and IncrementalDecoder, respectively. Incremental codecs can maintain state. streamwriter streamreader Stream writer and reader classes or factory functions. These have to provide the interface defined by the base classes StreamWriter and StreamReader, respectively. Stream codecs can maintain state. To simplify access to the various codec components, the module provides these additional functions which use lookup() for the codec lookup: codecs.getencoder(encoding) Look up the codec for the given encoding and return its encoder function. Raises a LookupError in case the encoding cannot be found. codecs.getdecoder(encoding) Look up the codec for the given encoding and return its decoder function. Raises a LookupError in case the encoding cannot be found. codecs.getincrementalencoder(encoding) Look up the codec for the given encoding and return its incremental encoder class or factory function. Raises a LookupError in case the encoding cannot be found or the codec doesn’t support an incremental encoder. codecs.getincrementaldecoder(encoding) Look up the codec for the given encoding and return its incremental decoder class or factory function. Raises a LookupError in case the encoding cannot be found or the codec doesn’t support an incremental decoder. codecs.getreader(encoding) Look up the codec for the given encoding and return its StreamReader class or factory function. Raises a LookupError in case the encoding cannot be found. codecs.getwriter(encoding) Look up the codec for the given encoding and return its StreamWriter class or factory function. 
Raises a LookupError in case the encoding cannot be found. Custom codecs are made available by registering a suitable codec search function: codecs.register(search_function) Register a codec search function. Search functions are expected to take one argument, being the encoding name in all lower case letters with hyphens and spaces converted to underscores, and return a CodecInfo object. In case a search function cannot find a given encoding, it should return None. Changed in version 3.9: Hyphens and spaces are converted to underscore. Note Search function registration is not currently reversible, which may cause problems in some cases, such as unit testing or module reloading. While the builtin open() and the associated io module are the recommended approach for working with encoded text files, this module provides additional utility functions and classes that allow the use of a wider range of codecs when working with binary files: codecs.open(filename, mode='r', encoding=None, errors='strict', buffering=-1) Open an encoded file using the given mode and return an instance of StreamReaderWriter, providing transparent encoding/decoding. The default file mode is 'r', meaning to open the file in read mode. Note Underlying encoded files are always opened in binary mode. No automatic conversion of '\n' is done on reading and writing. The mode argument may be any binary mode acceptable to the built-in open() function; the 'b' is automatically added. encoding specifies the encoding which is to be used for the file. Any encoding that encodes to and decodes from bytes is allowed, and the data types supported by the file methods depend on the codec used. errors may be given to define the error handling. It defaults to 'strict' which causes a ValueError to be raised in case an encoding error occurs. buffering has the same meaning as for the built-in open() function. It defaults to -1 which means that the default buffer size will be used. codecs.EncodedFile(file, data_encoding, file_encoding=None, errors='strict') Return a StreamRecoder instance, a wrapped version of file which provides transparent transcoding. The original file is closed when the wrapped version is closed. Data written to the wrapped file is decoded according to the given data_encoding and then written to the original file as bytes using file_encoding. Bytes read from the original file are decoded according to file_encoding, and the result is encoded using data_encoding. If file_encoding is not given, it defaults to data_encoding. errors may be given to define the error handling. It defaults to 'strict', which causes ValueError to be raised in case an encoding error occurs. codecs.iterencode(iterator, encoding, errors='strict', **kwargs) Uses an incremental encoder to iteratively encode the input provided by iterator. This function is a generator. The errors argument (as well as any other keyword argument) is passed through to the incremental encoder. This function requires that the codec accept text str objects to encode. Therefore it does not support bytes-to-bytes encoders such as base64_codec. codecs.iterdecode(iterator, encoding, errors='strict', **kwargs) Uses an incremental decoder to iteratively decode the input provided by iterator. This function is a generator. The errors argument (as well as any other keyword argument) is passed through to the incremental decoder. This function requires that the codec accept bytes objects to decode. 
Therefore it does not support text-to-text encoders such as rot_13, although rot_13 may be used equivalently with iterencode(). The module also provides the following constants which are useful for reading and writing to platform dependent files: codecs.BOM codecs.BOM_BE codecs.BOM_LE codecs.BOM_UTF8 codecs.BOM_UTF16 codecs.BOM_UTF16_BE codecs.BOM_UTF16_LE codecs.BOM_UTF32 codecs.BOM_UTF32_BE codecs.BOM_UTF32_LE These constants define various byte sequences, being Unicode byte order marks (BOMs) for several encodings. They are used in UTF-16 and UTF-32 data streams to indicate the byte order used, and in UTF-8 as a Unicode signature. BOM_UTF16 is either BOM_UTF16_BE or BOM_UTF16_LE depending on the platform’s native byte order, BOM is an alias for BOM_UTF16, BOM_LE for BOM_UTF16_LE and BOM_BE for BOM_UTF16_BE. The others represent the BOM in UTF-8 and UTF-32 encodings. Codec Base Classes The codecs module defines a set of base classes which define the interfaces for working with codec objects, and can also be used as the basis for custom codec implementations. Each codec has to define four interfaces to make it usable as a codec in Python: stateless encoder, stateless decoder, stream reader and stream writer. The stream reader and writers typically reuse the stateless encoder/decoder to implement the file protocols. Codec authors also need to define how the codec will handle encoding and decoding errors. Error Handlers To simplify and standardize error handling, codecs may implement different error handling schemes by accepting the errors string argument. The following string values are defined and implemented by all standard Python codecs: Value Meaning 'strict' Raise UnicodeError (or a subclass); this is the default. Implemented in strict_errors(). 'ignore' Ignore the malformed data and continue without further notice. Implemented in ignore_errors(). The following error handlers are only applicable to text encodings: Value Meaning 'replace' Replace with a suitable replacement marker; Python will use the official U+FFFD REPLACEMENT CHARACTER for the built-in codecs on decoding, and ‘?’ on encoding. Implemented in replace_errors(). 'xmlcharrefreplace' Replace with the appropriate XML character reference (only for encoding). Implemented in xmlcharrefreplace_errors(). 'backslashreplace' Replace with backslashed escape sequences. Implemented in backslashreplace_errors(). 'namereplace' Replace with \N{...} escape sequences (only for encoding). Implemented in namereplace_errors(). 'surrogateescape' On decoding, replace byte with individual surrogate code ranging from U+DC80 to U+DCFF. This code will then be turned back into the same byte when the 'surrogateescape' error handler is used when encoding the data. (See PEP 383 for more.) In addition, the following error handler is specific to the given codecs: Value Codecs Meaning 'surrogatepass' utf-8, utf-16, utf-32, utf-16-be, utf-16-le, utf-32-be, utf-32-le Allow encoding and decoding of surrogate codes. These codecs normally treat the presence of surrogates as an error. New in version 3.1: The 'surrogateescape' and 'surrogatepass' error handlers. Changed in version 3.4: The 'surrogatepass' error handler now works with utf-16* and utf-32* codecs. New in version 3.5: The 'namereplace' error handler. Changed in version 3.5: The 'backslashreplace' error handler now works with decoding and translating.
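A short sketch of how the errors argument changes behaviour; the sample string is arbitrary, and the outputs shown in comments are what the standard built-in codecs produce.

s = "spam\u0394"  # GREEK CAPITAL LETTER DELTA is not representable in latin-1

try:
    s.encode("latin-1")                      # 'strict' is the default
except UnicodeEncodeError as exc:
    print(exc)

print(s.encode("latin-1", "replace"))            # b'spam?'
print(s.encode("latin-1", "backslashreplace"))   # b'spam\\u0394'
print(s.encode("latin-1", "xmlcharrefreplace"))  # b'spam&#916;'
print(s.encode("latin-1", "namereplace"))        # b'spam\\N{GREEK CAPITAL LETTER DELTA}'

# On decoding, 'replace' substitutes U+FFFD and 'ignore' drops the bad byte.
print(b"sp\xffam".decode("utf-8", "replace"))    # 'sp\ufffdam'
print(b"sp\xffam".decode("utf-8", "ignore"))     # 'spam'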
The set of allowed values can be extended by registering a new named error handler: codecs.register_error(name, error_handler) Register the error handling function error_handler under the name name. The error_handler argument will be called during encoding and decoding in case of an error, when name is specified as the errors parameter. For encoding, error_handler will be called with a UnicodeEncodeError instance, which contains information about the location of the error. The error handler must either raise this or a different exception, or return a tuple with a replacement for the unencodable part of the input and a position where encoding should continue. The replacement may be either str or bytes. If the replacement is bytes, the encoder will simply copy them into the output buffer. If the replacement is a string, the encoder will encode the replacement. Encoding continues on original input at the specified position. Negative position values will be treated as being relative to the end of the input string. If the resulting position is out of bound an IndexError will be raised. Decoding and translating works similarly, except UnicodeDecodeError or UnicodeTranslateError will be passed to the handler and that the replacement from the error handler will be put into the output directly. Previously registered error handlers (including the standard error handlers) can be looked up by name: codecs.lookup_error(name) Return the error handler previously registered under the name name. Raises a LookupError in case the handler cannot be found. The following standard error handlers are also made available as module level functions: codecs.strict_errors(exception) Implements the 'strict' error handling: each encoding or decoding error raises a UnicodeError. codecs.replace_errors(exception) Implements the 'replace' error handling (for text encodings only): substitutes '?' for encoding errors (to be encoded by the codec), and '\ufffd' (the Unicode replacement character) for decoding errors. codecs.ignore_errors(exception) Implements the 'ignore' error handling: malformed data is ignored and encoding or decoding is continued without further notice. codecs.xmlcharrefreplace_errors(exception) Implements the 'xmlcharrefreplace' error handling (for encoding with text encodings only): the unencodable character is replaced by an appropriate XML character reference. codecs.backslashreplace_errors(exception) Implements the 'backslashreplace' error handling (for text encodings only): malformed data is replaced by a backslashed escape sequence. codecs.namereplace_errors(exception) Implements the 'namereplace' error handling (for encoding with text encodings only): the unencodable character is replaced by a \N{...} escape sequence. New in version 3.5. Stateless Encoding and Decoding The base Codec class defines these methods which also define the function interfaces of the stateless encoder and decoder: Codec.encode(input[, errors]) Encodes the object input and returns a tuple (output object, length consumed). For instance, text encoding converts a string object to a bytes object using a particular character set encoding (e.g., cp1252 or iso-8859-1). The errors argument defines the error handling to apply. It defaults to 'strict' handling. The method may not store state in the Codec instance. Use StreamWriter for codecs which have to keep state in order to make encoding efficient. The encoder must be able to handle zero length input and return an empty object of the output object type in this situation. 
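As a minimal illustration of register_error, here is a hypothetical handler (the name "stars" is made up for this sketch) that replaces each unencodable character with an asterisk and resumes after it.

import codecs

def star_errors(exc):
    # Return (replacement, position at which to continue encoding).
    if isinstance(exc, UnicodeEncodeError):
        return ("*" * (exc.end - exc.start), exc.end)
    raise exc

codecs.register_error("stars", star_errors)

print("spam\u0394eggs".encode("ascii", errors="stars"))  # b'spam*eggs'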
Codec.decode(input[, errors]) Decodes the object input and returns a tuple (output object, length consumed). For instance, for a text encoding, decoding converts a bytes object encoded using a particular character set encoding to a string object. For text encodings and bytes-to-bytes codecs, input must be a bytes object or one which provides the read-only buffer interface – for example, buffer objects and memory mapped files. The errors argument defines the error handling to apply. It defaults to 'strict' handling. The method may not store state in the Codec instance. Use StreamReader for codecs which have to keep state in order to make decoding efficient. The decoder must be able to handle zero length input and return an empty object of the output object type in this situation. Incremental Encoding and Decoding The IncrementalEncoder and IncrementalDecoder classes provide the basic interface for incremental encoding and decoding. Encoding/decoding the input isn’t done with one call to the stateless encoder/decoder function, but with multiple calls to the encode()/decode() method of the incremental encoder/decoder. The incremental encoder/decoder keeps track of the encoding/decoding process during method calls. The joined output of calls to the encode()/decode() method is the same as if all the single inputs were joined into one, and this input was encoded/decoded with the stateless encoder/decoder. IncrementalEncoder Objects The IncrementalEncoder class is used for encoding an input in multiple steps. It defines the following methods which every incremental encoder must define in order to be compatible with the Python codec registry. class codecs.IncrementalEncoder(errors='strict') Constructor for an IncrementalEncoder instance. All incremental encoders must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. The IncrementalEncoder may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for possible values. The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the IncrementalEncoder object. encode(object[, final]) Encodes object (taking the current state of the encoder into account) and returns the resulting encoded object. If this is the last call to encode() final must be true (the default is false). reset() Reset the encoder to the initial state. The output is discarded: call .encode(object, final=True), passing an empty byte or text string if necessary, to reset the encoder and to get the output. getstate() Return the current state of the encoder which must be an integer. The implementation should make sure that 0 is the most common state. (States that are more complicated than integers can be converted into an integer by marshaling/pickling the state and encoding the bytes of the resulting string into an integer.) setstate(state) Set the state of the encoder to state. state must be an encoder state returned by getstate(). IncrementalDecoder Objects The IncrementalDecoder class is used for decoding an input in multiple steps. It defines the following methods which every incremental decoder must define in order to be compatible with the Python codec registry. class codecs.IncrementalDecoder(errors='strict') Constructor for an IncrementalDecoder instance. 
All incremental decoders must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. The IncrementalDecoder may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for possible values. The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the IncrementalDecoder object. decode(object[, final]) Decodes object (taking the current state of the decoder into account) and returns the resulting decoded object. If this is the last call to decode() final must be true (the default is false). If final is true the decoder must decode the input completely and must flush all buffers. If this isn’t possible (e.g. because of incomplete byte sequences at the end of the input) it must initiate error handling just like in the stateless case (which might raise an exception). reset() Reset the decoder to the initial state. getstate() Return the current state of the decoder. This must be a tuple with two items, the first must be the buffer containing the still undecoded input. The second must be an integer and can be additional state info. (The implementation should make sure that 0 is the most common additional state info.) If this additional state info is 0 it must be possible to set the decoder to the state which has no input buffered and 0 as the additional state info, so that feeding the previously buffered input to the decoder returns it to the previous state without producing any output. (Additional state info that is more complicated than integers can be converted into an integer by marshaling/pickling the info and encoding the bytes of the resulting string into an integer.) setstate(state) Set the state of the decoder to state. state must be a decoder state returned by getstate(). Stream Encoding and Decoding The StreamWriter and StreamReader classes provide generic working interfaces which can be used to implement new encoding submodules very easily. See encodings.utf_8 for an example of how this is done. StreamWriter Objects The StreamWriter class is a subclass of Codec and defines the following methods which every stream writer must define in order to be compatible with the Python codec registry. class codecs.StreamWriter(stream, errors='strict') Constructor for a StreamWriter instance. All stream writers must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. The stream argument must be a file-like object open for writing text or binary data, as appropriate for the specific codec. The StreamWriter may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for the standard error handlers the underlying stream codec may support. The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the StreamWriter object. write(object) Writes the object’s contents encoded to the stream. writelines(list) Writes the concatenated list of strings to the stream (possibly by reusing the write() method). The standard bytes-to-bytes codecs do not support this method. reset() Resets the codec buffers used for keeping internal state. 
Calling this method should ensure that the data on the output is put into a clean state that allows appending of new fresh data without having to rescan the whole stream to recover state. In addition to the above methods, the StreamWriter must also inherit all other methods and attributes from the underlying stream. StreamReader Objects The StreamReader class is a subclass of Codec and defines the following methods which every stream reader must define in order to be compatible with the Python codec registry. class codecs.StreamReader(stream, errors='strict') Constructor for a StreamReader instance. All stream readers must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. The stream argument must be a file-like object open for reading text or binary data, as appropriate for the specific codec. The StreamReader may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for the standard error handlers the underlying stream codec may support. The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the StreamReader object. The set of allowed values for the errors argument can be extended with register_error(). read([size[, chars[, firstline]]]) Decodes data from the stream and returns the resulting object. The chars argument indicates the number of decoded code points or bytes to return. The read() method will never return more data than requested, but it might return less, if there is not enough available. The size argument indicates the approximate maximum number of encoded bytes or code points to read for decoding. The decoder can modify this setting as appropriate. The default value -1 indicates to read and decode as much as possible. This parameter is intended to prevent having to decode huge files in one step. The firstline flag indicates that it would be sufficient to only return the first line, if there are decoding errors on later lines. The method should use a greedy read strategy meaning that it should read as much data as is allowed within the definition of the encoding and the given size, e.g. if optional encoding endings or state markers are available on the stream, these should be read too. readline([size[, keepends]]) Read one line from the input stream and return the decoded data. size, if given, is passed as size argument to the stream’s read() method. If keepends is false line-endings will be stripped from the lines returned. readlines([sizehint[, keepends]]) Read all lines available on the input stream and return them as a list of lines. Line-endings are implemented using the codec’s decode() method and are included in the list entries if keepends is true. sizehint, if given, is passed as the size argument to the stream’s read() method. reset() Resets the codec buffers used for keeping internal state. Note that no stream repositioning should take place. This method is primarily intended to be able to recover from decoding errors. In addition to the above methods, the StreamReader must also inherit all other methods and attributes from the underlying stream. StreamReaderWriter Objects The StreamReaderWriter is a convenience class that allows wrapping streams which work in both read and write modes. 
The design is such that one can use the factory functions returned by the lookup() function to construct the instance. class codecs.StreamReaderWriter(stream, Reader, Writer, errors='strict') Creates a StreamReaderWriter instance. stream must be a file-like object. Reader and Writer must be factory functions or classes providing the StreamReader and StreamWriter interface resp. Error handling is done in the same way as defined for the stream readers and writers. StreamReaderWriter instances define the combined interfaces of StreamReader and StreamWriter classes. They inherit all other methods and attributes from the underlying stream. StreamRecoder Objects The StreamRecoder translates data from one encoding to another, which is sometimes useful when dealing with different encoding environments. The design is such that one can use the factory functions returned by the lookup() function to construct the instance. class codecs.StreamRecoder(stream, encode, decode, Reader, Writer, errors='strict') Creates a StreamRecoder instance which implements a two-way conversion: encode and decode work on the frontend — the data visible to code calling read() and write(), while Reader and Writer work on the backend — the data in stream. You can use these objects to do transparent transcodings, e.g., from Latin-1 to UTF-8 and back. The stream argument must be a file-like object. The encode and decode arguments must adhere to the Codec interface. Reader and Writer must be factory functions or classes providing objects of the StreamReader and StreamWriter interface respectively. Error handling is done in the same way as defined for the stream readers and writers. StreamRecoder instances define the combined interfaces of StreamReader and StreamWriter classes. They inherit all other methods and attributes from the underlying stream. Encodings and Unicode Strings are stored internally as sequences of code points in range 0x0–0x10FFFF. (See PEP 393 for more details about the implementation.) Once a string object is used outside of CPU and memory, endianness and how these arrays are stored as bytes become an issue. As with other codecs, serialising a string into a sequence of bytes is known as encoding, and recreating the string from the sequence of bytes is known as decoding. There are a variety of different text serialisation codecs, which are collectively referred to as text encodings. The simplest text encoding (called 'latin-1' or 'iso-8859-1') maps the code points 0–255 to the bytes 0x0–0xff, which means that a string object that contains code points above U+00FF can’t be encoded with this codec. Doing so will raise a UnicodeEncodeError that looks like the following (although the details of the error message may differ): UnicodeEncodeError: 'latin-1' codec can't encode character '\u1234' in position 3: ordinal not in range(256). There’s another group of encodings (the so called charmap encodings) that choose a different subset of all Unicode code points and how these code points are mapped to the bytes 0x0–0xff. To see how this is done simply open e.g. encodings/cp1252.py (which is an encoding that is used primarily on Windows). There’s a string constant with 256 characters that shows you which character is mapped to which byte value. All of these encodings can only encode 256 of the 1114112 code points defined in Unicode. A simple and straightforward way that can store each Unicode code point is to store each code point as four consecutive bytes.
There are two possibilities: store the bytes in big endian or in little endian order. These two encodings are called UTF-32-BE and UTF-32-LE respectively. Their disadvantage is that if e.g. you use UTF-32-BE on a little endian machine you will always have to swap bytes on encoding and decoding. UTF-32 avoids this problem: bytes will always be in natural endianness. When these bytes are read by a CPU with a different endianness, then bytes have to be swapped though. To be able to detect the endianness of a UTF-16 or UTF-32 byte sequence, there’s the so called BOM (“Byte Order Mark”). This is the Unicode character U+FEFF. This character can be prepended to every UTF-16 or UTF-32 byte sequence. The byte swapped version of this character (0xFFFE) is an illegal character that may not appear in a Unicode text. So when the first character in a UTF-16 or UTF-32 byte sequence appears to be a U+FFFE the bytes have to be swapped on decoding. Unfortunately the character U+FEFF had a second purpose as a ZERO WIDTH NO-BREAK SPACE: a character that has no width and doesn’t allow a word to be split. It can e.g. be used to give hints to a ligature algorithm. With Unicode 4.0 using U+FEFF as a ZERO WIDTH NO-BREAK SPACE has been deprecated (with U+2060 (WORD JOINER) assuming this role). Nevertheless Unicode software still must be able to handle U+FEFF in both roles: as a BOM it’s a device to determine the storage layout of the encoded bytes, and vanishes once the byte sequence has been decoded into a string; as a ZERO WIDTH NO-BREAK SPACE it’s a normal character that will be decoded like any other. There’s another encoding that is able to encode the full range of Unicode characters: UTF-8. UTF-8 is an 8-bit encoding, which means there are no issues with byte order in UTF-8. Each byte in a UTF-8 byte sequence consists of two parts: marker bits (the most significant bits) and payload bits. The marker bits are a sequence of zero to four 1 bits followed by a 0 bit. Unicode characters are encoded like this (with x being payload bits, which when concatenated give the Unicode character): Range Encoding U-00000000 … U-0000007F 0xxxxxxx U-00000080 … U-000007FF 110xxxxx 10xxxxxx U-00000800 … U-0000FFFF 1110xxxx 10xxxxxx 10xxxxxx U-00010000 … U-0010FFFF 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx The least significant bit of the Unicode character is the rightmost x bit. As UTF-8 is an 8-bit encoding no BOM is required and any U+FEFF character in the decoded string (even if it’s the first character) is treated as a ZERO WIDTH NO-BREAK SPACE. Without external information it’s impossible to reliably determine which encoding was used for encoding a string. Each charmap encoding can decode any random byte sequence. However that’s not possible with UTF-8, as UTF-8 byte sequences have a structure that doesn’t allow arbitrary byte sequences. To increase the reliability with which a UTF-8 encoding can be detected, Microsoft invented a variant of UTF-8 (that Python 2.5 calls "utf-8-sig") for its Notepad program: Before any of the Unicode characters is written to the file, a UTF-8 encoded BOM (which looks like this as a byte sequence: 0xef, 0xbb, 0xbf) is written. As it’s rather improbable that any charmap encoded file starts with these byte values (which would e.g. map to ï»¿ in iso-8859-1), this increases the probability that a utf-8-sig encoding can be correctly guessed from the byte sequence.
So here the BOM is not used to be able to determine the byte order used for generating the byte sequence, but as a signature that helps in guessing the encoding. On encoding the utf-8-sig codec will write 0xef, 0xbb, 0xbf as the first three bytes to the file. On decoding utf-8-sig will skip those three bytes if they appear as the first three bytes in the file. In UTF-8, the use of the BOM is discouraged and should generally be avoided. Standard Encodings Python comes with a number of codecs built-in, either implemented as C functions or with dictionaries as mapping tables. The following table lists the codecs by name, together with a few common aliases, and the languages for which the encoding is likely used. Neither the list of aliases nor the list of languages is meant to be exhaustive. Notice that spelling alternatives that only differ in case or use a hyphen instead of an underscore are also valid aliases; therefore, e.g. 'utf-8' is a valid alias for the 'utf_8' codec. CPython implementation detail: Some common encodings can bypass the codecs lookup machinery to improve performance. These optimization opportunities are only recognized by CPython for a limited set of (case insensitive) aliases: utf-8, utf8, latin-1, latin1, iso-8859-1, iso8859-1, mbcs (Windows only), ascii, us-ascii, utf-16, utf16, utf-32, utf32, and the same using underscores instead of dashes. Using alternative aliases for these encodings may result in slower execution. Changed in version 3.6: Optimization opportunity recognized for us-ascii. Many of the character sets support the same languages. They vary in individual characters (e.g. whether the EURO SIGN is supported or not), and in the assignment of characters to code positions. For the European languages in particular, the following variants typically exist: an ISO 8859 codeset a Microsoft Windows code page, which is typically derived from an 8859 codeset, but replaces control characters with additional graphic characters an IBM EBCDIC code page an IBM PC code page, which is ASCII compatible Codec Aliases Languages ascii 646, us-ascii English big5 big5-tw, csbig5 Traditional Chinese big5hkscs big5-hkscs, hkscs Traditional Chinese cp037 IBM037, IBM039 English cp273 273, IBM273, csIBM273 German New in version 3.4. cp424 EBCDIC-CP-HE, IBM424 Hebrew cp437 437, IBM437 English cp500 EBCDIC-CP-BE, EBCDIC-CP-CH, IBM500 Western Europe cp720 Arabic cp737 Greek cp775 IBM775 Baltic languages cp850 850, IBM850 Western Europe cp852 852, IBM852 Central and Eastern Europe cp855 855, IBM855 Bulgarian, Byelorussian, Macedonian, Russian, Serbian cp856 Hebrew cp857 857, IBM857 Turkish cp858 858, IBM858 Western Europe cp860 860, IBM860 Portuguese cp861 861, CP-IS, IBM861 Icelandic cp862 862, IBM862 Hebrew cp863 863, IBM863 Canadian cp864 IBM864 Arabic cp865 865, IBM865 Danish, Norwegian cp866 866, IBM866 Russian cp869 869, CP-GR, IBM869 Greek cp874 Thai cp875 Greek cp932 932, ms932, mskanji, ms-kanji Japanese cp949 949, ms949, uhc Korean cp950 950, ms950 Traditional Chinese cp1006 Urdu cp1026 ibm1026 Turkish cp1125 1125, ibm1125, cp866u, ruscii Ukrainian New in version 3.4. 
cp1140 ibm1140 Western Europe cp1250 windows-1250 Central and Eastern Europe cp1251 windows-1251 Bulgarian, Byelorussian, Macedonian, Russian, Serbian cp1252 windows-1252 Western Europe cp1253 windows-1253 Greek cp1254 windows-1254 Turkish cp1255 windows-1255 Hebrew cp1256 windows-1256 Arabic cp1257 windows-1257 Baltic languages cp1258 windows-1258 Vietnamese euc_jp eucjp, ujis, u-jis Japanese euc_jis_2004 jisx0213, eucjis2004 Japanese euc_jisx0213 eucjisx0213 Japanese euc_kr euckr, korean, ksc5601, ks_c-5601, ks_c-5601-1987, ksx1001, ks_x-1001 Korean gb2312 chinese, csiso58gb231280, euc-cn, euccn, eucgb2312-cn, gb2312-1980, gb2312-80, iso-ir-58 Simplified Chinese gbk 936, cp936, ms936 Unified Chinese gb18030 gb18030-2000 Unified Chinese hz hzgb, hz-gb, hz-gb-2312 Simplified Chinese iso2022_jp csiso2022jp, iso2022jp, iso-2022-jp Japanese iso2022_jp_1 iso2022jp-1, iso-2022-jp-1 Japanese iso2022_jp_2 iso2022jp-2, iso-2022-jp-2 Japanese, Korean, Simplified Chinese, Western Europe, Greek iso2022_jp_2004 iso2022jp-2004, iso-2022-jp-2004 Japanese iso2022_jp_3 iso2022jp-3, iso-2022-jp-3 Japanese iso2022_jp_ext iso2022jp-ext, iso-2022-jp-ext Japanese iso2022_kr csiso2022kr, iso2022kr, iso-2022-kr Korean latin_1 iso-8859-1, iso8859-1, 8859, cp819, latin, latin1, L1 Western Europe iso8859_2 iso-8859-2, latin2, L2 Central and Eastern Europe iso8859_3 iso-8859-3, latin3, L3 Esperanto, Maltese iso8859_4 iso-8859-4, latin4, L4 Baltic languages iso8859_5 iso-8859-5, cyrillic Bulgarian, Byelorussian, Macedonian, Russian, Serbian iso8859_6 iso-8859-6, arabic Arabic iso8859_7 iso-8859-7, greek, greek8 Greek iso8859_8 iso-8859-8, hebrew Hebrew iso8859_9 iso-8859-9, latin5, L5 Turkish iso8859_10 iso-8859-10, latin6, L6 Nordic languages iso8859_11 iso-8859-11, thai Thai languages iso8859_13 iso-8859-13, latin7, L7 Baltic languages iso8859_14 iso-8859-14, latin8, L8 Celtic languages iso8859_15 iso-8859-15, latin9, L9 Western Europe iso8859_16 iso-8859-16, latin10, L10 South-Eastern Europe johab cp1361, ms1361 Korean koi8_r Russian koi8_t Tajik New in version 3.5. koi8_u Ukrainian kz1048 kz_1048, strk1048_2002, rk1048 Kazakh New in version 3.5. mac_cyrillic maccyrillic Bulgarian, Byelorussian, Macedonian, Russian, Serbian mac_greek macgreek Greek mac_iceland maciceland Icelandic mac_latin2 maclatin2, maccentraleurope, mac_centeuro Central and Eastern Europe mac_roman macroman, macintosh Western Europe mac_turkish macturkish Turkish ptcp154 csptcp154, pt154, cp154, cyrillic-asian Kazakh shift_jis csshiftjis, shiftjis, sjis, s_jis Japanese shift_jis_2004 shiftjis2004, sjis_2004, sjis2004 Japanese shift_jisx0213 shiftjisx0213, sjisx0213, s_jisx0213 Japanese utf_32 U32, utf32 all languages utf_32_be UTF-32BE all languages utf_32_le UTF-32LE all languages utf_16 U16, utf16 all languages utf_16_be UTF-16BE all languages utf_16_le UTF-16LE all languages utf_7 U7, unicode-1-1-utf-7 all languages utf_8 U8, UTF, utf8, cp65001 all languages utf_8_sig all languages Changed in version 3.4: The utf-16* and utf-32* encoders no longer allow surrogate code points (U+D800–U+DFFF) to be encoded. The utf-32* decoders no longer decode byte sequences that correspond to surrogate code points. Changed in version 3.8: cp65001 is now an alias to utf_8. Python Specific Encodings A number of predefined codecs are specific to Python, so their codec names have no meaning outside Python. 
These are listed in the tables below based on the expected input and output types (note that while text encodings are the most common use case for codecs, the underlying codec infrastructure supports arbitrary data transforms rather than just text encodings). For asymmetric codecs, the stated meaning describes the encoding direction. Text Encodings The following codecs provide str to bytes encoding and bytes-like object to str decoding, similar to the Unicode text encodings. Codec Aliases Meaning idna Implement RFC 3490, see also encodings.idna. Only errors='strict' is supported. mbcs ansi, dbcs Windows only: Encode the operand according to the ANSI codepage (CP_ACP). oem Windows only: Encode the operand according to the OEM codepage (CP_OEMCP). New in version 3.6. palmos Encoding of PalmOS 3.5. punycode Implement RFC 3492. Stateful codecs are not supported. raw_unicode_escape Latin-1 encoding with \uXXXX and \UXXXXXXXX for other code points. Existing backslashes are not escaped in any way. It is used in the Python pickle protocol. undefined Raise an exception for all conversions, even empty strings. The error handler is ignored. unicode_escape Encoding suitable as the contents of a Unicode literal in ASCII-encoded Python source code, except that quotes are not escaped. Decode from Latin-1 source code. Beware that Python source code actually uses UTF-8 by default. Changed in version 3.8: “unicode_internal” codec is removed. Binary Transforms The following codecs provide binary transforms: bytes-like object to bytes mappings. They are not supported by bytes.decode() (which only produces str output). Codec Aliases Meaning Encoder / decoder base64_codec 1 base64, base_64 Convert the operand to multiline MIME base64 (the result always includes a trailing '\n'). Changed in version 3.4: accepts any bytes-like object as input for encoding and decoding base64.encodebytes() / base64.decodebytes() bz2_codec bz2 Compress the operand using bz2. bz2.compress() / bz2.decompress() hex_codec hex Convert the operand to hexadecimal representation, with two digits per byte. binascii.b2a_hex() / binascii.a2b_hex() quopri_codec quopri, quotedprintable, quoted_printable Convert the operand to MIME quoted printable. quopri.encode() with quotetabs=True / quopri.decode() uu_codec uu Convert the operand using uuencode. uu.encode() / uu.decode() zlib_codec zip, zlib Compress the operand using gzip. zlib.compress() / zlib.decompress() 1 In addition to bytes-like objects, 'base64_codec' also accepts ASCII-only instances of str for decoding New in version 3.2: Restoration of the binary transforms. Changed in version 3.4: Restoration of the aliases for the binary transforms. Text Transforms The following codec provides a text transform: a str to str mapping. It is not supported by str.encode() (which only produces bytes output). Codec Aliases Meaning rot_13 rot13 Return the Caesar-cypher encryption of the operand. New in version 3.2: Restoration of the rot_13 text transform. Changed in version 3.4: Restoration of the rot13 alias. encodings.idna — Internationalized Domain Names in Applications This module implements RFC 3490 (Internationalized Domain Names in Applications) and RFC 3492 (Nameprep: A Stringprep Profile for Internationalized Domain Names (IDN)). It builds upon the punycode encoding and stringprep. If you need the IDNA 2008 standard from RFC 5891 and RFC 5895, use the third-party idna module <https://pypi.org/project/idna/>_. 
These RFCs together define a protocol to support non-ASCII characters in domain names. A domain name containing non-ASCII characters (such as www.Alliancefrançaise.nu) is converted into an ASCII-compatible encoding (ACE, such as www.xn--alliancefranaise-npb.nu). The ACE form of the domain name is then used in all places where arbitrary characters are not allowed by the protocol, such as DNS queries, HTTP Host fields, and so on. This conversion is carried out in the application; if possible invisible to the user: The application should transparently convert Unicode domain labels to IDNA on the wire, and convert back ACE labels to Unicode before presenting them to the user. Python supports this conversion in several ways: the idna codec performs conversion between Unicode and ACE, separating an input string into labels based on the separator characters defined in section 3.1 of RFC 3490 and converting each label to ACE as required, and conversely separating an input byte string into labels based on the . separator and converting any ACE labels found into unicode. Furthermore, the socket module transparently converts Unicode host names to ACE, so that applications need not be concerned about converting host names themselves when they pass them to the socket module. On top of that, modules that have host names as function parameters, such as http.client and ftplib, accept Unicode host names (http.client then also transparently sends an IDNA hostname in the Host field if it sends that field at all). When receiving host names from the wire (such as in reverse name lookup), no automatic conversion to Unicode is performed: applications wishing to present such host names to the user should decode them to Unicode. The module encodings.idna also implements the nameprep procedure, which performs certain normalizations on host names, to achieve case-insensitivity of international domain names, and to unify similar characters. The nameprep functions can be used directly if desired. encodings.idna.nameprep(label) Return the nameprepped version of label. The implementation currently assumes query strings, so AllowUnassigned is true. encodings.idna.ToASCII(label) Convert a label to ASCII, as specified in RFC 3490. UseSTD3ASCIIRules is assumed to be false. encodings.idna.ToUnicode(label) Convert a label to Unicode, as specified in RFC 3490. encodings.mbcs — Windows ANSI codepage This module implements the ANSI codepage (CP_ACP). Availability: Windows only. Changed in version 3.3: Support any error handler. Changed in version 3.2: Before 3.2, the errors argument was ignored; 'replace' was always used to encode, and 'ignore' to decode. encodings.utf_8_sig — UTF-8 codec with BOM signature This module implements a variant of the UTF-8 codec. On encoding, a UTF-8 encoded BOM will be prepended to the UTF-8 encoded bytes. For the stateful encoder this is only done once (on the first write to the byte stream). On decoding, an optional UTF-8 encoded BOM at the start of the data will be skipped.
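To make the BOM handling concrete, here is a small sketch of a utf-8-sig round trip; the sample string is arbitrary, and the behaviour shown follows the description above.

data = "spam".encode("utf-8-sig")
print(data)                      # b'\xef\xbb\xbfspam' (BOM prepended on encoding)

print(data.decode("utf-8-sig"))  # 'spam' (leading BOM skipped on decoding)
print(data.decode("utf-8"))      # '\ufeffspam' (plain utf-8 keeps the U+FEFF)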
doc_1780
'blogs.blog': lambda o: "/blogs/%s/" % o.slug, 'news.story': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug), } The model name used in this setting should be all lowercase, regardless of the case of the actual model class name. ADMINS Default: [] (Empty list) A list of all the people who get code error notifications. When DEBUG=False and AdminEmailHandler is configured in LOGGING (done by default), Django emails these people the details of exceptions raised in the request/response cycle. Each item in the list should be a tuple of (Full name, email address). Example: [('John', 'john@example.com'), ('Mary', 'mary@example.com')] ALLOWED_HOSTS Default: [] (Empty list) A list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent HTTP Host header attacks, which are possible even under many seemingly-safe web server configurations. Values in this list can be fully qualified names (e.g. 'www.example.com'), in which case they will be matched against the request’s Host header exactly (case-insensitive, not including port). A value beginning with a period can be used as a subdomain wildcard: '.example.com' will match example.com, www.example.com, and any other subdomain of example.com. A value of '*' will match anything; in this case you are responsible to provide your own validation of the Host header (perhaps in a middleware; if so this middleware must be listed first in MIDDLEWARE). Django also allows the fully qualified domain name (FQDN) of any entries. Some browsers include a trailing dot in the Host header which Django strips when performing host validation. If the Host header (or X-Forwarded-Host if USE_X_FORWARDED_HOST is enabled) does not match any value in this list, the django.http.HttpRequest.get_host() method will raise SuspiciousOperation. When DEBUG is True and ALLOWED_HOSTS is empty, the host is validated against ['.localhost', '127.0.0.1', '[::1]']. ALLOWED_HOSTS is also checked when running tests. This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection. APPEND_SLASH Default: True When set to True, if the request URL does not match any of the patterns in the URLconf and it doesn’t end in a slash, an HTTP redirect is issued to the same URL with a slash appended. Note that the redirect may cause any data submitted in a POST request to be lost. The APPEND_SLASH setting is only used if CommonMiddleware is installed (see Middleware). See also PREPEND_WWW. CACHES Default: { 'default': { 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache', } } A dictionary containing the settings for all caches to be used with Django. It is a nested dictionary whose contents maps cache aliases to a dictionary containing the options for an individual cache. The CACHES setting must configure a default cache; any number of additional caches may also be specified. If you are using a cache backend other than the local memory cache, or you need to define multiple caches, other options will be required. The following cache options are available. BACKEND Default: '' (Empty string) The cache backend to use. 
The built-in cache backends are: 'django.core.cache.backends.db.DatabaseCache' 'django.core.cache.backends.dummy.DummyCache' 'django.core.cache.backends.filebased.FileBasedCache' 'django.core.cache.backends.locmem.LocMemCache' 'django.core.cache.backends.memcached.PyMemcacheCache' 'django.core.cache.backends.memcached.PyLibMCCache' 'django.core.cache.backends.redis.RedisCache' You can use a cache backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path of a cache backend class (i.e. mypackage.backends.whatever.WhateverCache). Changed in Django 3.2: The PyMemcacheCache backend was added. Changed in Django 4.0: The RedisCache backend was added. KEY_FUNCTION A string containing a dotted path to a function (or any callable) that defines how to compose a prefix, version and key into a final cache key. The default implementation is equivalent to the function: def make_key(key, key_prefix, version): return ':'.join([key_prefix, str(version), key]) You may use any key function you want, as long as it has the same argument signature. See the cache documentation for more information. KEY_PREFIX Default: '' (Empty string) A string that will be automatically included (prepended by default) to all cache keys used by the Django server. See the cache documentation for more information. LOCATION Default: '' (Empty string) The location of the cache to use. This might be the directory for a file system cache, a host and port for a memcache server, or an identifying name for a local memory cache. e.g.: CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache', 'LOCATION': '/var/tmp/django_cache', } } OPTIONS Default: None Extra parameters to pass to the cache backend. Available parameters vary depending on your cache backend. Some information on available parameters can be found in the cache arguments documentation. For more information, consult your backend module’s own documentation. TIMEOUT Default: 300 The number of seconds before a cache entry is considered stale. If the value of this setting is None, cache entries will not expire. A value of 0 causes keys to immediately expire (effectively “don’t cache”). VERSION Default: 1 The default version number for cache keys generated by the Django server. See the cache documentation for more information. CACHE_MIDDLEWARE_ALIAS Default: 'default' The cache connection to use for the cache middleware. CACHE_MIDDLEWARE_KEY_PREFIX Default: '' (Empty string) A string which will be prefixed to the cache keys generated by the cache middleware. This prefix is combined with the KEY_PREFIX setting; it does not replace it. See Django’s cache framework. CACHE_MIDDLEWARE_SECONDS Default: 600 The default number of seconds to cache a page for the cache middleware. See Django’s cache framework. CSRF_COOKIE_AGE Default: 31449600 (approximately 1 year, in seconds) The age of CSRF cookies, in seconds. The reason for setting a long-lived expiration time is to avoid problems in the case of a user closing a browser or bookmarking a page and then loading that page from a browser cache. Without persistent cookies, the form submission would fail in this case. Some browsers (specifically Internet Explorer) can disallow the use of persistent cookies or can have the indexes to the cookie jar corrupted on disk, thereby causing CSRF protection checks to (sometimes intermittently) fail. Change this setting to None to use session-based CSRF cookies, which keep the cookies in-memory instead of on persistent storage. 
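For reference, here is a sketch that pulls together several of the cache settings described above (backend, location, key prefix, timeout and the cache middleware options). The backend choice, alias names and numbers are purely illustrative, not recommendations.

# settings.py (illustrative values only)
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379',
        'KEY_PREFIX': 'mysite',   # prepended to every cache key
        'TIMEOUT': 60,            # entries become stale after 60 seconds
        'VERSION': 2,
    },
    'local': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    },
}

CACHE_MIDDLEWARE_ALIAS = 'default'
CACHE_MIDDLEWARE_KEY_PREFIX = 'mysite'
CACHE_MIDDLEWARE_SECONDS = 600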
CSRF_COOKIE_DOMAIN Default: None The domain to be used when setting the CSRF cookie. This can be useful for easily allowing cross-subdomain requests to be excluded from the normal cross site request forgery protection. It should be set to a string such as ".example.com" to allow a POST request from a form on one subdomain to be accepted by a view served from another subdomain. Please note that the presence of this setting does not imply that Django’s CSRF protection is safe from cross-subdomain attacks by default - please see the CSRF limitations section. CSRF_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the CSRF cookie. If this is set to True, client-side JavaScript will not be able to access the CSRF cookie. Designating the CSRF cookie as HttpOnly doesn’t offer any practical protection because CSRF is only to protect against cross-domain attacks. If an attacker can read the cookie via JavaScript, they’re already on the same domain as far as the browser knows, so they can do anything they like anyway. (XSS is a much bigger hole than CSRF.) Although the setting offers little practical benefit, it’s sometimes required by security auditors. If you enable this and need to send the value of the CSRF token with an AJAX request, your JavaScript must pull the value from a hidden CSRF token form input instead of from the cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. CSRF_COOKIE_NAME Default: 'csrftoken' The name of the cookie to use for the CSRF authentication token. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Cross Site Request Forgery protection. CSRF_COOKIE_PATH Default: '/' The path set on the CSRF cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own CSRF cookie. CSRF_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the CSRF cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. CSRF_COOKIE_SECURE Default: False Whether to use a secure cookie for the CSRF cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent with an HTTPS connection. CSRF_USE_SESSIONS Default: False Whether to store the CSRF token in the user’s session instead of in a cookie. It requires the use of django.contrib.sessions. Storing the CSRF token in a cookie (Django’s default) is safe, but storing it in the session is common practice in other web frameworks and therefore sometimes demanded by security auditors. Since the default error views require the CSRF token, SessionMiddleware must appear in MIDDLEWARE before any middleware that may raise an exception to trigger an error view (such as PermissionDenied) if you’re using CSRF_USE_SESSIONS. See Middleware ordering. CSRF_FAILURE_VIEW Default: 'django.views.csrf.csrf_failure' A dotted path to the view function to be used when an incoming request is rejected by the CSRF protection. The function should have this signature: def csrf_failure(request, reason=""): ... where reason is a short message (intended for developers or logging, not for end users) indicating the reason the request was rejected. It should return an HttpResponseForbidden. 
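As an illustration, a minimal sketch of such a view follows; the module path used for the setting is a placeholder, not something Django provides:
# myproject/views.py (hypothetical module)
from django.http import HttpResponseForbidden

def csrf_failure(request, reason=""):
    # "reason" is developer-facing; avoid echoing it to end users verbatim.
    return HttpResponseForbidden("CSRF verification failed. Request aborted.")
Point the setting at it with CSRF_FAILURE_VIEW = 'myproject.views.csrf_failure'.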
django.views.csrf.csrf_failure() accepts an additional template_name parameter that defaults to '403_csrf.html'. If a template with that name exists, it will be used to render the page. CSRF_HEADER_NAME Default: 'HTTP_X_CSRFTOKEN' The name of the request header used for CSRF authentication. As with other HTTP headers in request.META, the header name received from the server is normalized by converting all characters to uppercase, replacing any hyphens with underscores, and adding an 'HTTP_' prefix to the name. For example, if your client sends a 'X-XSRF-TOKEN' header, the setting should be 'HTTP_X_XSRF_TOKEN'. CSRF_TRUSTED_ORIGINS Default: [] (Empty list) A list of trusted origins for unsafe requests (e.g. POST). For requests that include the Origin header, Django’s CSRF protection requires that header match the origin present in the Host header. For a secure unsafe request that doesn’t include the Origin header, the request must have a Referer header that matches the origin present in the Host header. These checks prevent, for example, a POST request from subdomain.example.com from succeeding against api.example.com. If you need cross-origin unsafe requests, continuing the example, add 'https://subdomain.example.com' to this list (and/or http://... if requests originate from an insecure page). The setting also supports subdomains, so you could add 'https://*.example.com', for example, to allow access from all subdomains of example.com. Changed in Django 4.0: The values in older versions must only include the hostname (possibly with a leading dot) and not the scheme or an asterisk. Also, Origin header checking isn’t performed in older versions. DATABASES Default: {} (Empty dictionary) A dictionary containing the settings for all databases to be used with Django. It is a nested dictionary whose contents map a database alias to a dictionary containing the options for an individual database. The DATABASES setting must configure a default database; any number of additional databases may also be specified. The simplest possible settings file is for a single-database setup using SQLite. This can be configured using the following: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'mydatabase', } } When connecting to other database backends, such as MariaDB, MySQL, Oracle, or PostgreSQL, additional connection parameters will be required. See the ENGINE setting below on how to specify other database types. This example is for PostgreSQL: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'mydatabase', 'USER': 'mydatabaseuser', 'PASSWORD': 'mypassword', 'HOST': '127.0.0.1', 'PORT': '5432', } } The following inner options that may be required for more complex configurations are available: ATOMIC_REQUESTS Default: False Set this to True to wrap each view in a transaction on this database. See Tying transactions to HTTP requests. AUTOCOMMIT Default: True Set this to False if you want to disable Django’s transaction management and implement your own. ENGINE Default: '' (Empty string) The database backend to use. The built-in database backends are: 'django.db.backends.postgresql' 'django.db.backends.mysql' 'django.db.backends.sqlite3' 'django.db.backends.oracle' You can use a database backend that doesn’t ship with Django by setting ENGINE to a fully-qualified path (i.e. mypackage.backends.whatever). HOST Default: '' (Empty string) Which host to use when connecting to the database. An empty string means localhost. Not used with SQLite. 
If this value starts with a forward slash ('/') and you’re using MySQL, MySQL will connect via a Unix socket to the specified socket. For example: "HOST": '/var/run/mysql' If you’re using MySQL and this value doesn’t start with a forward slash, then this value is assumed to be the host. If you’re using PostgreSQL, by default (empty HOST), the connection to the database is done through UNIX domain sockets (‘local’ lines in pg_hba.conf). If your UNIX domain socket is not in the standard location, use the same value of unix_socket_directory from postgresql.conf. If you want to connect through TCP sockets, set HOST to ‘localhost’ or ‘127.0.0.1’ (‘host’ lines in pg_hba.conf). On Windows, you should always define HOST, as UNIX domain sockets are not available. NAME Default: '' (Empty string) The name of the database to use. For SQLite, it’s the full path to the database file. When specifying the path, always use forward slashes, even on Windows (e.g. C:/homes/user/mysite/sqlite3.db). CONN_MAX_AGE Default: 0 The lifetime of a database connection, as an integer of seconds. Use 0 to close database connections at the end of each request — Django’s historical behavior — and None for unlimited persistent connections. OPTIONS Default: {} (Empty dictionary) Extra parameters to use when connecting to the database. Available parameters vary depending on your database backend. Some information on available parameters can be found in the Database Backends documentation. For more information, consult your backend module’s own documentation. PASSWORD Default: '' (Empty string) The password to use when connecting to the database. Not used with SQLite. PORT Default: '' (Empty string) The port to use when connecting to the database. An empty string means the default port. Not used with SQLite. TIME_ZONE Default: None A string representing the time zone for this database connection or None. This inner option of the DATABASES setting accepts the same values as the general TIME_ZONE setting. When USE_TZ is True and this option is set, reading datetimes from the database returns aware datetimes in this time zone instead of UTC. When USE_TZ is False, it is an error to set this option. If the database backend doesn’t support time zones (e.g. SQLite, MySQL, Oracle), Django reads and writes datetimes in local time according to this option if it is set and in UTC if it isn’t. Changing the connection time zone changes how datetimes are read from and written to the database. If Django manages the database and you don’t have a strong reason to do otherwise, you should leave this option unset. It’s best to store datetimes in UTC because it avoids ambiguous or nonexistent datetimes during daylight saving time changes. Also, receiving datetimes in UTC keeps datetime arithmetic simple — there’s no need to consider potential offset changes over a DST transition. If you’re connecting to a third-party database that stores datetimes in a local time rather than UTC, then you must set this option to the appropriate time zone. Likewise, if Django manages the database but third-party systems connect to the same database and expect to find datetimes in local time, then you must set this option. If the database backend supports time zones (e.g. PostgreSQL), the TIME_ZONE option is very rarely needed. It can be changed at any time; the database takes care of converting datetimes to the desired time zone. 
Setting the time zone of the database connection may be useful for running raw SQL queries involving date/time functions provided by the database, such as date_trunc, because their results depend on the time zone. However, this has a downside: receiving all datetimes in local time makes datetime arithmetic more tricky — you must account for possible offset changes over DST transitions. Consider converting to local time explicitly with AT TIME ZONE in raw SQL queries instead of setting the TIME_ZONE option. DISABLE_SERVER_SIDE_CURSORS Default: False Set this to True if you want to disable the use of server-side cursors with QuerySet.iterator(). Transaction pooling and server-side cursors describes the use case. This is a PostgreSQL-specific setting. USER Default: '' (Empty string) The username to use when connecting to the database. Not used with SQLite. TEST Default: {} (Empty dictionary) A dictionary of settings for test databases; for more details about the creation and use of test databases, see The test database. Here’s an example with a test database configuration: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'USER': 'mydatabaseuser', 'NAME': 'mydatabase', 'TEST': { 'NAME': 'mytestdatabase', }, }, } The following keys in the TEST dictionary are available: CHARSET Default: None The character set encoding used to create the test database. The value of this string is passed directly through to the database, so its format is backend-specific. Supported by the PostgreSQL (postgresql) and MySQL (mysql) backends. COLLATION Default: None The collation order to use when creating the test database. This value is passed directly to the backend, so its format is backend-specific. Only supported for the mysql backend (see the MySQL manual for details). DEPENDENCIES Default: ['default'], for all databases other than default, which has no dependencies. The creation-order dependencies of the database. See the documentation on controlling the creation order of test databases for details. MIGRATE Default: True When set to False, migrations won’t run when creating the test database. This is similar to setting None as a value in MIGRATION_MODULES, but for all apps. MIRROR Default: None The alias of the database that this database should mirror during testing. This setting exists to allow for testing of primary/replica (referred to as master/slave by some databases) configurations of multiple databases. See the documentation on testing primary/replica configurations for details. NAME Default: None The name of database to use when running the test suite. If the default value (None) is used with the SQLite database engine, the tests will use a memory resident database. For all other database engines the test database will use the name 'test_' + DATABASE_NAME. See The test database. SERIALIZE Boolean value to control whether or not the default test runner serializes the database into an in-memory JSON string before running tests (used to restore the database state between tests if you don’t have transactions). You can set this to False to speed up creation time if you don’t have any test classes with serialized_rollback=True. Deprecated since version 4.0: This setting is deprecated as it can be inferred from the databases with the serialized_rollback option enabled. TEMPLATE This is a PostgreSQL-specific setting. The name of a template (e.g. 'template0') from which to create the test database. CREATE_DB Default: True This is an Oracle-specific setting. 
If it is set to False, the test tablespaces won’t be automatically created at the beginning of the tests or dropped at the end. CREATE_USER Default: True This is an Oracle-specific setting. If it is set to False, the test user won’t be automatically created at the beginning of the tests and dropped at the end. USER Default: None This is an Oracle-specific setting. The username to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will use 'test_' + USER. PASSWORD Default: None This is an Oracle-specific setting. The password to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will generate a random password. ORACLE_MANAGED_FILES Default: False This is an Oracle-specific setting. If set to True, Oracle Managed Files (OMF) tablespaces will be used. DATAFILE and DATAFILE_TMP will be ignored. TBLSPACE Default: None This is an Oracle-specific setting. The name of the tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER. TBLSPACE_TMP Default: None This is an Oracle-specific setting. The name of the temporary tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER + '_temp'. DATAFILE Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE. If not provided, Django will use TBLSPACE + '.dbf'. DATAFILE_TMP Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE_TMP. If not provided, Django will use TBLSPACE_TMP + '.dbf'. DATAFILE_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE is allowed to grow to. DATAFILE_TMP_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE_TMP is allowed to grow to. DATAFILE_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE. DATAFILE_TMP_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE_TMP. DATAFILE_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE is extended when more space is required. DATAFILE_TMP_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE_TMP is extended when more space is required. DATA_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size in bytes that a request body may be before a SuspiciousOperation (RequestDataTooBig) is raised. The check is done when accessing request.body or request.POST and is calculated against the total request size excluding any file upload data. You can set this to None to disable the check. Applications that are expected to receive unusually large form posts should tune this setting. The amount of request data is correlated to the amount of memory needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. See also FILE_UPLOAD_MAX_MEMORY_SIZE. DATA_UPLOAD_MAX_NUMBER_FIELDS Default: 1000 The maximum number of parameters that may be received via GET or POST before a SuspiciousOperation (TooManyFields) is raised. You can set this to None to disable the check. Applications that are expected to receive an unusually large number of form fields should tune this setting. 
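For instance, a sketch of relaxing both upload limits for an application that legitimately receives very large forms (the numbers are arbitrary placeholders, not recommendations):
DATA_UPLOAD_MAX_MEMORY_SIZE = 10 * 1024 * 1024  # allow request bodies up to 10 MB
DATA_UPLOAD_MAX_NUMBER_FIELDS = 5000            # allow up to 5000 GET/POST parameters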
The number of request parameters is correlated to the amount of time needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. DATABASE_ROUTERS Default: [] (Empty list) The list of routers that will be used to determine which database to use when performing a database query. See the documentation on automatic database routing in multi database configurations. DATE_FORMAT Default: 'N j, Y' (e.g. Feb. 4, 2003) The default formatting to use for displaying date fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATETIME_FORMAT, TIME_FORMAT and SHORT_DATE_FORMAT. DATE_INPUT_FORMATS Default:
[
    '%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y',  # '2006-10-25', '10/25/2006', '10/25/06'
    '%b %d %Y', '%b %d, %Y',             # 'Oct 25 2006', 'Oct 25, 2006'
    '%d %b %Y', '%d %b, %Y',             # '25 Oct 2006', '25 Oct, 2006'
    '%B %d %Y', '%B %d, %Y',             # 'October 25 2006', 'October 25, 2006'
    '%d %B %Y', '%d %B, %Y',             # '25 October 2006', '25 October, 2006'
]
A list of formats that will be accepted when inputting data on a date field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATETIME_INPUT_FORMATS and TIME_INPUT_FORMATS. DATETIME_FORMAT Default: 'N j, Y, P' (e.g. Feb. 4, 2003, 4 p.m.) The default formatting to use for displaying datetime fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT, TIME_FORMAT and SHORT_DATETIME_FORMAT. DATETIME_INPUT_FORMATS Default:
[
    '%Y-%m-%d %H:%M:%S',     # '2006-10-25 14:30:59'
    '%Y-%m-%d %H:%M:%S.%f',  # '2006-10-25 14:30:59.000200'
    '%Y-%m-%d %H:%M',        # '2006-10-25 14:30'
    '%m/%d/%Y %H:%M:%S',     # '10/25/2006 14:30:59'
    '%m/%d/%Y %H:%M:%S.%f',  # '10/25/2006 14:30:59.000200'
    '%m/%d/%Y %H:%M',        # '10/25/2006 14:30'
    '%m/%d/%y %H:%M:%S',     # '10/25/06 14:30:59'
    '%m/%d/%y %H:%M:%S.%f',  # '10/25/06 14:30:59.000200'
    '%m/%d/%y %H:%M',        # '10/25/06 14:30'
]
A list of formats that will be accepted when inputting data on a datetime field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. Date-only formats are not included as datetime fields will automatically try DATE_INPUT_FORMATS as a last resort. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and TIME_INPUT_FORMATS. DEBUG Default: False A boolean that turns on/off debug mode. Never deploy a site into production with DEBUG turned on. One of the main features of debug mode is the display of detailed error pages. If your app raises an exception when DEBUG is True, Django will display a detailed traceback, including a lot of metadata about your environment, such as all the currently defined Django settings (from settings.py).
As a security measure, Django will not include settings that might be sensitive, such as SECRET_KEY. Specifically, it will exclude any setting whose name includes any of the following: 'API' 'KEY' 'PASS' 'SECRET' 'SIGNATURE' 'TOKEN' Note that these are partial matches. 'PASS' will also match PASSWORD, just as 'TOKEN' will also match TOKENIZED and so on. Still, note that there are always going to be sections of your debug output that are inappropriate for public consumption. File paths, configuration options and the like all give attackers extra information about your server. It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server. Finally, if DEBUG is False, you also need to properly set the ALLOWED_HOSTS setting. Failing to do so will result in all requests being returned as “Bad Request (400)”. Note The default settings.py file created by django-admin startproject sets DEBUG = True for convenience. DEBUG_PROPAGATE_EXCEPTIONS Default: False If set to True, Django’s exception handling of view functions (handler500, or the debug view if DEBUG is True) and logging of 500 responses (django.request) is skipped and exceptions propagate upward. This can be useful for some test setups. It shouldn’t be used on a live site unless you want your web server (instead of Django) to generate “Internal Server Error” responses. In that case, make sure your server doesn’t show the stack trace or other sensitive information in the response. DECIMAL_SEPARATOR Default: '.' (Dot) Default decimal separator used when formatting decimal numbers. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. DEFAULT_AUTO_FIELD New in Django 3.2. Default: 'django.db.models.AutoField' Default primary key field type to use for models that don’t have a field with primary_key=True. Migrating auto-created through tables The value of DEFAULT_AUTO_FIELD will be respected when creating new auto-created through tables for many-to-many relationships. Unfortunately, the primary keys of existing auto-created through tables cannot currently be updated by the migrations framework. This means that if you switch the value of DEFAULT_AUTO_FIELD and then generate migrations, the primary keys of the related models will be updated, as will the foreign keys from the through table, but the primary key of the auto-created through table will not be migrated. In order to address this, you should add a RunSQL operation to your migrations to perform the required ALTER TABLE step. You can check the existing table name through sqlmigrate, dbshell, or with the field’s remote_field.through._meta.db_table property. Explicitly defined through models are already handled by the migrations system. Allowing automatic migrations for the primary key of existing auto-created through tables may be implemented at a later date. DEFAULT_CHARSET Default: 'utf-8' Default charset to use for all HttpResponse objects, if a MIME type isn’t manually specified. Used when constructing the Content-Type header. DEFAULT_EXCEPTION_REPORTER Default: 'django.views.debug.ExceptionReporter' Default exception reporter class to be used if none has been assigned to the HttpRequest instance yet. See Custom error reports. 
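For example, a minimal sketch of pointing this setting at a custom reporter subclass; the module path and the overridden hook are illustrative only, so consult the custom error reports documentation for the supported extension points:
# myproject/debug.py (hypothetical module)
from django.views.debug import ExceptionReporter

class TeamExceptionReporter(ExceptionReporter):
    def get_traceback_data(self):
        # Reuse the standard report data and attach an illustrative extra value.
        data = super().get_traceback_data()
        data['team'] = 'platform'
        return data
Then set DEFAULT_EXCEPTION_REPORTER = 'myproject.debug.TeamExceptionReporter'.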
DEFAULT_EXCEPTION_REPORTER_FILTER Default: 'django.views.debug.SafeExceptionReporterFilter' Default exception reporter filter class to be used if none has been assigned to the HttpRequest instance yet. See Filtering error reports. DEFAULT_FILE_STORAGE Default: 'django.core.files.storage.FileSystemStorage' Default file storage class to be used for any file-related operations that don’t specify a particular storage system. See Managing files. DEFAULT_FROM_EMAIL Default: 'webmaster@localhost' Default email address to use for various automated correspondence from the site manager(s). This doesn’t include error messages sent to ADMINS and MANAGERS; for that, see SERVER_EMAIL. DEFAULT_INDEX_TABLESPACE Default: '' (Empty string) Default tablespace to use for indexes on fields that don’t specify one, if the backend supports it (see Tablespaces). DEFAULT_TABLESPACE Default: '' (Empty string) Default tablespace to use for models that don’t specify one, if the backend supports it (see Tablespaces). DISALLOWED_USER_AGENTS Default: [] (Empty list) List of compiled regular expression objects representing User-Agent strings that are not allowed to visit any page, systemwide. Use this for bots/crawlers. This is only used if CommonMiddleware is installed (see Middleware). EMAIL_BACKEND Default: 'django.core.mail.backends.smtp.EmailBackend' The backend to use for sending emails. For the list of available backends see Sending email. EMAIL_FILE_PATH Default: Not defined The directory used by the file email backend to store output files. EMAIL_HOST Default: 'localhost' The host to use for sending email. See also EMAIL_PORT. EMAIL_HOST_PASSWORD Default: '' (Empty string) Password to use for the SMTP server defined in EMAIL_HOST. This setting is used in conjunction with EMAIL_HOST_USER when authenticating to the SMTP server. If either of these settings is empty, Django won’t attempt authentication. See also EMAIL_HOST_USER. EMAIL_HOST_USER Default: '' (Empty string) Username to use for the SMTP server defined in EMAIL_HOST. If empty, Django won’t attempt authentication. See also EMAIL_HOST_PASSWORD. EMAIL_PORT Default: 25 Port to use for the SMTP server defined in EMAIL_HOST. EMAIL_SUBJECT_PREFIX Default: '[Django] ' Subject-line prefix for email messages sent with django.core.mail.mail_admins or django.core.mail.mail_managers. You’ll probably want to include the trailing space. EMAIL_USE_LOCALTIME Default: False Whether to send the SMTP Date header of email messages in the local time zone (True) or in UTC (False). EMAIL_USE_TLS Default: False Whether to use a TLS (secure) connection when talking to the SMTP server. This is used for explicit TLS connections, generally on port 587. If you are experiencing hanging connections, see the implicit TLS setting EMAIL_USE_SSL. EMAIL_USE_SSL Default: False Whether to use an implicit TLS (secure) connection when talking to the SMTP server. In most email documentation this type of TLS connection is referred to as SSL. It is generally used on port 465. If you are experiencing problems, see the explicit TLS setting EMAIL_USE_TLS. Note that EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True. EMAIL_SSL_CERTFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted certificate chain file to use for the SSL connection. 
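Pulling the EMAIL_* settings above together, an explicit-TLS SMTP configuration might look like the following sketch (host, port, and credentials are placeholders):
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_USE_TLS = True                       # explicit TLS; leave EMAIL_USE_SSL at False
EMAIL_HOST_USER = 'mailer@example.com'
EMAIL_HOST_PASSWORD = 'change-me'          # placeholder; keep real credentials out of settings files
DEFAULT_FROM_EMAIL = 'webmaster@example.com'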
EMAIL_SSL_KEYFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted private key file to use for the SSL connection. Note that setting EMAIL_SSL_CERTFILE and EMAIL_SSL_KEYFILE doesn’t result in any certificate checking. They’re passed to the underlying SSL connection. Please refer to the documentation of Python’s ssl.wrap_socket() function for details on how the certificate chain file and private key file are handled. EMAIL_TIMEOUT Default: None Specifies a timeout in seconds for blocking operations like the connection attempt. FILE_UPLOAD_HANDLERS Default: [ 'django.core.files.uploadhandler.MemoryFileUploadHandler', 'django.core.files.uploadhandler.TemporaryFileUploadHandler', ] A list of handlers to use for uploading. Changing this setting allows complete customization – even replacement – of Django’s upload process. See Managing files for details. FILE_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size (in bytes) that an upload will be before it gets streamed to the file system. See Managing files for details. See also DATA_UPLOAD_MAX_MEMORY_SIZE. FILE_UPLOAD_DIRECTORY_PERMISSIONS Default: None The numeric mode to apply to directories created in the process of uploading files. This setting also determines the default permissions for collected static directories when using the collectstatic management command. See collectstatic for details on overriding it. This value mirrors the functionality and caveats of the FILE_UPLOAD_PERMISSIONS setting. FILE_UPLOAD_PERMISSIONS Default: 0o644 The numeric mode (i.e. 0o644) to set newly uploaded files to. For more information about what these modes mean, see the documentation for os.chmod(). If None, you’ll get operating-system dependent behavior. On most platforms, temporary files will have a mode of 0o600, and files saved from memory will be saved using the system’s standard umask. For security reasons, these permissions aren’t applied to the temporary files that are stored in FILE_UPLOAD_TEMP_DIR. This setting also determines the default permissions for collected static files when using the collectstatic management command. See collectstatic for details on overriding it. Warning Always prefix the mode with 0o . If you’re not familiar with file modes, please note that the 0o prefix is very important: it indicates an octal number, which is the way that modes must be specified. If you try to use 644, you’ll get totally incorrect behavior. FILE_UPLOAD_TEMP_DIR Default: None The directory to store data to (typically files larger than FILE_UPLOAD_MAX_MEMORY_SIZE) temporarily while uploading files. If None, Django will use the standard temporary directory for the operating system. For example, this will default to /tmp on *nix-style operating systems. See Managing files for details. FIRST_DAY_OF_WEEK Default: 0 (Sunday) A number representing the first day of the week. This is especially useful when displaying a calendar. This value is only used when not using format internationalization, or when a format cannot be found for the current locale. The value must be an integer from 0 to 6, where 0 means Sunday, 1 means Monday and so on. FIXTURE_DIRS Default: [] (Empty list) List of directories searched for fixture files, in addition to the fixtures directory of each application, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. See Providing data with fixtures and Fixture loading. 
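For example, a sketch of adding a project-level fixtures directory alongside a shared location (both paths are placeholders, and BASE_DIR is assumed to be the pathlib-based project root created by startproject):
FIXTURE_DIRS = [
    BASE_DIR / 'fixtures',            # project-wide fixtures
    '/srv/shared/django-fixtures',    # hypothetical shared location
]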
FORCE_SCRIPT_NAME Default: None If not None, this will be used as the value of the SCRIPT_NAME environment variable in any HTTP request. This setting can be used to override the server-provided value of SCRIPT_NAME, which may be a rewritten version of the preferred value or not supplied at all. It is also used by django.setup() to set the URL resolver script prefix outside of the request/response cycle (e.g. in management commands and standalone scripts) to generate correct URLs when SCRIPT_NAME is not /. FORM_RENDERER Default: 'django.forms.renderers.DjangoTemplates' The class that renders forms and form widgets. It must implement the low-level render API. Included form renderers are: 'django.forms.renderers.DjangoTemplates' 'django.forms.renderers.Jinja2' FORMAT_MODULE_PATH Default: None A full Python path to a Python package that contains custom format definitions for project locales. If not None, Django will check for a formats.py file, under the directory named as the current locale, and will use the formats defined in this file. For example, if FORMAT_MODULE_PATH is set to mysite.formats, and current language is en (English), Django will expect a directory tree like: mysite/ formats/ __init__.py en/ __init__.py formats.py You can also set this setting to a list of Python paths, for example: FORMAT_MODULE_PATH = [ 'mysite.formats', 'some_app.formats', ] When Django searches for a certain format, it will go through all given Python paths until it finds a module that actually defines the given format. This means that formats defined in packages farther up in the list will take precedence over the same formats in packages farther down. Available formats are: DATE_FORMAT DATE_INPUT_FORMATS DATETIME_FORMAT, DATETIME_INPUT_FORMATS DECIMAL_SEPARATOR FIRST_DAY_OF_WEEK MONTH_DAY_FORMAT NUMBER_GROUPING SHORT_DATE_FORMAT SHORT_DATETIME_FORMAT THOUSAND_SEPARATOR TIME_FORMAT TIME_INPUT_FORMATS YEAR_MONTH_FORMAT IGNORABLE_404_URLS Default: [] (Empty list) List of compiled regular expression objects describing URLs that should be ignored when reporting HTTP 404 errors via email (see How to manage error reporting). Regular expressions are matched against request's full paths (including query string, if any). Use this if your site does not provide a commonly requested file such as favicon.ico or robots.txt. This is only used if BrokenLinkEmailsMiddleware is enabled (see Middleware). INSTALLED_APPS Default: [] (Empty list) A list of strings designating all applications that are enabled in this Django installation. Each string should be a dotted Python path to: an application configuration class (preferred), or a package containing an application. Learn more about application configurations. Use the application registry for introspection Your code should never access INSTALLED_APPS directly. Use django.apps.apps instead. Application names and labels must be unique in INSTALLED_APPS Application names — the dotted Python path to the application package — must be unique. There is no way to include the same application twice, short of duplicating its code under another name. Application labels — by default the final part of the name — must be unique too. For example, you can’t include both django.contrib.auth and myproject.auth. However, you can relabel an application with a custom configuration that defines a different label. These rules apply regardless of whether INSTALLED_APPS references application configuration classes or application packages. 
When several applications provide different versions of the same resource (template, static file, management command, translation), the application listed first in INSTALLED_APPS has precedence. INTERNAL_IPS Default: [] (Empty list) A list of IP addresses, as strings, that: Allow the debug() context processor to add some variables to the template context. Can use the admindocs bookmarklets even if not logged in as a staff user. Are marked as “internal” (as opposed to “EXTERNAL”) in AdminEmailHandler emails. LANGUAGE_CODE Default: 'en-us' A string representing the language code for this installation. This should be in standard language ID format. For example, U.S. English is "en-us". See also the list of language identifiers and Internationalization and localization. USE_I18N must be active for this setting to have any effect. It serves two purposes: If the locale middleware isn’t in use, it decides which translation is served to all users. If the locale middleware is active, it provides a fallback language in case the user’s preferred language can’t be determined or is not supported by the website. It also provides the fallback translation when a translation for a given literal doesn’t exist for the user’s preferred language. See How Django discovers language preference for more details. LANGUAGE_COOKIE_AGE Default: None (expires at browser close) The age of the language cookie, in seconds. LANGUAGE_COOKIE_DOMAIN Default: None The domain to use for the language cookie. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. Be cautious when updating this setting on a production site. If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies that have the old domain will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting) and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the language cookie. If this is set to True, client-side JavaScript will not be able to access the language cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. LANGUAGE_COOKIE_NAME Default: 'django_language' The name of the cookie to use for the language cookie. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Internationalization and localization. LANGUAGE_COOKIE_PATH Default: '/' The path set on the language cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths and each instance will only see its own language cookie. Be cautious when updating this setting on a production site. If you update this setting to use a deeper path than it previously used, existing user cookies that have the old path will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. 
The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting), and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_SAMESITE Default: None The value of the SameSite flag on the language cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. LANGUAGE_COOKIE_SECURE Default: False Whether to use a secure cookie for the language cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. LANGUAGES Default: A list of all available languages. This list is continually growing and including a copy here would inevitably become rapidly out of date. You can see the current list of translated languages by looking in django/conf/global_settings.py. The list is a list of two-tuples in the format (language code, language name) – for example, ('ja', 'Japanese'). This specifies which languages are available for language selection. See Internationalization and localization. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, you can mark the language names as translation strings using the gettext_lazy() function. Here’s a sample settings file: from django.utils.translation import gettext_lazy as _ LANGUAGES = [ ('de', _('German')), ('en', _('English')), ] LANGUAGES_BIDI Default: A list of all language codes that are written right-to-left. You can see the current list of these languages by looking in django/conf/global_settings.py. The list contains language codes for languages that are written right-to-left. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, the list of bidirectional languages may contain language codes which are not enabled on a given site. LOCALE_PATHS Default: [] (Empty list) A list of directories where Django looks for translation files. See How Django discovers translations. Example: LOCALE_PATHS = [ '/home/www/project/common_files/locale', '/var/local/translations/locale', ] Django will look within each of these paths for the <locale_code>/LC_MESSAGES directories containing the actual translation files. LOGGING Default: A logging configuration dictionary. A data structure containing configuration information. The contents of this data structure will be passed as the argument to the configuration method described in LOGGING_CONFIG. Among other things, the default logging configuration passes HTTP 500 server errors to an email log handler when DEBUG is False. See also Configuring logging. You can see the default logging configuration by looking in django/utils/log.py. LOGGING_CONFIG Default: 'logging.config.dictConfig' A path to a callable that will be used to configure logging in the Django project. Points at an instance of Python’s dictConfig configuration method by default. If you set LOGGING_CONFIG to None, the logging configuration process will be skipped. MANAGERS Default: [] (Empty list) A list in the same format as ADMINS that specifies who should get broken link notifications when BrokenLinkEmailsMiddleware is enabled.
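To make the LOGGING setting described above concrete, here is a minimal dictConfig-style sketch that sends records at INFO and above to the console (handler and level choices are illustrative):
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'root': {
        'handlers': ['console'],
        'level': 'INFO',
    },
}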
MEDIA_ROOT Default: '' (Empty string) Absolute filesystem path to the directory that will hold user-uploaded files. Example: "/var/www/example.com/media/" See also MEDIA_URL. Warning MEDIA_ROOT and STATIC_ROOT must have different values. Before STATIC_ROOT was introduced, it was common to rely or fallback on MEDIA_ROOT to also serve static files; however, since this can have serious security implications, there is a validation check to prevent it. MEDIA_URL Default: '' (Empty string) URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set to a non-empty value. You will need to configure these files to be served in both development and production environments. If you want to use {{ MEDIA_URL }} in your templates, add 'django.template.context_processors.media' in the 'context_processors' option of TEMPLATES. Example: "http://media.example.com/" Warning There are security risks if you are accepting uploaded content from untrusted users! See the security guide’s topic on User-uploaded content for mitigation details. Warning MEDIA_URL and STATIC_URL must have different values. See MEDIA_ROOT for more details. Note If MEDIA_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. MIDDLEWARE Default: None A list of middleware to use. See Middleware. MIGRATION_MODULES Default: {} (Empty dictionary) A dictionary specifying the package where migration modules can be found on a per-app basis. The default value of this setting is an empty dictionary, but the default package name for migration modules is migrations. Example: {'blog': 'blog.db_migrations'} In this case, migrations pertaining to the blog app will be contained in the blog.db_migrations package. If you provide the app_label argument, makemigrations will automatically create the package if it doesn’t already exist. When you supply None as a value for an app, Django will consider the app as an app without migrations regardless of an existing migrations submodule. This can be used, for example, in a test settings file to skip migrations while testing (tables will still be created for the apps’ models). To disable migrations for all apps during tests, you can set the MIGRATE to False instead. If MIGRATION_MODULES is used in your general project settings, remember to use the migrate --run-syncdb option if you want to create tables for the app. MONTH_DAY_FORMAT Default: 'F j' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the month and day are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given day displays the day and month. Different locales have different formats. For example, U.S. English would say “January 1,” whereas Spanish might say “1 Enero.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and YEAR_MONTH_FORMAT. NUMBER_GROUPING Default: 0 Number of digits grouped together on the integer part of a number. Common use is to display a thousand separator. If this setting is 0, then no grouping will be applied to the number. 
If this setting is greater than 0, then THOUSAND_SEPARATOR will be used as the separator between those groups. Some locales use non-uniform digit grouping, e.g. 10,00,00,000 in en_IN. For this case, you can provide a sequence with the number of digit group sizes to be applied. The first number defines the size of the group preceding the decimal delimiter, and each number that follows defines the size of preceding groups. If the sequence is terminated with -1, no further grouping is performed. If the sequence terminates with a 0, the last group size is used for the remainder of the number. Example tuple for en_IN: NUMBER_GROUPING = (3, 2, 0) Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also DECIMAL_SEPARATOR, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. PREPEND_WWW Default: False Whether to prepend the “www.” subdomain to URLs that don’t have it. This is only used if CommonMiddleware is installed (see Middleware). See also APPEND_SLASH. ROOT_URLCONF Default: Not defined A string representing the full Python import path to your root URLconf, for example "mydjangoapps.urls". Can be overridden on a per-request basis by setting the attribute urlconf on the incoming HttpRequest object. See How Django processes a request for details. SECRET_KEY Default: '' (Empty string) A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value. django-admin startproject automatically adds a randomly-generated SECRET_KEY to each new project. Uses of the key shouldn’t assume that it’s text or bytes. Every use should go through force_str() or force_bytes() to convert it to the desired type. Django will refuse to start if SECRET_KEY is not set. Warning Keep this value secret. Running Django with a known SECRET_KEY defeats many of Django’s security protections, and can lead to privilege escalation and remote code execution vulnerabilities. The secret key is used for: All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash(). All messages if you are using CookieStorage or FallbackStorage. All PasswordResetView tokens. Any usage of cryptographic signing, unless a different key is provided. If you rotate your secret key, all of the above will be invalidated. Secret keys are not used for passwords of users and key rotation will not affect them. Note The default settings.py file created by django-admin startproject creates a unique SECRET_KEY for convenience. SECURE_CONTENT_TYPE_NOSNIFF Default: True If True, the SecurityMiddleware sets the X-Content-Type-Options: nosniff header on all responses that do not already have it. SECURE_CROSS_ORIGIN_OPENER_POLICY New in Django 4.0. Default: 'same-origin' Unless set to None, the SecurityMiddleware sets the Cross-Origin Opener Policy header on all responses that do not already have it to the value provided. SECURE_HSTS_INCLUDE_SUBDOMAINS Default: False If True, the SecurityMiddleware adds the includeSubDomains directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. Warning Setting this incorrectly can irreversibly (for the value of SECURE_HSTS_SECONDS) break your site. Read the HTTP Strict Transport Security documentation first. 
SECURE_HSTS_PRELOAD Default: False If True, the SecurityMiddleware adds the preload directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. SECURE_HSTS_SECONDS Default: 0 If set to a non-zero integer value, the SecurityMiddleware sets the HTTP Strict Transport Security header on all responses that do not already have it. Warning Setting this incorrectly can irreversibly (for some time) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_PROXY_SSL_HEADER Default: None A tuple representing an HTTP header/value combination that signifies a request is secure. This controls the behavior of the request object’s is_secure() method. By default, is_secure() determines if a request is secure by confirming that a requested URL uses https://. This method is important for Django’s CSRF protection, and it may be used by your own code or third-party apps. If your Django app is behind a proxy, though, the proxy may be “swallowing” whether the original request uses HTTPS or not. If there is a non-HTTPS connection between the proxy and Django then is_secure() would always return False – even for requests that were made via HTTPS by the end user. In contrast, if there is an HTTPS connection between the proxy and Django then is_secure() would always return True – even for requests that were made originally via HTTP. In this situation, configure your proxy to set a custom HTTP header that tells Django whether the request came in via HTTPS, and set SECURE_PROXY_SSL_HEADER so that Django knows what header to look for. Set a tuple with two elements – the name of the header to look for and the required value. For example: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') This tells Django to trust the X-Forwarded-Proto header that comes from our proxy, and any time its value is 'https', then the request is guaranteed to be secure (i.e., it originally came in via HTTPS). You should only set this setting if you control your proxy or have some other guarantee that it sets/strips this header appropriately. Note that the header needs to be in the format as used by request.META – all caps and likely starting with HTTP_. (Remember, Django automatically adds 'HTTP_' to the start of x-header names before making the header available in request.META.) Warning Modifying this setting can compromise your site’s security. Ensure you fully understand your setup before changing it. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your Django app is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to Django, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None and find another way of determining HTTPS, perhaps via custom middleware. SECURE_REDIRECT_EXEMPT Default: [] (Empty list) If a URL path matches a regular expression in this list, the request will not be redirected to HTTPS. The SecurityMiddleware strips leading slashes from URL paths, so patterns shouldn’t include them, e.g. SECURE_REDIRECT_EXEMPT = [r'^no-ssl/$', …]. If SECURE_SSL_REDIRECT is False, this setting has no effect. 
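Putting several of the SecurityMiddleware settings above together, a deployment behind a TLS-terminating proxy might start from a sketch like this (it assumes the proxy sets and strips X-Forwarded-Proto, and the values are illustrative rather than recommendations):
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_SSL_REDIRECT = True
SECURE_REDIRECT_EXEMPT = [r'^healthz/$']   # hypothetical health-check path left on HTTP
SECURE_HSTS_SECONDS = 3600                 # start small before committing to a long HSTS lifetime
SECURE_HSTS_INCLUDE_SUBDOMAINS = False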
SECURE_REFERRER_POLICY Default: 'same-origin' If configured, the SecurityMiddleware sets the Referrer Policy header on all responses that do not already have it to the value provided. SECURE_SSL_HOST Default: None If a string (e.g. secure.example.com), all SSL redirects will be directed to this host rather than the originally-requested host (e.g. www.example.com). If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_SSL_REDIRECT Default: False If True, the SecurityMiddleware redirects all non-HTTPS requests to HTTPS (except for those URLs matching a regular expression listed in SECURE_REDIRECT_EXEMPT). Note If turning this to True causes infinite redirects, it probably means your site is running behind a proxy and can’t tell which requests are secure and which are not. Your proxy likely sets a header to indicate secure requests; you can correct the problem by finding out what that header is and configuring the SECURE_PROXY_SSL_HEADER setting accordingly. SERIALIZATION_MODULES Default: Not defined A dictionary of modules containing serializer definitions (provided as strings), keyed by a string identifier for that serialization type. For example, to define a YAML serializer, use: SERIALIZATION_MODULES = {'yaml': 'path.to.yaml_serializer'} SERVER_EMAIL Default: 'root@localhost' The email address that error messages come from, such as those sent to ADMINS and MANAGERS. Why are my emails sent from a different address? This address is used only for error messages. It is not the address that regular email messages sent with send_mail() come from; for that, see DEFAULT_FROM_EMAIL. SHORT_DATE_FORMAT Default: 'm/d/Y' (e.g. 12/31/2003) An available formatting that can be used for displaying date fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATETIME_FORMAT. SHORT_DATETIME_FORMAT Default: 'm/d/Y P' (e.g. 12/31/2003 4 p.m.) An available formatting that can be used for displaying datetime fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATE_FORMAT. SIGNING_BACKEND Default: 'django.core.signing.TimestampSigner' The backend used for signing cookies and other data. See also the Cryptographic signing documentation. SILENCED_SYSTEM_CHECKS Default: [] (Empty list) A list of identifiers of messages generated by the system check framework (i.e. ["models.W001"]) that you wish to permanently acknowledge and ignore. Silenced checks will not be output to the console. See also the System check framework documentation. TEMPLATES Default: [] (Empty list) A list containing the settings for all template engines to be used with Django. Each item of the list is a dictionary containing the options for an individual engine. Here’s a setup that tells the Django template engine to load templates from the templates subdirectory inside each installed application: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'APP_DIRS': True, }, ] The following options are available for all backends. BACKEND Default: Not defined The template backend to use. 
The built-in template backends are: 'django.template.backends.django.DjangoTemplates' 'django.template.backends.jinja2.Jinja2' You can use a template backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path (i.e. 'mypackage.whatever.Backend'). NAME Default: see below The alias for this particular template engine. It’s an identifier that allows selecting an engine for rendering. Aliases must be unique across all configured template engines. It defaults to the name of the module defining the engine class, i.e. the next to last piece of BACKEND, when it isn’t provided. For example if the backend is 'mypackage.whatever.Backend' then its default name is 'whatever'. DIRS Default: [] (Empty list) Directories where the engine should look for template source files, in search order. APP_DIRS Default: False Whether the engine should look for template source files inside installed applications. Note The default settings.py file created by django-admin startproject sets 'APP_DIRS': True. OPTIONS Default: {} (Empty dict) Extra parameters to pass to the template backend. Available parameters vary depending on the template backend. See DjangoTemplates and Jinja2 for the options of the built-in backends. TEST_RUNNER Default: 'django.test.runner.DiscoverRunner' The name of the class to use for starting the test suite. See Using different testing frameworks. TEST_NON_SERIALIZED_APPS Default: [] (Empty list) In order to restore the database state between tests for TransactionTestCases and database backends without transactions, Django will serialize the contents of all apps when it starts the test run so it can then reload from that copy before running tests that need it. This slows down the startup time of the test runner; if you have apps that you know don’t need this feature, you can add their full names in here (e.g. 'django.contrib.contenttypes') to exclude them from this serialization process. THOUSAND_SEPARATOR Default: ',' (Comma) Default thousand separator used when formatting numbers. This setting is used only when USE_THOUSAND_SEPARATOR is True and NUMBER_GROUPING is greater than 0. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, DECIMAL_SEPARATOR and USE_THOUSAND_SEPARATOR. TIME_FORMAT Default: 'P' (e.g. 4 p.m.) The default formatting to use for displaying time fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT and DATETIME_FORMAT. TIME_INPUT_FORMATS Default:
[
    '%H:%M:%S',     # '14:30:59'
    '%H:%M:%S.%f',  # '14:30:59.000200'
    '%H:%M',        # '14:30'
]
A list of formats that will be accepted when inputting data on a time field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and DATETIME_INPUT_FORMATS. TIME_ZONE Default: 'America/Chicago' A string representing the time zone for this installation. See the list of time zones. Note Since Django was first released with the TIME_ZONE set to 'America/Chicago', the global setting (used if nothing is defined in your project’s settings.py) remains 'America/Chicago' for backwards compatibility. New project templates default to 'UTC'.
Note that this isn’t necessarily the time zone of the server. For example, one server may serve multiple Django-powered sites, each with a separate time zone setting. When USE_TZ is False, this is the time zone in which Django will store all datetimes. When USE_TZ is True, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms. On Unix environments (where time.tzset() is implemented), Django sets the os.environ['TZ'] variable to the time zone you specify in the TIME_ZONE setting. Thus, all your views and models will automatically operate in this time zone. However, Django won’t set the TZ environment variable if you’re using the manual configuration option as described in manually configuring settings. If Django doesn’t set the TZ environment variable, it’s up to you to ensure your processes are running in the correct environment. Note Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, TIME_ZONE must be set to match the system time zone. USE_DEPRECATED_PYTZ New in Django 4.0. Default: False A boolean that specifies whether to use pytz, rather than zoneinfo, as the default time zone implementation. Deprecated since version 4.0: This transitional setting is deprecated. Support for using pytz will be removed in Django 5.0. USE_I18N Default: True A boolean that specifies whether Django’s translation system should be enabled. This provides a way to turn it off, for performance. If this is set to False, Django will make some optimizations so as not to load the translation machinery. See also LANGUAGE_CODE, USE_L10N and USE_TZ. Note The default settings.py file created by django-admin startproject includes USE_I18N = True for convenience. USE_L10N Default: True A boolean that specifies if localized formatting of data will be enabled by default or not. If this is set to True, e.g. Django will display numbers and dates using the format of the current locale. See also LANGUAGE_CODE, USE_I18N and USE_TZ. Changed in Django 4.0: In older versions, the default value is False. Deprecated since version 4.0: This setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example Django will display numbers and dates using the format of the current locale. USE_THOUSAND_SEPARATOR Default: False A boolean that specifies whether to display numbers using a thousand separator. When set to True and USE_L10N is also True, Django will format numbers using the NUMBER_GROUPING and THOUSAND_SEPARATOR settings. These settings may also be dictated by the locale, which takes precedence. See also DECIMAL_SEPARATOR, NUMBER_GROUPING and THOUSAND_SEPARATOR. USE_TZ Default: False Note In Django 5.0, the default value will change from False to True. A boolean that specifies if datetimes will be timezone-aware by default or not. If this is set to True, Django will use timezone-aware datetimes internally. When USE_TZ is False, Django will use naive datetimes in local time, except when parsing ISO 8601 formatted strings, where timezone information will always be retained if present. See also TIME_ZONE, USE_I18N and USE_L10N. Note The default settings.py file created by django-admin startproject includes USE_TZ = True for convenience. USE_X_FORWARDED_HOST Default: False A boolean that specifies whether to use the X-Forwarded-Host header in preference to the Host header. This should only be enabled if a proxy which sets this header is in use. 
This setting takes priority over USE_X_FORWARDED_PORT. Per RFC 7239#section-5.3, the X-Forwarded-Host header can include the port number, in which case you shouldn’t use USE_X_FORWARDED_PORT. USE_X_FORWARDED_PORT Default: False A boolean that specifies whether to use the X-Forwarded-Port header in preference to the SERVER_PORT META variable. This should only be enabled if a proxy which sets this header is in use. USE_X_FORWARDED_HOST takes priority over this setting. WSGI_APPLICATION Default: None The full Python path of the WSGI application object that Django’s built-in servers (e.g. runserver) will use. The django-admin startproject management command will create a standard wsgi.py file with an application callable in it, and point this setting to that application. If not set, the return value of django.core.wsgi.get_wsgi_application() will be used. In this case, the behavior of runserver will be identical to previous Django versions. YEAR_MONTH_FORMAT Default: 'F Y' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the year and month are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given month displays the month and the year. Different locales have different formats. For example, U.S. English would say “January 2006,” whereas another locale might say “2006/January.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and MONTH_DAY_FORMAT. X_FRAME_OPTIONS Default: 'DENY' The default value for the X-Frame-Options header used by XFrameOptionsMiddleware. See the clickjacking protection documentation. Auth Settings for django.contrib.auth. AUTHENTICATION_BACKENDS Default: ['django.contrib.auth.backends.ModelBackend'] A list of authentication backend classes (as strings) to use when attempting to authenticate a user. See the authentication backends documentation for details. AUTH_USER_MODEL Default: 'auth.User' The model to use to represent a User. See Substituting a custom User model. Warning You cannot change the AUTH_USER_MODEL setting during the lifetime of a project (i.e. once you have made and migrated models that depend on it) without serious effort. It is intended to be set at the project start, and the model it refers to must be available in the first migration of the app that it lives in. See Substituting a custom User model for more details. LOGIN_REDIRECT_URL Default: '/accounts/profile/' The URL or named URL pattern where requests are redirected after login when the LoginView doesn’t get a next GET parameter. LOGIN_URL Default: '/accounts/login/' The URL or named URL pattern where requests are redirected for login when using the login_required() decorator, LoginRequiredMixin, or AccessMixin. LOGOUT_REDIRECT_URL Default: None The URL or named URL pattern where requests are redirected after logout if LogoutView doesn’t have a next_page attribute. If None, no redirect will be performed and the logout view will be rendered. PASSWORD_RESET_TIMEOUT Default: 259200 (3 days, in seconds) The number of seconds a password reset link is valid for. Used by the PasswordResetConfirmView. Note Reducing the value of this timeout doesn’t make any difference to the ability of an attacker to brute-force a password reset token. 
Tokens are designed to be safe from brute-forcing without any timeout. This timeout exists to protect against some unlikely attack scenarios, such as someone gaining access to email archives that may contain old, unused password reset tokens. PASSWORD_HASHERS See How Django stores passwords. Default: [ 'django.contrib.auth.hashers.PBKDF2PasswordHasher', 'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher', 'django.contrib.auth.hashers.Argon2PasswordHasher', 'django.contrib.auth.hashers.BCryptSHA256PasswordHasher', ] AUTH_PASSWORD_VALIDATORS Default: [] (Empty list) The list of validators that are used to check the strength of user’s passwords. See Password validation for more details. By default, no validation is performed and all passwords are accepted. Messages Settings for django.contrib.messages. MESSAGE_LEVEL Default: messages.INFO Sets the minimum message level that will be recorded by the messages framework. See message levels for more details. Important If you override MESSAGE_LEVEL in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_LEVEL = message_constants.DEBUG If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. MESSAGE_STORAGE Default: 'django.contrib.messages.storage.fallback.FallbackStorage' Controls where Django stores message data. Valid values are: 'django.contrib.messages.storage.fallback.FallbackStorage' 'django.contrib.messages.storage.session.SessionStorage' 'django.contrib.messages.storage.cookie.CookieStorage' See message storage backends for more details. The backends that use cookies – CookieStorage and FallbackStorage – use the value of SESSION_COOKIE_DOMAIN, SESSION_COOKIE_SECURE and SESSION_COOKIE_HTTPONLY when setting their cookies. MESSAGE_TAGS Default: { messages.DEBUG: 'debug', messages.INFO: 'info', messages.SUCCESS: 'success', messages.WARNING: 'warning', messages.ERROR: 'error', } This sets the mapping of message level to message tag, which is typically rendered as a CSS class in HTML. If you specify a value, it will extend the default. This means you only have to specify those values which you need to override. See Displaying messages above for more details. Important If you override MESSAGE_TAGS in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_TAGS = {message_constants.INFO: ''} If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. Sessions Settings for django.contrib.sessions. SESSION_CACHE_ALIAS Default: 'default' If you’re using cache-based session storage, this selects the cache to use. SESSION_COOKIE_AGE Default: 1209600 (2 weeks, in seconds) The age of session cookies, in seconds. SESSION_COOKIE_DOMAIN Default: None The domain to use for session cookies. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. To use cross-domain cookies with CSRF_USE_SESSIONS, you must include a leading dot (e.g. ".example.com") to accommodate the CSRF middleware’s referer checking. Be cautious when updating this setting on a production site. 
If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies will be set to the old domain. This may result in them being unable to log in as long as these cookies persist. This setting also affects cookies set by django.contrib.messages. SESSION_COOKIE_HTTPONLY Default: True Whether to use HttpOnly flag on the session cookie. If this is set to True, client-side JavaScript will not be able to access the session cookie. HttpOnly is a flag included in a Set-Cookie HTTP response header. It’s part of the RFC 6265#section-4.1.2.6 standard for cookies and can be a useful way to mitigate the risk of a client-side script accessing the protected cookie data. This makes it less trivial for an attacker to escalate a cross-site scripting vulnerability into full hijacking of a user’s session. There aren’t many good reasons for turning this off. Your code shouldn’t read session cookies from JavaScript. SESSION_COOKIE_NAME Default: 'sessionid' The name of the cookie to use for sessions. This can be whatever you want (as long as it’s different from the other cookie names in your application). SESSION_COOKIE_PATH Default: '/' The path set on the session cookie. This should either match the URL path of your Django installation or be parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own session cookie. SESSION_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the session cookie. This flag prevents the cookie from being sent in cross-site requests thus preventing CSRF attacks and making some methods of stealing session cookie impossible. Possible values for the setting are: 'Strict': prevents the cookie from being sent by the browser to the target site in all cross-site browsing context, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user won’t be able to access the project. A bank website, however, most likely doesn’t want to allow any transactional pages to be linked from external sites so the 'Strict' flag would be appropriate. 'Lax' (default): provides a balance between security and usability for websites that want to maintain user’s logged-in session after the user arrives from an external link. In the GitHub scenario, the session cookie would be allowed when following a regular link from an external website and be blocked in CSRF-prone request methods (e.g. POST). 'None' (string): the session cookie will be sent with all same-site and cross-site requests. False: disables the flag. Note Modern browsers provide a more secure default policy for the SameSite flag and will assume Lax for cookies without an explicit value set. SESSION_COOKIE_SECURE Default: False Whether to use a secure cookie for the session cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. Leaving this setting off isn’t a good idea because an attacker could capture an unencrypted session cookie with a packet sniffer and use the cookie to hijack the user’s session. SESSION_ENGINE Default: 'django.contrib.sessions.backends.db' Controls where Django stores session data. 
Included engines are: 'django.contrib.sessions.backends.db' 'django.contrib.sessions.backends.file' 'django.contrib.sessions.backends.cache' 'django.contrib.sessions.backends.cached_db' 'django.contrib.sessions.backends.signed_cookies' See Configuring the session engine for more details. SESSION_EXPIRE_AT_BROWSER_CLOSE Default: False Whether to expire the session when the user closes their browser. See Browser-length sessions vs. persistent sessions. SESSION_FILE_PATH Default: None If you’re using file-based session storage, this sets the directory in which Django will store session data. When the default value (None) is used, Django will use the standard temporary directory for the system. SESSION_SAVE_EVERY_REQUEST Default: False Whether to save the session data on every request. If this is False (default), then the session data will only be saved if it has been modified – that is, if any of its dictionary values have been assigned or deleted. Empty sessions won’t be created, even if this setting is active. SESSION_SERIALIZER Default: 'django.contrib.sessions.serializers.JSONSerializer' Full import path of a serializer class to use for serializing session data. Included serializers are: 'django.contrib.sessions.serializers.PickleSerializer' 'django.contrib.sessions.serializers.JSONSerializer' See Session serialization for details, including a warning regarding possible remote code execution when using PickleSerializer. Sites Settings for django.contrib.sites. SITE_ID Default: Not defined The ID, as an integer, of the current site in the django_site database table. This is used so that application data can hook into specific sites and a single database can manage content for multiple sites. Static Files Settings for django.contrib.staticfiles. STATIC_ROOT Default: None The absolute path to the directory where collectstatic will collect static files for deployment. Example: "/var/www/example.com/static/" If the staticfiles contrib app is enabled (as in the default project template), the collectstatic management command will collect static files into this directory. See the how-to on managing static files for more details about usage. Warning This should be an initially empty destination directory for collecting your static files from their permanent locations into one directory for ease of deployment; it is not a place to store your static files permanently. You should do that in directories that will be found by staticfiles’s finders, which by default, are 'static/' app sub-directories and any directories you include in STATICFILES_DIRS). STATIC_URL Default: None URL to use when referring to static files located in STATIC_ROOT. Example: "static/" or "http://static.example.com/" If not None, this will be used as the base path for asset definitions (the Media class) and the staticfiles app. It must end in a slash if set to a non-empty value. You may need to configure these files to be served in development and will definitely need to do so in production. Note If STATIC_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. STATICFILES_DIRS Default: [] (Empty list) This setting defines the additional locations the staticfiles app will traverse if the FileSystemFinder finder is enabled, e.g. if you use the collectstatic or findstatic management command or use the static file serving view. 
This should be set to a list of strings that contain full paths to your additional files directory(ies) e.g.: STATICFILES_DIRS = [ "/home/special.polls.com/polls/static", "/home/polls.com/polls/static", "/opt/webfiles/common", ] Note that these paths should use Unix-style forward slashes, even on Windows (e.g. "C:/Users/user/mysite/extra_static_content"). Prefixes (optional) In case you want to refer to files in one of the locations with an additional namespace, you can optionally provide a prefix as (prefix, path) tuples, e.g.: STATICFILES_DIRS = [ # ... ("downloads", "/opt/webfiles/stats"), ] For example, assuming you have STATIC_URL set to 'static/', the collectstatic management command would collect the “stats” files in a 'downloads' subdirectory of STATIC_ROOT. This would allow you to refer to the local file '/opt/webfiles/stats/polls_20101022.tar.gz' with '/static/downloads/polls_20101022.tar.gz' in your templates, e.g.: <a href="{% static 'downloads/polls_20101022.tar.gz' %}"> STATICFILES_STORAGE Default: 'django.contrib.staticfiles.storage.StaticFilesStorage' The file storage engine to use when collecting static files with the collectstatic management command. A ready-to-use instance of the storage backend defined in this setting can be found at django.contrib.staticfiles.storage.staticfiles_storage. For an example, see Serving static files from a cloud service or CDN. STATICFILES_FINDERS Default: [ 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ] The list of finder backends that know how to find static files in various locations. The default will find files stored in the STATICFILES_DIRS setting (using django.contrib.staticfiles.finders.FileSystemFinder) and in a static subdirectory of each app (using django.contrib.staticfiles.finders.AppDirectoriesFinder). If multiple files with the same name are present, the first file that is found will be used. One finder is disabled by default: django.contrib.staticfiles.finders.DefaultStorageFinder. If added to your STATICFILES_FINDERS setting, it will look for static files in the default file storage as defined by the DEFAULT_FILE_STORAGE setting. Note When using the AppDirectoriesFinder finder, make sure your apps can be found by staticfiles by adding the app to the INSTALLED_APPS setting of your site. Static file finders are currently considered a private interface, and this interface is thus undocumented. 
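As a rough sketch of how the static files settings above commonly fit together in a deployment-oriented settings module, the values below are examples only (the extra "assets" directory and the target path are not defaults, and BASE_DIR is assumed to be the path object created by startproject):

# settings.py - illustrative values only
STATIC_URL = "static/"
STATIC_ROOT = "/var/www/example.com/static/"   # collectstatic copies files here; not a source directory
STATICFILES_DIRS = [BASE_DIR / "assets"]       # extra project-level source directory (example name)

Running python manage.py collectstatic would then gather each app's static/ directory plus the listed extra directories into STATIC_ROOT.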
Core Settings Topical Index
Cache: CACHES, CACHE_MIDDLEWARE_ALIAS, CACHE_MIDDLEWARE_KEY_PREFIX, CACHE_MIDDLEWARE_SECONDS
Database: DATABASES, DATABASE_ROUTERS, DEFAULT_INDEX_TABLESPACE, DEFAULT_TABLESPACE
Debugging: DEBUG, DEBUG_PROPAGATE_EXCEPTIONS
Email: ADMINS, DEFAULT_CHARSET, DEFAULT_FROM_EMAIL, EMAIL_BACKEND, EMAIL_FILE_PATH, EMAIL_HOST, EMAIL_HOST_PASSWORD, EMAIL_HOST_USER, EMAIL_PORT, EMAIL_SSL_CERTFILE, EMAIL_SSL_KEYFILE, EMAIL_SUBJECT_PREFIX, EMAIL_TIMEOUT, EMAIL_USE_LOCALTIME, EMAIL_USE_TLS, MANAGERS, SERVER_EMAIL
Error reporting: DEFAULT_EXCEPTION_REPORTER, DEFAULT_EXCEPTION_REPORTER_FILTER, IGNORABLE_404_URLS, MANAGERS, SILENCED_SYSTEM_CHECKS
File uploads: DEFAULT_FILE_STORAGE, FILE_UPLOAD_HANDLERS, FILE_UPLOAD_MAX_MEMORY_SIZE, FILE_UPLOAD_PERMISSIONS, FILE_UPLOAD_TEMP_DIR, MEDIA_ROOT, MEDIA_URL
Forms: FORM_RENDERER
Globalization (i18n/l10n): DATE_FORMAT, DATE_INPUT_FORMATS, DATETIME_FORMAT, DATETIME_INPUT_FORMATS, DECIMAL_SEPARATOR, FIRST_DAY_OF_WEEK, FORMAT_MODULE_PATH, LANGUAGE_CODE, LANGUAGE_COOKIE_AGE, LANGUAGE_COOKIE_DOMAIN, LANGUAGE_COOKIE_HTTPONLY, LANGUAGE_COOKIE_NAME, LANGUAGE_COOKIE_PATH, LANGUAGE_COOKIE_SAMESITE, LANGUAGE_COOKIE_SECURE, LANGUAGES, LANGUAGES_BIDI, LOCALE_PATHS, MONTH_DAY_FORMAT, NUMBER_GROUPING, SHORT_DATE_FORMAT, SHORT_DATETIME_FORMAT, THOUSAND_SEPARATOR, TIME_FORMAT, TIME_INPUT_FORMATS, TIME_ZONE, USE_I18N, USE_L10N, USE_THOUSAND_SEPARATOR, USE_TZ, YEAR_MONTH_FORMAT
HTTP: DATA_UPLOAD_MAX_MEMORY_SIZE, DATA_UPLOAD_MAX_NUMBER_FIELDS, DEFAULT_CHARSET, DISALLOWED_USER_AGENTS, FORCE_SCRIPT_NAME, INTERNAL_IPS, MIDDLEWARE; Security: SECURE_CONTENT_TYPE_NOSNIFF, SECURE_CROSS_ORIGIN_OPENER_POLICY, SECURE_HSTS_INCLUDE_SUBDOMAINS, SECURE_HSTS_PRELOAD, SECURE_HSTS_SECONDS, SECURE_PROXY_SSL_HEADER, SECURE_REDIRECT_EXEMPT, SECURE_REFERRER_POLICY, SECURE_SSL_HOST, SECURE_SSL_REDIRECT; SIGNING_BACKEND, USE_X_FORWARDED_HOST, USE_X_FORWARDED_PORT, WSGI_APPLICATION
Logging: LOGGING, LOGGING_CONFIG
Models: ABSOLUTE_URL_OVERRIDES, FIXTURE_DIRS, INSTALLED_APPS
Security: Cross Site Request Forgery Protection: CSRF_COOKIE_DOMAIN, CSRF_COOKIE_NAME, CSRF_COOKIE_PATH, CSRF_COOKIE_SAMESITE, CSRF_COOKIE_SECURE, CSRF_FAILURE_VIEW, CSRF_HEADER_NAME, CSRF_TRUSTED_ORIGINS, CSRF_USE_SESSIONS; SECRET_KEY, X_FRAME_OPTIONS
Serialization: DEFAULT_CHARSET, SERIALIZATION_MODULES
Templates: TEMPLATES
Testing: Database: TEST; TEST_NON_SERIALIZED_APPS, TEST_RUNNER
URLs: APPEND_SLASH, PREPEND_WWW, ROOT_URLCONF
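To tie several of the settings documented above together, here is a compact, purely illustrative excerpt of a settings module; every value is an example chosen for the sketch, not a statement about Django's defaults:

# settings.py - illustrative sketch combining settings discussed above
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],                # extra template source directories
        "APP_DIRS": True,          # also search installed apps' templates/ directories
        "OPTIONS": {},             # backend-specific options
    },
]
TIME_ZONE = "UTC"
USE_I18N = True
USE_TZ = True                      # store and handle timezone-aware datetimes
SESSION_ENGINE = "django.contrib.sessions.backends.db"
SESSION_COOKIE_SECURE = True       # only send the session cookie over HTTPS
X_FRAME_OPTIONS = "DENY"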
doc_1781
Permute the dimensions of an array. This function is exactly equivalent to numpy.transpose. See also numpy.transpose Equivalent function in top-level NumPy module. Examples >>> import numpy.ma as ma >>> x = ma.arange(4).reshape((2,2)) >>> x[1, 1] = ma.masked >>> x masked_array( data=[[0, 1], [2, --]], mask=[[False, False], [False, True]], fill_value=999999) >>> ma.transpose(x) masked_array( data=[[0, 2], [1, --]], mask=[[False, False], [False, True]], fill_value=999999)
doc_1782
os.spawnle(mode, path, ..., env) os.spawnlp(mode, file, ...) os.spawnlpe(mode, file, ..., env) os.spawnv(mode, path, args) os.spawnve(mode, path, args, env) os.spawnvp(mode, file, args) os.spawnvpe(mode, file, args, env) Execute the program path in a new process. (Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions. Check especially the Replacing Older Functions with the subprocess Module section.) If mode is P_NOWAIT, this function returns the process id of the new process; if mode is P_WAIT, returns the process’s exit code if it exits normally, or -signal, where signal is the signal that killed the process. On Windows, the process id will actually be the process handle, so can be used with the waitpid() function. Note on VxWorks, this function doesn’t return -signal when the new process is killed. Instead it raises OSError exception. The “l” and “v” variants of the spawn* functions differ in how command-line arguments are passed. The “l” variants are perhaps the easiest to work with if the number of parameters is fixed when the code is written; the individual parameters simply become additional parameters to the spawnl*() functions. The “v” variants are good when the number of parameters is variable, with the arguments being passed in a list or tuple as the args parameter. In either case, the arguments to the child process must start with the name of the command being run. The variants which include a second “p” near the end (spawnlp(), spawnlpe(), spawnvp(), and spawnvpe()) will use the PATH environment variable to locate the program file. When the environment is being replaced (using one of the spawn*e variants, discussed in the next paragraph), the new environment is used as the source of the PATH variable. The other variants, spawnl(), spawnle(), spawnv(), and spawnve(), will not use the PATH variable to locate the executable; path must contain an appropriate absolute or relative path. For spawnle(), spawnlpe(), spawnve(), and spawnvpe() (note that these all end in “e”), the env parameter must be a mapping which is used to define the environment variables for the new process (they are used instead of the current process’ environment); the functions spawnl(), spawnlp(), spawnv(), and spawnvp() all cause the new process to inherit the environment of the current process. Note that keys and values in the env dictionary must be strings; invalid keys or values will cause the function to fail, with a return value of 127. As an example, the following calls to spawnlp() and spawnvpe() are equivalent: import os os.spawnlp(os.P_WAIT, 'cp', 'cp', 'index.html', '/dev/null') L = ['cp', 'index.html', '/dev/null'] os.spawnvpe(os.P_WAIT, 'cp', L, os.environ) Raises an auditing event os.spawn with arguments mode, path, args, env. Availability: Unix, Windows. spawnlp(), spawnlpe(), spawnvp() and spawnvpe() are not available on Windows. spawnle() and spawnve() are not thread-safe on Windows; we advise you to use the subprocess module instead. Changed in version 3.6: Accepts a path-like object.
doc_1783
Find artist objects. Recursively find all Artist instances contained in the artist. Parameters match A filter criterion for the matches. This can be None: Return all objects contained in artist. A function with signature def match(artist: Artist) -> bool. The result will only contain artists for which the function returns True. A class instance: e.g., Line2D. The result will only contain artists of this class or its subclasses (isinstance check). include_self: bool Include self in the list to be checked for a match. Returns list of Artist
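A small sketch of both matching styles described above, a class and a callable filter; the figure contents and the line-width threshold are arbitrary examples:

import matplotlib.lines as mlines
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
ax.plot([0, 1], [1, 0], linewidth=3)

# Match by class: every Line2D contained in the figure (including tick lines).
lines = fig.findobj(mlines.Line2D)

# Match by callable: only the thick lines.
thick = fig.findobj(lambda artist: isinstance(artist, mlines.Line2D)
                    and artist.get_linewidth() > 2)
print(len(lines), len(thick))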
doc_1784
Implements L-BFGS algorithm, heavily inspired by minFunc <https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html>. Warning This optimizer doesn’t support per-parameter options and parameter groups (there can be only one). Warning Right now all parameters have to be on a single device. This will be improved in the future. Note This is a very memory intensive optimizer (it requires additional param_bytes * (history_size + 1) bytes). If it doesn’t fit in memory try reducing the history size, or use a different algorithm. Parameters lr (float) – learning rate (default: 1) max_iter (int) – maximal number of iterations per optimization step (default: 20) max_eval (int) – maximal number of function evaluations per optimization step (default: max_iter * 1.25). tolerance_grad (float) – termination tolerance on first order optimality (default: 1e-5). tolerance_change (float) – termination tolerance on function value/parameter changes (default: 1e-9). history_size (int) – update history size (default: 100). line_search_fn (str) – either ‘strong_wolfe’ or None (default: None). step(closure) [source] Performs a single optimization step. Parameters closure (callable) – A closure that reevaluates the model and returns the loss.
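Because L-BFGS may re-evaluate the objective several times per optimization step, step() must be given a closure. A minimal sketch with an arbitrary linear model and random data:

import torch

model = torch.nn.Linear(3, 1)
x, y = torch.randn(32, 3), torch.randn(32, 1)
optimizer = torch.optim.LBFGS(model.parameters(), lr=1.0, max_iter=20)

def closure():
    # Re-evaluated by L-BFGS as often as its line search needs.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

for _ in range(5):
    optimizer.step(closure)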
doc_1785
Fit the model according to the given training data. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yarray-like of shape (n_samples,) Target vector relative to X. sample_weightarray-like of shape (n_samples,), default=None Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.18. Returns selfobject An instance of the estimator.
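The entry above does not name the estimator this fit method belongs to; purely as an illustration, the same call pattern with scikit-learn's LogisticRegression and explicit sample weights looks like this:

import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0], [3.0, 0.5]])
y = np.array([0, 0, 1, 1])
weights = np.array([1.0, 1.0, 2.0, 2.0])   # give the second class twice the weight

clf = LogisticRegression().fit(X, y, sample_weight=weights)
print(clf.predict([[1.5, 0.5]]))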
doc_1786
Register a custom template test. Works exactly like the template_test() decorator. Changelog New in version 0.10. Parameters name (Optional[str]) – the optional name of the test, otherwise the function name will be used. f (Callable[[Any], bool]) – the function to register as the test. Return type None
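Assuming this describes Flask's add_template_test() (the non-decorator counterpart of template_test()), a minimal sketch; the helper function and the test name "multiple_of_three" are invented for the example:

from flask import Flask, render_template_string

app = Flask(__name__)

def is_multiple_of_three(n):
    return n % 3 == 0

# Register without the decorator; without name= the function name would be used.
app.add_template_test(is_multiple_of_three, name="multiple_of_three")

with app.app_context():
    print(render_template_string("{{ 9 is multiple_of_three }}"))   # -> True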
doc_1787
See Migration guide for more details. tf.compat.v1.raw_ops.Iterator tf.raw_ops.Iterator( shared_name, container, output_types, output_shapes, name=None ) Args shared_name A string. container A string. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type resource.
doc_1788
Map new values to integer identifiers. Parameters dataiterable of str or bytes Raises TypeError If elements in data are neither str nor bytes.
doc_1789
Add node n while updating the maximum node id. See also networkx.Graph.add_node().
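The snippet below only exercises the plain networkx.Graph.add_node() that this method defers to; the extra max-node-id bookkeeping belongs to the specialized class documented here and is not shown:

import networkx as nx

G = nx.Graph()
G.add_node(0)                     # basic usage of Graph.add_node
G.add_node(1, color="blue")       # nodes can carry attributes
print(G.nodes(data=True))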
doc_1790
At the end of every ufunc, this method is called on the input object with the highest array priority, or the output object if one was specified. The ufunc-computed array is passed in and whatever is returned is passed to the user. Subclasses inherit a default implementation of this method, which transforms the array into a new instance of the object’s class. Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the user. Note For ufuncs, it is hoped to eventually deprecate this method in favour of __array_ufunc__.
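A minimal sketch of an ndarray subclass that uses __array_wrap__ to re-wrap ufunc results and remember which ufunc produced them; the subclass name and the last_ufunc attribute are invented for the example, and the exact contents of the context argument vary between NumPy versions:

import numpy as np

class LoggedArray(np.ndarray):
    """ndarray subclass that tags ufunc results via __array_wrap__."""

    def __array_wrap__(self, out_arr, context=None, **kwargs):
        # Re-wrap the result as LoggedArray and, when a context tuple is
        # supplied, record the name of the ufunc that produced it.
        result = out_arr.view(LoggedArray)
        result.last_ufunc = context[0].__name__ if context else None
        return result

arr = np.arange(4).view(LoggedArray)
doubled = np.multiply(arr, 2)
print(type(doubled).__name__, getattr(doubled, "last_ufunc", None))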
doc_1791
List of type map file names commonly installed. These files are typically named mime.types and are installed in different locations by different packages.
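A short sketch that inspects knownfiles and re-initialises the type map with an extra, hypothetical local map file (files that do not exist should simply be skipped):

import mimetypes

print(mimetypes.knownfiles)        # e.g. ['/etc/mime.types', ...] depending on the platform
# "./extra-mime.types" is a hypothetical local map file.
mimetypes.init(files=list(mimetypes.knownfiles) + ["./extra-mime.types"])
print(mimetypes.guess_type("archive.tar.gz"))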
doc_1792
Create a disk level set with binary values. Parameters image_shape: tuple of positive integers Shape of the image. center: tuple of positive integers, optional Coordinates of the center of the disk given in (row, column). If not given, it defaults to the center of the image. radius: float, optional Radius of the disk. If not given, it is set to 75% of the smallest image dimension. Returns out: array with shape image_shape Binary level set of the disk with the given radius and center. See also checkerboard_level_set
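A minimal sketch, assuming the function is the one exposed as skimage.segmentation.disk_level_set; the shape, center and radius below are arbitrary:

from skimage.segmentation import disk_level_set

init_ls = disk_level_set((128, 128), center=(64, 64), radius=30)
print(init_ls.shape, init_ls.dtype, init_ls.sum())   # pixel count inside the disk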
doc_1793
[Deprecated] Notes Deprecated since version 3.5:
doc_1794
See Migration guide for more details. tf.compat.v1.raw_ops.ExperimentalTakeWhileDataset tf.raw_ops.ExperimentalTakeWhileDataset( input_dataset, other_arguments, predicate, output_types, output_shapes, name=None ) The predicate function must return a scalar boolean and accept the following arguments: One tensor for each component of an element of input_dataset. One tensor for each value in other_arguments. Args input_dataset A Tensor of type variant. other_arguments A list of Tensor objects. A list of tensors, typically values that were captured when building a closure for predicate. predicate A function decorated with @Defun. A function returning a scalar boolean. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
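This raw op is normally reached through the higher-level tf.data API rather than called directly; a sketch using the tf.data.experimental.take_while transformation, which builds on this dataset op and stops as soon as the predicate returns False:

import tensorflow as tf

ds = tf.data.Dataset.range(10).apply(
    tf.data.experimental.take_while(lambda x: x < 5)
)
print(list(ds.as_numpy_iterator()))   # [0, 1, 2, 3, 4]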
doc_1795
Called when this tool gets used. This method is called by ToolManager.trigger_tool. Parameters eventEvent The canvas event that caused this tool to be called. senderobject Object that requested the tool to be triggered. dataobject Extra data.
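A sketch of a custom tool overriding trigger(); it assumes an interactive backend with the (provisional) ToolManager-based toolbar enabled, and the tool name, keymap and behaviour are arbitrary:

import matplotlib
matplotlib.rcParams["toolbar"] = "toolmanager"   # opt in to the ToolManager toolbar
import matplotlib.pyplot as plt
from matplotlib.backend_tools import ToolBase

class ListTools(ToolBase):
    """Print every registered tool name when triggered (keyboard: 'm')."""
    default_keymap = "m"
    description = "List Tools"

    def trigger(self, sender, event, data=None):
        for name in sorted(self.toolmanager.tools):
            print(name)

fig = plt.figure()
fig.canvas.manager.toolmanager.add_tool("List", ListTools)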
doc_1796
This takes a binary file for writing a pickle data stream. The optional protocol argument, an integer, tells the pickler to use the given protocol; supported protocols are 0 to HIGHEST_PROTOCOL. If not specified, the default is DEFAULT_PROTOCOL. If a negative number is specified, HIGHEST_PROTOCOL is selected. The file argument must have a write() method that accepts a single bytes argument. It can thus be an on-disk file opened for binary writing, an io.BytesIO instance, or any other custom object that meets this interface. If fix_imports is true and protocol is less than 3, pickle will try to map the new Python 3 names to the old module names used in Python 2, so that the pickle data stream is readable with Python 2. If buffer_callback is None (the default), buffer views are serialized into file as part of the pickle stream. If buffer_callback is not None, then it can be called any number of times with a buffer view. If the callback returns a false value (such as None), the given buffer is out-of-band; otherwise the buffer is serialized in-band, i.e. inside the pickle stream. It is an error if buffer_callback is not None and protocol is None or smaller than 5. Changed in version 3.8: The buffer_callback argument was added. dump(obj) Write the pickled representation of obj to the open file object given in the constructor. persistent_id(obj) Do nothing by default. This exists so a subclass can override it. If persistent_id() returns None, obj is pickled as usual. Any other value causes Pickler to emit the returned value as a persistent ID for obj. The meaning of this persistent ID should be defined by Unpickler.persistent_load(). Note that the value returned by persistent_id() cannot itself have a persistent ID. See Persistence of External Objects for details and examples of uses. dispatch_table A pickler object’s dispatch table is a registry of reduction functions of the kind which can be declared using copyreg.pickle(). It is a mapping whose keys are classes and whose values are reduction functions. A reduction function takes a single argument of the associated class and should conform to the same interface as a __reduce__() method. By default, a pickler object will not have a dispatch_table attribute, and it will instead use the global dispatch table managed by the copyreg module. However, to customize the pickling for a specific pickler object one can set the dispatch_table attribute to a dict-like object. Alternatively, if a subclass of Pickler has a dispatch_table attribute then this will be used as the default dispatch table for instances of that class. See Dispatch Tables for usage examples. New in version 3.3. reducer_override(self, obj) Special reducer that can be defined in Pickler subclasses. This method has priority over any reducer in the dispatch_table. It should conform to the same interface as a __reduce__() method, and can optionally return NotImplemented to fallback on dispatch_table-registered reducers to pickle obj. For a detailed example, see Custom Reduction for Types, Functions, and Other Objects. New in version 3.8. fast Deprecated. Enable fast mode if set to a true value. The fast mode disables the usage of memo, therefore speeding the pickling process by not generating superfluous PUT opcodes. It should not be used with self-referential objects, doing otherwise will cause Pickler to recurse infinitely. Use pickletools.optimize() if you need more compact pickles.
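A minimal persistent_id() sketch; the ExternalBlob class and the ("blob", key) ID scheme are invented for illustration, and a matching Unpickler.persistent_load() would be needed to read the stream back:

import io
import pickle

class ExternalBlob:
    """Toy stand-in for data stored outside the pickle stream."""
    def __init__(self, key):
        self.key = key

class BlobPickler(pickle.Pickler):
    def persistent_id(self, obj):
        # Emit a persistent ID for blobs; everything else is pickled as usual.
        if isinstance(obj, ExternalBlob):
            return ("blob", obj.key)
        return None

buf = io.BytesIO()
BlobPickler(buf, protocol=pickle.HIGHEST_PROTOCOL).dump({"payload": ExternalBlob("abc123")})
print(len(buf.getvalue()), "bytes written")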
doc_1797
Set the artist's clip path. Parameters pathPatch or Path or TransformedPath or None The clip path. If given a Path, transform must be provided as well. If None, a previously set clip path is removed. transformTransform, optional Only used if path is a Path, in which case the given Path is converted to a TransformedPath using transform. Notes For efficiency, if path is a Rectangle this method will set the clipping box to the corresponding rectangle and set the clipping path to None. For technical reasons (support of set), a tuple (path, transform) is also accepted as a single positional parameter.
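A short sketch that clips an image to a circular Patch; because a Patch carries its own path and transform, no separate transform argument is needed. The data and geometry are arbitrary:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(64, 64), extent=(0, 1, 0, 1))

# Clip the image to a circle defined in data coordinates.
clip = Circle((0.5, 0.5), 0.4, transform=ax.transData)
im.set_clip_path(clip)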
doc_1798
Create new MultiIndex from current that removes unused levels. Unused level(s) means levels that are not expressed in the labels. The resulting MultiIndex will have the same outward appearance, meaning the same .values and ordering. It will also be .equals() to the original. Returns MultiIndex Examples >>> mi = pd.MultiIndex.from_product([range(2), list('ab')]) >>> mi MultiIndex([(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')], ) >>> mi[2:] MultiIndex([(1, 'a'), (1, 'b')], ) The 0 from the first level is not represented and can be removed >>> mi2 = mi[2:].remove_unused_levels() >>> mi2.levels FrozenList([[1], ['a', 'b']])
doc_1799
Alias for set_linewidth.