_id | text | title
|---|---|---|
doc_2400 |
Remove missing values. See the User Guide for more on which values are considered missing, and how to work with missing data. Parameters
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
Determine if rows or columns which contain missing values are removed. 0, or ‘index’ : Drop rows which contain missing values. 1, or ‘columns’ : Drop columns which contain missing values. Changed in version 1.0.0: Passing a tuple or list to drop on multiple axes is no longer supported; only a single axis is allowed.
how:{‘any’, ‘all’}, default ‘any’
Determine if row or column is removed from DataFrame, when we have at least one NA or all NA. ‘any’ : If any NA values are present, drop that row or column. ‘all’ : If all values are NA, drop that row or column.
thresh:int, optional
Require that many non-NA values.
subset:column label or sequence of labels, optional
Labels along other axis to consider, e.g. if you are dropping rows these would be a list of columns to include.
inplace:bool, default False
If True, do operation inplace and return None. Returns
DataFrame or None
DataFrame with NA entries dropped from it or None if inplace=True. See also DataFrame.isna
Indicate missing values. DataFrame.notna
Indicate existing (non-missing) values. DataFrame.fillna
Replace missing values. Series.dropna
Drop missing values. Index.dropna
Drop missing indices. Examples
>>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'],
... "toy": [np.nan, 'Batmobile', 'Bullwhip'],
... "born": [pd.NaT, pd.Timestamp("1940-04-25"),
... pd.NaT]})
>>> df
name toy born
0 Alfred NaN NaT
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
Drop the rows where at least one element is missing.
>>> df.dropna()
name toy born
1 Batman Batmobile 1940-04-25
Drop the columns where at least one element is missing.
>>> df.dropna(axis='columns')
name
0 Alfred
1 Batman
2 Catwoman
Drop the rows where all elements are missing.
>>> df.dropna(how='all')
name toy born
0 Alfred NaN NaT
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
Keep only the rows with at least 2 non-NA values.
>>> df.dropna(thresh=2)
name toy born
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
Define in which columns to look for missing values.
>>> df.dropna(subset=['name', 'toy'])
name toy born
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
Keep the DataFrame with valid entries in the same variable.
>>> df.dropna(inplace=True)
>>> df
name toy born
1 Batman Batmobile 1940-04-25 | |
doc_2401 | sklearn.utils.sparsefuncs.inplace_swap_row(X, m, n) [source]
Swaps two rows of a CSC/CSR matrix in-place. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix whose two rows are to be swapped. It should be of CSR or CSC format.
mint
Index of the row of X to be swapped.
nint
Index of the row of X to be swapped. | |
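A minimal usage sketch for the inplace_swap_row entry above; the matrix contents are illustrative:
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from sklearn.utils.sparsefuncs import inplace_swap_row
>>> X = csr_matrix(np.array([[1., 0.], [0., 2.], [3., 0.]]))
>>> inplace_swap_row(X, 0, 2)  # swaps rows 0 and 2 in place, returns None
>>> X.toarray()
array([[3., 0.],
       [0., 2.],
       [1., 0.]])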
doc_2402 | A decorator for running a function in a specific timezone, correctly resetting it after it has finished. | |
doc_2403 | Connect a local manager object to a remote manager process: >>> from multiprocessing.managers import BaseManager
>>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
>>> m.connect() | |
doc_2404 |
A debug function to draw a rectangle around the bounding box returned by an artist's Artist.get_window_extent to test whether the artist is returning the correct bbox. | |
doc_2405 | A custom ref subclass which simulates a weak reference to a bound method (i.e., a method defined on a class and looked up on an instance). Since a bound method is ephemeral, a standard weak reference cannot keep hold of it. WeakMethod has special code to recreate the bound method until either the object or the original function dies: >>> class C:
... def method(self):
... print("method called!")
...
>>> c = C()
>>> r = weakref.ref(c.method)
>>> r()
>>> r = weakref.WeakMethod(c.method)
>>> r()
<bound method C.method of <__main__.C object at 0x7fc859830220>>
>>> r()()
method called!
>>> del c
>>> gc.collect()
0
>>> r()
>>>
New in version 3.4. | |
doc_2406 | tf.experimental.numpy.ndarray.clip
tf.experimental.numpy.clip(
a, a_min, a_max
)
Unsupported arguments: out, kwargs. See the NumPy documentation for numpy.clip. | |
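A short sketch of the supported call pattern for the clip entry above, assuming the tf.experimental.numpy API is available; the values are illustrative:
import tensorflow.experimental.numpy as tnp

x = tnp.asarray([-2.0, 0.5, 3.0])
print(tnp.clip(x, 0.0, 1.0))  # clips elementwise to [0.0, 1.0]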
doc_2407 | A UserWarning subclass issued when password input may be echoed. | |
doc_2408 |
Alias for get_linestyle. | |
doc_2409 |
Return the values (min, max) that are mapped to the colormap limits. | |
doc_2410 | sklearn.compose.make_column_selector(pattern=None, *, dtype_include=None, dtype_exclude=None) [source]
Create a callable to select columns to be used with ColumnTransformer. make_column_selector can select columns based on datatype or on column names with a regex. When using multiple selection criteria, all criteria must match for a column to be selected. Parameters
patternstr, default=None
Columns whose names contain this regex pattern will be included. If None, column selection will not be based on the pattern.
dtype_includecolumn dtype or list of column dtypes, default=None
A selection of dtypes to include. For more details, see pandas.DataFrame.select_dtypes.
dtype_excludecolumn dtype or list of column dtypes, default=None
A selection of dtypes to exclude. For more details, see pandas.DataFrame.select_dtypes. Returns
selectorcallable
Callable for column selection to be used by a ColumnTransformer. See also
ColumnTransformer
Class that allows combining the outputs of multiple transformer objects used on column subsets of the data into a single feature space. Examples >>> from sklearn.preprocessing import StandardScaler, OneHotEncoder
>>> from sklearn.compose import make_column_transformer
>>> from sklearn.compose import make_column_selector
>>> import numpy as np
>>> import pandas as pd
>>> X = pd.DataFrame({'city': ['London', 'London', 'Paris', 'Sallisaw'],
... 'rating': [5, 3, 4, 5]})
>>> ct = make_column_transformer(
... (StandardScaler(),
... make_column_selector(dtype_include=np.number)), # rating
... (OneHotEncoder(),
... make_column_selector(dtype_include=object))) # city
>>> ct.fit_transform(X)
array([[ 0.90453403, 1. , 0. , 0. ],
[-1.50755672, 1. , 0. , 0. ],
[-0.30151134, 0. , 1. , 0. ],
[ 0.90453403, 0. , 0. , 1. ]])
Examples using sklearn.compose.make_column_selector
Categorical Feature Support in Gradient Boosting
Column Transformer with Mixed Types | |
doc_2411 | Return a month’s calendar in a multi-line string. If w is provided, it specifies the width of the date columns, which are centered. If l is given, it specifies the number of lines that each week will use. Depends on the first weekday as specified in the constructor or set by the setfirstweekday() method. | |
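The entry above appears to describe calendar.TextCalendar.formatmonth; a minimal sketch with an illustrative year, month, and column width:
import calendar

cal = calendar.TextCalendar(calendar.SUNDAY)  # first weekday set in the constructor
print(cal.formatmonth(2024, 1, w=3, l=1))     # 3-character date columns, 1 line per week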
doc_2412 | Rename the file or directory src to dst. If dst exists, the operation will fail with an OSError subclass in a number of cases: On Windows, if dst exists a FileExistsError is always raised. On Unix, if src is a file and dst is a directory or vice-versa, an IsADirectoryError or a NotADirectoryError will be raised respectively. If both are directories and dst is empty, dst will be silently replaced. If dst is a non-empty directory, an OSError is raised. If both are files, dst will be replaced silently if the user has permission. The operation may fail on some Unix flavors if src and dst are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement). This function can support specifying src_dir_fd and/or dst_dir_fd to supply paths relative to directory descriptors. If you want cross-platform overwriting of the destination, use replace(). Raises an auditing event os.rename with arguments src, dst, src_dir_fd, dst_dir_fd. New in version 3.3: The src_dir_fd and dst_dir_fd arguments. Changed in version 3.6: Accepts a path-like object for src and dst. | |
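A minimal sketch of the basic os.rename call described above; the paths are illustrative, and replace() remains the cross-platform way to overwrite:
import os

os.rename("draft.txt", "final.txt")  # on Windows this raises FileExistsError if dst exists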
doc_2413 | A mixin that can be used to retrieve and provide parsing information for a day component of a date. Methods and Attributes
day_format
The strftime() format to use when parsing the day. By default, this is '%d'.
day
Optional The value for the day, as a string. By default, set to None, which means the day will be determined using other means.
get_day_format()
Returns the strftime() format to use when parsing the day. Returns day_format by default.
get_day()
Returns the day for which this view will display data, as a string. Tries the following sources, in order: The value of the DayMixin.day attribute. The value of the day argument captured in the URL pattern. The value of the day GET query argument. Raises a 404 if no valid day specification can be found.
get_next_day(date)
Returns a date object containing the next valid day after the date provided. This function can also return None or raise an Http404 exception, depending on the values of allow_empty and allow_future.
get_previous_day(date)
Returns a date object containing the previous valid day. This function can also return None or raise an Http404 exception, depending on the values of allow_empty and allow_future. | |
doc_2414 | Run a Surface.scroll example that shows a magnified image. scroll.main(image_file=None) -> None. This example shows a scrollable image that has a zoom factor of eight. It uses the Surface.scroll() function to shift the image on the display surface. A clip rectangle protects a margin area. If called as a function, the example accepts an optional image file path. If run as a program, it takes an optional file path command line argument. If no file is provided, a default image file is used. When running, click on a black triangle to move one pixel in the direction the triangle points, or use the arrow keys. Close the window or press ESC to quit. | |
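A minimal sketch of both invocation styles described above, assuming the module lives at pygame.examples.scroll:
from pygame.examples import scroll

scroll.main()  # or, from the shell: python -m pygame.examples.scroll [image_file]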
doc_2415 | See Migration guide for more details. tf.compat.v1.raw_ops.Sinh
tf.raw_ops.Sinh(
x, name=None
)
Given an input tensor, this function computes hyperbolic sine of every element in the tensor. Input range is [-inf,inf] and output range is [-inf,inf]. x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")])
tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf]
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_2416 | A proxy to the application handling the current request. This is useful to access the application without needing to import it, or if it can’t be imported, such as when using the application factory pattern or in blueprints and extensions. This is only available when an application context is pushed. This happens automatically during requests and CLI commands. It can be controlled manually with app_context(). This is a proxy. See Notes On Proxies for more information. | |
doc_2417 | Register func as a function to be executed at termination. Any optional arguments that are to be passed to func must be passed as arguments to register(). It is possible to register the same function and arguments more than once. At normal program termination (for instance, if sys.exit() is called or the main module’s execution completes), all functions registered are called in last in, first out order. The assumption is that lower level modules will normally be imported before higher level modules and thus must be cleaned up later. If an exception is raised during execution of the exit handlers, a traceback is printed (unless SystemExit is raised) and the exception information is saved. After all exit handlers have had a chance to run the last exception to be raised is re-raised. This function returns func, which makes it possible to use it as a decorator. | |
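Because register() returns func, it works directly as a decorator; a minimal sketch:
import atexit

@atexit.register
def goodbye():
    print("shutting down")  # runs at normal interpreter termination

atexit.register(print, "bye,", "world")  # optional arguments are passed through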
doc_2418 | Return a shallow copy of the dictionary. | |
doc_2419 | Sorts the elements of the input tensor along a given dimension in ascending order by value. If dim is not given, the last dimension of the input is chosen. If descending is True then the elements are sorted in descending order by value. A namedtuple of (values, indices) is returned, where the values are the sorted values and indices are the indices of the elements in the original input tensor. Parameters
input (Tensor) – the input tensor.
dim (int, optional) – the dimension to sort along
descending (bool, optional) – controls the sorting order (ascending or descending) Keyword Arguments
out (tuple, optional) – the output tuple of (Tensor, LongTensor) that can be optionally given to be used as output buffers Example: >>> x = torch.randn(3, 4)
>>> sorted, indices = torch.sort(x)
>>> sorted
tensor([[-0.2162, 0.0608, 0.6719, 2.3332],
[-0.5793, 0.0061, 0.6058, 0.9497],
[-0.5071, 0.3343, 0.9553, 1.0960]])
>>> indices
tensor([[ 1, 0, 2, 3],
[ 3, 1, 0, 2],
[ 0, 3, 1, 2]])
>>> sorted, indices = torch.sort(x, 0)
>>> sorted
tensor([[-0.5071, -0.2162, 0.6719, -0.5793],
[ 0.0608, 0.0061, 0.9497, 0.3343],
[ 0.6058, 0.9553, 1.0960, 2.3332]])
>>> indices
tensor([[ 2, 0, 0, 1],
[ 0, 1, 1, 2],
[ 1, 2, 2, 0]]) | |
doc_2420 |
Bases: mpl_toolkits.axes_grid1.axes_grid.Grid Parameters
figFigure
The parent figure.
rect(float, float, float, float) or int
The axes position, as a (left, bottom, width, height) tuple or as a three-digit subplot position code (e.g., "121").
nrows_ncols(int, int)
Number of rows and columns in the grid.
ngridsint or None, default: None
If not None, only the first ngrids axes in the grid are created.
direction{"row", "column"}, default: "row"
Whether axes are created in row-major ("row by row") or column-major order ("column by column"). This also affects the order in which axes are accessed using indexing (grid[index]).
axes_padfloat or (float, float), default: 0.02
Padding or (horizontal padding, vertical padding) between axes, in inches.
share_allbool, default: False
Whether all axes share their x- and y-axis.
aspectbool, default: True
Whether the axes aspect ratio follows the aspect ratio of the data limits.
label_mode{"L", "1", "all"}, default: "L"
Determines which axes will get tick labels: "L": All axes on the left column get vertical tick labels; all axes on the bottom row get horizontal tick labels. "1": Only the bottom left axes is labelled. "all": all axes are labelled.
cbar_mode{"each", "single", "edge", None}, default: None
Whether to create a colorbar for "each" axes, a "single" colorbar for the entire grid, colorbars only for axes on the "edge" determined by cbar_location, or no colorbars. The colorbars are stored in the cbar_axes attribute.
cbar_location{"left", "right", "bottom", "top"}, default: "right"
cbar_padfloat, default: None
Padding between the image axes and the colorbar axes.
cbar_sizesize specification (see Size.from_any), default: "5%"
Colorbar size.
cbar_set_caxbool, default: True
If True, each axes in the grid has a cax attribute that is bound to associated cbar_axes.
axes_classsubclass of matplotlib.axes.Axes, default: None | |
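The parameters above match mpl_toolkits.axes_grid1.ImageGrid; a minimal sketch with illustrative random images and a single shared colorbar:
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1 import ImageGrid

fig = plt.figure()
grid = ImageGrid(fig, 111, nrows_ncols=(2, 2), axes_pad=0.1,
                 cbar_mode="single", cbar_location="right")
for ax in grid:
    im = ax.imshow(np.random.rand(10, 10))
grid.cbar_axes[0].colorbar(im)  # colorbars live in the cbar_axes attribute
plt.show()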
doc_2421 | Copy source location (lineno, col_offset, end_lineno, and end_col_offset) from old_node to new_node if possible, and return new_node. | |
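A minimal sketch of the entry above, which appears to be ast.copy_location:
>>> import ast
>>> tree = ast.parse("x = 1")
>>> old = tree.body[0].value              # the Constant node for 1
>>> new = ast.copy_location(ast.Constant(value=2), old)
>>> (new.lineno, new.col_offset)
(1, 4)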
doc_2422 | If this attribute evaluates to true, events logged to this logger will be passed to the handlers of higher level (ancestor) loggers, in addition to any handlers attached to this logger. Messages are passed directly to the ancestor loggers’ handlers - neither the level nor filters of the ancestor loggers in question are considered. If this evaluates to false, logging messages are not passed to the handlers of ancestor loggers. The constructor sets this attribute to True. Note If you attach a handler to a logger and one or more of its ancestors, it may emit the same record multiple times. In general, you should not need to attach a handler to more than one logger - if you just attach it to the appropriate logger which is highest in the logger hierarchy, then it will see all events logged by all descendant loggers, provided that their propagate setting is left set to True. A common scenario is to attach handlers only to the root logger, and to let propagation take care of the rest. | |
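A minimal sketch of the common pattern described above; the logger names are illustrative:
import logging

logging.basicConfig()                  # attach a handler to the root logger only
noisy = logging.getLogger("app.vendor")
noisy.propagate = False                # its records no longer reach the root handlers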
doc_2423 |
Applies Instance Normalization for each channel in each data sample in a batch. See InstanceNorm1d, InstanceNorm2d, InstanceNorm3d for details. | |
doc_2424 | tf.compat.v1.profiler.Profiler(
graph=None, op_log=None
)
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/README.md Typical use case:
# Currently we are only allowed to create 1 profiler per process.
profiler = Profiler(sess.graph)
for i in range(total_steps):
if i % 10000 == 0:
run_meta = tf.compat.v1.RunMetadata()
_ = sess.run(...,
options=tf.compat.v1.RunOptions(
trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_meta)
profiler.add_step(i, run_meta)
# Profile the parameters of your model.
profiler.profile_name_scope(options=(option_builder.ProfileOptionBuilder
.trainable_variables_parameter()))
# Or profile the timing of your model operations.
opts = option_builder.ProfileOptionBuilder.time_and_memory()
profiler.profile_operations(options=opts)
# Or you can generate a timeline:
opts = (option_builder.ProfileOptionBuilder(
option_builder.ProfileOptionBuilder.time_and_memory())
.with_step(i)
.with_timeline_output(filename).build())
profiler.profile_graph(options=opts)
else:
_ = sess.run(...)
# Auto detect problems and generate advice.
profiler.advise()
Args
graph tf.Graph. If None and eager execution is not enabled, use default graph.
op_log optional. tensorflow::tfprof::OpLogProto proto. Used to define extra op types. Methods add_step View source
add_step(
step, run_meta
)
Add statistics of a step.
Args
step int, An id used to group one or more different run_meta together. When profiling with the profile_xxx APIs, user can use the step id in the options to profile these run_meta together.
run_meta RunMetadata proto that contains statistics of a session run. advise View source
advise(
options
)
Automatically detect problems and generate reports.
Args
options A dict of options. See ALL_ADVICE example above.
Returns An Advise proto that contains the reports from all checkers.
profile_graph View source
profile_graph(
options
)
Profile the statistics of graph nodes, organized by dataflow graph.
Args
options A dict of options. See core/profiler/g3doc/options.md.
Returns a GraphNodeProto that records the results.
profile_name_scope View source
profile_name_scope(
options
)
Profile the statistics of graph nodes, organized by name scope.
Args
options A dict of options. See core/profiler/g3doc/options.md.
Returns a GraphNodeProto that records the results.
profile_operations View source
profile_operations(
options
)
Profile the statistics of the Operation types (e.g. MatMul, Conv2D).
Args
options A dict of options. See core/profiler/g3doc/options.md.
Returns a MultiGraphNodeProto that records the results.
profile_python View source
profile_python(
options
)
Profile the statistics of the Python codes. By default, it shows the call stack from root. To avoid redundant output, you may use options to filter as below options['show_name_regexes'] = ['.my_code.py.']
Args
options A dict of options. See core/profiler/g3doc/options.md.
Returns a MultiGraphNodeProto that records the results.
serialize_to_string View source
serialize_to_string()
Serialize the ProfileProto to a binary string. Users can write it to file for offline analysis by tfprof commandline or graphical interface.
Returns ProfileProto binary string. | |
doc_2425 |
Compute min of group values. Parameters
numeric_only:bool, default False
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.
min_count:int, default -1
The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns
Series or DataFrame
Computed min of values within each group. | |
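A minimal sketch for the groupby min entry above; the data are illustrative:
>>> import pandas as pd
>>> df = pd.DataFrame({"key": ["a", "a", "b"], "val": [3, 1, 2]})
>>> df.groupby("key").min()
     val
key
a      1
b      2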
doc_2426 | Same as filter_horizontal, but uses a vertical display of the filter interface with the box of unselected options appearing above the box of selected options. | |
doc_2427 |
Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback
remove_callback | |
doc_2428 |
Use ghostscript's ps2pdf and xpdf's/poppler's pdftops to distill a file. This yields smaller files without illegal encapsulated postscript operators. This distiller is preferred, generating high-level postscript output that treats text as text. | |
doc_2429 |
Add a callback function that will be called whenever one of the Artist's properties changes. Parameters
funccallable
The callback function. It must have the signature: def func(artist: Artist) -> Any
where artist is the calling Artist. Return values may exist but are ignored. Returns
int
The observer id associated with the callback. This id can be used for removing the callback with remove_callback later. See also remove_callback | |
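A minimal sketch: register a callback, trigger the notifications manually via pchanged(), then detach it with the returned observer id:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1])

def report(artist):
    print(f"{artist} changed")

cid = line.add_callback(report)  # returns the observer id
line.pchanged()                  # invokes all registered callbacks
line.remove_callback(cid)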
doc_2430 |
Applies a 3D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool3d for details and output shape. Parameters
output_size – the target output size (single integer or triple-integer tuple) | |
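A minimal sketch; the tensor shape is illustrative:
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8, 8)           # (N, C, D, H, W)
y = F.adaptive_avg_pool3d(x, (4, 4, 4))  # target output size per spatial dim
print(y.shape)                           # torch.Size([1, 3, 4, 4, 4])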
doc_2431 | tf.config.experimental.set_visible_devices Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.set_visible_devices, tf.compat.v1.config.set_visible_devices
tf.config.set_visible_devices(
devices, device_type=None
)
Specifies which PhysicalDevice objects are visible to the runtime. TensorFlow will only allocate memory and place operations on visible physical devices, as otherwise no LogicalDevice will be created on them. By default all discovered devices are marked as visible. The following example demonstrates disabling the first GPU on the machine.
physical_devices = tf.config.list_physical_devices('GPU')
try:
# Disable first GPU
tf.config.set_visible_devices(physical_devices[1:], 'GPU')
logical_devices = tf.config.list_logical_devices('GPU')
# Logical device was not created for first GPU
assert len(logical_devices) == len(physical_devices) - 1
except:
# Invalid device or cannot modify virtual devices once initialized.
pass
Args
devices List of PhysicalDevices to make visible
device_type (optional) Only configure devices matching this device type. For example "CPU" or "GPU". Other devices will be left unaltered.
Raises
ValueError If argument validation fails.
RuntimeError Runtime is already initialized. | |
doc_2432 | Split the string s using shell-like syntax. If comments is False (the default), the parsing of comments in the given string will be disabled (setting the commenters attribute of the shlex instance to the empty string). This function operates in POSIX mode by default, but uses non-POSIX mode if the posix argument is false. Note Since the split() function instantiates a shlex instance, passing None for s will read the string to split from standard input. Deprecated since version 3.9: Passing None for s will raise an exception in future Python versions. | |
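A minimal sketch of both the default and comment-parsing behavior:
>>> import shlex
>>> shlex.split("ls -l 'My Documents'")
['ls', '-l', 'My Documents']
>>> shlex.split("echo hi # a comment", comments=True)
['echo', 'hi']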
doc_2433 |
Draw the text instance. Parameters
gcGraphicsContextBase
The graphics context.
xfloat
The x location of the text in display coords.
yfloat
The y location of the text baseline in display coords.
sstr
The text string.
propmatplotlib.font_manager.FontProperties
The font properties.
anglefloat
The rotation angle in degrees anti-clockwise.
mtextmatplotlib.text.Text
The original text object to be rendered. Notes Note for backend implementers: When you are trying to determine if you have gotten your bounding box right (which is what enables the text layout/alignment to work properly), it helps to change the line in text.py: if 0: bbox_artist(self, renderer)
to if 1, and then the actual bounding box will be plotted along with your text. | |
doc_2434 | Class used to generate nicer error messages if sessions are not available. Will still allow read-only access to the empty session but fail on setting. Parameters
initial (Any) – Return type
None | |
doc_2435 | See Migration guide for more details. tf.compat.v1.raw_ops.CropAndResizeGradImage
tf.raw_ops.CropAndResizeGradImage(
grads, boxes, box_ind, image_size, T, method='bilinear', name=None
)
Args
grads A Tensor of type float32. A 4-D tensor of shape [num_boxes, crop_height, crop_width, depth].
boxes A Tensor of type float32. A 2-D tensor of shape [num_boxes, 4]. The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so as the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values.
box_ind A Tensor of type int32. A 1-D tensor of shape [num_boxes] with int32 values in [0, batch). The value of box_ind[i] specifies the image that the i-th box refers to.
image_size A Tensor of type int32. A 1-D tensor with value [batch, image_height, image_width, depth] containing the original image size. Both image_height and image_width need to be positive.
T A tf.DType from: tf.float32, tf.half, tf.float64.
method An optional string from: "bilinear", "nearest". Defaults to "bilinear". A string specifying the interpolation method. Only 'bilinear' is supported for now.
name A name for the operation (optional).
Returns A Tensor of type T. | |
doc_2436 |
Parameters
new_interval (int) – Value to use as the new growth interval. | |
doc_2437 | Change the definition of a color, taking the number of the color to be changed followed by three RGB values (for the amounts of red, green, and blue components). The value of color_number must be between 0 and COLORS - 1. Each of r, g, b, must be a value between 0 and 1000. When init_color() is used, all occurrences of that color on the screen immediately change to the new definition. This function is a no-op on most terminals; it is active only if can_change_color() returns True. | |
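A minimal sketch; curses.wrapper() calls start_color(), which init_color() requires:
import curses

def main(stdscr):
    if curses.can_change_color():
        curses.init_color(curses.COLOR_RED, 1000, 0, 0)  # redefine red as pure red
    stdscr.getkey()

curses.wrapper(main)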
doc_2438 | Called once for each element type declaration. name is the name of the element type, and model is a representation of the content model. | |
doc_2439 | Create an object that operates like a regular reader but maps the information in each row to a dict whose keys are given by the optional fieldnames parameter. The fieldnames parameter is a sequence. If fieldnames is omitted, the values in the first row of file f will be used as the fieldnames. Regardless of how the fieldnames are determined, the dictionary preserves their original ordering. If a row has more fields than fieldnames, the remaining data is put in a list and stored with the fieldname specified by restkey (which defaults to None). If a non-blank row has fewer fields than fieldnames, the missing values are filled-in with the value of restval (which defaults to None). All other optional or keyword arguments are passed to the underlying reader instance. Changed in version 3.6: Returned rows are now of type OrderedDict. Changed in version 3.8: Returned rows are now of type dict. A short usage example: >>> import csv
>>> with open('names.csv', newline='') as csvfile:
... reader = csv.DictReader(csvfile)
... for row in reader:
... print(row['first_name'], row['last_name'])
...
Eric Idle
John Cleese
>>> print(row)
{'first_name': 'John', 'last_name': 'Cleese'} | |
doc_2440 | Run python -m torch.utils.bottleneck /path/to/source/script.py [args], where [args] are any number of arguments to script.py, or run python -m torch.utils.bottleneck -h for more usage instructions. Warning Because your script will be profiled, please ensure that it exits in a finite amount of time. Warning Due to the asynchronous nature of CUDA kernels, when running against CUDA code, the cProfile output and CPU-mode autograd profilers may not show correct timings: the reported CPU time reports the amount of time used to launch the kernels but does not include the time the kernel spent executing on a GPU unless the operation does a synchronize. Ops that do synchronize appear to be extremely expensive under regular CPU-mode profilers. In these cases where timings are incorrect, the CUDA-mode autograd profiler may be helpful. Note To decide which (CPU-only-mode or CUDA-mode) autograd profiler output to look at, you should first check if your script is CPU-bound (“CPU total time is much greater than CUDA total time”). If it is CPU-bound, looking at the results of the CPU-mode autograd profiler will help. If on the other hand your script spends most of its time executing on the GPU, then it makes sense to start looking for responsible CUDA operators in the output of the CUDA-mode autograd profiler. Of course the reality is much more complicated and your script might not be in one of those two extremes depending on the part of the model you’re evaluating. If the profiler outputs don’t help, you could try looking at the result of torch.autograd.profiler.emit_nvtx() with nvprof. However, please take into account that the NVTX overhead is very high and often gives a heavily skewed timeline. Warning If you are profiling CUDA code, the first profiler that bottleneck runs (cProfile) will include the CUDA startup time (CUDA buffer allocation cost) in its time reporting. This should not matter if your bottlenecks result in code much slower than the CUDA startup time. For more complicated uses of the profilers (like in a multi-GPU case), please see https://docs.python.org/3/library/profile.html or torch.autograd.profiler.profile() for more information. | |
doc_2441 | tf.compat.v1.placeholder_with_default(
input, shape, name=None
)
Args
input A Tensor. The default value to produce when output is not fed.
shape A tf.TensorShape or list of ints. The (possibly partial) shape of the tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
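A minimal sketch in TF1-style graph mode; the values are illustrative:
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder_with_default(tf.constant([1, 2, 3]), shape=[3])
with tf.Session() as sess:
    print(sess.run(x))                            # default value: [1 2 3]
    print(sess.run(x, feed_dict={x: [4, 5, 6]}))  # fed value: [4 5 6]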
doc_2442 | See torch.heaviside() | |
doc_2443 |
Plot the cross correlation between x and y. The correlation with lag k is defined as \(\sum_n x[n+k] \cdot y^*[n]\), where \(y^*\) is the complex conjugate of \(y\). Parameters
x, yarray-like of length n
detrendcallable, default: mlab.detrend_none (no detrending)
A detrending function applied to x and y. It must have the signature detrend(x: np.ndarray) -> np.ndarray
normedbool, default: True
If True, input vectors are normalised to unit length.
usevlinesbool, default: True
Determines the plot style. If True, vertical lines are plotted from 0 to the xcorr value using Axes.vlines. Additionally, a horizontal line is plotted at y=0 using Axes.axhline. If False, markers are plotted at the xcorr values using Axes.plot.
maxlagsint, default: 10
Number of lags to show. If None, will return all 2 * len(x) - 1 lags. Returns
lagsarray (length 2*maxlags+1)
The lag vector.
carray (length 2*maxlags+1)
The cross correlation vector.
lineLineCollection or Line2D
Artist added to the Axes of the correlation:
LineCollection if usevlines is True.
Line2D if usevlines is False.
bLine2D or None
Horizontal line at 0 if usevlines is True; None if usevlines is False. Other Parameters
linestyleLine2D property, optional
The linestyle for plotting the data points. Only used if usevlines is False.
markerstr, default: 'o'
The marker for plotting the data points. Only used if usevlines is False.
dataindexable object, optional
If given, the following parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception): x, y **kwargs
Additional parameters are passed to Axes.vlines and Axes.axhline if usevlines is True; otherwise they are passed to Axes.plot. Notes The cross correlation is performed with numpy.correlate with mode = "full". | |
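A minimal sketch of the xcorr call and its four return values; the random data are illustrative:
import matplotlib.pyplot as plt
import numpy as np

x, y = np.random.randn(2, 100)
fig, ax = plt.subplots()
lags, c, line, b = ax.xcorr(x, y, maxlags=10)  # vlines style by default
plt.show()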
doc_2444 |
Add character data to the output stream. Parameters
textstr
Character data. | |
doc_2445 | Write the given bytes-like object, b, to the underlying raw stream, and return the number of bytes written. This can be less than the length of b in bytes, depending on specifics of the underlying raw stream, and especially if it is in non-blocking mode. None is returned if the raw stream is set not to block and no single byte could be readily written to it. The caller may release or mutate b after this method returns, so the implementation should only access b during the method call. | |
doc_2446 | Set or remove the function invoked by the rl_pre_input_hook callback of the underlying library. If function is specified, it will be used as the new hook function; if omitted or None, any function already installed is removed. The hook is called with no arguments after the first prompt has been printed and just before readline starts reading input characters. This function only exists if Python was compiled for a version of the library that supports it. | |
doc_2447 | Return True if the block has nested namespaces within it. These can be obtained with get_children(). | |
doc_2448 | Return the hyperbolic sine of x. | |
doc_2449 |
Save an image to file. Parameters
fnamestr
Target filename.
arrndarray of shape (M,N) or (M,N,3) or (M,N,4)
Image data.
pluginstr, optional
Name of plugin to use. By default, the different plugins are tried (starting with imageio) until a suitable candidate is found. If not given and fname is a tiff file, the tifffile plugin will be used.
check_contrastbool, optional
Check for low contrast and print warning (default: True). Other Parameters
plugin_argskeywords
Passed to the given plugin. Notes When saving a JPEG, the compression ratio may be controlled using the quality keyword argument which is an integer with values in [1, 100] where 1 is worst quality and smallest file size, and 100 is best quality and largest file size (default 75). This is only available when using the PIL and imageio plugins. | |
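A minimal sketch; the filenames and data are illustrative, and the quality keyword is forwarded to the plugin as described above:
import numpy as np
from skimage.io import imsave

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
imsave("noise.png", img)              # plugin chosen automatically
imsave("photo.jpg", img, quality=90)  # JPEG compression via plugin keyword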
doc_2450 | Creates a B-Tree index. Provide an integer value from 10 to 100 to the fillfactor parameter to tune how packed the index pages will be. PostgreSQL’s default is 90. Changed in Django 3.2: Positional argument *expressions was added in order to support functional indexes. | |
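A minimal sketch; the model and field names are hypothetical:
from django.contrib.postgres.indexes import BTreeIndex
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        indexes = [
            BTreeIndex(fields=["name"], name="product_name_btree", fillfactor=80),
        ]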
doc_2451 |
Estimate model parameters with the EM algorithm. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or lower bound is less than tol, otherwise, a ConvergenceWarning is raised. If warm_start is True, then n_init is ignored and a single initialization is performed upon the first call. Upon consecutive calls, training starts where it left off. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
self | |
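The description matches the mixture estimators such as sklearn.mixture.GaussianMixture; a minimal sketch with illustrative data:
>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
>>> gm = GaussianMixture(n_components=2, n_init=3).fit(X)
>>> gm.means_.shape
(2, 2)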
doc_2452 |
Get the affine part of this transform. | |
doc_2453 | Use the real uid/gid to test for access to path. Note that most operations will use the effective uid/gid, therefore this routine can be used in a suid/sgid environment to test if the invoking user has the specified access to path. mode should be F_OK to test the existence of path, or it can be the inclusive OR of one or more of R_OK, W_OK, and X_OK to test permissions. Return True if access is allowed, False if not. See the Unix man page access(2) for more information. This function can support specifying paths relative to directory descriptors and not following symlinks. If effective_ids is True, access() will perform its access checks using the effective uid/gid instead of the real uid/gid. effective_ids may not be supported on your platform; you can check whether or not it is available using os.supports_effective_ids. If it is unavailable, using it will raise a NotImplementedError. Note Using access() to check if a user is authorized to e.g. open a file before actually doing so using open() creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. It’s preferable to use EAFP techniques. For example: if os.access("myfile", os.R_OK):
with open("myfile") as fp:
return fp.read()
return "some default data"
is better written as: try:
fp = open("myfile")
except PermissionError:
return "some default data"
else:
with fp:
return fp.read()
Note I/O operations may fail even when access() indicates that they would succeed, particularly for operations on network filesystems which may have permissions semantics beyond the usual POSIX permission-bit model. Changed in version 3.3: Added the dir_fd, effective_ids, and follow_symlinks parameters. Changed in version 3.6: Accepts a path-like object. | |
doc_2454 |
Return boolean flag, True if artist is included in layout calculations. E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). | |
doc_2455 | Rolling.count() Calculate the rolling count of non NaN observations.
Rolling.sum(*args[, engine, engine_kwargs]) Calculate the rolling sum.
Rolling.mean(*args[, engine, engine_kwargs]) Calculate the rolling mean.
Rolling.median([engine, engine_kwargs]) Calculate the rolling median.
Rolling.var([ddof, engine, engine_kwargs]) Calculate the rolling variance.
Rolling.std([ddof, engine, engine_kwargs]) Calculate the rolling standard deviation.
Rolling.min(*args[, engine, engine_kwargs]) Calculate the rolling minimum.
Rolling.max(*args[, engine, engine_kwargs]) Calculate the rolling maximum.
Rolling.corr([other, pairwise, ddof]) Calculate the rolling correlation.
Rolling.cov([other, pairwise, ddof]) Calculate the rolling sample covariance.
Rolling.skew(**kwargs) Calculate the rolling unbiased skewness.
Rolling.kurt(**kwargs) Calculate the rolling Fisher's definition of kurtosis without bias.
Rolling.apply(func[, raw, engine, ...]) Calculate the rolling custom aggregation function.
Rolling.aggregate(func, *args, **kwargs) Aggregate using one or more operations over the specified axis.
Rolling.quantile(quantile[, interpolation]) Calculate the rolling quantile.
Rolling.sem([ddof]) Calculate the rolling standard error of mean.
Rolling.rank([method, ascending, pct]) Calculate the rolling rank. Weighted window functions
Window.mean(*args, **kwargs) Calculate the rolling weighted window mean.
Window.sum(*args, **kwargs) Calculate the rolling weighted window sum.
Window.var([ddof]) Calculate the rolling weighted window variance.
Window.std([ddof]) Calculate the rolling weighted window standard deviation. Expanding window functions
Expanding.count() Calculate the expanding count of non NaN observations.
Expanding.sum(*args[, engine, engine_kwargs]) Calculate the expanding sum.
Expanding.mean(*args[, engine, engine_kwargs]) Calculate the expanding mean.
Expanding.median([engine, engine_kwargs]) Calculate the expanding median.
Expanding.var([ddof, engine, engine_kwargs]) Calculate the expanding variance.
Expanding.std([ddof, engine, engine_kwargs]) Calculate the expanding standard deviation.
Expanding.min(*args[, engine, engine_kwargs]) Calculate the expanding minimum.
Expanding.max(*args[, engine, engine_kwargs]) Calculate the expanding maximum.
Expanding.corr([other, pairwise, ddof]) Calculate the expanding correlation.
Expanding.cov([other, pairwise, ddof]) Calculate the expanding sample covariance.
Expanding.skew(**kwargs) Calculate the expanding unbiased skewness.
Expanding.kurt(**kwargs) Calculate the expanding Fisher's definition of kurtosis without bias.
Expanding.apply(func[, raw, engine, ...]) Calculate the expanding custom aggregation function.
Expanding.aggregate(func, *args, **kwargs) Aggregate using one or more operations over the specified axis.
Expanding.quantile(quantile[, interpolation]) Calculate the expanding quantile.
Expanding.sem([ddof]) Calculate the expanding standard error of mean.
Expanding.rank([method, ascending, pct]) Calculate the expanding rank. Exponentially-weighted window functions
ExponentialMovingWindow.mean(*args[, ...]) Calculate the ewm (exponential weighted moment) mean.
ExponentialMovingWindow.sum(*args[, engine, ...]) Calculate the ewm (exponential weighted moment) sum.
ExponentialMovingWindow.std([bias]) Calculate the ewm (exponential weighted moment) standard deviation.
ExponentialMovingWindow.var([bias]) Calculate the ewm (exponential weighted moment) variance.
ExponentialMovingWindow.corr([other, pairwise]) Calculate the ewm (exponential weighted moment) sample correlation.
ExponentialMovingWindow.cov([other, ...]) Calculate the ewm (exponential weighted moment) sample covariance. Window indexer Base class for defining custom window boundaries.
api.indexers.BaseIndexer([index_array, ...]) Base class for window bounds calculations.
api.indexers.FixedForwardWindowIndexer([...]) Creates window boundaries for fixed-length windows that include the current row.
api.indexers.VariableOffsetWindowIndexer([...]) Calculate window boundaries based on a non-fixed offset such as a BusinessDay. | |
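A minimal sketch touching each of the three window families listed above; the data are illustrative:
>>> import pandas as pd
>>> s = pd.Series([1.0, 2.0, 3.0, 4.0])
>>> s.rolling(2).sum().tolist()
[nan, 3.0, 5.0, 7.0]
>>> s.expanding().sum().tolist()
[1.0, 3.0, 6.0, 10.0]
>>> s.ewm(span=2).mean().round(2).tolist()
[1.0, 1.75, 2.62, 3.55]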
doc_2456 | See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.TFDecorator
tf.compat.v1.flags.tf_decorator.TFDecorator(
decorator_name, target, decorator_doc='', decorator_argspec=None
)
TFDecorator captures and exposes the wrapped target, and provides details about the current decorator.
Attributes
decorated_target
decorator_argspec
decorator_doc
decorator_name
Methods __call__ View source
__call__(
*args, **kwargs
)
Call self as a function. | |
doc_2457 |
Calculate the rolling variance. Parameters
ddof:int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. *args
For NumPy compatibility and will not have an effect on the result.
engine:str, default None
'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.4.0.
engine_kwargs:dict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.4.0. **kwargs
For NumPy compatibility and will not have an effect on the result. Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype. See also numpy.var
Equivalent method for NumPy array. pandas.Series.rolling
Calling rolling with Series data. pandas.DataFrame.rolling
Calling rolling with DataFrames. pandas.Series.var
Aggregating var for Series. pandas.DataFrame.var
Aggregating var for DataFrame. Notes The default ddof of 1 used in Series.var() is different than the default ddof of 0 in numpy.var(). A minimum of one period is required for the rolling calculation. The implementation is susceptible to floating point imprecision as shown in the example below. Examples
>>> s = pd.Series([5, 5, 6, 7, 5, 5, 5])
>>> s.rolling(3).var()
0 NaN
1 NaN
2 3.333333e-01
3 1.000000e+00
4 1.000000e+00
5 1.333333e+00
6 6.661338e-16
dtype: float64 | |
doc_2458 | Create a mapping-like object wrapping headers, which must be a list of header name/value tuples as described in PEP 3333. The default value of headers is an empty list. Headers objects support typical mapping operations including __getitem__(), get(), __setitem__(), setdefault(), __delitem__() and __contains__(). For each of these methods, the key is the header name (treated case-insensitively), and the value is the first value associated with that header name. Setting a header deletes any existing values for that header, then adds a new value at the end of the wrapped header list. Headers’ existing order is generally maintained, with new headers added to the end of the wrapped list. Unlike a dictionary, Headers objects do not raise an error when you try to get or delete a key that isn’t in the wrapped header list. Getting a nonexistent header just returns None, and deleting a nonexistent header does nothing. Headers objects also support keys(), values(), and items() methods. The lists returned by keys() and items() can include the same key more than once if there is a multi-valued header. The len() of a Headers object is the same as the length of its items(), which is the same as the length of the wrapped header list. In fact, the items() method just returns a copy of the wrapped header list. Calling bytes() on a Headers object returns a formatted bytestring suitable for transmission as HTTP response headers. Each header is placed on a line with its value, separated by a colon and a space. Each line is terminated by a carriage return and line feed, and the bytestring is terminated with a blank line. In addition to their mapping interface and formatting features, Headers objects also have the following methods for querying and adding multi-valued headers, and for adding headers with MIME parameters:
get_all(name)
Return a list of all the values for the named header. The returned list will be sorted in the order they appeared in the original header list or were added to this instance, and may contain duplicates. Any fields deleted and re-inserted are always appended to the header list. If no fields exist with the given name, returns an empty list.
add_header(name, value, **_params)
Add a (possibly multi-valued) header, with optional MIME parameters specified via keyword arguments. name is the header field to add. Keyword arguments can be used to set MIME parameters for the header field. Each parameter must be a string or None. Underscores in parameter names are converted to dashes, since dashes are illegal in Python identifiers, but many MIME parameter names include dashes. If the parameter value is a string, it is added to the header value parameters in the form name="value". If it is None, only the parameter name is added. (This is used for MIME parameters without a value.) Example usage: h.add_header('content-disposition', 'attachment', filename='bud.gif')
The above will add a header that looks like this: Content-Disposition: attachment; filename="bud.gif"
Changed in version 3.5: headers parameter is optional. | |
doc_2459 | See Migration guide for more details. tf.compat.v1.keras.applications.densenet.preprocess_input
tf.keras.applications.densenet.preprocess_input(
x, data_format=None
)
Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)
x = tf.cast(i, tf.float32)
x = tf.keras.applications.mobilenet.preprocess_input(x)
core = tf.keras.applications.MobileNet()
x = core(x)
model = tf.keras.Model(inputs=[i], outputs=[x])
image = tf.image.decode_png(tf.io.read_file('file.png'))
result = model(image)
Arguments
x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used.
data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last").
Returns Preprocessed numpy.array or a tf.Tensor with type float32. The input pixel values are scaled between 0 and 1 and each channel is normalized with respect to the ImageNet dataset.
Raises
ValueError In case of unknown data_format argument. | |
doc_2460 | A class method which returns a closure for use on sys.path_hooks. An instance of FileFinder is returned by the closure using the path argument given to the closure directly and loader_details indirectly. If the argument to the closure is not an existing directory, ImportError is raised. | |
doc_2461 | This method does nothing. | |
doc_2462 |
Scatters a list of tensors to all processes in a group. Each process will receive exactly one tensor and store its data in the tensor argument. Parameters
tensor (Tensor) – Output tensor.
scatter_list (list[Tensor]) – List of tensors to scatter (default is None, must be specified on the source rank)
src (int) – Source rank (default is 0)
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group | |
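A hedged sketch of the scatter entry above; it assumes a process group has already been initialized (e.g. via init_process_group) and that the same script runs on every rank:
import torch
import torch.distributed as dist

# assumes dist.init_process_group(...) has already run on every rank
out = torch.zeros(2)
if dist.get_rank() == 0:
    chunks = [torch.full((2,), float(r)) for r in range(dist.get_world_size())]
    dist.scatter(out, scatter_list=chunks, src=0)
else:
    dist.scatter(out, src=0)  # scatter_list is only needed on the source rank
print(out)                    # rank r receives the tensor filled with the value r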
doc_2463 | tkinter.font.BOLD
tkinter.font.ITALIC
tkinter.font.ROMAN | |
doc_2464 |
Set whether the artist uses clipping. When False artists will be visible outside of the axes which can lead to unexpected results. Parameters
bbool | |
doc_2465 |
Set the color for high out-of-range values. | |
doc_2466 | Set the name used in the generated HTML documentation. This name will appear at the top of the generated documentation inside an “h1” element. | |
doc_2467 | tf.losses.MeanSquaredError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MeanSquaredError
tf.keras.losses.MeanSquaredError(
reduction=losses_utils.ReductionV2.AUTO, name='mean_squared_error'
)
loss = square(y_true - y_pred) Standalone usage:
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
mse = tf.keras.losses.MeanSquaredError()
mse(y_true, y_pred).numpy()
0.5
# Calling with 'sample_weight'.
mse(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()
0.25
# Using 'sum' reduction type.
mse = tf.keras.losses.MeanSquaredError(
reduction=tf.keras.losses.Reduction.SUM)
mse(y_true, y_pred).numpy()
1.0
# Using 'none' reduction type.
mse = tf.keras.losses.MeanSquaredError(
reduction=tf.keras.losses.Reduction.NONE)
mse(y_true, y_pred).numpy()
array([0.5, 0.5], dtype=float32)
Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())
Args
reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details.
name Optional name for the op. Defaults to 'mean_squared_error'. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates a Loss from its config (output of get_config()).
Args
config Output of get_config().
Returns A Loss instance.
get_config View source
get_config()
Returns the config dictionary for a Loss instance. __call__ View source
__call__(
y_true, y_pred, sample_weight=None
)
Invokes the Loss instance.
Args
y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]
y_pred The predicted values. shape = [batch_size, d0, .. dN]
sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.)
Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.)
Raises
ValueError If the shape of sample_weight is invalid. | |
doc_2468 |
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters
deepbool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well. | |
doc_2469 |
Bases: mpl_toolkits.axes_grid1.axes_grid.Grid Parameters
figFigure
The parent figure.
rect(float, float, float, float) or int
The axes position, as a (left, bottom, width, height) tuple or as a three-digit subplot position code (e.g., "121").
nrows_ncols(int, int)
Number of rows and columns in the grid.
ngridsint or None, default: None
If not None, only the first ngrids axes in the grid are created.
direction{"row", "column"}, default: "row"
Whether axes are created in row-major ("row by row") or column-major order ("column by column"). This also affects the order in which axes are accessed using indexing (grid[index]).
axes_padfloat or (float, float), default: 0.02
Padding or (horizontal padding, vertical padding) between axes, in inches.
share_allbool, default: False
Whether all axes share their x- and y-axis. Overrides share_x and share_y.
share_xbool, default: True
Whether all axes of a column share their x-axis.
share_ybool, default: True
Whether all axes of a row share their y-axis.
label_mode{"L", "1", "all"}, default: "L"
Determines which axes will get tick labels: "L": All axes on the left column get vertical tick labels; all axes on the bottom row get horizontal tick labels. "1": Only the bottom left axes is labelled. "all": all axes are labelled.
axes_classsubclass of matplotlib.axes.Axes, default: None
aspectbool, default: False
Whether the axes aspect ratio follows the aspect ratio of the data limits. | |
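A minimal sketch for the Grid parameters above (the entry appears to describe mpl_toolkits.axes_grid1.Grid or a subclass); the data are illustrative:
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import Grid

fig = plt.figure()
grid = Grid(fig, 111, nrows_ncols=(1, 3), axes_pad=0.05, label_mode="L")
for ax, title in zip(grid, ["a", "b", "c"]):
    ax.plot([0, 1])
    ax.set_title(title)
plt.show()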
doc_2470 | tf.experimental.numpy.string_(
*args, **kwargs
)
bytes(string, encoding[, errors]) -> bytes
bytes(bytes_or_buffer) -> immutable copy of bytes_or_buffer
bytes(int) -> bytes object of size given by the parameter initialized with null bytes
bytes() -> empty bytes object
Construct an immutable array of bytes from: an iterable yielding integers in range(256); a text string encoded using the specified encoding; any object implementing the buffer API; an integer. Methods
all(), any(), argmax(), argmin(), argsort(), astype(), byteswap(), choose(), clip(), compress(), conj(), conjugate(), copy(), cumprod(), cumsum(), diagonal(), dump(), dumps()
Not implemented (virtual attributes). Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest.
capitalize()
B.capitalize() -> copy of B Return a copy of B with only its first character capitalized (ASCII) and the rest lower-cased.
center()
B.center(width[, fillchar]) -> copy of B Return B centered in a string of length width. Padding is done using the specified fill character (default is a space).
count()
B.count(sub[, start[, end]]) -> int Return the number of non-overlapping occurrences of subsection sub in bytes B[start:end]. Optional arguments start and end are interpreted as in slice notation.
decode(encoding='utf-8', errors='strict')
Decode the bytes using the codec registered for encoding. encoding The encoding with which to decode the bytes. errors The error handling scheme to use for the handling of decoding errors. The default is 'strict' meaning that decoding errors raise a UnicodeDecodeError. Other possible values are 'ignore' and 'replace' as well as any other name registered with codecs.register_error that can handle UnicodeDecodeErrors.
endswith()
B.endswith(suffix[, start[, end]]) -> bool Return True if B ends with the specified suffix, False otherwise. With optional start, test B beginning at that position. With optional end, stop comparing B at that position. suffix can also be a tuple of bytes to try. expandtabs
expandtabs()
B.expandtabs(tabsize=8) -> copy of B Return a copy of B where all tab characters are expanded using spaces. If tabsize is not given, a tab size of 8 characters is assumed. fill
fill()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. find
find()
B.find(sub[, start[, end]]) -> int Return the lowest index in B where subsection sub is found, such that sub is contained within B[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure. flatten
flatten()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. fromhex
fromhex(
string, /
)
Create a bytes object from a string of hexadecimal numbers. Spaces between two numbers are accepted. Example: bytes.fromhex('B9 01EF') -> b'\xb9\x01\xef'. getfield
getfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. hex
hex()
B.hex() -> string Create a string of hexadecimal numbers from a bytes object. Example: b'\xb9\x01\xef'.hex() -> 'b901ef'. index
index()
B.index(sub[, start[, end]]) -> int Return the lowest index in B where subsection sub is found, such that sub is contained within B[start:end]. Optional arguments start and end are interpreted as in slice notation. Raises ValueError when the subsection is not found. isalnum
isalnum()
B.isalnum() -> bool Return True if all characters in B are alphanumeric and there is at least one character in B, False otherwise. isalpha
isalpha()
B.isalpha() -> bool Return True if all characters in B are alphabetic and there is at least one character in B, False otherwise. isascii
isascii()
B.isascii() -> bool Return True if B is empty or all characters in B are ASCII, False otherwise. isdigit
isdigit()
B.isdigit() -> bool Return True if all characters in B are digits and there is at least one character in B, False otherwise. islower
islower()
B.islower() -> bool Return True if all cased characters in B are lowercase and there is at least one cased character in B, False otherwise. isspace
isspace()
B.isspace() -> bool Return True if all characters in B are whitespace and there is at least one character in B, False otherwise. istitle
istitle()
B.istitle() -> bool Return True if B is a titlecased string and there is at least one character in B, i.e. uppercase characters may only follow uncased characters and lowercase characters only cased ones. Return False otherwise. isupper
isupper()
B.isupper() -> bool Return True if all cased characters in B are uppercase and there is at least one cased character in B, False otherwise. item
item()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset
itemset()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. join
join(
iterable_of_bytes, /
)
Concatenate any number of bytes objects. The bytes whose method is called is inserted in between each pair. The result is returned as a new bytes object. Example: b'.'.join([b'ab', b'pq', b'rs']) -> b'ab.pq.rs'. ljust
ljust()
B.ljust(width[, fillchar]) -> copy of B Return B left justified in a string of length width. Padding is done using the specified fill character (default is a space). lower
lower()
B.lower() -> copy of B Return a copy of B with all ASCII characters converted to lowercase. lstrip
lstrip(
bytes, /
)
Strip leading bytes contained in the argument. If the argument is omitted or None, strip leading ASCII whitespace. maketrans
maketrans(
frm, to, /
)
Return a translation table useable for the bytes or bytearray translate method. The returned table will be one where each byte in frm is mapped to the byte at the same position in to. The bytes objects frm and to must be of the same length. max
max()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean
mean()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min
min()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder
newbyteorder()
newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any from the following: 'S' - swap dtype from current to opposite endian; '<', 'L' - little endian; '>', 'B' - big endian; '=', 'N' - native order; '|', 'I' - ignore (no change to byte order). Parameters new_order : str, optional Byte order to force; a value from the byte order specifications above. The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of 'B' or 'b' or 'biggish' are valid to specify big-endian. Returns new_dtype : dtype New dtype object with the given change to the byte order. nonzero
nonzero()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. partition
partition(
sep, /
)
Partition the bytes into three parts using the given separator. This will search for the separator sep in the bytes. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it. If the separator is not found, returns a 3-tuple containing the original bytes object and two empty bytes objects. prod
prod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp
ptp()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put
put()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel
ravel()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat
repeat()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. replace
replace(
old, new, count, /
)
Return a copy with all occurrences of substring old replaced by new. count Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences. If the optional argument count is given, only the first count occurrences are replaced. reshape
reshape()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize
resize()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. rfind
rfind()
B.rfind(sub[, start[, end]]) -> int Return the highest index in B where subsection sub is found, such that sub is contained within B[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure. rindex
rindex()
B.rindex(sub[, start[, end]]) -> int Return the highest index in B where subsection sub is found, such that sub is contained within B[start:end]. Optional arguments start and end are interpreted as in slice notation. Raise ValueError when the subsection is not found. rjust
rjust()
B.rjust(width[, fillchar]) -> copy of B Return B right justified in a string of length width. Padding is done using the specified fill character (default is a space). round
round()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. rpartition
rpartition(
sep, /
)
Partition the bytes into three parts using the given separator. This will search for the separator sep in the bytes, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it. If the separator is not found, returns a 3-tuple containing two empty bytes objects and the original bytes object. rsplit
rsplit(
sep=None, maxsplit=-1
)
Return a list of the sections in the bytes, using sep as the delimiter. sep The delimiter according to which to split the bytes. None (the default value) means split on ASCII whitespace characters (space, tab, return, newline, formfeed, vertical tab). maxsplit Maximum number of splits to do. -1 (the default value) means no limit. Splitting is done starting at the end of the bytes and working to the front. rstrip
rstrip(
bytes, /
)
Strip trailing bytes contained in the argument. If the argument is omitted or None, strip trailing ASCII whitespace. searchsorted
searchsorted()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield
setfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags
setflags()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sort
sort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. split
split(
sep=None, maxsplit=-1
)
Return a list of the sections in the bytes, using sep as the delimiter. sep The delimiter according to which to split the bytes. None (the default value) means split on ASCII whitespace characters (space, tab, return, newline, formfeed, vertical tab). maxsplit Maximum number of splits to do. -1 (the default value) means no limit. splitlines
splitlines(
keepends=False
)
Return a list of the lines in the bytes, breaking at line boundaries. Line breaks are not included in the resulting list unless keepends is given and true. squeeze
squeeze()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. startswith
startswith()
B.startswith(prefix[, start[, end]]) -> bool Return True if B starts with the specified prefix, False otherwise. With optional start, test B beginning at that position. With optional end, stop comparing B at that position. prefix can also be a tuple of bytes to try. std
std()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. strip
strip(
bytes, /
)
Strip leading and trailing bytes contained in the argument. If the argument is omitted or None, strip leading and trailing ASCII whitespace. sum
sum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes
swapaxes()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapcase
swapcase()
B.swapcase() -> copy of B Return a copy of B with uppercase ASCII characters converted to lowercase ASCII and vice versa. take
take()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. title
title()
B.title() -> copy of B Return a titlecased version of B, i.e. ASCII words start with uppercase characters, all remaining cased characters have lowercase. tobytes
tobytes()
tofile
tofile()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tolist
tolist()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring
tostring()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace
trace()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. translate
translate(
table, /, delete=b''
)
Return a copy with each character mapped by the given translation table. table Translation table, which must be a bytes object of length 256. All characters occurring in the optional argument delete are removed. The remaining characters are mapped through the given translation table. transpose
transpose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. upper
upper()
B.upper() -> copy of B Return a copy of B with all ASCII characters converted to uppercase. var
var()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view
view()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. zfill
zfill()
B.zfill(width) -> copy of B Pad a numeric string B with zeros on the left, to fill a field of the specified width. B is never truncated. __abs__
__abs__()
abs(self) __add__
__add__(
value, /
)
Return self+value. __and__
__and__(
value, /
)
Return self&value. __bool__
__bool__()
self != 0 __contains__
__contains__(
key, /
)
Return key in self. __eq__
__eq__(
value, /
)
Return self==value. __floordiv__
__floordiv__(
value, /
)
Return self//value. __ge__
__ge__(
value, /
)
Return self>=value. __getitem__
__getitem__(
key, /
)
Return self[key]. __gt__
__gt__(
value, /
)
Return self>value. __invert__
__invert__()
~self __iter__
__iter__()
Implement iter(self). __le__
__le__(
value, /
)
Return self<=value. __len__
__len__()
Return len(self). __lt__
__lt__(
value, /
)
Return self<value. __mod__
__mod__(
value, /
)
Return self%value. __mul__
__mul__(
value, /
)
Return self*value. __ne__
__ne__(
value, /
)
Return self!=value. __neg__
__neg__()
-self __or__
__or__(
value, /
)
Return self|value. __pos__
__pos__()
+self __pow__
__pow__(
value, mod, /
)
Return pow(self, value, mod). __radd__
__radd__(
value, /
)
Return value+self. __rand__
__rand__(
value, /
)
Return value&self. __rfloordiv__
__rfloordiv__(
value, /
)
Return value//self. __rmod__
__rmod__(
value, /
)
Return value%self. __rmul__
__rmul__(
value, /
)
Return value*self. __ror__
__ror__(
value, /
)
Return value|self. __rpow__
__rpow__(
value, mod, /
)
Return pow(value, self, mod). __rsub__
__rsub__(
value, /
)
Return value-self. __rtruediv__
__rtruediv__(
value, /
)
Return value/self. __rxor__
__rxor__(
value, /
)
Return value^self. __sub__
__sub__(
value, /
)
Return self-value. __truediv__
__truediv__(
value, /
)
Return self/value. __xor__
__xor__(
value, /
)
Return self^value.
Class Variables
T
base
data
dtype
flags
flat
imag
itemsize
nbytes
ndim
real
shape
size
strides | |
doc_2471 |
Alias for set_facecolor. | |
doc_2472 | tf.nn.dropout(
x, rate, noise_shape=None, seed=None, name=None
)
Note: The behavior of dropout has changed between TensorFlow 1.x and 2.x. When converting 1.x code, please use named arguments to ensure behavior stays consistent.
See also: tf.keras.layers.Dropout for a dropout layer. Dropout is useful for regularizing DNN models. Input elements are randomly set to zero (and the other elements are rescaled). This encourages each node to be independently useful, as it cannot rely on the output of other nodes. More precisely: With probability rate elements of x are set to 0. The remaining elements are scaled up by 1.0 / (1 - rate), so that the expected value is preserved.
tf.random.set_seed(0)
x = tf.ones([3,5])
tf.nn.dropout(x, rate = 0.5, seed = 1).numpy()
array([[2., 0., 0., 2., 2.],
[2., 2., 2., 2., 2.],
[2., 0., 2., 0., 2.]], dtype=float32)
tf.random.set_seed(0)
x = tf.ones([3,5])
tf.nn.dropout(x, rate = 0.8, seed = 1).numpy()
array([[0., 0., 0., 5., 5.],
[0., 5., 0., 5., 0.],
[5., 0., 5., 0., 5.]], dtype=float32)
tf.nn.dropout(x, rate = 0.0) == x
<tf.Tensor: shape=(3, 5), dtype=bool, numpy=
array([[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True]])>
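Because surviving elements are scaled by 1.0 / (1 - rate), the mean of a large input is approximately preserved. A quick sanity check (a sketch, assuming eager TensorFlow 2.x; the exact mean varies with the random mask):
x = tf.ones([10000])
y = tf.nn.dropout(x, rate=0.5)
# Kept elements equal 2.0 (= 1.0 / (1 - 0.5)) and roughly half are zero,
# so the mean stays close to the original value of 1.0.
print(float(tf.reduce_mean(y)))  # ~1.0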
By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. This is useful for dropping whole channels from an image or sequence. For example:
tf.random.set_seed(0)
x = tf.ones([3,10])
tf.nn.dropout(x, rate = 2/3, noise_shape=[1,10], seed=1).numpy()
array([[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.],
[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.],
[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.]], dtype=float32)
Args
x A floating point tensor.
rate A scalar Tensor with the same type as x. The probability that each element is dropped. For example, setting rate=0.1 would drop 10% of input elements.
noise_shape A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags.
seed A Python integer. Used to create random seeds. See tf.random.set_seed for behavior.
name A name for this operation (optional).
Returns A Tensor of the same shape as x.
Raises
ValueError If rate is not in [0, 1) or if x is not a floating point tensor. rate=1 is disallowed, because the output would be all zeros, which is likely not what was intended. | |
doc_2473 | Generate a CAB file, add it as a stream to the MSI file, put it into the Media table, and remove the generated file from the disk. | |
doc_2474 | tf.distribute.InputOptions(
experimental_prefetch_to_device=True,
experimental_replication_mode=tf.distribute.InputReplicationMode.PER_WORKER,
experimental_place_dataset_on_device=False
)
This can be used to hold some strategy-specific configs. # Setup TPUStrategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
dataset = tf.data.Dataset.range(16)
distributed_dataset_on_host = (
strategy.experimental_distribute_dataset(
dataset,
tf.distribute.InputOptions(
experimental_replication_mode=
tf.distribute.InputReplicationMode.PER_WORKER,
experimental_place_dataset_on_device=False)))
Attributes
experimental_prefetch_to_device Boolean. Defaults to True. If True, dataset elements will be prefetched to accelerator device memory. When False, dataset elements are prefetched to host device memory. Must be False when using TPUEmbedding API. experimental_prefetch_to_device can only be used with experimental_replication_mode=PER_WORKER
experimental_replication_mode Replication mode for the input function. Currently, the InputReplicationMode.PER_REPLICA is only supported with tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function. The default value is InputReplicationMode.PER_WORKER.
experimental_place_dataset_on_device Boolean. Defaults to False. When True, the dataset will be placed on the device, otherwise it will remain on the host. experimental_place_dataset_on_device=True can only be used with experimental_replication_mode=PER_REPLICA
doc_2475 |
The number of input dimensions of this transform. Must be overridden (with integers) in the subclass. | |
doc_2476 | See Migration guide for more details. tf.compat.v1.raw_ops.GatherV2
tf.raw_ops.GatherV2(
params, indices, axis, batch_dims=0, name=None
)
indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape params.shape[:axis] + indices.shape[batch_dims:] + params.shape[axis + 1:] where: # Scalar indices (output is rank(params) - 1).
output[a_0, ..., a_n, b_0, ..., b_n] =
params[a_0, ..., a_n, indices, b_0, ..., b_n]
# Vector indices (output is rank(params)).
output[a_0, ..., a_n, i, b_0, ..., b_n] =
params[a_0, ..., a_n, indices[i], b_0, ..., b_n]
# Higher rank indices (output is rank(params) + rank(indices) - 1).
output[a_0, ..., a_n, i, ..., j, b_0, ... b_n] =
params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value. See also tf.batch_gather and tf.gather_nd.
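For illustration, a minimal sketch (tf.gather is the public wrapper around this raw op; the raw op itself takes keyword arguments):
params = tf.constant([[1, 2], [3, 4], [5, 6]])
indices = tf.constant([2, 0])
tf.raw_ops.GatherV2(params=params, indices=indices, axis=0)
# <tf.Tensor: shape=(2, 2), dtype=int32, numpy=array([[5, 6], [1, 2]], dtype=int32)>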
Args
params A Tensor. The tensor from which to gather values. Must be at least rank axis + 1.
indices A Tensor. Must be one of the following types: int32, int64. Index tensor. Must be in range [0, params.shape[axis]).
axis A Tensor. Must be one of the following types: int32, int64. The axis in params to gather indices from. Defaults to the first dimension. Supports negative indexes.
batch_dims An optional int. Defaults to 0.
name A name for the operation (optional).
Returns A Tensor. Has the same type as params. | |
doc_2477 |
Get parameters for this estimator. Parameters
deep:bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params:dict
Parameter names mapped to their values. | |
doc_2478 |
Return the picking behavior of the artist. The possible values are described in set_picker. See also
set_picker, pickable, pick | |
doc_2479 | See Migration guide for more details. tf.compat.v1.nn.uniform_candidate_sampler, tf.compat.v1.random.uniform_candidate_sampler
tf.random.uniform_candidate_sampler(
true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None
)
This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max). The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution. The base distribution for this operation is the uniform distribution over the range of integers [0, range_max). In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.
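A minimal usage sketch (assuming TensorFlow 2.x; the class counts here are illustrative):
true_classes = tf.constant([[1, 5]], dtype=tf.int64)
sampled, true_expected, sampled_expected = tf.random.uniform_candidate_sampler(
    true_classes=true_classes, num_true=2, num_sampled=4,
    unique=True, range_max=10, seed=7)
# sampled has shape [4]; true_expected and sampled_expected hold the
# expected counts for the true classes and the sampled classes.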
Args
true_classes A Tensor of type int64 and shape [batch_size, num_true]. The target classes.
num_true An int. The number of target classes per training example.
num_sampled An int. The number of classes to randomly sample. The sampled_candidates return value will have shape [num_sampled]. If unique=True, num_sampled must be less than or equal to range_max.
unique A bool. Determines whether all sampled classes in a batch are unique.
range_max An int. The number of possible classes.
seed An int. An operation-specific seed. Default is 0.
name A name for the operation (optional).
Returns
sampled_candidates A tensor of type int64 and shape [num_sampled]. The sampled classes, either with possible duplicates (unique=False) or all unique (unique=True). In either case, sampled_candidates is independent of the true classes.
true_expected_count A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes.
sampled_expected_count A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates. | |
doc_2480 | Whether debug mode is enabled. When using flask run to start the development server, an interactive debugger will be shown for unhandled exceptions, and the server will be reloaded when code changes. The debug attribute maps to this config key. This is enabled when ENV is 'development' and is overridden by the FLASK_DEBUG environment variable. It may not behave as expected if set in code. Do not enable debug mode when deploying in production. Default: True if ENV is 'development', or False otherwise. | |
doc_2481 | Functionally equivalent to a ScriptModule, but represents a single function and does not have any attributes or Parameters.
get_debug_state(self: torch._C.ScriptFunction) → torch._C.GraphExecutorState
save(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] = {}) → None
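For example, a minimal sketch (scripting a free function with torch.jit.script yields a ScriptFunction, which save then serializes; the file name is illustrative):
import torch

@torch.jit.script
def add_one(x: torch.Tensor) -> torch.Tensor:
    return x + 1

add_one.save("add_one.pt")  # writes the compiled function to disk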
save_to_buffer(self: torch._C.ScriptFunction, _extra_files: Dict[str, str] = {}) → bytes | |
doc_2482 | Resize the standard and current windows to the specified dimensions, and adjust other bookkeeping data used by the curses library that record the window dimensions (in particular the SIGWINCH handler). | |
doc_2483 |
Generate indices to split data into training and test set. Parameters
X:object
Always ignored, exists for compatibility.
y:object
Always ignored, exists for compatibility.
groups:object
Always ignored, exists for compatibility. Yields
train:ndarray
The training set indices for that split.
test:ndarray
The testing set indices for that split. | |
doc_2484 |
Translates slice objects to concatenation along the first axis. This is a simple way to build up arrays quickly. There are two use cases. If the index expression contains comma-separated arrays, then stack them along their first axis. If the index expression contains slice notation or scalars, then create a 1-D array with a range indicated by the slice notation.
If slice notation is used, the syntax start:stop:step is equivalent to np.arange(start, stop, step) inside of the brackets. However, if step is an imaginary number (i.e. 100j) then its integer portion is interpreted as a number-of-points desired and the start and stop are inclusive. In other words, start:stop:stepj is interpreted as np.linspace(start, stop, step, endpoint=1) inside of the brackets. After expansion of slice notation, all comma-separated sequences are concatenated together.
Optional character strings placed as the first element of the index expression can be used to change the output. The strings ‘r’ or ‘c’ result in matrix output. If the result is 1-D and ‘r’ is specified, a 1 x N (row) matrix is produced. If the result is 1-D and ‘c’ is specified, then a N x 1 (column) matrix is produced. If the result is 2-D then both provide the same matrix result.
A string integer specifies which axis to stack multiple comma-separated arrays along. A string of two comma-separated integers allows indication of the minimum number of dimensions to force each entry into as the second integer (the axis to concatenate along is still the first integer). A string with three comma-separated integers allows specification of the axis to concatenate along, the minimum number of dimensions to force the entries to, and which axis should contain the start of the arrays which are less than the specified number of dimensions. In other words, the third integer allows you to specify where the 1’s should be placed in the shape of the arrays that have their shapes upgraded. By default, they are placed in the front of the shape tuple. The third argument allows you to specify where the start of the array should be instead. Thus, a third argument of ‘0’ would place the 1’s at the end of the array shape. Negative integers specify where in the new shape tuple the last dimension of upgraded arrays should be placed, so the default is ‘-1’. Parameters
Not a function, so takes no parameters
Returns
A concatenated ndarray or matrix.
See also concatenate
Join a sequence of arrays along an existing axis. c_
Translates slice objects to concatenation along the second axis. Examples >>> np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])]
array([1, 2, 3, 0, 0, 4, 5, 6])
>>> np.r_[-1:1:6j, [0]*3, 5, 6]
array([-1. , -0.6, -0.2, 0.2, 0.6, 1. , 0. , 0. , 0. , 5. , 6. ])
String integers specify the axis to concatenate along or the minimum number of dimensions to force entries into. >>> a = np.array([[0, 1, 2], [3, 4, 5]])
>>> np.r_['-1', a, a] # concatenate along last axis
array([[0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5]])
>>> np.r_['0,2', [1,2,3], [4,5,6]] # concatenate along first axis, dim>=2
array([[1, 2, 3],
[4, 5, 6]])
>>> np.r_['0,2,0', [1,2,3], [4,5,6]]
array([[1],
[2],
[3],
[4],
[5],
[6]])
>>> np.r_['1,2,0', [1,2,3], [4,5,6]]
array([[1, 4],
[2, 5],
[3, 6]])
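For comparison, np.c_ (listed under See also) stacks 1-D inputs as columns along the second axis; a quick sketch: >>> np.c_[np.array([1,2,3]), np.array([4,5,6])]
array([[1, 4],
       [2, 5],
       [3, 6]])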
Using ‘r’ or ‘c’ as a first string argument creates a matrix. >>> np.r_['r',[1,2,3], [4,5,6]]
matrix([[1, 2, 3, 4, 5, 6]]) | |
doc_2485 | See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.random_channel_shift
tf.keras.preprocessing.image.random_channel_shift(
x, intensity_range, channel_axis=0
)
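A minimal usage sketch (assuming a channels-first image array, to match the default channel_axis=0; the shape is illustrative):
import numpy as np
import tensorflow as tf

img = np.random.rand(3, 64, 64)  # 3 channels first
shifted = tf.keras.preprocessing.image.random_channel_shift(
    img, intensity_range=0.2, channel_axis=0)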
Arguments
x Input tensor. Must be 3D.
intensity_range Transformation intensity.
channel_axis Index of axis for channels in the input tensor.
Returns Numpy image tensor. | |
doc_2486 |
Draw the figure using the renderer. It is important that this method actually walk the artist tree even if no output is produced, because this will trigger deferred work (like computing limits, auto-limits, and tick values) that users may want access to before saving to disk. | |
doc_2487 | 'blogs.blog': lambda o: "/blogs/%s/" % o.slug,
'news.story': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug),
}
The model name used in this setting should be all lowercase, regardless of the case of the actual model class name. ADMINS Default: [] (Empty list) A list of all the people who get code error notifications. When DEBUG=False and AdminEmailHandler is configured in LOGGING (done by default), Django emails these people the details of exceptions raised in the request/response cycle. Each item in the list should be a tuple of (Full name, email address). Example: [('John', 'john@example.com'), ('Mary', 'mary@example.com')]
ALLOWED_HOSTS Default: [] (Empty list) A list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent HTTP Host header attacks, which are possible even under many seemingly-safe web server configurations. Values in this list can be fully qualified names (e.g. 'www.example.com'), in which case they will be matched against the request’s Host header exactly (case-insensitive, not including port). A value beginning with a period can be used as a subdomain wildcard: '.example.com' will match example.com, www.example.com, and any other subdomain of example.com. A value of '*' will match anything; in this case you are responsible to provide your own validation of the Host header (perhaps in a middleware; if so this middleware must be listed first in MIDDLEWARE). Django also allows the fully qualified domain name (FQDN) of any entries. Some browsers include a trailing dot in the Host header which Django strips when performing host validation. If the Host header (or X-Forwarded-Host if USE_X_FORWARDED_HOST is enabled) does not match any value in this list, the django.http.HttpRequest.get_host() method will raise SuspiciousOperation. When DEBUG is True and ALLOWED_HOSTS is empty, the host is validated against ['.localhost', '127.0.0.1', '[::1]']. ALLOWED_HOSTS is also checked when running tests. This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection. APPEND_SLASH Default: True When set to True, if the request URL does not match any of the patterns in the URLconf and it doesn’t end in a slash, an HTTP redirect is issued to the same URL with a slash appended. Note that the redirect may cause any data submitted in a POST request to be lost. The APPEND_SLASH setting is only used if CommonMiddleware is installed (see Middleware). See also PREPEND_WWW. CACHES Default: {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
}
}
A dictionary containing the settings for all caches to be used with Django. It is a nested dictionary whose contents maps cache aliases to a dictionary containing the options for an individual cache. The CACHES setting must configure a default cache; any number of additional caches may also be specified. If you are using a cache backend other than the local memory cache, or you need to define multiple caches, other options will be required. The following cache options are available. BACKEND Default: '' (Empty string) The cache backend to use. The built-in cache backends are: 'django.core.cache.backends.db.DatabaseCache' 'django.core.cache.backends.dummy.DummyCache' 'django.core.cache.backends.filebased.FileBasedCache' 'django.core.cache.backends.locmem.LocMemCache' 'django.core.cache.backends.memcached.PyMemcacheCache' 'django.core.cache.backends.memcached.PyLibMCCache' 'django.core.cache.backends.redis.RedisCache' You can use a cache backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path of a cache backend class (i.e. mypackage.backends.whatever.WhateverCache). Changed in Django 3.2: The PyMemcacheCache backend was added. Changed in Django 4.0: The RedisCache backend was added. KEY_FUNCTION A string containing a dotted path to a function (or any callable) that defines how to compose a prefix, version and key into a final cache key. The default implementation is equivalent to the function: def make_key(key, key_prefix, version):
return ':'.join([key_prefix, str(version), key])
You may use any key function you want, as long as it has the same argument signature. See the cache documentation for more information. KEY_PREFIX Default: '' (Empty string) A string that will be automatically included (prepended by default) to all cache keys used by the Django server. See the cache documentation for more information. LOCATION Default: '' (Empty string) The location of the cache to use. This might be the directory for a file system cache, a host and port for a memcache server, or an identifying name for a local memory cache. e.g.: CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
'LOCATION': '/var/tmp/django_cache',
}
}
OPTIONS Default: None Extra parameters to pass to the cache backend. Available parameters vary depending on your cache backend. Some information on available parameters can be found in the cache arguments documentation. For more information, consult your backend module’s own documentation.
TIMEOUT Default: 300 The number of seconds before a cache entry is considered stale. If the value of this setting is None, cache entries will not expire. A value of 0 causes keys to immediately expire (effectively “don’t cache”).
VERSION Default: 1 The default version number for cache keys generated by the Django server. See the cache documentation for more information.
CACHE_MIDDLEWARE_ALIAS Default: 'default' The cache connection to use for the cache middleware.
CACHE_MIDDLEWARE_KEY_PREFIX Default: '' (Empty string) A string which will be prefixed to the cache keys generated by the cache middleware. This prefix is combined with the KEY_PREFIX setting; it does not replace it. See Django’s cache framework.
CACHE_MIDDLEWARE_SECONDS Default: 600 The default number of seconds to cache a page for the cache middleware. See Django’s cache framework.
CSRF_COOKIE_AGE Default: 31449600 (approximately 1 year, in seconds) The age of CSRF cookies, in seconds. The reason for setting a long-lived expiration time is to avoid problems in the case of a user closing a browser or bookmarking a page and then loading that page from a browser cache. Without persistent cookies, the form submission would fail in this case. Some browsers (specifically Internet Explorer) can disallow the use of persistent cookies or can have the indexes to the cookie jar corrupted on disk, thereby causing CSRF protection checks to (sometimes intermittently) fail. Change this setting to None to use session-based CSRF cookies, which keep the cookies in-memory instead of on persistent storage.
CSRF_COOKIE_DOMAIN Default: None The domain to be used when setting the CSRF cookie. This can be useful for easily allowing cross-subdomain requests to be excluded from the normal cross site request forgery protection. It should be set to a string such as ".example.com" to allow a POST request from a form on one subdomain to be accepted by a view served from another subdomain. Please note that the presence of this setting does not imply that Django’s CSRF protection is safe from cross-subdomain attacks by default - please see the CSRF limitations section.
CSRF_COOKIE_HTTPONLY Default: False Whether to use the HttpOnly flag on the CSRF cookie. If this is set to True, client-side JavaScript will not be able to access the CSRF cookie. Designating the CSRF cookie as HttpOnly doesn’t offer any practical protection because CSRF is only to protect against cross-domain attacks. If an attacker can read the cookie via JavaScript, they’re already on the same domain as far as the browser knows, so they can do anything they like anyway. (XSS is a much bigger hole than CSRF.) Although the setting offers little practical benefit, it’s sometimes required by security auditors. If you enable this and need to send the value of the CSRF token with an AJAX request, your JavaScript must pull the value from a hidden CSRF token form input instead of from the cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly.
CSRF_COOKIE_NAME Default: 'csrftoken' The name of the cookie to use for the CSRF authentication token. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Cross Site Request Forgery protection.
CSRF_COOKIE_PATH Default: '/' The path set on the CSRF cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own CSRF cookie. CSRF_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the CSRF cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. CSRF_COOKIE_SECURE Default: False Whether to use a secure cookie for the CSRF cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent with an HTTPS connection. CSRF_USE_SESSIONS Default: False Whether to store the CSRF token in the user’s session instead of in a cookie. It requires the use of django.contrib.sessions. Storing the CSRF token in a cookie (Django’s default) is safe, but storing it in the session is common practice in other web frameworks and therefore sometimes demanded by security auditors. Since the default error views require the CSRF token, SessionMiddleware must appear in MIDDLEWARE before any middleware that may raise an exception to trigger an error view (such as PermissionDenied) if you’re using CSRF_USE_SESSIONS. See Middleware ordering. CSRF_FAILURE_VIEW Default: 'django.views.csrf.csrf_failure' A dotted path to the view function to be used when an incoming request is rejected by the CSRF protection. The function should have this signature: def csrf_failure(request, reason=""):
...
where reason is a short message (intended for developers or logging, not for end users) indicating the reason the request was rejected. It should return an HttpResponseForbidden. django.views.csrf.csrf_failure() accepts an additional template_name parameter that defaults to '403_csrf.html'. If a template with that name exists, it will be used to render the page. CSRF_HEADER_NAME Default: 'HTTP_X_CSRFTOKEN' The name of the request header used for CSRF authentication. As with other HTTP headers in request.META, the header name received from the server is normalized by converting all characters to uppercase, replacing any hyphens with underscores, and adding an 'HTTP_' prefix to the name. For example, if your client sends a 'X-XSRF-TOKEN' header, the setting should be 'HTTP_X_XSRF_TOKEN'. CSRF_TRUSTED_ORIGINS Default: [] (Empty list) A list of trusted origins for unsafe requests (e.g. POST). For requests that include the Origin header, Django’s CSRF protection requires that header match the origin present in the Host header. For a secure unsafe request that doesn’t include the Origin header, the request must have a Referer header that matches the origin present in the Host header. These checks prevent, for example, a POST request from subdomain.example.com from succeeding against api.example.com. If you need cross-origin unsafe requests, continuing the example, add 'https://subdomain.example.com' to this list (and/or http://... if requests originate from an insecure page). The setting also supports subdomains, so you could add 'https://*.example.com', for example, to allow access from all subdomains of example.com. Changed in Django 4.0: The values in older versions must only include the hostname (possibly with a leading dot) and not the scheme or an asterisk. Also, Origin header checking isn’t performed in older versions. DATABASES Default: {} (Empty dictionary) A dictionary containing the settings for all databases to be used with Django. It is a nested dictionary whose contents map a database alias to a dictionary containing the options for an individual database. The DATABASES setting must configure a default database; any number of additional databases may also be specified. The simplest possible settings file is for a single-database setup using SQLite. This can be configured using the following: DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'mydatabase',
}
}
When connecting to other database backends, such as MariaDB, MySQL, Oracle, or PostgreSQL, additional connection parameters will be required. See the ENGINE setting below on how to specify other database types. This example is for PostgreSQL: DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'mydatabase',
'USER': 'mydatabaseuser',
'PASSWORD': 'mypassword',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
The following inner options that may be required for more complex configurations are available: ATOMIC_REQUESTS Default: False Set this to True to wrap each view in a transaction on this database. See Tying transactions to HTTP requests. AUTOCOMMIT Default: True Set this to False if you want to disable Django’s transaction management and implement your own. ENGINE Default: '' (Empty string) The database backend to use. The built-in database backends are: 'django.db.backends.postgresql' 'django.db.backends.mysql' 'django.db.backends.sqlite3' 'django.db.backends.oracle' You can use a database backend that doesn’t ship with Django by setting ENGINE to a fully-qualified path (i.e. mypackage.backends.whatever). HOST Default: '' (Empty string) Which host to use when connecting to the database. An empty string means localhost. Not used with SQLite. If this value starts with a forward slash ('/') and you’re using MySQL, MySQL will connect via a Unix socket to the specified socket. For example: "HOST": '/var/run/mysql'
If you’re using MySQL and this value doesn’t start with a forward slash, then this value is assumed to be the host. If you’re using PostgreSQL, by default (empty HOST), the connection to the database is done through UNIX domain sockets (‘local’ lines in pg_hba.conf). If your UNIX domain socket is not in the standard location, use the same value of unix_socket_directory from postgresql.conf. If you want to connect through TCP sockets, set HOST to ‘localhost’ or ‘127.0.0.1’ (‘host’ lines in pg_hba.conf). On Windows, you should always define HOST, as UNIX domain sockets are not available. NAME Default: '' (Empty string) The name of the database to use. For SQLite, it’s the full path to the database file. When specifying the path, always use forward slashes, even on Windows (e.g. C:/homes/user/mysite/sqlite3.db). CONN_MAX_AGE Default: 0 The lifetime of a database connection, as an integer of seconds. Use 0 to close database connections at the end of each request — Django’s historical behavior — and None for unlimited persistent connections. OPTIONS Default: {} (Empty dictionary) Extra parameters to use when connecting to the database. Available parameters vary depending on your database backend. Some information on available parameters can be found in the Database Backends documentation. For more information, consult your backend module’s own documentation. PASSWORD Default: '' (Empty string) The password to use when connecting to the database. Not used with SQLite. PORT Default: '' (Empty string) The port to use when connecting to the database. An empty string means the default port. Not used with SQLite. TIME_ZONE Default: None A string representing the time zone for this database connection or None. This inner option of the DATABASES setting accepts the same values as the general TIME_ZONE setting. When USE_TZ is True and this option is set, reading datetimes from the database returns aware datetimes in this time zone instead of UTC. When USE_TZ is False, it is an error to set this option.
If the database backend doesn’t support time zones (e.g. SQLite, MySQL, Oracle), Django reads and writes datetimes in local time according to this option if it is set and in UTC if it isn’t. Changing the connection time zone changes how datetimes are read from and written to the database. If Django manages the database and you don’t have a strong reason to do otherwise, you should leave this option unset. It’s best to store datetimes in UTC because it avoids ambiguous or nonexistent datetimes during daylight saving time changes. Also, receiving datetimes in UTC keeps datetime arithmetic simple — there’s no need to consider potential offset changes over a DST transition. If you’re connecting to a third-party database that stores datetimes in a local time rather than UTC, then you must set this option to the appropriate time zone. Likewise, if Django manages the database but third-party systems connect to the same database and expect to find datetimes in local time, then you must set this option.
If the database backend supports time zones (e.g. PostgreSQL), the TIME_ZONE option is very rarely needed. It can be changed at any time; the database takes care of converting datetimes to the desired time zone. Setting the time zone of the database connection may be useful for running raw SQL queries involving date/time functions provided by the database, such as date_trunc, because their results depend on the time zone. However, this has a downside: receiving all datetimes in local time makes datetime arithmetic more tricky — you must account for possible offset changes over DST transitions. Consider converting to local time explicitly with AT TIME ZONE in raw SQL queries instead of setting the TIME_ZONE option. DISABLE_SERVER_SIDE_CURSORS Default: False Set this to True if you want to disable the use of server-side cursors with QuerySet.iterator(). Transaction pooling and server-side cursors describes the use case. This is a PostgreSQL-specific setting. USER Default: '' (Empty string) The username to use when connecting to the database. Not used with SQLite. TEST Default: {} (Empty dictionary) A dictionary of settings for test databases; for more details about the creation and use of test databases, see The test database. Here’s an example with a test database configuration: DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'USER': 'mydatabaseuser',
'NAME': 'mydatabase',
'TEST': {
'NAME': 'mytestdatabase',
},
},
}
The following keys in the TEST dictionary are available:
CHARSET Default: None The character set encoding used to create the test database. The value of this string is passed directly through to the database, so its format is backend-specific. Supported by the PostgreSQL (postgresql) and MySQL (mysql) backends.
COLLATION Default: None The collation order to use when creating the test database. This value is passed directly to the backend, so its format is backend-specific. Only supported for the mysql backend (see the MySQL manual for details).
DEPENDENCIES Default: ['default'], for all databases other than default, which has no dependencies. The creation-order dependencies of the database. See the documentation on controlling the creation order of test databases for details.
MIGRATE Default: True When set to False, migrations won’t run when creating the test database. This is similar to setting None as a value in MIGRATION_MODULES, but for all apps.
MIRROR Default: None The alias of the database that this database should mirror during testing. This setting exists to allow for testing of primary/replica (referred to as master/slave by some databases) configurations of multiple databases. See the documentation on testing primary/replica configurations for details.
NAME Default: None The name of the database to use when running the test suite. If the default value (None) is used with the SQLite database engine, the tests will use a memory resident database. For all other database engines the test database will use the name 'test_' + DATABASE_NAME. See The test database.
SERIALIZE Boolean value to control whether or not the default test runner serializes the database into an in-memory JSON string before running tests (used to restore the database state between tests if you don’t have transactions). You can set this to False to speed up creation time if you don’t have any test classes with serialized_rollback=True. Deprecated since version 4.0: This setting is deprecated as it can be inferred from the databases with the serialized_rollback option enabled.
TEMPLATE This is a PostgreSQL-specific setting. The name of a template (e.g. 'template0') from which to create the test database.
CREATE_DB Default: True This is an Oracle-specific setting. If it is set to False, the test tablespaces won’t be automatically created at the beginning of the tests or dropped at the end.
CREATE_USER Default: True This is an Oracle-specific setting. If it is set to False, the test user won’t be automatically created at the beginning of the tests and dropped at the end.
USER Default: None This is an Oracle-specific setting. The username to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will use 'test_' + USER.
PASSWORD Default: None This is an Oracle-specific setting. The password to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will generate a random password.
ORACLE_MANAGED_FILES Default: False This is an Oracle-specific setting. If set to True, Oracle Managed Files (OMF) tablespaces will be used. DATAFILE and DATAFILE_TMP will be ignored.
TBLSPACE Default: None This is an Oracle-specific setting. The name of the tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER.
TBLSPACE_TMP Default: None This is an Oracle-specific setting. The name of the temporary tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER + '_temp'.
DATAFILE Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE. If not provided, Django will use TBLSPACE + '.dbf'. DATAFILE_TMP Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE_TMP. If not provided, Django will use TBLSPACE_TMP + '.dbf'. DATAFILE_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE is allowed to grow to. DATAFILE_TMP_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE_TMP is allowed to grow to. DATAFILE_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE. DATAFILE_TMP_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE_TMP. DATAFILE_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE is extended when more space is required. DATAFILE_TMP_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE_TMP is extended when more space is required. DATA_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size in bytes that a request body may be before a SuspiciousOperation (RequestDataTooBig) is raised. The check is done when accessing request.body or request.POST and is calculated against the total request size excluding any file upload data. You can set this to None to disable the check. Applications that are expected to receive unusually large form posts should tune this setting. The amount of request data is correlated to the amount of memory needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. See also FILE_UPLOAD_MAX_MEMORY_SIZE. DATA_UPLOAD_MAX_NUMBER_FIELDS Default: 1000 The maximum number of parameters that may be received via GET or POST before a SuspiciousOperation (TooManyFields) is raised. You can set this to None to disable the check. Applications that are expected to receive an unusually large number of form fields should tune this setting. The number of request parameters is correlated to the amount of time needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. DATABASE_ROUTERS Default: [] (Empty list) The list of routers that will be used to determine which database to use when performing a database query. See the documentation on automatic database routing in multi database configurations. DATE_FORMAT Default: 'N j, Y' (e.g. Feb. 4, 2003) The default formatting to use for displaying date fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATETIME_FORMAT, TIME_FORMAT and SHORT_DATE_FORMAT. DATE_INPUT_FORMATS Default: [
'%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y', # '2006-10-25', '10/25/2006', '10/25/06'
'%b %d %Y', '%b %d, %Y', # 'Oct 25 2006', 'Oct 25, 2006'
'%d %b %Y', '%d %b, %Y', # '25 Oct 2006', '25 Oct, 2006'
'%B %d %Y', '%B %d, %Y', # 'October 25 2006', 'October 25, 2006'
'%d %B %Y', '%d %B, %Y', # '25 October 2006', '25 October, 2006'
]
A list of formats that will be accepted when inputting data on a date field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATETIME_INPUT_FORMATS and TIME_INPUT_FORMATS. DATETIME_FORMAT Default: 'N j, Y, P' (e.g. Feb. 4, 2003, 4 p.m.) The default formatting to use for displaying datetime fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT, TIME_FORMAT and SHORT_DATETIME_FORMAT. DATETIME_INPUT_FORMATS Default: [
'%Y-%m-%d %H:%M:%S', # '2006-10-25 14:30:59'
'%Y-%m-%d %H:%M:%S.%f', # '2006-10-25 14:30:59.000200'
'%Y-%m-%d %H:%M', # '2006-10-25 14:30'
'%m/%d/%Y %H:%M:%S', # '10/25/2006 14:30:59'
'%m/%d/%Y %H:%M:%S.%f', # '10/25/2006 14:30:59.000200'
'%m/%d/%Y %H:%M', # '10/25/2006 14:30'
'%m/%d/%y %H:%M:%S', # '10/25/06 14:30:59'
'%m/%d/%y %H:%M:%S.%f', # '10/25/06 14:30:59.000200'
'%m/%d/%y %H:%M', # '10/25/06 14:30'
]
A list of formats that will be accepted when inputting data on a datetime field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. Date-only formats are not included, as datetime fields will automatically try DATE_INPUT_FORMATS as a last resort. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and TIME_INPUT_FORMATS. DEBUG Default: False A boolean that turns on/off debug mode. Never deploy a site into production with DEBUG turned on. One of the main features of debug mode is the display of detailed error pages. If your app raises an exception when DEBUG is True, Django will display a detailed traceback, including a lot of metadata about your environment, such as all the currently defined Django settings (from settings.py). As a security measure, Django will not include settings that might be sensitive, such as SECRET_KEY. Specifically, it will exclude any setting whose name includes any of the following: 'API' 'KEY' 'PASS' 'SECRET' 'SIGNATURE' 'TOKEN' Note that these are partial matches. 'PASS' will also match PASSWORD, just as 'TOKEN' will also match TOKENIZED and so on. Still, note that there are always going to be sections of your debug output that are inappropriate for public consumption. File paths, configuration options and the like all give attackers extra information about your server. It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server. Finally, if DEBUG is False, you also need to properly set the ALLOWED_HOSTS setting. Failing to do so will result in all requests being returned as “Bad Request (400)”. Note The default settings.py file created by django-admin
startproject sets DEBUG = True for convenience. DEBUG_PROPAGATE_EXCEPTIONS Default: False If set to True, Django’s exception handling of view functions (handler500, or the debug view if DEBUG is True) and logging of 500 responses (django.request) is skipped and exceptions propagate upward. This can be useful for some test setups. It shouldn’t be used on a live site unless you want your web server (instead of Django) to generate “Internal Server Error” responses. In that case, make sure your server doesn’t show the stack trace or other sensitive information in the response. DECIMAL_SEPARATOR Default: '.' (Dot) Default decimal separator used when formatting decimal numbers. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. DEFAULT_AUTO_FIELD New in Django 3.2. Default: 'django.db.models.AutoField' Default primary key field type to use for models that don’t have a field with primary_key=True. Migrating auto-created through tables The value of DEFAULT_AUTO_FIELD will be respected when creating new auto-created through tables for many-to-many relationships. Unfortunately, the primary keys of existing auto-created through tables cannot currently be updated by the migrations framework. This means that if you switch the value of DEFAULT_AUTO_FIELD and then generate migrations, the primary keys of the related models will be updated, as will the foreign keys from the through table, but the primary key of the auto-created through table will not be migrated. In order to address this, you should add a RunSQL operation to your migrations to perform the required ALTER TABLE step. You can check the existing table name through sqlmigrate, dbshell, or with the field’s remote_field.through._meta.db_table property. Explicitly defined through models are already handled by the migrations system. Allowing automatic migrations for the primary key of existing auto-created through tables may be implemented at a later date. DEFAULT_CHARSET Default: 'utf-8' Default charset to use for all HttpResponse objects, if a MIME type isn’t manually specified. Used when constructing the Content-Type header. DEFAULT_EXCEPTION_REPORTER Default: 'django.views.debug.ExceptionReporter' Default exception reporter class to be used if none has been assigned to the HttpRequest instance yet. See Custom error reports. DEFAULT_EXCEPTION_REPORTER_FILTER Default: 'django.views.debug.SafeExceptionReporterFilter' Default exception reporter filter class to be used if none has been assigned to the HttpRequest instance yet. See Filtering error reports. DEFAULT_FILE_STORAGE Default: 'django.core.files.storage.FileSystemStorage' Default file storage class to be used for any file-related operations that don’t specify a particular storage system. See Managing files. DEFAULT_FROM_EMAIL Default: 'webmaster@localhost' Default email address to use for various automated correspondence from the site manager(s). This doesn’t include error messages sent to ADMINS and MANAGERS; for that, see SERVER_EMAIL. DEFAULT_INDEX_TABLESPACE Default: '' (Empty string) Default tablespace to use for indexes on fields that don’t specify one, if the backend supports it (see Tablespaces). DEFAULT_TABLESPACE Default: '' (Empty string) Default tablespace to use for models that don’t specify one, if the backend supports it (see Tablespaces). 
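To make the DEFAULT_AUTO_FIELD notes above concrete, here is a minimal sketch: opting into 64-bit primary keys, then looking up the name of an auto-created through table that would need a handwritten migration after such a switch. The myapp.models Article model and its tags many-to-many field are hypothetical.
# settings.py -- use 64-bit auto primary keys for models without an explicit pk.
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'

# In a Django shell: find the auto-created through table whose primary key
# would need a manual RunSQL/ALTER TABLE migration after switching the setting.
from myapp.models import Article  # hypothetical app and model

table_name = Article._meta.get_field('tags').remote_field.through._meta.db_table
print(table_name)  # e.g. 'myapp_article_tags'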
DISALLOWED_USER_AGENTS Default: [] (Empty list) List of compiled regular expression objects representing User-Agent strings that are not allowed to visit any page, systemwide. Use this for bots/crawlers. This is only used if CommonMiddleware is installed (see Middleware). EMAIL_BACKEND Default: 'django.core.mail.backends.smtp.EmailBackend' The backend to use for sending emails. For the list of available backends see Sending email. EMAIL_FILE_PATH Default: Not defined The directory used by the file email backend to store output files. EMAIL_HOST Default: 'localhost' The host to use for sending email. See also EMAIL_PORT. EMAIL_HOST_PASSWORD Default: '' (Empty string) Password to use for the SMTP server defined in EMAIL_HOST. This setting is used in conjunction with EMAIL_HOST_USER when authenticating to the SMTP server. If either of these settings is empty, Django won’t attempt authentication. See also EMAIL_HOST_USER. EMAIL_HOST_USER Default: '' (Empty string) Username to use for the SMTP server defined in EMAIL_HOST. If empty, Django won’t attempt authentication. See also EMAIL_HOST_PASSWORD. EMAIL_PORT Default: 25 Port to use for the SMTP server defined in EMAIL_HOST. EMAIL_SUBJECT_PREFIX Default: '[Django] ' Subject-line prefix for email messages sent with django.core.mail.mail_admins or django.core.mail.mail_managers. You’ll probably want to include the trailing space. EMAIL_USE_LOCALTIME Default: False Whether to send the SMTP Date header of email messages in the local time zone (True) or in UTC (False). EMAIL_USE_TLS Default: False Whether to use a TLS (secure) connection when talking to the SMTP server. This is used for explicit TLS connections, generally on port 587. If you are experiencing hanging connections, see the implicit TLS setting EMAIL_USE_SSL. EMAIL_USE_SSL Default: False Whether to use an implicit TLS (secure) connection when talking to the SMTP server. In most email documentation this type of TLS connection is referred to as SSL. It is generally used on port 465. If you are experiencing problems, see the explicit TLS setting EMAIL_USE_TLS. Note that EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True. EMAIL_SSL_CERTFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted certificate chain file to use for the SSL connection. EMAIL_SSL_KEYFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted private key file to use for the SSL connection. Note that setting EMAIL_SSL_CERTFILE and EMAIL_SSL_KEYFILE doesn’t result in any certificate checking. They’re passed to the underlying SSL connection. Please refer to the documentation of Python’s ssl.wrap_socket() function for details on how the certificate chain file and private key file are handled. EMAIL_TIMEOUT Default: None Specifies a timeout in seconds for blocking operations like the connection attempt. FILE_UPLOAD_HANDLERS Default: [
'django.core.files.uploadhandler.MemoryFileUploadHandler',
'django.core.files.uploadhandler.TemporaryFileUploadHandler',
]
A list of handlers to use for uploading. Changing this setting allows complete customization – even replacement – of Django’s upload process. See Managing files for details. FILE_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size (in bytes) that an upload will be before it gets streamed to the file system. See Managing files for details. See also DATA_UPLOAD_MAX_MEMORY_SIZE. FILE_UPLOAD_DIRECTORY_PERMISSIONS Default: None The numeric mode to apply to directories created in the process of uploading files. This setting also determines the default permissions for collected static directories when using the collectstatic management command. See collectstatic for details on overriding it. This value mirrors the functionality and caveats of the FILE_UPLOAD_PERMISSIONS setting. FILE_UPLOAD_PERMISSIONS Default: 0o644 The numeric mode (i.e. 0o644) to set newly uploaded files to. For more information about what these modes mean, see the documentation for os.chmod(). If None, you’ll get operating-system dependent behavior. On most platforms, temporary files will have a mode of 0o600, and files saved from memory will be saved using the system’s standard umask. For security reasons, these permissions aren’t applied to the temporary files that are stored in FILE_UPLOAD_TEMP_DIR. This setting also determines the default permissions for collected static files when using the collectstatic management command. See collectstatic for details on overriding it. Warning Always prefix the mode with 0o. If you’re not familiar with file modes, please note that the 0o prefix is very important: it indicates an octal number, which is the way that modes must be specified. If you try to use 644, you’ll get totally incorrect behavior. FILE_UPLOAD_TEMP_DIR Default: None The directory to store data to (typically files larger than FILE_UPLOAD_MAX_MEMORY_SIZE) temporarily while uploading files. If None, Django will use the standard temporary directory for the operating system. For example, this will default to /tmp on *nix-style operating systems. See Managing files for details. FIRST_DAY_OF_WEEK Default: 0 (Sunday) A number representing the first day of the week. This is especially useful when displaying a calendar. This value is only used when not using format internationalization, or when a format cannot be found for the current locale. The value must be an integer from 0 to 6, where 0 means Sunday, 1 means Monday and so on. FIXTURE_DIRS Default: [] (Empty list) List of directories searched for fixture files, in addition to the fixtures directory of each application, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. See Providing data with fixtures and Fixture loading. FORCE_SCRIPT_NAME Default: None If not None, this will be used as the value of the SCRIPT_NAME environment variable in any HTTP request. This setting can be used to override the server-provided value of SCRIPT_NAME, which may be a rewritten version of the preferred value or not supplied at all. It is also used by django.setup() to set the URL resolver script prefix outside of the request/response cycle (e.g. in management commands and standalone scripts) to generate correct URLs when SCRIPT_NAME is not /. FORM_RENDERER Default: 'django.forms.renderers.DjangoTemplates' The class that renders forms and form widgets. It must implement the low-level render API. Included form renderers are:
'django.forms.renderers.DjangoTemplates'
'django.forms.renderers.Jinja2'
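For example, a project that has the Jinja2 package installed could opt into the alternative renderer:
FORM_RENDERER = 'django.forms.renderers.Jinja2'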
FORMAT_MODULE_PATH Default: None A full Python path to a Python package that contains custom format definitions for project locales. If not None, Django will check for a formats.py file, in a directory named after the current locale, and will use the formats defined in this file. For example, if FORMAT_MODULE_PATH is set to mysite.formats, and the current language is en (English), Django will expect a directory tree like: mysite/
formats/
__init__.py
en/
__init__.py
formats.py
You can also set this setting to a list of Python paths, for example: FORMAT_MODULE_PATH = [
'mysite.formats',
'some_app.formats',
]
When Django searches for a certain format, it will go through all given Python paths until it finds a module that actually defines the given format. This means that formats defined in packages farther up in the list will take precedence over the same formats in packages farther down. Available formats are: DATE_FORMAT DATE_INPUT_FORMATS
DATETIME_FORMAT, DATETIME_INPUT_FORMATS DECIMAL_SEPARATOR FIRST_DAY_OF_WEEK MONTH_DAY_FORMAT NUMBER_GROUPING SHORT_DATE_FORMAT SHORT_DATETIME_FORMAT THOUSAND_SEPARATOR TIME_FORMAT TIME_INPUT_FORMATS YEAR_MONTH_FORMAT IGNORABLE_404_URLS Default: [] (Empty list) List of compiled regular expression objects describing URLs that should be ignored when reporting HTTP 404 errors via email (see How to manage error reporting). Regular expressions are matched against request's full paths (including query string, if any). Use this if your site does not provide a commonly requested file such as favicon.ico or robots.txt. This is only used if BrokenLinkEmailsMiddleware is enabled (see Middleware). INSTALLED_APPS Default: [] (Empty list) A list of strings designating all applications that are enabled in this Django installation. Each string should be a dotted Python path to: an application configuration class (preferred), or a package containing an application. Learn more about application configurations. Use the application registry for introspection Your code should never access INSTALLED_APPS directly. Use django.apps.apps instead. Application names and labels must be unique in INSTALLED_APPS Application names — the dotted Python path to the application package — must be unique. There is no way to include the same application twice, short of duplicating its code under another name. Application labels — by default the final part of the name — must be unique too. For example, you can’t include both django.contrib.auth and myproject.auth. However, you can relabel an application with a custom configuration that defines a different label. These rules apply regardless of whether INSTALLED_APPS references application configuration classes or application packages. When several applications provide different versions of the same resource (template, static file, management command, translation), the application listed first in INSTALLED_APPS has precedence. INTERNAL_IPS Default: [] (Empty list) A list of IP addresses, as strings, that: Allow the debug() context processor to add some variables to the template context. Can use the admindocs bookmarklets even if not logged in as a staff user. Are marked as “internal” (as opposed to “EXTERNAL”) in AdminEmailHandler emails. LANGUAGE_CODE Default: 'en-us' A string representing the language code for this installation. This should be in standard language ID format. For example, U.S. English is "en-us". See also the list of language identifiers and Internationalization and localization. USE_I18N must be active for this setting to have any effect. It serves two purposes: If the locale middleware isn’t in use, it decides which translation is served to all users. If the locale middleware is active, it provides a fallback language in case the user’s preferred language can’t be determined or is not supported by the website. It also provides the fallback translation when a translation for a given literal doesn’t exist for the user’s preferred language. See How Django discovers language preference for more details. LANGUAGE_COOKIE_AGE Default: None (expires at browser close) The age of the language cookie, in seconds. LANGUAGE_COOKIE_DOMAIN Default: None The domain to use for the language cookie. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. Be cautious when updating this setting on a production site. 
If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies that have the old domain will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting) and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one (see the middleware sketch after the LANGUAGES example below). LANGUAGE_COOKIE_HTTPONLY Default: False Whether to use the HttpOnly flag on the language cookie. If this is set to True, client-side JavaScript will not be able to access the language cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. LANGUAGE_COOKIE_NAME Default: 'django_language' The name of the cookie to use for the language cookie. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Internationalization and localization. LANGUAGE_COOKIE_PATH Default: '/' The path set on the language cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths and each instance will only see its own language cookie. Be cautious when updating this setting on a production site. If you update this setting to use a deeper path than it previously used, existing user cookies that have the old path will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting), and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_SAMESITE Default: None The value of the SameSite flag on the language cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. LANGUAGE_COOKIE_SECURE Default: False Whether to use a secure cookie for the language cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. LANGUAGES Default: A list of all available languages. This list is continually growing and including a copy here would inevitably become rapidly out of date. You can see the current list of translated languages by looking in django/conf/global_settings.py. The list is a list of two-tuples in the format (language code, language name) – for example, ('ja', 'Japanese'). This specifies which languages are available for language selection. See Internationalization and localization. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, you can mark the language names as translation strings using the gettext_lazy() function. Here’s a sample settings file: from django.utils.translation import gettext_lazy as _
LANGUAGES = [
('de', _('German')),
('en', _('English')),
]
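Below is a minimal sketch of the cookie-copying middleware mentioned under LANGUAGE_COOKIE_DOMAIN and LANGUAGE_COOKIE_PATH above. The class name and the old cookie name are hypothetical; Django does not ship such a middleware.
from django.conf import settings

OLD_LANGUAGE_COOKIE_NAME = 'django_language'  # hypothetical: the name used before the switch

class LanguageCookieRenameMiddleware:
    # Copies the language cookie from its old name to the new
    # LANGUAGE_COOKIE_NAME and deletes the old cookie, so users keep
    # their language selection across the rename.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        old_value = request.COOKIES.get(OLD_LANGUAGE_COOKIE_NAME)
        if old_value is not None and settings.LANGUAGE_COOKIE_NAME not in request.COOKIES:
            response.set_cookie(settings.LANGUAGE_COOKIE_NAME, old_value)
            response.delete_cookie(OLD_LANGUAGE_COOKIE_NAME)
        return response
Once the old cookies have expired, the middleware can be removed.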
LANGUAGES_BIDI Default: A list of all language codes that are written right-to-left. You can see the current list of these languages by looking in django/conf/global_settings.py. The list contains language codes for languages that are written right-to-left. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, the list of bidirectional languages may contain language codes which are not enabled on a given site. LOCALE_PATHS Default: [] (Empty list) A list of directories where Django looks for translation files. See How Django discovers translations. Example: LOCALE_PATHS = [
'/home/www/project/common_files/locale',
'/var/local/translations/locale',
]
Django will look within each of these paths for the <locale_code>/LC_MESSAGES directories containing the actual translation files. LOGGING Default: A logging configuration dictionary. A data structure containing configuration information. The contents of this data structure will be passed as the argument to the configuration method described in LOGGING_CONFIG. Among other things, the default logging configuration passes HTTP 500 server errors to an email log handler when DEBUG is False. See also Configuring logging. You can see the default logging configuration by looking in django/utils/log.py. LOGGING_CONFIG Default: 'logging.config.dictConfig' A path to a callable that will be used to configure logging in the Django project. Points at an instance of Python’s dictConfig configuration method by default. If you set LOGGING_CONFIG to None, the logging configuration process will be skipped. MANAGERS Default: [] (Empty list) A list in the same format as ADMINS that specifies who should get broken link notifications when BrokenLinkEmailsMiddleware is enabled. MEDIA_ROOT Default: '' (Empty string) Absolute filesystem path to the directory that will hold user-uploaded files. Example: "/var/www/example.com/media/" See also MEDIA_URL. Warning MEDIA_ROOT and STATIC_ROOT must have different values. Before STATIC_ROOT was introduced, it was common to rely on, or fall back to, MEDIA_ROOT to also serve static files; however, since this can have serious security implications, there is a validation check to prevent it. MEDIA_URL Default: '' (Empty string) URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set to a non-empty value. You will need to configure these files to be served in both development and production environments. If you want to use {{ MEDIA_URL }} in your templates, add 'django.template.context_processors.media' in the 'context_processors' option of TEMPLATES. Example: "http://media.example.com/" Warning There are security risks if you are accepting uploaded content from untrusted users! See the security guide’s topic on User-uploaded content for mitigation details. Warning MEDIA_URL and STATIC_URL must have different values. See MEDIA_ROOT for more details. Note If MEDIA_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. MIDDLEWARE Default: None A list of middleware to use. See Middleware. MIGRATION_MODULES Default: {} (Empty dictionary) A dictionary specifying the package where migration modules can be found on a per-app basis. The default value of this setting is an empty dictionary, but the default package name for migration modules is migrations. Example: {'blog': 'blog.db_migrations'}
In this case, migrations pertaining to the blog app will be contained in the blog.db_migrations package. If you provide the app_label argument, makemigrations will automatically create the package if it doesn’t already exist. When you supply None as a value for an app, Django will consider the app as an app without migrations regardless of an existing migrations submodule. This can be used, for example, in a test settings file to skip migrations while testing (tables will still be created for the apps’ models). To disable migrations for all apps during tests, you can set MIGRATE to False instead. If MIGRATION_MODULES is used in your general project settings, remember to use the migrate --run-syncdb option if you want to create tables for the app. MONTH_DAY_FORMAT Default: 'F j' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the month and day are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given day displays the day and month. Different locales have different formats. For example, U.S. English would say “January 1,” whereas Spanish might say “1 Enero.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and YEAR_MONTH_FORMAT. NUMBER_GROUPING Default: 0 Number of digits grouped together on the integer part of a number. Common use is to display a thousand separator. If this setting is 0, then no grouping will be applied to the number. If this setting is greater than 0, then THOUSAND_SEPARATOR will be used as the separator between those groups. Some locales use non-uniform digit grouping, e.g. 10,00,00,000 in en_IN. For this case, you can provide a sequence with the number of digit group sizes to be applied. The first number defines the size of the group preceding the decimal delimiter, and each number that follows defines the size of preceding groups. If the sequence is terminated with -1, no further grouping is performed. If the sequence terminates with a 0, the last group size is used for the remainder of the number. Example tuple for en_IN: NUMBER_GROUPING = (3, 2, 0)
Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also DECIMAL_SEPARATOR, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. PREPEND_WWW Default: False Whether to prepend the “www.” subdomain to URLs that don’t have it. This is only used if CommonMiddleware is installed (see Middleware). See also APPEND_SLASH. ROOT_URLCONF Default: Not defined A string representing the full Python import path to your root URLconf, for example "mydjangoapps.urls". Can be overridden on a per-request basis by setting the attribute urlconf on the incoming HttpRequest object. See How Django processes a request for details. SECRET_KEY Default: '' (Empty string) A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value. django-admin startproject automatically adds a randomly-generated SECRET_KEY to each new project. Uses of the key shouldn’t assume that it’s text or bytes. Every use should go through force_str() or force_bytes() to convert it to the desired type. Django will refuse to start if SECRET_KEY is not set. Warning Keep this value secret. Running Django with a known SECRET_KEY defeats many of Django’s security protections, and can lead to privilege escalation and remote code execution vulnerabilities. The secret key is used for: All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash(). All messages if you are using CookieStorage or FallbackStorage. All PasswordResetView tokens. Any usage of cryptographic signing, unless a different key is provided. If you rotate your secret key, all of the above will be invalidated. Secret keys are not used for passwords of users and key rotation will not affect them. Note The default settings.py file created by django-admin
startproject creates a unique SECRET_KEY for convenience. SECURE_CONTENT_TYPE_NOSNIFF Default: True If True, the SecurityMiddleware sets the X-Content-Type-Options: nosniff header on all responses that do not already have it. SECURE_CROSS_ORIGIN_OPENER_POLICY New in Django 4.0. Default: 'same-origin' Unless set to None, the SecurityMiddleware sets the Cross-Origin Opener Policy header on all responses that do not already have it to the value provided. SECURE_HSTS_INCLUDE_SUBDOMAINS Default: False If True, the SecurityMiddleware adds the includeSubDomains directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. Warning Setting this incorrectly can irreversibly (for the value of SECURE_HSTS_SECONDS) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_HSTS_PRELOAD Default: False If True, the SecurityMiddleware adds the preload directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. SECURE_HSTS_SECONDS Default: 0 If set to a non-zero integer value, the SecurityMiddleware sets the HTTP Strict Transport Security header on all responses that do not already have it. Warning Setting this incorrectly can irreversibly (for some time) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_PROXY_SSL_HEADER Default: None A tuple representing an HTTP header/value combination that signifies a request is secure. This controls the behavior of the request object’s is_secure() method. By default, is_secure() determines if a request is secure by confirming that a requested URL uses https://. This method is important for Django’s CSRF protection, and it may be used by your own code or third-party apps. If your Django app is behind a proxy, though, the proxy may be “swallowing” whether the original request uses HTTPS or not. If there is a non-HTTPS connection between the proxy and Django then is_secure() would always return False – even for requests that were made via HTTPS by the end user. In contrast, if there is an HTTPS connection between the proxy and Django then is_secure() would always return True – even for requests that were made originally via HTTP. In this situation, configure your proxy to set a custom HTTP header that tells Django whether the request came in via HTTPS, and set SECURE_PROXY_SSL_HEADER so that Django knows what header to look for. Set a tuple with two elements – the name of the header to look for and the required value. For example: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
This tells Django to trust the X-Forwarded-Proto header that comes from our proxy, and any time its value is 'https', then the request is guaranteed to be secure (i.e., it originally came in via HTTPS). You should only set this setting if you control your proxy or have some other guarantee that it sets/strips this header appropriately. Note that the header needs to be in the format as used by request.META – all caps and likely starting with HTTP_. (Remember, Django automatically adds 'HTTP_' to the start of x-header names before making the header available in request.META.) Warning Modifying this setting can compromise your site’s security. Ensure you fully understand your setup before changing it. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your Django app is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to Django, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None and find another way of determining HTTPS, perhaps via custom middleware. SECURE_REDIRECT_EXEMPT Default: [] (Empty list) If a URL path matches a regular expression in this list, the request will not be redirected to HTTPS. The SecurityMiddleware strips leading slashes from URL paths, so patterns shouldn’t include them, e.g. SECURE_REDIRECT_EXEMPT = [r'^no-ssl/$', …]. If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_REFERRER_POLICY Default: 'same-origin' If configured, the SecurityMiddleware sets the Referrer Policy header on all responses that do not already have it to the value provided. SECURE_SSL_HOST Default: None If a string (e.g. secure.example.com), all SSL redirects will be directed to this host rather than the originally-requested host (e.g. www.example.com). If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_SSL_REDIRECT Default: False If True, the SecurityMiddleware redirects all non-HTTPS requests to HTTPS (except for those URLs matching a regular expression listed in SECURE_REDIRECT_EXEMPT). Note If turning this to True causes infinite redirects, it probably means your site is running behind a proxy and can’t tell which requests are secure and which are not. Your proxy likely sets a header to indicate secure requests; you can correct the problem by finding out what that header is and configuring the SECURE_PROXY_SSL_HEADER setting accordingly. SERIALIZATION_MODULES Default: Not defined A dictionary of modules containing serializer definitions (provided as strings), keyed by a string identifier for that serialization type. For example, to define a YAML serializer, use: SERIALIZATION_MODULES = {'yaml': 'path.to.yaml_serializer'}
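The module referenced by such an entry is expected to expose a Serializer class and a Deserializer callable. A minimal sketch of that shape, reusing Django’s built-in JSON implementations as stand-ins (the module path is the hypothetical one from the example above):
# path/to/yaml_serializer.py (hypothetical module from the example above)
# Re-exporting the built-in JSON serializer pair only demonstrates the
# required module shape; a real implementation would subclass these and
# emit/parse YAML instead.
from django.core.serializers.json import Serializer, Deserializer  # noqa: F401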
SERVER_EMAIL Default: 'root@localhost' The email address that error messages come from, such as those sent to ADMINS and MANAGERS. Why are my emails sent from a different address? This address is used only for error messages. It is not the address that regular email messages sent with send_mail() come from; for that, see DEFAULT_FROM_EMAIL. SHORT_DATE_FORMAT Default: 'm/d/Y' (e.g. 12/31/2003) An available formatting that can be used for displaying date fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATETIME_FORMAT. SHORT_DATETIME_FORMAT Default: 'm/d/Y P' (e.g. 12/31/2003 4 p.m.) An available formatting that can be used for displaying datetime fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATE_FORMAT. SIGNING_BACKEND Default: 'django.core.signing.TimestampSigner' The backend used for signing cookies and other data. See also the Cryptographic signing documentation. SILENCED_SYSTEM_CHECKS Default: [] (Empty list) A list of identifiers of messages generated by the system check framework (i.e. ["models.W001"]) that you wish to permanently acknowledge and ignore. Silenced checks will not be output to the console. See also the System check framework documentation. TEMPLATES Default: [] (Empty list) A list containing the settings for all template engines to be used with Django. Each item of the list is a dictionary containing the options for an individual engine. Here’s a setup that tells the Django template engine to load templates from the templates subdirectory inside each installed application: TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'APP_DIRS': True,
},
]
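For comparison, here is a sketch closer to what django-admin startproject generates, adding DIRS and OPTIONS. The context processors listed are the standard defaults; the BASE_DIR-based DIRS entry is an assumption about the project layout:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [BASE_DIR / 'templates'],  # assumes the usual BASE_DIR defined in settings.py
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]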
The following options are available for all backends. BACKEND Default: Not defined The template backend to use. The built-in template backends are: 'django.template.backends.django.DjangoTemplates' 'django.template.backends.jinja2.Jinja2' You can use a template backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path (i.e. 'mypackage.whatever.Backend'). NAME Default: see below The alias for this particular template engine. It’s an identifier that allows selecting an engine for rendering. Aliases must be unique across all configured template engines. It defaults to the name of the module defining the engine class, i.e. the next to last piece of BACKEND, when it isn’t provided. For example if the backend is 'mypackage.whatever.Backend' then its default name is 'whatever'. DIRS Default: [] (Empty list) Directories where the engine should look for template source files, in search order. APP_DIRS Default: False Whether the engine should look for template source files inside installed applications. Note The default settings.py file created by django-admin
startproject sets 'APP_DIRS': True. OPTIONS Default: {} (Empty dict) Extra parameters to pass to the template backend. Available parameters vary depending on the template backend. See DjangoTemplates and Jinja2 for the options of the built-in backends. TEST_RUNNER Default: 'django.test.runner.DiscoverRunner' The name of the class to use for starting the test suite. See Using different testing frameworks. TEST_NON_SERIALIZED_APPS Default: [] (Empty list) In order to restore the database state between tests for TransactionTestCases and database backends without transactions, Django will serialize the contents of all apps when it starts the test run so it can then reload from that copy before running tests that need it. This slows down the startup time of the test runner; if you have apps that you know don’t need this feature, you can add their full names in here (e.g. 'django.contrib.contenttypes') to exclude them from this serialization process. THOUSAND_SEPARATOR Default: ',' (Comma) Default thousand separator used when formatting numbers. This setting is used only when USE_THOUSAND_SEPARATOR is True and NUMBER_GROUPING is greater than 0. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, DECIMAL_SEPARATOR and USE_THOUSAND_SEPARATOR. TIME_FORMAT Default: 'P' (e.g. 4 p.m.) The default formatting to use for displaying time fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT and DATETIME_FORMAT. TIME_INPUT_FORMATS Default: [
'%H:%M:%S', # '14:30:59'
'%H:%M:%S.%f', # '14:30:59.000200'
'%H:%M', # '14:30'
]
A list of formats that will be accepted when inputting data on a time field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and DATETIME_INPUT_FORMATS. TIME_ZONE Default: 'America/Chicago' A string representing the time zone for this installation. See the list of time zones. Note Since Django was first released with the TIME_ZONE set to 'America/Chicago', the global setting (used if nothing is defined in your project’s settings.py) remains 'America/Chicago' for backwards compatibility. New project templates default to 'UTC'. Note that this isn’t necessarily the time zone of the server. For example, one server may serve multiple Django-powered sites, each with a separate time zone setting. When USE_TZ is False, this is the time zone in which Django will store all datetimes. When USE_TZ is True, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms. On Unix environments (where time.tzset() is implemented), Django sets the os.environ['TZ'] variable to the time zone you specify in the TIME_ZONE setting. Thus, all your views and models will automatically operate in this time zone. However, Django won’t set the TZ environment variable if you’re using the manual configuration option as described in manually configuring settings. If Django doesn’t set the TZ environment variable, it’s up to you to ensure your processes are running in the correct environment. Note Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, TIME_ZONE must be set to match the system time zone. USE_DEPRECATED_PYTZ New in Django 4.0. Default: False A boolean that specifies whether to use pytz, rather than zoneinfo, as the default time zone implementation. Deprecated since version 4.0: This transitional setting is deprecated. Support for using pytz will be removed in Django 5.0. USE_I18N Default: True A boolean that specifies whether Django’s translation system should be enabled. This provides a way to turn it off, for performance. If this is set to False, Django will make some optimizations so as not to load the translation machinery. See also LANGUAGE_CODE, USE_L10N and USE_TZ. Note The default settings.py file created by django-admin
startproject includes USE_I18N = True for convenience. USE_L10N Default: True A boolean that specifies if localized formatting of data will be enabled by default or not. If this is set to True, Django will, for example, display numbers and dates using the format of the current locale. See also LANGUAGE_CODE, USE_I18N and USE_TZ. Changed in Django 4.0: In older versions, the default value was False. Deprecated since version 4.0: This setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example, Django will display numbers and dates using the format of the current locale. USE_THOUSAND_SEPARATOR Default: False A boolean that specifies whether to display numbers using a thousand separator. When set to True and USE_L10N is also True, Django will format numbers using the NUMBER_GROUPING and THOUSAND_SEPARATOR settings. These settings may also be dictated by the locale, which takes precedence. See also DECIMAL_SEPARATOR, NUMBER_GROUPING and THOUSAND_SEPARATOR. USE_TZ Default: False Note In Django 5.0, the default value will change from False to True. A boolean that specifies if datetimes will be timezone-aware by default or not. If this is set to True, Django will use timezone-aware datetimes internally. When USE_TZ is False, Django will use naive datetimes in local time, except when parsing ISO 8601 formatted strings, where timezone information will always be retained if present. See also TIME_ZONE, USE_I18N and USE_L10N. Note The default settings.py file created by django-admin startproject includes USE_TZ = True for convenience. USE_X_FORWARDED_HOST Default: False A boolean that specifies whether to use the X-Forwarded-Host header in preference to the Host header. This should only be enabled if a proxy which sets this header is in use. This setting takes priority over USE_X_FORWARDED_PORT. Per RFC 7239#section-5.3, the X-Forwarded-Host header can include the port number, in which case you shouldn’t use USE_X_FORWARDED_PORT. USE_X_FORWARDED_PORT Default: False A boolean that specifies whether to use the X-Forwarded-Port header in preference to the SERVER_PORT META variable. This should only be enabled if a proxy which sets this header is in use. USE_X_FORWARDED_HOST takes priority over this setting. WSGI_APPLICATION Default: None The full Python path of the WSGI application object that Django’s built-in servers (e.g. runserver) will use. The django-admin
startproject management command will create a standard wsgi.py file with an application callable in it, and point this setting to that application. If not set, the return value of django.core.wsgi.get_wsgi_application() will be used. In this case, the behavior of runserver will be identical to previous Django versions. YEAR_MONTH_FORMAT Default: 'F Y' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the year and month are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given month displays the month and the year. Different locales have different formats. For example, U.S. English would say “January 2006,” whereas another locale might say “2006/January.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and MONTH_DAY_FORMAT. X_FRAME_OPTIONS Default: 'DENY' The default value for the X-Frame-Options header used by XFrameOptionsMiddleware. See the clickjacking protection documentation. Auth Settings for django.contrib.auth. AUTHENTICATION_BACKENDS Default: ['django.contrib.auth.backends.ModelBackend'] A list of authentication backend classes (as strings) to use when attempting to authenticate a user. See the authentication backends documentation for details. AUTH_USER_MODEL Default: 'auth.User' The model to use to represent a User. See Substituting a custom User model. Warning You cannot change the AUTH_USER_MODEL setting during the lifetime of a project (i.e. once you have made and migrated models that depend on it) without serious effort. It is intended to be set at the project start, and the model it refers to must be available in the first migration of the app that it lives in. See Substituting a custom User model for more details. LOGIN_REDIRECT_URL Default: '/accounts/profile/' The URL or named URL pattern where requests are redirected after login when the LoginView doesn’t get a next GET parameter. LOGIN_URL Default: '/accounts/login/' The URL or named URL pattern where requests are redirected for login when using the login_required() decorator, LoginRequiredMixin, or AccessMixin. LOGOUT_REDIRECT_URL Default: None The URL or named URL pattern where requests are redirected after logout if LogoutView doesn’t have a next_page attribute. If None, no redirect will be performed and the logout view will be rendered. PASSWORD_RESET_TIMEOUT Default: 259200 (3 days, in seconds) The number of seconds a password reset link is valid for. Used by the PasswordResetConfirmView. Note Reducing the value of this timeout doesn’t make any difference to the ability of an attacker to brute-force a password reset token. Tokens are designed to be safe from brute-forcing without any timeout. This timeout exists to protect against some unlikely attack scenarios, such as someone gaining access to email archives that may contain old, unused password reset tokens. PASSWORD_HASHERS See How Django stores passwords. Default: [
'django.contrib.auth.hashers.PBKDF2PasswordHasher',
'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
'django.contrib.auth.hashers.Argon2PasswordHasher',
'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
]
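As a hedged illustration of reordering this setting: the first hasher in the list is used to store new passwords, while the rest remain available so existing passwords can still be verified (and upgraded on login). Preferring Argon2, which assumes the argon2-cffi package is installed:
PASSWORD_HASHERS = [
    'django.contrib.auth.hashers.Argon2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
    'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
]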
AUTH_PASSWORD_VALIDATORS Default: [] (Empty list) The list of validators that are used to check the strength of users’ passwords. See Password validation for more details. By default, no validation is performed and all passwords are accepted. Messages Settings for django.contrib.messages. MESSAGE_LEVEL Default: messages.INFO Sets the minimum message level that will be recorded by the messages framework. See message levels for more details. Important If you override MESSAGE_LEVEL in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants
MESSAGE_LEVEL = message_constants.DEBUG
If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. MESSAGE_STORAGE Default: 'django.contrib.messages.storage.fallback.FallbackStorage' Controls where Django stores message data. Valid values are: 'django.contrib.messages.storage.fallback.FallbackStorage' 'django.contrib.messages.storage.session.SessionStorage' 'django.contrib.messages.storage.cookie.CookieStorage' See message storage backends for more details. The backends that use cookies – CookieStorage and FallbackStorage – use the value of SESSION_COOKIE_DOMAIN, SESSION_COOKIE_SECURE and SESSION_COOKIE_HTTPONLY when setting their cookies. MESSAGE_TAGS Default: {
messages.DEBUG: 'debug',
messages.INFO: 'info',
messages.SUCCESS: 'success',
messages.WARNING: 'warning',
messages.ERROR: 'error',
}
This sets the mapping of message level to message tag, which is typically rendered as a CSS class in HTML. If you specify a value, it will extend the default. This means you only have to specify those values which you need to override. See Displaying messages above for more details. Important If you override MESSAGE_TAGS in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants
MESSAGE_TAGS = {message_constants.INFO: ''}
If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. Sessions Settings for django.contrib.sessions. SESSION_CACHE_ALIAS Default: 'default' If you’re using cache-based session storage, this selects the cache to use. SESSION_COOKIE_AGE Default: 1209600 (2 weeks, in seconds) The age of session cookies, in seconds. SESSION_COOKIE_DOMAIN Default: None The domain to use for session cookies. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. To use cross-domain cookies with CSRF_USE_SESSIONS, you must include a leading dot (e.g. ".example.com") to accommodate the CSRF middleware’s referer checking. Be cautious when updating this setting on a production site. If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies will be set to the old domain. This may result in them being unable to log in as long as these cookies persist. This setting also affects cookies set by django.contrib.messages. SESSION_COOKIE_HTTPONLY Default: True Whether to use the HttpOnly flag on the session cookie. If this is set to True, client-side JavaScript will not be able to access the session cookie. HttpOnly is a flag included in a Set-Cookie HTTP response header. It’s part of the RFC 6265#section-4.1.2.6 standard for cookies and can be a useful way to mitigate the risk of a client-side script accessing the protected cookie data. This makes it less trivial for an attacker to escalate a cross-site scripting vulnerability into full hijacking of a user’s session. There aren’t many good reasons for turning this off. Your code shouldn’t read session cookies from JavaScript. SESSION_COOKIE_NAME Default: 'sessionid' The name of the cookie to use for sessions. This can be whatever you want (as long as it’s different from the other cookie names in your application). SESSION_COOKIE_PATH Default: '/' The path set on the session cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own session cookie. SESSION_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the session cookie. This flag prevents the cookie from being sent in cross-site requests, thus preventing CSRF attacks and making some methods of stealing the session cookie impossible. Possible values for the setting are:
'Strict': prevents the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user won’t be able to access the project. A bank website, however, most likely doesn’t want to allow any transactional pages to be linked from external sites, so the 'Strict' flag would be appropriate.
'Lax' (default): provides a balance between security and usability for websites that want to maintain a user’s logged-in session after the user arrives from an external link. In the GitHub scenario, the session cookie would be allowed when following a regular link from an external website and be blocked in CSRF-prone request methods (e.g. POST).
'None' (string): the session cookie will be sent with all same-site and cross-site requests.
False: disables the flag.
Note Modern browsers provide a more secure default policy for the SameSite flag and will assume Lax for cookies without an explicit value set.
SESSION_COOKIE_SECURE Default: False Whether to use a secure cookie for the session cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent over an HTTPS connection. Leaving this setting off isn’t a good idea, because an attacker could capture an unencrypted session cookie with a packet sniffer and use the cookie to hijack the user’s session.
SESSION_ENGINE Default: 'django.contrib.sessions.backends.db' Controls where Django stores session data. Included engines are: 'django.contrib.sessions.backends.db', 'django.contrib.sessions.backends.file', 'django.contrib.sessions.backends.cache', 'django.contrib.sessions.backends.cached_db', 'django.contrib.sessions.backends.signed_cookies'. See Configuring the session engine for more details.
SESSION_EXPIRE_AT_BROWSER_CLOSE Default: False Whether to expire the session when the user closes their browser. See Browser-length sessions vs. persistent sessions.
SESSION_FILE_PATH Default: None If you’re using file-based session storage, this sets the directory in which Django will store session data. When the default value (None) is used, Django will use the standard temporary directory for the system.
SESSION_SAVE_EVERY_REQUEST Default: False Whether to save the session data on every request. If this is False (the default), then the session data will only be saved if it has been modified – that is, if any of its dictionary values have been assigned or deleted. Empty sessions won’t be created, even if this setting is active.
SESSION_SERIALIZER Default: 'django.contrib.sessions.serializers.JSONSerializer' Full import path of a serializer class to use for serializing session data. Included serializers are: 'django.contrib.sessions.serializers.PickleSerializer', 'django.contrib.sessions.serializers.JSONSerializer'. See Session serialization for details, including a warning regarding possible remote code execution when using PickleSerializer. (A combined example of these session settings appears at the end of this entry.)
Sites Settings for django.contrib.sites.
SITE_ID Default: Not defined The ID, as an integer, of the current site in the django_site database table. This is used so that application data can hook into specific sites and a single database can manage content for multiple sites.
Static Files Settings for django.contrib.staticfiles.
STATIC_ROOT Default: None The absolute path to the directory where collectstatic will collect static files for deployment. Example: "/var/www/example.com/static/" If the staticfiles contrib app is enabled (as in the default project template), the collectstatic management command will collect static files into this directory. See the how-to on managing static files for more details about usage.
Warning This should be an initially empty destination directory for collecting your static files from their permanent locations into one directory for ease of deployment; it is not a place to store your static files permanently. You should do that in directories that will be found by staticfiles’s finders, which by default are 'static/' app sub-directories and any directories you include in STATICFILES_DIRS.
STATIC_URL Default: None URL to use when referring to static files located in STATIC_ROOT. Example: "static/" or "http://static.example.com/" If not None, this will be used as the base path for asset definitions (the Media class) and the staticfiles app.
It must end in a slash if set to a non-empty value. You may need to configure these files to be served in development and will definitely need to do so in production.
Note If STATIC_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings.
STATICFILES_DIRS Default: [] (Empty list) This setting defines the additional locations the staticfiles app will traverse if the FileSystemFinder finder is enabled, e.g. if you use the collectstatic or findstatic management command or use the static file serving view. This should be set to a list of strings that contain full paths to your additional static files directories, e.g.: STATICFILES_DIRS = [
"/home/special.polls.com/polls/static",
"/home/polls.com/polls/static",
"/opt/webfiles/common",
]
Note that these paths should use Unix-style forward slashes, even on Windows (e.g. "C:/Users/user/mysite/extra_static_content").
Prefixes (optional) In case you want to refer to files in one of the locations with an additional namespace, you can optionally provide a prefix as (prefix, path) tuples, e.g.: STATICFILES_DIRS = [
# ...
("downloads", "/opt/webfiles/stats"),
]
For example, assuming you have STATIC_URL set to 'static/', the collectstatic management command would collect the “stats” files in a 'downloads' subdirectory of STATIC_ROOT. This would allow you to refer to the local file '/opt/webfiles/stats/polls_20101022.tar.gz' with '/static/downloads/polls_20101022.tar.gz' in your templates, e.g.: <a href="{% static 'downloads/polls_20101022.tar.gz' %}">
STATICFILES_STORAGE Default: 'django.contrib.staticfiles.storage.StaticFilesStorage' The file storage engine to use when collecting static files with the collectstatic management command. A ready-to-use instance of the storage backend defined in this setting can be found at django.contrib.staticfiles.storage.staticfiles_storage. For an example, see Serving static files from a cloud service or CDN.
STATICFILES_FINDERS Default: [
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
]
The list of finder backends that know how to find static files in various locations. The default will find files stored in the STATICFILES_DIRS setting (using django.contrib.staticfiles.finders.FileSystemFinder) and in a static subdirectory of each app (using django.contrib.staticfiles.finders.AppDirectoriesFinder). If multiple files with the same name are present, the first file that is found will be used. One finder is disabled by default: django.contrib.staticfiles.finders.DefaultStorageFinder. If added to your STATICFILES_FINDERS setting, it will look for static files in the default file storage as defined by the DEFAULT_FILE_STORAGE setting. Note When using the AppDirectoriesFinder finder, make sure your apps can be found by staticfiles by adding the app to the INSTALLED_APPS setting of your site. Static file finders are currently considered a private interface, and this interface is thus undocumented.
Core Settings Topical Index
Cache: CACHES, CACHE_MIDDLEWARE_ALIAS, CACHE_MIDDLEWARE_KEY_PREFIX, CACHE_MIDDLEWARE_SECONDS
Database: DATABASES, DATABASE_ROUTERS, DEFAULT_INDEX_TABLESPACE, DEFAULT_TABLESPACE
Debugging: DEBUG, DEBUG_PROPAGATE_EXCEPTIONS
Email: ADMINS, DEFAULT_CHARSET, DEFAULT_FROM_EMAIL, EMAIL_BACKEND, EMAIL_FILE_PATH, EMAIL_HOST, EMAIL_HOST_PASSWORD, EMAIL_HOST_USER, EMAIL_PORT, EMAIL_SSL_CERTFILE, EMAIL_SSL_KEYFILE, EMAIL_SUBJECT_PREFIX, EMAIL_TIMEOUT, EMAIL_USE_LOCALTIME, EMAIL_USE_TLS, MANAGERS, SERVER_EMAIL
Error reporting: DEFAULT_EXCEPTION_REPORTER, DEFAULT_EXCEPTION_REPORTER_FILTER, IGNORABLE_404_URLS, MANAGERS, SILENCED_SYSTEM_CHECKS
File uploads: DEFAULT_FILE_STORAGE, FILE_UPLOAD_HANDLERS, FILE_UPLOAD_MAX_MEMORY_SIZE, FILE_UPLOAD_PERMISSIONS, FILE_UPLOAD_TEMP_DIR, MEDIA_ROOT, MEDIA_URL
Forms: FORM_RENDERER
Globalization (i18n/l10n): DATE_FORMAT, DATE_INPUT_FORMATS, DATETIME_FORMAT, DATETIME_INPUT_FORMATS, DECIMAL_SEPARATOR, FIRST_DAY_OF_WEEK, FORMAT_MODULE_PATH, LANGUAGE_CODE, LANGUAGE_COOKIE_AGE, LANGUAGE_COOKIE_DOMAIN, LANGUAGE_COOKIE_HTTPONLY, LANGUAGE_COOKIE_NAME, LANGUAGE_COOKIE_PATH, LANGUAGE_COOKIE_SAMESITE, LANGUAGE_COOKIE_SECURE, LANGUAGES, LANGUAGES_BIDI, LOCALE_PATHS, MONTH_DAY_FORMAT, NUMBER_GROUPING, SHORT_DATE_FORMAT, SHORT_DATETIME_FORMAT, THOUSAND_SEPARATOR, TIME_FORMAT, TIME_INPUT_FORMATS, TIME_ZONE, USE_I18N, USE_L10N, USE_THOUSAND_SEPARATOR, USE_TZ, YEAR_MONTH_FORMAT
HTTP: DATA_UPLOAD_MAX_MEMORY_SIZE, DATA_UPLOAD_MAX_NUMBER_FIELDS, DEFAULT_CHARSET, DISALLOWED_USER_AGENTS, FORCE_SCRIPT_NAME, INTERNAL_IPS, MIDDLEWARE
Security: SECURE_CONTENT_TYPE_NOSNIFF, SECURE_CROSS_ORIGIN_OPENER_POLICY, SECURE_HSTS_INCLUDE_SUBDOMAINS, SECURE_HSTS_PRELOAD, SECURE_HSTS_SECONDS, SECURE_PROXY_SSL_HEADER, SECURE_REDIRECT_EXEMPT, SECURE_REFERRER_POLICY, SECURE_SSL_HOST, SECURE_SSL_REDIRECT, SIGNING_BACKEND, USE_X_FORWARDED_HOST, USE_X_FORWARDED_PORT, WSGI_APPLICATION
Logging: LOGGING, LOGGING_CONFIG
Models: ABSOLUTE_URL_OVERRIDES, FIXTURE_DIRS, INSTALLED_APPS
Security (Cross Site Request Forgery Protection): CSRF_COOKIE_DOMAIN, CSRF_COOKIE_NAME, CSRF_COOKIE_PATH, CSRF_COOKIE_SAMESITE, CSRF_COOKIE_SECURE, CSRF_FAILURE_VIEW, CSRF_HEADER_NAME, CSRF_TRUSTED_ORIGINS, CSRF_USE_SESSIONS, SECRET_KEY, X_FRAME_OPTIONS
Serialization: DEFAULT_CHARSET, SERIALIZATION_MODULES
Templates: TEMPLATES
Testing: TEST (database settings), TEST_NON_SERIALIZED_APPS, TEST_RUNNER
URLs: APPEND_SLASH, PREPEND_WWW, ROOT_URLCONF
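To tie the session settings described above together, here is a purely illustrative settings.py fragment; the values are examples, not universal recommendations:
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
SESSION_COOKIE_AGE = 60 * 60 * 24 * 14  # two weeks, in seconds
SESSION_COOKIE_SECURE = True  # only send the cookie over HTTPS
SESSION_COOKIE_HTTPONLY = True  # the default; keeps JavaScript away from the cookie
SESSION_COOKIE_SAMESITE = 'Lax'  # the default; consider 'Strict' for stricter sites
SESSION_EXPIRE_AT_BROWSER_CLOSE = False
Each value corresponds to a setting documented above. | |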
doc_2488 | Merges all summaries collected in the default graph. tf.compat.v1.summary.merge_all(
key=tf.GraphKeys.SUMMARIES, scope=None, name=None
)
Args
key GraphKey used to collect the summaries. Defaults to GraphKeys.SUMMARIES.
scope Optional scope used to filter the summary ops, using re.match.
Returns If no summaries were collected, returns None. Otherwise returns a scalar Tensor of type string containing the serialized Summary protocol buffer resulting from the merging.
Raises
RuntimeError If called with eager execution enabled.
Eager Compatibility Not compatible with eager execution. To write TensorBoard summaries under eager execution, use tf.contrib.summary instead.
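A minimal graph-mode sketch of typical usage (the scalar summary and the log directory are arbitrary examples):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=())
tf.summary.scalar('x', x)  # registers a summary op in GraphKeys.SUMMARIES
merged = tf.summary.merge_all()  # a single op serializing every collected summary
with tf.Session() as sess:
    writer = tf.summary.FileWriter('/tmp/logs', sess.graph)
    buf = sess.run(merged, feed_dict={x: 1.0})  # serialized Summary protocol buffer
    writer.add_summary(buf, global_step=0)
    writer.close()
Running the merged op yields the serialized Summary protocol buffer that the FileWriter records for TensorBoard. | |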
doc_2489 | Represents options for dataset optimizations. See Migration guide for more details. Compat alias: tf.compat.v1.data.experimental.OptimizationOptions
tf.data.experimental.OptimizationOptions()
You can set the optimization options of a dataset through the experimental_optimization property of tf.data.Options; the property is an instance of tf.data.experimental.OptimizationOptions. options = tf.data.Options()
options.experimental_optimization.noop_elimination = True
options.experimental_optimization.map_vectorization.enabled = True
options.experimental_optimization.apply_default_optimizations = False
dataset = dataset.with_options(options)
Attributes
apply_default_optimizations Whether to apply default graph optimizations. If False, only graph optimizations that have been explicitly enabled will be applied.
autotune Whether to automatically tune performance knobs. If None, defaults to True.
autotune_buffers When autotuning is enabled (through autotune), determines whether to also autotune buffer sizes for datasets with parallelism. If None, defaults to False.
autotune_cpu_budget When autotuning is enabled (through autotune), determines the CPU budget to use. Values greater than the number of schedulable CPU cores are allowed but may result in CPU contention. If None, defaults to the number of schedulable CPU cores.
autotune_ram_budget When autotuning is enabled (through autotune), determines the RAM budget to use. Values greater than the available RAM in bytes may result in OOM. If None, defaults to half of the available RAM in bytes.
filter_fusion Whether to fuse filter transformations. If None, defaults to False.
filter_with_random_uniform_fusion Whether to fuse filter dataset that predicts random_uniform < rate into a sampling dataset. If None, defaults to False.
hoist_random_uniform Whether to hoist tf.random_uniform() ops out of map transformations. If None, defaults to False.
map_and_batch_fusion Whether to fuse map and batch transformations. If None, defaults to True.
map_and_filter_fusion Whether to fuse map and filter transformations. If None, defaults to False.
map_fusion Whether to fuse map transformations. If None, defaults to False.
map_parallelization Whether to parallelize stateless map transformations. If None, defaults to False.
map_vectorization The map vectorization options associated with the dataset. See tf.data.experimental.MapVectorizationOptions for more details.
noop_elimination Whether to eliminate no-op transformations. If None, defaults to True.
parallel_batch Whether to parallelize copying of batch elements. This optimization is highly experimental and can cause performance degradation (e.g. when the parallelization overhead exceeds the benefits of performing the data copies in parallel). You should only enable this optimization if a) your input pipeline is bottlenecked on batching and b) you have validated that this optimization improves performance. If None, defaults to False.
reorder_data_discarding_ops Whether to reorder ops that will discard data to the front of unary cardinality preserving transformations, e.g. dataset.map(...).take(3) will be optimized to dataset.take(3).map(...). For now this optimization will move skip, shard and take to the front of map and prefetch. This optimization is only for performance; it will not affect the output of the dataset. If None, defaults to True.
shuffle_and_repeat_fusion Whether to fuse shuffle and repeat transformations. If None, defaults to True. Methods __eq__ View source
__eq__(other)
Return self==value. __ne__ View source
__ne__(other)
Return self!=value. | |
doc_2490 | Push a KEY_MOUSE event onto the input queue, associating the given state data with it.
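A minimal sketch of the call (assumes curses has been initialized and mouse reporting enabled with curses.mousemask; the id, coordinates, and button state are arbitrary examples):
>>> import curses
>>> curses.ungetmouse(0, 10, 5, 0, curses.BUTTON1_CLICKED)
A subsequent getch() will then report curses.KEY_MOUSE, and curses.getmouse() will return the tuple pushed above. | |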
doc_2491 | See torch.jit.save for details. | |
doc_2492 |
Return a string representation of data. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets, unless the artist has an associated colorbar, in which case scalar values are formatted using the colorbar's formatter. See also get_cursor_data
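As a sketch, an artist subclass might override it like this (the class and the unit label are hypothetical):
import matplotlib.image as mimage

class LabeledImage(mimage.AxesImage):
    """AxesImage whose cursor readout carries a custom unit label."""
    def format_cursor_data(self, data):
        # 'data' is the pixel value get_cursor_data returned for this artist
        return f"[{data:.2f} ms]"  # hypothetical unit
Matplotlib then shows the returned string in the toolbar when the cursor hovers over the artist. | |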
doc_2493 | Spatial reference objects are initialized on the given srs_input, which may be one of the following: OGC Well Known Text (WKT) (a string) EPSG code (integer or string) PROJ string A shorthand string for well-known standards ('WGS84', 'WGS72', 'NAD27', 'NAD83') Example: >>> wgs84 = SpatialReference('WGS84') # shorthand string
>>> wgs84 = SpatialReference(4326) # EPSG code
>>> wgs84 = SpatialReference('EPSG:4326') # EPSG string
>>> proj = '+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs '
>>> wgs84 = SpatialReference(proj) # PROJ string
>>> wgs84 = SpatialReference("""GEOGCS["WGS 84",
DATUM["WGS_1984",
SPHEROID["WGS 84",6378137,298.257223563,
AUTHORITY["EPSG","7030"]],
AUTHORITY["EPSG","6326"]],
PRIMEM["Greenwich",0,
AUTHORITY["EPSG","8901"]],
UNIT["degree",0.01745329251994328,
AUTHORITY["EPSG","9122"]],
AUTHORITY["EPSG","4326"]]""") # OGC WKT
__getitem__(target)
Returns the value of the given string attribute node, None if the node doesn’t exist. Can also take a tuple as a parameter, (target, child), where child is the index of the attribute in the WKT. For example: >>> wkt = 'GEOGCS["WGS 84", DATUM["WGS_1984, ... AUTHORITY["EPSG","4326"]]')
>>> srs = SpatialReference(wkt) # could also use 'WGS84', or 4326
>>> print(srs['GEOGCS'])
WGS 84
>>> print(srs['DATUM'])
WGS_1984
>>> print(srs['AUTHORITY'])
EPSG
>>> print(srs['AUTHORITY', 1]) # The authority value
4326
>>> print(srs['TOWGS84', 4]) # the fourth value in this wkt
0
>>> print(srs['UNIT|AUTHORITY']) # For the units authority, you have to use the pipe symbol.
EPSG
>>> print(srs['UNIT|AUTHORITY', 1]) # The authority value for the units
9122
attr_value(target, index=0)
The attribute value for the given target node (e.g. 'PROJCS'). The index keyword specifies an index of the child node to return.
auth_name(target)
Returns the authority name for the given string target node.
auth_code(target)
Returns the authority code for the given string target node.
clone()
Returns a clone of this spatial reference object.
identify_epsg()
This method inspects the WKT of this SpatialReference and will add EPSG authority nodes where an EPSG identifier is applicable.
from_esri()
Morphs this SpatialReference from ESRI’s format to EPSG.
to_esri()
Morphs this SpatialReference to ESRI’s format.
validate()
Checks whether this spatial reference is valid; if not, an exception is raised.
import_epsg(epsg)
Import spatial reference from EPSG code.
import_proj(proj)
Import spatial reference from PROJ string.
import_user_input(user_input)
Import spatial reference from the given user input (e.g. WKT, a PROJ string, or an 'EPSG:4326'-style code).
import_wkt(wkt)
Import spatial reference from WKT.
import_xml(xml)
Import spatial reference from XML.
name
Returns the name of this Spatial Reference.
srid
Returns the SRID of top-level authority, or None if undefined.
linear_name
Returns the name of the linear units.
linear_units
Returns the value of the linear units.
angular_name
Returns the name of the angular units.
angular_units
Returns the value of the angular units.
units
Returns a 2-tuple of the units value and the units name, automatically determining whether to return the linear or angular units.
ellipsoid
Returns a tuple of the ellipsoid parameters for this spatial reference: (semimajor axis, semiminor axis, and inverse flattening).
semi_major
Returns the semi major axis of the ellipsoid for this spatial reference.
semi_minor
Returns the semi minor axis of the ellipsoid for this spatial reference.
inverse_flattening
Returns the inverse flattening of the ellipsoid for this spatial reference.
geographic
Returns True if this spatial reference is geographic (root node is GEOGCS).
local
Returns True if this spatial reference is local (root node is LOCAL_CS).
projected
Returns True if this spatial reference is a projected coordinate system (root node is PROJCS).
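An illustrative session using these properties (output shown for the WGS84 shorthand; exact floating-point formatting may vary with the GDAL version):
>>> srs = SpatialReference('WGS84')
>>> srs.srid
4326
>>> srs.geographic, srs.projected
(True, False)
>>> srs.units
(0.0174532925199433, 'degree')
>>> srs.ellipsoid
(6378137.0, 6356752.31424518, 298.257223563)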
wkt
Returns the WKT representation of this spatial reference.
pretty_wkt
Returns the ‘pretty’ representation of the WKT.
proj
Returns the PROJ representation for this spatial reference.
proj4
Alias for SpatialReference.proj.
xml
Returns the XML representation of this spatial reference. | |
doc_2494 | This class allows you to send requests to a wrapped application. The use_cookies parameter indicates whether cookies should be stored and sent for subsequent requests. This is True by default, but passing False will disable this behaviour. If you want to request a subdomain of your application, set allow_subdomain_redirects to True; otherwise no external redirects are allowed. Changed in version 2.0: response_wrapper is always a subclass of TestResponse. Changelog Changed in version 0.5: Added the use_cookies parameter. Parameters
application (WSGIApplication) –
response_wrapper (Optional[Type[Response]]) –
use_cookies (bool) –
allow_subdomain_redirects (bool) – Return type
None
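A minimal illustrative round trip against a trivial WSGI application (get() below is a shortcut for open() with the method set to GET):
>>> from werkzeug.test import Client
>>> from werkzeug.wrappers import Request, Response
>>> @Request.application
... def app(request):
...     return Response('Hello')
>>> client = Client(app)
>>> response = client.get('/')
>>> response.status
'200 OK'
>>> response.get_data(as_text=True)
'Hello'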
set_cookie(server_name, key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False, samesite=None, charset='utf-8')
Sets a cookie in the client’s cookie jar. The server name is required and has to match the one that is also passed to the open call. Parameters
server_name (str) –
key (str) –
value (str) –
max_age (Optional[Union[datetime.timedelta, int]]) –
expires (Optional[Union[str, datetime.datetime, int, float]]) –
path (str) –
domain (Optional[str]) –
secure (bool) –
httponly (bool) –
samesite (Optional[str]) –
charset (str) – Return type
None
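For example, continuing the client created above (the server name and values are illustrative; the server name must match the one later passed to open()):
>>> client.set_cookie('localhost', 'session_id', 'abc123', max_age=3600)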
delete_cookie(server_name, key, path='/', domain=None, secure=False, httponly=False, samesite=None)
Deletes a cookie in the test client. Parameters
server_name (str) –
key (str) –
path (str) –
domain (Optional[str]) –
secure (bool) –
httponly (bool) –
samesite (Optional[str]) – Return type
None
open(*args, as_tuple=False, buffered=False, follow_redirects=False, **kwargs)
Generate an environ dict from the given arguments, make a request to the application using it, and return the response. Parameters
args (Any) – Passed to EnvironBuilder to create the environ for the request. If a single arg is passed, it can be an existing EnvironBuilder or an environ dict.
buffered (bool) – Convert the iterator returned by the app into a list. If the iterator has a close() method, it is called automatically.
follow_redirects (bool) – Make additional requests to follow HTTP redirects until a non-redirect status is returned. TestResponse.history lists the intermediate responses.
as_tuple (bool) –
kwargs (Any) – Return type
werkzeug.test.TestResponse Changed in version 2.0: as_tuple is deprecated and will be removed in Werkzeug 2.1. Use TestResponse.request and request.environ instead. Changed in version 2.0: The request input stream is closed when calling response.close(). Input streams for redirects are automatically closed. Changelog Changed in version 0.5: If a dict is provided as file in the dict for the data parameter the content type has to be called content_type instead of mimetype. This change was made for consistency with werkzeug.FileWrapper. Changed in version 0.5: Added the follow_redirects parameter.
get(*args, **kw)
Call open() with method set to GET. Parameters
args (Any) –
kw (Any) – Return type
werkzeug.test.TestResponse
post(*args, **kw)
Call open() with method set to POST. Parameters
args (Any) –
kw (Any) – Return type
werkzeug.test.TestResponse
put(*args, **kw)
Call open() with method set to PUT. Parameters
args (Any) –
kw (Any) – Return type
werkzeug.test.TestResponse
delete(*args, **kw)
Call open() with method set to DELETE. Parameters
args (Any) –
kw (Any) – Return type
werkzeug.test.TestResponse
patch(*args, **kw)
Call open() with method set to PATCH. Parameters
args (Any) –
kw (Any) – Return type
werkzeug.test.TestResponse
options(*args, **kw)
Call open() with method set to OPTIONS. Parameters
args (Any) –
kw (Any) – Return type
werkzeug.test.TestResponse
head(*args, **kw)
Call open() with method set to HEAD. Parameters
args (Any) –
kw (Any) – Return type
werkzeug.test.TestResponse
trace(*args, **kw)
Call open() with method set to TRACE. Parameters
args (Any) –
kw (Any) – Return type
werkzeug.test.TestResponse | |
doc_2495 |
Return a bounding box that encloses the axis. It only accounts for tick labels, the axis label, and the offsetText. If for_layout_only is True, then the width of the label (if this is an x-axis) or the height of the label (if this is a y-axis) is collapsed to near zero. This allows tight/constrained_layout to ignore too-long labels when doing their layout.
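A brief sketch (assumes the Agg backend, whose canvas exposes get_renderer):
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> renderer = fig.canvas.get_renderer()
>>> bbox = ax.xaxis.get_tightbbox(renderer)
>>> bbox_layout = ax.xaxis.get_tightbbox(renderer, for_layout_only=True)
Layout engines use the for_layout_only variant so that over-long labels do not distort the layout. | |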
doc_2496 | Returns whether the path is a directory or not. tf.compat.v1.gfile.IsDirectory(dirname)
Args
dirname string, path to a potential directory
Returns True if the path is a directory; False otherwise.
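An illustrative check against local paths (the paths are arbitrary examples):
>>> import tensorflow.compat.v1 as tf
>>> tf.gfile.IsDirectory('/tmp')
True
>>> tf.gfile.IsDirectory('/tmp/some_file.txt')
False
The same call works for any filesystem registered with TensorFlow, e.g. Google Cloud Storage paths. | |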
doc_2497 | Absolute path to the directory that will hold the files. Defaults to the value of your MEDIA_ROOT setting.
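For illustration, the location can also be set per FileSystemStorage instance (the path is a made-up example):
>>> from django.core.files.storage import FileSystemStorage
>>> fs = FileSystemStorage(location='/media/photos')
>>> fs.location
'/media/photos'
Omitting location falls back to the MEDIA_ROOT setting. | |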
doc_2498 |
Validate a hatch pattern. A hatch pattern string can have any sequence of the following characters: \ / | - + * . x o O.
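Such patterns are accepted wherever Matplotlib takes a hatch argument; a small illustrative example:
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> bars = ax.bar([1, 2, 3], [3, 1, 2], hatch='//')
Repeating a character (e.g. '//' instead of '/') increases the hatching density. | |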
doc_2499 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
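An illustrative use of the nested syntax with a Pipeline:
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import SVC
>>> pipe = Pipeline([('scaler', StandardScaler()), ('svc', SVC())])
>>> pipe = pipe.set_params(svc__C=10)  # <component>__<parameter> reaches the nested SVC
>>> pipe.get_params()['svc__C']
10
Passing a key that does not exist raises a ValueError listing the valid parameters. | |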