_id | text | title |
|---|---|---|
doc_1200 | See Migration guide for more details. tf.compat.v1.app.flags.FloatParser
tf.compat.v1.flags.FloatParser(
lower_bound=None, upper_bound=None
)
Parsed value may be bounded to a given upper and lower bound. Methods convert
convert(
argument
)
Returns the float value of argument. flag_type
flag_type()
See base class. is_outside_bounds
is_outside_bounds(
val
)
Returns whether the value is outside the bounds or not. parse
parse(
argument
)
See base class.
Class Variables
number_article 'a'
number_name 'number'
syntactic_help 'a number'
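A minimal usage sketch (FloatParser is normally used internally by DEFINE_float; calling it directly as below is purely illustrative): parser = tf.compat.v1.flags.FloatParser(lower_bound=0.0, upper_bound=1.0)
parser.parse('0.5')            # 0.5
parser.is_outside_bounds(1.5)  # True
parser.flag_type()             # 'float' | |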
doc_1201 |
Returns the score on the given data, if the estimator has been refit. This uses the score defined by scoring where provided, and the best_estimator_.score method otherwise. Parameters
Xarray-like of shape (n_samples, n_features)
Input data, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples, n_output) or (n_samples,), default=None
Target relative to X for classification or regression; None for unsupervised learning. Returns
scorefloat
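A minimal sketch, assuming this is the score method of a fitted search object such as GridSearchCV: >>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import SVC
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> search = GridSearchCV(SVC(), {'C': [0.1, 1.0]}).fit(X, y)
>>> score = search.score(X, y)  # best_estimator_.score unless scoring was given | |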
doc_1202 |
Return whether the Artist has an explicitly set transform. This is True after set_transform has been called. | |
doc_1203 |
Warning used to notify implicit data conversions happening in the code. This warning occurs when some input data needs to be converted or interpreted in a way that may not match the user’s expectations. For example, this warning may occur when the user
passes an integer array to a function which expects float input and will convert the input; requests a non-copying operation, but a copy is required to meet the implementation’s data-type expectations; or passes an input whose shape can be interpreted ambiguously. Changed in version 0.18: Moved from sklearn.utils.validation. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
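A minimal sketch of triggering this warning (assuming scikit-learn; a column-vector y where a 1d array is expected is one common cause): >>> import numpy as np
>>> from sklearn.neighbors import KNeighborsClassifier
>>> X = np.array([[0.0], [1.0], [2.0], [3.0]])
>>> y = np.array([[0], [0], [1], [1]])   # shape (4, 1) instead of (4,)
>>> _ = KNeighborsClassifier(n_neighbors=1).fit(X, y)   # emits DataConversionWarning | |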
doc_1204 | See Migration guide for more details. tf.compat.v1.keras.layers.StackedRNNCells
tf.keras.layers.StackedRNNCells(
cells, **kwargs
)
Used to implement efficient stacked RNNs.
Arguments
cells List of RNN cell instances. Examples: import numpy as np
batch_size = 3
sentence_max_length = 5
n_features = 2
new_shape = (batch_size, sentence_max_length, n_features)
x = tf.constant(np.reshape(np.arange(30), new_shape), dtype = tf.float32)
rnn_cells = [tf.keras.layers.LSTMCell(128) for _ in range(2)]
stacked_lstm = tf.keras.layers.StackedRNNCells(rnn_cells)
lstm_layer = tf.keras.layers.RNN(stacked_lstm)
result = lstm_layer(x)
Attributes
output_size
state_size
Methods get_initial_state View source
get_initial_state(
inputs=None, batch_size=None, dtype=None
) | |
doc_1205 | A dictionary mapping from option flags to True or False, which is used to override default options for this example. Any option flags not contained in this dictionary are left at their default value (as specified by the DocTestRunner’s optionflags). By default, no options are set. | |
doc_1206 | Contains information about the current hardware profile of the local computer system. | |
doc_1207 |
Set the (group) id for the artist. Parameters
gidstr | |
doc_1208 | Listing subdirectories: >>> p = Path('.')
>>> [x for x in p.iterdir() if x.is_dir()]
[PosixPath('.hg'), PosixPath('docs'), PosixPath('dist'),
PosixPath('__pycache__'), PosixPath('build')]
Listing Python source files in this directory tree: >>> list(p.glob('**/*.py'))
[PosixPath('test_pathlib.py'), PosixPath('setup.py'),
PosixPath('pathlib.py'), PosixPath('docs/conf.py'),
PosixPath('build/lib/pathlib.py')]
Navigating inside a directory tree: >>> p = Path('/etc')
>>> q = p / 'init.d' / 'reboot'
>>> q
PosixPath('/etc/init.d/reboot')
>>> q.resolve()
PosixPath('/etc/rc.d/init.d/halt')
Querying path properties: >>> q.exists()
True
>>> q.is_dir()
False
Opening a file: >>> with q.open() as f: f.readline()
...
'#!/bin/bash\n'
Pure paths Pure path objects provide path-handling operations which don’t actually access a filesystem. There are three ways to access these classes, which we also call flavours:
class pathlib.PurePath(*pathsegments)
A generic class that represents the system’s path flavour (instantiating it creates either a PurePosixPath or a PureWindowsPath): >>> PurePath('setup.py') # Running on a Unix machine
PurePosixPath('setup.py')
Each element of pathsegments can be either a string representing a path segment, an object implementing the os.PathLike interface which returns a string, or another path object: >>> PurePath('foo', 'some/path', 'bar')
PurePosixPath('foo/some/path/bar')
>>> PurePath(Path('foo'), Path('bar'))
PurePosixPath('foo/bar')
When pathsegments is empty, the current directory is assumed: >>> PurePath()
PurePosixPath('.')
When several absolute paths are given, the last is taken as an anchor (mimicking os.path.join()’s behaviour): >>> PurePath('/etc', '/usr', 'lib64')
PurePosixPath('/usr/lib64')
>>> PureWindowsPath('c:/Windows', 'd:bar')
PureWindowsPath('d:bar')
However, in a Windows path, changing the local root doesn’t discard the previous drive setting: >>> PureWindowsPath('c:/Windows', '/Program Files')
PureWindowsPath('c:/Program Files')
Spurious slashes and single dots are collapsed, but double dots ('..') are not, since this would change the meaning of a path in the face of symbolic links: >>> PurePath('foo//bar')
PurePosixPath('foo/bar')
>>> PurePath('foo/./bar')
PurePosixPath('foo/bar')
>>> PurePath('foo/../bar')
PurePosixPath('foo/../bar')
(a naïve approach would make PurePosixPath('foo/../bar') equivalent to PurePosixPath('bar'), which is wrong if foo is a symbolic link to another directory) Pure path objects implement the os.PathLike interface, allowing them to be used anywhere the interface is accepted. Changed in version 3.6: Added support for the os.PathLike interface.
class pathlib.PurePosixPath(*pathsegments)
A subclass of PurePath, this path flavour represents non-Windows filesystem paths: >>> PurePosixPath('/etc')
PurePosixPath('/etc')
pathsegments is specified similarly to PurePath.
class pathlib.PureWindowsPath(*pathsegments)
A subclass of PurePath, this path flavour represents Windows filesystem paths: >>> PureWindowsPath('c:/Program Files/')
PureWindowsPath('c:/Program Files')
pathsegments is specified similarly to PurePath.
Regardless of the system you’re running on, you can instantiate all of these classes, since they don’t provide any operation that does system calls. General properties Paths are immutable and hashable. Paths of a same flavour are comparable and orderable. These properties respect the flavour’s case-folding semantics: >>> PurePosixPath('foo') == PurePosixPath('FOO')
False
>>> PureWindowsPath('foo') == PureWindowsPath('FOO')
True
>>> PureWindowsPath('FOO') in { PureWindowsPath('foo') }
True
>>> PureWindowsPath('C:') < PureWindowsPath('d:')
True
Paths of a different flavour compare unequal and cannot be ordered: >>> PureWindowsPath('foo') == PurePosixPath('foo')
False
>>> PureWindowsPath('foo') < PurePosixPath('foo')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'PureWindowsPath' and 'PurePosixPath'
Operators The slash operator helps create child paths, similarly to os.path.join(): >>> p = PurePath('/etc')
>>> p
PurePosixPath('/etc')
>>> p / 'init.d' / 'apache2'
PurePosixPath('/etc/init.d/apache2')
>>> q = PurePath('bin')
>>> '/usr' / q
PurePosixPath('/usr/bin')
A path object can be used anywhere an object implementing os.PathLike is accepted: >>> import os
>>> p = PurePath('/etc')
>>> os.fspath(p)
'/etc'
The string representation of a path is the raw filesystem path itself (in native form, e.g. with backslashes under Windows), which you can pass to any function taking a file path as a string: >>> p = PurePath('/etc')
>>> str(p)
'/etc'
>>> p = PureWindowsPath('c:/Program Files')
>>> str(p)
'c:\\Program Files'
Similarly, calling bytes on a path gives the raw filesystem path as a bytes object, as encoded by os.fsencode(): >>> bytes(p)
b'/etc'
Note Calling bytes is only recommended under Unix. Under Windows, the unicode form is the canonical representation of filesystem paths. Accessing individual parts To access the individual “parts” (components) of a path, use the following property:
PurePath.parts
A tuple giving access to the path’s various components: >>> p = PurePath('/usr/bin/python3')
>>> p.parts
('/', 'usr', 'bin', 'python3')
>>> p = PureWindowsPath('c:/Program Files/PSF')
>>> p.parts
('c:\\', 'Program Files', 'PSF')
(note how the drive and local root are regrouped in a single part)
Methods and properties Pure paths provide the following methods and properties:
PurePath.drive
A string representing the drive letter or name, if any: >>> PureWindowsPath('c:/Program Files/').drive
'c:'
>>> PureWindowsPath('/Program Files/').drive
''
>>> PurePosixPath('/etc').drive
''
UNC shares are also considered drives: >>> PureWindowsPath('//host/share/foo.txt').drive
'\\\\host\\share'
PurePath.root
A string representing the (local or global) root, if any: >>> PureWindowsPath('c:/Program Files/').root
'\\'
>>> PureWindowsPath('c:Program Files/').root
''
>>> PurePosixPath('/etc').root
'/'
UNC shares always have a root: >>> PureWindowsPath('//host/share').root
'\\'
PurePath.anchor
The concatenation of the drive and root: >>> PureWindowsPath('c:/Program Files/').anchor
'c:\\'
>>> PureWindowsPath('c:Program Files/').anchor
'c:'
>>> PurePosixPath('/etc').anchor
'/'
>>> PureWindowsPath('//host/share').anchor
'\\\\host\\share\\'
PurePath.parents
An immutable sequence providing access to the logical ancestors of the path: >>> p = PureWindowsPath('c:/foo/bar/setup.py')
>>> p.parents[0]
PureWindowsPath('c:/foo/bar')
>>> p.parents[1]
PureWindowsPath('c:/foo')
>>> p.parents[2]
PureWindowsPath('c:/')
PurePath.parent
The logical parent of the path: >>> p = PurePosixPath('/a/b/c/d')
>>> p.parent
PurePosixPath('/a/b/c')
You cannot go past an anchor, or empty path: >>> p = PurePosixPath('/')
>>> p.parent
PurePosixPath('/')
>>> p = PurePosixPath('.')
>>> p.parent
PurePosixPath('.')
Note This is a purely lexical operation, hence the following behaviour: >>> p = PurePosixPath('foo/..')
>>> p.parent
PurePosixPath('foo')
If you want to walk an arbitrary filesystem path upwards, it is recommended to first call Path.resolve() so as to resolve symlinks and eliminate “..” components.
PurePath.name
A string representing the final path component, excluding the drive and root, if any: >>> PurePosixPath('my/library/setup.py').name
'setup.py'
UNC drive names are not considered: >>> PureWindowsPath('//some/share/setup.py').name
'setup.py'
>>> PureWindowsPath('//some/share').name
''
PurePath.suffix
The file extension of the final component, if any: >>> PurePosixPath('my/library/setup.py').suffix
'.py'
>>> PurePosixPath('my/library.tar.gz').suffix
'.gz'
>>> PurePosixPath('my/library').suffix
''
PurePath.suffixes
A list of the path’s file extensions: >>> PurePosixPath('my/library.tar.gar').suffixes
['.tar', '.gar']
>>> PurePosixPath('my/library.tar.gz').suffixes
['.tar', '.gz']
>>> PurePosixPath('my/library').suffixes
[]
PurePath.stem
The final path component, without its suffix: >>> PurePosixPath('my/library.tar.gz').stem
'library.tar'
>>> PurePosixPath('my/library.tar').stem
'library'
>>> PurePosixPath('my/library').stem
'library'
PurePath.as_posix()
Return a string representation of the path with forward slashes (/): >>> p = PureWindowsPath('c:\\windows')
>>> str(p)
'c:\\windows'
>>> p.as_posix()
'c:/windows'
PurePath.as_uri()
Represent the path as a file URI. ValueError is raised if the path isn’t absolute. >>> p = PurePosixPath('/etc/passwd')
>>> p.as_uri()
'file:///etc/passwd'
>>> p = PureWindowsPath('c:/Windows')
>>> p.as_uri()
'file:///c:/Windows'
PurePath.is_absolute()
Return whether the path is absolute or not. A path is considered absolute if it has both a root and (if the flavour allows) a drive: >>> PurePosixPath('/a/b').is_absolute()
True
>>> PurePosixPath('a/b').is_absolute()
False
>>> PureWindowsPath('c:/a/b').is_absolute()
True
>>> PureWindowsPath('/a/b').is_absolute()
False
>>> PureWindowsPath('c:').is_absolute()
False
>>> PureWindowsPath('//some/share').is_absolute()
True
PurePath.is_relative_to(*other)
Return whether or not this path is relative to the other path. >>> p = PurePath('/etc/passwd')
>>> p.is_relative_to('/etc')
True
>>> p.is_relative_to('/usr')
False
New in version 3.9.
PurePath.is_reserved()
With PureWindowsPath, return True if the path is considered reserved under Windows, False otherwise. With PurePosixPath, False is always returned. >>> PureWindowsPath('nul').is_reserved()
True
>>> PurePosixPath('nul').is_reserved()
False
File system calls on reserved paths can fail mysteriously or have unintended effects.
PurePath.joinpath(*other)
Calling this method is equivalent to combining the path with each of the other arguments in turn: >>> PurePosixPath('/etc').joinpath('passwd')
PurePosixPath('/etc/passwd')
>>> PurePosixPath('/etc').joinpath(PurePosixPath('passwd'))
PurePosixPath('/etc/passwd')
>>> PurePosixPath('/etc').joinpath('init.d', 'apache2')
PurePosixPath('/etc/init.d/apache2')
>>> PureWindowsPath('c:').joinpath('/Program Files')
PureWindowsPath('c:/Program Files')
PurePath.match(pattern)
Match this path against the provided glob-style pattern. Return True if matching is successful, False otherwise. If pattern is relative, the path can be either relative or absolute, and matching is done from the right: >>> PurePath('a/b.py').match('*.py')
True
>>> PurePath('/a/b/c.py').match('b/*.py')
True
>>> PurePath('/a/b/c.py').match('a/*.py')
False
If pattern is absolute, the path must be absolute, and the whole path must match: >>> PurePath('/a.py').match('/*.py')
True
>>> PurePath('a/b.py').match('/*.py')
False
As with other methods, case-sensitivity follows platform defaults: >>> PurePosixPath('b.py').match('*.PY')
False
>>> PureWindowsPath('b.py').match('*.PY')
True
PurePath.relative_to(*other)
Compute a version of this path relative to the path represented by other. If it’s impossible, ValueError is raised: >>> p = PurePosixPath('/etc/passwd')
>>> p.relative_to('/')
PurePosixPath('etc/passwd')
>>> p.relative_to('/etc')
PurePosixPath('passwd')
>>> p.relative_to('/usr')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pathlib.py", line 694, in relative_to
.format(str(self), str(formatted)))
ValueError: '/etc/passwd' is not in the subpath of '/usr' OR one path is relative and the other absolute.
NOTE: This function is part of PurePath and works with strings. It does not check or access the underlying file structure.
PurePath.with_name(name)
Return a new path with the name changed. If the original path doesn’t have a name, ValueError is raised: >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.with_name('setup.py')
PureWindowsPath('c:/Downloads/setup.py')
>>> p = PureWindowsPath('c:/')
>>> p.with_name('setup.py')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/antoine/cpython/default/Lib/pathlib.py", line 751, in with_name
raise ValueError("%r has an empty name" % (self,))
ValueError: PureWindowsPath('c:/') has an empty name
PurePath.with_stem(stem)
Return a new path with the stem changed. If the original path doesn’t have a name, ValueError is raised: >>> p = PureWindowsPath('c:/Downloads/draft.txt')
>>> p.with_stem('final')
PureWindowsPath('c:/Downloads/final.txt')
>>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.with_stem('lib')
PureWindowsPath('c:/Downloads/lib.gz')
>>> p = PureWindowsPath('c:/')
>>> p.with_stem('')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/antoine/cpython/default/Lib/pathlib.py", line 861, in with_stem
return self.with_name(stem + self.suffix)
File "/home/antoine/cpython/default/Lib/pathlib.py", line 851, in with_name
raise ValueError("%r has an empty name" % (self,))
ValueError: PureWindowsPath('c:/') has an empty name
New in version 3.9.
PurePath.with_suffix(suffix)
Return a new path with the suffix changed. If the original path doesn’t have a suffix, the new suffix is appended instead. If the suffix is an empty string, the original suffix is removed: >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.with_suffix('.bz2')
PureWindowsPath('c:/Downloads/pathlib.tar.bz2')
>>> p = PureWindowsPath('README')
>>> p.with_suffix('.txt')
PureWindowsPath('README.txt')
>>> p = PureWindowsPath('README.txt')
>>> p.with_suffix('')
PureWindowsPath('README')
Concrete paths Concrete paths are subclasses of the pure path classes. In addition to operations provided by the latter, they also provide methods to do system calls on path objects. There are three ways to instantiate concrete paths:
class pathlib.Path(*pathsegments)
A subclass of PurePath, this class represents concrete paths of the system’s path flavour (instantiating it creates either a PosixPath or a WindowsPath): >>> Path('setup.py')
PosixPath('setup.py')
pathsegments is specified similarly to PurePath.
class pathlib.PosixPath(*pathsegments)
A subclass of Path and PurePosixPath, this class represents concrete non-Windows filesystem paths: >>> PosixPath('/etc')
PosixPath('/etc')
pathsegments is specified similarly to PurePath.
class pathlib.WindowsPath(*pathsegments)
A subclass of Path and PureWindowsPath, this class represents concrete Windows filesystem paths: >>> WindowsPath('c:/Program Files/')
WindowsPath('c:/Program Files')
pathsegments is specified similarly to PurePath.
You can only instantiate the class flavour that corresponds to your system (allowing system calls on non-compatible path flavours could lead to bugs or failures in your application): >>> import os
>>> os.name
'posix'
>>> Path('setup.py')
PosixPath('setup.py')
>>> PosixPath('setup.py')
PosixPath('setup.py')
>>> WindowsPath('setup.py')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pathlib.py", line 798, in __new__
% (cls.__name__,))
NotImplementedError: cannot instantiate 'WindowsPath' on your system
Methods Concrete paths provide the following methods in addition to pure paths methods. Many of these methods can raise an OSError if a system call fails (for example because the path doesn’t exist). Changed in version 3.8: exists(), is_dir(), is_file(), is_mount(), is_symlink(), is_block_device(), is_char_device(), is_fifo(), is_socket() now return False instead of raising an exception for paths that contain characters unrepresentable at the OS level.
classmethod Path.cwd()
Return a new path object representing the current directory (as returned by os.getcwd()): >>> Path.cwd()
PosixPath('/home/antoine/pathlib')
classmethod Path.home()
Return a new path object representing the user’s home directory (as returned by os.path.expanduser() with ~ construct): >>> Path.home()
PosixPath('/home/antoine')
New in version 3.5.
Path.stat()
Return a os.stat_result object containing information about this path, like os.stat(). The result is looked up at each call to this method. >>> p = Path('setup.py')
>>> p.stat().st_size
956
>>> p.stat().st_mtime
1327883547.852554
Path.chmod(mode)
Change the file mode and permissions, like os.chmod(): >>> p = Path('setup.py')
>>> p.stat().st_mode
33277
>>> p.chmod(0o444)
>>> p.stat().st_mode
33060
Path.exists()
Whether the path points to an existing file or directory: >>> Path('.').exists()
True
>>> Path('setup.py').exists()
True
>>> Path('/etc').exists()
True
>>> Path('nonexistentfile').exists()
False
Note If the path points to a symlink, exists() returns whether the symlink points to an existing file or directory.
Path.expanduser()
Return a new path with expanded ~ and ~user constructs, as returned by os.path.expanduser(): >>> p = PosixPath('~/films/Monty Python')
>>> p.expanduser()
PosixPath('/home/eric/films/Monty Python')
New in version 3.5.
Path.glob(pattern)
Glob the given relative pattern in the directory represented by this path, yielding all matching files (of any kind): >>> sorted(Path('.').glob('*.py'))
[PosixPath('pathlib.py'), PosixPath('setup.py'), PosixPath('test_pathlib.py')]
>>> sorted(Path('.').glob('*/*.py'))
[PosixPath('docs/conf.py')]
The “**” pattern means “this directory and all subdirectories, recursively”. In other words, it enables recursive globbing: >>> sorted(Path('.').glob('**/*.py'))
[PosixPath('build/lib/pathlib.py'),
PosixPath('docs/conf.py'),
PosixPath('pathlib.py'),
PosixPath('setup.py'),
PosixPath('test_pathlib.py')]
Note Using the “**” pattern in large directory trees may consume an inordinate amount of time. Raises an auditing event pathlib.Path.glob with arguments self, pattern.
Path.group()
Return the name of the group owning the file. KeyError is raised if the file’s gid isn’t found in the system database.
Path.is_dir()
Return True if the path points to a directory (or a symbolic link pointing to a directory), False if it points to another kind of file. False is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated.
Path.is_file()
Return True if the path points to a regular file (or a symbolic link pointing to a regular file), False if it points to another kind of file. False is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated.
Path.is_mount()
Return True if the path is a mount point: a point in a file system where a different file system has been mounted. On POSIX, the function checks whether path’s parent, path/.., is on a different device than path, or whether path/.. and path point to the same i-node on the same device — this should detect mount points for all Unix and POSIX variants. Not implemented on Windows. New in version 3.7.
Path.is_symlink()
Return True if the path points to a symbolic link, False otherwise. False is also returned if the path doesn’t exist; other errors (such as permission errors) are propagated.
Path.is_socket()
Return True if the path points to a Unix socket (or a symbolic link pointing to a Unix socket), False if it points to another kind of file. False is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated.
Path.is_fifo()
Return True if the path points to a FIFO (or a symbolic link pointing to a FIFO), False if it points to another kind of file. False is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated.
Path.is_block_device()
Return True if the path points to a block device (or a symbolic link pointing to a block device), False if it points to another kind of file. False is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated.
Path.is_char_device()
Return True if the path points to a character device (or a symbolic link pointing to a character device), False if it points to another kind of file. False is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated.
Path.iterdir()
When the path points to a directory, yield path objects of the directory contents: >>> p = Path('docs')
>>> for child in p.iterdir(): child
...
PosixPath('docs/conf.py')
PosixPath('docs/_templates')
PosixPath('docs/make.bat')
PosixPath('docs/index.rst')
PosixPath('docs/_build')
PosixPath('docs/_static')
PosixPath('docs/Makefile')
The children are yielded in arbitrary order, and the special entries '.' and '..' are not included. If a file is removed from or added to the directory after creating the iterator, whether a path object for that file will be included is unspecified.
Path.lchmod(mode)
Like Path.chmod() but, if the path points to a symbolic link, the symbolic link’s mode is changed rather than its target’s.
Path.lstat()
Like Path.stat() but, if the path points to a symbolic link, return the symbolic link’s information rather than its target’s.
Path.mkdir(mode=0o777, parents=False, exist_ok=False)
Create a new directory at this given path. If mode is given, it is combined with the process’ umask value to determine the file mode and access flags. If the path already exists, FileExistsError is raised. If parents is true, any missing parents of this path are created as needed; they are created with the default permissions without taking mode into account (mimicking the POSIX mkdir -p command). If parents is false (the default), a missing parent raises FileNotFoundError. If exist_ok is false (the default), FileExistsError is raised if the target directory already exists. If exist_ok is true, FileExistsError exceptions will be ignored (same behavior as the POSIX mkdir -p command), but only if the last path component is not an existing non-directory file. Changed in version 3.5: The exist_ok parameter was added.
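A minimal sketch of the parents/exist_ok interplay (the path names are hypothetical): >>> p = Path('build/out/logs')
>>> p.mkdir(parents=True, exist_ok=True)   # creates build/ and build/out/ as needed
>>> p.mkdir()
Traceback (most recent call last):
  ...
FileExistsError: [Errno 17] File exists: 'build/out/logs'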
Path.open(mode='r', buffering=-1, encoding=None, errors=None, newline=None)
Open the file pointed to by the path, like the built-in open() function does: >>> p = Path('setup.py')
>>> with p.open() as f:
... f.readline()
...
'#!/usr/bin/env python3\n'
Path.owner()
Return the name of the user owning the file. KeyError is raised if the file’s uid isn’t found in the system database.
Path.read_bytes()
Return the binary contents of the pointed-to file as a bytes object: >>> p = Path('my_binary_file')
>>> p.write_bytes(b'Binary file contents')
20
>>> p.read_bytes()
b'Binary file contents'
New in version 3.5.
Path.read_text(encoding=None, errors=None)
Return the decoded contents of the pointed-to file as a string: >>> p = Path('my_text_file')
>>> p.write_text('Text file contents')
18
>>> p.read_text()
'Text file contents'
The file is opened and then closed. The optional parameters have the same meaning as in open(). New in version 3.5.
Path.readlink()
Return the path to which the symbolic link points (as returned by os.readlink()): >>> p = Path('mylink')
>>> p.symlink_to('setup.py')
>>> p.readlink()
PosixPath('setup.py')
New in version 3.9.
Path.rename(target)
Rename this file or directory to the given target, and return a new Path instance pointing to target. On Unix, if target exists and is a file, it will be replaced silently if the user has permission. target can be either a string or another path object: >>> p = Path('foo')
>>> p.open('w').write('some text')
9
>>> target = Path('bar')
>>> p.rename(target)
PosixPath('bar')
>>> target.open().read()
'some text'
The target path may be absolute or relative. Relative paths are interpreted relative to the current working directory, not the directory of the Path object. Changed in version 3.8: Added return value, return the new Path instance.
Path.replace(target)
Rename this file or directory to the given target, and return a new Path instance pointing to target. If target points to an existing file or directory, it will be unconditionally replaced. The target path may be absolute or relative. Relative paths are interpreted relative to the current working directory, not the directory of the Path object. Changed in version 3.8: Added return value, return the new Path instance.
Path.resolve(strict=False)
Make the path absolute, resolving any symlinks. A new path object is returned: >>> p = Path()
>>> p
PosixPath('.')
>>> p.resolve()
PosixPath('/home/antoine/pathlib')
“..” components are also eliminated (this is the only method to do so): >>> p = Path('docs/../setup.py')
>>> p.resolve()
PosixPath('/home/antoine/pathlib/setup.py')
If the path doesn’t exist and strict is True, FileNotFoundError is raised. If strict is False, the path is resolved as far as possible and any remainder is appended without checking whether it exists. If an infinite loop is encountered along the resolution path, RuntimeError is raised. New in version 3.6: The strict argument (pre-3.6 behavior is strict).
Path.rglob(pattern)
This is like calling Path.glob() with “**/” added in front of the given relative pattern: >>> sorted(Path().rglob("*.py"))
[PosixPath('build/lib/pathlib.py'),
PosixPath('docs/conf.py'),
PosixPath('pathlib.py'),
PosixPath('setup.py'),
PosixPath('test_pathlib.py')]
Raises an auditing event pathlib.Path.rglob with arguments self, pattern.
Path.rmdir()
Remove this directory. The directory must be empty.
Path.samefile(other_path)
Return whether this path points to the same file as other_path, which can be either a Path object, or a string. The semantics are similar to os.path.samefile() and os.path.samestat(). An OSError can be raised if either file cannot be accessed for some reason. >>> p = Path('spam')
>>> q = Path('eggs')
>>> p.samefile(q)
False
>>> p.samefile('spam')
True
New in version 3.5.
Path.symlink_to(target, target_is_directory=False)
Make this path a symbolic link to target. Under Windows, target_is_directory must be true (default False) if the link’s target is a directory. Under POSIX, target_is_directory’s value is ignored. >>> p = Path('mylink')
>>> p.symlink_to('setup.py')
>>> p.resolve()
PosixPath('/home/antoine/pathlib/setup.py')
>>> p.stat().st_size
956
>>> p.lstat().st_size
8
Note The order of arguments (link, target) is the reverse of os.symlink()’s.
Path.link_to(target)
Make target a hard link to this path. Warning This function does not make this path a hard link to target, despite the implication of the function and argument names. The argument order (target, link) is the reverse of Path.symlink_to(), but matches that of os.link(). New in version 3.8.
Path.touch(mode=0o666, exist_ok=True)
Create a file at this given path. If mode is given, it is combined with the process’ umask value to determine the file mode and access flags. If the file already exists, the function succeeds if exist_ok is true (and its modification time is updated to the current time), otherwise FileExistsError is raised.
Path.unlink(missing_ok=False)
Remove this file or symbolic link. If the path points to a directory, use Path.rmdir() instead. If missing_ok is false (the default), FileNotFoundError is raised if the path does not exist. If missing_ok is true, FileNotFoundError exceptions will be ignored (same behavior as the POSIX rm -f command). Changed in version 3.8: The missing_ok parameter was added.
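A minimal sketch combining touch() and unlink() (the file name is hypothetical): >>> p = Path('scratch.txt')
>>> p.touch()
>>> p.exists()
True
>>> p.unlink()
>>> p.unlink(missing_ok=True)   # a second removal is silently ignored (3.8+)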
Path.write_bytes(data)
Open the file pointed to in bytes mode, write data to it, and close the file: >>> p = Path('my_binary_file')
>>> p.write_bytes(b'Binary file contents')
20
>>> p.read_bytes()
b'Binary file contents'
An existing file of the same name is overwritten. New in version 3.5.
Path.write_text(data, encoding=None, errors=None)
Open the file pointed to in text mode, write data to it, and close the file: >>> p = Path('my_text_file')
>>> p.write_text('Text file contents')
18
>>> p.read_text()
'Text file contents'
An existing file of the same name is overwritten. The optional parameters have the same meaning as in open(). New in version 3.5.
Correspondence to tools in the os module Below is a table mapping various os functions to their corresponding PurePath/Path equivalent. Note Although os.path.relpath() and PurePath.relative_to() have some overlapping use-cases, their semantics differ enough to warrant not considering them equivalent.
os and os.path pathlib
os.path.abspath() Path.resolve()
os.chmod() Path.chmod()
os.mkdir() Path.mkdir()
os.makedirs() Path.mkdir()
os.rename() Path.rename()
os.replace() Path.replace()
os.rmdir() Path.rmdir()
os.remove(), os.unlink() Path.unlink()
os.getcwd() Path.cwd()
os.path.exists() Path.exists()
os.path.expanduser() Path.expanduser() and Path.home()
os.listdir() Path.iterdir()
os.path.isdir() Path.is_dir()
os.path.isfile() Path.is_file()
os.path.islink() Path.is_symlink()
os.link() Path.link_to()
os.symlink() Path.symlink_to()
os.readlink() Path.readlink()
os.stat() Path.stat(), Path.owner(), Path.group()
os.path.isabs() PurePath.is_absolute()
os.path.join() PurePath.joinpath()
os.path.basename() PurePath.name
os.path.dirname() PurePath.parent
os.path.samefile() Path.samefile()
os.path.splitext() PurePath.suffix | |
doc_1209 |
Plot a triangulated surface. The (optional) triangulation can be specified in one of two ways; either: plot_trisurf(triangulation, ...)
where triangulation is a Triangulation object, or: plot_trisurf(X, Y, ...)
plot_trisurf(X, Y, triangles, ...)
plot_trisurf(X, Y, triangles=triangles, ...)
in which case a Triangulation object will be created. See Triangulation for an explanation of these possibilities. The remaining arguments are: plot_trisurf(..., Z)
where Z is the array of values to contour, one per point in the triangulation. Parameters
X, Y, Zarray-like
Data values as 1D arrays. color
Color of the surface patches. cmap
A colormap for the surface patches.
normNormalize
An instance of Normalize to map values to colors.
vmin, vmaxfloat, default: None
Minimum and maximum value to map.
shadebool, default: True
Whether to shade the facecolors. Shading is always disabled when cmap is specified.
lightsourceLightSource
The lightsource to use when shade is True. **kwargs
All other arguments are passed on to Poly3DCollection. Examples: (two example figures omitted)
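A minimal sketch of the X, Y, Z calling form with an implicit Triangulation (random data; assumes the 3-D projection is registered, as in recent Matplotlib): import numpy as np
import matplotlib.pyplot as plt
rng = np.random.default_rng(0)
x, y = rng.random(50), rng.random(50)
z = np.sin(np.pi * x) * np.cos(np.pi * y)
ax = plt.figure().add_subplot(projection='3d')
ax.plot_trisurf(x, y, z, cmap='viridis')   # Delaunay triangulation built from x, y
plt.show() | |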
doc_1210 |
Predict regression target at each stage for X. This method allows monitoring (i.e. determine error on testing set) after each stage. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns
ygenerator of ndarray of shape (n_samples,)
The predicted value of the input samples.
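A minimal sketch, assuming a GradientBoostingRegressor-style estimator: >>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(random_state=0)
>>> est = GradientBoostingRegressor(n_estimators=10).fit(X, y)
>>> for y_stage in est.staged_predict(X):
...     pass   # e.g. evaluate an error metric after each stage
>>> y_stage.shape
(100,) | |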
doc_1211 |
Determine if an image is low contrast. Parameters
imagearray-like
The image under test.
fraction_thresholdfloat, optional
The low contrast fraction threshold. An image is considered low-contrast when its range of brightness spans less than this fraction of its data type’s full range. [1]
lower_percentilefloat, optional
Disregard values below this percentile when computing image contrast.
upper_percentilefloat, optional
Disregard values above this percentile when computing image contrast.
methodstr, optional
The contrast determination method. Right now the only available option is “linear”. Returns
outbool
True when the image is determined to be low contrast. References
1
https://scikit-image.org/docs/dev/user_guide/data_types.html Examples >>> image = np.linspace(0, 0.04, 100)
>>> is_low_contrast(image)
True
>>> image[-1] = 1
>>> is_low_contrast(image)
True
>>> is_low_contrast(image, upper_percentile=100)
False | |
doc_1212 |
Fit the model according to the given training data. Parameters
Xarray-like of shape (n_samples, n_features)
The training samples.
yarray-like of shape (n_samples,)
The corresponding training labels. Returns
selfobject
returns a trained NeighborhoodComponentsAnalysis model.
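A minimal fit sketch: >>> from sklearn.neighbors import NeighborhoodComponentsAnalysis
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> nca = NeighborhoodComponentsAnalysis(random_state=42).fit(X, y)
>>> nca.components_.shape
(4, 4) | |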
doc_1213 |
Applies the element-wise function \text{ReLU6}(x) = \min(\max(x_0, x), q(6)), where x_0 is the zero_point and q(6) is the quantized representation of the number 6. Parameters
inplace – can optionally do the operation in-place. Default: False Shape:
Input: (N, *), where * means any number of additional dimensions. Output: (N, *), same shape as the input. Examples: >>> m = nn.quantized.ReLU6()
>>> input = torch.randn(2)
>>> input = torch.quantize_per_tensor(input, 1.0, 0, dtype=torch.qint32)
>>> output = m(input) | |
doc_1214 |
Returns the number of splitting iterations in the cross-validator Parameters
Xobject
Always ignored, exists for compatibility. np.zeros(n_samples) may be used as a placeholder.
yobject
Always ignored, exists for compatibility. np.zeros(n_samples) may be used as a placeholder.
groupsarray-like of shape (n_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Returns
n_splitsint
Returns the number of splitting iterations in the cross-validator.
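A minimal sketch, using KFold as a representative cross-validator: >>> import numpy as np
>>> from sklearn.model_selection import KFold
>>> X = np.zeros((10, 2))
>>> KFold(n_splits=5).get_n_splits(X)
5 | |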
doc_1215 |
Remove axes of length one from a. Parameters
aarray_like
Input data.
axisNone or int or tuple of ints, optional
New in version 1.7.0. Selects a subset of the entries of length one in the shape. If an axis is selected with shape entry greater than one, an error is raised. Returns
squeezedMaskedArray
The input array, but with all or a subset of the dimensions of length 1 removed. This is always a itself or a view into a. Note that if all axes are squeezed, the result is a 0d array and not a scalar. Raises
ValueError
If axis is not None, and an axis being squeezed is not of length 1 See also expand_dims
The inverse operation, adding entries of length one reshape
Insert, remove, and combine dimensions, and resize existing ones Examples >>> x = np.array([[[0], [1], [2]]])
>>> x.shape
(1, 3, 1)
>>> np.squeeze(x).shape
(3,)
>>> np.squeeze(x, axis=0).shape
(3, 1)
>>> np.squeeze(x, axis=1).shape
Traceback (most recent call last):
...
ValueError: cannot select an axis to squeeze out which has size not equal to one
>>> np.squeeze(x, axis=2).shape
(1, 3)
>>> x = np.array([[1234]])
>>> x.shape
(1, 1)
>>> np.squeeze(x)
array(1234) # 0d array
>>> np.squeeze(x).shape
()
>>> np.squeeze(x)[()]
1234 | |
doc_1216 |
Return the visibility. | |
doc_1217 | Send an HTTP request, which can be either GET or POST, depending on req.has_data(). | |
doc_1218 |
Return peaks in a circle Hough transform. Identifies most prominent circles separated by certain distances in given Hough spaces. Non-maximum suppression with different sizes is applied separately in the first and second dimension of the Hough space to identify peaks. For circles with different radius but close in distance, only the one with highest peak is kept. Parameters
hspaces(N, M) array
Hough spaces returned by the hough_circle function.
radii(M,) array
Radii corresponding to Hough spaces.
min_xdistanceint, optional
Minimum distance separating centers in the x dimension.
min_ydistanceint, optional
Minimum distance separating centers in the y dimension.
thresholdfloat, optional
Minimum intensity of peaks in each Hough space. Default is 0.5 * max(hspace).
num_peaksint, optional
Maximum number of peaks in each Hough space. When the number of peaks exceeds num_peaks, only num_peaks coordinates based on peak intensity are considered for the corresponding radius.
total_num_peaksint, optional
Maximum number of peaks. When the number of peaks exceeds num_peaks, return num_peaks coordinates based on peak intensity.
normalizebool, optional
If True, normalize the accumulator by the radius to sort the prominent peaks. Returns
accum, cx, cy, radtuple of array
Peak values in Hough space, x and y center coordinates and radii. Notes Circles with bigger radius have higher peaks in Hough space. If larger circles are preferred over smaller ones, normalize should be False. Otherwise, circles will be returned in the order of decreasing voting number. Examples >>> from skimage import transform, draw
>>> img = np.zeros((120, 100), dtype=int)
>>> radius, x_0, y_0 = (20, 99, 50)
>>> y, x = draw.circle_perimeter(y_0, x_0, radius)
>>> img[x, y] = 1
>>> hspaces = transform.hough_circle(img, radius)
>>> accum, cx, cy, rad = hough_circle_peaks(hspaces, [radius,]) | |
doc_1219 |
Return the mapping parameters. The returned values define a linear map off + scl*x that is applied to the input arguments before the series is evaluated. The map depends on the domain and window; if the current domain is equal to the window the resulting map is the identity. If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the x in the standard representation of the base polynomials. Returns
off, sclfloat or complex
The mapping function is defined by off + scl*x. Notes If the current domain is the interval [l1, r1] and the window is [l2, r2], then the linear mapping function L is defined by the equations: L(l1) = l2
L(r1) = r2
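A minimal sketch with a Chebyshev series whose domain [0, 4] is mapped onto the default window [-1, 1]: >>> from numpy.polynomial import Chebyshev
>>> c = Chebyshev([1, 2, 3], domain=[0, 4])
>>> c.mapparms()   # off + scl*x maps 0 -> -1 and 4 -> 1
(-1.0, 0.5) | |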
doc_1220 |
Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation. | |
doc_1221 | Pattern Meaning
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any character not in seq For a literal match, wrap the meta-characters in brackets. For example, '[?]' matches the character '?'. Note that the filename separator ('/' on Unix) is not special to this module. See module glob for pathname expansion (glob uses filter() to match pathname segments). Similarly, filenames starting with a period are not special for this module, and are matched by the * and ? patterns.
fnmatch.fnmatch(filename, pattern)
Test whether the filename string matches the pattern string, returning True or False. Both parameters are case-normalized using os.path.normcase(). fnmatchcase() can be used to perform a case-sensitive comparison, regardless of whether that’s standard for the operating system. This example will print all file names in the current directory with the extension .txt: import fnmatch
import os
for file in os.listdir('.'):
if fnmatch.fnmatch(file, '*.txt'):
print(file)
fnmatch.fnmatchcase(filename, pattern)
Test whether filename matches pattern, returning True or False; the comparison is case-sensitive and does not apply os.path.normcase().
fnmatch.filter(names, pattern)
Construct a list from those elements of the iterable names that match pattern. It is the same as [n for n in names if fnmatch(n, pattern)], but implemented more efficiently.
fnmatch.translate(pattern)
Return the shell-style pattern converted to a regular expression for using with re.match(). Example: >>> import fnmatch, re
>>>
>>> regex = fnmatch.translate('*.txt')
>>> regex
'(?s:.*\\.txt)\\Z'
>>> reobj = re.compile(regex)
>>> reobj.match('foobar.txt')
<re.Match object; span=(0, 10), match='foobar.txt'>
See also
Module glob
Unix shell-style path expansion. | |
doc_1222 |
Return the visibility. | |
doc_1223 |
Return the values (min, max) that are mapped to the colormap limits. | |
doc_1224 |
Warp an image according to a given coordinate transformation. Parameters
imagendarray
Input image.
inverse_maptransformation object, callable cr = f(cr, **kwargs), or ndarray
Inverse coordinate map, which transforms coordinates in the output images into their corresponding coordinates in the input image. There are a number of different options to define this map, depending on the dimensionality of the input image. A 2-D image can have 2 dimensions for gray-scale images, or 3 dimensions with color information.
For 2-D images, you can directly pass a transformation object, e.g. skimage.transform.SimilarityTransform, or its inverse.
For 2-D images, you can pass a (3, 3) homogeneous transformation matrix, e.g. skimage.transform.SimilarityTransform.params.
For 2-D images, a function that transforms a (M, 2) array of (col, row) coordinates in the output image to their corresponding coordinates in the input image. Extra parameters to the function can be specified through map_args.
For N-D images, you can directly pass an array of coordinates. The first dimension specifies the coordinates in the input image, while the subsequent dimensions determine the position in the output image. E.g. in case of 2-D images, you need to pass an array of shape (2, rows, cols), where rows and cols determine the shape of the output image, and the first dimension contains the (row, col) coordinate in the input image. See scipy.ndimage.map_coordinates for further documentation.
Note that a (3, 3) matrix is interpreted as a homogeneous transformation matrix, so you cannot interpolate values from a 3-D input, if the output is of shape (3,). See example section for usage.
map_argsdict, optional
Keyword arguments passed to inverse_map.
output_shapetuple (rows, cols), optional
Shape of the output image generated. By default the shape of the input image is preserved. Note that, even for multi-band images, only rows and columns need to be specified.
orderint, optional
The order of interpolation. The order has to be in the range 0-5:
0: Nearest-neighbor
1: Bi-linear (default)
2: Bi-quadratic
3: Bi-cubic
4: Bi-quartic
5: Bi-quintic
Default is 0 if image.dtype is bool and 1 otherwise.
mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional
Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad.
cvalfloat, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries.
clipbool, optional
Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range.
preserve_rangebool, optional
Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns
warpeddouble ndarray
The warped input image. Notes The input image is converted to a double image. In case of a SimilarityTransform, AffineTransform and ProjectiveTransform and order in [0, 3] this function uses the underlying transformation matrix to warp the image with a much faster routine. Examples >>> from skimage.transform import warp
>>> from skimage import data
>>> image = data.camera()
The following image warps are all equal but differ substantially in execution time. The image is shifted to the bottom. Use a geometric transform to warp an image (fast): >>> from skimage.transform import SimilarityTransform
>>> tform = SimilarityTransform(translation=(0, -10))
>>> warped = warp(image, tform)
Use a callable (slow): >>> def shift_down(xy):
... xy[:, 1] -= 10
... return xy
>>> warped = warp(image, shift_down)
Use a transformation matrix to warp an image (fast): >>> matrix = np.array([[1, 0, 0], [0, 1, -10], [0, 0, 1]])
>>> warped = warp(image, matrix)
>>> from skimage.transform import ProjectiveTransform
>>> warped = warp(image, ProjectiveTransform(matrix=matrix))
You can also use the inverse of a geometric transformation (fast): >>> warped = warp(image, tform.inverse)
For N-D images you can pass a coordinate array, that specifies the coordinates in the input image for every element in the output image. E.g. if you want to rescale a 3-D cube, you can do: >>> cube_shape = np.array([30, 30, 30])
>>> cube = np.random.rand(*cube_shape)
Setup the coordinate array, that defines the scaling: >>> scale = 0.1
>>> output_shape = (scale * cube_shape).astype(int)
>>> coords0, coords1, coords2 = np.mgrid[:output_shape[0],
... :output_shape[1], :output_shape[2]]
>>> coords = np.array([coords0, coords1, coords2])
Assume that the cube contains spatial data, where the first array element center is at coordinate (0.5, 0.5, 0.5) in real space, i.e. we have to account for this extra offset when scaling the image: >>> coords = (coords + 0.5) / scale - 0.5
>>> warped = warp(cube, coords) | |
doc_1225 |
Determine if two Index objects are equal. The things that are being compared are: The elements inside the Index object. The order of the elements inside the Index object. Parameters
other:Any
The other object to compare against. Returns
bool
True if “other” is an Index and it has the same elements and order as the calling index; False otherwise. Examples
>>> idx1 = pd.Index([1, 2, 3])
>>> idx1
Int64Index([1, 2, 3], dtype='int64')
>>> idx1.equals(pd.Index([1, 2, 3]))
True
The elements inside are compared
>>> idx2 = pd.Index(["1", "2", "3"])
>>> idx2
Index(['1', '2', '3'], dtype='object')
>>> idx1.equals(idx2)
False
The order is compared
>>> ascending_idx = pd.Index([1, 2, 3])
>>> ascending_idx
Int64Index([1, 2, 3], dtype='int64')
>>> descending_idx = pd.Index([3, 2, 1])
>>> descending_idx
Int64Index([3, 2, 1], dtype='int64')
>>> ascending_idx.equals(descending_idx)
False
The dtype is not compared
>>> int64_idx = pd.Int64Index([1, 2, 3])
>>> int64_idx
Int64Index([1, 2, 3], dtype='int64')
>>> uint64_idx = pd.UInt64Index([1, 2, 3])
>>> uint64_idx
UInt64Index([1, 2, 3], dtype='uint64')
>>> int64_idx.equals(uint64_idx)
True | |
doc_1226 |
Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the domain they lie. Returns
rootsndarray
Array containing the roots of the series.
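A minimal sketch: >>> from numpy.polynomial import Polynomial
>>> p = Polynomial([-6, 1, 1])   # -6 + x + x**2 = (x + 3)(x - 2)
>>> p.roots()
array([-3.,  2.]) | |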
doc_1227 | Return the encoding used for text data, according to user preferences. User preferences are expressed differently on different systems, and might not be available programmatically on some systems, so this function only returns a guess. On some systems, it is necessary to invoke setlocale() to obtain the user preferences, so this function is not thread-safe. If invoking setlocale is not necessary or desired, do_setlocale should be set to False. On Android or in the UTF-8 mode (-X utf8 option), 'UTF-8' is always returned; the locale and the do_setlocale argument are ignored. Changed in version 3.7: The function now always returns UTF-8 on Android or if the UTF-8 mode is enabled.
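A minimal sketch (the result is system-dependent; 'UTF-8' is shown as an example): >>> import locale
>>> locale.getpreferredencoding()
'UTF-8'
>>> locale.getpreferredencoding(do_setlocale=False)   # skip the setlocale() call
'UTF-8' | |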
doc_1228 |
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,) default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.17: sample_weight support to LogisticRegression. Returns
self
Fitted estimator. Notes The SAGA solver supports both float64 and float32 bit arrays.
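A minimal fit sketch: >>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(max_iter=1000).fit(X, y)
>>> clf.classes_
array([0, 1, 2]) | |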
doc_1229 | turtle.st()
Make the turtle visible. >>> turtle.showturtle() | |
doc_1230 |
Given the location and size of the box, return the path of the box around it. Parameters
x0, y0, width, heightfloat
Location and size of the box.
mutation_sizefloat
A reference scale for the mutation. Returns
Path | |
doc_1231 | Returns the variance and mean of all elements in the input tensor. If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used. Parameters
input (Tensor) – the input tensor.
unbiased (bool) – whether to use the unbiased estimation or not Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[0.0146, 0.4258, 0.2211]])
>>> torch.var_mean(a)
(tensor(0.0423), tensor(0.2205))
torch.var_mean(input, dim, keepdim=False, unbiased=True) -> (Tensor, Tensor)
Returns the variance and mean of each row of the input tensor in the given dimension dim. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used. Parameters
input (Tensor) – the input tensor.
dim (int or tuple of python:ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
unbiased (bool) – whether to use the unbiased estimation or not Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[-1.5650, 2.0415, -0.1024, -0.5790],
[ 0.2325, -2.6145, -1.6428, -0.3537],
[-0.2159, -1.1069, 1.2882, -1.3265],
[-0.6706, -1.5893, 0.6827, 1.6727]])
>>> torch.var_mean(a, 1)
(tensor([2.3174, 1.6403, 1.4092, 2.0791]), tensor([-0.0512, -1.0946, -0.3403, 0.0239])) | |
doc_1232 | tf.compat.v1.all_variables()
Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. Instructions for updating: Please use tf.global_variables instead.
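A minimal migration sketch: # Deprecated:
# v = tf.compat.v1.all_variables()
# Preferred replacement:
v = tf.compat.v1.global_variables() | |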
doc_1233 | See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.UnionResolver
tf.distribute.cluster_resolver.UnionResolver(
*args, **kwargs
)
This class performs a union given two or more existing ClusterResolvers. It merges the underlying ClusterResolvers, and returns one unified ClusterSpec when cluster_spec is called. The details of the merge function is documented in the cluster_spec function. For additional ClusterResolver properties such as task type, task index, rpc layer, environment, etc..., we will return the value from the first ClusterResolver in the union. An example to combine two cluster resolvers: cluster_0 = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222",
"worker1.example.com:2222"]})
cluster_resolver_0 = SimpleClusterResolver(cluster_0, task_type="worker",
task_id=0,
rpc_layer="grpc")
cluster_1 = tf.train.ClusterSpec({"ps": ["ps0.example.com:2222",
"ps1.example.com:2222"]})
cluster_resolver_1 = SimpleClusterResolver(cluster_1, task_type="ps",
task_id=0,
rpc_layer="grpc")
# Its task type would be "worker".
cluster_resolver = UnionClusterResolver(cluster_resolver_0,
cluster_resolver_1)
An example to override the number of GPUs in a TFConfigClusterResolver instance: tf_config = TFConfigClusterResolver()
gpu_override = SimpleClusterResolver(tf_config.cluster_spec(),
num_accelerators={"GPU": 1})
cluster_resolver = UnionResolver(gpu_override, tf_config)
Args
*args ClusterResolver objects to be unionized.
**kwargs rpc_layer - (Optional) Override value for the RPC layer used by TensorFlow. task_type - (Optional) Override value for the current task type. task_id - (Optional) Override value for the current task index.
Raises
TypeError If any argument is not a subclass of ClusterResolvers.
ValueError If there are no arguments passed.
Attributes
environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
rpc_layer
task_id Returns the task id this ClusterResolver indicates. In a TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when the user needs to run specific code according to the task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0:
# Perform something that's only applicable on 'worker' type, id 0. This
# block will run on this particular instance since we've specified this
# task to be a 'worker', id 0 in above cluster resolver.
else:
# Perform something that's only applicable on other ids. This block will
# not run on this particular instance.
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In a TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about the 'chief' and 'worker' task types, which are most commonly used. Having access to such information is useful when the user needs to run specific code according to task type. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. This block
# will run on this particular instance since we've specified this task to
# be a worker in above cluster resolver.
elif cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. This
# block will not run on this particular instance.
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
cluster_spec()
Returns a union of all the ClusterSpecs from the ClusterResolvers.
Returns A ClusterSpec containing host information merged from all the underlying ClusterResolvers.
Raises
KeyError If there are conflicting keys detected when merging two or more dictionaries, this exception is raised.
Note: If there are multiple ClusterResolvers exposing ClusterSpecs with the same job name, we will merge the list/dict of workers.
If all underlying ClusterSpecs expose the set of workers as lists, we will concatenate the lists of workers, starting with the list of workers from the first ClusterResolver passed into the constructor. If any of the ClusterSpecs expose the set of workers as a dict, we will treat all the sets of workers as dicts (even if they are returned as lists) and will only merge them into a dict if there are no conflicting keys. If there is a conflicting key, we will raise a KeyError. master View source
master(
task_type=None, task_id=None, rpc_layer=None
)
Returns the master address to use when creating a session. This usually returns the master from the first ClusterResolver passed in, but you can override this by specifying the task_type and task_id.
Note: this is only useful for TensorFlow 1.x.
Args
task_type (Optional) The type of the TensorFlow task of the master.
task_id (Optional) The index of the TensorFlow task of the master.
rpc_layer (Optional) The RPC protocol for the given cluster.
Returns The name or URL of the session master.
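Continuing the union-resolver sketch above, a hypothetical call that targets the worker task:
master_target = union_resolver.master(task_type="worker", task_id=0,
                                      rpc_layer="grpc")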
num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, callers may specify task_type and task_id if they want to target a specific TensorFlow task to query the number of accelerators. This supports heterogeneous environments, where the number of accelerator cores per host differs.
Args
task_type (Optional) The type of the TensorFlow task of the machine we want to query.
task_id (Optional) The index of the TensorFlow task of the machine we want to query.
config_proto (Optional) Configuration for starting a new session to query how many accelerator cores it has.
Returns A map of accelerator types to number of cores. | |
doc_1234 |
Initialize self. See help(type(self)) for accurate signature. | |
doc_1235 | See torch.subtract(). | |
doc_1236 |
Perform classification on an array of test vectors X. The predicted class C for each sample in X is returned. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Cndarray of shape (n_samples,) | |
doc_1237 | Dictionary mapping filename extensions to MIME types. | |
doc_1238 |
Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter tol; “trailing” means highest order coefficient(s), e.g., in [0, 1, 1, 0, 0] (which represents 0 + x + x**2 + 0*x**3 + 0*x**4) both the 3rd and 4th order coefficients would be “trimmed.” Parameters
carray_like
1-d array of coefficients, ordered from lowest order to highest.
tolnumber, optional
Trailing (i.e., highest order) elements with absolute value less than or equal to tol (default value is zero) are removed. Returns
trimmedndarray
1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises
ValueError
If tol < 0 See also trimseq
Examples >>> from numpy.polynomial import polyutils as pu
>>> pu.trimcoef((0,0,3,0,5,0,0))
array([0., 0., 3., 0., 5.])
>>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed
array([0.])
>>> i = complex(0,1) # works for complex
>>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3)
array([0.0003+0.j , 0.001 -0.001j]) | |
doc_1239 | See Migration guide for more details. tf.compat.v1.raw_ops.Conv3DBackpropInputV2
tf.raw_ops.Conv3DBackpropInputV2(
input_sizes, filter, out_backprop, strides, padding,
data_format='NDHWC', dilations=[1, 1, 1, 1, 1], name=None
)
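For illustration, a minimal sketch with hypothetical shapes (the op computes the gradient of a 3-D convolution with respect to its input):
input_sizes = tf.constant([1, 4, 4, 4, 1], dtype=tf.int32)  # NDHWC
filters = tf.random.normal([2, 2, 2, 1, 3])  # in_channels=1, out_channels=3
out_backprop = tf.random.normal([1, 4, 4, 4, 3])  # "SAME" keeps spatial dims
grad = tf.raw_ops.Conv3DBackpropInputV2(
    input_sizes=input_sizes, filter=filters, out_backprop=out_backprop,
    strides=[1, 1, 1, 1, 1], padding="SAME")  # grad has shape [1, 4, 4, 4, 1]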
Args
input_sizes A Tensor. Must be one of the following types: int32, int64. An integer vector representing the tensor shape of input, where input is a 5-D [batch, depth, rows, cols, in_channels] tensor.
filter A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter.
out_backprop A Tensor. Must have the same type as filter. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels].
strides A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
data_format An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
dilations An optional list of ints. Defaults to [1, 1, 1, 1, 1]. 1-D tensor of length 5. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as filter. | |
doc_1240 |
Return True if year is a leap year. Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.is_leap_year
True | |
doc_1241 |
Set the (group) id for the artist. Parameters
gidstr | |
doc_1242 |
Stack arrays in sequence vertically (row wise). This is equivalent to concatenation along the first axis after 1-D arrays of shape (N,) have been reshaped to (1,N). Rebuilds arrays divided by vsplit. This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions concatenate, stack and block provide more general stacking and concatenation operations. Parameters
tupsequence of ndarrays
The arrays must have the same shape along all but the first axis. 1-D arrays must have the same length. Returns
stackedndarray
The array formed by stacking the given arrays, will be at least 2-D. See also concatenate
Join a sequence of arrays along an existing axis. stack
Join a sequence of arrays along a new axis. block
Assemble an nd-array from nested lists of blocks. hstack
Stack arrays in sequence horizontally (column wise). dstack
Stack arrays in sequence depth wise (along third axis). column_stack
Stack 1-D arrays as columns into a 2-D array. vsplit
Split an array into multiple sub-arrays vertically (row-wise). Notes The function is applied to both the _data and the _mask, if any. Examples >>> a = np.array([1, 2, 3])
>>> b = np.array([4, 5, 6])
>>> np.vstack((a,b))
array([[1, 2, 3],
[4, 5, 6]])
>>> a = np.array([[1], [2], [3]])
>>> b = np.array([[4], [5], [6]])
>>> np.vstack((a,b))
array([[1],
[2],
[3],
[4],
[5],
[6]]) | |
doc_1243 |
Logistic Regression (aka logit, MaxEnt) classifier. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the ‘multi_class’ option is set to ‘ovr’, and uses the cross-entropy loss if the ‘multi_class’ option is set to ‘multinomial’. (Currently the ‘multinomial’ option is supported only by the ‘lbfgs’, ‘sag’, ‘saga’ and ‘newton-cg’ solvers.) This class implements regularized logistic regression using the ‘liblinear’ library, ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers. Note that regularization is applied by default. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied). The ‘newton-cg’, ‘sag’, and ‘lbfgs’ solvers support only L2 regularization with primal formulation, or no regularization. The ‘liblinear’ solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. The Elastic-Net regularization is only supported by the ‘saga’ solver. Read more in the User Guide. Parameters
penalty{‘l1’, ‘l2’, ‘elasticnet’, ‘none’}, default=’l2’
Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver. If ‘none’ (not supported by the liblinear solver), no regularization is applied. New in version 0.19: l1 penalty with SAGA solver (allowing ‘multinomial’ + L1)
dualbool, default=False
Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features.
tolfloat, default=1e-4
Tolerance for stopping criteria.
Cfloat, default=1.0
Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
fit_interceptbool, default=True
Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.
intercept_scalingfloat, default=1
Useful only when the solver ‘liblinear’ is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight. Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased.
class_weightdict or ‘balanced’, default=None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. New in version 0.17: class_weight=’balanced’
random_stateint, RandomState instance, default=None
Used when solver == ‘sag’, ‘saga’ or ‘liblinear’ to shuffle the data. See Glossary for details.
solver{‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default=’lbfgs’
Algorithm to use in the optimization problem. For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones. For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes. ‘newton-cg’, ‘lbfgs’, ‘sag’ and ‘saga’ handle L2 or no penalty ‘liblinear’ and ‘saga’ also handle L1 penalty ‘saga’ also supports ‘elasticnet’ penalty ‘liblinear’ does not support setting penalty='none'
Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver. Changed in version 0.22: The default solver changed from ‘liblinear’ to ‘lbfgs’ in 0.22.
max_iterint, default=100
Maximum number of iterations taken for the solvers to converge.
multi_class{‘auto’, ‘ovr’, ‘multinomial’}, default=’auto’
If the option chosen is ‘ovr’, then a binary problem is fit for each label. For ‘multinomial’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘multinomial’ is unavailable when solver=’liblinear’. ‘auto’ selects ‘ovr’ if the data is binary, or if solver=’liblinear’, and otherwise selects ‘multinomial’. New in version 0.18: Stochastic Average Gradient descent solver for ‘multinomial’ case. Changed in version 0.22: Default changed from ‘ovr’ to ‘auto’ in 0.22.
verboseint, default=0
For the liblinear and lbfgs solvers set verbose to any positive number for verbosity.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. Useless for liblinear solver. See the Glossary. New in version 0.17: warm_start to support lbfgs, newton-cg, sag, saga solvers.
n_jobsint, default=None
Number of CPU cores used when parallelizing over classes if multi_class=’ovr’. This parameter is ignored when the solver is set to ‘liblinear’ regardless of whether ‘multi_class’ is specified or not. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
l1_ratiofloat, default=None
The Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1. Only used if penalty='elasticnet'. Setting l1_ratio=0 is equivalent to using penalty='l2', while setting l1_ratio=1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. Attributes
classes_ndarray of shape (n_classes, )
A list of class labels known to the classifier.
coef_ndarray of shape (1, n_features) or (n_classes, n_features)
Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary. In particular, when multi_class='multinomial', coef_ corresponds to outcome 1 (True) and -coef_ corresponds to outcome 0 (False).
intercept_ndarray of shape (1,) or (n_classes,)
Intercept (a.k.a. bias) added to the decision function. If fit_intercept is set to False, the intercept is set to zero. intercept_ is of shape (1,) when the given problem is binary. In particular, when multi_class='multinomial', intercept_ corresponds to outcome 1 (True) and -intercept_ corresponds to outcome 0 (False).
n_iter_ndarray of shape (n_classes,) or (1, )
Actual number of iterations for all classes. If binary or multinomial, it returns only 1 element. For liblinear solver, only the maximum number of iteration across all classes is given. Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed max_iter. n_iter_ will now report at most max_iter. See also
SGDClassifier
Incrementally trained logistic regression (when given the parameter loss="log").
LogisticRegressionCV
Logistic regression with built-in cross validation. Notes The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon, to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. Predict output may not match that of standalone liblinear in certain cases. See differences from liblinear in the narrative documentation. References L-BFGS-B – Software for Large-scale Bound-constrained Optimization
Ciyou Zhu, Richard Byrd, Jorge Nocedal and Jose Luis Morales. http://users.iems.northwestern.edu/~nocedal/lbfgsb.html LIBLINEAR – A Library for Large Linear Classification
https://www.csie.ntu.edu.tw/~cjlin/liblinear/ SAG – Mark Schmidt, Nicolas Le Roux, and Francis Bach
Minimizing Finite Sums with the Stochastic Average Gradient https://hal.inria.fr/hal-00860051/document SAGA – Defazio, A., Bach F. & Lacoste-Julien S. (2014).
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives https://arxiv.org/abs/1407.0202 Hsiang-Fu Yu, Fang-Lan Huang, Chih-Jen Lin (2011). Dual coordinate descent
methods for logistic regression and maximum entropy models. Machine Learning 85(1-2):41-75. https://www.csie.ntu.edu.tw/~cjlin/papers/maxent_dual.pdf Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(random_state=0).fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :])
array([[9.8...e-01, 1.8...e-02, 1.4...e-08],
[9.7...e-01, 2.8...e-02, ...e-08]])
>>> clf.score(X, y)
0.97...
Methods
decision_function(X) Predict confidence scores for samples.
densify() Convert coefficient matrix to dense array format.
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict class labels for samples in X.
predict_log_proba(X) Predict logarithm of probability estimates.
predict_proba(X) Probability estimates.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
sparsify() Convert coefficient matrix to sparse format.
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
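Continuing the iris example above (three classes, so one score column per class):
>>> clf.decision_function(X[:2, :]).shape
(2, 3)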
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
Fitted estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,) default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.17: sample_weight support to LogisticRegression. Returns
self
Fitted estimator. Notes The SAGA solver supports both float64 and float32 bit arrays.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample.
predict_log_proba(X) [source]
Predict logarithm of probability estimates. The returned estimates for all classes are ordered by the label of classes. Parameters
Xarray-like of shape (n_samples, n_features)
Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns
Tarray-like of shape (n_samples, n_classes)
Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
predict_proba(X) [source]
Probability estimates. The returned estimates for all classes are ordered by the label of classes. For a multi_class problem, if multi_class is set to be “multinomial” the softmax function is used to find the predicted probability of each class. Otherwise use a one-vs-rest approach, i.e. calculate the probability of each class assuming it to be positive using the logistic function, and normalize these values across all classes. Parameters
Xarray-like of shape (n_samples, n_features)
Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns
Tarray-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify. | |
doc_1244 |
Add new categories. new_categories will be included at the last/highest place in the categories and will be unused directly after this call. Parameters
new_categories:category or list-like of category
The new categories to be included.
inplace:bool, default False
Whether or not to add the categories inplace or return a copy of this categorical with added categories. Deprecated since version 1.3.0. Returns
cat:Categorical or None
Categorical with new categories added or None if inplace=True. Raises
ValueError
If the new categories include old categories or do not validate as categories See also rename_categories
Rename categories. reorder_categories
Reorder categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. Examples
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
>>> c.add_categories(['d', 'a'])
['c', 'b', 'c']
Categories (4, object): ['b', 'c', 'd', 'a'] | |
doc_1245 |
Bases: matplotlib.cm.ScalarMappable, matplotlib.contour.ContourLabeler Store a set of contour lines or filled regions. User-callable method: clabel Parameters
axAxes
levels[level0, level1, ..., leveln]
A list of floating point numbers indicating the contour levels.
allsegs[level0segs, level1segs, ...]
List of all the polygon segments for all the levels. For contour lines len(allsegs) == len(levels), and for filled contour regions len(allsegs) = len(levels)-1. The lists should look like level0segs = [polygon0, polygon1, ...]
polygon0 = [[x0, y0], [x1, y1], ...]
allkindsNone or [level0kinds, level1kinds, ...]
Optional list of all the polygon vertex kinds (code types), as described and used in Path. This is used to allow multiply-connected paths such as holes within filled polygons. If not None, len(allkinds) == len(allsegs). The lists should look like level0kinds = [polygon0kinds, ...]
polygon0kinds = [vertexcode0, vertexcode1, ...]
If allkinds is not None, usually all polygons for a particular contour level are grouped together so that level0segs = [polygon0] and level0kinds = [polygon0kinds]. **kwargs
Keyword arguments are as described in the docstring of contour. Attributes
axAxes
The Axes object in which the contours are drawn.
collectionssilent_list of PathCollections
The Artists representing the contour. This is a list of PathCollections for both line and filled contours.
levelsarray
The values of the contour levels.
layersarray
Same as levels for line contours; half-way between levels for filled contours. See ContourSet._process_colors. Draw contour lines or filled regions, depending on whether keyword arg filled is False (default) or True. Call signature: ContourSet(ax, levels, allsegs, [allkinds], **kwargs)
Parameters
axAxes
The Axes object to draw on.
levels[level0, level1, ..., leveln]
A list of floating point numbers indicating the contour levels.
allsegs[level0segs, level1segs, ...]
List of all the polygon segments for all the levels. For contour lines len(allsegs) == len(levels), and for filled contour regions len(allsegs) = len(levels)-1. The lists should look like level0segs = [polygon0, polygon1, ...]
polygon0 = [[x0, y0], [x1, y1], ...]
allkinds[level0kinds, level1kinds, ...], optional
Optional list of all the polygon vertex kinds (code types), as described and used in Path. This is used to allow multiply-connected paths such as holes within filled polygons. If not None, len(allkinds) == len(allsegs). The lists should look like level0kinds = [polygon0kinds, ...]
polygon0kinds = [vertexcode0, vertexcode1, ...]
If allkinds is not None, usually all polygons for a particular contour level are grouped together so that level0segs = [polygon0] and level0kinds = [polygon0kinds]. **kwargs
Keyword arguments are as described in the docstring of contour. changed()[source]
Call this whenever the mappable is changed to notify all the callbackSM listeners to the 'changed' signal.
find_nearest_contour(x, y, indices=None, pixel=True)[source]
Find the point in the contour plot that is closest to (x, y). Parameters
x, yfloat
The reference point.
indiceslist of int or None, default: None
Indices of contour levels to consider. If None (the default), all levels are considered.
pixelbool, default: True
If True, measure distance in pixel (screen) space, which is useful for manual contour labeling; else, measure distance in axes space. Returns
contourCollection
The contour that is closest to (x, y).
segmentint
The index of the Path in contour that is closest to (x, y).
indexint
The index of the path segment in segment that is closest to (x, y).
xmin, yminfloat
The point in the contour plot that is closest to (x, y).
d2float
The squared distance from (xmin, ymin) to (x, y).
get_alpha()[source]
Return alpha to be applied to all ContourSet artists.
get_transform()[source]
Return the Transform instance used by this ContourSet.
legend_elements(variable_name='x', str_format=<class 'str'>)[source]
Return a list of artists and labels suitable for passing through to legend which represent this ContourSet. The labels have the form "0 < x <= 1" stating the data ranges which the artists represent. Parameters
variable_namestr
The string used inside the inequality used on the labels.
str_formatfunction: float -> str
Function used to format the numbers in the labels. Returns
artistslist[Artist]
A list of the artists.
labelslist[str]
A list of the labels.
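A minimal sketch of typical usage (the data below is invented for illustration):
import numpy as np
import matplotlib.pyplot as plt
x, y = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
cs = plt.contourf(x, y, x**2 + y**2)  # a filled ContourSet
artists, labels = cs.legend_elements()
plt.legend(artists, labels)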
set_alpha(alpha)[source]
Set the alpha blending value for all ContourSet artists. alpha must be between 0 (transparent) and 1 (opaque). | |
doc_1246 | Return (bytes, is_cryptographic): bytes are num pseudo-random bytes, is_cryptographic is True if the bytes generated are cryptographically strong. Raises an SSLError if the operation is not supported by the current RAND method. Generated pseudo-random byte sequences will be unique if they are of sufficient length, but are not necessarily unpredictable. They can be used for non-cryptographic purposes and for certain purposes in cryptographic protocols, but usually not for key generation etc. For almost all applications os.urandom() is preferable. New in version 3.3. Deprecated since version 3.6: OpenSSL has deprecated ssl.RAND_pseudo_bytes(), use ssl.RAND_bytes() instead. | |
doc_1247 | In-place version of frac() | |
doc_1248 |
Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function: the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly. | |
doc_1249 |
Return a parametrized wrapper around the number type. New in version 1.22. Returns
aliastypes.GenericAlias
A parametrized number type. See also PEP 585
Type hinting generics in standard collections. Notes This method is only available for python 3.9 and later. Examples >>> from typing import Any
>>> import numpy as np
>>> np.signedinteger[Any]
numpy.signedinteger[typing.Any] | |
doc_1250 |
Make a box plot of the DataFrame columns. A box plot is a method for graphically depicting groups of numerical data through their quartiles. The box extends from the Q1 to Q3 quartile values of the data, with a line at the median (Q2). The whiskers extend from the edges of the box to show the range of the data. The position of the whiskers is set by default to 1.5*IQR (IQR = Q3 - Q1) from the edges of the box. Outlier points are those past the end of the whiskers. For further details see Wikipedia’s entry for boxplot. A consideration when using this chart is that the box and the whiskers can overlap, which is very common when plotting small sets of data. Parameters
by:str or sequence
Column in the DataFrame to group by. Changed in version 1.4.0: Previously, by was silently ignored and made no groupings. **kwargs
Additional keywords are documented in DataFrame.plot(). Returns
matplotlib.axes.Axes or numpy.ndarray of them
See also DataFrame.boxplot
Another method to draw a box plot. Series.plot.box
Draw a box plot from a Series object. matplotlib.pyplot.boxplot
Draw a box plot in matplotlib. Examples Draw a box plot from a DataFrame with four columns of randomly generated data.
>>> data = np.random.randn(25, 4)
>>> df = pd.DataFrame(data, columns=list('ABCD'))
>>> ax = df.plot.box()
You can also generate groupings if you specify the by parameter (which can take a column name, or a list or tuple of column names): Changed in version 1.4.0.
>>> age_list = [8, 10, 12, 14, 72, 74, 76, 78, 20, 25, 30, 35, 60, 85]
>>> df = pd.DataFrame({"gender": list("MMMMMMMMFFFFFF"), "age": age_list})
>>> ax = df.plot.box(column="age", by="gender", figsize=(10, 8)) | |
doc_1251 |
Return list of slices corresponding to the unmasked clumps of a 1-D array. (A “clump” is defined as a contiguous region of the array). Parameters
andarray
A one-dimensional masked array. Returns
sliceslist of slice
The list of slices, one for each continuous region of unmasked elements in a. See also
flatnotmasked_edges, flatnotmasked_contiguous, notmasked_edges
notmasked_contiguous, clump_masked
Notes New in version 1.4.0. Examples >>> a = np.ma.masked_array(np.arange(10))
>>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked
>>> np.ma.clump_unmasked(a)
[slice(3, 6, None), slice(7, 8, None)] | |
doc_1252 | tf.compat.v1.nn.fused_batch_norm(
x, scale, offset, mean=None, variance=None, epsilon=0.001,
data_format='NHWC', is_training=True, name=None,
exponential_avg_factor=1.0
)
See Source: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy.
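For illustration, a minimal sketch with hypothetical shapes, normalizing a batch in training mode (batch statistics are computed on the fly, so mean and variance are left as None):
x = tf.random.normal([8, 4, 4, 3])  # NHWC
scale = tf.ones([3])
offset = tf.zeros([3])
y, batch_mean, batch_var = tf.compat.v1.nn.fused_batch_norm(
    x, scale, offset, is_training=True)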
Args
x Input Tensor of 4 or 5 dimensions.
scale A Tensor of 1 dimension for scaling.
offset A Tensor of 1 dimension for bias.
mean A Tensor of 1 dimension for population mean. The shape and meaning of this argument depends on the value of is_training and exponential_avg_factor as follows: is_training==False (inference): Mean must be a Tensor of the same shape as scale containing the estimated population mean computed during training. is_training==True and exponential_avg_factor == 1.0: Mean must be None. is_training==True and exponential_avg_factor != 1.0: Mean must be a Tensor of the same shape as scale containing the exponential running mean.
variance A Tensor of 1 dimension for population variance. The shape and meaning of this argument depends on the value of is_training and exponential_avg_factor as follows: is_training==False (inference): Variance must be a Tensor of the same shape as scale containing the estimated population variance computed during training. is_training==True and exponential_avg_factor == 1.0: Variance must be None. is_training==True and exponential_avg_factor != 1.0: Variance must be a Tensor of the same shape as scale containing the exponential running variance.
epsilon A small float number added to the variance of x.
data_format The data format for x. Supports "NHWC" (default) or "NCHW" for 4D tensors and "NDHWC" or "NCDHW" for 5D tensors.
is_training A bool value to specify if the operation is used for training or inference.
name A name for this operation (optional).
exponential_avg_factor A float number (usually between 0 and 1) used for controlling the decay of the running population average of mean and variance. If set to 1.0, the current batch average is returned.
Returns
y A 4D or 5D Tensor for the normalized, scaled, offsetted x.
running_mean A 1D Tensor for the exponential running mean of x. The output value is (1 - exponential_avg_factor) * mean + exponential_avg_factor * batch_mean, where batch_mean is the mean of the current batch in x.
running_var A 1D Tensor for the exponential running variance. The output value is (1 - exponential_avg_factor) * variance + exponential_avg_factor * batch_variance, where batch_variance is the variance of the current batch in x. References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf) | |
doc_1253 |
Return the width ratios. This is None if no width ratios have been set explicitly. | |
doc_1254 | See Migration guide for more details. tf.compat.v1.raw_ops.ExperimentalStatsAggregatorHandle
tf.raw_ops.ExperimentalStatsAggregatorHandle(
container='', shared_name='', name=None
)
Args
container An optional string. Defaults to "".
shared_name An optional string. Defaults to "".
name A name for the operation (optional).
Returns A Tensor of type resource. | |
doc_1255 |
Autoscale the scalar limits on the norm instance using the current array | |
doc_1256 | Compress data, returning a bytes object containing compressed data for at least part of the data in data. This data should be concatenated to the output produced by any preceding calls to the compress() method. Some input may be kept in internal buffers for later processing. | |
doc_1257 | sys.ps2
Strings specifying the primary and secondary prompt of the interpreter. These are only defined if the interpreter is in interactive mode. Their initial values in this case are '>>> ' and '... '. If a non-string object is assigned to either variable, its str() is re-evaluated each time the interpreter prepares to read a new interactive command; this can be used to implement a dynamic prompt. | |
doc_1258 | See torch.ger() | |
doc_1259 | Locale category for the formatting of time. The function time.strftime() follows these conventions. | |
doc_1260 |
router = routers.SimpleRouter()
router.register(r'users', UserViewSet)
router.register(r'accounts', AccountViewSet)
urlpatterns = router.urls
There are two mandatory arguments to the register() method:
prefix - The URL prefix to use for this set of routes.
viewset - The viewset class. Optionally, you may also specify an additional argument:
basename - The base to use for the URL names that are created. If unset the basename will be automatically generated based on the queryset attribute of the viewset, if it has one. Note that if the viewset does not include a queryset attribute then you must set basename when registering the viewset. The example above would generate the following URL patterns: URL pattern: ^users/$ Name: 'user-list'
URL pattern: ^users/{pk}/$ Name: 'user-detail'
URL pattern: ^accounts/$ Name: 'account-list'
URL pattern: ^accounts/{pk}/$ Name: 'account-detail'
Note: The basename argument is used to specify the initial part of the view name pattern. In the example above, that's the user or account part. Typically you won't need to specify the basename argument, but if you have a viewset where you've defined a custom get_queryset method, then the viewset may not have a .queryset attribute set. If you try to register that viewset you'll see an error like this: 'basename' argument not specified, and could not automatically determine the name from the viewset, as it does not have a '.queryset' attribute.
This means you'll need to explicitly set the basename argument when registering the viewset, as it could not be automatically determined from the model name. Using include with routers The .urls attribute on a router instance is simply a standard list of URL patterns. There are a number of different styles for how you can include these URLs. For example, you can append router.urls to a list of existing views... router = routers.SimpleRouter()
router.register(r'users', UserViewSet)
router.register(r'accounts', AccountViewSet)
urlpatterns = [
path('forgot-password/', ForgotPasswordFormView.as_view()),
]
urlpatterns += router.urls
Alternatively you can use Django's include function, like so... urlpatterns = [
path('forgot-password', ForgotPasswordFormView.as_view()),
path('', include(router.urls)),
]
You may use include with an application namespace: urlpatterns = [
path('forgot-password/', ForgotPasswordFormView.as_view()),
path('api/', include((router.urls, 'app_name'))),
]
Or both an application and instance namespace: urlpatterns = [
path('forgot-password/', ForgotPasswordFormView.as_view()),
path('api/', include((router.urls, 'app_name'), namespace='instance_name')),
]
See Django's URL namespaces docs and the include API reference for more details. Note: If using namespacing with hyperlinked serializers you'll also need to ensure that any view_name parameters on the serializers correctly reflect the namespace. In the examples above you'd need to include a parameter such as view_name='app_name:user-detail' for serializer fields hyperlinked to the user detail view. The automatic view_name generation uses a pattern like %(model_name)-detail. Unless your model names actually clash, you may be better off not namespacing your Django REST Framework views when using hyperlinked serializers. Routing for extra actions A viewset may mark extra actions for routing by decorating a method with the @action decorator. These extra actions will be included in the generated routes. For example, given the set_password method on the UserViewSet class: from myapp.permissions import IsAdminOrIsSelf
from rest_framework.decorators import action
class UserViewSet(ModelViewSet):
...
@action(methods=['post'], detail=True, permission_classes=[IsAdminOrIsSelf])
def set_password(self, request, pk=None):
...
The following route would be generated: URL pattern: ^users/{pk}/set_password/$
URL name: 'user-set-password'
By default, the URL pattern is based on the method name, and the URL name is the combination of the ViewSet.basename and the hyphenated method name. If you don't want to use the defaults for either of these values, you can instead provide the url_path and url_name arguments to the @action decorator. For example, if you want to change the URL for our custom action to ^users/{pk}/change-password/$, you could write: from myapp.permissions import IsAdminOrIsSelf
from rest_framework.decorators import action
class UserViewSet(ModelViewSet):
...
@action(methods=['post'], detail=True, permission_classes=[IsAdminOrIsSelf],
url_path='change-password', url_name='change_password')
def set_password(self, request, pk=None):
...
The above example would now generate the following URL pattern: URL path: ^users/{pk}/change-password/$
URL name: 'user-change_password'
API Guide SimpleRouter This router includes routes for the standard set of list, create, retrieve, update, partial_update and destroy actions. The viewset can also mark additional methods to be routed, using the @action decorator.
URL Style                       HTTP Method                                   Action                                      URL Name
{prefix}/                       GET                                           list                                        {basename}-list
                                POST                                          create                                      {basename}-list
{prefix}/{url_path}/            GET, or as specified by `methods` argument    `@action(detail=False)` decorated method   {basename}-{url_name}
{prefix}/{lookup}/              GET                                           retrieve                                    {basename}-detail
                                PUT                                           update                                      {basename}-detail
                                PATCH                                         partial_update                              {basename}-detail
                                DELETE                                        destroy                                     {basename}-detail
{prefix}/{lookup}/{url_path}/   GET, or as specified by `methods` argument    `@action(detail=True)` decorated method    {basename}-{url_name}
By default the URLs created by SimpleRouter are appended with a trailing slash. This behavior can be modified by setting the trailing_slash argument to False when instantiating the router. For example: router = SimpleRouter(trailing_slash=False)
Trailing slashes are conventional in Django, but are not used by default in some other frameworks such as Rails. Which style you choose to use is largely a matter of preference, although some JavaScript frameworks may expect a particular routing style. The router will match lookup values containing any characters except slashes and period characters. For a more restrictive (or lenient) lookup pattern, set the lookup_value_regex attribute on the viewset. For example, you can limit the lookup to valid UUIDs: class MyModelViewSet(mixins.RetrieveModelMixin, viewsets.GenericViewSet):
lookup_field = 'my_model_id'
lookup_value_regex = '[0-9a-f]{32}'
DefaultRouter This router is similar to SimpleRouter as above, but additionally includes a default API root view, that returns a response containing hyperlinks to all the list views. It also generates routes for optional .json style format suffixes.
URL Style                                 HTTP Method                                   Action                                      URL Name
[.format]                                 GET                                           automatically generated root view           api-root
{prefix}/[.format]                        GET                                           list                                        {basename}-list
                                          POST                                          create                                      {basename}-list
{prefix}/{url_path}/[.format]             GET, or as specified by `methods` argument    `@action(detail=False)` decorated method   {basename}-{url_name}
{prefix}/{lookup}/[.format]               GET                                           retrieve                                    {basename}-detail
                                          PUT                                           update                                      {basename}-detail
                                          PATCH                                         partial_update                              {basename}-detail
                                          DELETE                                        destroy                                     {basename}-detail
{prefix}/{lookup}/{url_path}/[.format]    GET, or as specified by `methods` argument    `@action(detail=True)` decorated method    {basename}-{url_name}
As with SimpleRouter the trailing slashes on the URL routes can be removed by setting the trailing_slash argument to False when instantiating the router. router = DefaultRouter(trailing_slash=False)
Custom Routers Implementing a custom router isn't something you'd need to do very often, but it can be useful if you have specific requirements about how the URLs for your API are structured. Doing so allows you to encapsulate the URL structure in a reusable way that ensures you don't have to write your URL patterns explicitly for each new view. The simplest way to implement a custom router is to subclass one of the existing router classes. The .routes attribute is used to template the URL patterns that will be mapped to each viewset. The .routes attribute is a list of Route named tuples. The arguments to the Route named tuple are: url: A string representing the URL to be routed. May include the following format strings:
{prefix} - The URL prefix to use for this set of routes.
{lookup} - The lookup field used to match against a single instance.
{trailing_slash} - Either a '/' or an empty string, depending on the trailing_slash argument. mapping: A mapping of HTTP method names to the view methods. name: The name of the URL as used in reverse calls. May include the following format string:
{basename} - The base to use for the URL names that are created. initkwargs: A dictionary of any additional arguments that should be passed when instantiating the view. Note that the detail, basename, and suffix arguments are reserved for viewset introspection and are also used by the browsable API to generate the view name and breadcrumb links. Customizing dynamic routes You can also customize how the @action decorator is routed. Include the DynamicRoute named tuple in the .routes list, setting the detail argument as appropriate for the list-based and detail-based routes. In addition to detail, the arguments to DynamicRoute are: url: A string representing the URL to be routed. May include the same format strings as Route, and additionally accepts the {url_path} format string. name: The name of the URL as used in reverse calls. May include the following format strings:
{basename} - The base to use for the URL names that are created.
{url_name} - The url_name provided to the @action. initkwargs: A dictionary of any additional arguments that should be passed when instantiating the view. Example The following example will only route to the list and retrieve actions, and does not use the trailing slash convention. from rest_framework.routers import Route, DynamicRoute, SimpleRouter
class CustomReadOnlyRouter(SimpleRouter):
"""
A router for read-only APIs, which doesn't use trailing slashes.
"""
routes = [
Route(
url=r'^{prefix}$',
mapping={'get': 'list'},
name='{basename}-list',
detail=False,
initkwargs={'suffix': 'List'}
),
Route(
url=r'^{prefix}/{lookup}$',
mapping={'get': 'retrieve'},
name='{basename}-detail',
detail=True,
initkwargs={'suffix': 'Detail'}
),
DynamicRoute(
url=r'^{prefix}/{lookup}/{url_path}$',
name='{basename}-{url_name}',
detail=True,
initkwargs={}
)
]
Let's take a look at the routes our CustomReadOnlyRouter would generate for a simple viewset. views.py: class UserViewSet(viewsets.ReadOnlyModelViewSet):
"""
A viewset that provides the standard actions
"""
queryset = User.objects.all()
serializer_class = UserSerializer
lookup_field = 'username'
@action(detail=True)
def group_names(self, request, pk=None):
"""
Returns a list of all the group names that the given
user belongs to.
"""
user = self.get_object()
groups = user.groups.all()
return Response([group.name for group in groups])
urls.py: router = CustomReadOnlyRouter()
router.register('users', UserViewSet)
urlpatterns = router.urls
The following mappings would be generated...
URL                             HTTP Method   Action        URL Name
/users                          GET           list          user-list
/users/{username}               GET           retrieve      user-detail
/users/{username}/group_names   GET           group_names   user-group-names
For another example of setting the .routes attribute, see the source code for the SimpleRouter class. Advanced custom routers If you want to provide totally custom behavior, you can override BaseRouter and override the get_urls(self) method. The method should inspect the registered viewsets and return a list of URL patterns. The registered prefix, viewset and basename tuples may be inspected by accessing the self.registry attribute. You may also want to override the get_default_basename(self, viewset) method, or else always explicitly set the basename argument when registering your viewsets with the router. Third Party Packages The following third party packages are also available. DRF Nested Routers The drf-nested-routers package provides routers and relationship fields for working with nested resources. ModelRouter (wq.db.rest) The wq.db package provides an advanced ModelRouter class (and singleton instance) that extends DefaultRouter with a register_model() API. Much like Django's admin.site.register, the only required argument to rest.router.register_model is a model class. Reasonable defaults for a url prefix, serializer, and viewset will be inferred from the model and global configuration. from wq.db import rest
from myapp.models import MyModel
rest.router.register_model(MyModel)
DRF-extensions The DRF-extensions package provides routers for creating nested viewsets, collection level controllers with customizable endpoint names. routers.py | |
doc_1261 |
Call self as a function. | |
doc_1262 | See Migration guide for more details. tf.compat.v1.app.flags.flag_dict_to_args
tf.compat.v1.flags.flag_dict_to_args(
flag_map, multi_flags=None
)
This method is used to convert a dictionary into a sequence of parameters for a binary that parses arguments using this module.
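For illustration, a minimal sketch with hypothetical flag names:
args = list(tf.compat.v1.flags.flag_dict_to_args(
    {"batch_size": 32, "use_gpu": True, "layers": [1, 2]},
    multi_flags={"layers"}))
# ['--batch_size=32', '--use_gpu', '--layers=1', '--layers=2']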
Args
flag_map dict, a mapping where the keys are flag names (strings). Values are treated according to their type: If value is None, then only the name is emitted. If value is True, then only the name is emitted. If value is False, then only the name prepended with 'no' is emitted. If value is a string then --name=value is emitted. If value is a collection, this will emit --name=value1,value2,value3, unless the flag name is in multi_flags, in which case this will emit --name=value1 --name=value2 --name=value3. Everything else is converted to a string and passed as such.
multi_flags set, names (strings) of flags that should be treated as multi-flags.
Yields sequence of string suitable for a subprocess execution. | |
doc_1263 | Decorator for skipping tests if bz2 doesn’t exist. | |
doc_1264 |
For base class, this just appends an event to events. | |
doc_1265 |
Identity function. If p is the returned series, then p(x) == x for all values of x. Parameters
domain{None, array_like}, optional
If given, the array must be of the form [beg, end], where beg and end are the endpoints of the domain. If None is given then the class domain is used. The default is None.
window{None, array_like}, optional
If given, the resulting array must be of the form [beg, end], where beg and end are the endpoints of the window. If None is given then the class window is used. The default is None. Returns
new_seriesseries
Series representing the identity. | |
doc_1266 | See Migration guide for more details. tf.compat.v1.distribute.ReduceOp
SUM: Add all the values.
MEAN: Take the arithmetic mean ("average") of the values.
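For illustration, a minimal sketch that requests a cross-replica sum (devices are picked automatically):
strategy = tf.distribute.MirroredStrategy()
per_replica = strategy.run(lambda: tf.constant(1.0))
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)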
Class Variables
MEAN tf.distribute.ReduceOp
SUM tf.distribute.ReduceOp | |
doc_1267 |
Attributes
element_spec The type specification of an element of tf.distribute.DistributedIterator.
global_batch_size = 16
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensors(([1.],[2])).repeat(100).batch(global_batch_size)
distributed_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_iterator.element_spec
(PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.float32, name=None),
TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)),
PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.int32, name=None),
TensorSpec(shape=(None, 1), dtype=tf.int32, name=None)))
Methods get_next View source
get_next()
Returns the next input from the iterator for all replicas. Example use:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(100).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
dist_dataset_iterator = iter(dist_dataset)
@tf.function
def one_step(input):
return input
step_num = 5
for _ in range(step_num):
strategy.run(one_step, args=(dist_dataset_iterator.get_next(),))
strategy.experimental_local_results(dist_dataset_iterator.get_next())
(<tf.Tensor: shape=(1,), dtype=int64, numpy=array([10])>,
<tf.Tensor: shape=(1,), dtype=int64, numpy=array([11])>)
Returns A single tf.Tensor or a tf.distribute.DistributedValues which contains the next input for all replicas.
Raises tf.errors.OutOfRangeError: If the end of the iterator has been reached.
get_next_as_optional View source
get_next_as_optional()
Returns a tf.experimental.Optional that contains the next value for all replicas. If the tf.distribute.DistributedIterator has reached the end of the sequence, the returned tf.experimental.Optional will have no value. Example usage:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
global_batch_size = 2
steps_per_loop = 2
dataset = tf.data.Dataset.range(10).batch(global_batch_size)
distributed_iterator = iter(
strategy.experimental_distribute_dataset(dataset))
def step_fn(x):
# train the model with inputs
return x
@tf.function
def train_fn(distributed_iterator):
for _ in tf.range(steps_per_loop):
optional_data = distributed_iterator.get_next_as_optional()
if not optional_data.has_value():
break
per_replica_results = strategy.run(step_fn, args=(optional_data.get_value(),))
tf.print(strategy.experimental_local_results(per_replica_results))
train_fn(distributed_iterator)
# ([0 1], [2 3])
# ([4], [])
Returns A tf.experimental.Optional object representing the next value from the tf.distribute.DistributedIterator (if it has one) or no value.
__iter__
__iter__() | |
doc_1268 |
Computes the Mean Squared Error between two covariance estimators. (In the sense of the Frobenius norm). Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))), where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators. | |
doc_1269 | See Migration guide for more details. tf.compat.v1.keras.applications.vgg16.decode_predictions
tf.keras.applications.vgg16.decode_predictions(
preds, top=5
)
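For illustration, a minimal sketch with placeholder scores (in practice preds comes from model.predict on preprocessed 224x224 images):
import numpy as np
preds = np.random.rand(1, 1000)  # one sample, 1000 ImageNet class scores
preds /= preds.sum()
top2 = tf.keras.applications.vgg16.decode_predictions(preds, top=2)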
Arguments
preds Numpy array encoding a batch of predictions.
top Integer, how many top-guesses to return. Defaults to 5.
Returns A list of lists of top class prediction tuples (class_name, class_description, score). One list of tuples per sample in batch input.
Raises
ValueError In case of invalid shape of the pred array (must be 2D). | |
doc_1270 |
Alias for get_edgecolor. | |
doc_1271 | Holds the SMTPServer that spawned this channel. | |
doc_1272 | This method is called from dispatch_return() when stop_here() yields True. | |
doc_1273 |
Set the default font weight. The initial value is 'normal'. | |
doc_1274 |
Update k means estimate on a single mini-batch X. Parameters
Xarray-like of shape (n_samples, n_features)
Coordinates of the data points to cluster. It must be noted that X will be copied if it is not C-contiguous.
yIgnored
Not used, present here for API consistency by convention.
sample_weightarray-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight (default: None). Returns
self | |
doc_1275 | See Migration guide for more details. tf.compat.v1.function
tf.function(
func=None, input_signature=None, autograph=True, experimental_implements=None,
experimental_autograph_options=None, experimental_relax_shapes=False,
experimental_compile=None, experimental_follow_type_hints=None
)
tf.function constructs a callable that executes a TensorFlow graph (tf.Graph) created by trace-compiling the TensorFlow operations in func, effectively executing func as a TensorFlow graph. Example usage:
@tf.function
def f(x, y):
return x ** 2 + y
x = tf.constant([2, 3])
y = tf.constant([3, -2])
f(x, y)
<tf.Tensor: ... numpy=array([7, 7], ...)>
Features func may use data-dependent control flow, including if, for, while, break, continue and return statements:
@tf.function
def f(x):
if tf.reduce_sum(x) > 0:
return x * x
else:
return -x // 2
f(tf.constant(-2))
<tf.Tensor: ... numpy=1>
func's closure may include tf.Tensor and tf.Variable objects:
@tf.function
def f():
return x ** 2 + y
x = tf.constant([-2, -3])
y = tf.Variable([3, -2])
f()
<tf.Tensor: ... numpy=array([7, 7], ...)>
func may also use ops with side effects, such as tf.print, tf.Variable and others:
v = tf.Variable(1)
@tf.function
def f(x):
for i in tf.range(x):
v.assign_add(i)
f(3)
v
<tf.Variable ... numpy=4>
Key Point: Any Python side-effects (appending to a list, printing with print, etc) will only happen once, when func is traced. To have side-effects executed each time your tf.function runs, they need to be written as TF ops:
l = []
@tf.function
def f(x):
for i in x:
l.append(i + 1) # Caution! Will only happen once when tracing
f(tf.constant([1, 2, 3]))
l
[<tf.Tensor ...>]
Instead, use TensorFlow collections like tf.TensorArray:
@tf.function
def f(x):
ta = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)
for i in range(len(x)):
ta = ta.write(i, x[i] + 1)
return ta.stack()
f(tf.constant([1, 2, 3]))
<tf.Tensor: ..., numpy=array([2, 3, 4], ...)>
tf.function is polymorphic Internally, tf.function can build more than one graph, to support arguments with different data types or shapes, since TensorFlow can build more efficient graphs that are specialized on shapes and dtypes. tf.function also treats any pure Python value as opaque objects, and builds a separate graph for each set of Python arguments that it encounters. To obtain an individual graph, use the get_concrete_function method of the callable created by tf.function. It can be called with the same arguments as func and returns a special tf.Graph object:
@tf.function
def f(x):
return x + 1
isinstance(f.get_concrete_function(1).graph, tf.Graph)
True
Caution: Passing python scalars or lists as arguments to tf.function will always build a new graph. To avoid this, pass numeric arguments as Tensors whenever possible:
@tf.function
def f(x):
return tf.abs(x)
f1 = f.get_concrete_function(1)
f2 = f.get_concrete_function(2) # Slow - builds new graph
f1 is f2
False
f1 = f.get_concrete_function(tf.constant(1))
f2 = f.get_concrete_function(tf.constant(2)) # Fast - reuses f1
f1 is f2
True
Python numerical arguments should only be used when they take few distinct values, such as hyperparameters like the number of layers in a neural network. Input signatures For Tensor arguments, tf.function instantiates a separate graph for every unique set of input shapes and datatypes. The example below creates two separate graphs, each specialized to a different shape:
@tf.function
def f(x):
return x + 1
vector = tf.constant([1.0, 1.0])
matrix = tf.constant([[3.0]])
f.get_concrete_function(vector) is f.get_concrete_function(matrix)
False
An "input signature" can be optionally provided to tf.function to control the graphs traced. The input signature specifies the shape and type of each Tensor argument to the function using a tf.TensorSpec object. More general shapes can be used. This is useful to avoid creating multiple graphs when Tensors have dynamic shapes. It also restricts the shape and datatype of Tensors that can be used:
@tf.function(
input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def f(x):
return x + 1
vector = tf.constant([1.0, 1.0])
matrix = tf.constant([[3.0]])
f.get_concrete_function(vector) is f.get_concrete_function(matrix)
True
Variables may only be created once tf.function only allows creating new tf.Variable objects when it is called for the first time:
class MyModule(tf.Module):
def __init__(self):
self.v = None
@tf.function
def __call__(self, x):
if self.v is None:
self.v = tf.Variable(tf.ones_like(x))
return self.v * x
In general, it is recommended to create stateful objects like tf.Variable outside of tf.function and pass them as arguments. Using type annotations to improve performance experimental_follow_type_hints can be used along with type annotations to improve performance by reducing the number of expensive graph retracings. For example, an argument annotated with tf.Tensor is converted to Tensor even when the input is a non-Tensor value.
@tf.function(experimental_follow_type_hints=True)
def f_with_hints(x: tf.Tensor):
print('Tracing')
return x
@tf.function(experimental_follow_type_hints=False)
def f_no_hints(x: tf.Tensor):
print('Tracing')
return x
f_no_hints(1)
Tracing
<tf.Tensor: shape=(), dtype=int32, numpy=1>
f_no_hints(2)
Tracing
<tf.Tensor: shape=(), dtype=int32, numpy=2>
f_with_hints(1)
Tracing
<tf.Tensor: shape=(), dtype=int32, numpy=1>
f_with_hints(2)
<tf.Tensor: shape=(), dtype=int32, numpy=2>
Args
func the function to be compiled. If func is None, tf.function returns a decorator that can be invoked with a single argument - func. In other words, tf.function(input_signature=...)(func) is equivalent to tf.function(func, input_signature=...). The former can be used as a decorator.
input_signature A possibly nested sequence of tf.TensorSpec objects specifying the shapes and dtypes of the Tensors that will be supplied to this function. If None, a separate function is instantiated for each inferred input signature. If input_signature is specified, every input to func must be a Tensor, and func cannot accept **kwargs.
autograph Whether autograph should be applied on func before tracing a graph. Data-dependent control flow requires autograph=True. For more information, see the tf.function and AutoGraph guide.
experimental_implements If provided, contains a name of a "known" function this implements. For example "mycompany.my_recurrent_cell". This is stored as an attribute in the inference function, which can then be detected when processing the serialized function. See standardizing composite ops for details. For an example of utilizing this attribute, see this example: the linked code automatically detects and substitutes a function that implements "embedded_matmul" and allows TFLite to substitute its own implementation. For instance, a tensorflow user can use this attribute to mark that their function also implements embedded_matmul (perhaps more efficiently!) by specifying it using this parameter: @tf.function(experimental_implements="embedded_matmul"). This can either be specified as just the string name of the function or a NameAttrList corresponding to a list of key-value attributes associated with the function name. The name of the function will be in the 'name' field of the NameAttrList.
experimental_autograph_options Optional tuple of tf.autograph.experimental.Feature values.
experimental_relax_shapes When True, tf.function may generate fewer graphs that are less specialized on input shapes.
experimental_compile If True, the function is always compiled by XLA. XLA may be more efficient in some cases (e.g. TPU, XLA_GPU, dense tensor computations).
experimental_follow_type_hints When True, the function may use type annotations from func to optimize the tracing performance. For example, arguments annotated with tf.Tensor will automatically be converted to a Tensor.
Returns If func is not None, returns a callable that will execute the compiled function (and return zero or more tf.Tensor objects). If func is None, returns a decorator that, when invoked with a single func argument, returns a callable equivalent to the case above.
Raises ValueError when attempting to use experimental_compile, but XLA support is not enabled. | |
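A minimal sketch of requesting XLA compilation via experimental_compile; dense_relu is a hypothetical helper, and XLA support must be enabled in the installed build (otherwise a ValueError is raised, per the Raises note above):
import tensorflow as tf

@tf.function(experimental_compile=True)  # ask XLA to compile the traced graph
def dense_relu(x, w):
    return tf.nn.relu(tf.matmul(x, w))

x = tf.random.normal([4, 8])
w = tf.random.normal([8, 16])
y = dense_relu(x, w)  # shape (4, 16)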
doc_1276 | stat.FILE_ATTRIBUTE_COMPRESSED
stat.FILE_ATTRIBUTE_DEVICE
stat.FILE_ATTRIBUTE_DIRECTORY
stat.FILE_ATTRIBUTE_ENCRYPTED
stat.FILE_ATTRIBUTE_HIDDEN
stat.FILE_ATTRIBUTE_INTEGRITY_STREAM
stat.FILE_ATTRIBUTE_NORMAL
stat.FILE_ATTRIBUTE_NOT_CONTENT_INDEXED
stat.FILE_ATTRIBUTE_NO_SCRUB_DATA
stat.FILE_ATTRIBUTE_OFFLINE
stat.FILE_ATTRIBUTE_READONLY
stat.FILE_ATTRIBUTE_REPARSE_POINT
stat.FILE_ATTRIBUTE_SPARSE_FILE
stat.FILE_ATTRIBUTE_SYSTEM
stat.FILE_ATTRIBUTE_TEMPORARY
stat.FILE_ATTRIBUTE_VIRTUAL
New in version 3.5. | |
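A minimal sketch of testing these flags on Windows, where os.stat() exposes them via st_file_attributes; the path is hypothetical:
import os
import stat

attrs = os.stat('somefile.txt').st_file_attributes  # Windows only
if attrs & stat.FILE_ATTRIBUTE_HIDDEN:
    print('somefile.txt is hidden')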
doc_1277 | See Migration guide for more details. tf.compat.v1.raw_ops.VarHandleOp
tf.raw_ops.VarHandleOp(
dtype, shape, container='', shared_name='',
allowed_devices=[], name=None
)
Args
dtype A tf.DType. the type of this variable. Must agree with the dtypes of all ops using this variable.
shape A tf.TensorShape or list of ints. The (possibly partially specified) shape of this variable.
container An optional string. Defaults to "". the container this variable is placed in.
shared_name An optional string. Defaults to "". the name by which this variable is referred to.
allowed_devices An optional list of strings. Defaults to []. DEPRECATED. The allowed devices containing the resource variable. Set when the output ResourceHandle represents a per-replica/partitioned resource variable.
name A name for the operation (optional).
Returns A Tensor of type resource. | |
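A minimal sketch of pairing VarHandleOp with the companion raw ops for assignment and reads; in practice tf.Variable is usually preferable, and the shared_name here is illustrative:
import tensorflow as tf

handle = tf.raw_ops.VarHandleOp(dtype=tf.float32, shape=[2], shared_name='my_var')
tf.raw_ops.AssignVariableOp(resource=handle, value=tf.zeros([2]))
value = tf.raw_ops.ReadVariableOp(resource=handle, dtype=tf.float32)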
doc_1278 |
Load a FontManager from the JSON file named filename. See also json_dump | |
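A minimal sketch, assuming fontlist.json was previously written with json_dump:
from matplotlib import font_manager

fm = font_manager.json_load('fontlist.json')  # returns a FontManager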
doc_1279 | The response object that is used by default in Flask. Works like the response object from Werkzeug but is set to have an HTML mimetype by default. Quite often you don’t have to create this object yourself because make_response() will take care of that for you. If you want to replace the response object used you can subclass this and set response_class to your subclass. Changelog Changed in version 1.0: JSON support is added to the response, like the request. This is useful when testing to get the test client response data as JSON. Changed in version 1.0: Added max_cookie_size. Parameters
response (Union[Iterable[str], Iterable[bytes]]) –
status (Optional[Union[int, str, http.HTTPStatus]]) –
headers (werkzeug.datastructures.Headers) –
mimetype (Optional[str]) –
content_type (Optional[str]) –
direct_passthrough (bool) – Return type
None
accept_ranges
The Accept-Ranges header. Even though the name would indicate that multiple values are supported, it must be one string token only. The values 'bytes' and 'none' are common. Changelog New in version 0.7.
property access_control_allow_credentials: bool
Whether credentials can be shared by the browser to JavaScript code. As part of the preflight request it indicates whether credentials can be used on the cross origin request.
access_control_allow_headers
Which headers can be sent with the cross origin request.
access_control_allow_methods
Which methods can be used for the cross origin request.
access_control_allow_origin
The origin or ‘*’ for any origin that may make cross origin requests.
access_control_expose_headers
Which headers can be shared by the browser to JavaScript code.
access_control_max_age
The maximum age in seconds the access control settings can be cached for.
add_etag(overwrite=False, weak=False)
Add an etag for the current response if there is none yet. Changed in version 2.0: SHA-1 is used to generate the value. MD5 may not be available in some environments. Parameters
overwrite (bool) –
weak (bool) – Return type
None
age
The Age response-header field conveys the sender’s estimate of the amount of time since the response (or its revalidation) was generated at the origin server. Age values are non-negative decimal integers, representing time in seconds.
property allow: werkzeug.datastructures.HeaderSet
The Allow entity-header field lists the set of methods supported by the resource identified by the Request-URI. The purpose of this field is strictly to inform the recipient of valid methods associated with the resource. An Allow header field MUST be present in a 405 (Method Not Allowed) response.
property cache_control: werkzeug.datastructures.ResponseCacheControl
The Cache-Control general-header field is used to specify directives that MUST be obeyed by all caching mechanisms along the request/response chain.
calculate_content_length()
Returns the content length if available or None otherwise. Return type
Optional[int]
call_on_close(func)
Adds a function to the internal list of functions that should be called as part of closing down the response. Since 0.7 this function also returns the function that was passed so that this can be used as a decorator. Changelog New in version 0.6. Parameters
func (Callable[[], Any]) – Return type
Callable[[], Any]
close()
Close the wrapped response if possible. You can also use the object in a with statement which will automatically close it. Changelog New in version 0.9: Can now be used in a with statement. Return type
None
content_encoding
The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field.
property content_language: werkzeug.datastructures.HeaderSet
The Content-Language entity-header field describes the natural language(s) of the intended audience for the enclosed entity. Note that this might not be equivalent to all the languages used within the entity-body.
content_length
The Content-Length entity-header field indicates the size of the entity-body, in decimal number of OCTETs, sent to the recipient or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET.
content_location
The Content-Location entity-header field MAY be used to supply the resource location for the entity enclosed in the message when that entity is accessible from a location separate from the requested resource’s URI.
content_md5
The Content-MD5 entity-header field, as defined in RFC 1864, is an MD5 digest of the entity-body for the purpose of providing an end-to-end message integrity check (MIC) of the entity-body. (Note: a MIC is good for detecting accidental modification of the entity-body in transit, but is not proof against malicious attacks.)
property content_range: werkzeug.datastructures.ContentRange
The Content-Range header as a ContentRange object. Available even if the header is not set. Changelog New in version 0.7.
content_security_policy
The Content-Security-Policy header adds an additional layer of security to help detect and mitigate certain types of attacks.
content_security_policy_report_only
The Content-Security-Policy-Report-Only header adds a csp policy that is not enforced but is reported thereby helping detect certain types of attacks.
content_type
The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET.
cross_origin_embedder_policy
Prevents a document from loading any cross-origin resources that do not explicitly grant the document permission. Values must be a member of the werkzeug.http.COEP enum.
cross_origin_opener_policy
Allows control over sharing of browsing context group with cross-origin documents. Values must be a member of the werkzeug.http.COOP enum.
property data: Union[bytes, str]
A descriptor that calls get_data() and set_data().
date
The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822. Changed in version 2.0: The datetime object is timezone-aware.
delete_cookie(key, path='/', domain=None, secure=False, httponly=False, samesite=None)
Delete a cookie. Fails silently if key doesn’t exist. Parameters
key (str) – the key (name) of the cookie to be deleted.
path (str) – if the cookie that should be deleted was limited to a path, the path has to be defined here.
domain (Optional[str]) – if the cookie that should be deleted was limited to a domain, that domain has to be defined here.
secure (bool) – If True, the cookie will only be available via HTTPS.
httponly (bool) – Disallow JavaScript access to the cookie.
samesite (Optional[str]) – Limit the scope of the cookie to only be attached to requests that are “same-site”. Return type
None
direct_passthrough
Pass the response body directly through as the WSGI iterable. This can be used when the body is a binary file or other iterator of bytes, to skip some unnecessary checks. Use send_file() instead of setting this manually.
expires
The Expires entity-header field gives the date/time after which the response is considered stale. A stale cache entry may not normally be returned by a cache. Changed in version 2.0: The datetime object is timezone-aware.
classmethod force_type(response, environ=None)
Enforce that the WSGI response is a response object of the current type. Werkzeug will use the Response internally in many situations like the exceptions. If you call get_response() on an exception you will get back a regular Response object, even if you are using a custom subclass. This method can enforce a given response type, and it will also convert arbitrary WSGI callables into response objects if an environ is provided: # convert a Werkzeug response object into an instance of the
# MyResponseClass subclass.
response = MyResponseClass.force_type(response)
# convert any WSGI application into a response object
response = MyResponseClass.force_type(response, environ)
This is especially useful if you want to post-process responses in the main dispatcher and use functionality provided by your subclass. Keep in mind that this will modify response objects in place if possible! Parameters
response (Response) – a response object or wsgi application.
environ (Optional[WSGIEnvironment]) – a WSGI environment object. Returns
a response object. Return type
Response
freeze(no_etag=None)
Make the response object ready to be pickled. Does the following: Buffer the response into a list, ignoring implicit_sequence_conversion and direct_passthrough. Set the Content-Length header. Generate an ETag header if one is not already set. Changed in version 2.0: An ETag header is added, the no_etag parameter is deprecated and will be removed in Werkzeug 2.1. Changelog Changed in version 0.6: The Content-Length header is set. Parameters
no_etag (None) – Return type
None
classmethod from_app(app, environ, buffered=False)
Create a new response object from an application output. This works best if you pass it an application that returns a generator all the time. Sometimes applications may use the write() callable returned by the start_response function. This tries to resolve such edge cases automatically. But if you don’t get the expected output you should set buffered to True which enforces buffering. Parameters
app (WSGIApplication) – the WSGI application to execute.
environ (WSGIEnvironment) – the WSGI environment to execute against.
buffered (bool) – set to True to enforce buffering. Returns
a response object. Return type
Response
get_app_iter(environ)
Returns the application iterator for the given environ. Depending on the request method and the current status code the return value might be an empty response rather than the one from the response. If the request method is HEAD or the status code is in a range where the HTTP specification requires an empty response, an empty iterable is returned. Changelog New in version 0.6. Parameters
environ (WSGIEnvironment) – the WSGI environment of the request. Returns
a response iterable. Return type
Iterable[bytes]
get_data(as_text=False)
The string representation of the response body. Whenever you call this property the response iterable is encoded and flattened. This can lead to unwanted behavior if you stream big data. This behavior can be disabled by setting implicit_sequence_conversion to False. If as_text is set to True the return value will be a decoded string. Changelog New in version 0.9. Parameters
as_text (bool) – Return type
Union[bytes, str]
get_etag()
Return a tuple in the form (etag, is_weak). If there is no ETag the return value is (None, None). Return type
Union[Tuple[str, bool], Tuple[None, None]]
get_json(force=False, silent=False)
Parse data as JSON. Useful during testing. If the mimetype does not indicate JSON (application/json, see is_json()), this returns None. Unlike Request.get_json(), the result is not cached. Parameters
force (bool) – Ignore the mimetype and always try to parse JSON.
silent (bool) – Silence parsing errors and return None instead. Return type
Optional[Any]
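A minimal sketch of reading a JSON response in a test, assuming a hypothetical /ping route:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/ping')
def ping():
    return jsonify(ok=True)

with app.test_client() as client:
    resp = client.get('/ping')
    assert resp.get_json() == {'ok': True}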
get_wsgi_headers(environ)
This is automatically called right before the response is started and returns headers modified for the given environment. It returns a copy of the headers from the response with some modifications applied if necessary. For example the location header (if present) is joined with the root URL of the environment. Also the content length is automatically set to zero here for certain status codes. Changelog Changed in version 0.6: Previously that function was called fix_headers and modified the response object in place. Also since 0.6, IRIs in location and content-location headers are handled properly. Also starting with 0.6, Werkzeug will attempt to set the content length if it is able to figure it out on its own. This is the case if all the strings in the response iterable are already encoded and the iterable is buffered. Parameters
environ (WSGIEnvironment) – the WSGI environment of the request. Returns
returns a new Headers object. Return type
werkzeug.datastructures.Headers
get_wsgi_response(environ)
Returns the final WSGI response as tuple. The first item in the tuple is the application iterator, the second the status and the third the list of headers. The response returned is created specially for the given environment. For example if the request method in the WSGI environment is 'HEAD' the response will be empty and only the headers and status code will be present. Changelog New in version 0.6. Parameters
environ (WSGIEnvironment) – the WSGI environment of the request. Returns
an (app_iter, status, headers) tuple. Return type
Tuple[Iterable[bytes], str, List[Tuple[str, str]]]
property is_json: bool
Check if the mimetype indicates JSON data, either application/json or application/*+json.
property is_sequence: bool
If the iterator is buffered, this property will be True. A response object will consider an iterator to be buffered if the response attribute is a list or tuple. Changelog New in version 0.6.
property is_streamed: bool
If the response is streamed (the response is not an iterable with a length information) this property is True. In this case streamed means that there is no information about the number of iterations. This is usually True if a generator is passed to the response object. This is useful for checking before applying some sort of post filtering that should not take place for streamed responses.
iter_encoded()
Iterate the response encoded with the encoding of the response. If the response object is invoked as a WSGI application, the return value of this method is used as the application iterator unless direct_passthrough was activated. Return type
Iterator[bytes]
property json: Optional[Any]
The parsed JSON data if mimetype indicates JSON (application/json, see is_json()). Calls get_json() with default arguments.
last_modified
The Last-Modified entity-header field indicates the date and time at which the origin server believes the variant was last modified. Changed in version 2.0: The datetime object is timezone-aware.
location
The Location response-header field is used to redirect the recipient to a location other than the Request-URI for completion of the request or identification of a new resource.
make_conditional(request_or_environ, accept_ranges=False, complete_length=None)
Make the response conditional to the request. This method works best if an etag was defined for the response already. The add_etag method can be used to do that. If called without etag just the date header is set. This does nothing if the request method in the request or environ is anything but GET or HEAD. For optimal performance when handling range requests, it’s recommended that your response data object implements seekable, seek and tell methods as described by io.IOBase. Objects returned by wrap_file() automatically implement those methods. It does not remove the body of the response because that’s something the __call__() function does for us automatically. Returns self so that you can do return resp.make_conditional(req) but modifies the object in-place. Parameters
request_or_environ (WSGIEnvironment) – a request object or WSGI environment to be used to make the response conditional against.
accept_ranges (Union[bool, str]) – This parameter dictates the value of Accept-Ranges header. If False (default), the header is not set. If True, it will be set to "bytes". If None, it will be set to "none". If it’s a string, it will use this value.
complete_length (Optional[int]) – Will be used only in valid Range Requests. It will set Content-Range complete length value and compute Content-Length real value. This parameter is mandatory for successful Range Requests completion. Raises
RequestedRangeNotSatisfiable if Range header could not be parsed or satisfied. Return type
Response Changed in version 2.0: Range processing is skipped if length is 0 instead of raising a 416 Range Not Satisfiable error.
make_sequence()
Converts the response iterator into a list. By default this happens automatically if required. If implicit_sequence_conversion is disabled, this method is not automatically called and some properties might raise exceptions. This also encodes all the items. Changelog New in version 0.6. Return type
None
property max_cookie_size: int
Read-only view of the MAX_COOKIE_SIZE config key. See max_cookie_size in Werkzeug’s docs.
property mimetype: Optional[str]
The mimetype (content type without charset etc.)
property mimetype_params: Dict[str, str]
The mimetype parameters as dict. For example if the content type is text/html; charset=utf-8 the params would be {'charset': 'utf-8'}. Changelog New in version 0.5.
property retry_after: Optional[datetime.datetime]
The Retry-After response-header field can be used with a 503 (Service Unavailable) response to indicate how long the service is expected to be unavailable to the requesting client. Time in seconds until expiration or date. Changed in version 2.0: The datetime object is timezone-aware.
set_cookie(key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False, samesite=None)
Sets a cookie. A warning is raised if the size of the cookie header exceeds max_cookie_size, but the header will still be set. Parameters
key (str) – the key (name) of the cookie to be set.
value (str) – the value of the cookie.
max_age (Optional[Union[datetime.timedelta, int]]) – should be a number of seconds, or None (default) if the cookie should last only as long as the client’s browser session.
expires (Optional[Union[str, datetime.datetime, int, float]]) – should be a datetime object or UNIX timestamp.
path (Optional[str]) – limits the cookie to a given path, per default it will span the whole domain.
domain (Optional[str]) – if you want to set a cross-domain cookie. For example, domain=".example.com" will set a cookie that is readable by the domain www.example.com, foo.example.com etc. Otherwise, a cookie will only be readable by the domain that set it.
secure (bool) – If True, the cookie will only be available via HTTPS.
httponly (bool) – Disallow JavaScript access to the cookie.
samesite (Optional[str]) – Limit the scope of the cookie to only be attached to requests that are “same-site”. Return type
None
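A minimal sketch of setting a hardened cookie on a response; the route and cookie values are illustrative:
from flask import Flask, make_response

app = Flask(__name__)

@app.route('/login')
def login():
    resp = make_response('logged in')
    resp.set_cookie('session_id', 'abc123', max_age=3600,
                    secure=True, httponly=True, samesite='Lax')
    return resp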
set_data(value)
Sets a new string as response. The value must be a string or bytes. If a string is set it’s encoded to the charset of the response (utf-8 by default). Changelog New in version 0.9. Parameters
value (Union[bytes, str]) – Return type
None
set_etag(etag, weak=False)
Set the etag, and override the old one if there was one. Parameters
etag (str) –
weak (bool) – Return type
None
property status: str
The HTTP status code as a string.
property status_code: int
The HTTP status code as a number.
property stream: werkzeug.wrappers.response.ResponseStream
The response iterable as write-only stream.
property vary: werkzeug.datastructures.HeaderSet
The Vary field value indicates the set of request-header fields that fully determines, while the response is fresh, whether a cache is permitted to use the response to reply to a subsequent request without revalidation.
property www_authenticate: werkzeug.datastructures.WWWAuthenticate
The WWW-Authenticate header in a parsed form. | |
doc_1280 | Returns an InlineFormSet using modelformset_factory() with defaults of formset=BaseInlineFormSet, can_delete=True, and extra=3. If your model has more than one ForeignKey to the parent_model, you must specify a fk_name. See Inline formsets for example usage. Changed in Django 3.2: The absolute_max and can_delete_extra arguments were added. Changed in Django 4.0: The renderer argument was added. | |
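A minimal sketch, assuming a Book model with a single ForeignKey to Author (the module path is hypothetical):
from django.forms import inlineformset_factory
from myapp.models import Author, Book

BookFormSet = inlineformset_factory(Author, Book, fields=['title'])
author = Author.objects.get(pk=1)
formset = BookFormSet(instance=author)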
doc_1281 | Returns a list of all hyperparameters. | |
doc_1282 |
Applies the hard shrinkage function element-wise: \text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}
Parameters
lambd – the λ value for the Hardshrink formulation. Default: 0.5 Shape:
Input: (N, *) where * means any number of additional dimensions. Output: (N, *), same shape as the input. Examples: >>> import torch
>>> import torch.nn as nn
>>> m = nn.Hardshrink()
>>> input = torch.randn(2)
>>> output = m(input) | |
doc_1283 | Return non-zero if the mode is from a socket. | |
doc_1284 | Returns a date object containing the previous valid day. This function can also return None or raise an Http404 exception, depending on the values of allow_empty and allow_future. | |
doc_1285 | Returns the GEOS geometry type identification number. The following table shows the value for each geometry type:
Geometry ID
Point 0
LineString 1
LinearRing 2
Polygon 3
MultiPoint 4
MultiLineString 5
MultiPolygon 6
GeometryCollection 7 | |
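A minimal sketch of reading the identifier from a couple of GEOS geometries:
from django.contrib.gis.geos import GEOSGeometry

GEOSGeometry('POINT(0 0)').geom_typeid            # 0
GEOSGeometry('LINESTRING(0 0, 1 1)').geom_typeid  # 1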
doc_1286 | tf.compat.v1.train.natural_exp_decay(
learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None
)
When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies an exponential decay function to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as: decayed_learning_rate = learning_rate * exp(-decay_rate * global_step /
decay_step)
or, if staircase is True, as: decayed_learning_rate = learning_rate * exp(-decay_rate * floor(global_step /
decay_step))
Example: decay exponentially with a decay rate of 0.5: ...
global_step = tf.Variable(0, trainable=False)
learning_rate = 0.1
decay_steps = 5
k = 0.5
learning_rate = tf.compat.v1.train.natural_exp_decay(learning_rate,
global_step,
decay_steps, k)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
.minimize(...my loss..., global_step=global_step)
)
Args
learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
global_step A Python number. Global step to use for the decay computation. Must not be negative.
decay_steps How often to apply decay.
decay_rate A Python number. The decay rate.
staircase Whether to apply decay in a discrete staircase, as opposed to continuous, fashion.
name String. Optional name of the operation. Defaults to 'ExponentialTimeDecay'.
Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate.
Raises
ValueError if global_step is not supplied. Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions. | |
doc_1287 | ABSOLUTE_URL_OVERRIDES = {
'blogs.blog': lambda o: "/blogs/%s/" % o.slug,
'news.story': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug),
}
The model name used in this setting should be all lowercase, regardless of the case of the actual model class name. ADMINS Default: [] (Empty list) A list of all the people who get code error notifications. When DEBUG=False and AdminEmailHandler is configured in LOGGING (done by default), Django emails these people the details of exceptions raised in the request/response cycle. Each item in the list should be a tuple of (Full name, email address). Example: [('John', 'john@example.com'), ('Mary', 'mary@example.com')]
ALLOWED_HOSTS Default: [] (Empty list) A list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent HTTP Host header attacks, which are possible even under many seemingly-safe web server configurations. Values in this list can be fully qualified names (e.g. 'www.example.com'), in which case they will be matched against the request’s Host header exactly (case-insensitive, not including port). A value beginning with a period can be used as a subdomain wildcard: '.example.com' will match example.com, www.example.com, and any other subdomain of example.com. A value of '*' will match anything; in this case you are responsible to provide your own validation of the Host header (perhaps in a middleware; if so this middleware must be listed first in MIDDLEWARE). Django also allows the fully qualified domain name (FQDN) of any entries. Some browsers include a trailing dot in the Host header which Django strips when performing host validation. If the Host header (or X-Forwarded-Host if USE_X_FORWARDED_HOST is enabled) does not match any value in this list, the django.http.HttpRequest.get_host() method will raise SuspiciousOperation. When DEBUG is True and ALLOWED_HOSTS is empty, the host is validated against ['.localhost', '127.0.0.1', '[::1]']. ALLOWED_HOSTS is also checked when running tests. This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection. APPEND_SLASH Default: True When set to True, if the request URL does not match any of the patterns in the URLconf and it doesn’t end in a slash, an HTTP redirect is issued to the same URL with a slash appended. Note that the redirect may cause any data submitted in a POST request to be lost. The APPEND_SLASH setting is only used if CommonMiddleware is installed (see Middleware). See also PREPEND_WWW. CACHES Default: {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
}
}
A dictionary containing the settings for all caches to be used with Django. It is a nested dictionary whose contents maps cache aliases to a dictionary containing the options for an individual cache. The CACHES setting must configure a default cache; any number of additional caches may also be specified. If you are using a cache backend other than the local memory cache, or you need to define multiple caches, other options will be required. The following cache options are available. BACKEND Default: '' (Empty string) The cache backend to use. The built-in cache backends are: 'django.core.cache.backends.db.DatabaseCache' 'django.core.cache.backends.dummy.DummyCache' 'django.core.cache.backends.filebased.FileBasedCache' 'django.core.cache.backends.locmem.LocMemCache' 'django.core.cache.backends.memcached.PyMemcacheCache' 'django.core.cache.backends.memcached.PyLibMCCache' 'django.core.cache.backends.redis.RedisCache' You can use a cache backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path of a cache backend class (i.e. mypackage.backends.whatever.WhateverCache). Changed in Django 3.2: The PyMemcacheCache backend was added. Changed in Django 4.0: The RedisCache backend was added. KEY_FUNCTION A string containing a dotted path to a function (or any callable) that defines how to compose a prefix, version and key into a final cache key. The default implementation is equivalent to the function: def make_key(key, key_prefix, version):
return ':'.join([key_prefix, str(version), key])
You may use any key function you want, as long as it has the same argument signature. See the cache documentation for more information. KEY_PREFIX Default: '' (Empty string) A string that will be automatically included (prepended by default) to all cache keys used by the Django server. See the cache documentation for more information. LOCATION Default: '' (Empty string) The location of the cache to use. This might be the directory for a file system cache, a host and port for a memcache server, or an identifying name for a local memory cache. e.g.: CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
'LOCATION': '/var/tmp/django_cache',
}
}
OPTIONS Default: None Extra parameters to pass to the cache backend. Available parameters vary depending on your cache backend. Some information on available parameters can be found in the cache arguments documentation. For more information, consult your backend module’s own documentation. TIMEOUT Default: 300 The number of seconds before a cache entry is considered stale. If the value of this setting is None, cache entries will not expire. A value of 0 causes keys to immediately expire (effectively “don’t cache”). VERSION Default: 1 The default version number for cache keys generated by the Django server. See the cache documentation for more information. CACHE_MIDDLEWARE_ALIAS Default: 'default' The cache connection to use for the cache middleware. CACHE_MIDDLEWARE_KEY_PREFIX Default: '' (Empty string) A string which will be prefixed to the cache keys generated by the cache middleware. This prefix is combined with the KEY_PREFIX setting; it does not replace it. See Django’s cache framework. CACHE_MIDDLEWARE_SECONDS Default: 600 The default number of seconds to cache a page for the cache middleware. See Django’s cache framework. CSRF_COOKIE_AGE Default: 31449600 (approximately 1 year, in seconds) The age of CSRF cookies, in seconds. The reason for setting a long-lived expiration time is to avoid problems in the case of a user closing a browser or bookmarking a page and then loading that page from a browser cache. Without persistent cookies, the form submission would fail in this case. Some browsers (specifically Internet Explorer) can disallow the use of persistent cookies or can have the indexes to the cookie jar corrupted on disk, thereby causing CSRF protection checks to (sometimes intermittently) fail. Change this setting to None to use session-based CSRF cookies, which keep the cookies in-memory instead of on persistent storage. CSRF_COOKIE_DOMAIN Default: None The domain to be used when setting the CSRF cookie. This can be useful for easily allowing cross-subdomain requests to be excluded from the normal cross site request forgery protection. It should be set to a string such as ".example.com" to allow a POST request from a form on one subdomain to be accepted by a view served from another subdomain. Please note that the presence of this setting does not imply that Django’s CSRF protection is safe from cross-subdomain attacks by default - please see the CSRF limitations section. CSRF_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the CSRF cookie. If this is set to True, client-side JavaScript will not be able to access the CSRF cookie. Designating the CSRF cookie as HttpOnly doesn’t offer any practical protection because CSRF is only to protect against cross-domain attacks. If an attacker can read the cookie via JavaScript, they’re already on the same domain as far as the browser knows, so they can do anything they like anyway. (XSS is a much bigger hole than CSRF.) Although the setting offers little practical benefit, it’s sometimes required by security auditors. If you enable this and need to send the value of the CSRF token with an AJAX request, your JavaScript must pull the value from a hidden CSRF token form input instead of from the cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. CSRF_COOKIE_NAME Default: 'csrftoken' The name of the cookie to use for the CSRF authentication token. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Cross Site Request Forgery protection. 
CSRF_COOKIE_PATH Default: '/' The path set on the CSRF cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own CSRF cookie. CSRF_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the CSRF cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. CSRF_COOKIE_SECURE Default: False Whether to use a secure cookie for the CSRF cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent with an HTTPS connection. CSRF_USE_SESSIONS Default: False Whether to store the CSRF token in the user’s session instead of in a cookie. It requires the use of django.contrib.sessions. Storing the CSRF token in a cookie (Django’s default) is safe, but storing it in the session is common practice in other web frameworks and therefore sometimes demanded by security auditors. Since the default error views require the CSRF token, SessionMiddleware must appear in MIDDLEWARE before any middleware that may raise an exception to trigger an error view (such as PermissionDenied) if you’re using CSRF_USE_SESSIONS. See Middleware ordering. CSRF_FAILURE_VIEW Default: 'django.views.csrf.csrf_failure' A dotted path to the view function to be used when an incoming request is rejected by the CSRF protection. The function should have this signature: def csrf_failure(request, reason=""):
...
where reason is a short message (intended for developers or logging, not for end users) indicating the reason the request was rejected. It should return an HttpResponseForbidden. django.views.csrf.csrf_failure() accepts an additional template_name parameter that defaults to '403_csrf.html'. If a template with that name exists, it will be used to render the page. CSRF_HEADER_NAME Default: 'HTTP_X_CSRFTOKEN' The name of the request header used for CSRF authentication. As with other HTTP headers in request.META, the header name received from the server is normalized by converting all characters to uppercase, replacing any hyphens with underscores, and adding an 'HTTP_' prefix to the name. For example, if your client sends a 'X-XSRF-TOKEN' header, the setting should be 'HTTP_X_XSRF_TOKEN'. CSRF_TRUSTED_ORIGINS Default: [] (Empty list) A list of trusted origins for unsafe requests (e.g. POST). For requests that include the Origin header, Django’s CSRF protection requires that header match the origin present in the Host header. For a secure unsafe request that doesn’t include the Origin header, the request must have a Referer header that matches the origin present in the Host header. These checks prevent, for example, a POST request from subdomain.example.com from succeeding against api.example.com. If you need cross-origin unsafe requests, continuing the example, add 'https://subdomain.example.com' to this list (and/or http://... if requests originate from an insecure page). The setting also supports subdomains, so you could add 'https://*.example.com', for example, to allow access from all subdomains of example.com. Changed in Django 4.0: The values in older versions must only include the hostname (possibly with a leading dot) and not the scheme or an asterisk. Also, Origin header checking isn’t performed in older versions. DATABASES Default: {} (Empty dictionary) A dictionary containing the settings for all databases to be used with Django. It is a nested dictionary whose contents map a database alias to a dictionary containing the options for an individual database. The DATABASES setting must configure a default database; any number of additional databases may also be specified. The simplest possible settings file is for a single-database setup using SQLite. This can be configured using the following: DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'mydatabase',
}
}
When connecting to other database backends, such as MariaDB, MySQL, Oracle, or PostgreSQL, additional connection parameters will be required. See the ENGINE setting below on how to specify other database types. This example is for PostgreSQL: DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'mydatabase',
'USER': 'mydatabaseuser',
'PASSWORD': 'mypassword',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
The following inner options that may be required for more complex configurations are available: ATOMIC_REQUESTS Default: False Set this to True to wrap each view in a transaction on this database. See Tying transactions to HTTP requests. AUTOCOMMIT Default: True Set this to False if you want to disable Django’s transaction management and implement your own. ENGINE Default: '' (Empty string) The database backend to use. The built-in database backends are: 'django.db.backends.postgresql' 'django.db.backends.mysql' 'django.db.backends.sqlite3' 'django.db.backends.oracle' You can use a database backend that doesn’t ship with Django by setting ENGINE to a fully-qualified path (i.e. mypackage.backends.whatever). HOST Default: '' (Empty string) Which host to use when connecting to the database. An empty string means localhost. Not used with SQLite. If this value starts with a forward slash ('/') and you’re using MySQL, MySQL will connect via a Unix socket to the specified socket. For example: "HOST": '/var/run/mysql'
If you’re using MySQL and this value doesn’t start with a forward slash, then this value is assumed to be the host. If you’re using PostgreSQL, by default (empty HOST), the connection to the database is done through UNIX domain sockets (‘local’ lines in pg_hba.conf). If your UNIX domain socket is not in the standard location, use the same value of unix_socket_directory from postgresql.conf. If you want to connect through TCP sockets, set HOST to ‘localhost’ or ‘127.0.0.1’ (‘host’ lines in pg_hba.conf). On Windows, you should always define HOST, as UNIX domain sockets are not available. NAME Default: '' (Empty string) The name of the database to use. For SQLite, it’s the full path to the database file. When specifying the path, always use forward slashes, even on Windows (e.g. C:/homes/user/mysite/sqlite3.db). CONN_MAX_AGE Default: 0 The lifetime of a database connection, as an integer of seconds. Use 0 to close database connections at the end of each request — Django’s historical behavior — and None for unlimited persistent connections. OPTIONS Default: {} (Empty dictionary) Extra parameters to use when connecting to the database. Available parameters vary depending on your database backend. Some information on available parameters can be found in the Database Backends documentation. For more information, consult your backend module’s own documentation. PASSWORD Default: '' (Empty string) The password to use when connecting to the database. Not used with SQLite. PORT Default: '' (Empty string) The port to use when connecting to the database. An empty string means the default port. Not used with SQLite. TIME_ZONE Default: None A string representing the time zone for this database connection or None. This inner option of the DATABASES setting accepts the same values as the general TIME_ZONE setting. When USE_TZ is True and this option is set, reading datetimes from the database returns aware datetimes in this time zone instead of UTC. When USE_TZ is False, it is an error to set this option.
If the database backend doesn’t support time zones (e.g. SQLite, MySQL, Oracle), Django reads and writes datetimes in local time according to this option if it is set and in UTC if it isn’t. Changing the connection time zone changes how datetimes are read from and written to the database. If Django manages the database and you don’t have a strong reason to do otherwise, you should leave this option unset. It’s best to store datetimes in UTC because it avoids ambiguous or nonexistent datetimes during daylight saving time changes. Also, receiving datetimes in UTC keeps datetime arithmetic simple — there’s no need to consider potential offset changes over a DST transition. If you’re connecting to a third-party database that stores datetimes in a local time rather than UTC, then you must set this option to the appropriate time zone. Likewise, if Django manages the database but third-party systems connect to the same database and expect to find datetimes in local time, then you must set this option.
If the database backend supports time zones (e.g. PostgreSQL), the TIME_ZONE option is very rarely needed. It can be changed at any time; the database takes care of converting datetimes to the desired time zone. Setting the time zone of the database connection may be useful for running raw SQL queries involving date/time functions provided by the database, such as date_trunc, because their results depend on the time zone. However, this has a downside: receiving all datetimes in local time makes datetime arithmetic more tricky — you must account for possible offset changes over DST transitions. Consider converting to local time explicitly with AT TIME ZONE in raw SQL queries instead of setting the TIME_ZONE option. DISABLE_SERVER_SIDE_CURSORS Default: False Set this to True if you want to disable the use of server-side cursors with QuerySet.iterator(). Transaction pooling and server-side cursors describes the use case. This is a PostgreSQL-specific setting. USER Default: '' (Empty string) The username to use when connecting to the database. Not used with SQLite. TEST Default: {} (Empty dictionary) A dictionary of settings for test databases; for more details about the creation and use of test databases, see The test database. Here’s an example with a test database configuration: DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'USER': 'mydatabaseuser',
'NAME': 'mydatabase',
'TEST': {
'NAME': 'mytestdatabase',
},
},
}
The following keys in the TEST dictionary are available: CHARSET Default: None The character set encoding used to create the test database. The value of this string is passed directly through to the database, so its format is backend-specific. Supported by the PostgreSQL (postgresql) and MySQL (mysql) backends. COLLATION Default: None The collation order to use when creating the test database. This value is passed directly to the backend, so its format is backend-specific. Only supported for the mysql backend (see the MySQL manual for details). DEPENDENCIES Default: ['default'], for all databases other than default, which has no dependencies. The creation-order dependencies of the database. See the documentation on controlling the creation order of test databases for details. MIGRATE Default: True When set to False, migrations won’t run when creating the test database. This is similar to setting None as a value in MIGRATION_MODULES, but for all apps. MIRROR Default: None The alias of the database that this database should mirror during testing. This setting exists to allow for testing of primary/replica (referred to as master/slave by some databases) configurations of multiple databases. See the documentation on testing primary/replica configurations for details. NAME Default: None The name of database to use when running the test suite. If the default value (None) is used with the SQLite database engine, the tests will use a memory resident database. For all other database engines the test database will use the name 'test_' + DATABASE_NAME. See The test database. SERIALIZE Boolean value to control whether or not the default test runner serializes the database into an in-memory JSON string before running tests (used to restore the database state between tests if you don’t have transactions). You can set this to False to speed up creation time if you don’t have any test classes with serialized_rollback=True. Deprecated since version 4.0: This setting is deprecated as it can be inferred from the databases with the serialized_rollback option enabled. TEMPLATE This is a PostgreSQL-specific setting. The name of a template (e.g. 'template0') from which to create the test database. CREATE_DB Default: True This is an Oracle-specific setting. If it is set to False, the test tablespaces won’t be automatically created at the beginning of the tests or dropped at the end. CREATE_USER Default: True This is an Oracle-specific setting. If it is set to False, the test user won’t be automatically created at the beginning of the tests and dropped at the end. USER Default: None This is an Oracle-specific setting. The username to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will use 'test_' + USER. PASSWORD Default: None This is an Oracle-specific setting. The password to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will generate a random password. ORACLE_MANAGED_FILES Default: False This is an Oracle-specific setting. If set to True, Oracle Managed Files (OMF) tablespaces will be used. DATAFILE and DATAFILE_TMP will be ignored. TBLSPACE Default: None This is an Oracle-specific setting. The name of the tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER. TBLSPACE_TMP Default: None This is an Oracle-specific setting. The name of the temporary tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER + '_temp'. 
DATAFILE Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE. If not provided, Django will use TBLSPACE + '.dbf'. DATAFILE_TMP Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE_TMP. If not provided, Django will use TBLSPACE_TMP + '.dbf'. DATAFILE_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE is allowed to grow to. DATAFILE_TMP_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE_TMP is allowed to grow to. DATAFILE_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE. DATAFILE_TMP_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE_TMP. DATAFILE_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE is extended when more space is required. DATAFILE_TMP_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE_TMP is extended when more space is required. DATA_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size in bytes that a request body may be before a SuspiciousOperation (RequestDataTooBig) is raised. The check is done when accessing request.body or request.POST and is calculated against the total request size excluding any file upload data. You can set this to None to disable the check. Applications that are expected to receive unusually large form posts should tune this setting. The amount of request data is correlated to the amount of memory needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. See also FILE_UPLOAD_MAX_MEMORY_SIZE. DATA_UPLOAD_MAX_NUMBER_FIELDS Default: 1000 The maximum number of parameters that may be received via GET or POST before a SuspiciousOperation (TooManyFields) is raised. You can set this to None to disable the check. Applications that are expected to receive an unusually large number of form fields should tune this setting. The number of request parameters is correlated to the amount of time needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. DATABASE_ROUTERS Default: [] (Empty list) The list of routers that will be used to determine which database to use when performing a database query. See the documentation on automatic database routing in multi database configurations. DATE_FORMAT Default: 'N j, Y' (e.g. Feb. 4, 2003) The default formatting to use for displaying date fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATETIME_FORMAT, TIME_FORMAT and SHORT_DATE_FORMAT. DATE_INPUT_FORMATS Default: [
'%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y', # '2006-10-25', '10/25/2006', '10/25/06'
'%b %d %Y', '%b %d, %Y', # 'Oct 25 2006', 'Oct 25, 2006'
'%d %b %Y', '%d %b, %Y', # '25 Oct 2006', '25 Oct, 2006'
'%B %d %Y', '%B %d, %Y', # 'October 25 2006', 'October 25, 2006'
'%d %B %Y', '%d %B, %Y', # '25 October 2006', '25 October, 2006'
]
A list of formats that will be accepted when inputting data on a date field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATETIME_INPUT_FORMATS and TIME_INPUT_FORMATS. DATETIME_FORMAT Default: 'N j, Y, P' (e.g. Feb. 4, 2003, 4 p.m.) The default formatting to use for displaying datetime fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT, TIME_FORMAT and SHORT_DATETIME_FORMAT. DATETIME_INPUT_FORMATS Default: [
'%Y-%m-%d %H:%M:%S', # '2006-10-25 14:30:59'
'%Y-%m-%d %H:%M:%S.%f', # '2006-10-25 14:30:59.000200'
'%Y-%m-%d %H:%M', # '2006-10-25 14:30'
'%m/%d/%Y %H:%M:%S', # '10/25/2006 14:30:59'
'%m/%d/%Y %H:%M:%S.%f', # '10/25/2006 14:30:59.000200'
'%m/%d/%Y %H:%M', # '10/25/2006 14:30'
'%m/%d/%y %H:%M:%S', # '10/25/06 14:30:59'
'%m/%d/%y %H:%M:%S.%f', # '10/25/06 14:30:59.000200'
'%m/%d/%y %H:%M', # '10/25/06 14:30'
]
A list of formats that will be accepted when inputting data on a datetime field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. Date-only formats are not included as datetime fields will automatically try DATE_INPUT_FORMATS as a last resort. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and TIME_INPUT_FORMATS. DEBUG Default: False A boolean that turns on/off debug mode. Never deploy a site into production with DEBUG turned on. One of the main features of debug mode is the display of detailed error pages. If your app raises an exception when DEBUG is True, Django will display a detailed traceback, including a lot of metadata about your environment, such as all the currently defined Django settings (from settings.py). As a security measure, Django will not include settings that might be sensitive, such as SECRET_KEY. Specifically, it will exclude any setting whose name includes any of the following: 'API' 'KEY' 'PASS' 'SECRET' 'SIGNATURE' 'TOKEN' Note that these are partial matches. 'PASS' will also match PASSWORD, just as 'TOKEN' will also match TOKENIZED and so on. Still, note that there are always going to be sections of your debug output that are inappropriate for public consumption. File paths, configuration options and the like all give attackers extra information about your server. It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server. Finally, if DEBUG is False, you also need to properly set the ALLOWED_HOSTS setting. Failing to do so will result in all requests being returned as “Bad Request (400)”. Note The default settings.py file created by django-admin
startproject sets DEBUG = True for convenience. DEBUG_PROPAGATE_EXCEPTIONS Default: False If set to True, Django’s exception handling of view functions (handler500, or the debug view if DEBUG is True) and logging of 500 responses (django.request) is skipped and exceptions propagate upward. This can be useful for some test setups. It shouldn’t be used on a live site unless you want your web server (instead of Django) to generate “Internal Server Error” responses. In that case, make sure your server doesn’t show the stack trace or other sensitive information in the response. DECIMAL_SEPARATOR Default: '.' (Dot) Default decimal separator used when formatting decimal numbers. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. DEFAULT_AUTO_FIELD New in Django 3.2. Default: 'django.db.models.AutoField' Default primary key field type to use for models that don’t have a field with primary_key=True. Migrating auto-created through tables The value of DEFAULT_AUTO_FIELD will be respected when creating new auto-created through tables for many-to-many relationships. Unfortunately, the primary keys of existing auto-created through tables cannot currently be updated by the migrations framework. This means that if you switch the value of DEFAULT_AUTO_FIELD and then generate migrations, the primary keys of the related models will be updated, as will the foreign keys from the through table, but the primary key of the auto-created through table will not be migrated. In order to address this, you should add a RunSQL operation to your migrations to perform the required ALTER TABLE step. You can check the existing table name through sqlmigrate, dbshell, or with the field’s remote_field.through._meta.db_table property. Explicitly defined through models are already handled by the migrations system. Allowing automatic migrations for the primary key of existing auto-created through tables may be implemented at a later date. DEFAULT_CHARSET Default: 'utf-8' Default charset to use for all HttpResponse objects, if a MIME type isn’t manually specified. Used when constructing the Content-Type header. DEFAULT_EXCEPTION_REPORTER Default: 'django.views.debug.ExceptionReporter' Default exception reporter class to be used if none has been assigned to the HttpRequest instance yet. See Custom error reports. DEFAULT_EXCEPTION_REPORTER_FILTER Default: 'django.views.debug.SafeExceptionReporterFilter' Default exception reporter filter class to be used if none has been assigned to the HttpRequest instance yet. See Filtering error reports. DEFAULT_FILE_STORAGE Default: 'django.core.files.storage.FileSystemStorage' Default file storage class to be used for any file-related operations that don’t specify a particular storage system. See Managing files. DEFAULT_FROM_EMAIL Default: 'webmaster@localhost' Default email address to use for various automated correspondence from the site manager(s). This doesn’t include error messages sent to ADMINS and MANAGERS; for that, see SERVER_EMAIL. DEFAULT_INDEX_TABLESPACE Default: '' (Empty string) Default tablespace to use for indexes on fields that don’t specify one, if the backend supports it (see Tablespaces). DEFAULT_TABLESPACE Default: '' (Empty string) Default tablespace to use for models that don’t specify one, if the backend supports it (see Tablespaces). 
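To make the DEFAULT_AUTO_FIELD note above concrete, here is a minimal sketch of such a RunSQL operation, assuming a PostgreSQL backend and a hypothetical auto-created through table named polls_poll_voters (look up the real table name via remote_field.through._meta.db_table as described above):
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [
        ('polls', '0002_alter_ids'),  # hypothetical preceding migration
    ]
    operations = [
        # Widen the auto-created through table's primary key to match the new
        # DEFAULT_AUTO_FIELD (AutoField -> BigAutoField means integer -> bigint).
        migrations.RunSQL(
            'ALTER TABLE polls_poll_voters ALTER COLUMN id TYPE bigint;',
            reverse_sql='ALTER TABLE polls_poll_voters ALTER COLUMN id TYPE integer;',
        ),
    ]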
DISALLOWED_USER_AGENTS Default: [] (Empty list) List of compiled regular expression objects representing User-Agent strings that are not allowed to visit any page, systemwide. Use this for bots/crawlers. This is only used if CommonMiddleware is installed (see Middleware). EMAIL_BACKEND Default: 'django.core.mail.backends.smtp.EmailBackend' The backend to use for sending emails. For the list of available backends see Sending email. EMAIL_FILE_PATH Default: Not defined The directory used by the file email backend to store output files. EMAIL_HOST Default: 'localhost' The host to use for sending email. See also EMAIL_PORT. EMAIL_HOST_PASSWORD Default: '' (Empty string) Password to use for the SMTP server defined in EMAIL_HOST. This setting is used in conjunction with EMAIL_HOST_USER when authenticating to the SMTP server. If either of these settings is empty, Django won’t attempt authentication. See also EMAIL_HOST_USER. EMAIL_HOST_USER Default: '' (Empty string) Username to use for the SMTP server defined in EMAIL_HOST. If empty, Django won’t attempt authentication. See also EMAIL_HOST_PASSWORD. EMAIL_PORT Default: 25 Port to use for the SMTP server defined in EMAIL_HOST. EMAIL_SUBJECT_PREFIX Default: '[Django] ' Subject-line prefix for email messages sent with django.core.mail.mail_admins or django.core.mail.mail_managers. You’ll probably want to include the trailing space. EMAIL_USE_LOCALTIME Default: False Whether to send the SMTP Date header of email messages in the local time zone (True) or in UTC (False). EMAIL_USE_TLS Default: False Whether to use a TLS (secure) connection when talking to the SMTP server. This is used for explicit TLS connections, generally on port 587. If you are experiencing hanging connections, see the implicit TLS setting EMAIL_USE_SSL. EMAIL_USE_SSL Default: False Whether to use an implicit TLS (secure) connection when talking to the SMTP server. In most email documentation this type of TLS connection is referred to as SSL. It is generally used on port 465. If you are experiencing problems, see the explicit TLS setting EMAIL_USE_TLS. Note that EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True. EMAIL_SSL_CERTFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted certificate chain file to use for the SSL connection. EMAIL_SSL_KEYFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted private key file to use for the SSL connection. Note that setting EMAIL_SSL_CERTFILE and EMAIL_SSL_KEYFILE doesn’t result in any certificate checking. They’re passed to the underlying SSL connection. Please refer to the documentation of Python’s ssl.wrap_socket() function for details on how the certificate chain file and private key file are handled. EMAIL_TIMEOUT Default: None Specifies a timeout in seconds for blocking operations like the connection attempt. FILE_UPLOAD_HANDLERS Default: [
'django.core.files.uploadhandler.MemoryFileUploadHandler',
'django.core.files.uploadhandler.TemporaryFileUploadHandler',
]
A list of handlers to use for uploading. Changing this setting allows complete customization – even replacement – of Django’s upload process. See Managing files for details. FILE_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size (in bytes) that an upload will be before it gets streamed to the file system. See Managing files for details. See also DATA_UPLOAD_MAX_MEMORY_SIZE. FILE_UPLOAD_DIRECTORY_PERMISSIONS Default: None The numeric mode to apply to directories created in the process of uploading files. This setting also determines the default permissions for collected static directories when using the collectstatic management command. See collectstatic for details on overriding it. This value mirrors the functionality and caveats of the FILE_UPLOAD_PERMISSIONS setting. FILE_UPLOAD_PERMISSIONS Default: 0o644 The numeric mode (i.e. 0o644) to set newly uploaded files to. For more information about what these modes mean, see the documentation for os.chmod(). If None, you’ll get operating-system dependent behavior. On most platforms, temporary files will have a mode of 0o600, and files saved from memory will be saved using the system’s standard umask. For security reasons, these permissions aren’t applied to the temporary files that are stored in FILE_UPLOAD_TEMP_DIR. This setting also determines the default permissions for collected static files when using the collectstatic management command. See collectstatic for details on overriding it. Warning Always prefix the mode with 0o. If you’re not familiar with file modes, please note that the 0o prefix is very important: it indicates an octal number, which is the way that modes must be specified. If you try to use 644, you’ll get totally incorrect behavior. FILE_UPLOAD_TEMP_DIR Default: None The directory to store data to (typically files larger than FILE_UPLOAD_MAX_MEMORY_SIZE) temporarily while uploading files. If None, Django will use the standard temporary directory for the operating system. For example, this will default to /tmp on *nix-style operating systems. See Managing files for details. FIRST_DAY_OF_WEEK Default: 0 (Sunday) A number representing the first day of the week. This is especially useful when displaying a calendar. This value is only used when not using format internationalization, or when a format cannot be found for the current locale. The value must be an integer from 0 to 6, where 0 means Sunday, 1 means Monday and so on. FIXTURE_DIRS Default: [] (Empty list) List of directories searched for fixture files, in addition to the fixtures directory of each application, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. See Providing data with fixtures and Fixture loading. FORCE_SCRIPT_NAME Default: None If not None, this will be used as the value of the SCRIPT_NAME environment variable in any HTTP request. This setting can be used to override the server-provided value of SCRIPT_NAME, which may be a rewritten version of the preferred value or not supplied at all. It is also used by django.setup() to set the URL resolver script prefix outside of the request/response cycle (e.g. in management commands and standalone scripts) to generate correct URLs when SCRIPT_NAME is not /. FORM_RENDERER Default: 'django.forms.renderers.DjangoTemplates' The class that renders forms and form widgets. It must implement the low-level render API. Included form renderers are:
'django.forms.renderers.DjangoTemplates'
'django.forms.renderers.Jinja2'
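Pulling the email settings above together, a sketch of a typical explicit-TLS SMTP configuration (the host, credentials and timeout below are placeholder values, not defaults):
# settings.py (illustrative values only)
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.example.com'    # placeholder SMTP server
EMAIL_PORT = 587                   # explicit TLS is generally used on port 587
EMAIL_USE_TLS = True               # mutually exclusive with EMAIL_USE_SSL
EMAIL_HOST_USER = 'webmaster@example.com'
EMAIL_HOST_PASSWORD = 'change-me'  # leave user and password empty to skip authentication
EMAIL_TIMEOUT = 10                 # seconds, for blocking operations such as connecting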
FORMAT_MODULE_PATH Default: None A full Python path to a Python package that contains custom format definitions for project locales. If not None, Django will check for a formats.py file, under the directory named after the current locale, and will use the formats defined in this file. For example, if FORMAT_MODULE_PATH is set to mysite.formats, and the current language is en (English), Django will expect a directory tree like: mysite/
formats/
__init__.py
en/
__init__.py
formats.py
You can also set this setting to a list of Python paths, for example: FORMAT_MODULE_PATH = [
'mysite.formats',
'some_app.formats',
]
When Django searches for a certain format, it will go through all given Python paths until it finds a module that actually defines the given format. This means that formats defined in packages farther up in the list will take precedence over the same formats in packages farther down. Available formats are: DATE_FORMAT DATE_INPUT_FORMATS
DATETIME_FORMAT DATETIME_INPUT_FORMATS DECIMAL_SEPARATOR FIRST_DAY_OF_WEEK MONTH_DAY_FORMAT NUMBER_GROUPING SHORT_DATE_FORMAT SHORT_DATETIME_FORMAT THOUSAND_SEPARATOR TIME_FORMAT TIME_INPUT_FORMATS YEAR_MONTH_FORMAT IGNORABLE_404_URLS Default: [] (Empty list) List of compiled regular expression objects describing URLs that should be ignored when reporting HTTP 404 errors via email (see How to manage error reporting). Regular expressions are matched against the request’s full path (including the query string, if any). Use this if your site does not provide a commonly requested file such as favicon.ico or robots.txt. This is only used if BrokenLinkEmailsMiddleware is enabled (see Middleware). INSTALLED_APPS Default: [] (Empty list) A list of strings designating all applications that are enabled in this Django installation. Each string should be a dotted Python path to: an application configuration class (preferred), or a package containing an application. Learn more about application configurations. Use the application registry for introspection Your code should never access INSTALLED_APPS directly. Use django.apps.apps instead. Application names and labels must be unique in INSTALLED_APPS Application names — the dotted Python path to the application package — must be unique. There is no way to include the same application twice, short of duplicating its code under another name. Application labels — by default the final part of the name — must be unique too. For example, you can’t include both django.contrib.auth and myproject.auth. However, you can relabel an application with a custom configuration that defines a different label. These rules apply regardless of whether INSTALLED_APPS references application configuration classes or application packages. When several applications provide different versions of the same resource (template, static file, management command, translation), the application listed first in INSTALLED_APPS has precedence. INTERNAL_IPS Default: [] (Empty list) A list of IP addresses, as strings, that: Allow the debug() context processor to add some variables to the template context. Can use the admindocs bookmarklets even if not logged in as a staff user. Are marked as “internal” (as opposed to “EXTERNAL”) in AdminEmailHandler emails. LANGUAGE_CODE Default: 'en-us' A string representing the language code for this installation. This should be in standard language ID format. For example, U.S. English is "en-us". See also the list of language identifiers and Internationalization and localization. USE_I18N must be active for this setting to have any effect. It serves two purposes: If the locale middleware isn’t in use, it decides which translation is served to all users. If the locale middleware is active, it provides a fallback language in case the user’s preferred language can’t be determined or is not supported by the website. It also provides the fallback translation when a translation for a given literal doesn’t exist for the user’s preferred language. See How Django discovers language preference for more details. LANGUAGE_COOKIE_AGE Default: None (expires at browser close) The age of the language cookie, in seconds. LANGUAGE_COOKIE_DOMAIN Default: None The domain to use for the language cookie. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. Be cautious when updating this setting on a production site. 
If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies that have the old domain will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting) and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_HTTPONLY Default: False Whether to use the HttpOnly flag on the language cookie. If this is set to True, client-side JavaScript will not be able to access the language cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. LANGUAGE_COOKIE_NAME Default: 'django_language' The name of the cookie to use for the language cookie. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Internationalization and localization. LANGUAGE_COOKIE_PATH Default: '/' The path set on the language cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths and each instance will only see its own language cookie. Be cautious when updating this setting on a production site. If you update this setting to use a deeper path than it previously used, existing user cookies that have the old path will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting), and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_SAMESITE Default: None The value of the SameSite flag on the language cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. LANGUAGE_COOKIE_SECURE Default: False Whether to use a secure cookie for the language cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. LANGUAGES Default: A list of all available languages. This list is continually growing and including a copy here would inevitably become rapidly out of date. You can see the current list of translated languages by looking in django/conf/global_settings.py. The list is a list of two-tuples in the format (language code, language name) – for example, ('ja', 'Japanese'). This specifies which languages are available for language selection. See Internationalization and localization. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, you can mark the language names as translation strings using the gettext_lazy() function. Here’s a sample settings file: from django.utils.translation import gettext_lazy as _
LANGUAGES = [
('de', _('German')),
('en', _('English')),
]
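Returning to FORMAT_MODULE_PATH above: a minimal sketch of what a per-locale mysite/formats/en/formats.py might define. Any of the available format names listed earlier can appear; the values here are purely illustrative:
# mysite/formats/en/formats.py (illustrative overrides for the 'en' locale)
DATE_FORMAT = 'j N Y'              # e.g. 25 October 2006
TIME_FORMAT = 'H:i'                # e.g. 14:30
DATE_INPUT_FORMATS = ['%d/%m/%Y']  # accept day-first input
DECIMAL_SEPARATOR = '.'
THOUSAND_SEPARATOR = ','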
LANGUAGES_BIDI Default: A list of all language codes that are written right-to-left. You can see the current list of these languages by looking in django/conf/global_settings.py. The list contains language codes for languages that are written right-to-left. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, the list of bidirectional languages may contain language codes which are not enabled on a given site. LOCALE_PATHS Default: [] (Empty list) A list of directories where Django looks for translation files. See How Django discovers translations. Example: LOCALE_PATHS = [
'/home/www/project/common_files/locale',
'/var/local/translations/locale',
]
Django will look within each of these paths for the <locale_code>/LC_MESSAGES directories containing the actual translation files. LOGGING Default: A logging configuration dictionary. A data structure containing configuration information. The contents of this data structure will be passed as the argument to the configuration method described in LOGGING_CONFIG. Among other things, the default logging configuration passes HTTP 500 server errors to an email log handler when DEBUG is False. See also Configuring logging. You can see the default logging configuration by looking in django/utils/log.py. LOGGING_CONFIG Default: 'logging.config.dictConfig' A path to a callable that will be used to configure logging in the Django project. Points at an instance of Python’s dictConfig configuration method by default. If you set LOGGING_CONFIG to None, the logging configuration process will be skipped. MANAGERS Default: [] (Empty list) A list in the same format as ADMINS that specifies who should get broken link notifications when BrokenLinkEmailsMiddleware is enabled. MEDIA_ROOT Default: '' (Empty string) Absolute filesystem path to the directory that will hold user-uploaded files. Example: "/var/www/example.com/media/" See also MEDIA_URL. Warning MEDIA_ROOT and STATIC_ROOT must have different values. Before STATIC_ROOT was introduced, it was common to rely on or fall back to MEDIA_ROOT to also serve static files; however, since this can have serious security implications, there is a validation check to prevent it. MEDIA_URL Default: '' (Empty string) URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set to a non-empty value. You will need to configure these files to be served in both development and production environments. If you want to use {{ MEDIA_URL }} in your templates, add 'django.template.context_processors.media' in the 'context_processors' option of TEMPLATES. Example: "http://media.example.com/" Warning There are security risks if you are accepting uploaded content from untrusted users! See the security guide’s topic on User-uploaded content for mitigation details. Warning MEDIA_URL and STATIC_URL must have different values. See MEDIA_ROOT for more details. Note If MEDIA_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. MIDDLEWARE Default: None A list of middleware to use. See Middleware. MIGRATION_MODULES Default: {} (Empty dictionary) A dictionary specifying the package where migration modules can be found on a per-app basis. The default value of this setting is an empty dictionary, but the default package name for migration modules is migrations. Example: {'blog': 'blog.db_migrations'}
In this case, migrations pertaining to the blog app will be contained in the blog.db_migrations package. If you provide the app_label argument, makemigrations will automatically create the package if it doesn’t already exist. When you supply None as a value for an app, Django will consider the app as an app without migrations regardless of an existing migrations submodule. This can be used, for example, in a test settings file to skip migrations while testing (tables will still be created for the apps’ models). To disable migrations for all apps during tests, you can set the MIGRATE setting to False instead. If MIGRATION_MODULES is used in your general project settings, remember to use the migrate --run-syncdb option if you want to create tables for the app. MONTH_DAY_FORMAT Default: 'F j' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the month and day are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given day displays the day and month. Different locales have different formats. For example, U.S. English would say “January 1,” whereas Spanish might say “1 Enero.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and YEAR_MONTH_FORMAT. NUMBER_GROUPING Default: 0 Number of digits grouped together on the integer part of a number. A common use is to display a thousand separator. If this setting is 0, then no grouping will be applied to the number. If this setting is greater than 0, then THOUSAND_SEPARATOR will be used as the separator between those groups. Some locales use non-uniform digit grouping, e.g. 10,00,00,000 in en_IN. For this case, you can provide a sequence with the number of digit group sizes to be applied. The first number defines the size of the group preceding the decimal delimiter, and each number that follows defines the size of preceding groups. If the sequence is terminated with -1, no further grouping is performed. If the sequence terminates with a 0, the last group size is used for the remainder of the number. Example tuple for en_IN: NUMBER_GROUPING = (3, 2, 0)
Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also DECIMAL_SEPARATOR, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. PREPEND_WWW Default: False Whether to prepend the “www.” subdomain to URLs that don’t have it. This is only used if CommonMiddleware is installed (see Middleware). See also APPEND_SLASH. ROOT_URLCONF Default: Not defined A string representing the full Python import path to your root URLconf, for example "mydjangoapps.urls". Can be overridden on a per-request basis by setting the attribute urlconf on the incoming HttpRequest object. See How Django processes a request for details. SECRET_KEY Default: '' (Empty string) A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value. django-admin startproject automatically adds a randomly-generated SECRET_KEY to each new project. Uses of the key shouldn’t assume that it’s text or bytes. Every use should go through force_str() or force_bytes() to convert it to the desired type. Django will refuse to start if SECRET_KEY is not set. Warning Keep this value secret. Running Django with a known SECRET_KEY defeats many of Django’s security protections, and can lead to privilege escalation and remote code execution vulnerabilities. The secret key is used for: All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash(). All messages if you are using CookieStorage or FallbackStorage. All PasswordResetView tokens. Any usage of cryptographic signing, unless a different key is provided. If you rotate your secret key, all of the above will be invalidated. Secret keys are not used for passwords of users and key rotation will not affect them. Note The default settings.py file created by django-admin
startproject creates a unique SECRET_KEY for convenience. SECURE_CONTENT_TYPE_NOSNIFF Default: True If True, the SecurityMiddleware sets the X-Content-Type-Options: nosniff header on all responses that do not already have it. SECURE_CROSS_ORIGIN_OPENER_POLICY New in Django 4.0. Default: 'same-origin' Unless set to None, the SecurityMiddleware sets the Cross-Origin Opener Policy header on all responses that do not already have it to the value provided. SECURE_HSTS_INCLUDE_SUBDOMAINS Default: False If True, the SecurityMiddleware adds the includeSubDomains directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. Warning Setting this incorrectly can irreversibly (for the value of SECURE_HSTS_SECONDS) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_HSTS_PRELOAD Default: False If True, the SecurityMiddleware adds the preload directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. SECURE_HSTS_SECONDS Default: 0 If set to a non-zero integer value, the SecurityMiddleware sets the HTTP Strict Transport Security header on all responses that do not already have it. Warning Setting this incorrectly can irreversibly (for some time) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_PROXY_SSL_HEADER Default: None A tuple representing an HTTP header/value combination that signifies a request is secure. This controls the behavior of the request object’s is_secure() method. By default, is_secure() determines if a request is secure by confirming that a requested URL uses https://. This method is important for Django’s CSRF protection, and it may be used by your own code or third-party apps. If your Django app is behind a proxy, though, the proxy may be “swallowing” whether the original request uses HTTPS or not. If there is a non-HTTPS connection between the proxy and Django then is_secure() would always return False – even for requests that were made via HTTPS by the end user. In contrast, if there is an HTTPS connection between the proxy and Django then is_secure() would always return True – even for requests that were made originally via HTTP. In this situation, configure your proxy to set a custom HTTP header that tells Django whether the request came in via HTTPS, and set SECURE_PROXY_SSL_HEADER so that Django knows what header to look for. Set a tuple with two elements – the name of the header to look for and the required value. For example: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
This tells Django to trust the X-Forwarded-Proto header that comes from our proxy, and any time its value is 'https', then the request is guaranteed to be secure (i.e., it originally came in via HTTPS). You should only set this setting if you control your proxy or have some other guarantee that it sets/strips this header appropriately. Note that the header needs to be in the format as used by request.META – all caps and likely starting with HTTP_. (Remember, Django automatically adds 'HTTP_' to the start of x-header names before making the header available in request.META.) Warning Modifying this setting can compromise your site’s security. Ensure you fully understand your setup before changing it. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your Django app is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to Django, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None and find another way of determining HTTPS, perhaps via custom middleware. SECURE_REDIRECT_EXEMPT Default: [] (Empty list) If a URL path matches a regular expression in this list, the request will not be redirected to HTTPS. The SecurityMiddleware strips leading slashes from URL paths, so patterns shouldn’t include them, e.g. SECURE_REDIRECT_EXEMPT = [r'^no-ssl/$', …]. If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_REFERRER_POLICY Default: 'same-origin' If configured, the SecurityMiddleware sets the Referrer Policy header on all responses that do not already have it to the value provided. SECURE_SSL_HOST Default: None If a string (e.g. secure.example.com), all SSL redirects will be directed to this host rather than the originally-requested host (e.g. www.example.com). If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_SSL_REDIRECT Default: False If True, the SecurityMiddleware redirects all non-HTTPS requests to HTTPS (except for those URLs matching a regular expression listed in SECURE_REDIRECT_EXEMPT). Note If turning this to True causes infinite redirects, it probably means your site is running behind a proxy and can’t tell which requests are secure and which are not. Your proxy likely sets a header to indicate secure requests; you can correct the problem by finding out what that header is and configuring the SECURE_PROXY_SSL_HEADER setting accordingly. SERIALIZATION_MODULES Default: Not defined A dictionary of modules containing serializer definitions (provided as strings), keyed by a string identifier for that serialization type. For example, to define a YAML serializer, use: SERIALIZATION_MODULES = {'yaml': 'path.to.yaml_serializer'}
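Conceptually, the check that SECURE_PROXY_SSL_HEADER enables amounts to a single header comparison; the following is a simplified sketch of what request.is_secure() consults, not Django’s actual implementation:
# Simplified sketch, assuming SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
def request_is_secure(request):
    header, secure_value = 'HTTP_X_FORWARDED_PROTO', 'https'
    # Django only compares the configured META key against the required value;
    # the proxy must reliably set the header (and strip client-supplied copies).
    return request.META.get(header) == secure_value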
SERVER_EMAIL Default: 'root@localhost' The email address that error messages come from, such as those sent to ADMINS and MANAGERS. Why are my emails sent from a different address? This address is used only for error messages. It is not the address that regular email messages sent with send_mail() come from; for that, see DEFAULT_FROM_EMAIL. SHORT_DATE_FORMAT Default: 'm/d/Y' (e.g. 12/31/2003) An available formatting that can be used for displaying date fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATETIME_FORMAT. SHORT_DATETIME_FORMAT Default: 'm/d/Y P' (e.g. 12/31/2003 4 p.m.) An available formatting that can be used for displaying datetime fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATE_FORMAT. SIGNING_BACKEND Default: 'django.core.signing.TimestampSigner' The backend used for signing cookies and other data. See also the Cryptographic signing documentation. SILENCED_SYSTEM_CHECKS Default: [] (Empty list) A list of identifiers of messages generated by the system check framework (i.e. ["models.W001"]) that you wish to permanently acknowledge and ignore. Silenced checks will not be output to the console. See also the System check framework documentation. TEMPLATES Default: [] (Empty list) A list containing the settings for all template engines to be used with Django. Each item of the list is a dictionary containing the options for an individual engine. Here’s a setup that tells the Django template engine to load templates from the templates subdirectory inside each installed application: TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'APP_DIRS': True,
},
]
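For comparison, the TEMPLATES value generated by django-admin startproject is only slightly longer; roughly (the exact list of context processors can vary by Django version), it combines the options described below:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],        # extra template directories, searched before app templates
        'APP_DIRS': True,  # also look in each app's templates/ subdirectory
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]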
The following options are available for all backends. BACKEND Default: Not defined The template backend to use. The built-in template backends are: 'django.template.backends.django.DjangoTemplates' 'django.template.backends.jinja2.Jinja2' You can use a template backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path (i.e. 'mypackage.whatever.Backend'). NAME Default: see below The alias for this particular template engine. It’s an identifier that allows selecting an engine for rendering. Aliases must be unique across all configured template engines. It defaults to the name of the module defining the engine class, i.e. the next to last piece of BACKEND, when it isn’t provided. For example if the backend is 'mypackage.whatever.Backend' then its default name is 'whatever'. DIRS Default: [] (Empty list) Directories where the engine should look for template source files, in search order. APP_DIRS Default: False Whether the engine should look for template source files inside installed applications. Note The default settings.py file created by django-admin
startproject sets 'APP_DIRS': True. OPTIONS Default: {} (Empty dict) Extra parameters to pass to the template backend. Available parameters vary depending on the template backend. See DjangoTemplates and Jinja2 for the options of the built-in backends. TEST_RUNNER Default: 'django.test.runner.DiscoverRunner' The name of the class to use for starting the test suite. See Using different testing frameworks. TEST_NON_SERIALIZED_APPS Default: [] (Empty list) In order to restore the database state between tests for TransactionTestCases and database backends without transactions, Django will serialize the contents of all apps when it starts the test run so it can then reload from that copy before running tests that need it. This slows down the startup time of the test runner; if you have apps that you know don’t need this feature, you can add their full names in here (e.g. 'django.contrib.contenttypes') to exclude them from this serialization process. THOUSAND_SEPARATOR Default: ',' (Comma) Default thousand separator used when formatting numbers. This setting is used only when USE_THOUSAND_SEPARATOR is True and NUMBER_GROUPING is greater than 0. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, DECIMAL_SEPARATOR and USE_THOUSAND_SEPARATOR. TIME_FORMAT Default: 'P' (e.g. 4 p.m.) The default formatting to use for displaying time fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT and DATETIME_FORMAT. TIME_INPUT_FORMATS Default: [
'%H:%M:%S', # '14:30:59'
'%H:%M:%S.%f', # '14:30:59.000200'
'%H:%M', # '14:30'
]
A list of formats that will be accepted when inputting data on a time field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and DATETIME_INPUT_FORMATS. TIME_ZONE Default: 'America/Chicago' A string representing the time zone for this installation. See the list of time zones. Note Since Django was first released with the TIME_ZONE set to 'America/Chicago', the global setting (used if nothing is defined in your project’s settings.py) remains 'America/Chicago' for backwards compatibility. New project templates default to 'UTC'. Note that this isn’t necessarily the time zone of the server. For example, one server may serve multiple Django-powered sites, each with a separate time zone setting. When USE_TZ is False, this is the time zone in which Django will store all datetimes. When USE_TZ is True, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms. On Unix environments (where time.tzset() is implemented), Django sets the os.environ['TZ'] variable to the time zone you specify in the TIME_ZONE setting. Thus, all your views and models will automatically operate in this time zone. However, Django won’t set the TZ environment variable if you’re using the manual configuration option as described in manually configuring settings. If Django doesn’t set the TZ environment variable, it’s up to you to ensure your processes are running in the correct environment. Note Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, TIME_ZONE must be set to match the system time zone. USE_DEPRECATED_PYTZ New in Django 4.0. Default: False A boolean that specifies whether to use pytz, rather than zoneinfo, as the default time zone implementation. Deprecated since version 4.0: This transitional setting is deprecated. Support for using pytz will be removed in Django 5.0. USE_I18N Default: True A boolean that specifies whether Django’s translation system should be enabled. This provides a way to turn it off, for performance. If this is set to False, Django will make some optimizations so as not to load the translation machinery. See also LANGUAGE_CODE, USE_L10N and USE_TZ. Note The default settings.py file created by django-admin
startproject includes USE_I18N = True for convenience. USE_L10N Default: True A boolean that specifies if localized formatting of data will be enabled by default or not. If this is set to True, Django will, for example, display numbers and dates using the format of the current locale. See also LANGUAGE_CODE, USE_I18N and USE_TZ. Changed in Django 4.0: In older versions, the default value was False. Deprecated since version 4.0: This setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example, Django will display numbers and dates using the format of the current locale. USE_THOUSAND_SEPARATOR Default: False A boolean that specifies whether to display numbers using a thousand separator. When set to True and USE_L10N is also True, Django will format numbers using the NUMBER_GROUPING and THOUSAND_SEPARATOR settings. These settings may also be dictated by the locale, which takes precedence. See also DECIMAL_SEPARATOR, NUMBER_GROUPING and THOUSAND_SEPARATOR. USE_TZ Default: False Note In Django 5.0, the default value will change from False to True. A boolean that specifies if datetimes will be timezone-aware by default or not. If this is set to True, Django will use timezone-aware datetimes internally. When USE_TZ is False, Django will use naive datetimes in local time, except when parsing ISO 8601 formatted strings, where timezone information will always be retained if present. See also TIME_ZONE, USE_I18N and USE_L10N. Note The default settings.py file created by django-admin startproject includes USE_TZ = True for convenience. USE_X_FORWARDED_HOST Default: False A boolean that specifies whether to use the X-Forwarded-Host header in preference to the Host header. This should only be enabled if a proxy which sets this header is in use. This setting takes priority over USE_X_FORWARDED_PORT. Per RFC 7239#section-5.3, the X-Forwarded-Host header can include the port number, in which case you shouldn’t use USE_X_FORWARDED_PORT. USE_X_FORWARDED_PORT Default: False A boolean that specifies whether to use the X-Forwarded-Port header in preference to the SERVER_PORT META variable. This should only be enabled if a proxy which sets this header is in use. USE_X_FORWARDED_HOST takes priority over this setting. WSGI_APPLICATION Default: None The full Python path of the WSGI application object that Django’s built-in servers (e.g. runserver) will use. The django-admin
startproject management command will create a standard wsgi.py file with an application callable in it, and point this setting to that application. If not set, the return value of django.core.wsgi.get_wsgi_application() will be used. In this case, the behavior of runserver will be identical to previous Django versions. YEAR_MONTH_FORMAT Default: 'F Y' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the year and month are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given month displays the month and the year. Different locales have different formats. For example, U.S. English would say “January 2006,” whereas another locale might say “2006/January.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and MONTH_DAY_FORMAT. X_FRAME_OPTIONS Default: 'DENY' The default value for the X-Frame-Options header used by XFrameOptionsMiddleware. See the clickjacking protection documentation. Auth Settings for django.contrib.auth. AUTHENTICATION_BACKENDS Default: ['django.contrib.auth.backends.ModelBackend'] A list of authentication backend classes (as strings) to use when attempting to authenticate a user. See the authentication backends documentation for details. AUTH_USER_MODEL Default: 'auth.User' The model to use to represent a User. See Substituting a custom User model. Warning You cannot change the AUTH_USER_MODEL setting during the lifetime of a project (i.e. once you have made and migrated models that depend on it) without serious effort. It is intended to be set at the project start, and the model it refers to must be available in the first migration of the app that it lives in. See Substituting a custom User model for more details. LOGIN_REDIRECT_URL Default: '/accounts/profile/' The URL or named URL pattern where requests are redirected after login when the LoginView doesn’t get a next GET parameter. LOGIN_URL Default: '/accounts/login/' The URL or named URL pattern where requests are redirected for login when using the login_required() decorator, LoginRequiredMixin, or AccessMixin. LOGOUT_REDIRECT_URL Default: None The URL or named URL pattern where requests are redirected after logout if LogoutView doesn’t have a next_page attribute. If None, no redirect will be performed and the logout view will be rendered. PASSWORD_RESET_TIMEOUT Default: 259200 (3 days, in seconds) The number of seconds a password reset link is valid for. Used by the PasswordResetConfirmView. Note Reducing the value of this timeout doesn’t make any difference to the ability of an attacker to brute-force a password reset token. Tokens are designed to be safe from brute-forcing without any timeout. This timeout exists to protect against some unlikely attack scenarios, such as someone gaining access to email archives that may contain old, unused password reset tokens. PASSWORD_HASHERS See How Django stores passwords. Default: [
'django.contrib.auth.hashers.PBKDF2PasswordHasher',
'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
'django.contrib.auth.hashers.Argon2PasswordHasher',
'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
]
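Django hashes new passwords with the first entry in this list and uses the remaining entries to verify existing ones, so changing the preferred algorithm is a matter of reordering. For example, to prefer Argon2 (a sketch; it assumes the optional argon2-cffi package is installed):
PASSWORD_HASHERS = [
    'django.contrib.auth.hashers.Argon2PasswordHasher',  # used to hash new passwords
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',  # still accepted when checking old ones
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
    'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
]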
AUTH_PASSWORD_VALIDATORS Default: [] (Empty list) The list of validators that are used to check the strength of users’ passwords. See Password validation for more details. By default, no validation is performed and all passwords are accepted. Messages Settings for django.contrib.messages. MESSAGE_LEVEL Default: messages.INFO Sets the minimum message level that will be recorded by the messages framework. See message levels for more details. Important If you override MESSAGE_LEVEL in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants
MESSAGE_LEVEL = message_constants.DEBUG
If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. MESSAGE_STORAGE Default: 'django.contrib.messages.storage.fallback.FallbackStorage' Controls where Django stores message data. Valid values are: 'django.contrib.messages.storage.fallback.FallbackStorage' 'django.contrib.messages.storage.session.SessionStorage' 'django.contrib.messages.storage.cookie.CookieStorage' See message storage backends for more details. The backends that use cookies – CookieStorage and FallbackStorage – use the value of SESSION_COOKIE_DOMAIN, SESSION_COOKIE_SECURE and SESSION_COOKIE_HTTPONLY when setting their cookies. MESSAGE_TAGS Default: {
messages.DEBUG: 'debug',
messages.INFO: 'info',
messages.SUCCESS: 'success',
messages.WARNING: 'warning',
messages.ERROR: 'error',
}
This sets the mapping of message level to message tag, which is typically rendered as a CSS class in HTML. If you specify a value, it will extend the default. This means you only have to specify those values which you need to override. See Displaying messages above for more details. Important If you override MESSAGE_TAGS in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants
MESSAGE_TAGS = {message_constants.INFO: ''}
If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. Sessions Settings for django.contrib.sessions. SESSION_CACHE_ALIAS Default: 'default' If you’re using cache-based session storage, this selects the cache to use. SESSION_COOKIE_AGE Default: 1209600 (2 weeks, in seconds) The age of session cookies, in seconds. SESSION_COOKIE_DOMAIN Default: None The domain to use for session cookies. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. To use cross-domain cookies with CSRF_USE_SESSIONS, you must include a leading dot (e.g. ".example.com") to accommodate the CSRF middleware’s referer checking. Be cautious when updating this setting on a production site. If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies will be set to the old domain. This may result in them being unable to log in as long as these cookies persist. This setting also affects cookies set by django.contrib.messages. SESSION_COOKIE_HTTPONLY Default: True Whether to use the HttpOnly flag on the session cookie. If this is set to True, client-side JavaScript will not be able to access the session cookie. HttpOnly is a flag included in a Set-Cookie HTTP response header. It’s part of the RFC 6265#section-4.1.2.6 standard for cookies and can be a useful way to mitigate the risk of a client-side script accessing the protected cookie data. This makes it less trivial for an attacker to escalate a cross-site scripting vulnerability into full hijacking of a user’s session. There aren’t many good reasons for turning this off. Your code shouldn’t read session cookies from JavaScript. SESSION_COOKIE_NAME Default: 'sessionid' The name of the cookie to use for sessions. This can be whatever you want (as long as it’s different from the other cookie names in your application). SESSION_COOKIE_PATH Default: '/' The path set on the session cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own session cookie. SESSION_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the session cookie. This flag prevents the cookie from being sent in cross-site requests, thus preventing CSRF attacks and making some methods of stealing the session cookie impossible. Possible values for the setting are:
'Strict': prevents the cookie from being sent by the browser to the target site in all cross-site browsing context, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user won’t be able to access the project. A bank website, however, most likely doesn’t want to allow any transactional pages to be linked from external sites so the 'Strict' flag would be appropriate.
'Lax' (default): provides a balance between security and usability for websites that want to maintain a user’s logged-in session after the user arrives from an external link. In the GitHub scenario, the session cookie would be allowed when following a regular link from an external website and be blocked in CSRF-prone request methods (e.g. POST).
'None' (string): the session cookie will be sent with all same-site and cross-site requests.
False: disables the flag. Note Modern browsers provide a more secure default policy for the SameSite flag and will assume Lax for cookies without an explicit value set. SESSION_COOKIE_SECURE Default: False Whether to use a secure cookie for the session cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. Leaving this setting off isn’t a good idea because an attacker could capture an unencrypted session cookie with a packet sniffer and use the cookie to hijack the user’s session. SESSION_ENGINE Default: 'django.contrib.sessions.backends.db' Controls where Django stores session data. Included engines are: 'django.contrib.sessions.backends.db' 'django.contrib.sessions.backends.file' 'django.contrib.sessions.backends.cache' 'django.contrib.sessions.backends.cached_db' 'django.contrib.sessions.backends.signed_cookies' See Configuring the session engine for more details. SESSION_EXPIRE_AT_BROWSER_CLOSE Default: False Whether to expire the session when the user closes their browser. See Browser-length sessions vs. persistent sessions. SESSION_FILE_PATH Default: None If you’re using file-based session storage, this sets the directory in which Django will store session data. When the default value (None) is used, Django will use the standard temporary directory for the system. SESSION_SAVE_EVERY_REQUEST Default: False Whether to save the session data on every request. If this is False (default), then the session data will only be saved if it has been modified – that is, if any of its dictionary values have been assigned or deleted. Empty sessions won’t be created, even if this setting is active. SESSION_SERIALIZER Default: 'django.contrib.sessions.serializers.JSONSerializer' Full import path of a serializer class to use for serializing session data. Included serializers are: 'django.contrib.sessions.serializers.PickleSerializer' 'django.contrib.sessions.serializers.JSONSerializer' See Session serialization for details, including a warning regarding possible remote code execution when using PickleSerializer. Sites Settings for django.contrib.sites. SITE_ID Default: Not defined The ID, as an integer, of the current site in the django_site database table. This is used so that application data can hook into specific sites and a single database can manage content for multiple sites. Static Files Settings for django.contrib.staticfiles. STATIC_ROOT Default: None The absolute path to the directory where collectstatic will collect static files for deployment. Example: "/var/www/example.com/static/" If the staticfiles contrib app is enabled (as in the default project template), the collectstatic management command will collect static files into this directory. See the how-to on managing static files for more details about usage. Warning This should be an initially empty destination directory for collecting your static files from their permanent locations into one directory for ease of deployment; it is not a place to store your static files permanently. You should do that in directories that will be found by staticfiles’s finders, which, by default, are 'static/' app sub-directories and any directories you include in STATICFILES_DIRS. STATIC_URL Default: None URL to use when referring to static files located in STATIC_ROOT. Example: "static/" or "http://static.example.com/" If not None, this will be used as the base path for asset definitions (the Media class) and the staticfiles app. 
It must end in a slash if set to a non-empty value. You may need to configure these files to be served in development and will definitely need to do so in production. Note If STATIC_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. STATICFILES_DIRS Default: [] (Empty list) This setting defines the additional locations the staticfiles app will traverse if the FileSystemFinder finder is enabled, e.g. if you use the collectstatic or findstatic management command or use the static file serving view. This should be set to a list of strings that contain full paths to your additional files directory(ies) e.g.: STATICFILES_DIRS = [
"/home/special.polls.com/polls/static",
"/home/polls.com/polls/static",
"/opt/webfiles/common",
]
Note that these paths should use Unix-style forward slashes, even on Windows (e.g. "C:/Users/user/mysite/extra_static_content").
Prefixes (optional)
In case you want to refer to files in one of the locations with an additional namespace, you can optionally provide a prefix as (prefix, path) tuples, e.g.: STATICFILES_DIRS = [
# ...
("downloads", "/opt/webfiles/stats"),
]
For example, assuming you have STATIC_URL set to 'static/', the collectstatic management command would collect the “stats” files in a 'downloads' subdirectory of STATIC_ROOT. This would allow you to refer to the local file '/opt/webfiles/stats/polls_20101022.tar.gz' with '/static/downloads/polls_20101022.tar.gz' in your templates, e.g.: <a href="{% static 'downloads/polls_20101022.tar.gz' %}">
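Putting the settings above together, a minimal static-files configuration might look like the following sketch; the paths reuse the hypothetical examples from this documentation and are placeholders, not recommendations:
# settings.py (sketch)
STATIC_URL = "static/"
STATIC_ROOT = "/var/www/example.com/static/"
STATICFILES_DIRS = [
    "/opt/webfiles/common",              # plain extra location
    ("downloads", "/opt/webfiles/stats"),  # namespaced (prefix, path) tuple
]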
STATICFILES_STORAGE
Default: 'django.contrib.staticfiles.storage.StaticFilesStorage'
The file storage engine to use when collecting static files with the collectstatic management command. A ready-to-use instance of the storage backend defined in this setting can be found at django.contrib.staticfiles.storage.staticfiles_storage. For an example, see Serving static files from a cloud service or CDN.
STATICFILES_FINDERS
Default: [
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
]
The list of finder backends that know how to find static files in various locations. The default will find files stored in the STATICFILES_DIRS setting (using django.contrib.staticfiles.finders.FileSystemFinder) and in a static subdirectory of each app (using django.contrib.staticfiles.finders.AppDirectoriesFinder). If multiple files with the same name are present, the first file that is found will be used. One finder is disabled by default: django.contrib.staticfiles.finders.DefaultStorageFinder. If added to your STATICFILES_FINDERS setting, it will look for static files in the default file storage as defined by the DEFAULT_FILE_STORAGE setting.
Note When using the AppDirectoriesFinder finder, make sure your apps can be found by staticfiles by adding the app to the INSTALLED_APPS setting of your site. Static file finders are currently considered a private interface, and this interface is thus undocumented.
Core Settings Topical Index
Cache: CACHES, CACHE_MIDDLEWARE_ALIAS, CACHE_MIDDLEWARE_KEY_PREFIX, CACHE_MIDDLEWARE_SECONDS
Database: DATABASES, DATABASE_ROUTERS, DEFAULT_INDEX_TABLESPACE, DEFAULT_TABLESPACE
Debugging: DEBUG, DEBUG_PROPAGATE_EXCEPTIONS
Email: ADMINS, DEFAULT_CHARSET, DEFAULT_FROM_EMAIL, EMAIL_BACKEND, EMAIL_FILE_PATH, EMAIL_HOST, EMAIL_HOST_PASSWORD, EMAIL_HOST_USER, EMAIL_PORT, EMAIL_SSL_CERTFILE, EMAIL_SSL_KEYFILE, EMAIL_SUBJECT_PREFIX, EMAIL_TIMEOUT, EMAIL_USE_LOCALTIME, EMAIL_USE_TLS, MANAGERS, SERVER_EMAIL
Error reporting: DEFAULT_EXCEPTION_REPORTER, DEFAULT_EXCEPTION_REPORTER_FILTER, IGNORABLE_404_URLS, MANAGERS, SILENCED_SYSTEM_CHECKS
File uploads: DEFAULT_FILE_STORAGE, FILE_UPLOAD_HANDLERS, FILE_UPLOAD_MAX_MEMORY_SIZE, FILE_UPLOAD_PERMISSIONS, FILE_UPLOAD_TEMP_DIR, MEDIA_ROOT, MEDIA_URL
Forms: FORM_RENDERER
Globalization (i18n/l10n): DATE_FORMAT, DATE_INPUT_FORMATS, DATETIME_FORMAT, DATETIME_INPUT_FORMATS, DECIMAL_SEPARATOR, FIRST_DAY_OF_WEEK, FORMAT_MODULE_PATH, LANGUAGE_CODE, LANGUAGE_COOKIE_AGE, LANGUAGE_COOKIE_DOMAIN, LANGUAGE_COOKIE_HTTPONLY, LANGUAGE_COOKIE_NAME, LANGUAGE_COOKIE_PATH, LANGUAGE_COOKIE_SAMESITE, LANGUAGE_COOKIE_SECURE, LANGUAGES, LANGUAGES_BIDI, LOCALE_PATHS, MONTH_DAY_FORMAT, NUMBER_GROUPING, SHORT_DATE_FORMAT, SHORT_DATETIME_FORMAT, THOUSAND_SEPARATOR, TIME_FORMAT, TIME_INPUT_FORMATS, TIME_ZONE, USE_I18N, USE_L10N, USE_THOUSAND_SEPARATOR, USE_TZ, YEAR_MONTH_FORMAT
HTTP: DATA_UPLOAD_MAX_MEMORY_SIZE, DATA_UPLOAD_MAX_NUMBER_FIELDS, DEFAULT_CHARSET, DISALLOWED_USER_AGENTS, FORCE_SCRIPT_NAME, INTERNAL_IPS, MIDDLEWARE, Security (SECURE_CONTENT_TYPE_NOSNIFF, SECURE_CROSS_ORIGIN_OPENER_POLICY, SECURE_HSTS_INCLUDE_SUBDOMAINS, SECURE_HSTS_PRELOAD, SECURE_HSTS_SECONDS, SECURE_PROXY_SSL_HEADER, SECURE_REDIRECT_EXEMPT, SECURE_REFERRER_POLICY, SECURE_SSL_HOST, SECURE_SSL_REDIRECT), SIGNING_BACKEND, USE_X_FORWARDED_HOST, USE_X_FORWARDED_PORT, WSGI_APPLICATION
Logging: LOGGING, LOGGING_CONFIG
Models: ABSOLUTE_URL_OVERRIDES, FIXTURE_DIRS, INSTALLED_APPS
Security: Cross Site Request Forgery Protection (CSRF_COOKIE_DOMAIN, CSRF_COOKIE_NAME, CSRF_COOKIE_PATH, CSRF_COOKIE_SAMESITE, CSRF_COOKIE_SECURE, CSRF_FAILURE_VIEW, CSRF_HEADER_NAME, CSRF_TRUSTED_ORIGINS, CSRF_USE_SESSIONS), SECRET_KEY, X_FRAME_OPTIONS
Serialization: DEFAULT_CHARSET, SERIALIZATION_MODULES
Templates: TEMPLATES
Testing: Database (TEST), TEST_NON_SERIALIZED_APPS, TEST_RUNNER
URLs: APPEND_SLASH, PREPEND_WWW, ROOT_URLCONF | |
doc_1288 |
Add new fields to an existing array. The names of the fields are given with the names arguments, the corresponding values with the data arguments. If a single field is appended, names, data and dtypes do not have to be lists but just values.
Parameters
base : array
Input array to extend.
names : string, sequence
String or sequence of strings corresponding to the names of the new fields.
data : array or sequence of arrays
Array or sequence of arrays storing the fields to add to the base.
dtypes : sequence of datatypes, optional
Datatype or sequence of datatypes. If None, the datatypes are estimated from the data.
Returns
appended_array : np.recarray
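A minimal usage sketch. This function lives in numpy.lib.recfunctions; the field names and values below are illustrative, and usemask=False is passed so a plain ndarray (rather than the default masked array) is returned:
>>> import numpy as np
>>> from numpy.lib import recfunctions as rfn
>>> base = np.array([(1, 10.0), (2, 20.0)], dtype=[('id', int), ('value', float)])
>>> rfn.append_fields(base, 'flag', data=[True, False], usemask=False)
array([(1, 10., True), (2, 20., False)],
      dtype=[('id', '<i8'), ('value', '<f8'), ('flag', '?')]) # may vary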
See also append_fields | |
doc_1289 |
Indicator whether Series/DataFrame is empty. True if Series/DataFrame is entirely empty (no items), meaning any of the axes are of length 0.
Returns
bool
If Series/DataFrame is empty, return True, if not return False.
See also
Series.dropna
Return series without null values.
DataFrame.dropna
Return DataFrame with labels on given axis omitted where (all or any) data are missing.
Notes
If Series/DataFrame contains only NaNs, it is still not considered empty. See the example below.
Examples
An example of an actual empty DataFrame. Notice the index is empty:
>>> df_empty = pd.DataFrame({'A' : []})
>>> df_empty
Empty DataFrame
Columns: [A]
Index: []
>>> df_empty.empty
True
If we only have NaNs in our DataFrame, it is not considered empty! We will need to drop the NaNs to make the DataFrame empty:
>>> df = pd.DataFrame({'A' : [np.nan]})
>>> df
A
0 NaN
>>> df.empty
False
>>> df.dropna().empty
True
>>> ser_empty = pd.Series({'A' : []})
>>> ser_empty
A []
dtype: object
>>> ser_empty.empty
False
>>> ser_empty = pd.Series()
>>> ser_empty.empty
True | |
doc_1290 | Return True if x is a positive or negative infinity, and False otherwise. | |
doc_1291 |
Alias for set_linewidth. | |
doc_1292 | Returns a new tensor with the elements of input at the given indices. The input tensor is treated as if it were viewed as a 1-D tensor. The result takes the same shape as the indices.
Parameters
input (Tensor) – the input tensor.
indices (LongTensor) – the indices into tensor
Example:
>>> src = torch.tensor([[4, 3, 5],
... [6, 7, 8]])
>>> torch.take(src, torch.tensor([0, 2, 5]))
tensor([ 4, 5, 8]) | |
doc_1293 |
Convert argument to datetime. This function converts a scalar, array-like, Series or DataFrame/dict-like to a pandas datetime object.
Parameters
arg:int, float, str, datetime, list, tuple, 1-d array, Series, DataFrame/dict-like
The object to convert to a datetime. If a DataFrame is provided, the method expects minimally the following columns: "year", "month", "day".
errors:{‘ignore’, ‘raise’, ‘coerce’}, default ‘raise’
If 'raise', then invalid parsing will raise an exception. If 'coerce', then invalid parsing will be set as NaT. If 'ignore', then invalid parsing will return the input.
dayfirst:bool, default False
Specify a date parse order if arg is str or is list-like. If True, parses dates with the day first, e.g. "10/11/12" is parsed as 2012-11-10. Warning dayfirst=True is not strict, but will prefer to parse with day first. If a delimited date string cannot be parsed in accordance with the given dayfirst option, e.g. to_datetime(['31-12-2021']), then a warning will be shown.
yearfirst:bool, default False
Specify a date parse order if arg is str or is list-like. If True, parses dates with the year first, e.g. "10/11/12" is parsed as 2010-11-12. If both dayfirst and yearfirst are True, yearfirst takes precedence (same as dateutil). Warning yearfirst=True is not strict, but will prefer to parse with year first.
utc:bool, default None
Control timezone-related parsing, localization and conversion. If True, the function always returns a timezone-aware UTC-localized Timestamp, Series or DatetimeIndex. To do this, timezone-naive inputs are localized as UTC, while timezone-aware inputs are converted to UTC. If False (default), inputs will not be coerced to UTC. Timezone-naive inputs will remain naive, while timezone-aware ones will keep their time offsets. Limitations exist for mixed offsets (typically, daylight savings), see Examples section for details. See also: pandas general documentation about timezone conversion and localization.
format:str, default None
The strftime to parse time, e.g. "%d/%m/%Y". Note that "%f" will parse all the way up to nanoseconds. See strftime documentation for more information on choices.
exact:bool, default True
Control how format is used: If True, require an exact format match. If False, allow the format to match anywhere in the target string.
unit:str, default ‘ns’
The unit of the arg (D, s, ms, us, ns). The arg, an integer or float number, is interpreted as a count of these units relative to origin. For example, with unit='ms' and origin='unix' (the default), this would calculate the number of milliseconds to the unix epoch start.
infer_datetime_format:bool, default False
If True and no format is given, attempt to infer the format of the datetime strings based on the first non-NaN element, and if it can be inferred, switch to a faster method of parsing them. In some cases this can increase the parsing speed by ~5-10x.
origin:scalar, default ‘unix’
Define the reference date. The numeric values would be parsed as number of units (defined by unit) since this reference date. If 'unix' (or POSIX) time; origin is set to 1970-01-01. If 'julian', unit must be 'D', and origin is set to beginning of Julian Calendar. Julian day number 0 is assigned to the day starting at noon on January 1, 4713 BC. If Timestamp convertible, origin is set to Timestamp identified by origin.
cache:bool, default True
If True, use a cache of unique, converted dates to apply the datetime conversion. May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets. The cache is only used when there are at least 50 values. The presence of out-of-bounds values will render the cache unusable and may slow down parsing. Changed in version 0.25.0: changed default value from False to True. Returns
datetime
If parsing succeeded. Return type depends on input (types in parentheses correspond to fallback in case of unsuccessful timezone or out-of-range timestamp parsing):
scalar: Timestamp (or datetime.datetime)
array-like: DatetimeIndex (or Series with object dtype containing datetime.datetime)
Series: Series of datetime64 dtype (or Series of object dtype containing datetime.datetime)
DataFrame: Series of datetime64 dtype (or Series of object dtype containing datetime.datetime)
Raises
ParserError
When parsing a date from string fails. ValueError
When another datetime conversion error happens. For example when one of ‘year’, ‘month’, ‘day’ columns is missing in a DataFrame, or when a Timezone-aware datetime.datetime is found in an array-like of mixed time offsets, and utc=False.
See also
DataFrame.astype
Cast argument to a specified dtype. to_timedelta
Convert argument to timedelta. convert_dtypes
Convert dtypes.
Notes
Many input types are supported, and lead to different output types: scalars can be int, float, str, datetime object (from stdlib datetime module or numpy). They are converted to Timestamp when possible, otherwise they are converted to datetime.datetime. None/NaN/null scalars are converted to NaT. array-like can contain int, float, str, datetime objects. They are converted to DatetimeIndex when possible, otherwise they are converted to Index with object dtype, containing datetime.datetime. None/NaN/null entries are converted to NaT in both cases. Series are converted to Series with datetime64 dtype when possible, otherwise they are converted to Series with object dtype, containing datetime.datetime. None/NaN/null entries are converted to NaT in both cases. DataFrame/dict-like are converted to Series with datetime64 dtype. For each row a datetime is created from assembling the various dataframe columns. Column keys can be common abbreviations like [‘year’, ‘month’, ‘day’, ‘minute’, ‘second’, ‘ms’, ‘us’, ‘ns’] or plurals of the same. The following causes are responsible for datetime.datetime objects being returned (possibly inside an Index or a Series with object dtype) instead of a proper pandas designated type (Timestamp, DatetimeIndex or Series with datetime64 dtype): when any input element is before Timestamp.min or after Timestamp.max, see timestamp limitations. when utc=False (default) and the input is an array-like or Series containing mixed naive/aware datetime, or aware with mixed time offsets. Note that this happens in the (quite frequent) situation when the timezone has a daylight savings policy. In that case you may wish to use utc=True.
Examples
Handling various input formats
Assembling a datetime from multiple columns of a DataFrame. The keys can be common abbreviations like [‘year’, ‘month’, ‘day’, ‘minute’, ‘second’, ‘ms’, ‘us’, ‘ns’] or plurals of the same
>>> df = pd.DataFrame({'year': [2015, 2016],
... 'month': [2, 3],
... 'day': [4, 5]})
>>> pd.to_datetime(df)
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
Passing infer_datetime_format=True can often speed up parsing if the format is not exactly ISO 8601 but is still a regular format.
>>> s = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000'] * 1000)
>>> s.head()
0 3/11/2000
1 3/12/2000
2 3/13/2000
3 3/11/2000
4 3/12/2000
dtype: object
>>> %timeit pd.to_datetime(s, infer_datetime_format=True)
100 loops, best of 3: 10.4 ms per loop
>>> %timeit pd.to_datetime(s, infer_datetime_format=False)
1 loop, best of 3: 471 ms per loop
Using a unix epoch time
>>> pd.to_datetime(1490195805, unit='s')
Timestamp('2017-03-22 15:16:45')
>>> pd.to_datetime(1490195805433502912, unit='ns')
Timestamp('2017-03-22 15:16:45.433502912')
Warning For float arg, precision rounding might happen. To prevent unexpected behavior use a fixed-width exact type. Using a non-unix epoch origin
>>> pd.to_datetime([1, 2, 3], unit='D',
... origin=pd.Timestamp('1960-01-01'))
DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'],
dtype='datetime64[ns]', freq=None)
Non-convertible date/times
If a date does not meet the timestamp limitations, passing errors='ignore' will return the original input instead of raising any exception. Passing errors='coerce' will force an out-of-bounds date to NaT, in addition to forcing non-dates (or non-parseable dates) to NaT.
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')
datetime.datetime(1300, 1, 1, 0, 0)
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce')
NaT
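The exact parameter described above is not exercised elsewhere in these examples; a small hedged sketch (the input string is made up, and exact=False lets the format match anywhere inside it):
>>> pd.to_datetime("report generated 2021-03-01", format="%Y-%m-%d", exact=False)
Timestamp('2021-03-01 00:00:00')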
Timezones and time offsets
The default behaviour (utc=False) is as follows:
Timezone-naive inputs are converted to timezone-naive DatetimeIndex:
>>> pd.to_datetime(['2018-10-26 12:00', '2018-10-26 13:00:15'])
DatetimeIndex(['2018-10-26 12:00:00', '2018-10-26 13:00:15'],
dtype='datetime64[ns]', freq=None)
Timezone-aware inputs with constant time offset are converted to timezone-aware DatetimeIndex:
>>> pd.to_datetime(['2018-10-26 12:00 -0500', '2018-10-26 13:00 -0500'])
DatetimeIndex(['2018-10-26 12:00:00-05:00', '2018-10-26 13:00:00-05:00'],
dtype='datetime64[ns, pytz.FixedOffset(-300)]', freq=None)
However, timezone-aware inputs with mixed time offsets (for example issued from a timezone with daylight savings, such as Europe/Paris) are not successfully converted to a DatetimeIndex. Instead a simple Index containing datetime.datetime objects is returned:
>>> pd.to_datetime(['2020-10-25 02:00 +0200', '2020-10-25 04:00 +0100'])
Index([2020-10-25 02:00:00+02:00, 2020-10-25 04:00:00+01:00],
dtype='object')
A mix of timezone-aware and timezone-naive inputs is converted to a timezone-aware DatetimeIndex if the offsets of the timezone-aware are constant:
>>> from datetime import datetime
>>> pd.to_datetime(["2020-01-01 01:00 -01:00", datetime(2020, 1, 1, 3, 0)])
DatetimeIndex(['2020-01-01 01:00:00-01:00', '2020-01-01 02:00:00-01:00'],
dtype='datetime64[ns, pytz.FixedOffset(-60)]', freq=None)
Finally, mixing timezone-aware strings and datetime.datetime always raises an error, even if the elements all have the same time offset.
>>> from datetime import datetime, timezone, timedelta
>>> d = datetime(2020, 1, 1, 18, tzinfo=timezone(-timedelta(hours=1)))
>>> pd.to_datetime(["2020-01-01 17:00 -0100", d])
Traceback (most recent call last):
...
ValueError: Tz-aware datetime.datetime cannot be converted to datetime64
unless utc=True
Setting utc=True solves most of the above issues: Timezone-naive inputs are localized as UTC
>>> pd.to_datetime(['2018-10-26 12:00', '2018-10-26 13:00'], utc=True)
DatetimeIndex(['2018-10-26 12:00:00+00:00', '2018-10-26 13:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None)
Timezone-aware inputs are converted to UTC (the output represents the exact same datetime, but viewed from the UTC time offset +00:00).
>>> pd.to_datetime(['2018-10-26 12:00 -0530', '2018-10-26 12:00 -0500'],
... utc=True)
DatetimeIndex(['2018-10-26 17:30:00+00:00', '2018-10-26 17:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None)
Inputs can contain both naive and aware, string or datetime, the above rules still apply
>>> pd.to_datetime(['2018-10-26 12:00', '2018-10-26 12:00 -0530',
... datetime(2020, 1, 1, 18),
... datetime(2020, 1, 1, 18,
... tzinfo=timezone(-timedelta(hours=1)))],
... utc=True)
DatetimeIndex(['2018-10-26 12:00:00+00:00', '2018-10-26 17:30:00+00:00',
'2020-01-01 18:00:00+00:00', '2020-01-01 19:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None) | |
doc_1294 | See Migration guide for more details. tf.compat.v1.raw_ops.BoostedTreesSparseAggregateStats
tf.raw_ops.BoostedTreesSparseAggregateStats(
node_ids, gradients, hessians, feature_indices, feature_values, feature_shape,
max_splits, num_buckets, name=None
)
The summary stats contain gradients and hessians accumulated for each node, bucket and dimension id.
Args
node_ids A Tensor of type int32. int32; Rank 1 Tensor containing node ids for each example, shape [batch_size].
gradients A Tensor of type float32. float32; Rank 2 Tensor (shape=[batch_size, logits_dimension]) with gradients for each example.
hessians A Tensor of type float32. float32; Rank 2 Tensor (shape=[batch_size, hessian_dimension]) with hessians for each example.
feature_indices A Tensor of type int32. int32; Rank 2 indices of feature sparse Tensors (shape=[number of sparse entries, 2]). Number of sparse entries across all instances from the batch. The first value is the index of the instance, the second is dimension of the feature. The second axis can only have 2 values, i.e., the input dense version of Tensor can only be matrix.
feature_values A Tensor of type int32. int32; Rank 1 values of feature sparse Tensors (shape=[number of sparse entries]). Number of sparse entries across all instances from the batch.
feature_shape A Tensor of type int32. int32; Rank 1 dense shape of feature sparse Tensors (shape=[2]). The first axis can only have 2 values, [batch_size, feature_dimension].
max_splits An int that is >= 1. int; the maximum number of splits possible in the whole tree.
num_buckets An int that is >= 1. int; equals to the maximum possible value of bucketized feature + 1.
name A name for the operation (optional).
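A hedged invocation sketch on a tiny hypothetical batch (4 examples, one logit dimension, a single sparse feature); every value below is made up to satisfy the shape constraints described above:
import tensorflow as tf

node_ids = tf.constant([0, 0, 1, 1], dtype=tf.int32)            # shape [batch_size]
gradients = tf.constant([[0.1], [0.2], [0.3], [0.4]])           # shape [batch_size, 1]
hessians = tf.constant([[1.0], [1.0], [1.0], [1.0]])            # shape [batch_size, 1]
feature_indices = tf.constant([[0, 0], [2, 0]], dtype=tf.int32)  # (instance, dimension) pairs
feature_values = tf.constant([1, 3], dtype=tf.int32)             # bucketized values < num_buckets
feature_shape = tf.constant([4, 1], dtype=tf.int32)              # [batch_size, feature_dimension]
stats = tf.raw_ops.BoostedTreesSparseAggregateStats(
    node_ids=node_ids, gradients=gradients, hessians=hessians,
    feature_indices=feature_indices, feature_values=feature_values,
    feature_shape=feature_shape, max_splits=4, num_buckets=5)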
Returns A tuple of Tensor objects (stats_summary_indices, stats_summary_values, stats_summary_shape). stats_summary_indices A Tensor of type int32.
stats_summary_values A Tensor of type float32.
stats_summary_shape A Tensor of type int32. | |
doc_1295 |
Least squares polynomial fit. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. Fit a polynomial p(x) = p[0] * x**deg + ... + p[deg] of degree deg to points (x, y). Returns a vector of coefficients p that minimises the squared error in the order deg, deg-1, … 0. The Polynomial.fit class method is recommended for new code as it is more stable numerically. See the documentation of the method for more information.
Parameters
x : array_like, shape (M,)
x-coordinates of the M sample points (x[i], y[i]).
y : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column.
deg : int
Degree of the fitting polynomial
rcond : float, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases.
full : bool, optional
Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned.
w : array_like, shape (M,), optional
Weights. If not None, the weight w[i] applies to the unsquared residual y[i] - y_hat[i] at x[i]. Ideally the weights are chosen so that the errors of the products w[i]*y[i] all have the same variance. When using inverse-variance weighting, use w[i] = 1/sigma(y[i]). The default value is None.
cov : bool or str, optional
If given and not False, return not just the estimate but also its covariance matrix. By default, the covariance is scaled by chi2/dof, where dof = M - (deg + 1), i.e., the weights are presumed to be unreliable except in a relative sense and everything is scaled such that the reduced chi2 is unity. This scaling is omitted if cov='unscaled', as is relevant for the case that the weights are w = 1/sigma, with sigma known to be a reliable estimate of the uncertainty.
Returns
p : ndarray, shape (deg + 1,) or (deg + 1, K)
Polynomial coefficients, highest power first. If y was 2-D, the coefficients for k-th data set are in p[:,k].
residuals, rank, singular_values, rcond
These values are only returned if full == True.
residuals – sum of squared residuals of the least squares fit
rank – the effective rank of the scaled Vandermonde coefficient matrix
singular_values – singular values of the scaled Vandermonde coefficient matrix
rcond – value of rcond.
For more details, see numpy.linalg.lstsq.
V : ndarray, shape (M,M) or (M,M,K)
Present only if full == False and cov == True. The covariance matrix of the polynomial coefficient estimates. The diagonal of this matrix are the variance estimates for each coefficient. If y is a 2-D array, then the covariance matrix for the k-th data set is in V[:,:,k].
Warns
RankWarning
The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if full == False. The warnings can be turned off by
>>> import warnings
>>> warnings.simplefilter('ignore', np.RankWarning)
See also
polyval
Compute polynomial values.
linalg.lstsq
Computes a least-squares fit.
scipy.interpolate.UnivariateSpline
Computes spline fits.
Notes
Any masked values in x are propagated in y, and vice-versa. The solution minimizes the squared error \[E = \sum_{j=0}^k |p(x_j) - y_j|^2\] in the equations:
x[0]**n * p[0] + ... + x[0] * p[n-1] + p[n] = y[0]
x[1]**n * p[0] + ... + x[1] * p[n-1] + p[n] = y[1]
...
x[k]**n * p[0] + ... + x[k] * p[n-1] + p[n] = y[k]
The coefficient matrix of the coefficients p is a Vandermonde matrix. polyfit issues a RankWarning when the least-squares fit is badly conditioned. This implies that the best fit is not well-defined due to numerical error. The results may be improved by lowering the polynomial degree or by replacing x by x - x.mean(). The rcond parameter can also be set to a value smaller than its default, but the resulting fit may be spurious: including contributions from the small singular values can add numerical noise to the result. Note that fitting polynomial coefficients is inherently badly conditioned when the degree of the polynomial is large or the interval of sample points is badly centered. The quality of the fit should always be checked in these cases. When polynomial fits are not satisfactory, splines may be a good alternative.
References
[1] Wikipedia, “Curve fitting”, https://en.wikipedia.org/wiki/Curve_fitting
[2] Wikipedia, “Polynomial interpolation”, https://en.wikipedia.org/wiki/Polynomial_interpolation
Examples
>>> import warnings
>>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
>>> y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
>>> z = np.polyfit(x, y, 3)
>>> z
array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254]) # may vary
It is convenient to use poly1d objects for dealing with polynomials: >>> p = np.poly1d(z)
>>> p(0.5)
0.6143849206349179 # may vary
>>> p(3.5)
-0.34732142857143039 # may vary
>>> p(10)
22.579365079365115 # may vary
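The cov option documented above is not demonstrated in these examples; a brief hedged sketch of extracting coefficient uncertainties from the covariance matrix (output omitted since the values vary with the fit):
>>> z3, V = np.polyfit(x, y, 3, cov=True)
>>> np.sqrt(np.diag(V))  # one-sigma uncertainty estimate for each coefficient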
High-order polynomials may oscillate wildly: >>> with warnings.catch_warnings():
... warnings.simplefilter('ignore', np.RankWarning)
... p30 = np.poly1d(np.polyfit(x, y, 30))
...
>>> p30(4)
-0.80000000000000204 # may vary
>>> p30(5)
-0.99999999999999445 # may vary
>>> p30(4.5)
-0.10547061179440398 # may vary
Illustration: >>> import matplotlib.pyplot as plt
>>> xp = np.linspace(-2, 6, 100)
>>> _ = plt.plot(x, y, '.', xp, p(xp), '-', xp, p30(xp), '--')
>>> plt.ylim(-2,2)
(-2, 2)
>>> plt.show() | |
doc_1296 | See Migration guide for more details. tf.compat.v1.raw_ops.SegmentProd
tf.raw_ops.SegmentProd(
data, segment_ids, name=None
)
Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over j such that segment_ids[j] == i. If the product is empty for a given segment ID i, output[i] = 1. For example: c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
tf.segment_prod(c, tf.constant([0, 0, 1]))
# ==> [[4, 6, 6, 4],
# [5, 6, 7, 8]]
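The snippet above uses the legacy tf.segment_prod spelling; on current TensorFlow builds the same computation can presumably be written against the raw op documented here, or its public wrapper tf.math.segment_prod (a sketch):
import tensorflow as tf

c = tf.constant([[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]])
segment_ids = tf.constant([0, 0, 1])
tf.raw_ops.SegmentProd(data=c, segment_ids=segment_ids)  # raw op form
tf.math.segment_prod(c, segment_ids)                     # public API form
# Both yield [[4, 6, 6, 4], [5, 6, 7, 8]].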
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose size is equal to the size of data's first dimension. Values should be sorted and can be repeated.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | |
doc_1297 | See Migration guide for more details. tf.compat.v1.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters
tf.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters(
parameters, ms, mom, mg, num_shards, shard_id, table_id=-1,
table_name='', config='', name=None
)
An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.
Args
parameters A Tensor of type float32. Value of parameters used in the centered RMSProp optimization algorithm.
ms A Tensor of type float32. Value of ms used in the centered RMSProp optimization algorithm.
mom A Tensor of type float32. Value of mom used in the centered RMSProp optimization algorithm.
mg A Tensor of type float32. Value of mg used in the centered RMSProp optimization algorithm.
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns The created Operation. | |
doc_1298 | Given field_name as returned by parse() (see above), convert it to an object to be formatted. Returns a tuple (obj, used_key). The default version takes strings of the form defined in PEP 3101, such as “0[name]” or “label.title”. args and kwargs are as passed in to vformat(). The return value used_key has the same meaning as the key parameter to get_value(). | |
doc_1299 | ABC for sized iterable container classes. New in version 3.6. |