| _id | text | title |
|---|---|---|
doc_28200 |
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | |
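A minimal usage sketch (the LogisticRegression estimator and its C value are illustrative assumptions, not part of this entry):

```python
# Any scikit-learn estimator exposes get_params the same way.
from sklearn.linear_model import LogisticRegression

est = LogisticRegression(C=0.5)
params = est.get_params(deep=True)  # dict mapping parameter names to values
print(params["C"])                  # 0.5
```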
doc_28201 | See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_string
tf.compat.v1.flags.DEFINE_string(
name, default, help, flag_values=_flagvalues.FLAGS, **args
) | |
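A hedged sketch of defining and reading such a flag (the flag name and default are assumptions for illustration):

```python
import tensorflow.compat.v1 as tf

tf.flags.DEFINE_string("model_dir", "/tmp/model", "Directory for checkpoints.")
FLAGS = tf.flags.FLAGS

def main(_):
    # Flags are parsed by tf.app.run before main is invoked.
    print(FLAGS.model_dir)  # '/tmp/model' unless overridden on the command line

if __name__ == "__main__":
    tf.app.run(main)
```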
doc_28202 | Set the panel’s user pointer to obj. This is used to associate an arbitrary piece of data with the panel, and can be any Python object. | |
doc_28203 | Return an installation path corresponding to the path name, from the install scheme named scheme. name has to be a value from the list returned by get_path_names(). sysconfig stores installation paths corresponding to each path name, for each platform, with variables to be expanded. For instance the stdlib path for the nt scheme is: {base}/Lib. get_path() will use the variables returned by get_config_vars() to expand the path. All variables have default values for each platform so one may call this function and get the default value. If scheme is provided, it must be a value from the list returned by get_scheme_names(). Otherwise, the default scheme for the current platform is used. If vars is provided, it must be a dictionary of variables that will update the dictionary return by get_config_vars(). If expand is set to False, the path will not be expanded using the variables. If name is not found, return None. | |
doc_28204 | Create a new WSGI server listening on host and port, accepting connections for app. The return value is an instance of the supplied server_class, and will process requests using the specified handler_class. app must be a WSGI application object, as defined by PEP 3333. Example usage: from wsgiref.simple_server import make_server, demo_app
with make_server('', 8000, demo_app) as httpd:
print("Serving HTTP on port 8000...")
# Respond to requests until process is killed
httpd.serve_forever()
# Alternative: serve one request, then exit
httpd.handle_request() | |
doc_28205 |
Execute a call_function node and return the result. Parameters
target (Target) – The call target for this node. See Node for details on semantics
args (Tuple) – Tuple of positional args for this invocation
kwargs (Dict) – Dict of keyword arguments for this invocation Returns
Any: The value returned by the function invocation | |
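call_function is normally invoked internally while an fx Interpreter executes a traced graph; a hedged sketch of where it fits (the module being traced is an arbitrary assumption):

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0   # becomes call_function nodes in the graph

traced = torch.fx.symbolic_trace(M())
interp = torch.fx.Interpreter(traced)
out = interp.run(torch.randn(3))     # call_function runs once per such node
```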
doc_28206 | Like bind() but you can pass it an WSGI environment and it will fetch the information from that dictionary. Note that because of limitations in the protocol there is no way to get the current subdomain and real server_name from the environment. If you don’t provide it, Werkzeug will use SERVER_NAME and SERVER_PORT (or HTTP_HOST if provided) as used server_name with disabled subdomain feature. If subdomain is None but an environment and a server name is provided it will calculate the current subdomain automatically. Example: server_name is 'example.com' and the SERVER_NAME in the wsgi environ is 'staging.dev.example.com' the calculated subdomain will be 'staging.dev'. If the object passed as environ has an environ attribute, the value of this attribute is used instead. This allows you to pass request objects. Additionally PATH_INFO added as a default of the MapAdapter so that you don’t have to pass the path info to the match method. Changelog Changed in version 1.0.0: If the passed server name specifies port 443, it will match if the incoming scheme is https without a port. Changed in version 1.0.0: A warning is shown when the passed server name does not match the incoming WSGI server name. Changed in version 0.8: This will no longer raise a ValueError when an unexpected server name was passed. Changed in version 0.5: previously this method accepted a bogus calculate_subdomain parameter that did not have any effect. It was removed because of that. Parameters
environ (WSGIEnvironment) – a WSGI environment.
server_name (Optional[str]) – an optional server name hint (see above).
subdomain (Optional[str]) – optionally the current subdomain (see above). Return type
MapAdapter | |
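A minimal sketch of bind_to_environ with a hand-built environ (the rule and header values are illustrative assumptions):

```python
from werkzeug.routing import Map, Rule

url_map = Map([Rule("/item/<int:item_id>", endpoint="item")])
environ = {
    "REQUEST_METHOD": "GET",
    "PATH_INFO": "/item/7",
    "SERVER_NAME": "staging.dev.example.com",
    "SERVER_PORT": "80",
    "wsgi.url_scheme": "http",
}
adapter = url_map.bind_to_environ(environ, server_name="example.com")
print(adapter.subdomain)  # 'staging.dev', calculated automatically
print(adapter.match())    # ('item', {'item_id': 7}); PATH_INFO is the default path
```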
doc_28207 | tf.keras.layers.GlobalMaxPooling3D Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.GlobalMaxPool3D, tf.compat.v1.keras.layers.GlobalMaxPooling3D
tf.keras.layers.GlobalMaxPool3D(
data_format=None, **kwargs
)
Arguments
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape: If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)
Output shape: 2D tensor with shape (batch_size, channels). | |
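A quick shape check matching the input/output description above (the random input is an assumption for illustration):

```python
import tensorflow as tf

x = tf.random.normal((2, 4, 4, 4, 3))     # (batch, d1, d2, d3, channels)
y = tf.keras.layers.GlobalMaxPool3D()(x)  # channels_last by default
print(y.shape)                            # (2, 3)
```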
doc_28208 |
x-displacement of the part from the origin. | |
doc_28209 | With one argument, return the natural logarithm of x (to base e). With two arguments, return the logarithm of x to the given base, calculated as log(x)/log(base). | |
doc_28210 | See Migration guide for more details. tf.compat.v1.lbeta, tf.compat.v1.math.lbeta
tf.math.lbeta(
x, name=None
)
Given one-dimensional $z = [z_1,...,z_K]$, we define $$Beta(z) = \frac{\prod_j \Gamma(z_j)}{\Gamma(\sum_j z_j)},$$ where $\Gamma$ is the gamma function. And for $n + 1$ dimensional $x$ with shape $[N_1, ..., N_n, K]$, we define $$lbeta(x)[i_1, ..., i_n] = \log{|Beta(x[i_1, ..., i_n, :])|}.$$ In other words, the last dimension is treated as the $z$ vector. Note that if $z = [u, v]$, then $$Beta(z) = \frac{\Gamma(u)\Gamma(v)}{\Gamma(u + v)} = \int_0^1 t^{u-1} (1 - t)^{v-1} \mathrm{d}t,$$ which defines the traditional bivariate beta function. If the last dimension is empty, we follow the convention that the sum over the empty set is zero, and the product is one.
Args
x A rank n + 1 Tensor, n >= 0, with type float or double.
name A name for the operation (optional).
Returns The logarithm of \(|Beta(x)|\) reducing along the last dimension. | |
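A small sketch of the reduction along the last dimension (the input values are illustrative):

```python
import tensorflow as tf

x = tf.constant([[1.0, 1.0], [2.0, 2.0]])
# Beta([1, 1]) = 1, so log|Beta| = 0; Beta([2, 2]) = 1/6, so log|Beta| = -log(6).
print(tf.math.lbeta(x))  # approximately [0.0, -1.7917595]
```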
doc_28211 | See Migration guide for more details. tf.compat.v1.app.flags.MultiFlag
tf.compat.v1.flags.MultiFlag(
*args, **kwargs
)
The value of such a flag is a list that contains the individual values from all the appearances of that flag on the command-line. See the doc for Flag for most behavior of this class. Only differences in behavior are described here: The default value may be either a single value or an iterable of values. A single value is transformed into a single-item list of that value. The value of the flag is always a list, even if the option was only supplied once, and even if the default value is a single value.
Attributes
value
Methods flag_type
flag_type()
See base class. parse
parse(
arguments
)
Parses one or more arguments with the installed parser.
Args
arguments a single argument or a list of arguments (typically a list of default values); a single argument is converted internally into a list containing one item. serialize
serialize()
Serializes the flag. unparse
unparse()
__eq__
__eq__(
other
)
Return self==value. __ge__
__ge__(
other, NotImplemented=NotImplemented
)
Return a >= b. Computed by @total_ordering from (not a < b). __gt__
__gt__(
other, NotImplemented=NotImplemented
)
Return a > b. Computed by @total_ordering from (not a < b) and (a != b). __le__
__le__(
other, NotImplemented=NotImplemented
)
Return a <= b. Computed by @total_ordering from (a < b) or (a == b). __lt__
__lt__(
other
)
Return self<value. | |
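MultiFlag instances are normally created via the DEFINE_multi_* helpers rather than constructed directly; a hedged sketch (the flag name and default are assumptions):

```python
import tensorflow.compat.v1 as tf

tf.flags.DEFINE_multi_string("tag", ["default"], "May be repeated on the command line.")
FLAGS = tf.flags.FLAGS

def main(_):
    # Invoked as: prog --tag=a --tag=b  ->  FLAGS.tag == ['a', 'b']
    # With no --tag arguments, the single-item default list is used.
    print(FLAGS.tag)

if __name__ == "__main__":
    tf.app.run(main)
```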
doc_28212 | The HList widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines according to their places in the hierarchy. | |
doc_28213 | See Migration guide for more details. tf.compat.v1.train.CheckpointManager
tf.train.CheckpointManager(
checkpoint, directory, max_to_keep, keep_checkpoint_every_n_hours=None,
checkpoint_name='ckpt', step_counter=None, checkpoint_interval=None,
init_fn=None
)
Example usage: import tensorflow as tf
checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
manager = tf.train.CheckpointManager(
checkpoint, directory="/tmp/model", max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
while True:
# train
manager.save()
CheckpointManager preserves its own state across instantiations (see the __init__ documentation for details). Only one should be active in a particular directory at a time.
Args
checkpoint The tf.train.Checkpoint instance to save and manage checkpoints for.
directory The path to a directory in which to write checkpoints. A special file named "checkpoint" is also written to this directory (in a human-readable text format) which contains the state of the CheckpointManager.
max_to_keep An integer, the number of checkpoints to keep. Unless preserved by keep_checkpoint_every_n_hours, checkpoints will be deleted from the active set, oldest first, until only max_to_keep checkpoints remain. If None, no checkpoints are deleted and everything stays in the active set. Note that max_to_keep=None will keep all checkpoint paths in memory and in the checkpoint state protocol buffer on disk.
keep_checkpoint_every_n_hours Upon removal from the active set, a checkpoint will be preserved if it has been at least keep_checkpoint_every_n_hours since the last preserved checkpoint. The default setting of None does not preserve any checkpoints in this way.
checkpoint_name Custom name for the checkpoint file.
step_counter A tf.Variable instance for checking the current step counter value, in case users want to save checkpoints every N steps.
checkpoint_interval An integer, indicates the minimum step interval between two checkpoints.
init_fn Callable. A function to do customized initialization if no checkpoints are in the directory.
Raises
ValueError If max_to_keep is not a positive integer.
Attributes
checkpoint Returns the tf.train.Checkpoint object.
checkpoint_interval
checkpoints A list of managed checkpoints. Note that checkpoints saved due to keep_checkpoint_every_n_hours will not show up in this list (to avoid ever-growing filename lists).
directory
latest_checkpoint The prefix of the most recent checkpoint in directory. Equivalent to tf.train.latest_checkpoint(directory) where directory is the constructor argument to CheckpointManager. Suitable for passing to tf.train.Checkpoint.restore to resume training.
Methods restore_or_initialize View source
restore_or_initialize()
Restore items in checkpoint from the latest checkpoint file. This method will first try to restore from the most recent checkpoint in directory. If no checkpoints exist in directory, and init_fn is specified, this method will call init_fn to do customized initialization. This can be used to support initialization from pretrained models. Note that unlike tf.train.Checkpoint.restore(), this method doesn't return a load status object that users can run assertions on (e.g. assert_consumed()). Thus to run assertions, users should directly use tf.train.Checkpoint.restore() method.
Returns The restored checkpoint path if the latest checkpoint is found and restored. Otherwise None.
save View source
save(
checkpoint_number=None, check_interval=True
)
Creates a new checkpoint and manages it.
Args
checkpoint_number An optional integer, or an integer-dtype Variable or Tensor, used to number the checkpoint. If None (default), checkpoints are numbered using checkpoint.save_counter. Even if checkpoint_number is provided, save_counter is still incremented. A user-provided checkpoint_number is not incremented even if it is a Variable.
check_interval An optional boolean. The argument is only effective when checkpoint_interval is passed into the manager. If True, the manager will only save the checkpoint if the interval between checkpoints is larger than checkpoint_interval. Otherwise it will always save the checkpoint unless a checkpoint has already been saved for the current step.
Returns The path to the new checkpoint. It is also recorded in the checkpoints and latest_checkpoint properties. None if no checkpoint is saved. | |
doc_28214 | exception http.cookies.CookieError
Exception failing because of RFC 2109 invalidity: incorrect attributes, incorrect Set-Cookie header, etc.
class http.cookies.BaseCookie([input])
This class is a dictionary-like object whose keys are strings and whose values are Morsel instances. Note that upon setting a key to a value, the value is first converted to a Morsel containing the key and the value. If input is given, it is passed to the load() method.
class http.cookies.SimpleCookie([input])
This class derives from BaseCookie and overrides value_decode() and value_encode(). SimpleCookie supports strings as cookie values. When setting the value, SimpleCookie calls the builtin str() to convert the value to a string. Values received from HTTP are kept as strings.
See also
Module http.cookiejar
HTTP cookie handling for web clients. The http.cookiejar and http.cookies modules do not depend on each other.
RFC 2109 - HTTP State Management Mechanism
This is the state management specification implemented by this module. Cookie Objects
BaseCookie.value_decode(val)
Return a tuple (real_value, coded_value) from a string representation. real_value can be any type. This method does no decoding in BaseCookie — it exists so it can be overridden.
BaseCookie.value_encode(val)
Return a tuple (real_value, coded_value). val can be any type, but coded_value will always be converted to a string. This method does no encoding in BaseCookie — it exists so it can be overridden. In general, it should be the case that value_encode() and value_decode() are inverses on the range of value_decode.
BaseCookie.output(attrs=None, header='Set-Cookie:', sep='\r\n')
Return a string representation suitable to be sent as HTTP headers. attrs and header are sent to each Morsel’s output() method. sep is used to join the headers together, and is by default the combination '\r\n' (CRLF).
BaseCookie.js_output(attrs=None)
Return an embeddable JavaScript snippet, which, if run on a browser which supports JavaScript, will act the same as if the HTTP headers were sent. The meaning for attrs is the same as in output().
BaseCookie.load(rawdata)
If rawdata is a string, parse it as an HTTP_COOKIE and add the values found there as Morsels. If it is a dictionary, it is equivalent to: for k, v in rawdata.items():
cookie[k] = v
Morsel Objects
class http.cookies.Morsel
Abstract a key/value pair, which has some RFC 2109 attributes. Morsels are dictionary-like objects, whose set of keys is constant — the valid RFC 2109 attributes, which are expires, path, comment, domain, max-age, secure, version, httponly, and samesite. The attribute httponly specifies that the cookie is only transferred in HTTP requests, and is not accessible through JavaScript. This is intended to mitigate some forms of cross-site scripting. The attribute samesite specifies that the browser is not allowed to send the cookie along with cross-site requests. This helps to mitigate CSRF attacks. Valid values for this attribute are “Strict” and “Lax”. The keys are case-insensitive and their default value is ''. Changed in version 3.5: __eq__() now takes key and value into account. Changed in version 3.7: Attributes key, value and coded_value are read-only. Use set() for setting them. Changed in version 3.8: Added support for the samesite attribute.
Morsel.value
The value of the cookie.
Morsel.coded_value
The encoded value of the cookie — this is what should be sent.
Morsel.key
The name of the cookie.
Morsel.set(key, value, coded_value)
Set the key, value and coded_value attributes.
Morsel.isReservedKey(K)
Whether K is a member of the set of keys of a Morsel.
Morsel.output(attrs=None, header='Set-Cookie:')
Return a string representation of the Morsel, suitable to be sent as an HTTP header. By default, all the attributes are included, unless attrs is given, in which case it should be a list of attributes to use. header is by default "Set-Cookie:".
Morsel.js_output(attrs=None)
Return an embeddable JavaScript snippet, which, if run on a browser which supports JavaScript, will act the same as if the HTTP header was sent. The meaning for attrs is the same as in output().
Morsel.OutputString(attrs=None)
Return a string representing the Morsel, without any surrounding HTTP or JavaScript. The meaning for attrs is the same as in output().
Morsel.update(values)
Update the values in the Morsel dictionary with the values in the dictionary values. Raise an error if any of the keys in the values dict is not a valid RFC 2109 attribute. Changed in version 3.5: an error is raised for invalid keys.
Morsel.copy(value)
Return a shallow copy of the Morsel object. Changed in version 3.5: return a Morsel object instead of a dict.
Morsel.setdefault(key, value=None)
Raise an error if key is not a valid RFC 2109 attribute, otherwise behave the same as dict.setdefault().
Example The following example demonstrates how to use the http.cookies module. >>> from http import cookies
>>> C = cookies.SimpleCookie()
>>> C["fig"] = "newton"
>>> C["sugar"] = "wafer"
>>> print(C) # generate HTTP headers
Set-Cookie: fig=newton
Set-Cookie: sugar=wafer
>>> print(C.output()) # same thing
Set-Cookie: fig=newton
Set-Cookie: sugar=wafer
>>> C = cookies.SimpleCookie()
>>> C["rocky"] = "road"
>>> C["rocky"]["path"] = "/cookie"
>>> print(C.output(header="Cookie:"))
Cookie: rocky=road; Path=/cookie
>>> print(C.output(attrs=[], header="Cookie:"))
Cookie: rocky=road
>>> C = cookies.SimpleCookie()
>>> C.load("chips=ahoy; vienna=finger") # load from a string (HTTP header)
>>> print(C)
Set-Cookie: chips=ahoy
Set-Cookie: vienna=finger
>>> C = cookies.SimpleCookie()
>>> C.load('keebler="E=everybody; L=\\"Loves\\"; fudge=\\012;";')
>>> print(C)
Set-Cookie: keebler="E=everybody; L=\"Loves\"; fudge=\012;"
>>> C = cookies.SimpleCookie()
>>> C["oreo"] = "doublestuff"
>>> C["oreo"]["path"] = "/"
>>> print(C)
Set-Cookie: oreo=doublestuff; Path=/
>>> C = cookies.SimpleCookie()
>>> C["twix"] = "none for you"
>>> C["twix"].value
'none for you'
>>> C = cookies.SimpleCookie()
>>> C["number"] = 7 # equivalent to C["number"] = str(7)
>>> C["string"] = "seven"
>>> C["number"].value
'7'
>>> C["string"].value
'seven'
>>> print(C)
Set-Cookie: number=7
Set-Cookie: string=seven | |
doc_28215 | Return centered in a string of length width. Padding is done using the specified fillchar (default is an ASCII space). The original string is returned if width is less than or equal to len(s). | |
doc_28216 | The primary API method. It takes a format string and an arbitrary set of positional and keyword arguments. It is just a wrapper that calls vformat(). Changed in version 3.7: A format string argument is now positional-only. | |
doc_28217 |
Set a label that will be displayed in the legend. Parameters
s : object
s will be converted to a string by calling str. | |
doc_28218 | See Migration guide for more details. tf.compat.v1.raw_ops.InitializeTableFromTextFileV2
tf.raw_ops.InitializeTableFromTextFileV2(
table_handle, filename, key_index, value_index, vocab_size=-1,
delimiter='\t', name=None
)
It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the line split on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index. A value of -1 means use the line number (starting from zero) and expects int64. A value of -2 means use the whole line content and expects string. A value >= 0 means use that index (starting at zero) of the line split on delimiter.
Args
table_handle A Tensor of type resource. Handle to a table which will be initialized.
filename A Tensor of type string. Filename of a vocabulary text file.
key_index An int that is >= -2. Column index in a line to get the table key values from.
value_index An int that is >= -2. Column index that represents information of a line to get the table value values from.
vocab_size An optional int that is >= -1. Defaults to -1. Number of elements of the file, use -1 if unknown.
delimiter An optional string. Defaults to "\t". Delimiter to separate fields in a line.
name A name for the operation (optional).
Returns The created Operation. | |
doc_28219 | This function handles the exception described by info (a 3-tuple containing the result of sys.exc_info()), formatting its traceback as text and returning the result as a string. The optional argument context is the number of lines of context to display around the current line of source code in the traceback; this defaults to 5. | |
doc_28220 | Convert params into an XML-RPC request. or into a response if methodresponse is true. params can be either a tuple of arguments or an instance of the Fault exception class. If methodresponse is true, only a single value can be returned, meaning that params must be of length 1. encoding, if supplied, is the encoding to use in the generated XML; the default is UTF-8. Python’s None value cannot be used in standard XML-RPC; to allow using it via an extension, provide a true value for allow_none. | |
doc_28221 |
Compute surface area, given vertices & triangular faces Parameters
verts : (V, 3) array of floats
Array containing (x, y, z) coordinates for V unique mesh vertices.
faces : (F, 3) array of ints
List of length-3 lists of integers, referencing vertex coordinates as provided in verts. Returns
area : float
Surface area of mesh. Units now [coordinate units] ** 2. See also
skimage.measure.marching_cubes
skimage.measure.marching_cubes_classic
Notes The arguments expected by this function are the first two outputs from skimage.measure.marching_cubes. For unit correct output, ensure correct spacing was passed to skimage.measure.marching_cubes. This algorithm works properly only if the faces provided are all triangles. | |
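A minimal sketch chaining marching_cubes into this function, per the Notes above (the synthetic volume is an assumption):

```python
import numpy as np
from skimage import measure

volume = np.zeros((12, 12, 12))
volume[3:9, 3:9, 3:9] = 1.0                      # a solid cube
verts, faces, _, _ = measure.marching_cubes(volume, level=0.5)
area = measure.mesh_surface_area(verts, faces)   # in [coordinate units] ** 2
```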
doc_28222 |
Total bytes consumed by the elements of the array. Notes Does not include memory consumed by non-element attributes of the array object. Examples >>> x = np.zeros((3,5,2), dtype=np.complex128)
>>> x.nbytes
480
>>> np.prod(x.shape) * x.itemsize
480 | |
doc_28223 | The UUID was not generated in a multiprocessing-safe way. | |
doc_28224 | Run the cmd shell command. The limit argument sets the buffer limit for StreamReader wrappers for Process.stdout and Process.stderr (if subprocess.PIPE is passed to stdout and stderr arguments). Return a Process instance. See the documentation of loop.subprocess_shell() for other parameters. Important It is the application’s responsibility to ensure that all whitespace and special characters are quoted appropriately to avoid shell injection vulnerabilities. The shlex.quote() function can be used to properly escape whitespace and special shell characters in strings that are going to be used to construct shell commands. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. | |
doc_28225 |
Return an array formed from the elements of a at the given indices. Refer to numpy.take for full documentation. See also numpy.take
equivalent function | |
doc_28226 |
Call self as a function. | |
doc_28227 | tf.compat.v1.scatter_min(
ref, indices, updates, use_locking=False, name=None
)
This operation computes # Scalar indices
ref[indices, ...] = min(ref[indices, ...], updates[...])
# Vector indices (for each i)
ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...])
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...],
updates[i, ..., j, ...])
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions combine. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
ref A mutable Tensor. Must be one of the following types: half, bfloat16, float32, float64, int32, int64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to reduce into ref.
use_locking An optional bool. Defaults to False. If True, the update will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | |
doc_28228 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha: scalar or None
animated: bool
antialiased or aa: bool or None
capstyle: CapStyle or {'butt', 'projecting', 'round'}
clip_box: Bbox
clip_on: bool
clip_path: Patch or (Path, Transform) or None
closed: bool
color: color
data: unknown
edgecolor or ec: color or None
facecolor or fc: color or None
figure: Figure
fill: bool
gid: str
hatch: {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout: bool
joinstyle: JoinStyle or {'miter', 'round', 'bevel'}
label: object
linestyle or ls: {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw: float or None
path_effects: AbstractPathEffect
picker: None or bool or float or callable
rasterized: bool
sketch_params: (scale: float, length: float, randomness: float)
snap: bool or None
transform: Transform
url: str
visible: bool
xy: (N, 2) array-like
zorder: float | |
doc_28229 | Execute end of drag-and-drop functions. | |
doc_28230 | Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to property(), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable. Example: class DataSet:
def __init__(self, sequence_of_numbers):
self._data = tuple(sequence_of_numbers)
@cached_property
def stdev(self):
return statistics.stdev(self._data)
The mechanics of cached_property() are somewhat different from property(). A regular property blocks attribute writes unless a setter is defined. In contrast, a cached_property allows writes. The cached_property decorator only runs on lookups and only when an attribute of the same name doesn’t exist. When it does run, the cached_property writes to the attribute with the same name. Subsequent attribute reads and writes take precedence over the cached_property method and it works like a normal attribute. The cached value can be cleared by deleting the attribute. This allows the cached_property method to run again. Note, this decorator interferes with the operation of PEP 412 key-sharing dictionaries. This means that instance dictionaries can take more space than usual. Also, this decorator requires that the __dict__ attribute on each instance be a mutable mapping. This means it will not work with some types, such as metaclasses (since the __dict__ attributes on type instances are read-only proxies for the class namespace), and those that specify __slots__ without including __dict__ as one of the defined slots (as such classes don’t provide a __dict__ attribute at all). If a mutable mapping is not available or if space-efficient key sharing is desired, an effect similar to cached_property() can be achieved by stacking property() on top of cache():
import statistics
from functools import cache

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = sequence_of_numbers

    @property
    @cache
    def stdev(self):
        return statistics.stdev(self._data)
New in version 3.8. | |
doc_28231 |
Write an array to an NPY file, including a header. If the array is neither C-contiguous nor Fortran-contiguous AND the file_like object is not a real file object, this function will have to copy data in memory. Parameters
fp : file_like object
An open, writable file object, or similar object with a .write() method.
array : ndarray
The array to write to disk.
version : (int, int) or None, optional
The version number of the format. None means use the oldest supported version that is able to store the data. Default: None
allow_pickle : bool, optional
Whether to allow writing pickled data. Default: True
pickle_kwargs : dict, optional
Additional keyword arguments to pass to pickle.dump, excluding ‘protocol’. These are only useful when pickling objects in object arrays on Python 3 to Python 2 compatible format. Raises
ValueError
If the array cannot be persisted. This includes the case of allow_pickle=False and array being an object array. Various other errors
If the array contains Python objects as part of its dtype, the process of pickling them may raise various errors if the objects are not picklable. | |
doc_28232 |
New view of array with the same data. Note Passing None for dtype is different from omitting the parameter, since the former invokes dtype(None) which is an alias for dtype('float_'). Parameters
dtype : data-type or ndarray sub-class, optional
Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as a. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the type parameter).
type : Python type, optional
Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation. Notes a.view() is used two different ways: a.view(some_dtype) or a.view(dtype=some_dtype) constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. a.view(ndarray_subclass) or a.view(type=ndarray_subclass) just returns an instance of ndarray_subclass that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results. Examples >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])
Viewing array data using a different type and dtype: >>> y = x.view(dtype=np.int16, type=np.matrix)
>>> y
matrix([[513]], dtype=int16)
>>> print(type(y))
<class 'numpy.matrix'>
Creating a view on a structured array so it can be used in calculations >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
>>> xv = x.view(dtype=np.int8).reshape(-1,2)
>>> xv
array([[1, 2],
[3, 4]], dtype=int8)
>>> xv.mean(0)
array([2., 3.])
Making changes to the view changes the underlying array >>> xv[0,1] = 20
>>> x
array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')])
Using a view to convert an array to a recarray: >>> z = x.view(np.recarray)
>>> z.a
array([1, 3], dtype=int8)
Views share data: >>> x[0] = (9, 10)
>>> z[0]
(9, 10)
Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: >>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
>>> y = x[:, 0:2]
>>> y
array([[1, 2],
[4, 5]], dtype=int16)
>>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
Traceback (most recent call last):
...
ValueError: To change to a dtype of a different size, the array must be C-contiguous
>>> z = y.copy()
>>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
array([[(1, 2)],
[(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')]) | |
doc_28233 | Instance of the class to check the password. This will default to default_token_generator, it’s an instance of django.contrib.auth.tokens.PasswordResetTokenGenerator. | |
doc_28234 |
Return the underlying artist that actually defines some properties (e.g., color) of this artist. | |
doc_28235 |
Scalar method identical to the corresponding array attribute. Please see ndarray.conjugate. | |
doc_28236 | For backwards compatibility. Calls the run() method. | |
doc_28237 | tf.summary.flush(
writer=None, name=None
)
This operation blocks until that finishes.
Args
writer The tf.summary.SummaryWriter resource to flush. The thread default will be used if this parameter is None. Otherwise a tf.no_op is returned.
name A name for the operation (optional).
Returns The created tf.Operation. | |
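A hedged sketch of flushing a writer after recording a summary (the log directory and scalar are assumptions):

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/logs")
with writer.as_default():
    tf.summary.scalar("loss", 0.5, step=1)
tf.summary.flush(writer)  # block until pending summaries are written to disk
```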
doc_28238 |
Return the truth value of (x1 > x2) element-wise. Parameters
x1, x2 : array_like
Input arrays. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
where : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
out : ndarray or scalar
Output array, element-wise comparison of x1 and x2. Typically of type bool, unless dtype=object is passed. This is a scalar if both x1 and x2 are scalars. See also
greater_equal, less, less_equal, equal, not_equal
Examples >>> np.greater([4,2],[2,2])
array([ True, False])
The > operator can be used as a shorthand for np.greater on ndarrays. >>> a = np.array([4, 2])
>>> b = np.array([2, 2])
>>> a > b
array([ True, False]) | |
doc_28239 | Decode the Base16 encoded bytes-like object or ASCII string s and return the decoded bytes. Optional casefold is a flag specifying whether a lowercase alphabet is acceptable as input. For security purposes, the default is False. A binascii.Error is raised if s is incorrectly padded or if there are non-alphabet characters present in the input. | |
doc_28240 | The domain match rule that the session cookie will be valid for. If not set, the cookie will be valid for all subdomains of SERVER_NAME. If False, the cookie’s domain will not be set. Default: None | |
doc_28241 |
Call self as a function. | |
doc_28242 | Inode protection mode. | |
doc_28243 | copy.copy(x)
Return a shallow copy of x.
copy.deepcopy(x[, memo])
Return a deep copy of x.
exception copy.Error
Raised for module specific errors.
The difference between shallow and deep copying is only relevant for compound objects (objects that contain other objects, like lists or class instances): A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original. A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original. Two problems often exist with deep copy operations that don’t exist with shallow copy operations: Recursive objects (compound objects that, directly or indirectly, contain a reference to themselves) may cause a recursive loop. Because deep copy copies everything it may copy too much, such as data which is intended to be shared between copies. The deepcopy() function avoids these problems by: keeping a memo dictionary of objects already copied during the current copying pass; and letting user-defined classes override the copying operation or the set of components copied. This module does not copy types like module, method, stack trace, stack frame, file, socket, window, array, or any similar types. It does “copy” functions and classes (shallow and deeply), by returning the original object unchanged; this is compatible with the way these are treated by the pickle module. Shallow copies of dictionaries can be made using dict.copy(), and of lists by assigning a slice of the entire list, for example, copied_list = original_list[:]. Classes can use the same interfaces to control copying that they use to control pickling. See the description of module pickle for information on these methods. In fact, the copy module uses the registered pickle functions from the copyreg module. In order for a class to define its own copy implementation, it can define special methods __copy__() and __deepcopy__(). The former is called to implement the shallow copy operation; no additional arguments are passed. The latter is called to implement the deep copy operation; it is passed one argument, the memo dictionary. If the __deepcopy__() implementation needs to make a deep copy of a component, it should call the deepcopy() function with the component as first argument and the memo dictionary as second argument. See also
Module pickle
Discussion of the special methods used to support object state retrieval and restoration. | |
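A short sketch of the shallow-versus-deep distinction for a compound object:

```python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)    # new outer list, shared inner lists
deep = copy.deepcopy(original)   # fully independent copy

original[0].append(99)
print(shallow[0])  # [1, 2, 99]; the inner list is shared
print(deep[0])     # [1, 2]; unaffected
```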
doc_28244 | tf.compat.v1.estimator.BaselineClassifier(
model_dir=None, n_classes=2, weight_column=None, label_vocabulary=None,
optimizer='Ftrl', config=None,
loss_reduction=tf.compat.v1.losses.Reduction.SUM
)
This classifier ignores feature values and will learn to predict the average value of each label. For single-label problems, this will predict the probability distribution of the classes as seen in the labels. For multi-label problems, this will predict the fraction of examples that are positive for each class. Example:
# Build BaselineClassifier
classifier = tf.estimator.BaselineClassifier(n_classes=3)
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
# Fit model.
classifier.train(input_fn=input_fn_train)
# Evaluate cross entropy between the test and train labels.
loss = classifier.evaluate(input_fn=input_fn_eval)["loss"]
# predict outputs the probability distribution of the classes as seen in
# training.
predictions = classifier.predict(new_samples)
Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
Args
model_fn Model function. Follows the signature:
features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same.
labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None.
mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in params parameter. This allows to configure Estimators from hyper parameter tuning.
config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be same. If both are None, a temporary directory will be used.
config estimator.RunConfig configuration object.
params dict of hyper parameters that will be passed into model_fn. Keys are names of parameters, values are basic python types.
warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.
Raises
ValueError parameters of model_fn don't match params.
ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory that contains evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source
export_savedmodel(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, strip_default_attrs=False
)
Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
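As an illustration of the steps / max_steps semantics, a sketch (assuming a configured estimator and a train_input_fn):
estimator.train(input_fn=train_input_fn, steps=10)
estimator.train(input_fn=train_input_fn, steps=10)      # incremental: 20 steps in total
estimator.train(input_fn=train_input_fn, max_steps=10)  # no-op: the global step (20) already exceeds max_steps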
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps is <= 0. | |
doc_28245 | See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_alias
tf.compat.v1.flags.DEFINE_alias(
name, original_name, flag_values=_flagvalues.FLAGS, module_name=None
)
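For example (a sketch; the flag names are illustrative):
tf.compat.v1.flags.DEFINE_string('color', 'green', 'Output color.')
tf.compat.v1.flags.DEFINE_alias('colour', 'color')  # passing --colour now sets --color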
Args
name str, the flag name.
original_name str, the original flag name.
flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden.
module_name A string, the name of the module that defines this flag.
Returns a handle to defined flag.
Raises
flags.FlagError UnrecognizedFlagError: if the referenced flag doesn't exist. DuplicateFlagError: if the alias name has been used by some existing flag. | |
doc_28246 |
Helper decorator for implementing module-level __getattr__ as a class. This decorator must be used at the module toplevel as follows: @caching_module_getattr
class __getattr__: # The class *must* be named ``__getattr__``.
@property # Only properties are taken into account.
def name(self): ...
The __getattr__ class will be replaced by a __getattr__ function such that trying to access name on the module will resolve the corresponding property (which may be decorated e.g. with _api.deprecated for deprecating module globals). The properties are all implicitly cached. Moreover, a suitable AttributeError is generated and raised if no property with the given name exists. | |
doc_28247 |
Return the underlying artist that actually defines some properties (e.g., color) of this artist. | |
doc_28248 |
Control behavior of major tick locators. Because the locator is involved in autoscaling, autoscale_view is called automatically after the parameters are changed. Parameters
axis{'both', 'x', 'y'}, default: 'both'
The axis on which to operate.
tightbool or None, optional
Parameter passed to autoscale_view. Default is None, for no change. Other Parameters
**kwargs
Remaining keyword arguments are passed directly to the set_params() method of the locator. Supported keywords depend on the type of the locator. See for example set_params for the ticker.MaxNLocator used by default for linear axes. Examples When plotting small subplots, one might want to reduce the maximum number of ticks and use tight bounds, for example: ax.locator_params(tight=True, nbins=4) | |
doc_28249 | Alias for dim() | |
doc_28250 |
Draw a filled black rectangle from (x1, y1) to (x2, y2). | |
doc_28251 | Store a file in line mode. cmd should be an appropriate STOR command (see storbinary()). Lines are read until EOF from the file object fp (opened in binary mode) using its readline() method to provide the data to be stored. callback is an optional single parameter callable that is called on each line after it is sent. | |
doc_28252 | See Migration guide for more details. tf.compat.v1.SparseFeature, tf.compat.v1.io.SparseFeature
tf.io.SparseFeature(
index_key, value_key, dtype, size, already_sorted=False
)
Note, preferably use VarLenFeature (possibly in combination with a SequenceExample) in order to parse out SparseTensors instead of SparseFeature due to its simplicity. Closely mimicking the SparseTensor that will be obtained by parsing an Example with a SparseFeature config, a SparseFeature contains a value_key: The name of key for a Feature in the Example whose parsed Tensor will be the resulting SparseTensor.values. index_key: A list of names - one for each dimension in the resulting SparseTensor whose indices[i][dim] indicating the position of the i-th value in the dim dimension will be equal to the i-th value in the Feature with key named index_key[dim] in the Example. size: A list of ints for the resulting SparseTensor.dense_shape. For example, we can represent the following 2D SparseTensor SparseTensor(indices=[[3, 1], [20, 0]],
values=[0.5, -1.0],
dense_shape=[100, 3])
with an Example input proto features {
feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
feature { key: "ix0" value { int64_list { value: [ 3, 20 ] } } }
feature { key: "ix1" value { int64_list { value: [ 1, 0 ] } } }
}
and SparseFeature config with 2 index_keys SparseFeature(index_key=["ix0", "ix1"],
value_key="val",
dtype=tf.float32,
size=[100, 3])
Fields:
index_key: A single string name or a list of string names of index features. For each key the underlying feature's type must be int64 and its length must always match that of the value_key feature. To represent SparseTensors with a dense_shape of rank higher than 1 a list of length rank should be used.
value_key: Name of value feature. The underlying feature's type must be dtype and its length must always match that of all the index_keys' features.
dtype: Data type of the value_key feature.
size: A Python int or list thereof specifying the dense shape. Should be a list if and only if index_key is a list, in which case it must have the same length as index_key. For each entry i, all values in the index_key[i] feature must be in [0, size[i]).
already_sorted: A Python boolean to specify whether the values in value_key are already sorted by their index position. If so skip sorting. False by default (optional).
Attributes
index_key
value_key
dtype
size
already_sorted | |
doc_28253 | Parameters
x – a number or a pair/vector of numbers or a turtle instance
y – a number if x is a number, else None
Return the angle between the line from turtle position to position specified by (x,y), the vector or the other turtle. This depends on the turtle’s start orientation which depends on the mode - “standard”/”world” or “logo”. >>> turtle.goto(10, 10)
>>> turtle.towards(0,0)
225.0 | |
doc_28254 |
Internal event handler to draw the cursor when the mouse moves. | |
doc_28255 |
Solve a linear matrix equation, or system of linear scalar equations. Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b. Parameters
a(…, M, M) array_like
Coefficient matrix.
b{(…, M,), (…, M, K)}, array_like
Ordinate or “dependent variable” values. Returns
x{(…, M,), (…, M, K)} ndarray
Solution to the system a x = b. Returned shape is identical to b. Raises
LinAlgError
If a is singular or not square. See also scipy.linalg.solve
Similar function in SciPy. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. The solutions are computed using LAPACK routine _gesv. a must be square and of full-rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either is not true, use lstsq for the least-squares best “solution” of the system/equation. References 1
G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 22. Examples Solve the system of equations x0 + 2 * x1 = 1 and 3 * x0 + 5 * x1 = 2: >>> a = np.array([[1, 2], [3, 5]])
>>> b = np.array([1, 2])
>>> x = np.linalg.solve(a, b)
>>> x
array([-1., 1.])
Check that the solution is correct: >>> np.allclose(np.dot(a, x), b)
True | |
doc_28256 | Underlying file descriptor. | |
doc_28257 |
Abstract base class for classes used to interpolate on a triangular grid. Derived classes implement the following methods:
__call__(x, y), where x, y are array-like point coordinates of the same shape, and that returns a masked array of the same shape containing the interpolated z-values.
gradient(x, y), where x, y are array-like point coordinates of the same shape, and that returns a list of 2 masked arrays of the same shape containing the 2 derivatives of the interpolator (derivatives of interpolated z values with respect to x and y). | |
doc_28258 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_28259 | A string describing the arbitrary arguments passed to the command. The string is used in the usage text and error messages of the command. Defaults to 'label'. | |
doc_28260 | See Migration guide for more details. tf.compat.v1.keras.models.save_model
tf.keras.models.save_model(
model, filepath, overwrite=True, include_optimizer=True, save_format=None,
signatures=None, options=None, save_traces=True
)
See the Serialization and Saving guide for details. Usage:
model = tf.keras.Sequential([
tf.keras.layers.Dense(5, input_shape=(3,)),
tf.keras.layers.Softmax()])
model.save('/tmp/model')
loaded_model = tf.keras.models.load_model('/tmp/model')
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
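The same round trip using the HDF5 format instead (a sketch; requires the h5py package):
model.save('/tmp/model.h5', save_format='h5')
loaded_h5 = tf.keras.models.load_model('/tmp/model.h5')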
The SavedModel and HDF5 file contains: the model's configuration (topology) the model's weights the model's optimizer's state (if any) Thus models can be reinstantiated in the exact same state, without any of the code used for model definition or training. Note that the model weights may have different scoped names after being loaded. Scoped names include the model/layer names, such as "dense_1/kernel:0". It is recommended that you use the layer properties to access specific variables, e.g. model.get_layer("dense_1").kernel. SavedModel serialization format Keras SavedModel uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). The model config, weights, and optimizer are saved in the SavedModel. Additionally, for every Keras layer attached to the model, the SavedModel stores: the config and metadata -- e.g. name, dtype, trainable status traced call and loss functions, which are stored as TensorFlow subgraphs. The traced functions allow the SavedModel format to save and load custom layers without the original class definition. You can choose to not save the traced functions by disabling the save_traces option. This will decrease the time it takes to save the model and the amount of disk space occupied by the output SavedModel. If you enable this option, then you must provide all custom class definitions when loading the model. See the custom_objects argument in tf.keras.models.load_model.
Arguments
model Keras model instance to be saved.
filepath One of the following: String or pathlib.Path object, path where to save the model
h5py.File object where to save the model
overwrite Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
include_optimizer If True, save optimizer's state together.
save_format Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
signatures Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details.
options (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.
save_traces (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.
Raises
ImportError If save format is hdf5, and h5py is not available. | |
doc_28261 |
Determine subpixel position of corners. A statistical test decides whether the corner is defined as the intersection of two edges or a single peak. Depending on the classification result, the subpixel corner location is determined based on the local covariance of the grey-values. If the significance level for either statistical test is not sufficient, the corner cannot be classified, and the output subpixel position is set to NaN. Parameters
imagendarray
Input image.
corners(N, 2) ndarray
Corner coordinates (row, col).
window_sizeint, optional
Search window size for subpixel estimation.
alphafloat, optional
Significance level for corner classification. Returns
positions(N, 2) ndarray
Subpixel corner positions. NaN for “not classified” corners. References
1
Förstner, W., & Gülch, E. (1987, June). A fast operator for detection and precise location of distinct points, corners and centres of circular features. In Proc. ISPRS intercommission conference on fast processing of photogrammetric data (pp. 281-305). https://cseweb.ucsd.edu/classes/sp02/cse252/foerstner/foerstner.pdf
2
https://en.wikipedia.org/wiki/Corner_detection Examples >>> from skimage.feature import corner_harris, corner_peaks, corner_subpix
>>> img = np.zeros((10, 10))
>>> img[:5, :5] = 1
>>> img[5:, 5:] = 1
>>> img.astype(int)
array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1]])
>>> coords = corner_peaks(corner_harris(img), min_distance=2)
>>> coords_subpix = corner_subpix(img, coords, window_size=7)
>>> coords_subpix
array([[4.5, 4.5]]) | |
doc_28262 | tf.cos Compat aliases for migration See Migration guide for more details. tf.compat.v1.cos, tf.compat.v1.math.cos
tf.math.cos(
x, name=None
)
Given an input tensor, this function computes cosine of every element in the tensor. Input range is (-inf, inf) and output range is [-1,1]. If input lies outside the boundary, nan is returned. x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_28263 |
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. Returns
selfobject | |
doc_28264 | See torch.erfinv() | |
doc_28265 | Use RPOP authentication (similar to UNIX r-commands) to log into POP3 server. | |
doc_28266 |
Called when this tool gets used. This method is called by ToolManager.trigger_tool. Parameters
eventEvent
The canvas event that caused this tool to be called.
senderobject
Object that requested the tool to be triggered.
dataobject
Extra data. | |
doc_28267 | See Migration guide for more details. tf.compat.v1.sparse.mask, tf.compat.v1.sparse_mask
tf.sparse.mask(
a, mask_indices, name=None
)
Given an IndexedSlices instance a, returns another IndexedSlices that contains a subset of the slices of a. Only the slices at indices not specified in mask_indices are returned. This is useful when you need to extract a subset of slices in an IndexedSlices object. For example: # `a` contains slices at indices [12, 26, 37, 45] from a large tensor
# with shape [1000, 10]
a.indices # [12, 26, 37, 45]
tf.shape(a.values) # [4, 10]
# `b` will be the subset of `a` slices at its second and third indices, so
# we want to mask its first and last indices (which are at absolute
# indices 12, 45)
b = tf.sparse.mask(a, [12, 45])
b.indices # [26, 37]
tf.shape(b.values) # [2, 10]
Args
a An IndexedSlices instance.
mask_indices Indices of elements to mask.
name A name for the operation (optional).
Returns The masked IndexedSlices instance. | |
doc_28268 |
Return whether the artist is to be rasterized. | |
doc_28269 | REST_FRAMEWORK = {
'DEFAULT_THROTTLE_CLASSES': [
'rest_framework.throttling.AnonRateThrottle',
'rest_framework.throttling.UserRateThrottle'
],
'DEFAULT_THROTTLE_RATES': {
'anon': '100/day',
'user': '1000/day'
}
}
The rate descriptions used in DEFAULT_THROTTLE_RATES may include second, minute, hour or day as the throttle period. You can also set the throttling policy on a per-view or per-viewset basis, using the APIView class-based views. from rest_framework.response import Response
from rest_framework.throttling import UserRateThrottle
from rest_framework.views import APIView
class ExampleView(APIView):
throttle_classes = [UserRateThrottle]
def get(self, request, format=None):
content = {
'status': 'request was permitted'
}
return Response(content)
If you're using the @api_view decorator with function based views you can use the following decorator. @api_view(['GET'])
@throttle_classes([UserRateThrottle])
def example_view(request, format=None):
content = {
'status': 'request was permitted'
}
return Response(content)
It's also possible to set throttle classes for routes that are created using the @action decorator. Throttle classes set in this way will override any viewset level class settings. @action(detail=True, methods=["post"], throttle_classes=[UserRateThrottle])
def example_adhoc_method(request, pk=None):
content = {
'status': 'request was permitted'
}
return Response(content)
How clients are identified The X-Forwarded-For HTTP header and REMOTE_ADDR WSGI variable are used to uniquely identify client IP addresses for throttling. If the X-Forwarded-For header is present then it will be used, otherwise the value of the REMOTE_ADDR variable from the WSGI environment will be used. If you need to strictly identify unique client IP addresses, you'll need to first configure the number of application proxies that the API runs behind by setting the NUM_PROXIES setting. This setting should be an integer of zero or more. If set to non-zero then the client IP will be identified as being the last IP address in the X-Forwarded-For header, once any application proxy IP addresses have first been excluded. If set to zero, then the REMOTE_ADDR value will always be used as the identifying IP address. It is important to understand that if you configure the NUM_PROXIES setting, then all clients behind a unique NAT'd gateway will be treated as a single client. Further context on how the X-Forwarded-For header works, and identifying a remote client IP can be found here. Setting up the cache The throttle classes provided by REST framework use Django's cache backend. You should make sure that you've set appropriate cache settings. The default value of LocMemCache backend should be okay for simple setups. See Django's cache documentation for more details. If you need to use a cache other than 'default', you can do so by creating a custom throttle class and setting the cache attribute. For example: from django.core.cache import caches
class CustomAnonRateThrottle(AnonRateThrottle):
cache = caches['alternate']
You'll need to remember to also set your custom throttle class in the 'DEFAULT_THROTTLE_CLASSES' settings key, or using the throttle_classes view attribute. API Reference AnonRateThrottle The AnonRateThrottle will only ever throttle unauthenticated users. The IP address of the incoming request is used to generate a unique key to throttle against. The allowed request rate is determined from one of the following (in order of preference). The rate property on the class, which may be provided by overriding AnonRateThrottle and setting the property. The DEFAULT_THROTTLE_RATES['anon'] setting. AnonRateThrottle is suitable if you want to restrict the rate of requests from unknown sources. UserRateThrottle The UserRateThrottle will throttle users to a given rate of requests across the API. The user id is used to generate a unique key to throttle against. Unauthenticated requests will fall back to using the IP address of the incoming request to generate a unique key to throttle against. The allowed request rate is determined from one of the following (in order of preference). The rate property on the class, which may be provided by overriding UserRateThrottle and setting the property. The DEFAULT_THROTTLE_RATES['user'] setting. An API may have multiple UserRateThrottles in place at the same time. To do so, override UserRateThrottle and set a unique "scope" for each class. For example, multiple user throttle rates could be implemented by using the following classes... class BurstRateThrottle(UserRateThrottle):
scope = 'burst'
class SustainedRateThrottle(UserRateThrottle):
scope = 'sustained'
...and the following settings. REST_FRAMEWORK = {
'DEFAULT_THROTTLE_CLASSES': [
'example.throttles.BurstRateThrottle',
'example.throttles.SustainedRateThrottle'
],
'DEFAULT_THROTTLE_RATES': {
'burst': '60/min',
'sustained': '1000/day'
}
}
UserRateThrottle is suitable if you want simple global rate restrictions per-user. ScopedRateThrottle The ScopedRateThrottle class can be used to restrict access to specific parts of the API. This throttle will only be applied if the view that is being accessed includes a .throttle_scope property. The unique throttle key will then be formed by concatenating the "scope" of the request with the unique user id or IP address. The allowed request rate is determined by the DEFAULT_THROTTLE_RATES setting using a key from the request "scope". For example, given the following views... class ContactListView(APIView):
throttle_scope = 'contacts'
...
class ContactDetailView(APIView):
throttle_scope = 'contacts'
...
class UploadView(APIView):
throttle_scope = 'uploads'
...
...and the following settings. REST_FRAMEWORK = {
'DEFAULT_THROTTLE_CLASSES': [
'rest_framework.throttling.ScopedRateThrottle',
],
'DEFAULT_THROTTLE_RATES': {
'contacts': '1000/day',
'uploads': '20/day'
}
}
User requests to either ContactListView or ContactDetailView would be restricted to a total of 1000 requests per-day. User requests to UploadView would be restricted to 20 requests per day. Custom throttles To create a custom throttle, override BaseThrottle and implement .allow_request(self, request, view). The method should return True if the request should be allowed, and False otherwise. Optionally you may also override the .wait() method. If implemented, .wait() should return a recommended number of seconds to wait before attempting the next request, or None. The .wait() method will only be called if .allow_request() has previously returned False. If the .wait() method is implemented and the request is throttled, then a Retry-After header will be included in the response. Example The following is an example of a rate throttle, that will randomly throttle 1 in every 10 requests. import random
from rest_framework import throttling
class RandomRateThrottle(throttling.BaseThrottle):
def allow_request(self, request, view):
return random.randint(1, 10) != 1
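If the throttle should also tell clients how long to back off, the same idea can be extended with .wait(); a minimal sketch (the class name and the 10-second delay are illustrative):
class RandomRateThrottleWithWait(throttling.BaseThrottle):
    def allow_request(self, request, view):
        return random.randint(1, 10) != 1

    def wait(self):
        # Recommended seconds before retrying; REST framework uses this
        # value for the Retry-After header when the request is throttled.
        return 10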
| |
doc_28270 |
Perform fit on X and return labels for X. Returns -1 for outliers and 1 for inliers. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
yIgnored
Not used, present for API consistency by convention. Returns
yndarray of shape (n_samples,)
1 for inliers, -1 for outliers. | |
doc_28271 |
Transform a sequence of instances to a scipy.sparse matrix. Parameters
raw_Xiterable over iterable over raw features, length = n_samples
Samples. Each sample must be an iterable (e.g., a list or tuple) containing/generating feature names (and optionally values, see the input_type constructor argument) which will be hashed. raw_X need not support the len function, so it can be the result of a generator; n_samples is determined on the fly. Returns
Xsparse matrix of shape (n_samples, n_features)
Feature matrix, for use with estimators or further transformers. | |
doc_28272 | Compress data (a bytes object), returning a bytes object containing compressed data for at least part of the input. Some of data may be buffered internally, for use in later calls to compress() and flush(). The returned data should be concatenated with the output of any previous calls to compress(). | |
doc_28273 | tf.compat.v1.estimator.experimental.linear_logit_fn_builder(
units, feature_columns, sparse_combiner='sum'
)
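A minimal usage sketch (the feature column and unit count are illustrative):
feature_columns = [tf.feature_column.numeric_column('x', shape=(4,))]
logit_fn = tf.compat.v1.estimator.experimental.linear_logit_fn_builder(
    units=3, feature_columns=feature_columns)
# The returned logit_fn can then be called on a features dict inside a model_fn.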
Args
units An int indicating the dimension of the logit layer.
feature_columns An iterable containing all the feature columns used by the model.
sparse_combiner A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum".
Returns A logit_fn (see below). | |
doc_28274 |
Predict probability for each possible outcome. Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution). Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
probabilitiesndarray of shape (n_samples, n_classes)
Normalized probability distributions across class labels. | |
doc_28275 | An integer indicating how many dimensions of a multi-dimensional array the memory represents. | |
doc_28276 |
Wrapper class for quantized operations. The instance of this class can be used instead of the torch.ops.quantized prefix. See example usage below. Note This class does not provide a forward hook. Instead, you must use one of the underlying functions (e.g. add). Examples: >>> q_add = QFunctional()
>>> a = torch.quantize_per_tensor(torch.tensor(3.0), 1.0, 0, torch.qint32)
>>> b = torch.quantize_per_tensor(torch.tensor(4.0), 1.0, 0, torch.qint32)
>>> q_add.add(a, b) # Equivalent to ``torch.ops.quantized.add(a, b, 1.0, 0)``
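The other listed operations follow the same pattern; for instance (a sketch reusing the quantized tensors above):
>>> q_func = QFunctional()
>>> q_func.mul(a, b)  # Equivalent to ``torch.ops.quantized.mul(a, b, 1.0, 0)``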
Valid operation names:
add, cat, mul, add_relu, add_scalar, mul_scalar | |
doc_28277 |
The quarter of the date. | |
doc_28278 | In-place version of logical_or() | |
doc_28279 | Whether the OpenSSL library has built-in support for the Elliptic Curve-based Diffie-Hellman key exchange. This should be true unless the feature was explicitly disabled by the distributor. New in version 3.3. | |
doc_28280 | Solves a linear system of equations with a positive semidefinite matrix to be inverted given its Cholesky factor matrix u. If upper is False, u is lower triangular and c is returned such that: c = (u u^T)^{-1} b
If upper is True or not provided, u is upper triangular and c is returned such that: c = (u^T u)^{-1} b
torch.cholesky_solve(b, u) can take in 2D inputs b, u or inputs that are batches of 2D matrices. If the inputs are batches, then returns batched outputs c. Supports real-valued and complex-valued inputs. For the complex-valued inputs the transpose operator above is the conjugate transpose. Parameters
input (Tensor) – input matrix b of size (*, m, k), where * is zero or more batch dimensions
input2 (Tensor) – input matrix u of size (*, m, m), where * is zero or more batch dimensions, composed of upper or lower triangular Cholesky factors
upper (bool, optional) – whether to consider the Cholesky factor as a lower or upper triangular matrix. Default: False. Keyword Arguments
out (Tensor, optional) – the output tensor for c Example: >>> a = torch.randn(3, 3)
>>> a = torch.mm(a, a.t()) # make symmetric positive definite
>>> u = torch.cholesky(a)
>>> a
tensor([[ 0.7747, -1.9549, 1.3086],
[-1.9549, 6.7546, -5.4114],
[ 1.3086, -5.4114, 4.8733]])
>>> b = torch.randn(3, 2)
>>> b
tensor([[-0.6355, 0.9891],
[ 0.1974, 1.4706],
[-0.4115, -0.6225]])
>>> torch.cholesky_solve(b, u)
tensor([[ -8.1625, 19.6097],
[ -5.8398, 14.2387],
[ -4.3771, 10.4173]])
>>> torch.mm(a.inverse(), b)
tensor([[ -8.1626, 19.6097],
[ -5.8398, 14.2387],
[ -4.3771, 10.4173]]) | |
doc_28281 |
Return x, y values at equally spaced points in domain. Returns the x, y values at n linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid. New in version 1.5.0. Parameters
nint, optional
Number of point pairs to return. The default value is 100.
domain{None, array_like}, optional
If not None, the specified domain is used instead of that of the calling instance. It should be of the form [beg,end]. The default is None, in which case the class domain is used. Returns
x, yndarray
x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x. | |
doc_28282 |
Creates a callable object to retrieve events in a blocking way for interactive sessions. Base class of the other classes listed here. BlockingKeyMouseInput
Creates a callable object to retrieve key or mouse clicks in a blocking way for interactive sessions. Used by waitforbuttonpress. BlockingMouseInput
Creates a callable object to retrieve mouse clicks in a blocking way for interactive sessions. Used by ginput. BlockingContourLabeler
Creates a callable object to retrieve mouse clicks in a blocking way that will then be used to place labels on a ContourSet. Used by clabel. classmatplotlib.blocking_input.BlockingContourLabeler(cs)[source]
Bases: matplotlib.blocking_input.BlockingMouseInput Callable for retrieving mouse clicks and key presses in a blocking way. Used to place contour labels. add_click(event)[source]
Add the coordinates of an event to the list of clicks. Parameters
eventMouseEvent
button1(event)[source]
Process a button-1 event (add a label to a contour). Parameters
eventMouseEvent
button3(event)[source]
Process a button-3 event (remove a label if not in inline mode). Unfortunately, if one is doing inline labels, then there is currently no way to fix the broken contour - once humpty-dumpty is broken, he can't be put back together. In inline mode, this does nothing. Parameters
eventMouseEvent
pop_click(event, index=-1)[source]
Remove a click (by default, the last) from the list of clicks. Parameters
eventMouseEvent
classmatplotlib.blocking_input.BlockingInput(fig, eventslist=())[source]
Bases: object Callable for retrieving events in a blocking way. add_event(event)[source]
For base class, this just appends an event to events.
cleanup()[source]
Disconnect all callbacks.
on_event(event)[source]
Event handler; will be passed to the current figure to retrieve events.
pop(index=-1)[source]
Remove an event from the event list -- by default, the last. Note that this does not check that there are events, much like the normal pop method. If no events exist, this will throw an exception.
pop_event(index=-1)[source]
Remove an event from the event list -- by default, the last. Note that this does not check that there are events, much like the normal pop method. If no events exist, this will throw an exception.
post_event()[source]
For baseclass, do nothing but collect events.
classmatplotlib.blocking_input.BlockingKeyMouseInput(fig)[source]
Bases: matplotlib.blocking_input.BlockingInput Callable for retrieving mouse clicks and key presses in a blocking way. post_event()[source]
Determine if it is a key event.
classmatplotlib.blocking_input.BlockingMouseInput(fig, mouse_add=MouseButton.LEFT, mouse_pop=MouseButton.RIGHT, mouse_stop=MouseButton.MIDDLE)[source]
Bases: matplotlib.blocking_input.BlockingInput Callable for retrieving mouse clicks in a blocking way. This class will also retrieve keypresses and map them to mouse clicks: delete and backspace are a right click, enter is like a middle click, and all others are like a left click. add_click(event)[source]
Add the coordinates of an event to the list of clicks. Parameters
eventMouseEvent
button_add=1[source]
button_pop=3[source]
button_stop=2[source]
cleanup(event=None)[source]
Parameters
eventMouseEvent, optional
Not used
key_event()[source]
Process a key press event, mapping keys to appropriate mouse clicks.
mouse_event()[source]
Process a mouse click event.
mouse_event_add(event)[source]
Process a button-1 event (add a click if inside axes). Parameters
eventMouseEvent
mouse_event_pop(event)[source]
Process an button-3 event (remove the last click). Parameters
eventMouseEvent
mouse_event_stop(event)[source]
Process an button-2 event (end blocking input). Parameters
eventMouseEvent
pop(event, index=-1)[source]
Remove a click and the associated event from the list of clicks. Defaults to the last click.
pop_click(event, index=-1)[source]
Remove a click (by default, the last) from the list of clicks. Parameters
eventMouseEvent
post_event()[source]
Process an event. | |
doc_28283 | Read n items (as machine values) from the file object f and append them to the end of the array. If less than n items are available, EOFError is raised, but the items that were available are still inserted into the array. | |
doc_28284 | In-place version of rrelu(). | |
doc_28285 | Return an iterator of all values associated with a key. Zipping keys() and this is the same as calling lists(): >>> d = MultiDict({"foo": [1, 2, 3]})
>>> list(zip(d.keys(), d.listvalues())) == list(d.lists())
True | |
doc_28286 |
Return a copy. Returns
new_seriesseries
Copy of self. | |
doc_28287 | Open a resource file relative to root_path for reading. For example, if the file schema.sql is next to the file app.py where the Flask app is defined, it can be opened with: with app.open_resource("schema.sql") as f:
conn.executescript(f.read())
Parameters
resource (str) – Path to the resource relative to root_path.
mode (str) – Open the file in this mode. Only reading is supported, valid values are “r” (or “rt”) and “rb”. Return type
IO | |
doc_28288 |
This decorator indicates to the compiler that a function or method should be ignored and left as a Python function. This allows you to leave code in your model that is not yet TorchScript compatible. If called from TorchScript, ignored functions will dispatch the call to the Python interpreter. Models with ignored functions cannot be exported; use @torch.jit.unused instead. Example (using @torch.jit.ignore on a method): import torch
import torch.nn as nn
class MyModule(nn.Module):
@torch.jit.ignore
def debugger(self, x):
import pdb
pdb.set_trace()
def forward(self, x):
x += 10
# The compiler would normally try to compile `debugger`,
# but since it is `@ignore`d, it will be left as a call
# to Python
self.debugger(x)
return x
m = torch.jit.script(MyModule())
# Error! The call `debugger` cannot be saved since it calls into Python
m.save("m.pt")
Example (using @torch.jit.ignore(drop=True) on a method): import torch
import torch.nn as nn
class MyModule(nn.Module):
@torch.jit.ignore(drop=True)
def training_method(self, x):
import pdb
pdb.set_trace()
def forward(self, x):
if self.training:
self.training_method(x)
return x
m = torch.jit.script(MyModule())
# This is OK since `training_method` is not saved, the call is replaced
# with a `raise`.
m.save("m.pt") | |
doc_28289 |
Provide expanding window calculations. Parameters
min_periods:int, default 1
Minimum number of observations in window required to have a value; otherwise, result is np.nan.
center:bool, default False
If False, set the window labels as the right edge of the window index. If True, set the window labels as the center of the window index. Deprecated since version 1.1.0.
axis:int or str, default 0
If 0 or 'index', roll across the rows. If 1 or 'columns', roll across the columns.
method:str {‘single’, ‘table’}, default ‘single’
Execute the rolling operation per single column or row ('single') or over the entire object ('table'). This argument is only implemented when specifying engine='numba' in the method call. New in version 1.3.0. Returns
Expanding subclass
See also rolling
Provides rolling window calculations. ewm
Provides exponential weighted functions. Notes See Windowing Operations for further usage details and examples. Examples
>>> df = pd.DataFrame({"B": [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
min_periods Expanding sum with 1 vs 3 observations needed to calculate a value.
>>> df.expanding(1).sum()
B
0 0.0
1 1.0
2 3.0
3 3.0
4 7.0
>>> df.expanding(3).sum()
B
0 NaN
1 NaN
2 3.0
3 3.0
4 7.0 | |
doc_28290 | Creates a ContentRange object from the current range and given content length. | |
doc_28291 | Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters
name (string) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter) – parameter to be added to the module. | |
doc_28292 | sklearn.datasets.get_data_home(data_home=None) → str[source]
Return the path of the scikit-learn data dir. This folder is used by some large dataset loaders to avoid downloading the data several times. By default the data dir is set to a folder named ‘scikit_learn_data’ in the user home folder. Alternatively, it can be set by the ‘SCIKIT_LEARN_DATA’ environment variable or programmatically by giving an explicit folder path. The ‘~’ symbol is expanded to the user home folder. If the folder does not already exist, it is automatically created. Parameters
data_homestr, default=None
The path to the scikit-learn data directory. If None, the default path is ~/scikit_learn_data.
Examples using sklearn.datasets.get_data_home
Out-of-core classification of text documents | |
doc_28293 | Decode the Base32 encoded bytes-like object or ASCII string s and return the decoded bytes. Optional casefold is a flag specifying whether a lowercase alphabet is acceptable as input. For security purposes, the default is False. RFC 3548 allows for optional mapping of the digit 0 (zero) to the letter O (oh), and for optional mapping of the digit 1 (one) to either the letter I (eye) or letter L (el). The optional argument map01 when not None, specifies which letter the digit 1 should be mapped to (when map01 is not None, the digit 0 is always mapped to the letter O). For security purposes the default is None, so that 0 and 1 are not allowed in the input. A binascii.Error is raised if s is incorrectly padded or if there are non-alphabet characters present in the input. | |
doc_28294 | from myapp.serializers import PurchaseSerializer
from rest_framework import generics
class PurchaseList(generics.ListAPIView):
serializer_class = PurchaseSerializer
def get_queryset(self):
"""
This view should return a list of all the purchases
for the currently authenticated user.
"""
user = self.request.user
return Purchase.objects.filter(purchaser=user)
Filtering against the URL Another style of filtering might involve restricting the queryset based on some part of the URL. For example if your URL config contained an entry like this: re_path('^purchases/(?P<username>.+)/$', PurchaseList.as_view()),
You could then write a view that returned a purchase queryset filtered by the username portion of the URL: class PurchaseList(generics.ListAPIView):
serializer_class = PurchaseSerializer
def get_queryset(self):
"""
This view should return a list of all the purchases for
the user as determined by the username portion of the URL.
"""
username = self.kwargs['username']
return Purchase.objects.filter(purchaser__username=username)
Filtering against query parameters A final example of filtering the initial queryset would be to determine the initial queryset based on query parameters in the url. We can override .get_queryset() to deal with URLs such as http://example.com/api/purchases?username=denvercoder9, and filter the queryset only if the username parameter is included in the URL: class PurchaseList(generics.ListAPIView):
serializer_class = PurchaseSerializer
def get_queryset(self):
"""
Optionally restricts the returned purchases to a given user,
by filtering against a `username` query parameter in the URL.
"""
queryset = Purchase.objects.all()
username = self.request.query_params.get('username')
if username is not None:
queryset = queryset.filter(purchaser__username=username)
return queryset
Generic Filtering As well as being able to override the default queryset, REST framework also includes support for generic filtering backends that allow you to easily construct complex searches and filters. Generic filters can also present themselves as HTML controls in the browsable API and admin API. Setting filter backends The default filter backends may be set globally, using the DEFAULT_FILTER_BACKENDS setting. For example. REST_FRAMEWORK = {
'DEFAULT_FILTER_BACKENDS': ['django_filters.rest_framework.DjangoFilterBackend']
}
You can also set the filter backends on a per-view, or per-viewset basis, using the GenericAPIView class-based views. import django_filters.rest_framework
from django.contrib.auth.models import User
from myapp.serializers import UserSerializer
from rest_framework import generics
class UserListView(generics.ListAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
filter_backends = [django_filters.rest_framework.DjangoFilterBackend]
Filtering and object lookups Note that if a filter backend is configured for a view, then as well as being used to filter list views, it will also be used to filter the querysets used for returning a single object. For instance, given the previous example, and a product with an id of 4675, the following URL would either return the corresponding object, or return a 404 response, depending on if the filtering conditions were met by the given product instance: http://example.com/api/products/4675/?category=clothing&max_price=10.00
Overriding the initial queryset Note that you can use both an overridden .get_queryset() and generic filtering together, and everything will work as expected. For example, if Product had a many-to-many relationship with User, named purchase, you might want to write a view like this: class PurchasedProductsList(generics.ListAPIView):
"""
Return a list of all the products that the authenticated
user has ever purchased, with optional filtering.
"""
model = Product
serializer_class = ProductSerializer
filterset_class = ProductFilter
def get_queryset(self):
user = self.request.user
return user.purchase_set.all()
API Guide DjangoFilterBackend The django-filter library includes a DjangoFilterBackend class which supports highly customizable field filtering for REST framework. To use DjangoFilterBackend, first install django-filter. pip install django-filter
Then add 'django_filters' to Django's INSTALLED_APPS: INSTALLED_APPS = [
...
'django_filters',
...
]
You should now either add the filter backend to your settings: REST_FRAMEWORK = {
'DEFAULT_FILTER_BACKENDS': ['django_filters.rest_framework.DjangoFilterBackend']
}
Or add the filter backend to an individual View or ViewSet. from django_filters.rest_framework import DjangoFilterBackend
class UserListView(generics.ListAPIView):
...
filter_backends = [DjangoFilterBackend]
If all you need is simple equality-based filtering, you can set a filterset_fields attribute on the view, or viewset, listing the set of fields you wish to filter against. class ProductList(generics.ListAPIView):
queryset = Product.objects.all()
serializer_class = ProductSerializer
filter_backends = [DjangoFilterBackend]
filterset_fields = ['category', 'in_stock']
This will automatically create a FilterSet class for the given fields, and will allow you to make requests such as: http://example.com/api/products?category=clothing&in_stock=True
For more advanced filtering requirements you can specify a FilterSet class that should be used by the view. You can read more about FilterSets in the django-filter documentation. It's also recommended that you read the section on DRF integration. SearchFilter The SearchFilter class supports simple single query parameter based searching, and is based on the Django admin's search functionality. When in use, the browsable API will include a SearchFilter control: The SearchFilter class will only be applied if the view has a search_fields attribute set. The search_fields attribute should be a list of names of text type fields on the model, such as CharField or TextField. from rest_framework import filters
class UserListView(generics.ListAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
filter_backends = [filters.SearchFilter]
search_fields = ['username', 'email']
This will allow the client to filter the items in the list by making queries such as: http://example.com/api/users?search=russell
You can also perform a related lookup on a ForeignKey or ManyToManyField with the lookup API double-underscore notation: search_fields = ['username', 'email', 'profile__profession']
For JSONField and HStoreField fields you can filter based on nested values within the data structure using the same double-underscore notation: search_fields = ['data__breed', 'data__owner__other_pets__0__name']
By default, searches will use case-insensitive partial matches. The search parameter may contain multiple search terms, which should be whitespace and/or comma separated. If multiple search terms are used then objects will be returned in the list only if all the provided terms are matched. The search behavior may be restricted by prepending various characters to the search_fields. '^' Starts-with search. '=' Exact matches. '@' Full-text search. (Currently only supported Django's PostgreSQL backend.) '$' Regex search. For example: search_fields = ['=username', '=email']
By default, the search parameter is named 'search', but this may be overridden with the SEARCH_PARAM setting. To dynamically change search fields based on request content, it's possible to subclass the SearchFilter and override the get_search_fields() function. For example, the following subclass will only search on title if the query parameter title_only is in the request: from rest_framework import filters
class CustomSearchFilter(filters.SearchFilter):
def get_search_fields(self, view, request):
if request.query_params.get('title_only'):
return ['title']
return super(CustomSearchFilter, self).get_search_fields(view, request)
For more details, see the Django documentation. OrderingFilter The OrderingFilter class supports simple query parameter controlled ordering of results. By default, the query parameter is named 'ordering', but this may be overridden with the ORDERING_PARAM setting. For example, to order users by username: http://example.com/api/users?ordering=username
The client may also specify reverse orderings by prefixing the field name with '-', like so: http://example.com/api/users?ordering=-username
Multiple orderings may also be specified: http://example.com/api/users?ordering=account,username
Specifying which fields may be ordered against It's recommended that you explicitly specify which fields the API should allow in the ordering filter. You can do this by setting an ordering_fields attribute on the view, like so: class UserListView(generics.ListAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
filter_backends = [filters.OrderingFilter]
ordering_fields = ['username', 'email']
This helps prevent unexpected data leakage, such as allowing users to order against a password hash field or other sensitive data. If you don't specify an ordering_fields attribute on the view, the filter class will default to allowing the user to filter on any readable fields on the serializer specified by the serializer_class attribute. If you are confident that the queryset being used by the view doesn't contain any sensitive data, you can also explicitly specify that a view should allow ordering on any model field or queryset aggregate, by using the special value '__all__'. class BookingsListView(generics.ListAPIView):
queryset = Booking.objects.all()
serializer_class = BookingSerializer
filter_backends = [filters.OrderingFilter]
ordering_fields = '__all__'
Specifying a default ordering If an ordering attribute is set on the view, this will be used as the default ordering. Typically you'd instead control this by setting order_by on the initial queryset, but using the ordering parameter on the view allows you to specify the ordering in a way that it can then be passed automatically as context to a rendered template. This makes it possible to automatically render column headers differently if they are being used to order the results. class UserListView(generics.ListAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
filter_backends = [filters.OrderingFilter]
ordering_fields = ['username', 'email']
ordering = ['username']
The ordering attribute may be either a string or a list/tuple of strings. Custom generic filtering You can also provide your own generic filtering backend, or write an installable app for other developers to use. To do so override BaseFilterBackend, and override the .filter_queryset(self, request, queryset, view) method. The method should return a new, filtered queryset. As well as allowing clients to perform searches and filtering, generic filter backends can be useful for restricting which objects should be visible to any given request or user. Example For example, you might need to restrict users to only being able to see objects they created. class IsOwnerFilterBackend(filters.BaseFilterBackend):
"""
Filter that only allows users to see their own objects.
"""
def filter_queryset(self, request, queryset, view):
return queryset.filter(owner=request.user)
We could achieve the same behavior by overriding get_queryset() on the views, but using a filter backend allows you to more easily add this restriction to multiple views, or to apply it across the entire API. Customizing the interface Generic filters may also present an interface in the browsable API. To do so you should implement a to_html() method which returns a rendered HTML representation of the filter. This method should have the following signature: to_html(self, request, queryset, view) The method should return a rendered HTML string. Filtering & schemas You can also make the filter controls available to the schema autogeneration that REST framework provides, by implementing a get_schema_fields() method. This method should have the following signature: get_schema_fields(self, view) The method should return a list of coreapi.Field instances. Third party packages The following third party packages provide additional filter implementations. Django REST framework filters package The django-rest-framework-filters package works together with the DjangoFilterBackend class, and allows you to easily create filters across relationships, or create multiple filter lookup types for a given field. Django REST framework full word search filter The djangorestframework-word-filter developed as alternative to filters.SearchFilter which will search full word in text, or exact match. Django URL Filter django-url-filter provides a safe way to filter data via human-friendly URLs. It works very similar to DRF serializers and fields in a sense that they can be nested except they are called filtersets and filters. That provides easy way to filter related data. Also this library is generic-purpose so it can be used to filter other sources of data and not only Django QuerySets. drf-url-filters drf-url-filter is a simple Django app to apply filters on drf ModelViewSet's Queryset in a clean, simple and configurable way. It also supports validations on incoming query params and their values. A beautiful python package Voluptuous is being used for validations on the incoming query parameters. The best part about voluptuous is you can define your own validations as per your query params requirements. filters.py | |
doc_28295 | See Migration guide for more details. tf.compat.v1.image.per_image_standardization
tf.image.per_image_standardization(
image
)
For each 3-D image x in image, computes (x - mean) / adjusted_stddev, where
mean is the average of all values in x
adjusted_stddev = max(stddev, 1.0/sqrt(N)) is capped away from 0 to protect against division by 0 when handling uniform images
N is the number of elements in x
stddev is the standard deviation of all values in x
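For example (a sketch; the random input image is illustrative, and tf is assumed to be imported as tensorflow):
image = tf.random.uniform(shape=(8, 8, 3))
standardized = tf.image.per_image_standardization(image)
# The result has approximately zero mean and unit variance.
print(tf.reduce_mean(standardized))  # close to 0.0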
Args
image An n-D Tensor with at least 3 dimensions, the last 3 of which are the dimensions of each image.
Returns A Tensor with the same shape and dtype as image.
Raises
ValueError if the shape of 'image' is incompatible with this function. | |
doc_28296 |
Apply the non-affine part of this transform to Path path, returning a new Path. transform_path(path) is equivalent to transform_path_affine(transform_path_non_affine(path)). | |
doc_28297 | A list of the path’s file extensions: >>> PurePosixPath('my/library.tar.gar').suffixes
['.tar', '.gar']
>>> PurePosixPath('my/library.tar.gz').suffixes
['.tar', '.gz']
>>> PurePosixPath('my/library').suffixes
[] | |
doc_28298 |
Force rasterized (bitmap) drawing for vector graphics output. Rasterized drawing is not supported by all artists. If you try to enable this on an artist that does not support it, the command has no effect and a warning will be issued. This setting is ignored for pixel-based output. See also Rasterization for vector graphics. Parameters
rasterizedbool | |
doc_28299 | Cleans and returns a value for use in the widget template. value isn’t guaranteed to be valid input, therefore subclass implementations should program defensively. |