_id | text | title
|---|---|---|
doc_1800 | If set to true, sys.stdout and sys.stderr will be buffered in between startTest() and stopTest() being called. Collected output will only be echoed onto the real sys.stdout and sys.stderr if the test fails or errors. Any output is also attached to the failure / error message. New in version 3.2. | |
doc_1801 | socket.SOCK_DGRAM
socket.SOCK_RAW
socket.SOCK_RDM
socket.SOCK_SEQPACKET
These constants represent the socket types, used for the second argument to socket(). More constants may be available depending on the system. (Only SOCK_STREAM and SOCK_DGRAM appear to be generally useful.) | |
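For instance, the type constant is passed as the second argument to socket.socket():

```python
import socket

# SOCK_STREAM selects a TCP socket, SOCK_DGRAM a UDP one.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(tcp.type == socket.SOCK_STREAM)  # True
print(udp.type == socket.SOCK_DGRAM)   # True
tcp.close()
udp.close()
```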
doc_1802 |
Given a middleware class, returns a view decorator. This lets you use middleware functionality on a per-view basis. The middleware is created with no params passed. It assumes middleware that’s compatible with the old style of Django 1.9 and earlier (having methods like process_request(), process_exception(), and process_response()). | |
doc_1803 | A string representing the current encoding used to decode form submission data (or None, which means the DEFAULT_CHARSET setting is used). You can write to this attribute to change the encoding used when accessing the form data. Any subsequent attribute accesses (such as reading from GET or POST) will use the new encoding value. Useful if you know the form data is not in the DEFAULT_CHARSET encoding. | |
doc_1804 | Returns True if the joystick module is initialized. get_init() -> bool Test if the pygame.joystick.init() function has been called. | |
doc_1805 |
Get a colormap instance, defaulting to rc values if name is None. Colormaps added with register_cmap() take precedence over built-in colormaps. Parameters
namematplotlib.colors.Colormap or str or None, default: None
If a Colormap instance, it will be returned. Otherwise, the name of a colormap known to Matplotlib, which will be resampled by lut. The default, None, means rcParams["image.cmap"] (default: 'viridis').
lutint or None, default: None
If name is not already a Colormap instance and lut is not None, the colormap will be resampled to have lut entries in the lookup table. Notes Currently, this returns the global colormap object, which is deprecated. In Matplotlib 3.5, you will no longer be able to modify the global colormaps in-place. | |
doc_1806 |
pygame constants This module contains various constants used by pygame. Its contents are automatically placed in the pygame module namespace. However, an application can use pygame.locals to include only the pygame constants with a from
pygame.locals import *. Detailed descriptions of the various constants can be found throughout the pygame documentation. Here are the locations of some of them.
The pygame.display module contains flags like HWSURFACE used by pygame.display.set_mode(). The pygame.event module contains the various event types. The pygame.key module lists the keyboard constants and modifiers (K_* and MOD_*) relating to the key and mod attributes of the KEYDOWN and KEYUP events. The pygame.time module defines TIMER_RESOLUTION. | |
doc_1807 | Return True if the set has no elements in common with other. Sets are disjoint if and only if their intersection is the empty set. | |
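A quick illustration of the disjoint/intersection equivalence:

```python
a = {1, 2, 3}
b = {4, 5}
c = {3, 4}
print(a.isdisjoint(b))  # True: a & b == set()
print(a.isdisjoint(c))  # False: both contain 3
print(a & c)            # {3}
```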
doc_1808 | A buffered binary stream providing higher-level access to a writeable, non seekable RawIOBase raw binary stream. It inherits BufferedIOBase. When writing to this object, data is normally placed into an internal buffer. The buffer will be written out to the underlying RawIOBase object under various conditions, including: when the buffer gets too small for all pending data; when flush() is called; when a seek() is requested (for BufferedRandom objects); when the BufferedWriter object is closed or destroyed. The constructor creates a BufferedWriter for the given writeable raw stream. If the buffer_size is not given, it defaults to DEFAULT_BUFFER_SIZE. BufferedWriter provides or overrides these methods in addition to those from BufferedIOBase and IOBase:
flush()
Force bytes held in the buffer into the raw stream. A BlockingIOError should be raised if the raw stream blocks.
write(b)
Write the bytes-like object, b, and return the number of bytes written. When in non-blocking mode, a BlockingIOError is raised if the buffer needs to be written out but the raw stream blocks. | |
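A small sketch of the buffering behavior, using a throwaway temp file as the raw stream:

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")  # throwaway path
raw = io.FileIO(path, "w")                   # writeable raw binary stream
writer = io.BufferedWriter(raw, buffer_size=64)

n = writer.write(b"hello")   # lands in the internal buffer, returns 5
writer.flush()               # forces the buffered bytes into the raw stream
writer.close()               # closing would also have flushed

with open(path, "rb") as f:
    print(n, f.read())       # 5 b'hello'
```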
doc_1809 | See Migration guide for more details. tf.compat.v1.io.gfile.mkdir
tf.io.gfile.mkdir(
path
)
Args
path string, name of the directory to be created Notes: The parent directories need to exist. Use tf.io.gfile.makedirs instead if there is the possibility that the parent dirs don't exist.
Raises
errors.OpError If the operation fails. | |
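The mkdir/makedirs distinction mirrors the standard library's os.mkdir vs os.makedirs; a stdlib sketch of the same parent-directory rule (paths are throwaway temp paths, not TensorFlow code):

```python
import os
import tempfile

base = tempfile.mkdtemp()
nested = os.path.join(base, "a", "b")

try:
    os.mkdir(nested)         # fails: parent directory "a" does not exist yet
except FileNotFoundError:
    os.makedirs(nested)      # creates the missing parents as well

print(os.path.isdir(nested))  # True
```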
doc_1810 |
Bases: matplotlib.patches.Polygon Like Arrow, but lets you set head width and head height independently. Parameters
x, yfloat
The x and y coordinates of the arrow base.
dx, dyfloat
The length of the arrow along x and y direction.
widthfloat, default: 0.001
Width of full arrow tail.
length_includes_headbool, default: False
True if head is to be counted in calculating the length.
head_widthfloat or None, default: 3*width
Total width of the full arrow head.
head_lengthfloat or None, default: 1.5*head_width
Length of arrow head.
shape{'full', 'left', 'right'}, default: 'full'
Draw the left-half, right-half, or full arrow.
overhangfloat, default: 0
Fraction that the arrow is swept back (0 overhang means triangular shape). Can be negative or greater than one.
head_starts_at_zerobool, default: False
If True, the head starts being drawn at coordinate 0 instead of ending at coordinate 0.
**kwargs
Patch properties:
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha unknown
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, capstyle=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, closed=<UNSET>, color=<UNSET>, data=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, fill=<UNSET>, gid=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, xy=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
closed bool
color color
data unknown
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
xy (N, 2) array-like
zorder float
set_data(*, x=None, y=None, dx=None, dy=None, width=None, head_width=None, head_length=None)[source]
Set FancyArrow x, y, dx, dy, width, head_width, and head_length. Values left as None will not be updated. Parameters
x, yfloat or None, default: None
The x and y coordinates of the arrow base.
dx, dyfloat or None, default: None
The length of the arrow along x and y direction.
widthfloat or None, default: None
Width of full arrow tail.
head_widthfloat or None, default: None
Total width of the full arrow head.
head_lengthfloat or None, default: None
Length of arrow head.
Examples using matplotlib.patches.FancyArrow
Arrow guide | |
doc_1811 |
Alias for set_fontproperties. | |
doc_1812 |
Bases: matplotlib.scale.ScaleBase Logit scale for data between zero and one, both excluded. This scale is similar to a log scale close to zero and to one, and almost linear around 0.5. It maps the interval ]0, 1[ onto ]-infty, +infty[. Parameters
axismatplotlib.axis.Axis
Currently unused.
nonpositive{'mask', 'clip'}
Determines the behavior for values beyond the open interval ]0, 1[. They can either be masked as invalid, or clipped to a number very close to 0 or 1.
use_overlinebool, default: False
Indicate the usage of survival notation (\overline{x}) in place of standard notation (1-x) for probability close to one.
one_halfstr, default: r"\frac{1}{2}"
The string used for ticks formatter to represent 1/2.
get_transform()[source]
Return the LogitTransform associated with this scale.
limit_range_for_scale(vmin, vmax, minpos)[source]
Limit the domain to values between 0 and 1 (excluded).
name='logit'
set_default_locators_and_formatters(axis)[source]
Set the locators and formatters of axis to instances suitable for this scale. | |
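The underlying transform is the logit function; a pure-Python sketch (not Matplotlib's implementation) of how it maps ]0, 1[ onto the whole real line:

```python
import math

def logit(x):
    # Maps the open interval ]0, 1[ onto ]-inf, +inf[.
    return math.log(x / (1.0 - x))

print(logit(0.5))                # 0.0: nearly linear around one half
print(logit(0.01), logit(0.99))  # large, symmetric values near the ends
```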
doc_1813 | Return a bytes object containing the terminfo long name field describing the current terminal. The maximum length of a verbose description is 128 characters. It is defined only after the call to initscr(). | |
doc_1814 | Provide a per-write equivalent of the O_DSYNC open(2) flag. This flag effect applies only to the data range written by the system call. Availability: Linux 4.7 and newer. New in version 3.7. | |
doc_1815 | Return a proxy object that delegates method calls to a parent or sibling class of type. This is useful for accessing inherited methods that have been overridden in a class. The object-or-type determines the method resolution order to be searched. The search starts from the class right after the type. For example, if __mro__ of object-or-type is D -> B -> C -> A -> object and the value of type is B, then super() searches C -> A -> object. The __mro__ attribute of the object-or-type lists the method resolution search order used by both getattr() and super(). The attribute is dynamic and can change whenever the inheritance hierarchy is updated. If the second argument is omitted, the super object returned is unbound. If the second argument is an object, isinstance(obj, type) must be true. If the second argument is a type, issubclass(type2, type) must be true (this is useful for classmethods). There are two typical use cases for super. In a class hierarchy with single inheritance, super can be used to refer to parent classes without naming them explicitly, thus making the code more maintainable. This use closely parallels the use of super in other programming languages. The second use case is to support cooperative multiple inheritance in a dynamic execution environment. This use case is unique to Python and is not found in statically compiled languages or languages that only support single inheritance. This makes it possible to implement “diamond diagrams” where multiple base classes implement the same method. Good design dictates that such implementations have the same calling signature in every case (because the order of calls is determined at runtime, because that order adapts to changes in the class hierarchy, and because that order can include sibling classes that are unknown prior to runtime). For both use cases, a typical superclass call looks like this: class C(B):
def method(self, arg):
super().method(arg) # This does the same thing as:
# super(C, self).method(arg)
In addition to method lookups, super() also works for attribute lookups. One possible use case for this is calling descriptors in a parent or sibling class. Note that super() is implemented as part of the binding process for explicit dotted attribute lookups such as super().__getitem__(name). It does so by implementing its own __getattribute__() method for searching classes in a predictable order that supports cooperative multiple inheritance. Accordingly, super() is undefined for implicit lookups using statements or operators such as super()[name]. Also note that, aside from the zero argument form, super() is not limited to use inside methods. The two argument form specifies the arguments exactly and makes the appropriate references. The zero argument form only works inside a class definition, as the compiler fills in the necessary details to correctly retrieve the class being defined, as well as accessing the current instance for ordinary methods. For practical suggestions on how to design cooperative classes using super(), see guide to using super(). | |
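A small cooperative multiple-inheritance ("diamond") sketch with the same D -> B -> C -> A -> object MRO as the example above (class names are illustrative):

```python
class A:
    def ping(self):
        return ["A"]

class B(A):
    def ping(self):
        # Zero-argument form; equivalent to super(B, self).ping().
        # Which class runs next depends on the instance's MRO, not on B's
        # static base class: for a D instance, B's super() is C, not A.
        return ["B"] + super().ping()

class C(A):
    def ping(self):
        return ["C"] + super().ping()

class D(B, C):
    def ping(self):
        return ["D"] + super().ping()

print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print(D().ping())                           # ['D', 'B', 'C', 'A']
```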
doc_1816 | Returns the same Decimal object x. | |
doc_1817 | See Migration guide for more details. tf.compat.v1.keras.activations.selu
tf.keras.activations.selu(
x
)
The Scaled Exponential Linear Unit (SELU) activation function is defined as: if x > 0: return scale * x if x < 0: return scale * alpha * (exp(x) - 1) where alpha and scale are pre-defined constants (alpha=1.67326324 and scale=1.05070098). Basically, the SELU activation function multiplies scale (> 1) with the output of the tf.keras.activations.elu function to ensure a slope larger than one for positive inputs. The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see tf.keras.initializers.LecunNormal initializer) and the number of input units is "large enough" (see reference paper for more information). Example Usage:
num_classes = 10 # 10-class problem
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',
activation='selu'))
model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',
activation='selu'))
model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal',
activation='selu'))
model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
Arguments
x A tensor or variable to compute the activation function for.
Returns The scaled exponential unit activation: scale * elu(x, alpha).
Notes: To be used together with the tf.keras.initializers.LecunNormal initializer. To be used together with the dropout variant tf.keras.layers.AlphaDropout (not regular dropout). References: Klambauer et al., 2017 | |
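The piecewise formula above can be sketched in plain Python (this only illustrates the math; it is not the TensorFlow implementation):

```python
import math

ALPHA = 1.67326324   # pre-defined constants from the docstring
SCALE = 1.05070098

def selu(x):
    # scale * x                      for x > 0
    # scale * alpha * (exp(x) - 1)   otherwise
    if x > 0:
        return SCALE * x
    return SCALE * ALPHA * (math.exp(x) - 1.0)

print(selu(1.0))    # 1.05070098: slope greater than one for positive inputs
print(selu(0.0))    # 0.0
print(selu(-10.0))  # close to -scale * alpha, the saturation value
```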
doc_1818 | Open an encoded file using the given mode and return an instance of StreamReaderWriter, providing transparent encoding/decoding. The default file mode is 'r', meaning to open the file in read mode. Note Underlying encoded files are always opened in binary mode. No automatic conversion of '\n' is done on reading and writing. The mode argument may be any binary mode acceptable to the built-in open() function; the 'b' is automatically added. encoding specifies the encoding which is to be used for the file. Any encoding that encodes to and decodes from bytes is allowed, and the data types supported by the file methods depend on the codec used. errors may be given to define the error handling. It defaults to 'strict' which causes a ValueError to be raised in case an encoding error occurs. buffering has the same meaning as for the built-in open() function. It defaults to -1 which means that the default buffer size will be used. | |
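A round-trip sketch using a throwaway temp file:

```python
import codecs
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")  # throwaway path

# 'b' is added automatically: the underlying file is opened in binary mode.
with codecs.open(path, "w", encoding="utf-16") as f:
    f.write("héllo")

with codecs.open(path, encoding="utf-16") as f:  # mode defaults to 'r'
    print(f.read())  # héllo
```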
doc_1819 | Whether the OpenSSL library has built-in support for the TLS 1.1 protocol. New in version 3.7. | |
doc_1820 | tf.compat.v1.to_bfloat16(
x, name='ToBFloat16'
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.cast instead.
Args
x A Tensor or SparseTensor or IndexedSlices.
name A name for the operation (optional).
Returns A Tensor or SparseTensor or IndexedSlices with same shape as x with type bfloat16.
Raises
TypeError If x cannot be cast to the bfloat16. | |
doc_1821 | tf.compat.v1.squeeze(
input, axis=None, name=None, squeeze_dims=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (squeeze_dims). They will be removed in a future version. Instructions for updating: Use the axis argument instead. Given a tensor input, this operation returns a tensor of the same type with all dimensions of size 1 removed. If you don't want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifying axis. For example:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
t = tf.ones([1, 2, 1, 3, 1, 1])
print(tf.shape(tf.squeeze(t)).numpy())
[2 3]
Or, to remove specific size 1 dimensions:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
t = tf.ones([1, 2, 1, 3, 1, 1])
print(tf.shape(tf.squeeze(t, [2, 4])).numpy())
[1 2 3 1]
Note: if input is a tf.RaggedTensor, then this operation takes O(N) time, where N is the number of elements in the squeezed dimensions.
Args
input A Tensor. The input to squeeze.
axis An optional list of ints. Defaults to []. If specified, only squeezes the dimensions listed. The dimension index starts at 0. It is an error to squeeze a dimension that is not 1. Must be in the range [-rank(input), rank(input)). Must be specified if input is a RaggedTensor.
name A name for the operation (optional).
squeeze_dims Deprecated keyword argument that is now axis.
Returns A Tensor. Has the same type as input. Contains the same data as input, but has one or more dimensions of size 1 removed.
Raises
ValueError When both squeeze_dims and axis are specified. | |
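The shape rule can be sketched in plain Python (an illustration of the semantics, not TensorFlow code):

```python
def squeeze_shape(shape, axis=None):
    # Drop all size-1 dimensions, or only those listed in axis.
    if axis is None:
        return [d for d in shape if d != 1]
    axis = {a % len(shape) for a in axis}   # allow negative indices
    for a in axis:
        if shape[a] != 1:
            raise ValueError(f"cannot squeeze dimension {a} of size {shape[a]}")
    return [d for i, d in enumerate(shape) if i not in axis]

print(squeeze_shape([1, 2, 1, 3, 1, 1]))          # [2, 3]
print(squeeze_shape([1, 2, 1, 3, 1, 1], [2, 4]))  # [1, 2, 3, 1]
```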
doc_1822 | A Popen creationflags parameter to specify that a new process does not inherit the error mode of the calling process. Instead, the new process gets the default error mode. This feature is particularly useful for multithreaded shell applications that run with hard errors disabled. New in version 3.7. | |
doc_1823 | tf.compat.v1.distribute.experimental.TPUStrategy(
tpu_cluster_resolver=None, steps_per_run=None, device_assignment=None
)
Args
tpu_cluster_resolver A tf.distribute.cluster_resolver.TPUClusterResolver, which provides information about the TPU cluster.
steps_per_run Number of steps to run on device before returning to the host. Note that this can have side-effects on performance, hooks, metrics, summaries etc. This parameter is only used when Distribution Strategy is used with estimator or keras.
device_assignment Optional tf.tpu.experimental.DeviceAssignment to specify the placement of replicas on the TPU cluster. Currently only supports the usecase of using a single core within a TPU cluster.
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated.
steps_per_run DEPRECATED: use .extended.steps_per_run instead. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take an tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. 
This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation which is buffer_size is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example:
numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator which provides more control to the user, and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker etc.
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. The user should call initialize on the returned iterator.
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed: def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run(), or you can call iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see which devices the per-replica and reduced results are placed on, consider the same example with MirroredStrategy and 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context.
What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).
strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7.
strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
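As a small sketch of the axis parameter, using the default strategy (where the cross-replica step is a no-op, so only the axis reduction is visible; the numbers here are illustrative):

```python
import tensorflow as tf

# Hedged sketch: with the default (single-replica) strategy, a regular
# tensor is accepted, and axis=0 sums over the batch dimension as
# described above.
strategy = tf.distribute.get_strategy()
batch = tf.constant([0.0, 1.0, 2.0, 3.0])
total = strategy.reduce("SUM", batch, axis=0)  # sums the batch
```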
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Run fn on each replica, with the given arguments. Executes ops specified by fn on each replica. If args or kwargs have "per-replica" values, such as those produced by a "distributed Dataset", when fn is executed on a particular replica, it will be executed with the component of those "per-replica" values that correspond to that replica. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. All arguments in args or kwargs should either be nest of tensors or per-replica objects containing tensors or composite tensors. Users can pass strategy specific options to options argument. An example to enable bucketizing dynamic shapes in TPUStrategy.run is:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
options = tf.distribute.RunOptions(
experimental_bucketizing_dynamic_shape=True)
dataset = tf.data.Dataset.range(
strategy.num_replicas_in_sync, output_type=dtypes.float32).batch(
strategy.num_replicas_in_sync, drop_remainder=True)
input_iterator = iter(strategy.experimental_distribute_dataset(dataset))
@tf.function()
def step_fn(inputs):
output = tf.reduce_sum(inputs)
return output
strategy.run(step_fn, args=(next(input_iterator),), options=options)
Args
fn The function to run. The output must be a tf.nest of Tensors.
args (Optional) Positional arguments to fn.
kwargs (Optional) Keyword arguments to fn.
options (Optional) An instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be "per-replica" Tensor objects or Tensors (for example, if running on a single replica).
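Beyond the TPU example above, run also works with the default strategy, where fn is simply invoked once on the single replica; a minimal sketch:

```python
import tensorflow as tf

# Minimal sketch: with the default strategy there is a single replica,
# so run(fn) just calls fn once with the given arguments and returns
# its result directly.
strategy = tf.distribute.get_strategy()

def step_fn(x):
    return x * 2.0

result = strategy.run(step_fn, args=(tf.constant(3.0),))
```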
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts.
Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope.
In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside?
There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).
Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily.
Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope.
Some strategy APIs (such as strategy.run and strategy.reduce) that require being in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.
When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training framework methods such as model.compile, model.fit etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See the detailed example in the distributed Keras tutorial. Note that simply calling model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
The following can be either inside or outside the scope:
Creating the input datasets.
Defining tf.functions that represent your training step.
Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way.
Checkpoint saving. As mentioned above, checkpoint.restore may sometimes need to be inside the scope if it creates variables.
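The "current strategy" behavior of scope can be sketched with OneDeviceStrategy, chosen here only because it runs on a plain CPU:

```python
import tensorflow as tf

# Inside scope(), get_strategy() returns the installed strategy;
# outside, it returns the default no-op strategy again.
strategy = tf.distribute.OneDeviceStrategy("/CPU:0")
with strategy.scope():
    inside = tf.distribute.get_strategy() is strategy
    v = tf.Variable(1.0)  # variable creation is intercepted by the strategy
outside = tf.distribute.get_strategy() is strategy
```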
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config has something needed to run a strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | |
doc_1824 |
DateOffset increments between calendar year ends. Attributes
base Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
kwds
month
n
name
nanos
normalize
rule_code Methods
__call__(*args, **kwargs) Call self as a function.
rollback Roll provided date backward to next offset only if not on offset.
rollforward Roll provided date forward to next offset only if not on offset.
apply
apply_index
copy
isAnchored
is_anchored
is_month_end
is_month_start
is_on_offset
is_quarter_end
is_quarter_start
is_year_end
is_year_start
onOffset | |
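Assuming this describes pandas' YearEnd offset, a short usage sketch:

```python
import pandas as pd
from pandas.tseries.offsets import YearEnd

# Adding YearEnd rolls a timestamp forward to the next calendar year end.
ts = pd.Timestamp(2020, 5, 1)
rolled = ts + YearEnd()
print(rolled)  # 2020-12-31 00:00:00
```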
doc_1825 | A Modular Crypt Format method with 16 character salt and 86 character hash based on the SHA-512 hash function. This is the strongest method. | |
doc_1826 |
Return an antiderivative (indefinite integral) of this polynomial. Refer to polyint for full documentation. See also polyint
equivalent function | |
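Assuming this refers to numpy's poly1d.integ, a short sketch:

```python
import numpy as np

# Sketch: integrating 3x^2 + 2x + 1 gives x^3 + x^2 + x
# (integration constant 0 by default).
p = np.poly1d([3, 2, 1])
P = p.integ()
print(P.coefficients)  # [1. 1. 1. 0.]
```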
doc_1827 | from myapp.serializers import UserSerializer
from rest_framework import generics
from rest_framework.permissions import IsAdminUser
class UserList(generics.ListCreateAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
permission_classes = [IsAdminUser]
For more complex cases you might also want to override various methods on the view class. For example: class UserList(generics.ListCreateAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
permission_classes = [IsAdminUser]
def list(self, request):
# Note the use of `get_queryset()` instead of `self.queryset`
queryset = self.get_queryset()
serializer = UserSerializer(queryset, many=True)
return Response(serializer.data)
For very simple cases you might want to pass through any class attributes using the .as_view() method. For example, your URLconf might include something like the following entry: path('users/', ListCreateAPIView.as_view(queryset=User.objects.all(), serializer_class=UserSerializer), name='user-list')
API Reference
GenericAPIView
This class extends REST framework's APIView class, adding commonly required behavior for standard list and detail views. Each of the concrete generic views provided is built by combining GenericAPIView with one or more mixin classes.
Attributes
Basic settings: The following attributes control the basic view behavior.
queryset - The queryset that should be used for returning objects from this view. Typically, you must either set this attribute, or override the get_queryset() method. If you are overriding a view method, it is important that you call get_queryset() instead of accessing this property directly, as queryset will get evaluated once, and those results will be cached for all subsequent requests.
serializer_class - The serializer class that should be used for validating and deserializing input, and for serializing output. Typically, you must either set this attribute, or override the get_serializer_class() method.
lookup_field - The model field that should be used to for performing object lookup of individual model instances. Defaults to 'pk'. Note that when using hyperlinked APIs you'll need to ensure that both the API views and the serializer classes set the lookup fields if you need to use a custom value.
lookup_url_kwarg - The URL keyword argument that should be used for object lookup. The URL conf should include a keyword argument corresponding to this value. If unset this defaults to using the same value as lookup_field. Pagination: The following attributes are used to control pagination when used with list views.
pagination_class - The pagination class that should be used when paginating list results. Defaults to the same value as the DEFAULT_PAGINATION_CLASS setting, which is 'rest_framework.pagination.PageNumberPagination'. Setting pagination_class=None will disable pagination on this view. Filtering:
filter_backends - A list of filter backend classes that should be used for filtering the queryset. Defaults to the same value as the DEFAULT_FILTER_BACKENDS setting.
Methods
Base methods:
get_queryset(self) Returns the queryset that should be used for list views, and that should be used as the base for lookups in detail views. Defaults to returning the queryset specified by the queryset attribute. This method should always be used rather than accessing self.queryset directly, as self.queryset gets evaluated only once, and those results are cached for all subsequent requests. May be overridden to provide dynamic behavior, such as returning a queryset that is specific to the user making the request. For example: def get_queryset(self):
user = self.request.user
return user.accounts.all()
get_object(self) Returns an object instance that should be used for detail views. Defaults to using the lookup_field parameter to filter the base queryset. May be overridden to provide more complex behavior, such as object lookups based on more than one URL kwarg. For example: def get_object(self):
queryset = self.get_queryset()
filter = {}
for field in self.multiple_lookup_fields:
filter[field] = self.kwargs[field]
obj = get_object_or_404(queryset, **filter)
self.check_object_permissions(self.request, obj)
return obj
Note that if your API doesn't include any object level permissions, you may optionally exclude the self.check_object_permissions, and simply return the object from the get_object_or_404 lookup. filter_queryset(self, queryset) Given a queryset, filter it with whichever filter backends are in use, returning a new queryset. For example: def filter_queryset(self, queryset):
filter_backends = [CategoryFilter]
if 'geo_route' in self.request.query_params:
filter_backends = [GeoRouteFilter, CategoryFilter]
elif 'geo_point' in self.request.query_params:
filter_backends = [GeoPointFilter, CategoryFilter]
for backend in list(filter_backends):
queryset = backend().filter_queryset(self.request, queryset, view=self)
return queryset
get_serializer_class(self) Returns the class that should be used for the serializer. Defaults to returning the serializer_class attribute. May be overridden to provide dynamic behavior, such as using different serializers for read and write operations, or providing different serializers to different types of users. For example: def get_serializer_class(self):
if self.request.user.is_staff:
return FullAccountSerializer
return BasicAccountSerializer
Save and deletion hooks: The following methods are provided by the mixin classes, and provide easy overriding of the object save or deletion behavior.
perform_create(self, serializer) - Called by CreateModelMixin when saving a new object instance.
perform_update(self, serializer) - Called by UpdateModelMixin when saving an existing object instance.
perform_destroy(self, instance) - Called by DestroyModelMixin when deleting an object instance. These hooks are particularly useful for setting attributes that are implicit in the request, but are not part of the request data. For instance, you might set an attribute on the object based on the request user, or based on a URL keyword argument. def perform_create(self, serializer):
serializer.save(user=self.request.user)
These override points are also particularly useful for adding behavior that occurs before or after saving an object, such as emailing a confirmation, or logging the update. def perform_update(self, serializer):
instance = serializer.save()
send_email_confirmation(user=self.request.user, modified=instance)
You can also use these hooks to provide additional validation, by raising a ValidationError(). This can be useful if you need some validation logic to apply at the point of database save. For example: def perform_create(self, serializer):
queryset = SignupRequest.objects.filter(user=self.request.user)
if queryset.exists():
raise ValidationError('You have already signed up')
serializer.save(user=self.request.user)
Other methods: You won't typically need to override the following methods, although you might need to call into them if you're writing custom views using GenericAPIView.
get_serializer_context(self) - Returns a dictionary containing any extra context that should be supplied to the serializer. Defaults to including 'request', 'view' and 'format' keys.
get_serializer(self, instance=None, data=None, many=False, partial=False) - Returns a serializer instance.
get_paginated_response(self, data) - Returns a paginated style Response object.
paginate_queryset(self, queryset) - Paginate a queryset if required, either returning a page object, or None if pagination is not configured for this view.
filter_queryset(self, queryset) - Given a queryset, filter it with whichever filter backends are in use, returning a new queryset.
Mixins
The mixin classes provide the actions that are used to provide the basic view behavior. Note that the mixin classes provide action methods rather than defining the handler methods, such as .get() and .post(), directly. This allows for more flexible composition of behavior. The mixin classes can be imported from rest_framework.mixins.
ListModelMixin
Provides a .list(request, *args, **kwargs) method, that implements listing a queryset. If the queryset is populated, this returns a 200 OK response, with a serialized representation of the queryset as the body of the response. The response data may optionally be paginated.
CreateModelMixin
Provides a .create(request, *args, **kwargs) method, that implements creating and saving a new model instance. If an object is created this returns a 201 Created response, with a serialized representation of the object as the body of the response. If the representation contains a key named url, then the Location header of the response will be populated with that value. If the request data provided for creating the object was invalid, a 400 Bad Request response will be returned, with the error details as the body of the response.
RetrieveModelMixin
Provides a .retrieve(request, *args, **kwargs) method, that implements returning an existing model instance in a response. If an object can be retrieved this returns a 200 OK response, with a serialized representation of the object as the body of the response. Otherwise it will return a 404 Not Found.
UpdateModelMixin
Provides a .update(request, *args, **kwargs) method, that implements updating and saving an existing model instance. Also provides a .partial_update(request, *args, **kwargs) method, which is similar to the update method, except that all fields for the update will be optional. This allows support for HTTP PATCH requests.
If an object is updated this returns a 200 OK response, with a serialized representation of the object as the body of the response. If the request data provided for updating the object was invalid, a 400 Bad Request response will be returned, with the error details as the body of the response.
DestroyModelMixin
Provides a .destroy(request, *args, **kwargs) method, that implements deletion of an existing model instance. If an object is deleted this returns a 204 No Content response, otherwise it will return a 404 Not Found.
Concrete View Classes
The following classes are the concrete generic views. If you're using generic views this is normally the level you'll be working at unless you need heavily customized behavior. The view classes can be imported from rest_framework.generics.
CreateAPIView
Used for create-only endpoints. Provides a post method handler. Extends: GenericAPIView, CreateModelMixin
ListAPIView
Used for read-only endpoints to represent a collection of model instances. Provides a get method handler. Extends: GenericAPIView, ListModelMixin
RetrieveAPIView
Used for read-only endpoints to represent a single model instance. Provides a get method handler. Extends: GenericAPIView, RetrieveModelMixin
DestroyAPIView
Used for delete-only endpoints for a single model instance. Provides a delete method handler. Extends: GenericAPIView, DestroyModelMixin
UpdateAPIView
Used for update-only endpoints for a single model instance. Provides put and patch method handlers. Extends: GenericAPIView, UpdateModelMixin
ListCreateAPIView
Used for read-write endpoints to represent a collection of model instances. Provides get and post method handlers. Extends: GenericAPIView, ListModelMixin, CreateModelMixin
RetrieveUpdateAPIView
Used for read or update endpoints to represent a single model instance. Provides get, put and patch method handlers.
Extends: GenericAPIView, RetrieveModelMixin, UpdateModelMixin
RetrieveDestroyAPIView
Used for read or delete endpoints to represent a single model instance. Provides get and delete method handlers. Extends: GenericAPIView, RetrieveModelMixin, DestroyModelMixin
RetrieveUpdateDestroyAPIView
Used for read-write-delete endpoints to represent a single model instance. Provides get, put, patch and delete method handlers. Extends: GenericAPIView, RetrieveModelMixin, UpdateModelMixin, DestroyModelMixin
Customizing the generic views
Often you'll want to use the existing generic views, but use some slightly customized behavior. If you find yourself reusing some bit of customized behavior in multiple places, you might want to refactor the behavior into a common class that you can then just apply to any view or viewset as needed.
Creating custom mixins
For example, if you need to lookup objects based on multiple fields in the URL conf, you could create a mixin class like the following: class MultipleFieldLookupMixin:
"""
Apply this mixin to any view or viewset to get multiple field filtering
based on a `lookup_fields` attribute, instead of the default single field filtering.
"""
def get_object(self):
queryset = self.get_queryset() # Get the base queryset
queryset = self.filter_queryset(queryset) # Apply any filter backends
filter = {}
for field in self.lookup_fields:
if self.kwargs[field]: # Ignore empty fields.
filter[field] = self.kwargs[field]
obj = get_object_or_404(queryset, **filter) # Lookup the object
self.check_object_permissions(self.request, obj)
return obj
You can then simply apply this mixin to a view or viewset anytime you need to apply the custom behavior. class RetrieveUserView(MultipleFieldLookupMixin, generics.RetrieveAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
lookup_fields = ['account', 'username']
Using custom mixins is a good option if you have custom behavior that needs to be used. Creating custom base classes If you are using a mixin across multiple views, you can take this a step further and create your own set of base views that can then be used throughout your project. For example: class BaseRetrieveView(MultipleFieldLookupMixin,
generics.RetrieveAPIView):
pass
class BaseRetrieveUpdateDestroyView(MultipleFieldLookupMixin,
generics.RetrieveUpdateDestroyAPIView):
pass
Using custom base classes is a good option if you have custom behavior that consistently needs to be repeated across a large number of views throughout your project.
PUT as create
Prior to version 3.0 the REST framework mixins treated PUT as either an update or a create operation, depending on if the object already existed or not. Allowing PUT as create operations is problematic, as it necessarily exposes information about the existence or non-existence of objects. It's also not obvious that transparently allowing re-creating of previously deleted instances is necessarily a better default behavior than simply returning 404 responses. Both styles "PUT as 404" and "PUT as create" can be valid in different circumstances, but from version 3.0 onwards we now use 404 behavior as the default, due to it being simpler and more obvious. If you need generic PUT-as-create behavior you may want to include something like this AllowPUTAsCreateMixin class as a mixin to your views.
Third party packages
The following third party packages provide additional generic view implementations.
Django Rest Multiple Models
Django Rest Multiple Models provides a generic view (and mixin) for sending multiple serialized models and/or querysets via a single API request. | |
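The mixin composition pattern described above (mixins supplying action methods, concrete views wiring HTTP handlers onto them) can be sketched framework-free; the class and method names below are illustrative, not REST framework's actual implementation:

```python
# Framework-free sketch of mixin composition: "mixins" provide action
# methods, and a concrete view class maps handler methods onto them.
class ListMixinSketch:
    def list(self):
        return list(self.get_queryset())

class CreateMixinSketch:
    def create(self, item):
        self.get_queryset().append(item)
        return item

class ListCreateViewSketch(ListMixinSketch, CreateMixinSketch):
    def __init__(self, queryset):
        self._queryset = queryset

    def get_queryset(self):
        return self._queryset

    # Handlers delegate to the mixin-provided actions.
    def get(self):
        return self.list()

    def post(self, item):
        return self.create(item)

view = ListCreateViewSketch(["alice"])
view.post("bob")
print(view.get())  # ['alice', 'bob']
```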
doc_1828 | Copy a network object denoted by a URL to a local file. If the URL points to a local file, the object will not be copied unless filename is supplied. Return a tuple (filename, headers) where filename is the local file name under which the object can be found, and headers is whatever the info() method of the object returned by urlopen() returned (for a remote object). Exceptions are the same as for urlopen(). The second argument, if present, specifies the file location to copy to (if absent, the location will be a tempfile with a generated name). The third argument, if present, is a callable that will be called once on establishment of the network connection and once after each block read thereafter. The callable will be passed three arguments; a count of blocks transferred so far, a block size in bytes, and the total size of the file. The third argument may be -1 on older FTP servers which do not return a file size in response to a retrieval request. The following example illustrates the most common usage scenario: >>> import urllib.request
>>> local_filename, headers = urllib.request.urlretrieve('http://python.org/')
>>> html = open(local_filename)
>>> html.close()
If the url uses the http: scheme identifier, the optional data argument may be given to specify a POST request (normally the request type is GET). The data argument must be a bytes object in standard application/x-www-form-urlencoded format; see the urllib.parse.urlencode() function. urlretrieve() will raise ContentTooShortError when it detects that the amount of data available was less than the expected amount (which is the size reported by a Content-Length header). This can occur, for example, when the download is interrupted. The Content-Length is treated as a lower bound: if there's more data to read, urlretrieve reads more data, but if less data is available, it raises the exception. You can still retrieve the downloaded data in this case; it is stored in the content attribute of the exception instance. If no Content-Length header was supplied, urlretrieve cannot check the size of the data it has downloaded, and just returns it. In this case you just have to assume that the download was successful.
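The reporthook signature described above (block count, block size, total size) can be sketched as follows; the hook is invoked directly here rather than through an actual download:

```python
# Illustrative progress hook matching the reporthook signature described
# above. total_size may be -1 when the server does not report a size.
def report_progress(block_num, block_size, total_size):
    downloaded = block_num * block_size
    if total_size > 0:
        pct = min(100, downloaded * 100 // total_size)
        return f"{pct}%"
    return f"{downloaded} bytes (total size unknown)"

# Typical use (not executed here):
# urllib.request.urlretrieve(url, filename, report_progress)
print(report_progress(2, 8192, 32768))  # 50%
```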
doc_1829 |
Alias for set_facecolor. | |
doc_1830 | An abstract method to return the source of a module. It is returned as a text string using universal newlines, translating all recognized line separators into '\n' characters. Returns None if no source is available (e.g. a built-in module). Raises ImportError if the loader cannot find the module specified. Changed in version 3.4: Raises ImportError instead of NotImplementedError. | |
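For a concrete (non-abstract) loader, get_source can be exercised through importlib; the json module is used here only as a convenient pure-Python stdlib example.

```python
import importlib.util

# json is a pure-Python stdlib module, so its loader implements get_source.
spec = importlib.util.find_spec("json")
source = spec.loader.get_source("json")
print(type(source).__name__)  # prints "str"; None would mean no source available
```

A built-in module such as sys would instead yield None (or raise ImportError for an unknown module name).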
doc_1831 | Find the minimum value of an array over its positive values. Returns a huge value if none of the values are positive. | |
doc_1832 | Returns the largest representable number smaller than x. | |
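Whichever library this entry belongs to, the same quantity is available in the standard library via math.nextafter (Python 3.9+), stepping from x toward negative infinity; the helper name below is illustrative.

```python
import math

# Largest representable float strictly smaller than x.
def float_before(x):
    return math.nextafter(x, -math.inf)

y = float_before(1.0)
```

Stepping back toward positive infinity recovers x exactly, since the result is the adjacent representable value.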
doc_1833 |
Return the artist that this HandlerBase generates for the given original artist/handle. Parameters
legend : Legend
The legend for which these legend artists are being created.
orig_handle : matplotlib.artist.Artist or similar
The object for which these legend artists are being created.
fontsize : int
The fontsize in pixels. The artists being created should be scaled according to the given fontsize.
handlebox : matplotlib.offsetbox.OffsetBox
The box which has been created to hold this legend entry's artists. Artists created in the legend_artist method must be added to this handlebox inside this method. | |
doc_1834 | Remove an attribute by name. Note that it uses a localName, not a qname. No exception is raised if there is no matching attribute. | |
doc_1835 | Convert escaped markup back into a text string. This replaces HTML entities with the characters they represent. >>> Markup("Main &raquo; <em>About</em>").unescape()
'Main » <em>About</em>'
Return type: str | |
doc_1836 | Attach the watcher to an event loop. If the watcher was previously attached to an event loop, then it is first detached before attaching to the new loop. Note: loop may be None. | |
doc_1837 | See Migration guide for more details. tf.compat.v1.raw_ops.ScalarSummary
tf.raw_ops.ScalarSummary(
tags, values, name=None
)
The input tags and values must have the same shape. The generated summary has a summary value for each tag-value pair in tags and values.
Args
tags A Tensor of type string. Tags for the summary.
values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. Same shape as tags. Values for the summary.
name A name for the operation (optional).
Returns A Tensor of type string. | |
doc_1838 | The one and only root element of the document. | |
doc_1839 |
Test element-wise for finiteness (not infinity and not Not a Number). The result is returned as a boolean array. Parameters
x : array_like
Input values.
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
where : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
y : ndarray, bool
True where x is not positive infinity, negative infinity, or NaN; false otherwise. This is a scalar if x is a scalar. See also
isinf, isneginf, isposinf, isnan
Notes Not a Number, positive infinity and negative infinity are considered to be non-finite. NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Also that positive infinity is not equivalent to negative infinity. But infinity is equivalent to positive infinity. Errors result if the second argument is also supplied when x is a scalar input, or if first and second arguments have different shapes. Examples >>> np.isfinite(1)
True
>>> np.isfinite(0)
True
>>> np.isfinite(np.nan)
False
>>> np.isfinite(np.inf)
False
>>> np.isfinite(np.NINF)
False
>>> np.isfinite([np.log(-1.),1.,np.log(0)])
array([False, True, False])
>>> x = np.array([-np.inf, 0., np.inf])
>>> y = np.array([2, 2, 2])
>>> np.isfinite(x, y)
array([0, 1, 0])
>>> y
array([0, 1, 0]) | |
doc_1840 | See Migration guide for more details. tf.compat.v1.raw_ops.QuantizeV2
tf.raw_ops.QuantizeV2(
input, min_range, max_range, T, mode='MIN_COMBINED',
round_mode='HALF_AWAY_FROM_ZERO', narrow_range=False, axis=-1,
ensure_minimum_range=0.01, name=None
)
[min_range, max_range] are scalar floats that specify the range for the 'input' data. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents. The 'round_mode' attribute controls which rounding tie-breaking algorithm is used when rounding float values to their quantized equivalents. In 'MIN_COMBINED' mode, each value of the tensor will undergo the following: out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8: out[i] -= (range(T) + 1) / 2.0
here range(T) = numeric_limits<T>::max() - numeric_limits<T>::min() MIN_COMBINED Mode Example Assume the input is type float and has a possible range of [0.0, 6.0] and the output type is quint8 ([0, 255]). The min_range and max_range values should be specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each value of the input by 255/6 and cast to quint8. If the output type was qint8 ([-128, 127]), the operation will additionally subtract each value by 128 prior to casting, so that the range of values aligns with the range of qint8. If the mode is 'MIN_FIRST', then this approach is used: num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = num_discrete_values / range
quantized = round(input * range_scale) - round(range_min * range_scale) +
numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())
The biggest difference between this and MIN_COMBINED is that the minimum range is rounded first, before it's subtracted from the rounded value. With MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing and dequantizing will introduce a larger and larger error. SCALED mode Example SCALED mode matches the quantization approach used in QuantizeAndDequantize{V2|V3}. If the mode is SCALED, the quantization is performed by multiplying each input value by a scaling_factor. The scaling_factor is determined from min_range and max_range to be as large as possible such that the range from min_range to max_range is representable within values of type T.
const int min_T = std::numeric_limits<T>::min();
const int max_T = std::numeric_limits<T>::max();
const float max_float = std::numeric_limits<float>::max();
const float scale_factor_from_min_side =
(min_T * min_range > 0) ? min_T / min_range : max_float;
const float scale_factor_from_max_side =
(max_T * max_range > 0) ? max_T / max_range : max_float;
const float scale_factor = std::min(scale_factor_from_min_side,
scale_factor_from_max_side);
We next use the scale_factor to adjust min_range and max_range as follows: min_range = min_T / scale_factor;
max_range = max_T / scale_factor;
e.g. if T = qint8, and initially min_range = -10, and max_range = 9, we would compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling_factor = 12.8 In this case, min_range would remain -10, but max_range would be adjusted to 127 / 12.8 = 9.921875 So we will quantize input values in the range (-10, 9.921875) to (-128, 127). The input tensor can now be quantized by clipping values to the range min_range to max_range, then multiplying by scale_factor as follows: result = round(min(max_range, max(min_range, input)) * scale_factor)
The adjusted min_range and max_range are returned as outputs 2 and 3 of this operation. These outputs should be used as the range for any further calculations. narrow_range (bool) attribute If true, we do not use the minimum quantized value. i.e. for int8 output, the quantized values would be restricted to the range -127..127 instead of the full -128..127 range. This is provided for compatibility with certain inference backends. (Only applies to SCALED mode) axis (int) attribute An optional axis attribute can specify a dimension index of the input tensor, such that quantization ranges will be calculated and applied separately for each slice of the tensor along that dimension. This is useful for per-channel quantization. If axis is specified, min_range and max_range must be 1-D tensors whose size matches the axis dimension; if axis=None, per-tensor quantization is performed as normal. ensure_minimum_range (float) attribute Ensures the minimum quantization range is at least this value. The legacy default value for this is 0.01, but it is strongly suggested to set it to 0 for new uses.
Args
input A Tensor of type float32.
min_range A Tensor of type float32. The minimum value of the quantization range. This value may be adjusted by the op depending on other parameters. The adjusted value is written to output_min. If the axis attribute is specified, this must be a 1-D tensor whose size matches the axis dimension of the input and output tensors.
max_range A Tensor of type float32. The maximum value of the quantization range. This value may be adjusted by the op depending on other parameters. The adjusted value is written to output_max. If the axis attribute is specified, this must be a 1-D tensor whose size matches the axis dimension of the input and output tensors.
T A tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16.
mode An optional string from: "MIN_COMBINED", "MIN_FIRST", "SCALED". Defaults to "MIN_COMBINED".
round_mode An optional string from: "HALF_AWAY_FROM_ZERO", "HALF_TO_EVEN". Defaults to "HALF_AWAY_FROM_ZERO".
narrow_range An optional bool. Defaults to False.
axis An optional int. Defaults to -1.
ensure_minimum_range An optional float. Defaults to 0.01.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, output_min, output_max). output A Tensor of type T.
output_min A Tensor of type float32.
output_max A Tensor of type float32. | |
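The MIN_COMBINED example above (float range [0.0, 6.0] quantized to quint8 [0, 255]) can be sketched in pure Python. This is not the TensorFlow kernel; the rounding and clamping details are simplified, and the function name is illustrative.

```python
# Pure-Python sketch of MIN_COMBINED quantization to quint8 ([0, 255]).
# Not the TensorFlow op; rounding/clamping details are simplified.
def quantize_min_combined(values, min_range, max_range, levels=255):
    scale = levels / (max_range - min_range)  # range(T) / (max_range - min_range)
    return [min(levels, max(0, round((v - min_range) * scale)))
            for v in values]

q = quantize_min_combined([0.0, 3.0, 6.0], 0.0, 6.0)
```

As the text describes, each value is multiplied by 255/6, so 0.0 maps to 0 and 6.0 maps to 255.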
doc_1841 | The hash function to use for the signature. The default is sha1. | |
doc_1842 | This signal is sent when the request context is set up, before any request processing happens. Because the request context is already bound, the subscriber can access the request with the standard global proxies such as request. Example subscriber: def log_request(sender, **extra):
sender.logger.debug('Request context is set up')
from flask import request_started
request_started.connect(log_request, app) | |
doc_1843 | Encodes obj using the codec registered for encoding. Errors may be given to set the desired error handling scheme. The default error handler is 'strict' meaning that encoding errors raise ValueError (or a more codec specific subclass, such as UnicodeEncodeError). Refer to Codec Base Classes for more information on codec error handling. | |
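This describes the module-level codecs.encode. For instance, with the default 'strict' handler non-encodable input raises, while an alternative handler such as 'replace' substitutes characters instead:

```python
import codecs

# Default error handling is 'strict': non-ASCII input would raise
# UnicodeEncodeError here.
ok = codecs.encode("abc", "ascii")

# 'replace' substitutes '?' for characters the codec cannot represent.
lossy = codecs.encode("héllo", "ascii", errors="replace")
```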
doc_1844 |
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | |
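The convention behind this method can be sketched with a minimal estimator-like class (hypothetical, not part of scikit-learn): parameter names come from __init__'s signature and are read back from attributes of the same name.

```python
import inspect

# Hypothetical estimator following the get_params convention.
class TinyEstimator:
    def __init__(self, alpha=1.0, fit_intercept=True):
        self.alpha = alpha
        self.fit_intercept = fit_intercept

    def get_params(self, deep=True):
        # Parameter names are taken from the __init__ signature (skip 'self').
        names = list(inspect.signature(type(self).__init__).parameters)[1:]
        return {name: getattr(self, name) for name in names}

params = TinyEstimator(alpha=0.5).get_params()
```

The deep flag matters only once an estimator contains sub-estimators, which this sketch omits.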
doc_1845 | Add label to the list of labels on the message. | |
doc_1846 | skimage.data.astronaut() Color image of the astronaut Eileen Collins.
skimage.data.binary_blobs([length, …]) Generate synthetic binary image with several rounded blob-like objects.
skimage.data.brain() Subset of data from the University of North Carolina Volume Rendering Test Data Set.
skimage.data.brick() Brick wall.
skimage.data.camera() Gray-level “camera” image.
skimage.data.cat() Chelsea the cat.
skimage.data.cell() Cell floating in saline.
skimage.data.cells3d() 3D fluorescence microscopy image of cells.
skimage.data.checkerboard() Checkerboard image.
skimage.data.chelsea() Chelsea the cat.
skimage.data.clock() Motion blurred clock.
skimage.data.coffee() Coffee cup.
skimage.data.coins() Greek coins from Pompeii.
skimage.data.colorwheel() Color Wheel.
skimage.data.download_all([directory]) Download all datasets for use with scikit-image offline.
skimage.data.eagle() A golden eagle.
skimage.data.grass() Grass.
skimage.data.gravel() Gravel
skimage.data.horse() Black and white silhouette of a horse.
skimage.data.hubble_deep_field() Hubble eXtreme Deep Field.
skimage.data.human_mitosis() Image of human cells undergoing mitosis.
skimage.data.immunohistochemistry() Immunohistochemical (IHC) staining with hematoxylin counterstaining.
skimage.data.kidney() Mouse kidney tissue.
skimage.data.lbp_frontal_face_cascade_filename() Return the path to the XML file containing the weak classifier cascade.
skimage.data.lfw_subset() Subset of data from the LFW dataset.
skimage.data.lily() Lily of the valley plant stem.
skimage.data.logo() Scikit-image logo, a RGBA image.
skimage.data.microaneurysms() Gray-level “microaneurysms” image.
skimage.data.moon() Surface of the moon.
skimage.data.page() Scanned page.
skimage.data.retina() Human retina.
skimage.data.rocket() Launch photo of DSCOVR on Falcon 9 by SpaceX.
skimage.data.shepp_logan_phantom() Shepp Logan Phantom.
skimage.data.skin() Microscopy image of dermis and epidermis (skin layers).
skimage.data.stereo_motorcycle() Rectified stereo image pair with ground-truth disparities.
skimage.data.text() Gray-level “text” image used for corner detection. astronaut
skimage.data.astronaut() [source]
Color image of the astronaut Eileen Collins. Photograph of Eileen Collins, an American astronaut. She was selected as an astronaut in 1992 and first piloted the space shuttle STS-63 in 1995. She retired in 2006 after spending a total of 38 days, 8 hours and 10 minutes in outer space. This image was downloaded from the NASA Great Images database (https://flic.kr/p/r9qvLn). No known copyright restrictions, released into the public domain. Returns
astronaut(512, 512, 3) uint8 ndarray
Astronaut image.
Examples using skimage.data.astronaut
Flood Fill binary_blobs
skimage.data.binary_blobs(length=512, blob_size_fraction=0.1, n_dim=2, volume_fraction=0.5, seed=None) [source]
Generate synthetic binary image with several rounded blob-like objects. Parameters
lengthint, optional
Linear size of output image.
blob_size_fractionfloat, optional
Typical linear size of blob, as a fraction of length, should be smaller than 1.
n_dimint, optional
Number of dimensions of output image.
volume_fractionfloat, default 0.5
Fraction of image pixels covered by the blobs (where the output is 1). Should be in [0, 1].
seedint, optional
Seed to initialize the random number generator. If None, a random seed from the operating system is used. Returns
blobsndarray of bools
Output binary image Examples >>> from skimage import data
>>> data.binary_blobs(length=5, blob_size_fraction=0.2, seed=1)
array([[ True, False, True, True, True],
[ True, True, True, False, True],
[False, True, False, True, True],
[ True, False, False, True, True],
[ True, False, False, False, True]])
>>> blobs = data.binary_blobs(length=256, blob_size_fraction=0.1)
>>> # Finer structures
>>> blobs = data.binary_blobs(length=256, blob_size_fraction=0.05)
>>> # Blobs cover a smaller volume fraction of the image
>>> blobs = data.binary_blobs(length=256, volume_fraction=0.3)
brain
skimage.data.brain() [source]
Subset of data from the University of North Carolina Volume Rendering Test Data Set. The full dataset is available at [1]. Returns
image(10, 256, 256) uint16 ndarray
Notes The 3D volume consists of 10 layers from the larger volume. References
1
https://graphics.stanford.edu/data/voldata/
Examples using skimage.data.brain
Local Histogram Equalization
Rank filters brick
skimage.data.brick() [source]
Brick wall. Returns
brick(512, 512) uint8 image
A small section of a brick wall. Notes The original image was downloaded from CC0Textures and licensed under the Creative Commons CC0 License. A perspective transform was then applied to the image, prior to rotating it by 90 degrees, cropping and scaling it to obtain the final image.
camera
skimage.data.camera() [source]
Gray-level “camera” image. Can be used for segmentation and denoising examples. Returns
camera(512, 512) uint8 ndarray
Camera image. Notes No copyright restrictions. CC0 by the photographer (Lav Varshney). Changed in version 0.18: This image was replaced due to copyright restrictions. For more information, please see [1]. References
1
https://github.com/scikit-image/scikit-image/issues/3927
Examples using skimage.data.camera
Tinting gray-scale images
Masked Normalized Cross-Correlation
Entropy
GLCM Texture Features
Multi-Otsu Thresholding
Flood Fill
Rank filters cat
skimage.data.cat() [source]
Chelsea the cat. An example with texture, prominent edges in horizontal and diagonal directions, as well as features of differing scales. Returns
chelsea(300, 451, 3) uint8 ndarray
Chelsea image. Notes No copyright restrictions. CC0 by the photographer (Stefan van der Walt).
cell
skimage.data.cell() [source]
Cell floating in saline. This is a quantitative phase image retrieved from a digital hologram using the Python library qpformat. The image shows a cell with high phase value, above the background phase. Because of a banding pattern artifact in the background, this image is a good test of thresholding algorithms. The pixel spacing is 0.107 µm. These data were part of a comparison between several refractive index retrieval techniques for spherical objects as part of [1]. This image is CC0, dedicated to the public domain. You may copy, modify, or distribute it without asking permission. Returns
cell(660, 550) uint8 array
Image of a cell. References
1
Paul Müller, Mirjam Schürmann, Salvatore Girardo, Gheorghe Cojoc, and Jochen Guck. “Accurate evaluation of size and refractive index for spherical objects in quantitative phase imaging.” Optics Express 26(8): 10729-10743 (2018). DOI:10.1364/OE.26.010729
cells3d
skimage.data.cells3d() [source]
3D fluorescence microscopy image of cells. The returned data is a 3D multichannel array with dimensions provided in (z, c, y, x) order. Each voxel has a size of (0.29 0.26 0.26) micrometer. Channel 0 contains cell membranes, channel 1 contains nuclei. Returns
cells3d: (60, 2, 256, 256) uint16 ndarray
The volumetric images of cells taken with an optical microscope. Notes The data for this was provided by the Allen Institute for Cell Science. It has been downsampled by a factor of 4 in the row and column dimensions to reduce computational time. The microscope reports the following voxel spacing in microns: Original voxel size is (0.290, 0.065, 0.065). Scaling factor is (1, 4, 4) in each dimension. After rescaling the voxel size is (0.29 0.26 0.26).
Examples using skimage.data.cells3d
3D adaptive histogram equalization
Use rolling-ball algorithm for estimating background intensity
Explore 3D images (of cells) checkerboard
skimage.data.checkerboard() [source]
Checkerboard image. Checkerboards are often used in image calibration, since the corner-points are easy to locate. Because of the many parallel edges, they also visualise distortions particularly well. Returns
checkerboard(200, 200) uint8 ndarray
Checkerboard image.
Examples using skimage.data.checkerboard
Flood Fill chelsea
skimage.data.chelsea() [source]
Chelsea the cat. An example with texture, prominent edges in horizontal and diagonal directions, as well as features of differing scales. Returns
chelsea(300, 451, 3) uint8 ndarray
Chelsea image. Notes No copyright restrictions. CC0 by the photographer (Stefan van der Walt).
Examples using skimage.data.chelsea
Phase Unwrapping
Flood Fill clock
skimage.data.clock() [source]
Motion blurred clock. This photograph of a wall clock was taken while moving the camera in an approximately horizontal direction. It may be used to illustrate inverse filters and deconvolution. Released into the public domain by the photographer (Stefan van der Walt). Returns
clock(300, 400) uint8 ndarray
Clock image.
coffee
skimage.data.coffee() [source]
Coffee cup. This photograph is courtesy of Pikolo Espresso Bar. It contains several elliptical shapes as well as varying texture (smooth porcelain to coarse wood grain). Returns
coffee(400, 600, 3) uint8 ndarray
Coffee image. Notes No copyright restrictions. CC0 by the photographer (Rachel Michetti).
coins
skimage.data.coins() [source]
Greek coins from Pompeii. This image shows several coins outlined against a gray background. It is especially useful in, e.g. segmentation tests, where individual objects need to be identified against a background. The background shares enough grey levels with the coins that a simple segmentation is not sufficient. Returns
coins(303, 384) uint8 ndarray
Coins image. Notes This image was downloaded from the Brooklyn Museum Collection. No known copyright restrictions.
Examples using skimage.data.coins
Finding local maxima
Measure region properties
Use rolling-ball algorithm for estimating background intensity colorwheel
skimage.data.colorwheel() [source]
Color Wheel. Returns
colorwheel(370, 371, 3) uint8 image
A colorwheel.
download_all
skimage.data.download_all(directory=None) [source]
Download all datasets for use with scikit-image offline. Scikit-image datasets are no longer shipped with the library by default. This allows us to use higher quality datasets, while keeping the library download size small. This function requires the installation of an optional dependency, pooch, to download the full dataset. Follow installation instruction found at https://scikit-image.org/docs/stable/install.html Call this function to download all sample images making them available offline on your machine. Parameters
directory: path-like, optional
The directory where the dataset should be stored. Raises
ModuleNotFoundError:
If pooch is not installed, this error will be raised. Notes scikit-image will only search for images stored in the default directory. Only specify the directory if you wish to download the images to your own folder for a particular reason. You can access the location of the default data directory by inspecting the variable skimage.data.data_dir.
eagle
skimage.data.eagle() [source]
A golden eagle. Suitable for examples on segmentation, Hough transforms, and corner detection. Returns
eagle(2019, 1826) uint8 ndarray
Eagle image. Notes No copyright restrictions. CC0 by the photographer (Dayane Machado).
Examples using skimage.data.eagle
Markers for watershed transform grass
skimage.data.grass() [source]
Grass. Returns
grass(512, 512) uint8 image
Some grass. Notes The original image was downloaded from DeviantArt and licensed under the Creative Commons CC0 License. The downloaded image was cropped to include a region of (512, 512) pixels around the top left corner, converted to grayscale, then to uint8 prior to saving the result in PNG format.
gravel
skimage.data.gravel() [source]
Gravel Returns
gravel(512, 512) uint8 image
Grayscale gravel sample. Notes The original image was downloaded from CC0Textures and licensed under the Creative Commons CC0 License. The downloaded image was then rescaled to (1024, 1024), then the top left (512, 512) pixel region was cropped prior to converting the image to grayscale and uint8 data type. The result was saved using the PNG format.
horse
skimage.data.horse() [source]
Black and white silhouette of a horse. This image was downloaded from openclipart No copyright restrictions. CC0 given by owner (Andreas Preuss (marauder)). Returns
horse(328, 400) bool ndarray
Horse image.
hubble_deep_field
skimage.data.hubble_deep_field() [source]
Hubble eXtreme Deep Field. This photograph contains the Hubble Telescope’s farthest ever view of the universe. It can be useful as an example for multi-scale detection. Returns
hubble_deep_field(872, 1000, 3) uint8 ndarray
Hubble deep field image. Notes This image was downloaded from HubbleSite. The image was captured by NASA and may be freely used in the public domain.
human_mitosis
skimage.data.human_mitosis() [source]
Image of human cells undergoing mitosis. Returns
human_mitosis: (512, 512) uint8 ndimage
Data of human cells undergoing mitosis taken during the preparation of the manuscript in [1]. Notes Copyright David Root. Licensed under CC-0 [2]. References
1
Moffat J, Grueneberg DA, Yang X, Kim SY, Kloepfer AM, Hinkle G, Piqani B, Eisenhaure TM, Luo B, Grenier JK, Carpenter AE, Foo SY, Stewart SA, Stockwell BR, Hacohen N, Hahn WC, Lander ES, Sabatini DM, Root DE (2006) A lentiviral RNAi library for human and mouse genes applied to an arrayed viral high-content screen. Cell, 124(6):1283-98 / :DOI: 10.1016/j.cell.2006.01.040 PMID 16564017
2
GitHub licensing discussion https://github.com/CellProfiler/examples/issues/41
Examples using skimage.data.human_mitosis
Segment human cells (in mitosis) immunohistochemistry
skimage.data.immunohistochemistry() [source]
Immunohistochemical (IHC) staining with hematoxylin counterstaining. This picture shows colonic glands where the IHC expression of FHL2 protein is revealed with DAB. Hematoxylin counterstaining is applied to enhance the negative parts of the tissue. This image was acquired at the Center for Microscopy And Molecular Imaging (CMMI). No known copyright restrictions. Returns
immunohistochemistry(512, 512, 3) uint8 ndarray
Immunohistochemistry image.
kidney
skimage.data.kidney() [source]
Mouse kidney tissue. This biological tissue on a pre-prepared slide was imaged with confocal fluorescence microscopy (Nikon C1 inverted microscope). Image shape is (16, 512, 512, 3). That is 512x512 pixels in X-Y, 16 image slices in Z, and 3 color channels (emission wavelengths 450nm, 515nm, and 605nm, respectively). Real-space voxel size is 1.24 microns in X-Y, and 1.25 microns in Z. Data type is unsigned 16-bit integers. Returns
kidney(16, 512, 512, 3) uint16 ndarray
Kidney 3D multichannel image. Notes This image was acquired by Genevieve Buckley at Monash Micro Imaging in 2018. License: CC0
lbp_frontal_face_cascade_filename
skimage.data.lbp_frontal_face_cascade_filename() [source]
Return the path to the XML file containing the weak classifier cascade. These classifiers were trained using LBP features. The file is part of the OpenCV repository [1]. References
1
OpenCV lbpcascade trained files https://github.com/opencv/opencv/tree/master/data/lbpcascades
lfw_subset
skimage.data.lfw_subset() [source]
Subset of data from the LFW dataset. This database is a subset of the LFW database containing: 100 faces 100 non-faces The full dataset is available at [2]. Returns
images(200, 25, 25) uint8 ndarray
100 first images are faces and subsequent 100 are non-faces. Notes The faces were randomly selected from the LFW dataset and the non-faces were extracted from the background of the same dataset. The cropped ROIs have been resized to a 25 x 25 pixels. References
1
Huang, G., Mattar, M., Lee, H., & Learned-Miller, E. G. (2012). Learning to align from scratch. In Advances in Neural Information Processing Systems (pp. 764-772).
2
http://vis-www.cs.umass.edu/lfw/
Examples using skimage.data.lfw_subset
Specific images lily
skimage.data.lily() [source]
Lily of the valley plant stem. This plant stem on a pre-prepared slide was imaged with confocal fluorescence microscopy (Nikon C1 inverted microscope). Image shape is (922, 922, 4). That is 922x922 pixels in X-Y, with 4 color channels. Real-space voxel size is 1.24 microns in X-Y. Data type is unsigned 16-bit integers. Returns
lily(922, 922, 4) uint16 ndarray
Lily 2D multichannel image. Notes This image was acquired by Genevieve Buckley at Monash Micro Imaging in 2018. License: CC0
logo
skimage.data.logo() [source]
Scikit-image logo, a RGBA image. Returns
logo(500, 500, 4) uint8 ndarray
Logo image.
microaneurysms
skimage.data.microaneurysms() [source]
Gray-level “microaneurysms” image. Detail from an image of the retina (green channel). The image is a crop of image 07_dr.JPG from the High-Resolution Fundus (HRF) Image Database: https://www5.cs.fau.de/research/data/fundus-images/ Returns
microaneurysms(102, 102) uint8 ndarray
Retina image with lesions. Notes No copyright restrictions. CC0 given by owner (Andreas Maier). References
1
Budai, A., Bock, R, Maier, A., Hornegger, J., Michelson, G. (2013). Robust Vessel Segmentation in Fundus Images. International Journal of Biomedical Imaging, vol. 2013, 2013. DOI:10.1155/2013/154860
moon
skimage.data.moon() [source]
Surface of the moon. This low-contrast image of the surface of the moon is useful for illustrating histogram equalization and contrast stretching. Returns
moon(512, 512) uint8 ndarray
Moon image.
Examples using skimage.data.moon
Local Histogram Equalization page
skimage.data.page() [source]
Scanned page. This image of printed text is useful for demonstrations requiring uneven background illumination. Returns
page(191, 384) uint8 ndarray
Page image.
Examples using skimage.data.page
Use rolling-ball algorithm for estimating background intensity
Rank filters retina
skimage.data.retina() [source]
Human retina. This image of a retina is useful for demonstrations requiring circular images. Returns
retina(1411, 1411, 3) uint8 ndarray
Retina image in RGB. Notes This image was downloaded from wikimedia. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. References
1
Häggström, Mikael (2014). “Medical gallery of Mikael Häggström 2014”. WikiJournal of Medicine 1 (2). DOI:10.15347/wjm/2014.008. ISSN 2002-4436. Public Domain
rocket
skimage.data.rocket() [source]
Launch photo of DSCOVR on Falcon 9 by SpaceX. This is the launch photo of Falcon 9 carrying DSCOVR lifted off from SpaceX’s Launch Complex 40 at Cape Canaveral Air Force Station, FL. Returns
rocket(427, 640, 3) uint8 ndarray
Rocket image. Notes This image was downloaded from SpaceX Photos. The image was captured by SpaceX and released in the public domain.
shepp_logan_phantom
skimage.data.shepp_logan_phantom() [source]
Shepp Logan Phantom. Returns
phantom(400, 400) float64 image
Image of the Shepp-Logan phantom in grayscale. References
1
L. A. Shepp and B. F. Logan, “The Fourier reconstruction of a head section,” in IEEE Transactions on Nuclear Science, vol. 21, no. 3, pp. 21-43, June 1974. DOI:10.1109/TNS.1974.6499235
skin
skimage.data.skin() [source]
Microscopy image of dermis and epidermis (skin layers). Hematoxylin and eosin stained slide at 10x of normal epidermis and dermis with a benign intradermal nevus. Returns
skin(960, 1280, 3) RGB image of uint8
Notes This image requires an Internet connection the first time it is called, and to have the pooch package installed, in order to fetch the image file from the scikit-image datasets repository. The source of this image is https://en.wikipedia.org/wiki/File:Normal_Epidermis_and_Dermis_with_Intradermal_Nevus_10x.JPG The image was released in the public domain by its author Kilbad.
Examples using skimage.data.skin
Trainable segmentation using local features and random forests stereo_motorcycle
skimage.data.stereo_motorcycle() [source]
Rectified stereo image pair with ground-truth disparities. The two images are rectified such that every pixel in the left image has its corresponding pixel on the same scanline in the right image. That means that both images are warped such that they have the same orientation but a horizontal spatial offset (baseline). The ground-truth pixel offset in column direction is specified by the included disparity map. The two images are part of the Middlebury 2014 stereo benchmark. The dataset was created by Nera Nesic, Porter Westling, Xi Wang, York Kitajima, Greg Krathwohl, and Daniel Scharstein at Middlebury College. A detailed description of the acquisition process can be found in [1]. The images included here are down-sampled versions of the default exposure images in the benchmark. The images are down-sampled by a factor of 4 using the function skimage.transform.downscale_local_mean. The calibration data in the following and the included ground-truth disparity map are valid for the down-sampled images: Focal length: 994.978px
Principal point x: 311.193px
Principal point y: 254.877px
Principal point dx: 31.086px
Baseline: 193.001mm
Returns
img_left(500, 741, 3) uint8 ndarray
Left stereo image.
img_right(500, 741, 3) uint8 ndarray
Right stereo image.
disp(500, 741, 3) float ndarray
Ground-truth disparity map, where each value describes the offset in column direction between corresponding pixels in the left and the right stereo images. E.g. the corresponding pixel of img_left[10, 10 + disp[10, 10]] is img_right[10, 10]. NaNs denote pixels in the left image that do not have ground-truth. Notes The original resolution images, images with different exposure and lighting, and ground-truth depth maps can be found at the Middlebury website [2]. References
1
D. Scharstein, H. Hirschmueller, Y. Kitajima, G. Krathwohl, N. Nesic, X. Wang, and P. Westling. High-resolution stereo datasets with subpixel-accurate ground truth. In German Conference on Pattern Recognition (GCPR 2014), Muenster, Germany, September 2014.
2
http://vision.middlebury.edu/stereo/data/scenes2014/
Examples using skimage.data.stereo_motorcycle
Specific images
Registration using optical flow text
skimage.data.text() [source]
Gray-level “text” image used for corner detection. Returns
text(172, 448) uint8 ndarray
Text image. Notes This image was downloaded from Wikipedia (https://en.wikipedia.org/wiki/File:Corner.png). No known copyright restrictions, released into the public domain. | |
doc_1847 | Termination signal. | |
doc_1848 |
A special method to re-show the figure in the notebook. | |
doc_1849 |
Counts the number of non-zero values in the array a. The word “non-zero” is in reference to the Python 2.x built-in method __nonzero__() (renamed __bool__() in Python 3.x) of Python objects that tests an object’s “truthfulness”. For example, any number is considered truthful if it is nonzero, whereas any string is considered truthful if it is not the empty string. Thus, this function (recursively) counts how many elements in a (and in sub-arrays thereof) have their __nonzero__() or __bool__() method evaluated to True. Parameters
aarray_like
The array for which to count non-zeros.
axisint or tuple, optional
Axis or tuple of axes along which to count non-zeros. Default is None, meaning that non-zeros will be counted along a flattened version of a. New in version 1.12.0.
keepdimsbool, optional
If this is set to True, the axes that are counted are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. New in version 1.19.0. Returns
countint or array of int
Number of non-zero values in the array along a given axis. Otherwise, the total number of non-zero values in the array is returned. See also nonzero
Return the coordinates of all the non-zero values. Examples >>> np.count_nonzero(np.eye(4))
4
>>> a = np.array([[0, 1, 7, 0],
... [3, 0, 2, 19]])
>>> np.count_nonzero(a)
5
>>> np.count_nonzero(a, axis=0)
array([1, 1, 2, 1])
>>> np.count_nonzero(a, axis=1)
array([2, 3])
>>> np.count_nonzero(a, axis=1, keepdims=True)
array([[2],
[3]]) | |
doc_1850 |
Computes a partial inverse of MaxPool2d. MaxPool2d is not fully invertible, since the non-maximal values are lost. MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero. Note MaxPool2d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below. Parameters
kernel_size (int or tuple) – Size of the max pooling window.
stride (int or tuple) – Stride of the max pooling window. It is set to kernel_size by default.
padding (int or tuple) – Padding that was added to the input Inputs:
input: the input Tensor to invert
indices: the indices given out by MaxPool2d
output_size (optional): the targeted output size Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\), where \(H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{kernel\_size}[0]\)
\(W_{out} = (W_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{kernel\_size}[1]\)
or as given by output_size in the call operator Example: >>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool2d(2, stride=2)
>>> input = torch.tensor([[[[ 1., 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12],
[13, 14, 15, 16]]]])
>>> output, indices = pool(input)
>>> unpool(output, indices)
tensor([[[[ 0., 0., 0., 0.],
[ 0., 6., 0., 8.],
[ 0., 0., 0., 0.],
[ 0., 14., 0., 16.]]]])
>>> # specify a different output size than input size
>>> unpool(output, indices, output_size=torch.Size([1, 1, 5, 5]))
tensor([[[[ 0., 0., 0., 0., 0.],
[ 6., 0., 8., 0., 0.],
[ 0., 0., 0., 14., 0.],
[ 16., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]]]]) | |
doc_1851 |
Plot the coherence between x and y. Coherence is the normalized cross spectral density: \[C_{xy} = \frac{|P_{xy}|^2}{P_{xx}P_{yy}}\] Parameters
Fsfloat, default: 2
The sampling frequency (samples per time unit). It is used to calculate the Fourier frequencies, freqs, in cycles per time unit.
windowcallable or ndarray, default: window_hanning
A function or a vector of length NFFT. To create window vectors see window_hanning, window_none, numpy.blackman, numpy.hamming, numpy.bartlett, scipy.signal, scipy.signal.get_window, etc. If a function is passed as the argument, it must take a data segment as an argument and return the windowed version of the segment.
sides{'default', 'onesided', 'twosided'}, optional
Which sides of the spectrum to return. 'default' is one-sided for real data and two-sided for complex data. 'onesided' forces the return of a one-sided spectrum, while 'twosided' forces two-sided.
pad_toint, optional
The number of points to which the data segment is padded when performing the FFT. This can be different from NFFT, which specifies the number of data points used. While not increasing the actual resolution of the spectrum (the minimum distance between resolvable peaks), this can give more points in the plot, allowing for more detail. This corresponds to the n parameter in the call to fft(). The default is None, which sets pad_to equal to NFFT
NFFTint, default: 256
The number of data points used in each block for the FFT. A power of 2 is most efficient. This should NOT be used to get zero padding, or the scaling of the result will be incorrect; use pad_to for this instead.
detrend{'none', 'mean', 'linear'} or callable, default: 'none'
The function applied to each segment before fft-ing, designed to remove the mean or linear trend. Unlike in MATLAB, where the detrend parameter is a vector, in Matplotlib it is a function. The mlab module defines detrend_none, detrend_mean, and detrend_linear, but you can use a custom function as well. You can also use a string to choose one of the functions: 'none' calls detrend_none. 'mean' calls detrend_mean. 'linear' calls detrend_linear.
scale_by_freqbool, default: True
Whether the resulting density values should be scaled by the scaling frequency, which gives density in units of Hz^-1. This allows for integration over the returned frequency values. The default is True for MATLAB compatibility.
noverlapint, default: 0 (no overlap)
The number of points of overlap between blocks.
Fcint, default: 0
The center frequency of x, which offsets the x extents of the plot to reflect the frequency range used when a signal is acquired and then filtered and downsampled to baseband. Returns
Cxy1-D array
The coherence vector.
freqs1-D array
The frequencies for the elements in Cxy. Other Parameters
dataindexable object, optional
If given, the following parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception): x, y **kwargs
Keyword arguments control the Line2D properties:
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
antialiased or aa bool
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color or c color
dash_capstyle CapStyle or {'butt', 'projecting', 'round'}
dash_joinstyle JoinStyle or {'miter', 'round', 'bevel'}
dashes sequence of floats (on/off ink in points) or (None, None)
data (2, N) array or two 1D arrays
drawstyle or ds {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default'
figure Figure
fillstyle {'full', 'left', 'right', 'bottom', 'top', 'none'}
gid str
in_layout bool
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float
marker marker style string, Path or MarkerStyle
markeredgecolor or mec color
markeredgewidth or mew float
markerfacecolor or mfc color
markerfacecoloralt or mfcalt color
markersize or ms float
markevery None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool]
path_effects AbstractPathEffect
picker float or callable[[Artist, Event], tuple[bool, dict]]
pickradius float
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
solid_capstyle CapStyle or {'butt', 'projecting', 'round'}
solid_joinstyle JoinStyle or {'miter', 'round', 'bevel'}
transform unknown
url str
visible bool
xdata 1D array
ydata 1D array
zorder float References Bendat & Piersol -- Random Data: Analysis and Measurement Procedures, John Wiley & Sons (1986) | |
doc_1852 |
Connect the callback function func to button click events. Returns a connection id, which can be used to disconnect the callback. | |
doc_1853 |
Grab the image information from the figure and save as a movie frame. All keyword arguments in savefig_kwargs are passed on to the savefig call that saves the figure. | |
doc_1854 |
Return an array of zeros with the same shape and type as a given array. Parameters
aarray_like
The shape and data-type of a define these same attributes of the returned array.
dtypedata-type, optional
Overrides the data type of the result. New in version 1.6.0.
order{‘C’, ‘F’, ‘A’, or ‘K’}, optional
Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. New in version 1.6.0.
subokbool, optional.
If True, then the newly created array will use the sub-class type of a, otherwise it will be a base-class array. Defaults to True.
shapeint or sequence of ints, optional.
Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. New in version 1.17.0. Returns
outMaskedArray
Array of zeros with the same shape and type as a. See also empty_like
Return an empty array with shape and type of input. ones_like
Return an array of ones with shape and type of input. full_like
Return a new array with shape of input filled with value. zeros
Return a new array setting values to zero. Examples >>> x = np.arange(6)
>>> x = x.reshape((2, 3))
>>> x
array([[0, 1, 2],
[3, 4, 5]])
>>> np.zeros_like(x)
array([[0, 0, 0],
[0, 0, 0]])
>>> y = np.arange(3, dtype=float)
>>> y
array([0., 1., 2.])
>>> np.zeros_like(y)
array([0., 0., 0.]) | |
doc_1855 | Set the message’s envelope header to unixfrom, which should be a string. (See mboxMessage for a brief description of this header.) | |
doc_1856 | See Migration guide for more details. tf.compat.v1.raw_ops.FlushSummaryWriter
tf.raw_ops.FlushSummaryWriter(
writer, name=None
)
Args
writer A Tensor of type resource.
name A name for the operation (optional).
Returns The created Operation. | |
doc_1857 | A 1-based range iterator of page numbers, e.g. yielding [1, 2, 3, 4]. | |
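The 1-based behavior described above can be sketched in plain Python (the `page_range` function name is illustrative only, not the actual attribute's implementation):

```python
# Minimal sketch of a 1-based page-number range: for 4 pages,
# yields 1, 2, 3, 4 rather than the 0-based 0..3.
def page_range(num_pages):
    return range(1, num_pages + 1)

print(list(page_range(4)))  # [1, 2, 3, 4]
```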
doc_1858 | Return True if func is a coroutine function. This method is different from inspect.iscoroutinefunction() because it returns True for generator-based coroutine functions decorated with @coroutine. | |
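A small standard-library sketch of the common case (the generator-based `@coroutine` style that motivates the difference between the two helpers was removed in Python 3.11, so only native coroutine functions are shown here):

```python
import asyncio
import inspect

async def native():   # a native coroutine function
    return 1

def plain():          # an ordinary function
    return 1

# Both helpers return True for native coroutine functions.
assert asyncio.iscoroutinefunction(native)
assert inspect.iscoroutinefunction(native)
# Neither considers an ordinary function a coroutine function.
assert not asyncio.iscoroutinefunction(plain)
```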
doc_1859 | This method commits the current transaction. If you don’t call this method, anything you did since the last call to commit() is not visible from other database connections. If you wonder why you don’t see the data you’ve written to the database, please check you didn’t forget to call this method. | |
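A minimal sketch of the commit pattern (an in-memory database is used here to keep the example self-contained; a real file-backed database is where forgetting `commit()` actually hides data from other connections):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()  # make the insert durable / visible to other connections
assert conn.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 1
conn.close()
```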
doc_1860 | tf.profiler.experimental.ProfilerOptions(
host_tracer_level=2, python_tracer_level=0, device_tracer_level=1, delay_ms=None
)
Use tf.profiler.ProfilerOptions to control tf.profiler behavior. Fields:
host_tracer_level: Adjust CPU tracing level. Values are: 1 - critical info only, 2 - info, 3 - verbose. [default value is 2]
python_tracer_level: Toggle tracing of Python function calls. Values are: 1 - enabled, 0 - disabled [default value is 0]
device_tracer_level: Adjust device (TPU/GPU) tracing level. Values are: 1 - enabled, 0 - disabled [default value is 1]
delay_ms: Requests for all hosts to start profiling at a timestamp that is delay_ms away from the current time. delay_ms is in milliseconds. If zero, each host will start profiling immediately upon receiving the request. Default value is None, allowing the profiler to guess the best value.
Attributes
host_tracer_level
python_tracer_level
device_tracer_level
delay_ms | |
doc_1861 | tf.distribute.DistributedValues(
values
)
A subclass instance of tf.distribute.DistributedValues is created when creating variables within a distribution strategy, iterating a tf.distribute.DistributedDataset or through tf.distribute.Strategy.run. This base class should never be instantiated directly. tf.distribute.DistributedValues contains a value per replica. Depending on the subclass, the values could either be synced on update, synced on demand, or never synced. tf.distribute.DistributedValues can be reduced to obtain single value across replicas, as input into tf.distribute.Strategy.run or the per-replica values inspected using tf.distribute.Strategy.experimental_local_results. Example usage: Created from a tf.distribute.DistributedDataset:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
Returned by run:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
ctx = tf.distribute.get_replica_context()
return ctx.replica_id_in_sync_group
distributed_values = strategy.run(run)
As input into run:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
@tf.function
def run(input):
return input + 1.0
updated_value = strategy.run(run, args=(distributed_values,))
Reduce value:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
reduced_value = strategy.reduce(tf.distribute.ReduceOp.SUM,
distributed_values,
axis = 0)
Inspect local replica values:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
per_replica_values = strategy.experimental_local_results(
distributed_values)
per_replica_values
(<tf.Tensor: shape=(1,), dtype=float32, numpy=array([5.], dtype=float32)>,
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([6.], dtype=float32)>) | |
doc_1862 | AnchoredLocatorBase(bbox_to_anchor, ...[, ...])
Parameters
AnchoredSizeLocator(bbox_to_anchor, x_size, ...)
Parameters
AnchoredZoomLocator(parent_axes, zoom, loc)
Parameters
BboxConnector(bbox1, bbox2, loc1[, loc2]) Connect two bboxes with a straight line.
BboxConnectorPatch(bbox1, bbox2, loc1a, ...) Connect two bboxes with a quadrilateral.
BboxPatch(bbox, **kwargs) Patch showing the shape bounded by a Bbox.
InsetPosition(parent, lbwh) An object for positioning an inset axes. Functions
inset_axes(parent_axes, width, height[, ...]) Create an inset axes with a given width and height.
mark_inset(parent_axes, inset_axes, loc1, ...) Draw a box to mark the location of an area represented by an inset axes.
zoomed_inset_axes(parent_axes, zoom[, loc, ...]) Create an anchored inset axes by scaling a parent axes. | |
doc_1863 | Start the playback of the music stream play(loops=0, start=0.0, fade_ms=0) -> None This will play the loaded music stream. If the music is already playing it will be restarted. loops is an optional integer argument, which is 0 by default; it tells how many times to repeat the music. The music repeats indefinitely if this argument is set to -1. start is an optional float argument, which is 0.0 by default, denoting the position in time the music starts playing from. The starting position depends on the format of the music played. MP3 and OGG use the position as time in seconds. For MP3s the selected start position may not be accurate, as things like variable bit rate encoding and ID3 tags can throw off the timing calculations. For MOD music it is the pattern order number. Passing a start position will raise a NotImplementedError if the start position cannot be set. fade_ms is an optional integer argument, which is 0 by default; it makes the music start playing at 0 volume and fade up to full volume over the given time. The sample may end before the fade-in is complete. Changed in pygame 2.0.0: Added optional fade_ms argument | |
doc_1864 | See Migration guide for more details. tf.compat.v1.math.special.bessel_y0
tf.math.special.bessel_y0(
x, name=None
)
Bessel function of the second kind of order 0 (y0), computed element-wise.
tf.math.special.bessel_y0([0.5, 1., 2., 4.]).numpy()
array([-0.44451873, 0.08825696, 0.51037567, -0.01694074], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.y0 | |
doc_1865 | See Migration guide for more details. tf.compat.v1.raw_ops.DeepCopy
tf.raw_ops.DeepCopy(
x, name=None
)
Args
x A Tensor. The source tensor of type T.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_1866 | See Migration guide for more details. tf.compat.v1.train.ClusterSpec
tf.train.ClusterSpec(
cluster
)
A tf.train.ClusterSpec represents the set of processes that participate in a distributed TensorFlow computation. Every tf.distribute.Server is constructed in a particular cluster. To create a cluster with two jobs and five tasks, you specify the mapping from job names to lists of network addresses (typically hostname-port pairs). cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222",
"worker1.example.com:2222",
"worker2.example.com:2222"],
"ps": ["ps0.example.com:2222",
"ps1.example.com:2222"]})
Each job may also be specified as a sparse mapping from task indices to network addresses. This enables a server to be configured without needing to know the identity of (for example) all other worker tasks: cluster = tf.train.ClusterSpec({"worker": {1: "worker1.example.com:2222"},
"ps": ["ps0.example.com:2222",
"ps1.example.com:2222"]})
Args
cluster A dictionary mapping one or more job names to (i) a list of network addresses, or (ii) a dictionary mapping integer task indices to network addresses; or a tf.train.ClusterDef protocol buffer.
Raises
TypeError If cluster is not a dictionary mapping strings to lists of strings, and not a tf.train.ClusterDef protobuf.
Attributes
jobs Returns a list of job names in this cluster. Methods as_cluster_def View source
as_cluster_def()
Returns a tf.train.ClusterDef protocol buffer based on this cluster. as_dict View source
as_dict()
Returns a dictionary from job names to their tasks. For each job, if the task index space is dense, the corresponding value will be a list of network addresses; otherwise it will be a dictionary mapping (sparse) task indices to the corresponding addresses.
Returns A dictionary mapping job names to lists or dictionaries describing the tasks in those jobs.
job_tasks View source
job_tasks(
job_name
)
Returns a mapping from task ID to address in the given job.
Note: For backwards compatibility, this method returns a list. If the given job was defined with a sparse set of task indices, the length of this list may not reflect the number of tasks defined in this job. Use the tf.train.ClusterSpec.num_tasks method to find the number of tasks defined in a particular job.
Args
job_name The string name of a job in this cluster.
Returns A list of task addresses, where the index in the list corresponds to the task index of each task. The list may contain None if the job was defined with a sparse set of task indices.
Raises
ValueError If job_name does not name a job in this cluster. num_tasks View source
num_tasks(
job_name
)
Returns the number of tasks defined in the given job.
Args
job_name The string name of a job in this cluster.
Returns The number of tasks defined in the given job.
Raises
ValueError If job_name does not name a job in this cluster. task_address View source
task_address(
job_name, task_index
)
Returns the address of the given task in the given job.
Args
job_name The string name of a job in this cluster.
task_index A non-negative integer.
Returns The address of the given task in the given job.
Raises
ValueError If job_name does not name a job in this cluster, or no task with index task_index is defined in that job. task_indices View source
task_indices(
job_name
)
Returns a list of valid task indices in the given job.
Args
job_name The string name of a job in this cluster.
Returns A list of valid task indices in the given job.
Raises
ValueError If job_name does not name a job in this cluster, or no task with index task_index is defined in that job. __bool__ View source
__bool__()
__eq__ View source
__eq__(
other
)
Return self==value. __ne__ View source
__ne__(
other
)
Return self!=value. __nonzero__ View source
__nonzero__() | |
doc_1867 | accessor for ‘no-transform’ | |
doc_1868 | See Migration guide for more details. tf.compat.v1.sparse.to_indicator, tf.compat.v1.sparse_to_indicator
tf.sparse.to_indicator(
sp_input, vocab_size, name=None
)
The last dimension of sp_input.indices is discarded and replaced with the values of sp_input. If sp_input.dense_shape = [D0, D1, ..., Dn, K], then output.shape = [D0, D1, ..., Dn, vocab_size], where output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True
and False elsewhere in output. For example, if sp_input.dense_shape = [2, 3, 4] with non-empty values: [0, 0, 0]: 0
[0, 1, 0]: 10
[1, 0, 3]: 103
[1, 1, 1]: 150
[1, 1, 2]: 149
[1, 1, 3]: 150
[1, 2, 1]: 121
and vocab_size = 200, then the output will be a [2, 3, 200] dense bool tensor with False everywhere except at positions (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150),
(1, 2, 121).
Note that repeats are allowed in the input SparseTensor. This op is useful for converting SparseTensors into dense formats for compatibility with ops that expect dense tensors. The input SparseTensor must be in row-major order.
Args
sp_input A SparseTensor with values property of type int32 or int64.
vocab_size A scalar int64 Tensor (or Python int) containing the new size of the last dimension, all(0 <= sp_input.values < vocab_size).
name A name prefix for the returned tensors (optional)
Returns A dense bool indicator tensor representing the indices with specified value.
Raises
TypeError If sp_input is not a SparseTensor. | |
doc_1869 |
Check if the object is dict-like. Parameters
obj:The object to check
Returns
is_dict_like:bool
Whether obj has dict-like properties. Examples
>>> is_dict_like({1: 2})
True
>>> is_dict_like([1, 2, 3])
False
>>> is_dict_like(dict)
False
>>> is_dict_like(dict())
True | |
doc_1870 |
Return a derivative of this polynomial. Refer to polyder for full documentation. See also polyder
equivalent function | |
doc_1871 | tf.sqrt Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.sqrt, tf.compat.v1.sqrt
tf.math.sqrt(
x, name=None
)
Note: This operation does not support integer types.
x = tf.constant([[4.0], [16.0]])
tf.sqrt(x)
<tf.Tensor: shape=(2, 1), dtype=float32, numpy=
array([[2.],
[4.]], dtype=float32)>
y = tf.constant([[-4.0], [16.0]])
tf.sqrt(y)
<tf.Tensor: shape=(2, 1), dtype=float32, numpy=
array([[nan],
[ 4.]], dtype=float32)>
z = tf.constant([[-1.0], [16.0]], dtype=tf.complex128)
tf.sqrt(z)
<tf.Tensor: shape=(2, 1), dtype=complex128, numpy=
array([[0.0+1.j],
[4.0+0.j]])>
Note: In order to support complex numbers, please provide an input tensor of complex64 or complex128.
Args
x A tf.Tensor of type bfloat16, half, float32, float64, complex64, complex128
name A name for the operation (optional).
Returns A tf.Tensor of same size, type and sparsity as x. If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.sqrt(x.values, ...), x.dense_shape) | |
doc_1872 |
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values. | |
doc_1873 |
Define default roll function to be called in apply method. | |
doc_1874 |
Set parameters for this locator. Parameters
nbinsint or 'auto', optional
see MaxNLocator
stepsarray-like, optional
see MaxNLocator
integerbool, optional
see MaxNLocator
symmetricbool, optional
see MaxNLocator
prune{'lower', 'upper', 'both', None}, optional
see MaxNLocator
min_n_ticksint, optional
see MaxNLocator | |
doc_1875 | Open the file pointed to in text mode, write data to it, and close the file: >>> p = Path('my_text_file')
>>> p.write_text('Text file contents')
18
>>> p.read_text()
'Text file contents'
An existing file of the same name is overwritten. The optional parameters have the same meaning as in open(). New in version 3.5. | |
doc_1876 | New in Django 4.0. The default command options to suppress in the help output. This should be a set of option names (e.g. '--verbosity'). The default values for the suppressed options are still passed. | |
doc_1877 | Transform a method into a static method. A static method does not receive an implicit first argument. To declare a static method, use this idiom: class C:
@staticmethod
def f(arg1, arg2, ...): ...
The @staticmethod form is a function decorator – see Function definitions for details. A static method can be called either on the class (such as C.f()) or on an instance (such as C().f()). Static methods in Python are similar to those found in Java or C++. Also see classmethod() for a variant that is useful for creating alternate class constructors. Like all decorators, it is also possible to call staticmethod as a regular function and do something with its result. This is needed in some cases where you need a reference to a function from a class body and you want to avoid the automatic transformation to instance method. For these cases, use this idiom: class C:
builtin_open = staticmethod(open)
For more information on static methods, see The standard type hierarchy. | |
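A runnable illustration of both idioms from the text (substituting `len` for `open` so the example needs no filesystem):

```python
class C:
    @staticmethod
    def f(x, y):
        return x + y

    # Reference a plain function from the class body without it being
    # turned into an instance method on attribute access.
    builtin_len = staticmethod(len)

assert C.f(1, 2) == 3        # called on the class
assert C().f(1, 2) == 3      # or on an instance
assert C.builtin_len("abc") == 3
```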
doc_1878 | operator.__matmul__(a, b)
Return a @ b. New in version 3.5. | |
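Since no built-in type supports `@` out of the box, a tiny made-up vector class with a dot-product `__matmul__` shows the equivalence (the `Vec` class is purely illustrative):

```python
import operator

class Vec:
    def __init__(self, *xs):
        self.xs = xs

    def __matmul__(self, other):
        # dot product as the "@" operation
        return sum(a * b for a, b in zip(self.xs, other.xs))

v, w = Vec(1, 2, 3), Vec(4, 5, 6)
assert operator.matmul(v, w) == 32   # same result as v @ w
assert v @ w == 32
```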
doc_1879 |
Copy properties from other to self. | |
doc_1880 |
Return a function that converts a pdf file to a png file. | |
doc_1881 | tf.compat.v1.data.experimental.choose_from_datasets(
datasets, choice_dataset
)
For example, given the following datasets: datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset)
The elements of result will be: "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
Args
datasets A list of tf.data.Dataset objects with compatible structure.
choice_dataset A tf.data.Dataset of scalar tf.int64 tensors between 0 and len(datasets) - 1.
Returns A dataset that interleaves elements from datasets according to the values of choice_dataset.
Raises
TypeError If the datasets or choice_dataset arguments have the wrong type. | |
doc_1882 | The FileType factory creates objects that can be passed to the type argument of ArgumentParser.add_argument(). Arguments that have FileType objects as their type will open command-line arguments as files with the requested modes, buffer sizes, encodings and error handling (see the open() function for more details): >>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--raw', type=argparse.FileType('wb', 0))
>>> parser.add_argument('out', type=argparse.FileType('w', encoding='UTF-8'))
>>> parser.parse_args(['--raw', 'raw.dat', 'file.txt'])
Namespace(out=<_io.TextIOWrapper name='file.txt' mode='w' encoding='UTF-8'>, raw=<_io.FileIO name='raw.dat' mode='wb'>)
FileType objects understand the pseudo-argument '-' and automatically convert this into sys.stdin for readable FileType objects and sys.stdout for writable FileType objects: >>> parser = argparse.ArgumentParser()
>>> parser.add_argument('infile', type=argparse.FileType('r'))
>>> parser.parse_args(['-'])
Namespace(infile=<_io.TextIOWrapper name='<stdin>' encoding='UTF-8'>)
New in version 3.4: The encodings and errors keyword arguments. | |
doc_1883 |
Return random floats in the half-open interval [0.0, 1.0). Results are from the “continuous uniform” distribution over the stated interval. To sample \(Unif[a, b), b > a\) multiply the output of random_sample by (b-a) and add a: (b - a) * random_sample() + a
Note New code should use the random method of a default_rng() instance instead; please see the Quick Start. Parameters
sizeint or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned. Returns
outfloat or ndarray of floats
Array of random floats of shape size (unless size=None, in which case a single float is returned). See also Generator.random
which should be used for new code. Examples >>> np.random.random_sample()
0.47108547995356098 # random
>>> type(np.random.random_sample())
<class 'float'>
>>> np.random.random_sample((5,))
array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random
Three-by-two array of random numbers from [-5, 0): >>> 5 * np.random.random_sample((3, 2)) - 5
array([[-3.99149989, -0.52338984], # random
[-2.99091858, -0.79479508],
[-1.23204345, -1.75224494]]) | |
doc_1884 | A string or list of strings specifying the ordering to apply to the queryset. Valid values are the same as those for order_by(). | |
doc_1885 | The CheckList widget displays a list of items to be selected by the user. CheckList acts similarly to the Tk checkbutton or radiobutton widgets, except it is capable of handling many more items than checkbuttons or radiobuttons. | |
doc_1886 |
Roll provided date backward to next offset only if not on offset. Returns
TimeStamp
Rolled timestamp if not on offset, otherwise unchanged timestamp. | |
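Assuming this is the pandas `DateOffset.rollback`-style behavior (the entry reads like that docstring), a business-day offset illustrates both branches:

```python
import pandas as pd

bday = pd.offsets.BDay()
sat = pd.Timestamp("2024-01-06")   # a Saturday, not on the offset
fri = pd.Timestamp("2024-01-05")   # a Friday, already on the offset

assert bday.rollback(sat) == fri   # rolled back to the previous business day
assert bday.rollback(fri) == fri   # unchanged when already on offset
```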
doc_1887 | calculates the Euclidean distance to a given vector. distance_to(Vector2) -> float | |
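The computation can be sketched in plain Python with 2-D points as tuples (pygame's `Vector2` is not required for the math itself):

```python
import math

# Euclidean distance between two 2-D points, as distance_to computes it.
def distance_to(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

assert distance_to((0, 0), (3, 4)) == 5.0
```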
doc_1888 |
Return a normalized rgba array corresponding to x. In the normal case, x is a 1D or 2D sequence of scalars, and the corresponding ndarray of rgba values will be returned, based on the norm and colormap set for this ScalarMappable. There is one special case, for handling images that are already rgb or rgba, such as might have been read from an image file. If x is an ndarray with 3 dimensions, and the last dimension is either 3 or 4, then it will be treated as an rgb or rgba array, and no mapping will be done. The array can be uint8, or it can be floating point with values in the 0-1 range; otherwise a ValueError will be raised. If it is a masked array, the mask will be ignored. If the last dimension is 3, the alpha kwarg (defaulting to 1) will be used to fill in the transparency. If the last dimension is 4, the alpha kwarg is ignored; it does not replace the pre-existing alpha. A ValueError will be raised if the third dimension is other than 3 or 4. In either case, if bytes is False (default), the rgba array will be floats in the 0-1 range; if it is True, the returned rgba array will be uint8 in the 0 to 255 range. If norm is False, no normalization of the input data is performed, and it is assumed to be in the range (0-1). | |
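The bytes=True conversion described above can be sketched with NumPy; this is only the assumed scaling of 0-1 floats to 0-255 uint8, not the library's exact rounding behavior:

```python
import numpy as np

rgba_float = np.array([0.0, 0.5, 1.0, 1.0])           # floats in the 0-1 range
rgba_bytes = (rgba_float * 255).round().astype(np.uint8)
assert rgba_bytes.tolist() == [0, 128, 255, 255]
```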
doc_1889 |
A container for the objects defining tick position and format. Attributes
locatormatplotlib.ticker.Locator subclass
Determines the positions of the ticks.
formattermatplotlib.ticker.Formatter subclass
Determines the format of the tick labels. | |
doc_1890 | LineString objects are instantiated using arguments that are either a sequence of coordinates or Point objects. For example, the following are equivalent: >>> ls = LineString((0, 0), (1, 1))
>>> ls = LineString(Point(0, 0), Point(1, 1))
In addition, LineString objects may also be created by passing in a single sequence of coordinate or Point objects: >>> ls = LineString( ((0, 0), (1, 1)) )
>>> ls = LineString( [Point(0, 0), Point(1, 1)] )
Empty LineString objects may be instantiated by passing no arguments or an empty sequence. The following are equivalent: >>> ls = LineString()
>>> ls = LineString([])
closed
Returns whether or not this LineString is closed. | |
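The closed property amounts to checking whether the first and last coordinates coincide; a hedged pure-Python sketch of that test (an illustration, not the GEOS implementation):

```python
def is_closed(coords):
    # A LineString is closed when its first and last coordinates coincide.
    return len(coords) > 1 and coords[0] == coords[-1]

print(is_closed([(0, 0), (1, 1), (1, 0), (0, 0)]))  # True
print(is_closed([(0, 0), (1, 1)]))                  # False
```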
doc_1891 |
Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network. The function is defined as: \(\text{RReLU}(x) = \begin{cases} x & \text{if } x \geq 0 \\ ax & \text{otherwise} \end{cases}\)
where \(a\) is randomly sampled from the uniform distribution \(\mathcal{U}(\text{lower}, \text{upper})\). See: https://arxiv.org/pdf/1505.00853.pdf Parameters
lower – lower bound of the uniform distribution. Default: \(\frac{1}{8}\)
upper – upper bound of the uniform distribution. Default: \(\frac{1}{3}\)
inplace – can optionally do the operation in-place. Default: False
Shape:
Input: \((N, *)\) where * means any number of additional dimensions. Output: \((N, *)\), same shape as the input. Examples: >>> m = nn.RReLU(0.1, 0.3)
>>> input = torch.randn(2)
>>> output = m(input) | |
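The formula itself is simple enough to sketch for a single scalar in plain Python (an illustration of the math, not torch's implementation; at evaluation time torch instead uses the fixed slope (lower + upper) / 2):

```python
import random

def rrelu(x, lower=1/8, upper=1/3):
    """Randomized leaky ReLU on one scalar: non-negative inputs pass
    through; negative inputs are scaled by a slope a ~ U(lower, upper)."""
    a = random.uniform(lower, upper)
    return x if x >= 0 else a * x

print(rrelu(2.0))  # 2.0 (non-negative inputs are unchanged)
v = rrelu(-1.0)    # lands somewhere in [-1/3, -1/8]
```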
doc_1892 | tf.experimental.numpy.int_
tf.experimental.numpy.int64(
*args, **kwargs
)
Character code: 'l'. Canonical name: np.int_. Alias on this platform: np.int64: 64-bit signed integer (-9223372036854775808 to 9223372036854775807). Alias on this platform: np.intp: Signed integer large enough to fit pointer, compatible with C intptr_t. Methods all
all()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. any
any()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmax
argmax()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmin
argmin()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argsort
argsort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. astype
astype()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. byteswap
byteswap()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. choose
choose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. clip
clip()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. compress
compress()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. conj
conj()
conjugate
conjugate()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. copy
copy()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumprod
cumprod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumsum
cumsum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. diagonal
diagonal()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dump
dump()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dumps
dumps()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. fill
fill()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. flatten
flatten()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. getfield
getfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. item
item()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset
itemset()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. max
max()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean
mean()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min
min()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder
newbyteorder()
newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any from the following: 'S' - swap dtype from current to opposite endian '<', 'L'- little endian '>', 'B'- big endian '=', 'N'- native order '|', 'I'- ignore (no change to byte order) Parameters new_order : str, optional Byte order to force; a value from the byte order specifications above. The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of 'B' or 'b' or 'biggish' are valid to specify big-endian. Returns new_dtype : dtype New dtype object with the given change to the byte order. nonzero
nonzero()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. prod
prod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp
ptp()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put
put()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel
ravel()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat
repeat()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. reshape
reshape()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize
resize()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. round
round()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. searchsorted
searchsorted()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield
setfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags
setflags()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sort
sort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. squeeze
squeeze()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. std
std()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sum
sum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes
swapaxes()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. take
take()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tobytes
tobytes()
tofile
tofile()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tolist
tolist()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring
tostring()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace
trace()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. transpose
transpose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. var
var()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view
view()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. __abs__
__abs__()
abs(self) __add__
__add__(
value, /
)
Return self+value. __and__
__and__(
value, /
)
Return self&value. __bool__
__bool__()
self != 0 __eq__
__eq__(
value, /
)
Return self==value. __floordiv__
__floordiv__(
value, /
)
Return self//value. __ge__
__ge__(
value, /
)
Return self>=value. __getitem__
__getitem__(
key, /
)
Return self[key]. __gt__
__gt__(
value, /
)
Return self>value. __invert__
__invert__()
~self __le__
__le__(
value, /
)
Return self<=value. __lt__
__lt__(
value, /
)
Return self<value. __mod__
__mod__(
value, /
)
Return self%value. __mul__
__mul__(
value, /
)
Return self*value. __ne__
__ne__(
value, /
)
Return self!=value. __neg__
__neg__()
-self __or__
__or__(
value, /
)
Return self|value. __pos__
__pos__()
+self __pow__
__pow__(
value, mod, /
)
Return pow(self, value, mod). __radd__
__radd__(
value, /
)
Return value+self. __rand__
__rand__(
value, /
)
Return value&self. __rfloordiv__
__rfloordiv__(
value, /
)
Return value//self. __rmod__
__rmod__(
value, /
)
Return value%self. __rmul__
__rmul__(
value, /
)
Return value*self. __ror__
__ror__(
value, /
)
Return value|self. __rpow__
__rpow__(
value, mod, /
)
Return pow(value, self, mod). __rsub__
__rsub__(
value, /
)
Return value-self. __rtruediv__
__rtruediv__(
value, /
)
Return value/self. __rxor__
__rxor__(
value, /
)
Return value^self. __sub__
__sub__(
value, /
)
Return self-value. __truediv__
__truediv__(
value, /
)
Return self/value. __xor__
__xor__(
value, /
)
Return self^value.
Class Variables
T
base
data
denominator
dtype
flags
flat
imag
itemsize
nbytes
ndim
numerator
real
shape
size
strides | |
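Of the methods above, newbyteorder is the one with real behaviour worth demonstrating; a short NumPy example of the order codes it accepts (assuming NumPy is installed):

```python
import numpy as np

little = np.dtype('<i8')                    # explicitly little-endian int64
big = little.newbyteorder('S')              # 'S' swaps to the opposite order
assert big == np.dtype('>i8')
assert big.newbyteorder('S') == little      # swapping twice is a round trip
assert little.newbyteorder('|') == little   # '|' ignores (no change)
assert little.newbyteorder('B') == big      # 'B' forces big-endian
```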
doc_1893 |
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification. Returns
self | |
doc_1894 | Repeatedly issue a prompt, accept input, parse an initial prefix off the received input, and dispatch to action methods, passing them the remainder of the line as argument. The optional argument is a banner or intro string to be issued before the first prompt (this overrides the intro class attribute). If the readline module is loaded, input will automatically inherit bash-like history-list editing (e.g. Control-P scrolls back to the last command, Control-N forward to the next one, Control-F moves the cursor to the right non-destructively, Control-B moves the cursor to the left non-destructively, etc.). An end-of-file on input is passed back as the string 'EOF'. An interpreter instance will recognize a command name foo if and only if it has a method do_foo(). As a special case, a line beginning with the character '?' is dispatched to the method do_help(). As another special case, a line beginning with the character '!' is dispatched to the method do_shell() (if such a method is defined). This method will return when the postcmd() method returns a true value. The stop argument to postcmd() is the return value from the command’s corresponding do_*() method. If completion is enabled, completing commands will be done automatically, and completing of commands args is done by calling complete_foo() with arguments text, line, begidx, and endidx. text is the string prefix we are attempting to match: all returned matches must begin with it. line is the current input line with leading whitespace removed, begidx and endidx are the beginning and ending indexes of the prefix text, which could be used to provide different completion depending upon which position the argument is in. All subclasses of Cmd inherit a predefined do_help(). This method, called with an argument 'bar', invokes the corresponding method help_bar(), and if that is not present, prints the docstring of do_bar(), if available. 
With no argument, do_help() lists all available help topics (that is, all commands with corresponding help_*() methods or commands that have docstrings), and also lists any undocumented commands. | |
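A minimal Cmd subclass showing the do_* dispatch convention; onecmd() is used here to dispatch one line without entering the interactive loop, and the command name greet is an invented example:

```python
import cmd
import io

class DemoShell(cmd.Cmd):
    prompt = "(demo) "

    def do_greet(self, arg):
        """Greet NAME (this docstring is what the built-in help shows)."""
        print(f"hello {arg or 'world'}", file=self.stdout)

    def do_EOF(self, arg):
        return True  # a true return value from postcmd/do_* stops cmdloop()

buf = io.StringIO()
shell = DemoShell(stdout=buf)
shell.onecmd("greet Ada")      # "greet" is dispatched to do_greet("Ada")
print(buf.getvalue().strip())  # hello Ada
```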
doc_1895 |
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | |
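The definition above can be checked by hand with NumPy; the y_true/y_pred values here are invented for illustration:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
r2 = 1 - u / v
print(round(r2, 4))  # 0.9486
```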
doc_1896 |
IEEE 754 floating point representation of (positive) infinity. Use inf because Inf, Infinity, PINF and infty are aliases for inf. For more details, see inf. See Also inf | |
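A quick NumPy check of the properties of np.inf (assuming NumPy is installed):

```python
import numpy as np

assert np.inf == float('inf')              # same IEEE 754 positive infinity
assert np.isinf(np.inf)
assert np.inf > np.finfo(np.float64).max   # larger than any finite float64
print(np.inf)  # inf
```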
doc_1897 |
Apply dimensionality reduction to X. X is projected on the first principal components previously extracted from a training set. Parameters
Xarray-like, shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features. Returns
X_newarray-like, shape (n_samples, n_components)
Examples >>> import numpy as np
>>> from sklearn.decomposition import IncrementalPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> ipca = IncrementalPCA(n_components=2, batch_size=3)
>>> ipca.fit(X)
IncrementalPCA(batch_size=3, n_components=2)
>>> ipca.transform(X) | |
doc_1898 |
Return whether the artist is animated. | |
doc_1899 | Whether the OpenSSL library has built-in support for the SSL 3.0 protocol. New in version 3.7. |