doc_24300 | Perform multiple patches in a single call. It takes the object to be patched (either as an object or a string to fetch the object by importing) and keyword arguments for the patches:
with patch.multiple(settings, FIRST_PATCH='one', SECOND_PATCH='two'):
    ...
Use DEFAULT as the value if you want patch.multiple() to create mocks for you. In this case the created mocks are passed into a decorated function by keyword, and a dictionary is returned when patch.multiple() is used as a context manager. patch.multiple() can be used as a decorator, class decorator or a context manager. The arguments spec, spec_set, create, autospec and new_callable have the same meaning as for patch(). These arguments will be applied to all patches done by patch.multiple(). When used as a class decorator patch.multiple() honours patch.TEST_PREFIX for choosing which methods to wrap. | |
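A minimal runnable sketch of both forms described above (the `Settings` class here is a made-up stand-in for a real settings object, not part of any library):

```python
from unittest import mock
from unittest.mock import patch, DEFAULT

# Hypothetical stand-in for a settings object; purely illustrative.
class Settings:
    FIRST_PATCH = "original-one"
    SECOND_PATCH = "original-two"

# Explicit replacement values:
with patch.multiple(Settings, FIRST_PATCH="one", SECOND_PATCH="two"):
    assert Settings.FIRST_PATCH == "one"
    assert Settings.SECOND_PATCH == "two"

# The originals are restored when the context exits:
assert Settings.FIRST_PATCH == "original-one"

# DEFAULT asks patch.multiple() to create the mocks; as a context
# manager it then yields a dict mapping names to the created mocks:
with patch.multiple(Settings, FIRST_PATCH=DEFAULT, SECOND_PATCH=DEFAULT) as mocks:
    assert isinstance(mocks["FIRST_PATCH"], mock.MagicMock)
```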
doc_24301 |
Alias for set_linestyle. | |
doc_24302 |
Applies a 3D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N, C_{in}, D, H, W) and output (N, C_{out}, D_{out}, H_{out}, W_{out}) can be precisely described as:
out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k)
where \star is the valid 3D cross-correlation operator. This module supports TensorFloat32.
stride controls the stride for the cross-correlation.
padding controls the amount of implicit padding on both sides for padding number of points for each dimension.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example:
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \frac{\text{out\_channels}}{\text{in\_channels}}).
The parameters kernel_size, stride, padding, dilation can either be:
a single int – in which case the same value is used for the depth, height and width dimension
a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Note: When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a "depthwise convolution". In other words, for an input of size (N, C_{in}, D_{in}, H_{in}, W_{in}), a depthwise convolution with a depthwise multiplier K can be performed with the arguments (C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times K, ..., \text{groups}=C_\text{in}).
Note: In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to all three sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
Input: (N, C_{in}, D_{in}, H_{in}, W_{in})
Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) where
D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor
H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor
W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor
Variables
~Conv3d.weight (Tensor) – the learnable weights of the module of shape (\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size}[0], \text{kernel\_size}[1], \text{kernel\_size}[2]). The values of these weights are sampled from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{\text{groups}}{C_\text{in} \cdot \prod_{i=0}^{2}\text{kernel\_size}[i]}
~Conv3d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{\text{groups}}{C_\text{in} \cdot \prod_{i=0}^{2}\text{kernel\_size}[i]}
Examples: >>> # With square kernels and equal stride
>>> m = nn.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input) | |
doc_24303 |
Convert an image to 16-bit signed integer format. Parameters
image : ndarray
Input image.
force_copy : bool, optional
Force a copy of the data, irrespective of its current dtype. Returns
out : ndarray of int16
Output image. Notes The values are scaled between -32768 and 32767. If the input data-type is positive-only (e.g., uint8), then the output image will still only have positive values. | |
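The positive-only scaling described above can be sketched in plain NumPy. This is an illustrative approximation for the uint8 case only, not skimage's actual implementation:

```python
import numpy as np

def uint8_to_int16_sketch(image: np.ndarray) -> np.ndarray:
    """Rough sketch of img_as_int's behavior for uint8 input:
    scale [0, 255] onto [0, 32767], so the output stays positive."""
    return (image.astype(np.int32) * 32767 // 255).astype(np.int16)

img = np.array([0, 128, 255], dtype=np.uint8)
out = uint8_to_int16_sketch(img)  # values 0, 16447, 32767
```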
doc_24304 |
Parameters
h : list of axes_size
sizes for horizontal division | |
doc_24305 | Return the current local date and time. If optional argument tz is None or not specified, this is like today(), but, if possible, supplies more precision than can be gotten from going through a time.time() timestamp (for example, this may be possible on platforms supplying the C gettimeofday() function). If tz is not None, it must be an instance of a tzinfo subclass, and the current date and time are converted to tz’s time zone. This function is preferred over today() and utcnow(). | |
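A short sketch of both the naive and the tz-aware forms (the UTC+9 offset is an arbitrary example zone):

```python
from datetime import datetime, timezone, timedelta

# Naive local time: no tzinfo attached.
local = datetime.now()

# Aware time converted to an explicit zone (here an arbitrary UTC+9).
aware = datetime.now(tz=timezone(timedelta(hours=9)))

assert local.tzinfo is None
assert aware.tzinfo is not None
assert aware.utcoffset() == timedelta(hours=9)
```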
doc_24306 |
Return True if sources contains C++ files | |
doc_24307 | A secret key that will be used for securely signing the session cookie and can be used for any other security related needs by extensions or your application. It should be a long random bytes or str. For example, copy the output of this to your config: $ python -c 'import os; print(os.urandom(16))'
b'_5#y2L"F4Q8z\n\xec]/'
Do not reveal the secret key when posting questions or committing code. Default: None | |
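On Python 3.6+ the stdlib secrets module is a common alternative to the os.urandom one-liner above for generating such a key:

```python
import secrets

# 16 random bytes rendered as 32 hex characters; suitable as a
# long random SECRET_KEY value.
key = secrets.token_hex(16)
print(len(key))  # 32
```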
doc_24308 | Returns the event type for the record. Override this if you want to specify your own types. This version does a mapping using the handler’s typemap attribute, which is set up in __init__() to a dictionary which contains mappings for DEBUG, INFO, WARNING, ERROR and CRITICAL. If you are using your own levels, you will either need to override this method or place a suitable dictionary in the handler’s typemap attribute. | |
doc_24309 | Returns either True or False, depending on whether the user’s session cookie will expire when the user’s web browser is closed. | |
doc_24310 | tkinter.messagebox.showerror(title=None, message=None, **options) | |
doc_24311 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha array-like or scalar or None
animated bool
antialiased or aa or antialiaseds bool or list of bools
array array-like or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clim (vmin: float, vmax: float)
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
cmap Colormap or str or None
color color or list of rgba tuples
edgecolor or ec or edgecolors color or list of colors or 'face'
facecolor or facecolors or fc color or list of colors
figure Figure
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or dashes or linestyles or ls str or tuple or list thereof
linewidth or linewidths or lw float or list of floats
norm Normalize or None
offset_transform Transform
offsets (N, 2) or (2,) array-like
path_effects AbstractPathEffect
paths list of array-like
picker None or bool or float or callable
pickradius float
rasterized bool
sizes ndarray or None
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
urls list of str or None
verts list of array-like
verts_and_codes unknown
visible bool
zorder float | |
doc_24312 | Return the glyph cache size. get_cache_size() -> long. See pygame.freetype.init(). | |
doc_24313 |
Return the array of values, that are mapped to colors. The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the array. | |
doc_24314 |
Generates a star-shaped structuring element. A star has 8 vertices and is an overlap of a square of size 2*a + 1 with its 45 degree rotated version. The slanted sides are 45 or 135 degrees to the horizontal axis. Parameters
a : int
Parameter deciding the size of the star structural element. The side of the square array returned is 2*a + 1 + 2*floor(a / 2). Returns
selem : ndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtype : data-type
The data type of the structuring element. | |
doc_24315 | Return the size, in bytes, of path. Raise OSError if the file does not exist or is inaccessible. Changed in version 3.6: Accepts a path-like object. | |
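A quick runnable illustration of both behaviors (the temporary file is created just for the example):

```python
import os
import tempfile

# Write a small file and confirm getsize() reports its length in bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

assert os.path.getsize(path) == 5

# A missing path raises OSError (FileNotFoundError is a subclass):
try:
    os.path.getsize(path + ".does-not-exist")
except OSError:
    pass

os.remove(path)
```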
doc_24316 | Pretend count lines have been changed, starting with line start. If changed is supplied, it specifies whether the affected lines are marked as having been changed (changed=True) or unchanged (changed=False). | |
doc_24317 | This is the fixed number of times the underlying field will be used. | |
doc_24318 | Unix V7 synonym for S_IWUSR. | |
doc_24319 | True if the address is otherwise IETF reserved. | |
doc_24320 |
Resets the starting point in tracking maximum GPU memory managed by the caching allocator for a given device. See max_memory_cached() for details. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Warning This function now calls reset_peak_memory_stats(), which resets /all/ peak memory stats. Note See Memory management for more details about GPU memory management. | |
doc_24321 | See Migration guide for more details. tf.compat.v1.raw_ops.ConjugateTranspose
tf.raw_ops.ConjugateTranspose(
x, perm, name=None
)
The output y has the same rank as x. The shapes of x and y satisfy: y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1] y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])
Args
x A Tensor.
perm A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
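The contract above can be sketched with NumPy (an illustration of the semantics, not TensorFlow code):

```python
import numpy as np

# ConjugateTranspose semantics: y = conj(transpose(x, perm)),
# so y.shape[i] == x.shape[perm[i]].
x = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])
perm = [1, 0]

y = np.conj(np.transpose(x, perm))

assert y.shape == (x.shape[perm[0]], x.shape[perm[1]])
assert y[0, 1] == np.conj(x[1, 0])
```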
doc_24322 |
Repeat elements of an array. Refer to numpy.repeat for full documentation. See also numpy.repeat
equivalent function | |
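A minimal illustration of the method form delegating to numpy.repeat:

```python
import numpy as np

a = np.array([1, 2, 3])

# The ndarray method and the free function agree:
assert a.repeat(2).tolist() == [1, 1, 2, 2, 3, 3]
assert np.repeat(a, 2).tolist() == a.repeat(2).tolist()

# Per-element repeat counts are also accepted:
assert a.repeat([1, 0, 2]).tolist() == [1, 3, 3]
```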
doc_24323 | Return the pathname of the current directory on the server. | |
doc_24324 | If distinct=True, Sum returns the sum of unique values. This is the SQL equivalent of SUM(DISTINCT <field>). The default value is False. | |
doc_24325 | Replace a header. Replace the first header found in the message that matches _name, retaining header order and field name case of the original header. If no matching header is found, raise a KeyError. | |
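A small sketch using the stdlib email API (addresses are placeholders):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "draft"
msg["To"] = "alice@example.com"

# Replaces the first matching header in place, preserving order:
msg.replace_header("Subject", "final")
assert msg["Subject"] == "final"

# A missing header name raises KeyError:
try:
    msg.replace_header("Cc", "bob@example.com")
except KeyError:
    pass
```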
doc_24326 | Returns the logger used by multiprocessing. If necessary, a new one will be created. When first created the logger has level logging.NOTSET and no default handler. Messages sent to this logger will not by default propagate to the root logger. Note that on Windows child processes will only inherit the level of the parent process’s logger – any other customization of the logger will not be inherited. | |
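A quick check of the behavior described above:

```python
import multiprocessing

logger = multiprocessing.get_logger()

# The logger is named after the package and, by default, does not
# propagate its records to the root logger.
assert logger.name == "multiprocessing"
assert logger.propagate == 0
```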
doc_24327 |
Return the clipbox. | |
doc_24328 |
Add a set of hyperparameters to be compared in TensorBoard. Parameters
hparam_dict (dict) – Each key-value pair in the dictionary is the name of the hyperparameter and its corresponding value. The type of the value can be one of bool, string, float, int, or None.
metric_dict (dict) – Each key-value pair in the dictionary is the name of the metric and its corresponding value. Note that the key used here should be unique in the tensorboard record. Otherwise the value you added by add_scalar will be displayed in the hparam plugin. In most cases, this is unwanted.
hparam_domain_discrete – (Optional[Dict[str, List[Any]]]) A dictionary that contains names of the hyperparameters and all discrete values they can hold
run_name (str) – Name of the run, to be included as part of the logdir. If unspecified, will use current timestamp. Examples: from torch.utils.tensorboard import SummaryWriter
with SummaryWriter() as w:
    for i in range(5):
        w.add_hparams({'lr': 0.1*i, 'bsize': i},
                      {'hparam/accuracy': 10*i, 'hparam/loss': 10*i})
Expected result: | |
doc_24329 | Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive). The shape of the tensor is defined by the variable argument size. Note With the global dtype default (torch.float32), this function returns a tensor with dtype torch.int64. Parameters
low (int, optional) – Lowest integer to be drawn from the distribution. Default: 0.
high (int) – One above the highest integer to be drawn from the distribution.
size (tuple) – a tuple defining the shape of the output tensor. Keyword Arguments
generator (torch.Generator, optional) – a pseudorandom number generator for sampling
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.randint(3, 5, (3,))
tensor([4, 3, 4])
>>> torch.randint(10, (2, 2))
tensor([[0, 2],
[5, 5]])
>>> torch.randint(3, 10, (2, 2))
tensor([[4, 5],
[6, 7]]) | |
doc_24330 | Create a view of an existing torch.Tensor input with specified size, stride and storage_offset. Warning More than one element of a created tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first. Many PyTorch functions, which return a view of a tensor, are internally implemented with this function. Those functions, like torch.Tensor.expand(), are easier to read and are therefore more advisable to use. Parameters
input (Tensor) – the input tensor.
size (tuple or ints) – the shape of the output tensor
stride (tuple or ints) – the stride of the output tensor
storage_offset (int, optional) – the offset in the underlying storage of the output tensor Example: >>> x = torch.randn(3, 3)
>>> x
tensor([[ 0.9039, 0.6291, 1.0795],
[ 0.1586, 2.1939, -0.4900],
[-0.1909, -0.7503, 1.9355]])
>>> t = torch.as_strided(x, (2, 2), (1, 2))
>>> t
tensor([[0.9039, 1.0795],
[0.6291, 0.1586]])
>>> t = torch.as_strided(x, (2, 2), (1, 2), 1)
>>> t
tensor([[0.6291, 0.1586],
[1.0795, 2.1939]]) | |
doc_24331 |
Select values at particular time of day (e.g., 9:30AM). Parameters
time : datetime.time or str
axis : {0 or 'index', 1 or 'columns'}, default 0
Returns
Series or DataFrame
Raises
TypeError
If the index is not a DatetimeIndex See also between_time
Select values between particular times of the day. first
Select initial periods of time series based on a date offset. last
Select final periods of time series based on a date offset. DatetimeIndex.indexer_at_time
Get just the index locations for values at particular time of the day. Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='12H')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-09 12:00:00 2
2018-04-10 00:00:00 3
2018-04-10 12:00:00 4
>>> ts.at_time('12:00')
A
2018-04-09 12:00:00 2
2018-04-10 12:00:00 4 | |
doc_24332 | Returns True to activate the filtering in get_post_parameters() and get_traceback_frame_variables(). By default the filter is active if DEBUG is False. Note that sensitive request.META values are always filtered along with sensitive setting values, as described in the DEBUG documentation. | |
doc_24333 | Refer to the corresponding method documentation in IPv4Address. New in version 3.9. | |
doc_24334 |
Append values to the end of an array. Parameters
arr : array_like
Values are appended to a copy of this array.
values : array_like
These values are appended to a copy of arr. It must be of the correct shape (the same shape as arr, excluding axis). If axis is not specified, values can be any shape and will be flattened before use.
axis : int, optional
The axis along which values are appended. If axis is not given, both arr and values are flattened before use. Returns
append : ndarray
A copy of arr with values appended to axis. Note that append does not occur in-place: a new array is allocated and filled. If axis is None, out is a flattened array. See also insert
Insert elements into an array. delete
Delete elements from an array. Examples >>> np.append([1, 2, 3], [[4, 5, 6], [7, 8, 9]])
array([1, 2, 3, ..., 7, 8, 9])
When axis is specified, values must have the correct shape. >>> np.append([[1, 2, 3], [4, 5, 6]], [[7, 8, 9]], axis=0)
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
>>> np.append([[1, 2, 3], [4, 5, 6]], [7, 8, 9], axis=0)
Traceback (most recent call last):
...
ValueError: all the input arrays must have same number of dimensions, but
the array at index 0 has 2 dimension(s) and the array at index 1 has 1
dimension(s) | |
doc_24335 | Return a k sized list of elements chosen from the population with replacement. If the population is empty, raises IndexError. If a weights sequence is specified, selections are made according to the relative weights. Alternatively, if a cum_weights sequence is given, the selections are made according to the cumulative weights (perhaps computed using itertools.accumulate()). For example, the relative weights [10, 5, 30, 5] are equivalent to the cumulative weights [10, 15, 45, 50]. Internally, the relative weights are converted to cumulative weights before making selections, so supplying the cumulative weights saves work. If neither weights nor cum_weights are specified, selections are made with equal probability. If a weights sequence is supplied, it must be the same length as the population sequence. It is a TypeError to specify both weights and cum_weights. The weights or cum_weights can use any numeric type that interoperates with the float values returned by random() (that includes integers, floats, and fractions but excludes decimals). Behavior is undefined if any weight is negative. A ValueError is raised if all weights are zero. For a given seed, the choices() function with equal weighting typically produces a different sequence than repeated calls to choice(). The algorithm used by choices() uses floating point arithmetic for internal consistency and speed. The algorithm used by choice() defaults to integer arithmetic with repeated selections to avoid small biases from round-off error. New in version 3.6. Changed in version 3.9: Raises a ValueError if all weights are zero. | |
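The weights/cum_weights equivalence described above can be checked directly; a seeded Random instance makes the two runs comparable:

```python
import random

population = ["red", "green", "blue"]

# Relative weights [10, 5, 30] are equivalent to cumulative [10, 15, 45]:
rng = random.Random(42)
a = rng.choices(population, weights=[10, 5, 30], k=5)

rng = random.Random(42)
b = rng.choices(population, cum_weights=[10, 15, 45], k=5)

assert a == b                 # same seed, equivalent weights, same picks
assert len(a) == 5            # always returns exactly k elements
assert set(a) <= set(population)
```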
doc_24336 | tf.experimental.numpy.broadcast_arrays(
*args, **kwargs
)
Unsupported arguments: subok. See the NumPy documentation for numpy.broadcast_arrays. | |
doc_24337 |
[Deprecated] Run the matplotlib test suite. Notes Deprecated since version 3.5.
matplotlib.testing Helper functions for testing. matplotlib.testing.set_font_settings_for_testing()
matplotlib.testing.set_reproducibility_for_testing()
matplotlib.testing.setup()
matplotlib.testing.compare Utilities for comparing image results. matplotlib.testing.compare.calculate_rms(expected_image, actual_image)
Calculate the per-pixel errors, then compute the root mean square error.
matplotlib.testing.compare.comparable_formats()
Return the list of file formats that compare_images can compare on this system. Returns
list of str
E.g. ['png', 'pdf', 'svg', 'eps'].
matplotlib.testing.compare.compare_images(expected, actual, tol, in_decorator=False)
Compare two "image" files checking differences within a tolerance. The two given filenames may point to files which are convertible to PNG via the converter dictionary. The underlying RMS is calculated with the calculate_rms function. Parameters
expected : str
The filename of the expected image.
actual : str
The filename of the actual image.
tol : float
The tolerance (a color value difference, where 255 is the maximal difference). The test fails if the average pixel difference is greater than this value.
in_decorator : bool
Determines the output format. If called from the image_comparison decorator, this should be True. (default=False) Returns
None or dict or str
Return None if the images are equal within the given tolerance. If the images differ, the return value depends on in_decorator. If in_decorator is true, a dict with the following entries is returned:
rms: The RMS of the image difference.
expected: The filename of the expected image.
actual: The filename of the actual image.
diff_image: The filename of the difference image.
tol: The comparison tolerance. Otherwise, a human-readable multi-line string representation of this information is returned. Examples img1 = "./baseline/plot.png"
img2 = "./output/plot.png"
compare_images(img1, img2, 0.001)
matplotlib.testing.decorators class matplotlib.testing.decorators.CleanupTestCase(methodName='runTest')
Bases: unittest.case.TestCase A wrapper for unittest.TestCase that includes cleanup operations. Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name. classmethod setUpClass()
Hook method for setting up class fixture before running tests in the class.
classmethod tearDownClass()
Hook method for deconstructing the class fixture after running all tests in the class.
matplotlib.testing.decorators.check_figures_equal(*, extensions=('png', 'pdf', 'svg'), tol=0)
Decorator for test cases that generate and compare two figures. The decorated function must take two keyword arguments, fig_test and fig_ref, and draw the test and reference images on them. After the function returns, the figures are saved and compared. This decorator should be preferred over image_comparison when possible in order to keep the size of the test suite from ballooning. Parameters
extensions : list, default: ["png", "pdf", "svg"]
The extensions to test.
tol : float
The RMS threshold above which the test is considered failed. Raises
RuntimeError
If any new figures are created (and not subsequently closed) inside the test function. Examples Check that calling Axes.plot with a single argument plots it against [0, 1, 2, ...]: @check_figures_equal()
def test_plot(fig_test, fig_ref):
fig_test.subplots().plot([1, 3, 5])
fig_ref.subplots().plot([0, 1, 2], [1, 3, 5])
matplotlib.testing.decorators.check_freetype_version(ver)
matplotlib.testing.decorators.cleanup(style=None)
A decorator to ensure that any global state is reset before running a test. Parameters
style : str, dict, or list, optional
The style(s) to apply. Defaults to ["classic", "_classic_test_patch"].
matplotlib.testing.decorators.image_comparison(baseline_images, extensions=None, tol=0, freetype_version=None, remove_text=False, savefig_kwarg=None, style=('classic', '_classic_test_patch'))
Compare images generated by the test with those specified in baseline_images, which must correspond, else an ImageComparisonFailure exception will be raised. Parameters
baseline_images : list or None
A list of strings specifying the names of the images generated by calls to Figure.savefig. If None, the test function must use the baseline_images fixture, either as a parameter or with pytest.mark.usefixtures. This value is only allowed when using pytest.
extensions : None or list of str
The list of extensions to test, e.g. ['png', 'pdf']. If None, defaults to all supported extensions: png, pdf, and svg. When testing a single extension, it can be directly included in the names passed to baseline_images. In that case, extensions must not be set. In order to keep the size of the test suite from ballooning, we only include the svg or pdf outputs if the test is explicitly exercising a feature dependent on that backend (see also the check_figures_equal decorator for that purpose).
tol : float, default: 0
The RMS threshold above which the test is considered failed. Due to expected small differences in floating-point calculations, on 32-bit systems an additional 0.06 is added to this threshold.
freetype_version : str or tuple
The expected freetype version or range of versions for this test to pass.
remove_text : bool
Remove the title and tick text from the figure before comparison. This is useful to make the baseline images independent of variations in text rendering between different versions of FreeType. This does not remove other, more deliberate, text, such as legends and annotations.
savefig_kwarg : dict
Optional arguments that are passed to the savefig method.
style : str, dict, or list
The optional style(s) to apply to the image test. The test itself can also apply additional styles if desired. Defaults to ["classic", "_classic_test_patch"].
matplotlib.testing.decorators.remove_ticks_and_titles(figure)
matplotlib.testing.exceptions exception matplotlib.testing.exceptions.ImageComparisonFailure
Bases: AssertionError Raise this exception to mark a test as a comparison between two images. | |
doc_24338 | Return a dictionary mapping module-level class names to class descriptors. If possible, descriptors for imported base classes are included. Parameter module is a string with the name of the module to read; it may be the name of a module within a package. If given, path is a sequence of directory paths prepended to sys.path, which is used to locate the module source code. This function is the original interface and is only kept for back compatibility. It returns a filtered version of the following. | |
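A small runnable example against a pure-Python stdlib module:

```python
import pyclbr

# Read class descriptors for the stdlib queue module.
descriptors = pyclbr.readmodule("queue")

assert "Queue" in descriptors
q = descriptors["Queue"]
assert q.module == "queue"
assert isinstance(q.lineno, int)
```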
doc_24339 | The Driver class is used internally to wrap an OGR DataSource driver.
driver_count
Returns the number of OGR vector drivers currently registered. | |
doc_24340 |
Fit Kernel Ridge regression model. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data. If kernel == "precomputed" this is instead a precomputed kernel matrix, of shape (n_samples, n_samples).
y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
sample_weight : float or array-like of shape (n_samples,), default=None
Individual weights for each sample, ignored if None is passed. Returns
self : returns an instance of self. | |
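The fit it performs can be sketched in plain NumPy as the closed-form dual solution with a linear kernel. This is an illustrative sketch of the math, not scikit-learn's implementation:

```python
import numpy as np

def kernel_ridge_fit_predict(X, y, alpha=1.0):
    """Kernel ridge with a linear kernel K = X @ X.T:
    solve (K + alpha * I) c = y, then predict on the training
    points with K @ c."""
    K = X @ X.T
    c = np.linalg.solve(K + alpha * np.eye(len(X)), y)
    return K @ c

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5])  # noiseless linear target

# With a tiny regularizer, training predictions nearly interpolate y.
pred = kernel_ridge_fit_predict(X, y, alpha=1e-6)
assert np.allclose(pred, y, atol=1e-3)
```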
doc_24341 | A namespace of a class. This class inherits SymbolTable.
get_methods()
Return a tuple containing the names of methods declared in the class. | |
doc_24342 | Decorate a function as responder that accepts the request as the last argument. This works like the responder() decorator but the function is passed the request object as the last argument and the request object will be closed automatically: @Request.application
def my_wsgi_app(request):
    return Response('Hello World!')
As of Werkzeug 0.14 HTTP exceptions are automatically caught and converted to responses instead of failing. Parameters
f (Callable[[Request], WSGIApplication]) – the WSGI callable to decorate Returns
a new WSGI callable Return type
WSGIApplication | |
doc_24343 |
Returns the current random seed of the current GPU. Warning This function eagerly initializes CUDA. | |
doc_24344 | The input stream from which this shlex instance is reading characters. | |
doc_24345 |
Create a 3D filled contour plot. Parameters
X, Y, Z : array-like
Input data. See contourf for acceptable data shapes.
zdir : {'x', 'y', 'z'}, default: 'z'
The direction to use.
offset : float, optional
If specified, plot a projection of the contour lines at this position in a plane normal to zdir.
data : indexable object, optional
If given, all parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception). *args, **kwargs
Other arguments are forwarded to matplotlib.axes.Axes.contourf. Returns
matplotlib.contour.QuadContourSet | |
doc_24346 | Temporary holder object for registering a blueprint with the application. An instance of this class is created by the make_setup_state() method and later passed to all register callback functions. Parameters
blueprint (Blueprint) –
app (Flask) –
options (Any) –
first_registration (bool) – Return type
None
add_url_rule(rule, endpoint=None, view_func=None, **options)
A helper method to register a rule (and optionally a view function) to the application. The endpoint is automatically prefixed with the blueprint’s name. Parameters
rule (str) –
endpoint (Optional[str]) –
view_func (Optional[Callable]) –
options (Any) – Return type
None
app
a reference to the current application
blueprint
a reference to the blueprint that created this setup state.
first_registration
as blueprints can be registered multiple times with the application and not everything wants to be registered multiple times on it, this attribute can be used to figure out if the blueprint was registered in the past already.
options
a dictionary with all options that were passed to the register_blueprint() method.
subdomain
The subdomain that the blueprint should be active for, None otherwise.
url_defaults
A dictionary with URL defaults that is added to each and every URL that was defined with the blueprint.
url_prefix
The prefix that should be used for all URLs defined on the blueprint. | |
doc_24347 |
Scalar method identical to the corresponding array attribute. Please see ndarray.squeeze. | |
doc_24348 |
Tuple of array dimensions. The shape property is usually used to get the current shape of an array, but may also be used to reshape the array in-place by assigning a tuple of array dimensions to it. As with numpy.reshape, one of the new shape dimensions can be -1, in which case its value is inferred from the size of the array and the remaining dimensions. Reshaping an array in-place will fail if a copy is required. See also numpy.reshape
similar function ndarray.reshape
similar method Examples >>> x = np.array([1, 2, 3, 4])
>>> x.shape
(4,)
>>> y = np.zeros((2, 3, 4))
>>> y.shape
(2, 3, 4)
>>> y.shape = (3, 8)
>>> y
array([[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.]])
>>> y.shape = (3, 6)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: total size of new array must be unchanged
>>> np.zeros((4,2))[::2].shape = (-1,)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: Incompatible shape for in-place modification. Use
`.reshape()` to make a copy with the desired shape. | |
doc_24349 | Returns a list of all objects tracked by the collector, excluding the list returned. If generation is not None, return only the objects tracked by the collector that are in that generation. Changed in version 3.8: New generation parameter. Raises an auditing event gc.get_objects with argument generation. | |
doc_24350 | See Migration guide for more details. tf.compat.v1.raw_ops.MaxPoolGradGradWithArgmax
tf.raw_ops.MaxPoolGradGradWithArgmax(
input, grad, argmax, ksize, strides, padding, include_batch_in_index=False,
name=None
)
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. The original input.
grad A Tensor. Must have the same type as input. 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the input of max_pool.
argmax A Tensor. Must be one of the following types: int32, int64. The indices of the maximum values chosen for each output of max_pool.
ksize A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.
strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor.
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
include_batch_in_index An optional bool. Defaults to False. Whether to include batch dimension in flattened index of argmax.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
doc_24351 | See Migration guide for more details. tf.compat.v1.raw_ops.EagerPyFunc
tf.raw_ops.EagerPyFunc(
input, token, Tout, is_async=False, name=None
)
The semantics of the input, output, and attributes are the same as those for PyFunc.
Args
input A list of Tensor objects.
token A string.
Tout A list of tf.DTypes.
is_async An optional bool. Defaults to False.
name A name for the operation (optional).
Returns A list of Tensor objects of type Tout. | |
doc_24352 | See Migration guide for more details. tf.compat.v1.raw_ops.LookupTableExport
tf.raw_ops.LookupTableExport(
table_handle, Tkeys, Tvalues, name=None
)
Args
table_handle A Tensor of type mutable string. Handle to the table.
Tkeys A tf.DType.
Tvalues A tf.DType.
name A name for the operation (optional).
Returns A tuple of Tensor objects (keys, values). keys A Tensor of type Tkeys.
values A Tensor of type Tvalues. | |
doc_24353 | Start debugging with a Bdb instance from caller’s frame. | |
doc_24354 |
Returns
float
Always returns 1. | |
doc_24355 | See torch.orgqr() | |
doc_24356 |
Bases: matplotlib.offsetbox.AnchoredOffsetbox An anchored container with a fixed size and fillable DrawingArea. Artists added to the drawing_area will have their coordinates interpreted as pixels. Any transformations set on the artists will be overridden. Parameters
width, heightfloat
width and height of the container, in pixels.
xdescent, ydescentfloat
descent of the container in the x- and y- direction, in pixels.
locstr
Location of this artist. Valid locations are 'upper left', 'upper center', 'upper right', 'center left', 'center', 'center right', 'lower left', 'lower center', 'lower right'. For backward compatibility, numeric values are accepted as well. See the parameter loc of Legend for details.
padfloat, default: 0.4
Padding around the child objects, in fraction of the font size.
borderpadfloat, default: 0.5
Border padding, in fraction of the font size.
propmatplotlib.font_manager.FontProperties, optional
Font property used as a reference for paddings.
frameonbool, default: True
If True, draw a box around this artist. **kwargs
Keyword arguments forwarded to AnchoredOffsetbox. Examples To display blue and red circles of different sizes in the upper right of an axes ax: >>> ada = AnchoredDrawingArea(20, 20, 0, 0,
... loc='upper right', frameon=False)
>>> ada.drawing_area.add_artist(Circle((10, 10), 10, fc="b"))
>>> ada.drawing_area.add_artist(Circle((30, 10), 5, fc="r"))
>>> ax.add_artist(ada)
Attributes
drawing_areamatplotlib.offsetbox.DrawingArea
A container for artists to display. set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, bbox_to_anchor=<UNSET>, child=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, height=<UNSET>, in_layout=<UNSET>, label=<UNSET>, offset=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
bbox_to_anchor unknown
child unknown
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
height float
in_layout bool
label object
offset (float, float) or callable
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
width float
zorder float
| |
doc_24357 |
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns
a dictionary containing a whole state of the module Return type
dict Example: >>> module.state_dict().keys()
['bias', 'weight'] | |
doc_24358 | See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_bool, tf.compat.v1.app.flags.DEFINE_boolean, tf.compat.v1.flags.DEFINE_boolean
tf.compat.v1.flags.DEFINE_bool(
name, default, help, flag_values=_flagvalues.FLAGS, module_name=None, **args
)
Such a boolean flag does not take an argument. If a user wants to specify a false value explicitly, the long option beginning with 'no' must be used: i.e. --noflag. This flag will have a value of None, True or False. None is possible if default=None and the user does not specify the flag on the command line.
Args
name str, the flag name.
default bool|str|None, the default value of the flag.
help str, the help message.
flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden.
module_name str, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call.
**args dict, the extra keyword args that are passed to Flag init.
Returns a handle to defined flag. | |
doc_24359 |
Return the marker face color. See also set_markerfacecolor. | |
doc_24360 |
Set a.flat[n] = values[n] for all n in indices. Refer to numpy.put for full documentation. See also numpy.put
equivalent function | |
doc_24361 | Checks for an ASCII decimal digit, '0' through '9'. This is equivalent to c in string.digits. | |
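This appears to describe curses.ascii.isdigit (an assumption based on the wording); the stated equivalence with string.digits can be checked directly:

```python
import string
from curses.ascii import isdigit

# Verify the documented equivalence over the whole ASCII range.
for code in range(128):
    ch = chr(code)
    assert isdigit(ch) == (ch in string.digits)

print(isdigit("7"), isdigit("x"))  # True False
```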
doc_24362 |
Transform via the mapping y = tanh(x). It is equivalent to ComposeTransform([AffineTransform(0., 2.), SigmoidTransform(), AffineTransform(-1., 2.)]). However, this might not be numerically stable, thus it is recommended to use TanhTransform instead. Note that one should use cache_size=1 when it comes to NaN/Inf values. | |
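A numeric check of the equivalence above, in plain NumPy: the affine chain 2*sigmoid(2x) - 1 reproduces tanh(x).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-3.0, 3.0, 13)
# AffineTransform(0, 2) -> SigmoidTransform -> AffineTransform(-1, 2)
composed = 2.0 * sigmoid(2.0 * x) - 1.0
assert np.allclose(composed, np.tanh(x))
```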
doc_24363 |
Return the local sum of pixels. Only greyvalues between percentiles [p0, p1] are considered in the filter. Note that the sum may overflow depending on the data type of the input array. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | |
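A hedged pure-NumPy sketch of the local percentile-sum idea described above (not the scikit-image implementation; fixed 3x3 neighborhood, no mask or shift):

```python
import numpy as np

def local_percentile_sum(image, p0=0.0, p1=1.0):
    """Sum the grey values in each 3x3 neighborhood that fall inside
    the [p0, p1] percentile interval of that neighborhood."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=np.int64)  # wide dtype avoids overflow
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            win = padded[i:i + 3, j:j + 3].ravel()
            lo, hi = np.quantile(win, [p0, p1])
            out[i, j] = win[(win >= lo) & (win <= hi)].sum()
    return out

img = np.arange(9, dtype=np.uint8).reshape(3, 3)
print(local_percentile_sum(img)[1, 1])  # full window 0..8 sums to 36
```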
doc_24364 |
Return key in self. | |
doc_24365 | The standard error device. Initially, this is the active console screen buffer, CONOUT$. | |
doc_24366 |
Return a Series or DataFrame containing counts of unique rows. New in version 1.4.0. Parameters
subset:list-like, optional
Columns to use when counting unique combinations.
normalize:bool, default False
Return proportions rather than frequencies.
sort:bool, default True
Sort by frequencies.
ascending:bool, default False
Sort in ascending order.
dropna:bool, default True
Don’t include counts of rows that contain NA values. Returns
Series or DataFrame
Series if the groupby as_index is True, otherwise DataFrame. See also Series.value_counts
Equivalent method on Series. DataFrame.value_counts
Equivalent method on DataFrame. SeriesGroupBy.value_counts
Equivalent method on SeriesGroupBy. Notes If the groupby as_index is True then the returned Series will have a MultiIndex with one level per input column. If the groupby as_index is False then the returned DataFrame will have an additional column with the value_counts. The column is labelled ‘count’ or ‘proportion’, depending on the normalize parameter. By default, rows that contain any NA values are omitted from the result. By default, the result will be in descending order so that the first element of each group is the most frequently-occurring row. Examples
>>> df = pd.DataFrame({
... 'gender': ['male', 'male', 'female', 'male', 'female', 'male'],
... 'education': ['low', 'medium', 'high', 'low', 'high', 'low'],
... 'country': ['US', 'FR', 'US', 'FR', 'FR', 'FR']
... })
>>> df
gender education country
0 male low US
1 male medium FR
2 female high US
3 male low FR
4 female high FR
5 male low FR
>>> df.groupby('gender').value_counts()
gender education country
female high FR 1
US 1
male low FR 2
US 1
medium FR 1
dtype: int64
>>> df.groupby('gender').value_counts(ascending=True)
gender education country
female high FR 1
US 1
male low US 1
medium FR 1
low FR 2
dtype: int64
>>> df.groupby('gender').value_counts(normalize=True)
gender education country
female high FR 0.50
US 0.50
male low FR 0.50
US 0.25
medium FR 0.25
dtype: float64
>>> df.groupby('gender', as_index=False).value_counts()
gender education country count
0 female high FR 1
1 female high US 1
2 male low FR 2
3 male low US 1
4 male medium FR 1
>>> df.groupby('gender', as_index=False).value_counts(normalize=True)
gender education country proportion
0 female high FR 0.50
1 female high US 0.50
2 male low FR 0.50
3 male low US 0.25
4 male medium FR 0.25 | |
doc_24367 | This exception is raised to skip a test. Usually you can use TestCase.skipTest() or one of the skipping decorators instead of raising this directly. | |
doc_24368 | For a call object that represents multiple calls, call_list() returns a list of all the intermediate calls as well as the final call. | |
doc_24369 |
template_name: 'django/forms/widgets/multiple_hidden.html'
Renders as: multiple <input type="hidden" ...> tags A widget that handles multiple hidden widgets for fields that have a list of values. | |
doc_24370 | Returns the Django version, which should be correct for all built-in Django commands. User-supplied commands can override this method to return their own version. | |
doc_24371 | class sklearn.inspection.PartialDependenceDisplay(pd_results, *, features, feature_names, target_idx, pdp_lim, deciles, kind='average', subsample=1000, random_state=None) [source]
Partial Dependence Plot (PDP). This can also display individual partial dependencies which are often referred to as: Individual Condition Expectation (ICE). It is recommended to use plot_partial_dependence to create a PartialDependenceDisplay. All parameters are stored as attributes. Read more in Advanced Plotting With Partial Dependence and the User Guide. New in version 0.22. Parameters
pd_resultslist of Bunch
Results of partial_dependence for features.
featureslist of (int,) or list of (int, int)
Indices of features for a given plot. A tuple of one integer will plot a partial dependence curve of one feature. A tuple of two integers will plot a two-way partial dependence curve as a contour plot.
feature_nameslist of str
Feature names corresponding to the indices in features.
target_idxint
In a multiclass setting, specifies the class for which the PDPs should be computed. Note that for binary classification, the positive class (index 1) is always used. In a multioutput setting, specifies the task for which the PDPs should be computed. Ignored in binary classification or classical regression settings.
pdp_limdict
Global min and max average predictions, such that all plots will have the same scale and y limits. pdp_lim[1] is the global min and max for single partial dependence curves. pdp_lim[2] is the global min and max for two-way partial dependence curves.
decilesdict
Deciles for feature indices in features.
kind{‘average’, ‘individual’, ‘both’}, default=’average’
Whether to plot the partial dependence averaged across all the samples in the dataset or one line per sample or both.
kind='average' results in the traditional PD plot;
kind='individual' results in the ICE plot. Note that the fast method='recursion' option is only available for kind='average'. Plotting individual dependencies requires using the slower method='brute' option. New in version 0.24.
subsamplefloat, int or None, default=1000
Sampling for ICE curves when kind is ‘individual’ or ‘both’. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to be used to plot ICE curves. If int, represents the maximum absolute number of samples to use. Note that the full dataset is still used to calculate partial dependence when kind='both'. New in version 0.24.
random_stateint, RandomState instance or None, default=None
Controls the randomness of the selected samples when subsample is not None. See Glossary for details. New in version 0.24. Attributes
bounding_ax_matplotlib Axes or None
If ax is an axes or None, the bounding_ax_ is the axes where the grid of partial dependence plots are drawn. If ax is a list of axes or a numpy array of axes, bounding_ax_ is None.
axes_ndarray of matplotlib Axes
If ax is an axes or None, axes_[i, j] is the axes on the i-th row and j-th column. If ax is a list of axes, axes_[i] is the i-th item in ax. Elements that are None correspond to a nonexisting axes in that position.
lines_ndarray of matplotlib Artists
If ax is an axes or None, lines_[i, j] is the partial dependence curve on the i-th row and j-th column. If ax is a list of axes, lines_[i] is the partial dependence curve corresponding to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a line plot.
deciles_vlines_ndarray of matplotlib LineCollection
If ax is an axes or None, vlines_[i, j] is the line collection representing the x axis deciles of the i-th row and j-th column. If ax is a list of axes, vlines_[i] corresponds to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a PDP plot. New in version 0.23.
deciles_hlines_ndarray of matplotlib LineCollection
If ax is an axes or None, hlines_[i, j] is the line collection representing the y axis deciles of the i-th row and j-th column. If ax is a list of axes, hlines_[i] corresponds to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a 2-way plot. New in version 0.23.
contours_ndarray of matplotlib Artists
If ax is an axes or None, contours_[i, j] is the partial dependence plot on the i-th row and j-th column. If ax is a list of axes, contours_[i] is the partial dependence plot corresponding to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a contour plot.
figure_matplotlib Figure
Figure containing partial dependence plots. See also
partial_dependence
Compute Partial Dependence values.
plot_partial_dependence
Plot Partial Dependence. Methods
plot(*[, ax, n_cols, line_kw, contour_kw]) Plot partial dependence plots.
plot(*, ax=None, n_cols=3, line_kw=None, contour_kw=None) [source]
Plot partial dependence plots. Parameters
axMatplotlib axes or array-like of Matplotlib axes, default=None
If a single axes is passed in, it is treated as a bounding axes and a grid of partial dependence plots will be drawn within it. The n_cols parameter controls the number of columns in the grid.
If an array-like of axes is passed in, the partial dependence plots will be drawn directly into these axes.
If None, a figure and a bounding axes are created and treated as the single-axes case.
n_colsint, default=3
The maximum number of columns in the grid plot. Only active when ax is a single axes or None.
line_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.plot call. For one-way partial dependence plots.
contour_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.contourf call for two-way partial dependence plots. Returns
displayPartialDependenceDisplay
| |
doc_24372 |
Transform labels back to original encoding. Parameters
yndarray of shape (n_samples,)
Target values. Returns
yndarray of shape (n_samples,) | |
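The signature above matches sklearn.preprocessing.LabelEncoder.inverse_transform (an assumption about which encoder is meant); a round trip looks like:

```python
from sklearn.preprocessing import LabelEncoder

# fit() learns the sorted classes; transform() maps labels to codes.
le = LabelEncoder().fit(["paris", "tokyo", "paris"])
codes = le.transform(["tokyo", "paris"])
print(list(le.inverse_transform(codes)))  # ['tokyo', 'paris']
```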
doc_24373 | If False, follow RFC 5322, supporting non-ASCII characters in headers by encoding them as “encoded words”. If True, follow RFC 6532 and use utf-8 encoding for headers. Messages formatted in this way may be passed to SMTP servers that support the SMTPUTF8 extension (RFC 6531). | |
doc_24374 | Unix V7 synonym for S_IRUSR. | |
doc_24375 |
Convert list of tuples to MultiIndex. Parameters
tuples:list / sequence of tuple-likes
Each tuple is the index of one row/column.
sortorder:int or None
Level of sortedness (must be lexicographically sorted by that level).
names:list / sequence of str, optional
Names for the levels in the index. Returns
MultiIndex
See also MultiIndex.from_arrays
Convert list of arrays to MultiIndex. MultiIndex.from_product
Make a MultiIndex from cartesian product of iterables. MultiIndex.from_frame
Make a MultiIndex from a DataFrame. Examples
>>> tuples = [(1, 'red'), (1, 'blue'),
... (2, 'red'), (2, 'blue')]
>>> pd.MultiIndex.from_tuples(tuples, names=('number', 'color'))
MultiIndex([(1, 'red'),
(1, 'blue'),
(2, 'red'),
(2, 'blue')],
names=['number', 'color']) | |
doc_24376 |
Return the sketch parameters for the artist. Returns
tuple or None
A 3-tuple with the following elements:
scale: The amplitude of the wiggle perpendicular to the source line.
length: The length of the wiggle along the line.
randomness: The scale factor by which the length is shrunken or expanded. Returns None if no sketch parameters were set. | |
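A small round-trip check of the artist's sketch parameters (matplotlib assumed installed; the headless Agg backend is used so no window is needed):

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

line, = plt.plot([0, 1], [0, 1])
assert line.get_sketch_params() is None   # nothing set yet -> None

# scale, length, randomness as described above
line.set_sketch_params(scale=2, length=10, randomness=3)
print(line.get_sketch_params())
```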
doc_24377 |
Perform OPTICS clustering. Extracts an ordered list of points and reachability distances, and performs initial clustering using max_eps distance specified at OPTICS object instantiation. Parameters
Xndarray of shape (n_samples, n_features), or (n_samples, n_samples) if metric=’precomputed’
A feature array, or array of distances between samples if metric=’precomputed’.
yignored
Ignored. Returns
selfinstance of OPTICS
The instance. | |
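A minimal run (scikit-learn assumed installed): fit() returns the OPTICS instance itself, so calls can be chained and the clustering results read back off the fitted attributes.

```python
import numpy as np
from sklearn.cluster import OPTICS

# Two well-separated 1-D groups.
X = np.array([[1.0], [1.1], [1.2], [10.0], [10.1], [10.2]])
model = OPTICS(min_samples=2)
assert model.fit(X) is model   # fit() returns self
print(model.labels_)           # one label per sample
```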
doc_24378 |
Set whether the artist uses clipping. When False artists will be visible outside of the axes which can lead to unexpected results. Parameters
bbool | |
doc_24379 |
Bases: object cla()[source]
colorbar(mappable, *, ticks=None, **kwargs)[source]
toggle_label(b)[source] | |
doc_24380 | The detect_encoding() function is used to detect the encoding that should be used to decode a Python source file. It requires one argument, readline, in the same way as the tokenize() generator. It will call readline a maximum of twice, and return the encoding used (as a string) and a list of any lines (not decoded from bytes) it has read in. It detects the encoding from the presence of a UTF-8 BOM or an encoding cookie as specified in PEP 263. If both a BOM and a cookie are present, but disagree, a SyntaxError will be raised. Note that if the BOM is found, 'utf-8-sig' will be returned as an encoding. If no encoding is specified, then the default of 'utf-8' will be returned. Use open() to open Python source files: it uses detect_encoding() to detect the file encoding. | |
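The detect_encoding() behavior above can be sketched with the standard library: a PEP 263 cookie on the first line is detected, and the raw (undecoded) lines read so far are returned.

```python
import io
import tokenize

src = b"# -*- coding: latin-1 -*-\nx = 1\n"
encoding, lines = tokenize.detect_encoding(io.BytesIO(src).readline)
print(encoding)   # 'iso-8859-1' (the normalized name for latin-1)
assert lines == [b"# -*- coding: latin-1 -*-\n"]

# With no BOM and no cookie, the default applies:
encoding, _ = tokenize.detect_encoding(io.BytesIO(b"x = 1\n").readline)
assert encoding == "utf-8"
```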
doc_24381 | See Migration guide for more details. tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer
tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer(
opt, loss_scale
)
Loss scaling is a process that multiplies the loss by a multiplier called the loss scale, and divides each gradient by the same multiplier. The pseudocode for this process is: loss = ...
loss *= loss_scale
grads = gradients(loss, vars)
grads /= loss_scale
Mathematically, loss scaling has no effect, but can help avoid numerical underflow in intermediate gradients when float16 tensors are used for mixed precision training. By multiplying the loss, each intermediate gradient will have the same multiplier applied. The loss scale can either be a fixed constant, chosen by the user, or be dynamically determined. Dynamically determining the loss scale is convenient as a loss scale does not have to be explicitly chosen. However it reduces performance. This optimizer wraps another optimizer and applies loss scaling to it via a LossScale. Loss scaling is applied whenever gradients are computed, such as through minimize().
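A pure-NumPy illustration of the idea (not the TF API): a tiny gradient underflows in float16, but survives if the loss, and hence the gradient, is scaled up first and unscaled in float32 afterwards.

```python
import numpy as np

true_grad = 1e-8      # below float16's smallest subnormal (~5.96e-8)
loss_scale = 1024.0

assert np.float16(true_grad) == 0.0             # underflows to zero
scaled = np.float16(true_grad * loss_scale)     # now representable
recovered = np.float32(scaled) / loss_scale     # unscale in float32
print(recovered)
```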
Args
use_locking Bool. If True apply use locks to prevent concurrent updates to variables.
name A non-empty string. The name to use for accumulators created for the optimizer.
Raises
ValueError If name is malformed. Methods apply_gradients View source
apply_gradients(
grads_and_vars, global_step=None, name=None
)
Apply gradients to variables. This is the second part of minimize(). It returns an Operation that conditionally applies gradients if all gradient values are finite. Otherwise no update is performed (nor is global_step incremented).
Args
grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients().
global_step Optional Variable to increment by one after the variables have been updated.
name Optional name for the returned operation. Default to the name passed to the Optimizer constructor.
Returns An Operation that conditionally applies the specified gradients. If global_step was not None, that operation also increments global_step.
Raises
RuntimeError If you should use _distributed_apply() instead. compute_gradients View source
compute_gradients(
loss, var_list=None, gate_gradients=optimizer.Optimizer.GATE_OP,
aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None
)
Compute gradients of loss for the variables in var_list. This adjusts the dynamic range of the gradient evaluation by scaling up the loss value. The gradient values are then scaled back down by the reciprocal of the loss scale. This is useful in reduced precision training where small gradient values would otherwise underflow the representable range.
Args
loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable.
var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops If True, try colocating gradients with the corresponding op.
grad_loss Optional. A Tensor holding the gradient computed for loss.
Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None.
get_name View source
get_name()
get_slot View source
get_slot(
var, name
)
Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer.
Args
var A variable passed to minimize() or apply_gradients().
name A string.
Returns The Variable for the slot if it was created, None otherwise.
get_slot_names View source
get_slot_names()
Return a list of the names of slots created by the Optimizer. See get_slot().
Returns A list of strings.
minimize View source
minimize(
loss, global_step=None, var_list=None, gate_gradients=GATE_OP,
aggregation_method=None, colocate_gradients_with_ops=False, name=None,
grad_loss=None
)
Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function.
Args
loss A Tensor containing the value to minimize.
global_step Optional Variable to increment by one after the variables have been updated.
var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops If True, try colocating gradients with the corresponding op.
name Optional name for the returned operation.
grad_loss Optional. A Tensor holding the gradient computed for loss.
Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step.
Raises
ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source
variables()
Returns the variables of the Optimizer.
Class Variables
GATE_GRAPH 2
GATE_NONE 0
GATE_OP 1 | |
doc_24382 |
Convert Series to DataFrame. Parameters
name:object, optional
The passed name should substitute for the series name (if it has one). Returns
DataFrame
DataFrame representation of Series. Examples
>>> s = pd.Series(["a", "b", "c"],
... name="vals")
>>> s.to_frame()
vals
0 a
1 b
2 c | |
doc_24383 | Close the TarFile. In write mode, two finishing zero blocks are appended to the archive. | |
doc_24384 | Return the next member of the archive as a TarInfo object, when TarFile is opened for reading. Return None if there is no more available. | |
doc_24385 |
An ExtensionDtype for uint32 integer data. Changed in version 1.0.0: Now uses pandas.NA as its missing value, rather than numpy.nan. Attributes
None Methods
None | |
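A short sketch (pandas >= 1.0 assumed): the nullable UInt32 dtype stores missing values as pd.NA rather than numpy.nan.

```python
import pandas as pd

s = pd.Series([1, 2, None], dtype="UInt32")
print(s.dtype)            # UInt32
assert s[2] is pd.NA      # missing value is pd.NA, not nan
```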
doc_24386 |
Display all open figures. Parameters
blockbool, optional
Whether to wait for all figures to be closed before returning. If True block and run the GUI main loop until all figure windows are closed. If False ensure that all figure windows are displayed and return immediately. In this case, you are responsible for ensuring that the event loop is running to have responsive figures. Defaults to True in non-interactive mode and to False in interactive mode (see pyplot.isinteractive). See also ion
Enable interactive mode, which shows / updates the figure after every plotting command, so that calling show() is not necessary. ioff
Disable interactive mode. savefig
Save the figure to an image file instead of showing it on screen. Notes Saving figures to file and showing a window at the same time If you want an image file as well as a user interface window, use pyplot.savefig before pyplot.show. At the end of (a blocking) show() the figure is closed and thus unregistered from pyplot. Calling pyplot.savefig afterwards would save a new and thus empty figure. This limitation of command order does not apply if the show is non-blocking or if you keep a reference to the figure and use Figure.savefig. Auto-show in jupyter notebooks The jupyter backends (activated via %matplotlib inline, %matplotlib notebook, or %matplotlib widget), call show() at the end of every cell by default. Thus, you usually don't have to call it explicitly there.
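The "save before show" advice above can be sketched with the headless Agg backend (matplotlib assumed installed; show() is a non-blocking no-op there, at most emitting a warning):

```python
import os
import tempfile
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

plt.plot([1, 2, 3])
path = os.path.join(tempfile.gettempdir(), "pyplot_show_demo.png")
plt.savefig(path)   # save first: a blocking show() closes the figure
plt.show()          # under Agg this does not block
assert os.path.exists(path)
```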
Legend Demo
Artist within an artist
Convert texts to images
Mathtext
Mathtext Examples
Math fontfamily
Multiline
Placing text boxes
Rainbow text
STIX Fonts
Rendering math equations using TeX
Precise text layout
Controlling style of text and labels using a dictionary
Default text rotation demonstration
Text Rotation Relative To Line
Title positioning
Unicode minus
Usetex Baseline Test
Usetex Fonteffects
Text watermark
Align y-labels
Annotate Transform
Annotating a plot
Annotation Polar
Programmatically controlling subplot adjustment
Infinite lines
Boxplot Demo
Dollar Ticks
Fig Axes Customize Simple
Simple axes labels
Adding lines to figures
plot() format string
Pyplot Mathtext
Pyplot Simple
Pyplot Text
Pyplot Three
Pyplot Two Subplots
Text Commands
Text Layout
Color Demo
Color by y-value
Colors in the default property cycle
Colorbar
Colormap reference
Creating a colormap from a list of colors
List of named colors
Arrow guide
Reference for Matplotlib artists
Line, Poly and RegularPoly Collection with autoscaling
Compound path
Dolphins
Mmh Donuts!!!
Ellipse Collection
Ellipse Demo
Drawing fancy boxes
Hatch demo
Line Collection
Circles, Wedges and Polygons
PathPatch object
Bezier Curve
Scatter plot
Bayesian Methods for Hackers style sheet
Dark background style sheet
FiveThirtyEight style sheet
ggplot style sheet
Grayscale style sheet
Solarized Light stylesheet
Style sheets reference
Anchored Direction Arrow
Axes Divider
Demo Axes Grid
Axes Grid2
HBoxDivider demo
Showing RGB channels using RGBAxes
Adding a colorbar to inset axes
Colorbar with AxesDivider
Controlling the position and size of colorbars with Inset Axes
Per-row or per-column colorbars
Axes with a fixed physical size
Setting a fixed aspect on ImageGrid cells
Inset Locator Demo
Inset Locator Demo2
Make Room For Ylabel Using Axesgrid
Parasite Simple
Parasite Simple2
Scatter Histogram (Locatable Axes)
Simple Anchored Artists
Simple Axes Divider 1
Simple Axes Divider 3
Simple ImageGrid
Simple ImageGrid 2
Simple Axisline4
Simple Colorbar
Axis Direction
axis_direction demo
Axis line styles
Curvilinear grid demo
Demo CurveLinear Grid2
mpl_toolkits.axisartist.floating_axes features
floating_axis demo
Parasite Axes demo
Parasite axis demo
Ticklabel alignment
Ticklabel direction
Simple Axis Direction01
Simple Axis Direction03
Simple Axis Pad
Custom spines with axisartist
Simple Axisline
Simple Axisline3
Anatomy of a figure
Bachelor's degrees by gender
Firefox
Integral as the area under a curve
Shaded & power normalized rendering
XKCD
Decay
Animated histogram
The Bayes update
The double pendulum problem
Animated image using a precomputed list of images
Pausing and Resuming an Animation
Rain simulation
Animated 3D random walk
Animated line plot
Oscilloscope
MATPLOTLIB UNCHAINED
Close Event
Mouse move and click events
Data Browser
Figure/Axes enter and leave events
Interactive functions
Image Slices Viewer
Keypress event
Lasso Demo
Legend Picking
Looking Glass
Path Editor
Pick Event Demo
Pick Event Demo2
Poly Editor
Pong
Resampling Data
Timers
Trifinder Event Demo
Viewlims
Zoom Window
Frontpage contour example
Anchored Artists
Changing colors of lines intersecting a box
Manual Contour
Coords Report
Cross hair cursor
Custom projection
Customize Rc
AGG filter
Ribbon Box
Fill Spiral
Findobj Demo
Building histograms using Rectangles and PolyCollections
Plotting with keywords
Matplotlib logo
Multiprocess
Packed-bubble chart
Patheffect Demo
Pythonic Matplotlib
Set and get properties
Table Demo
TickedStroke patheffect
transforms.offset_copy
Zorder Demo
Plot 2D data on 3D plot
Demo of 3D bar charts
Create 2D bar graphs in different planes
3D box surface plot
Demonstrates plotting contour (level) curves in 3D
Demonstrates plotting contour (level) curves in 3D using the extend3d option
Projecting contour profiles onto a graph
Filled contours
Projecting filled contour onto a graph
Custom hillshading in a 3D surface plot
3D errorbars
Create 3D histogram of 2D data
Parametric Curve
Lorenz Attractor
2D and 3D Axes in same Figure
Automatic Text Offsetting
Draw flat objects in 3D plot
Generate polygons to fill under 3D line graph
3D quiver plot
3D scatterplot
3D stem
3D plots as subplots
3D surface (colormap)
3D surface (solid color)
3D surface (checkerboard)
3D surface with polar coordinates
Text annotations in 3D
Triangular 3D contour plot
Triangular 3D filled contour plot
Triangular 3D surfaces
More triangular 3D surfaces
3D voxel / volumetric plot
3D voxel plot of the numpy logo
3D voxel / volumetric plot with rgb colors
3D voxel / volumetric plot with cylindrical coordinates
3D wireframe plot
3D wireframe plots in one direction
Loglog Aspect
Custom scale
Log Bar
Log Demo
Log Axis
Logit Demo
Exploring normalizations
Scales
Symlog Demo
Hillshading
Anscombe's quartet
Hinton diagrams
Left ventricle bullseye
MRI
MRI With EEG
Radar chart (aka spider or star chart)
The Sankey class
Long chain of connections using Sankey
Rankine power cycle
SkewT-logP diagram: using transforms and custom projections
Topographic hillshading
Centered spines with arrows
Multiple Yaxis With Spines
Spine Placement
Spines
Custom spine bounds
Dropped spines
Automatically setting tick positions
Centering labels between ticks
Colorbar Tick Labelling
Custom Ticker1
Formatting date ticks using ConciseDateFormatter
Date Demo Convert
Placing date ticks using recurrence rules
Date Index Formatter
Date Precision and Epochs
Major and minor ticks
The default tick formatter
Tick formatters
Tick locators
Set default y-axis tick labels on the right
Setting tick labels from a list of values
Set default x-axis tick labels on the top
Rotating custom tick labels
Annotation with units
Artist tests
Bar demo with units
Group barchart with units
Ellipse With Units
Evans test
Radian ticks
Inches and Centimeters
Unit handling
pyplot with GTK3
pyplot with GTK4
Tool Manager
Anchored Box04
Annotate Explain
Annotate Simple01
Annotate Simple02
Annotate Simple03
Annotate Simple04
Annotate Simple Coord01
Annotate Simple Coord02
Annotate Simple Coord03
Annotate Text Arrow
Interactive Adjustment of Colormap Range
Colormap Normalizations
Colormap Normalizations Symlognorm
Connect Simple01
Connection styles for annotations
Custom box styles
subplot2grid demo
GridSpec demo
Nested GridSpecs
Simple Annotate01
Simple Legend01
Simple Legend02
Annotated Cursor
Buttons
Check Buttons
Cursor
Lasso Selector
Menu
Mouse Cursor
Multicursor
Polygon Selector
Radio Buttons
Thresholding an Image with RangeSlider
Rectangle and ellipse selectors
Slider
Snapping Sliders to Discrete Values
Span Selector
Textbox
Pyplot tutorial
The Lifecycle of a Plot
Customizing Matplotlib with style sheets and rcParams
Artist tutorial
Legend guide
Styling with cycler
Constrained Layout Guide
Tight Layout guide
Arranging multiple Axes in a Figure
origin and extent in imshow
Faster rendering by using blitting
Path Tutorial
Path effects guide
Transformations Tutorial
Specifying Colors
Customized Colorbars Tutorial
Creating Colormaps in Matplotlib
Colormap Normalization
Choosing Colormaps in Matplotlib
Text in Matplotlib Plots
Text properties and layout
plot(x, y)
scatter(x, y)
bar(x, height) / barh(y, width)
stem(x, y)
step(x, y)
fill_between(x, y1, y2)
imshow(Z)
pcolormesh(X, Y, Z)
contour(X, Y, Z)
contourf(X, Y, Z)
barbs(X, Y, U, V)
quiver(X, Y, U, V)
streamplot(X, Y, U, V)
hist(x)
boxplot(X)
errorbar(x, y, yerr, xerr)
violinplot(D)
eventplot(D)
hist2d(x, y)
hexbin(x, y, C)
pie(x)
tricontour(x, y, z)
tricontourf(x, y, z)
tripcolor(x, y, z)
triplot(x, y) | |
doc_24387 | os.O_WRONLY
os.O_RDWR
os.O_APPEND
os.O_CREAT
os.O_EXCL
os.O_TRUNC
The above constants are available on Unix and Windows. | |
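A minimal stdlib sketch of how these flags combine with os.open (the file name is illustrative): O_CREAT | O_EXCL makes creation atomic, and O_APPEND forces writes to the end of the file.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.txt")

# O_CREAT | O_EXCL makes creation atomic: the call raises FileExistsError
# if the file already exists; O_WRONLY opens it write-only.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
os.write(fd, b"first\n")
os.close(fd)

# O_APPEND positions every write at the current end of the file.
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd, b"second\n")
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
print(data)  # b'first\nsecond\n'
```

Reopening with O_TRUNC instead of O_APPEND would discard the existing contents rather than extend them.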
doc_24388 | Pop the first item of the list stored under the given key. Afterwards the key is removed from the dict, so additional values are discarded: >>> d = MultiDict({"foo": [1, 2, 3]})
>>> d.pop("foo")
1
>>> "foo" in d
False
Parameters
key – the key to pop.
default – if provided the value to return if the key was not in the dictionary. | |
doc_24389 | Returns the number of non-fixed hyperparameters of the kernel. | |
doc_24390 | See Migration guide for more details. tf.compat.v1.raw_ops.ReaderRestoreStateV2
tf.raw_ops.ReaderRestoreStateV2(
reader_handle, state, name=None
)
Not all Readers support being restored, so this can produce an Unimplemented error.
Args
reader_handle: A Tensor of type resource. Handle to a Reader.
state: A Tensor of type string. Result of a ReaderSerializeState of a Reader with type matching reader_handle.
name: A name for the operation (optional).
Returns: The created Operation.
doc_24391 | Platform dependent: the time of most recent metadata change on Unix, the time of creation on Windows, expressed in nanoseconds as an integer. | |
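A small stdlib sketch showing the field on a freshly created file; the integer nanosecond count avoids the precision loss of the float-seconds st_ctime.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

st = os.stat(path)
# st_ctime_ns is an integer count of nanoseconds: the time of the most
# recent metadata change on Unix, the creation time on Windows.
print(st.st_ctime_ns)
os.remove(path)
```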
doc_24392 | Instance of the class to check the one time link. This will default to default_token_generator, it’s an instance of django.contrib.auth.tokens.PasswordResetTokenGenerator. | |
doc_24393 |
Compute first of group values. Parameters
numeric_only : bool, default False
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.
min_count : int, default -1
The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns
Series or DataFrame
Computed first of values within each group. | |
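A short sketch (column names are illustrative) showing that first() returns the first non-NA value per group by default, and how min_count turns under-populated groups into NA:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"group": ["a", "a", "b", "b"],
                   "value": [np.nan, 2.0, 3.0, 4.0]})

# first() skips the leading NaN in group "a" and returns 2.0.
result = df.groupby("group")["value"].first()
print(result)

# With min_count=2, group "a" has only one valid value, so it becomes NA;
# group "b" still has two valid values and keeps its first one.
strict = df.groupby("group")["value"].first(min_count=2)
print(strict)
```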
doc_24394 |
Generalized function class. Define a vectorized function which takes a nested sequence of objects or numpy arrays as inputs and returns a single numpy array or a tuple of numpy arrays. The vectorized function evaluates pyfunc over successive tuples of the input arrays like the python map function, except it uses the broadcasting rules of numpy. The data type of the output of vectorized is determined by calling the function with the first element of the input. This can be avoided by specifying the otypes argument. Parameters
pyfunc : callable
A python function or method.
otypes : str or list of dtypes, optional
The output data type. It must be specified as either a string of typecode characters or a list of data type specifiers. There should be one data type specifier for each output.
doc : str, optional
The docstring for the function. If None, the docstring will be the pyfunc.__doc__.
excluded : set, optional
Set of strings or integers representing the positional or keyword arguments for which the function will not be vectorized. These will be passed directly to pyfunc unmodified. New in version 1.7.0.
cache : bool, optional
If True, then cache the first function call that determines the number of outputs if otypes is not provided. New in version 1.7.0.
signature : string, optional
Generalized universal function signature, e.g., (m,n),(n)->(m) for vectorized matrix-vector multiplication. If provided, pyfunc will be called with (and expected to return) arrays with shapes given by the size of corresponding core dimensions. By default, pyfunc is assumed to take scalars as input and output. New in version 1.12.0. Returns
vectorized : callable
Vectorized function. See also frompyfunc
Takes an arbitrary Python function and returns a ufunc Notes The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop. If otypes is not specified, then a call to the function with the first argument will be used to determine the number of outputs. The results of this call will be cached if cache is True to prevent calling the function twice. However, to implement the cache, the original function must be wrapped which will slow down subsequent calls, so only do this if your function is expensive. The new keyword argument interface and excluded argument support further degrades performance. References 1
Generalized Universal Function API Examples >>> def myfunc(a, b):
... "Return a-b if a>b, otherwise return a+b"
... if a > b:
... return a - b
... else:
... return a + b
>>> vfunc = np.vectorize(myfunc)
>>> vfunc([1, 2, 3, 4], 2)
array([3, 4, 1, 2])
The docstring is taken from the input function to vectorize unless it is specified: >>> vfunc.__doc__
'Return a-b if a>b, otherwise return a+b'
>>> vfunc = np.vectorize(myfunc, doc='Vectorized `myfunc`')
>>> vfunc.__doc__
'Vectorized `myfunc`'
The output type is determined by evaluating the first element of the input, unless it is specified: >>> out = vfunc([1, 2, 3, 4], 2)
>>> type(out[0])
<class 'numpy.int64'>
>>> vfunc = np.vectorize(myfunc, otypes=[float])
>>> out = vfunc([1, 2, 3, 4], 2)
>>> type(out[0])
<class 'numpy.float64'>
The excluded argument can be used to prevent vectorizing over certain arguments. This can be useful for array-like arguments of a fixed length such as the coefficients for a polynomial as in polyval: >>> def mypolyval(p, x):
... _p = list(p)
... res = _p.pop(0)
... while _p:
... res = res*x + _p.pop(0)
... return res
>>> vpolyval = np.vectorize(mypolyval, excluded=['p'])
>>> vpolyval(p=[1, 2, 3], x=[0, 1])
array([3, 6])
Positional arguments may also be excluded by specifying their position: >>> vpolyval.excluded.add(0)
>>> vpolyval([1, 2, 3], x=[0, 1])
array([3, 6])
The signature argument allows for vectorizing functions that act on non-scalar arrays of fixed length. For example, you can use it for a vectorized calculation of Pearson correlation coefficient and its p-value: >>> import scipy.stats
>>> pearsonr = np.vectorize(scipy.stats.pearsonr,
... signature='(n),(n)->(),()')
>>> pearsonr([[0, 1, 2, 3]], [[1, 2, 3, 4], [4, 3, 2, 1]])
(array([ 1., -1.]), array([ 0., 0.]))
Or for a vectorized convolution: >>> convolve = np.vectorize(np.convolve, signature='(n),(m)->(k)')
>>> convolve(np.eye(4), [1, 2, 1])
array([[1., 2., 1., 0., 0., 0.],
[0., 1., 2., 1., 0., 0.],
[0., 0., 1., 2., 1., 0.],
[0., 0., 0., 1., 2., 1.]])
Methods
__call__(*args, **kwargs) Return arrays with the results of pyfunc broadcast (vectorized) over args and kwargs not in excluded. | |
doc_24395 |
Alias for get_color. | |
doc_24396 | Saves session data for a provided session key, or deletes the session in case the data is empty. | |
doc_24397 |
frame
The frame which surrounds the text and scroll bar widgets.
vbar
The scroll bar widget. | |
doc_24398 | get_fonts() -> list of strings. Get all available fonts. Returns a list of all the fonts available on the system. The names of the fonts will be set to lowercase with all spaces and punctuation removed. This works on most systems, but some will return an empty list if they cannot find fonts. | |
doc_24399 |
Create a subplot that can act as a host to parasitic axes. Parameters
figure : matplotlib.figure.Figure
Figure to which the subplot will be added. Defaults to the current figure pyplot.gcf().
*args, **kwargs
Will be passed on to the underlying Axes object creation. |
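A minimal sketch (data values are illustrative) pairing host_subplot with a twinned parasite axis, using the non-interactive Agg backend so no display is needed:

```python
import os

import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import host_subplot

host = host_subplot(111)  # added to the current figure by default
par = host.twinx()        # parasite axis sharing the host's x-axis

host.plot([0, 1, 2], [0, 1, 4], label="host")
par.plot([0, 1, 2], [50, 30, 15], label="parasite")
host.set_ylabel("host scale")
par.set_ylabel("parasite scale")
host.legend()

plt.savefig("host_parasite.png")
size = os.path.getsize("host_parasite.png")
print(size)
```

The parasite axis drawn via twinx() shares the host's x-limits but keeps an independent y-scale, which is the usual reason for reaching for host_subplot.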