doc_24200 | See Migration guide for more details. tf.compat.v1.signal.mfccs_from_log_mel_spectrograms
tf.signal.mfccs_from_log_mel_spectrograms(
log_mel_spectrograms, name=None
)
Implemented with GPU-compatible ops and supports gradients. Mel-Frequency Cepstral Coefficient (MFCC) calculation consists of taking the DCT-II of a log-magnitude mel-scale spectrogram. HTK's MFCCs use a particular scaling of the DCT-II which is almost orthogonal normalization. We follow this convention. All num_mel_bins MFCCs are returned and it is up to the caller to select a subset of the MFCCs based on their application. For example, it is typical to only use the first few for speech recognition, as this results in an approximately pitch-invariant representation of the signal. For example:
batch_size, num_samples, sample_rate = 32, 32000, 16000.0
# A Tensor of [batch_size, num_samples] mono PCM samples in the range [-1, 1].
pcm = tf.random.normal([batch_size, num_samples], dtype=tf.float32)
# A 1024-point STFT with frames of 64 ms and 75% overlap.
stfts = tf.signal.stft(pcm, frame_length=1024, frame_step=256,
fft_length=1024)
spectrograms = tf.abs(stfts)
# Warp the linear scale spectrograms into the mel-scale.
num_spectrogram_bins = stfts.shape[-1]
lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80
linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix(
num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,
upper_edge_hertz)
mel_spectrograms = tf.tensordot(
spectrograms, linear_to_mel_weight_matrix, 1)
mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(
linear_to_mel_weight_matrix.shape[-1:]))
# Compute a stabilized log to get log-magnitude mel-scale spectrograms.
log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6)
# Compute MFCCs from log_mel_spectrograms and take the first 13.
mfccs = tf.signal.mfccs_from_log_mel_spectrograms(
log_mel_spectrograms)[..., :13]
Args
log_mel_spectrograms A [..., num_mel_bins] float32/float64 Tensor of log-magnitude mel-scale spectrograms.
name An optional name for the operation.
Returns A [..., num_mel_bins] float32/float64 Tensor of the MFCCs of log_mel_spectrograms.
Raises
ValueError If num_mel_bins is not positive. | |
doc_24201 | sklearn.utils.as_float_array(X, *, copy=True, force_all_finite=True) [source]
Converts an array-like to an array of floats. The new dtype will be np.float32 or np.float64, depending on the original type. The function can create a copy or modify the argument depending on the argument copy. Parameters
X{array-like, sparse matrix}
copybool, default=True
If True, a copy of X will be created. If False, a copy may still be returned if X’s dtype is not a floating point type.
force_all_finitebool or ‘allow-nan’, default=True
Whether to raise an error on np.inf, np.nan, pd.NA in X. The possibilities are:
True: Force all values of X to be finite.
False: accepts np.inf, np.nan, pd.NA in X.
'allow-nan': accepts only np.nan and pd.NA values in X. Values cannot be infinite.
New in version 0.20: force_all_finite accepts the string 'allow-nan'. Changed in version 0.23: Accepts pd.NA and converts it into np.nan
Returns
XT{ndarray, sparse matrix}
An array of type float. | |
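For example, a minimal sketch of the conversion:
>>> import numpy as np
>>> from sklearn.utils import as_float_array
>>> as_float_array(np.array([1, 2, 3]))
array([1., 2., 3.])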
doc_24202 |
Return the tight bounding box of the axes, including axis and their decorators (xlabel, title, etc). Artists that have artist.set_in_layout(False) are not included in the bbox. Parameters
rendererRendererBase subclass
renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer())
bbox_extra_artistslist of Artist or None
List of artists to include in the tight bounding box. If None (default), then all artist children of the Axes are included in the tight bounding box.
call_axes_locatorbool, default: True
If call_axes_locator is False, the _axes_locator attribute is not called; calling it is normally necessary to get the correct bounding box. call_axes_locator=False can be used if the caller is only interested in the relative size of the tightbbox compared to the Axes bbox.
for_layout_onlydefault: False
The bounding box will not include the x-extent of the title and the xlabel, or the y-extent of the ylabel. Returns
BboxBase
Bounding box in figure pixel coordinates. See also matplotlib.axes.Axes.get_window_extent
matplotlib.axis.Axis.get_tightbbox
matplotlib.spines.Spine.get_window_extent | |
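A minimal sketch of querying the tight bounding box, assuming an Agg-based canvas (which provides get_renderer):
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.set_title("demo")
fig.canvas.draw()  # make sure a renderer exists
bbox = ax.get_tightbbox(fig.canvas.get_renderer())
print(bbox.width, bbox.height)  # extents in figure pixels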
doc_24203 | This read-only attribute represents the URL the response will redirect to (equivalent to the Location response header). | |
doc_24204 |
Bases: matplotlib.mathtext.MathtextBackendAgg
[Deprecated]
Notes
Deprecated since version 3.4.
get_results(box, used_characters)[source]
Return a backend-specific tuple to return to the backend after all processing is done. | |
doc_24205 | Returns a logger which is a descendant to this logger, as determined by the suffix. Thus, logging.getLogger('abc').getChild('def.ghi') would return the same logger as would be returned by logging.getLogger('abc.def.ghi'). This is a convenience method, useful when the parent logger is named using e.g. __name__ rather than a literal string. New in version 3.2. | |
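For example:
>>> import logging
>>> parent = logging.getLogger('abc')
>>> parent.getChild('def.ghi') is logging.getLogger('abc.def.ghi')
True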
doc_24206 |
Normalize value data in the [vmin, vmax] interval into the [0.0, 1.0] interval and return it. Parameters
value
Data to normalize.
clipbool
If None, defaults to self.clip (which defaults to False). Notes If not already initialized, self.vmin and self.vmax are initialized using self.autoscale_None(value). | |
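For example, a minimal sketch:
import matplotlib.colors as mcolors
norm = mcolors.Normalize(vmin=0.0, vmax=8.0)
norm(4.0)              # -> 0.5
norm([0.0, 4.0, 8.0])  # -> masked array [0.0, 0.5, 1.0]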
doc_24207 |
Bases: mpl_toolkits.axes_grid1.inset_locator.AnchoredLocatorBase Parameters
locstr
The box location. Valid locations are 'upper left', 'upper center', 'upper right', 'center left', 'center', 'center right', 'lower left', 'lower center', 'lower right'. For backward compatibility, numeric values are accepted as well. See the parameter loc of Legend for details.
padfloat, default: 0.4
Padding around the child as fraction of the fontsize.
borderpadfloat, default: 0.5
Padding between the offsetbox frame and the bbox_to_anchor.
childOffsetBox
The box that will be anchored.
propFontProperties
This is only used as a reference for paddings. If not given, rcParams["legend.fontsize"] (default: 'medium') is used.
frameonbool
Whether to draw a frame around the box.
bbox_to_anchorBboxBase, 2-tuple, or 4-tuple of floats
Box that is used to position the legend in conjunction with loc.
bbox_transformNone or matplotlib.transforms.Transform
The transform for the bounding box (bbox_to_anchor). **kwargs
All other parameters are passed on to OffsetBox. Notes See Legend for a detailed description of the anchoring mechanism. get_extent(renderer)[source]
Return the extent of the box as (width, height, x, y). This is the extent of the child plus the padding.
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, bbox_to_anchor=<UNSET>, child=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, height=<UNSET>, in_layout=<UNSET>, label=<UNSET>, offset=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
bbox_to_anchor unknown
child unknown
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
height float
in_layout bool
label object
offset (float, float) or callable
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
width float
zorder float | |
doc_24208 | Same as equivalent method in the Document class. | |
doc_24209 | Boolean indicating whether the device has been closed. | |
doc_24210 | See torch.transpose() | |
doc_24211 |
Draw histogram of the input series using matplotlib. Parameters
by:object, optional
If passed, then used to form histograms for separate groups.
ax:matplotlib axis object
If not passed, uses gca().
grid:bool, default True
Whether to show axis grid lines.
xlabelsize:int, default None
If specified changes the x-axis label size.
xrot:float, default None
Rotation of x axis labels.
ylabelsize:int, default None
If specified changes the y-axis label size.
yrot:float, default None
Rotation of y axis labels.
figsize:tuple, default None
The size in inches of the figure to create; by default uses the value in matplotlib.rcParams.
bins:int or sequence, default 10
Number of histogram bins to be used. If an integer is given, bins + 1 bin edges are calculated and returned. If bins is a sequence, gives bin edges, including left edge of first bin and right edge of last bin. In this case, bins is returned unmodified.
backend:str, default None
Backend to use instead of the backend specified in the option plotting.backend. For instance, ‘matplotlib’. Alternatively, to specify the plotting.backend for the whole session, set pd.options.plotting.backend. New in version 1.0.0.
legend:bool, default False
Whether to show the legend. New in version 1.1.0. **kwargs
To be passed to the actual plotting function. Returns
matplotlib.AxesSubplot
A histogram plot. See also matplotlib.axes.Axes.hist
Plot a histogram using matplotlib. | |
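For example, a minimal sketch:
>>> import pandas as pd
>>> ser = pd.Series([1, 2, 2, 3, 3, 3, 4])
>>> ax = ser.hist(bins=4, grid=False)  # returns a matplotlib Axes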
doc_24212 |
Given a sequence of Path objects, Transform objects, and offsets, as found in a PathCollection, returns the bounding box that encapsulates all of them. Parameters
master_transformTransform
Global transformation applied to all paths.
pathslist of Path
transformslist of Affine2D
offsets(N, 2) array-like
offset_transformAffine2D
Transform applied to the offsets before offsetting the path. Notes The way that paths, transforms and offsets are combined follows the same method as for collections: Each is iterated over independently, so if you have 3 paths, 2 transforms and 1 offset, their combinations are as follows: (A, A, A), (B, B, A), (C, A, A) | |
doc_24213 |
Convert y using the unit type of the yaxis. If the artist is not contained in an Axes or if the yaxis does not have units, y itself is returned. | |
doc_24214 |
Convert an image to 16-bit unsigned integer format. Parameters
imagendarray
Input image.
force_copybool, optional
Force a copy of the data, irrespective of its current dtype. Returns
outndarray of uint16
Output image. Notes Negative input values will be clipped. Positive values are scaled between 0 and 65535. | |
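A minimal sketch, assuming the top-level skimage import:
import numpy as np
from skimage import img_as_uint
image = np.array([0.0, 0.5, 1.0])   # a float image in [0, 1]
out = img_as_uint(image)            # dtype uint16, values scaled to [0, 65535]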
doc_24215 |
from numpy.random import SeedSequence, default_rng
ss = SeedSequence(12345)
# Spawn off 10 child SeedSequences to pass to child processes.
child_seeds = ss.spawn(10)
streams = [default_rng(s) for s in child_seeds]
Child SeedSequence objects can also spawn to make grandchildren, and so on. Each SeedSequence has its position in the tree of spawned SeedSequence objects mixed in with the user-provided seed to generate independent (with very high probability) streams.
grandchildren = child_seeds[0].spawn(4)
grand_streams = [default_rng(s) for s in grandchildren]
This feature lets you make local decisions about when and how to split up streams without coordination between processes. You do not have to preallocate space to avoid overlapping or request streams from a common global service. This general “tree-hashing” scheme is not unique to numpy but not yet widespread. Python has increasingly-flexible mechanisms for parallelization available, and this scheme fits in very well with that kind of use. Using this scheme, an upper bound on the probability of a collision can be estimated if one knows the number of streams that you derive. SeedSequence hashes its inputs, both the seed and the spawn-tree-path, down to a 128-bit pool by default. The probability that there is a collision in that pool, pessimistically estimated (1), will be about \(n^2*2^{-128}\) where n is the number of streams spawned. If a program uses an aggressive million streams, about \(2^{20}\), then the probability that at least one pair of them are identical is about \(2^{-88}\), which is in solidly-ignorable territory (2).
1
The algorithm is carefully designed to eliminate a number of possible ways to collide. For example, if one only does one level of spawning, it is guaranteed that all states will be unique. But it’s easier to estimate the naive upper bound on a napkin and take comfort knowing that the probability is actually lower.
2
In this calculation, we can mostly ignore the amount of numbers drawn from each stream. See Upgrading PCG64 with PCG64DXSM for the technical details about PCG64. The other PRNGs we provide have some extra protection built in that avoids overlaps if the SeedSequence pools differ in the slightest bit. PCG64DXSM has \(2^{127}\) separate cycles determined by the seed in addition to the position in the \(2^{128}\) long period for each cycle, so one has to both get on or near the same cycle and seed a nearby position in the cycle. Philox has completely independent cycles determined by the seed. SFC64 incorporates a 64-bit counter so every unique seed is at least \(2^{64}\) iterations away from any other seed. And finally, MT19937 has just an unimaginably huge period. Getting a collision internal to SeedSequence is the way a failure would be observed.
Independent Streams
Philox is a counter-based RNG which generates values by encrypting an incrementing counter using weak cryptographic primitives. The seed determines the key that is used for the encryption. Unique keys create unique, independent streams. Philox lets you bypass the seeding algorithm to directly set the 128-bit key. Similar, but different, keys will still create independent streams.
import secrets
from numpy.random import Philox
# 128-bit number as a seed
root_seed = secrets.getrandbits(128)
streams = [Philox(key=root_seed + stream_id) for stream_id in range(10)]
This scheme does require that you avoid reusing stream IDs. This may require coordination between the parallel processes.
Jumping the BitGenerator state
jumped advances the state of the BitGenerator as-if a large number of random numbers have been drawn, and returns a new instance with this state. The specific number of draws varies by BitGenerator, and ranges from \(2^{64}\) to \(2^{128}\). Additionally, the as-if draws also depend on the size of the default random number produced by the specific BitGenerator. The BitGenerators that support jumped, along with the period of the BitGenerator, the size of the jump and the bits in the default unsigned random are listed below.
BitGenerator Period Jump Size Bits per Draw
MT19937 \(2^{19937}-1\) \(2^{128}\) 32
PCG64 \(2^{128}\) \(~2^{127}\) (3) 64
PCG64DXSM \(2^{128}\) \(~2^{127}\) (3) 64
Philox \(2^{256}\) \(2^{128}\) 64
3
The jump size is \((\phi-1)*2^{128}\) where \(\phi\) is the golden ratio. As the jumps wrap around the period, the actual distances between neighboring streams will slowly grow smaller than the jump size, but using the golden ratio this way is a classic method of constructing a low-discrepancy sequence that spreads out the states around the period optimally. You will not be able to jump enough to make those distances small enough to overlap in your lifetime. jumped can be used to produce long blocks which should be long enough to not overlap.
import secrets
from numpy.random import PCG64
seed = secrets.getrandbits(128)
blocked_rng = []
rng = PCG64(seed)
for i in range(10):
blocked_rng.append(rng.jumped(i))
When using jumped, one does have to take care not to jump to a stream that was already used. In the above example, one could not later use blocked_rng[0].jumped() as it would overlap with blocked_rng[1]. Like with the independent streams, if the main process here wants to split off 10 more streams by jumping, then it needs to start with range(10, 20), otherwise it would recreate the same streams. On the other hand, if you carefully construct the streams, then you are guaranteed to have streams that do not overlap. | |
doc_24216 | The directory b. | |
doc_24217 | Similar to TimeInput.format | |
doc_24218 |
Draw streamlines of a vector flow. Parameters
x, y1D/2D arrays
Evenly spaced strictly increasing arrays to make a grid. If 2D, all rows of x must be equal and all columns of y must be equal; i.e., they must be as if generated by np.meshgrid(x_1d, y_1d).
u, v2D arrays
x and y-velocities. The number of rows and columns must match the length of y and x, respectively.
densityfloat or (float, float)
Controls the closeness of streamlines. When density = 1, the domain is divided into a 30x30 grid. density linearly scales this grid. Each cell in the grid can have, at most, one traversing streamline. For different densities in each direction, use a tuple (density_x, density_y).
linewidthfloat or 2D array
The width of the stream lines. With a 2D array the line width can be varied across the grid. The array must have the same shape as u and v.
colorcolor or 2D array
The streamline color. If given an array, its values are converted to colors using cmap and norm. The array must have the same shape as u and v.
cmapColormap
Colormap used to plot streamlines and arrows. This is only used if color is an array.
normNormalize
Normalize object used to scale luminance data to 0, 1. If None, stretch (min, max) to (0, 1). This is only used if color is an array.
arrowsizefloat
Scaling factor for the arrow size.
arrowstylestr
Arrow style specification. See FancyArrowPatch.
minlengthfloat
Minimum length of streamline in axes coordinates.
start_pointsNx2 array
Coordinates of starting points for the streamlines in data coordinates (the same coordinates as the x and y arrays).
zorderint
The zorder of the stream lines and arrows. Artists with lower zorder values are drawn first.
maxlengthfloat
Maximum length of streamline in axes coordinates.
integration_direction{'forward', 'backward', 'both'}, default: 'both'
Integrate the streamline in forward, backward or both directions.
dataindexable object, optional
If given, the following parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception): x, y, u, v, start_points Returns
StreamplotSet
Container object with attributes
lines: LineCollection of streamlines
arrows: PatchCollection containing FancyArrowPatch objects representing the arrows half-way along stream lines. This container will probably change in the future to allow changes to the colormap, alpha, etc. for both lines and arrows, but these changes should be backward compatible. | |
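A minimal sketch of a rotational flow (the grid and velocities are chosen for illustration):
import numpy as np
import matplotlib.pyplot as plt
Y, X = np.mgrid[-3:3:100j, -3:3:100j]   # evenly spaced grid, as from np.meshgrid
U, V = -Y, X                            # a simple rotational flow
fig, ax = plt.subplots()
ax.streamplot(X, Y, U, V, density=1.5, color=np.hypot(U, V), cmap='viridis')
plt.show()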
doc_24219 | Renders a response, providing the invalid form as context. | |
doc_24220 | Returns the maximum value of each slice of the input tensor in the given dimension(s) dim. Note
The difference between max/min and amax/amin is:
amax/amin supports reducing on multiple dimensions,
amax/amin does not return indices,
amax/amin evenly distributes gradient between equal values, while max(dim)/min(dim) propagates gradient only to a single index in the source tensor. If keepdim is True, the output tensors are of the same size as input except in the dimension(s) dim where they are of size 1. Otherwise, dims are squeezed (see torch.squeeze()), resulting in the output tensors having fewer dimensions than input. Parameters
input (Tensor) – the input tensor.
dim (int or tuple of python:ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.8177, 1.4878, -0.2491, 0.9130],
[-0.7158, 1.1775, 2.0992, 0.4817],
[-0.0053, 0.0164, -1.3738, -0.0507],
[ 1.9700, 1.1106, -1.0318, -1.0816]])
>>> torch.amax(a, 1)
tensor([1.4878, 2.0992, 0.0164, 1.9700]) | |
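Continuing the example, keepdim=True preserves the reduced dimension with size 1:
>>> torch.amax(a, 1, keepdim=True)
tensor([[1.4878],
        [2.0992],
        [0.0164],
        [1.9700]])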
doc_24221 |
Compute the outer product of two vectors. Given two vectors, a = [a0, a1, ..., aM] and b = [b0, b1, ..., bN], the outer product [1] is:
[[a0*b0  a0*b1 ... a0*bN ]
 [a1*b0    .
 [ ...          .
 [aM*b0            aM*bN ]]
Parameters
a(M,) array_like
First input vector. Input is flattened if not already 1-dimensional.
b(N,) array_like
Second input vector. Input is flattened if not already 1-dimensional.
out(M, N) ndarray, optional
A location where the result is stored. New in version 1.9.0. Returns
out(M, N) ndarray
out[i, j] = a[i] * b[j] See also inner
einsum
einsum('i,j->ij', a.ravel(), b.ravel()) is the equivalent. ufunc.outer
A generalization to dimensions other than 1D and other operations. np.multiply.outer(a.ravel(), b.ravel()) is the equivalent. tensordot
np.tensordot(a.ravel(), b.ravel(), axes=((), ())) is the equivalent. Notes Masked values are replaced by 0. References 1
: G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8. Examples Make a (very coarse) grid for computing a Mandelbrot set: >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5))
>>> rl
array([[-2., -1., 0., 1., 2.],
[-2., -1., 0., 1., 2.],
[-2., -1., 0., 1., 2.],
[-2., -1., 0., 1., 2.],
[-2., -1., 0., 1., 2.]])
>>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,)))
>>> im
array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j],
[0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j],
[0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]])
>>> grid = rl + im
>>> grid
array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j],
[-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j],
[-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j],
[-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j],
[-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]])
An example using a “vector” of letters: >>> x = np.array(['a', 'b', 'c'], dtype=object)
>>> np.outer(x, [1, 2, 3])
array([['a', 'aa', 'aaa'],
['b', 'bb', 'bbb'],
['c', 'cc', 'ccc']], dtype=object) | |
doc_24222 |
Parameters
usetexbool or None
Whether to render using TeX, None means to use rcParams["text.usetex"] (default: False). | |
doc_24223 |
Return the bool of a single element Series or DataFrame. This must be a boolean scalar value, either True or False. It will raise a ValueError if the Series or DataFrame does not have exactly 1 element, or that element is not boolean (integer values 0 and 1 will also raise an exception). Returns
bool
The value in the Series or DataFrame. See also Series.astype
Change the data type of a Series, including to boolean. DataFrame.astype
Change the data type of a DataFrame, including to boolean. numpy.bool_
NumPy boolean data type, used by pandas for boolean values. Examples The method will only work for single element objects with a boolean value:
>>> pd.Series([True]).bool()
True
>>> pd.Series([False]).bool()
False
>>> pd.DataFrame({'col': [True]}).bool()
True
>>> pd.DataFrame({'col': [False]}).bool()
False | |
doc_24224 |
Return the indices of the elements that are non-zero. Refer to numpy.nonzero for full documentation. See also numpy.nonzero
equivalent function | |
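For example:
>>> import numpy as np
>>> x = np.array([0, 3, 0, 5])
>>> x.nonzero()
(array([1, 3]),)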
doc_24225 | Same as gettempdir() but the return value is in bytes. New in version 3.5. | |
doc_24226 |
Test element-wise for NaT (not a time) and return result as a boolean array. New in version 1.13.0. Parameters
xarray_like
Input array with datetime or timedelta data type.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray or bool
True where x is NaT, false otherwise. This is a scalar if x is a scalar. See also
isnan, isinf, isneginf, isposinf, isfinite
Examples >>> np.isnat(np.datetime64("NaT"))
True
>>> np.isnat(np.datetime64("2016-01-01"))
False
>>> np.isnat(np.array(["NaT", "2016-01-01"], dtype="datetime64[ns]"))
array([ True, False]) | |
doc_24227 |
Roll provided date backward to next offset only if not on offset. | |
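A minimal sketch using MonthEnd (the offset class is chosen for illustration):
>>> import pandas as pd
>>> from pandas.tseries.offsets import MonthEnd
>>> MonthEnd().rollback(pd.Timestamp('2022-01-15'))
Timestamp('2021-12-31 00:00:00')
>>> MonthEnd().rollback(pd.Timestamp('2022-01-31'))  # already on offset: unchanged
Timestamp('2022-01-31 00:00:00')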
doc_24228 | Post an article using the POST command. The data argument is either a file object opened for binary reading, or any iterable of bytes objects (representing raw lines of the article to be posted). It should represent a well-formed news article, including the required headers. The post() method automatically escapes lines beginning with . and appends the termination line. If the method succeeds, the server’s response is returned. If the server refuses posting, a NNTPReplyError is raised. | |
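A minimal sketch; the server hostname and article file are hypothetical:
import nntplib
with nntplib.NNTP('news.example.com') as s:   # hypothetical server
    with open('article.txt', 'rb') as f:      # a well-formed article with headers
        response = s.post(f)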
doc_24229 | See Migration guide for more details. tf.compat.v1.train.Int64List
Attributes
value repeated int64 value | |
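A minimal sketch of wrapping integer values for a tf.train.Example:
import tensorflow as tf
int64_list = tf.train.Int64List(value=[1, 2, 3])
feature = tf.train.Feature(int64_list=int64_list)
example = tf.train.Example(
    features=tf.train.Features(feature={'ids': feature}))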
doc_24230 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
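A minimal sketch (the estimator is chosen for illustration):
>>> from sklearn.linear_model import LogisticRegression
>>> est = LogisticRegression(C=0.5)
>>> est.get_params()['C']
0.5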
doc_24231 | >>> clear_part, encrypted_part, finale = font.parts
>>> slanted_font = font.transform({'slant': 0.167})
>>> extended_font = font.transform({'extend': 1.2})
Sources: Adobe Technical Note #5040, Supporting Downloadable PostScript Language Fonts. Adobe Type 1 Font Format, Adobe Systems Incorporated, third printing, v1.1, 1993. ISBN 0-201-57044-0.
class matplotlib.type1font.Type1Font(input)[source]
Bases: object A class representing a Type-1 font, for use by backends. Attributes
partstuple
A 3-tuple of the cleartext part, the encrypted part, and the finale of zeros.
decryptedbytes
The decrypted form of parts[1].
propdict[str, Any]
A dictionary of font properties. Initialize a Type-1 font. Parameters
inputstr or 3-tuple
Either a pfb file name, or a 3-tuple of already-decoded Type-1 font parts. decrypted
parts
prop
transform(effects)[source]
Return a new font that is slanted and/or extended. Parameters
effectsdict
A dict with optional entries:
'slant'float, default: 0
Tangent of the angle that the font is to be slanted to the right. Negative values slant to the left.
'extend'float, default: 1
Scaling factor for the font width. Values less than 1 condense the glyphs. Returns
Type1Font | |
doc_24232 | Return True if the object is a Python function, which includes functions created by a lambda expression. | |
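For example:
>>> import inspect
>>> def f(): pass
>>> inspect.isfunction(f)
True
>>> inspect.isfunction(lambda: 0)
True
>>> inspect.isfunction(len)  # built-ins are not Python functions
False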
doc_24233 |
An ExtensionArray for storing sparse data. Parameters
data:array-like or scalar
A dense array of values to store in the SparseArray. This may contain fill_value.
sparse_index:SparseIndex, optional
index:Index
Deprecated since version 1.4.0: Use a function like np.full to construct an array with the desired repeats of the scalar value instead.
fill_value:scalar, optional
Elements in data that are fill_value are not stored in the SparseArray. For memory savings, this should be the most common value in data. By default, fill_value depends on the dtype of data:
data.dtype na_value
float np.nan
int 0
bool False
datetime64 pd.NaT
timedelta64 pd.NaT
The fill value is potentially specified in three ways. In order of precedence, these are:
1. The fill_value argument
2. dtype.fill_value if fill_value is None and dtype is a SparseDtype
3. data.dtype.fill_value if fill_value is None and dtype is not a SparseDtype and data is a SparseArray.
kind:str
Can be ‘integer’ or ‘block’, default is ‘integer’. The type of storage for sparse locations. ‘block’: Stores a block and block_length for each contiguous span of sparse values. This is best when sparse data tends to be clumped together, with large regions of fill-value values between sparse values. ‘integer’: uses an integer to store the location of each sparse value.
dtype:np.dtype or SparseDtype, optional
The dtype to use for the SparseArray. For numpy dtypes, this determines the dtype of self.sp_values. For SparseDtype, this determines self.sp_values and self.fill_value.
copy:bool, default False
Whether to explicitly copy the incoming data array. Examples
>>> from pandas.arrays import SparseArray
>>> arr = SparseArray([0, 0, 1, 2])
>>> arr
[0, 0, 1, 2]
Fill: 0
IntIndex
Indices: array([2, 3], dtype=int32)
Attributes
None Methods
None | |
doc_24234 | logical_xor() is a logical operation which takes two logical operands (see Logical operands). The result is the digit-wise exclusive or of the two operands. | |
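For example, via the decimal module's Context.logical_xor (logical operands are made of 0 and 1 digits):
>>> from decimal import Context, Decimal
>>> Context().logical_xor(Decimal('0101'), Decimal('1100'))
Decimal('1001')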
doc_24235 | Special reducer that can be defined in Pickler subclasses. This method has priority over any reducer in the dispatch_table. It should conform to the same interface as a __reduce__() method, and can optionally return NotImplemented to fallback on dispatch_table-registered reducers to pickle obj. For a detailed example, see Custom Reduction for Types, Functions, and Other Objects. New in version 3.8. | |
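A minimal sketch of a Pickler subclass using this hook; MyClass and its reduction are hypothetical:
import io
import pickle

class MyClass:
    def __init__(self, value):
        self.value = value

class MyPickler(pickle.Pickler):
    def reducer_override(self, obj):
        if isinstance(obj, MyClass):
            # Same interface as __reduce__: (callable, args)
            return (MyClass, (obj.value,))
        # Fall back to dispatch_table-registered reducers / default behavior.
        return NotImplemented

buf = io.BytesIO()
MyPickler(buf).dump(MyClass(42))
assert pickle.loads(buf.getvalue()).value == 42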
doc_24236 | tf.compat.v1.data.experimental.CsvDataset(
filenames, record_defaults, compression_type=None, buffer_size=None,
header=False, field_delim=',', use_quote_delim=True,
na_value='', select_cols=None, exclude_cols=None
)
Args
filenames A tf.string tensor containing one or more filenames.
record_defaults A list of default values for the CSV fields. Each item in the list is either a valid CSV DType (float32, float64, int32, int64, string), or a Tensor object with one of the above types. One per column of CSV data, with either a scalar Tensor default value for the column if it is optional, or DType or empty Tensor if required. If both this and select_columns are specified, these must have the same lengths, and column_defaults is assumed to be sorted in order of increasing column index. If both this and 'exclude_cols' are specified, the sum of lengths of record_defaults and exclude_cols should equal the total number of columns in the CSV file.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP". Defaults to no compression.
buffer_size (Optional.) A tf.int64 scalar denoting the number of bytes to buffer while reading files. Defaults to 4MB.
header (Optional.) A tf.bool scalar indicating whether the CSV file(s) have header line(s) that should be skipped when parsing. Defaults to False.
field_delim (Optional.) A tf.string scalar containing the delimiter character that separates fields in a record. Defaults to ",".
use_quote_delim (Optional.) A tf.bool scalar. If False, treats double quotation marks as regular characters inside of string fields (ignoring RFC 4180, Section 2, Bullet 5). Defaults to True.
na_value (Optional.) A tf.string scalar indicating a value that will be treated as NA/NaN.
select_cols (Optional.) A sorted list of column indices to select from the input data. If specified, only this subset of columns will be parsed. Defaults to parsing all columns. At most one of select_cols and exclude_cols can be specified.
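For example, a minimal construction sketch (the file path is hypothetical; a DType marks a required column, a scalar default Tensor an optional one):
dataset = tf.compat.v1.data.experimental.CsvDataset(
    "data.csv",                                           # hypothetical file
    record_defaults=[tf.float32,                          # required float column
                     tf.constant([0], dtype=tf.int32)],   # optional int, default 0
    header=True)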
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset).
output_shapes Returns the shape of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset).
output_types Returns the type of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset).
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source
filter_with_legacy_function(
predicate
)
Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter().
Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
Args
predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1)
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, using either the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_sparse_tensor_slices View source
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices().
Args
sparse_tensor A tf.sparse.SparseTensor.
Returns
Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the directory, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source
make_initializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Args
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of this dataset.
Raises
RuntimeError If eager execution is enabled. make_one_shot_iterator View source
make_one_shot_iterator()
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator.
Example:
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Returns A tf.data.Iterator for elements of this dataset.
map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. map_with_legacy_function View source
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map() instead.
Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2.
Args
map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's built-in range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: Its expected dtype. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state; the structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows:
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing in a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride between the input elements that are gathered into each window. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | |
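A minimal sketch of __len__ and the tf.data.Dataset.cardinality alternative described above, assuming eager execution:
dataset = tf.data.Dataset.range(42)
len(dataset)            # 42: eager mode, known finite length
dataset.cardinality()   # tf.int64 scalar Tensor; use this in graph mode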
doc_24237 | Returns the point of intersection
overlap(othermask, offset) -> (x, y)
overlap(othermask, offset) -> None
Returns the first point of intersection encountered between this mask and othermask. A point of intersection is 2 overlapping set bits. The current algorithm searches the overlapping area in sizeof(unsigned long int) * CHAR_BIT bit wide column blocks (the value of sizeof(unsigned long int) * CHAR_BIT is platform dependent, for clarity it will be referred to as W). Starting at the top left corner it checks bits 0 to W - 1 of the first row ((0, 0) to (W - 1, 0)) then continues to the next row ((0, 1) to (W - 1, 1)). Once this entire column block is checked, it continues to the next one (W to 2 * W - 1). This is repeated until it finds a point of intersection or the entire overlapping area is checked.
Parameters:
othermask (Mask) -- the other mask to overlap with this mask
offset (tuple(int, int) or list[int, int]) -- the offset of othermask from this mask, for more details refer to the Mask offset notes
Returns:
point of intersection or None if no intersection
Return type:
tuple(int, int) or NoneType | |
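A minimal usage sketch, assuming pygame 2's Mask((width, height), fill=True) constructor for building fully set masks:
import pygame
m1 = pygame.mask.Mask((4, 4), fill=True)  # all bits set
m2 = pygame.mask.Mask((4, 4), fill=True)
m1.overlap(m2, (2, 2))  # (2, 2): first overlapping set bit, relative to m1
m1.overlap(m2, (4, 0))  # None: the masks do not intersect at this offset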
doc_24238 | tf.keras.constraints.non_neg Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.NonNeg, tf.compat.v1.keras.constraints.non_neg Also available via the shortcut function tf.keras.constraints.non_neg. Methods get_config View source
get_config()
__call__ View source
__call__(
w
)
Call self as a function. | |
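A minimal usage sketch; the constraint object is typically passed to a layer's kernel_constraint argument:
import tensorflow as tf
layer = tf.keras.layers.Dense(
    4, kernel_constraint=tf.keras.constraints.NonNeg())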
doc_24239 |
Find all occurrences of pattern or regular expression in the Series/Index. Equivalent to applying re.findall() to all the elements in the Series/Index. Parameters
pat:str
Pattern or regular expression.
flags:int, default 0
Flags from re module, e.g. re.IGNORECASE (default is 0, which means no flags). Returns
Series/Index of lists of strings
All non-overlapping matches of pattern or regular expression in each string of this Series/Index. See also count
Count occurrences of pattern or regular expression in each string of the Series/Index. extractall
For each string in the Series, extract groups from all matches of regular expression and return a DataFrame with one row for each match and one column for each group. re.findall
The equivalent re function to all non-overlapping matches of pattern or regular expression in string, as a list of strings. Examples
>>> s = pd.Series(['Lion', 'Monkey', 'Rabbit'])
The search for the pattern ‘Monkey’ returns one match:
>>> s.str.findall('Monkey')
0 []
1 [Monkey]
2 []
dtype: object
On the other hand, the search for the pattern ‘MONKEY’ doesn’t return any match:
>>> s.str.findall('MONKEY')
0 []
1 []
2 []
dtype: object
Flags can be added to the pattern or regular expression. For instance, to find the pattern ‘MONKEY’ ignoring the case:
>>> import re
>>> s.str.findall('MONKEY', flags=re.IGNORECASE)
0 []
1 [Monkey]
2 []
dtype: object
When the pattern matches more than one string in the Series, all matches are returned:
>>> s.str.findall('on')
0 [on]
1 [on]
2 []
dtype: object
Regular expressions are supported too. For instance, the search for all the strings ending with the word ‘on’ is shown next:
>>> s.str.findall('on$')
0 [on]
1 []
2 []
dtype: object
If the pattern is found more than once in the same string, then a list of multiple strings is returned:
>>> s.str.findall('b')
0 []
1 []
2 [b, b]
dtype: object | |
doc_24240 | class ast.NotEq
class ast.Lt
class ast.LtE
class ast.Gt
class ast.GtE
class ast.Is
class ast.IsNot
class ast.In
class ast.NotIn
Comparison operator tokens. | |
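A short sketch of where these tokens appear: a chained comparison parses to a single ast.Compare node whose ops list holds one token per operator:
import ast
node = ast.parse("1 < 2 <= 3", mode="eval").body  # an ast.Compare node
[type(op).__name__ for op in node.ops]  # ['Lt', 'LtE']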
doc_24241 | A pickler object’s dispatch table is a registry of reduction functions of the kind which can be declared using copyreg.pickle(). It is a mapping whose keys are classes and whose values are reduction functions. A reduction function takes a single argument of the associated class and should conform to the same interface as a __reduce__() method. By default, a pickler object will not have a dispatch_table attribute, and it will instead use the global dispatch table managed by the copyreg module. However, to customize the pickling for a specific pickler object one can set the dispatch_table attribute to a dict-like object. Alternatively, if a subclass of Pickler has a dispatch_table attribute then this will be used as the default dispatch table for instances of that class. See Dispatch Tables for usage examples. New in version 3.3. | |
doc_24242 | KDTree for fast generalized N-point problems Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
n_samples is the number of points in the data set, and n_features is the dimension of the parameter space. Note: if X is a C-contiguous array of doubles then data will not be copied. Otherwise, an internal copy will be made.
leaf_sizepositive int, default=40
Number of points at which to switch to brute-force. Changing leaf_size will not affect the results of a query, but can significantly impact the speed of a query and the memory required to store the constructed tree. The amount of memory needed to store the tree scales as approximately n_samples / leaf_size. For a specified leaf_size, a leaf node is guaranteed to satisfy leaf_size <= n_points <= 2 * leaf_size, except in the case that n_samples < leaf_size.
metricstr or DistanceMetric object
the distance metric to use for the tree. Default=’minkowski’ with p=2 (that is, a euclidean metric). See the documentation of the DistanceMetric class for a list of available metrics. kd_tree.valid_metrics gives a list of the metrics which are valid for KDTree. Additional keywords are passed to the distance metric class.
Note: Callable functions in the metric parameter are NOT supported for KDTree and Ball Tree. Function call overhead will result in very poor performance.
Attributes
datamemory view
The training data Examples Query for k-nearest neighbors >>> import numpy as np
>>> rng = np.random.RandomState(0)
>>> X = rng.random_sample((10, 3)) # 10 points in 3 dimensions
>>> tree = KDTree(X, leaf_size=2)
>>> dist, ind = tree.query(X[:1], k=3)
>>> print(ind) # indices of 3 closest neighbors
[0 3 1]
>>> print(dist) # distances to 3 closest neighbors
[ 0. 0.19662693 0.29473397]
Pickle and Unpickle a tree. Note that the state of the tree is saved in the pickle operation: the tree needs not be rebuilt upon unpickling. >>> import numpy as np
>>> import pickle
>>> rng = np.random.RandomState(0)
>>> X = rng.random_sample((10, 3)) # 10 points in 3 dimensions
>>> tree = KDTree(X, leaf_size=2)
>>> s = pickle.dumps(tree)
>>> tree_copy = pickle.loads(s)
>>> dist, ind = tree_copy.query(X[:1], k=3)
>>> print(ind) # indices of 3 closest neighbors
[0 3 1]
>>> print(dist) # distances to 3 closest neighbors
[ 0. 0.19662693 0.29473397]
Query for neighbors within a given radius >>> import numpy as np
>>> rng = np.random.RandomState(0)
>>> X = rng.random_sample((10, 3)) # 10 points in 3 dimensions
>>> tree = KDTree(X, leaf_size=2)
>>> print(tree.query_radius(X[:1], r=0.3, count_only=True))
3
>>> ind = tree.query_radius(X[:1], r=0.3)
>>> print(ind) # indices of neighbors within distance 0.3
[3 0 1]
Compute a gaussian kernel density estimate: >>> import numpy as np
>>> rng = np.random.RandomState(42)
>>> X = rng.random_sample((100, 3))
>>> tree = KDTree(X)
>>> tree.kernel_density(X[:3], h=0.1, kernel='gaussian')
array([ 6.94114649, 7.83281226, 7.2071716 ])
Compute a two-point auto-correlation function >>> import numpy as np
>>> rng = np.random.RandomState(0)
>>> X = rng.random_sample((30, 3))
>>> r = np.linspace(0, 1, 5)
>>> tree = KDTree(X)
>>> tree.two_point_correlation(X, r)
array([ 30, 62, 278, 580, 820])
Methods
get_arrays(self) Get data and node arrays.
get_n_calls(self) Get number of calls.
get_tree_stats(self) Get tree status.
kernel_density(self, X, h[, kernel, atol, …]) Compute the kernel density estimate at points X with the given kernel, using the distance metric specified at tree creation.
query(X[, k, return_distance, dualtree, …]) query the tree for the k nearest neighbors
query_radius(X, r[, return_distance, …]) query the tree for neighbors within a radius r
reset_n_calls(self) Reset number of calls to 0.
two_point_correlation(X, r[, dualtree]) Compute the two-point correlation function
get_arrays(self)
Get data and node arrays. Returns
arrays: tuple of array
Arrays for storing tree data, index, node data and node bounds.
get_n_calls(self)
Get number of calls. Returns
n_calls: int
number of distance computation calls
get_tree_stats(self)
Get tree status. Returns
tree_stats: tuple of int
(number of trims, number of leaves, number of splits)
kernel_density(self, X, h, kernel='gaussian', atol=0, rtol=1e-08, breadth_first=True, return_log=False)
Compute the kernel density estimate at points X with the given kernel, using the distance metric specified at tree creation. Parameters
Xarray-like of shape (n_samples, n_features)
An array of points to query. Last dimension should match dimension of training data.
hfloat
the bandwidth of the kernel
kernelstr, default=”gaussian”
specify the kernel to use. Options are - ‘gaussian’ - ‘tophat’ - ‘epanechnikov’ - ‘exponential’ - ‘linear’ - ‘cosine’ Default is kernel = ‘gaussian’
atol, rtolfloat, default=0, 1e-8
Specify the desired relative and absolute tolerance of the result. If the true result is K_true, then the returned result K_ret satisfies abs(K_true - K_ret) < atol + rtol * K_ret The default is zero (i.e. machine precision) for both.
breadth_firstbool, default=False
If True, use a breadth-first search. If False (default) use a depth-first search. Breadth-first is generally faster for compact kernels and/or high tolerances.
return_logbool, default=False
Return the logarithm of the result. This can be more accurate than returning the result itself for narrow kernels. Returns
densityndarray of shape X.shape[:-1]
The array of (log)-density evaluations
query(X, k=1, return_distance=True, dualtree=False, breadth_first=False)
query the tree for the k nearest neighbors Parameters
Xarray-like of shape (n_samples, n_features)
An array of points to query
kint, default=1
The number of nearest neighbors to return
return_distancebool, default=True
if True, return a tuple (d, i) of distances and indices if False, return array i
dualtreebool, default=False
if True, use the dual tree formalism for the query: a tree is built for the query points, and the pair of trees is used to efficiently search this space. This can lead to better performance as the number of points grows large.
breadth_firstbool, default=False
if True, then query the nodes in a breadth-first manner. Otherwise, query the nodes in a depth-first manner.
sort_resultsbool, default=True
if True, then distances and indices of each point are sorted on return, so that the first column contains the closest points. Otherwise, neighbors are returned in an arbitrary order. Returns
iif return_distance == False
(d,i)if return_distance == True
dndarray of shape X.shape[:-1] + (k,), dtype=double
Each entry gives the list of distances to the neighbors of the corresponding point.
indarray of shape X.shape[:-1] + (k,), dtype=int
Each entry gives the list of indices of neighbors of the corresponding point.
query_radius(X, r, return_distance=False, count_only=False, sort_results=False)
query the tree for neighbors within a radius r Parameters
Xarray-like of shape (n_samples, n_features)
An array of points to query
rdistance within which neighbors are returned
r can be a single value, or an array of values of shape x.shape[:-1] if different radii are desired for each point.
return_distancebool, default=False
if True, return distances to neighbors of each point if False, return only neighbors Note that unlike the query() method, setting return_distance=True here adds to the computation time. Not all distances need to be calculated explicitly for return_distance=False. Results are not sorted by default: see sort_results keyword.
count_onlybool, default=False
if True, return only the count of points within distance r if False, return the indices of all points within distance r If return_distance==True, setting count_only=True will result in an error.
sort_resultsbool, default=False
if True, the distances and indices will be sorted before being returned. If False, the results will not be sorted. If return_distance == False, setting sort_results = True will result in an error. Returns
countif count_only == True
indif count_only == False and return_distance == False
(ind, dist)if count_only == False and return_distance == True
countndarray of shape X.shape[:-1], dtype=int
Each entry gives the number of neighbors within a distance r of the corresponding point.
indndarray of shape X.shape[:-1], dtype=object
Each element is a numpy integer array listing the indices of neighbors of the corresponding point. Note that unlike the results of a k-neighbors query, the returned neighbors are not sorted by distance by default.
distndarray of shape X.shape[:-1], dtype=object
Each element is a numpy double array listing the distances corresponding to indices in i.
reset_n_calls(self)
Reset number of calls to 0.
two_point_correlation(X, r, dualtree=False)
Compute the two-point correlation function Parameters
Xarray-like of shape (n_samples, n_features)
An array of points to query. Last dimension should match dimension of training data.
rarray-like
A one-dimensional array of distances
dualtreebool, default=False
If True, use a dualtree algorithm. Otherwise, use a single-tree algorithm. Dual tree algorithms can have better scaling for large N. Returns
countsndarray
counts[i] contains the number of pairs of points with distance less than or equal to r[i] | |
doc_24243 | Get and remove an attribute by name. Like dict.pop(). Parameters
name (str) – Name of attribute to pop.
default (Any) – Value to return if the attribute is not present, instead of raising a KeyError. Return type
Any Changelog New in version 0.11. | |
doc_24244 | See torch.isclose() | |
doc_24245 |
Attributes
base Returns a copy of the calling offset object with n=1 and all other attributes equal.
delta
freqstr
kwds
n
name
nanos
normalize
rule_code Methods
__call__(*args, **kwargs) Call self as a function.
rollback Roll provided date backward to next offset only if not on offset.
rollforward Roll provided date forward to next offset only if not on offset.
apply
apply_index
copy
isAnchored
is_anchored
is_month_end
is_month_start
is_on_offset
is_quarter_end
is_quarter_start
is_year_end
is_year_start
onOffset | |
doc_24246 | See Migration guide for more details. tf.compat.v1.raw_ops.TensorArrayCloseV3
tf.raw_ops.TensorArrayCloseV3(
handle, name=None
)
This enables the user to close and release the resource in the middle of a step/run.
Args
handle A Tensor of type resource. The handle to a TensorArray (output of TensorArray or TensorArrayGrad).
name A name for the operation (optional).
Returns The created Operation. | |
doc_24247 |
Return whether the artist uses clipping. | |
doc_24248 | A bitmask or’ing together all the reporting flags above. | |
doc_24249 |
Perform a diameter opening of the image. Diameter opening removes all bright structures of an image with maximal extension smaller than diameter_threshold. The maximal extension is defined as the maximal extension of the bounding box. The operator is also called Bounding Box Opening. In practice, the result is similar to a morphological opening, but long and thin structures are not removed. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the diameter_opening is to be calculated. This image can be of any type.
diameter_thresholdunsigned int
The maximal extension parameter (number of pixels). The default value is 8.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for a 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Parent image representing the max tree of the image. The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
outputndarray
Output image of the same shape and type as the input image. See also
skimage.morphology.area_opening
skimage.morphology.area_closing
skimage.morphology.diameter_closing
skimage.morphology.max_tree
References
1
Walter, T., & Klein, J.-C. (2002). Automatic Detection of Microaneurysms in Color Fundus Images of the Human Retina by Means of the Bounding Box Closing. In A. Colosimo, P. Sirabella, A. Giuliani (Eds.), Medical Data Analysis. Lecture Notes in Computer Science, vol 2526, pp. 210-220. Springer Berlin Heidelberg. DOI:10.1007/3-540-36104-9_23
2
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a maximum in the center and 4 additional local maxima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 40; f[2:4,9:11] = 60; f[9:11,2:4] = 80
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the diameter opening: >>> opened = diameter_opening(f, 3, connectivity=1)
The peaks with a maximal extension of 2 or less are removed. The remaining peaks have all a maximal extension of at least 3. | |
doc_24250 | Call the get_content() method of the content_manager, passing self as the message object, and passing along any other arguments or keywords as additional arguments. If content_manager is not specified, use the content_manager specified by the current policy. | |
doc_24251 | Volume number of file header. | |
doc_24252 |
Set the linewidth(s) for the collection. lw can be a scalar or a sequence; if it is a sequence the patches will cycle through the sequence. Parameters
lwfloat or list of floats | |
doc_24253 |
Force the mask to hard. Whether the mask of a masked array is hard or soft is determined by its hardmask property. harden_mask sets hardmask to True. See also ma.MaskedArray.hardmask | |
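A minimal sketch of the effect; with a hard mask, assignment cannot unmask an entry:
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[False, True, False])
x.harden_mask()
x[1] = 99   # silently ignored: the hard mask protects masked entries
print(x)    # [1 -- 3]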
doc_24254 | Returns a tuple of: [0] a pretty-printed representation (as valid Python syntax) of the internal graph for the forward method. See code. [1] a ConstMap following the CONSTANT.cN format of the output in [0]. The indices in the [0] output are keys to the underlying constant’s values. See Inspecting Code for details. | |
doc_24255 |
Return whether the artist is pickable. See also
set_picker, get_picker, pick | |
doc_24256 | sklearn.metrics.hinge_loss(y_true, pred_decision, *, labels=None, sample_weight=None) [source]
Average hinge loss (non-regularized). In binary class case, assuming labels in y_true are encoded with +1 and -1, when a prediction mistake is made, margin = y_true * pred_decision is always negative (since the signs disagree), implying 1 - margin is always greater than 1. The cumulated hinge loss is therefore an upper bound of the number of mistakes made by the classifier. In multiclass case, the function expects that either all the labels are included in y_true or an optional labels argument is provided which contains all the labels. The multilabel margin is calculated according to Crammer-Singer’s method. As in the binary case, the cumulated hinge loss is an upper bound of the number of mistakes made by the classifier. Read more in the User Guide. Parameters
y_truearray of shape (n_samples,)
True target, consisting of integers of two values. The positive label must be greater than the negative label.
pred_decisionarray of shape (n_samples,) or (n_samples, n_classes)
Predicted decisions, as output by decision_function (floats).
labelsarray-like, default=None
Contains all the labels for the problem. Used in multiclass hinge loss.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
lossfloat
References
1
Wikipedia entry on the Hinge loss.
2
Koby Crammer, Yoram Singer. On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines. Journal of Machine Learning Research 2, (2001), 265-292.
3
L1 AND L2 Regularization for Multiclass Hinge Loss Models by Robert C. Moore, John DeNero. Examples >>> from sklearn import svm
>>> from sklearn.metrics import hinge_loss
>>> X = [[0], [1]]
>>> y = [-1, 1]
>>> est = svm.LinearSVC(random_state=0)
>>> est.fit(X, y)
LinearSVC(random_state=0)
>>> pred_decision = est.decision_function([[-2], [3], [0.5]])
>>> pred_decision
array([-2.18..., 2.36..., 0.09...])
>>> hinge_loss([-1, 1, 1], pred_decision)
0.30...
In the multiclass case: >>> import numpy as np
>>> X = np.array([[0], [1], [2], [3]])
>>> Y = np.array([0, 1, 2, 3])
>>> labels = np.array([0, 1, 2, 3])
>>> est = svm.LinearSVC()
>>> est.fit(X, Y)
LinearSVC()
>>> pred_decision = est.decision_function([[-1], [2], [3]])
>>> y_true = [0, 2, 3]
>>> hinge_loss(y_true, pred_decision, labels=labels)
0.56... | |
doc_24257 |
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array. | |
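A minimal sketch, illustrated with StandardScaler as one transformer exposing this method:
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0], [1.0], [2.0]])
X_new = StandardScaler().fit_transform(X)  # equivalent to .fit(X).transform(X)
# X_new is the standardized array [[-1.2247...], [0.], [1.2247...]]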
doc_24258 |
Alias for set_markeredgewidth. | |
doc_24259 | class sklearn.gaussian_process.kernels.ConstantKernel(constant_value=1.0, constant_value_bounds=1e-05, 100000.0) [source]
Constant kernel. Can be used as part of a product-kernel where it scales the magnitude of the other factor (kernel) or as part of a sum-kernel, where it modifies the mean of the Gaussian process. \[k(x_1, x_2) = constant\_value \;\forall\; x_1, x_2\] Adding a constant kernel is equivalent to adding a constant: kernel = RBF() + ConstantKernel(constant_value=2)
is the same as: kernel = RBF() + 2
Read more in the User Guide. New in version 0.18. Parameters
constant_valuefloat, default=1.0
The constant value which defines the covariance: k(x_1, x_2) = constant_value
constant_value_boundspair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on constant_value. If set to “fixed”, constant_value cannot be changed during hyperparameter tuning. Attributes
bounds
Returns the log-transformed bounds on the theta. hyperparameter_constant_value
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Whether the kernel works only on fixed-length feature vectors.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Examples >>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import RBF, ConstantKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = RBF() + ConstantKernel(constant_value=2)
>>> gpr = GaussianProcessRegressor(kernel=kernel, alpha=5,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.3696...
>>> gpr.predict(X[:1,:], return_std=True)
(array([606.1...]), array([0.24...]))
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_X, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Whether the kernel works only on fixed-length feature vectors.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
Examples using sklearn.gaussian_process.kernels.ConstantKernel
Illustration of prior and posterior Gaussian process for different kernels
Iso-probability lines for Gaussian Processes classification (GPC)
Gaussian Processes regression: basic introductory example | |
doc_24260 | Enable capability (see RFC 5161). Most capabilities do not need to be enabled. Currently only the UTF8=ACCEPT capability is supported (see RFC 6855). New in version 3.5: The enable() method itself, and RFC 6855 support. | |
doc_24261 |
Set the coordinate system to use for Annotation.xyann. See also xycoords in Annotation. | |
doc_24262 | Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. torch.randn_like(input) is equivalent to torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device). Parameters
input (Tensor) – the size of input will determine size of the output tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | |
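A minimal sketch of the equivalence stated above:
import torch

x = torch.empty(2, 3, dtype=torch.float64)
y = torch.randn_like(x)  # shape (2, 3), dtype float64, same device as x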
doc_24263 | Translate the host/port argument into a sequence of 5-tuples that contain all the necessary arguments for creating a socket connected to that service. host is a domain name, a string representation of an IPv4/v6 address or None. port is a string service name such as 'http', a numeric port number or None. By passing None as the value of host and port, you can pass NULL to the underlying C API. The family, type and proto arguments can be optionally specified in order to narrow the list of addresses returned. Passing zero as a value for each of these arguments selects the full range of results. The flags argument can be one or several of the AI_* constants, and will influence how results are computed and returned. For example, AI_NUMERICHOST will disable domain name resolution and will raise an error if host is a domain name. The function returns a list of 5-tuples with the following structure: (family, type, proto, canonname, sockaddr) In these tuples, family, type, proto are all integers and are meant to be passed to the socket() function. canonname will be a string representing the canonical name of the host if AI_CANONNAME is part of the flags argument; else canonname will be empty. sockaddr is a tuple describing a socket address, whose format depends on the returned family (a (address, port) 2-tuple for AF_INET, a (address, port, flowinfo, scope_id) 4-tuple for AF_INET6), and is meant to be passed to the socket.connect() method. Raises an auditing event socket.getaddrinfo with arguments host, port, family, type, protocol. The following example fetches address information for a hypothetical TCP connection to example.org on port 80 (results may differ on your system if IPv6 isn’t enabled): >>> socket.getaddrinfo("example.org", 80, proto=socket.IPPROTO_TCP)
[(<AddressFamily.AF_INET6: 10>, <SocketType.SOCK_STREAM: 1>,
6, '', ('2606:2800:220:1:248:1893:25c8:1946', 80, 0, 0)),
(<AddressFamily.AF_INET: 2>, <SocketType.SOCK_STREAM: 1>,
6, '', ('93.184.216.34', 80))]
Changed in version 3.2: parameters can now be passed using keyword arguments. Changed in version 3.7: for IPv6 multicast addresses, the string representing an address will not contain the %scope_id part. | |
doc_24264 | socket.PF_CAN
SOL_CAN_*
CAN_*
Many constants of these forms, documented in the Linux documentation, are also defined in the socket module. Availability: Linux >= 2.6.25. New in version 3.3. | |
doc_24265 |
Reduce X to the selected features. Parameters
Xarray of shape [n_samples, n_features]
The input samples. Returns
X_rarray of shape [n_samples, n_selected_features]
The input samples with only the selected features.
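A minimal sketch (using SelectKBest as the selector; any fitted feature selector exposes the same method):
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectKBest, f_classif
>>> X, y = load_iris(return_X_y=True)
>>> selector = SelectKBest(f_classif, k=2).fit(X, y)
>>> selector.transform(X).shape
(150, 2) | |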
doc_24266 | Analyze the given sample and return a Dialect subclass reflecting the parameters found. If the optional delimiters parameter is given, it is interpreted as a string containing possible valid delimiter characters.
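A minimal sketch:
>>> import csv
>>> dialect = csv.Sniffer().sniff("a;b;c\r\n1;2;3\r\n")
>>> dialect.delimiter
';' | |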
doc_24267 | tf.compat.v1.nn.dynamic_rnn(
cell, inputs, sequence_length=None, initial_state=None, dtype=None,
parallel_iterations=None, swap_memory=False, time_major=False, scope=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use keras.layers.RNN(cell), which is equivalent to this API. Performs fully dynamic unrolling of inputs. Example: # create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)
# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
initial_state=initial_state,
dtype=tf.float32)
# create 2 LSTMCells
rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
# create a RNN cell composed sequentially of a number of RNNCells
multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers)
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is a N-tuple where N is the number of LSTMCells containing a
# tf.nn.rnn_cell.LSTMStateTuple for each cell
outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
Args
cell An instance of RNNCell.
inputs The RNN inputs. If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements. If time_major == True, this must be a Tensor of shape: [max_time, batch_size, ...], or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to cell at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to cell at each time step will be a Tensor or (possibly nested) tuple of Tensors each with dimensions [batch_size, ...].
sequence_length (optional) An int32/int64 vector sized [batch_size]. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness.
initial_state (optional) An initial state for the RNN. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell.state_size.
dtype (optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
parallel_iterations (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
swap_memory Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
time_major The shape format of the inputs and outputs Tensors. If true, these Tensors must be shaped [max_time, batch_size, depth]. If false, these Tensors must be shaped [batch_size, max_time, depth]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
scope VariableScope for the created subgraph; defaults to "rnn".
Returns A pair (outputs, state) where: outputs The RNN output Tensor. If time_major == False (default), this will be a Tensor shaped: [batch_size, max_time, cell.output_size]. If time_major == True, this will be a Tensor shaped: [max_time, batch_size, cell.output_size]. Note, if cell.output_size is a (possibly nested) tuple of integers or TensorShape objects, then outputs will be a tuple having the same structure as cell.output_size, containing Tensors having shapes corresponding to the shape data in cell.output_size.
state The final state. If cell.state_size is an int, this will be shaped [batch_size, cell.state_size]. If it is a TensorShape, this will be shaped [batch_size] + cell.state_size. If it is a (possibly nested) tuple of ints or TensorShape, this will be a tuple having the corresponding shapes. If cells are LSTMCells state will be a tuple containing a LSTMStateTuple for each cell.
Raises
TypeError If cell is not an instance of RNNCell.
ValueError If inputs is None or an empty list. | |
doc_24268 |
An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(…)) for instantiating an array. For more information, refer to the numpy module and examine the methods and attributes of an array. Parameters
(for the __new__ method; see Notes below)
shapetuple of ints
Shape of created array.
dtypedata-type, optional
Any object that can be interpreted as a numpy data type.
bufferobject exposing buffer interface, optional
Used to fill the array with data.
offsetint, optional
Offset of array data in buffer.
stridestuple of ints, optional
Strides of data in memory.
order{‘C’, ‘F’}, optional
Row-major (C-style) or column-major (Fortran-style) order. See also array
Construct an array. zeros
Create an array, each element of which is zero. empty
Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). dtype
Create a data-type. numpy.typing.NDArray
An ndarray alias generic w.r.t. its dtype.type. Notes There are two modes of creating an array using __new__: If buffer is None, then only shape, dtype, and order are used. If buffer is an object exposing the buffer interface, then all keywords are interpreted. No __init__ method is needed because the array is fully initialized after the __new__ method. Examples These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray. First mode, buffer is None: >>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
[ nan, 2.5e-323]])
Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]),
... offset=np.int_().itemsize,
... dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
Attributes
Tndarray
Transpose of the array.
databuffer
The array’s elements, in memory.
dtypedtype object
Describes the format of the elements in the array.
flagsdict
Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.
flatnumpy.flatiter object
Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (See ndarray.flat for assignment examples; TODO).
imagndarray
Imaginary part of the array.
realndarray
Real part of the array.
sizeint
Number of elements in the array.
itemsizeint
The memory use of each array element in bytes.
nbytesint
The total number of bytes required to store the array data, i.e., itemsize * size.
ndimint
The array’s number of dimensions.
shapetuple of ints
Shape of the array.
stridestuple of ints
The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (2 * 4).
ctypesctypes object
Class containing properties of the array needed for interaction with ctypes.
basendarray
If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored. | |
doc_24269 | tf.experimental.numpy.arcsinh(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.arcsinh. | |
doc_24270 | The address of the server. (host, port), (path, None) for unix sockets, or None if not known. | |
doc_24271 | os.F_TLOCK
os.F_ULOCK
os.F_TEST
Flags that specify what action lockf() will take. Availability: Unix. New in version 3.3. | |
doc_24272 |
Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64-bit number used to seed the RNG.
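A minimal sketch (the returned seed is non-deterministic by design):
>>> import torch
>>> seed = torch.seed()
>>> isinstance(seed, int)
True | |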
doc_24273 |
Check if the object can be compiled into a regex pattern instance. Parameters
obj:The object to check
Returns
is_regex_compilable:bool
Whether obj can be compiled as a regex pattern. Examples
>>> is_re_compilable(".*")
True
>>> is_re_compilable(1)
False | |
doc_24274 |
Determine all local maxima of the image. The local maxima are defined as connected sets of pixels with equal gray level strictly greater than the gray levels of all pixels in direct neighborhood of the set. The function labels the local maxima. Technically, the implementation is based on the max-tree representation of an image. The function is very efficient if the max-tree representation has already been computed. Otherwise, it is preferable to use the function local_maxima. Parameters
imagendarray
The input image for which the maxima are to be calculated. connectivity: unsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1. parent: ndarray, int64, optional
The value of each pixel is the index of its parent in the ravelled array. tree_traverser: 1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
local_maxndarray, uint64
Labeled local maxima of the image. See also
skimage.morphology.local_maxima
skimage.morphology.max_tree
References
1
Vincent L., Proc. “Grayscale area openings and closings, their efficient implementation and applications”, EURASIP Workshop on Mathematical Morphology and its Applications to Signal Processing, Barcelona, Spain, pp.22-27, May 1993.
2
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. DOI:10.1007/978-3-662-05088-0
3
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
4
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
5
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a maximum in the center and 4 additional constant maxima). >>> w = 10
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:4,2:4] = 40; f[2:4,7:9] = 60; f[7:9,2:4] = 80; f[7:9,7:9] = 100
>>> f = f.astype(int)
We can calculate all local maxima: >>> maxima = max_tree_local_maxima(f)
The resulting image contains the labeled local maxima. | |
doc_24275 |
Dump the dataset in svmlight / libsvm file format. This format is a text-based format, with one sample per line. It does not store zero valued features hence is suitable for sparse dataset. The first element of each line can be used to store a target variable to predict. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y{array-like, sparse matrix}, shape = [n_samples (, n_labels)]
Target values. Class labels must be an integer or float, or array-like objects of integer or float for multilabel classifications.
fstring or file-like in binary mode
If string, specifies the path that will contain the data. If file-like, data will be written to f. f should be opened in binary mode.
zero_basedboolean, default=True
Whether column indices should be written zero-based (True) or one-based (False).
commentstring, default=None
Comment to insert at the top of the file. This should be either a Unicode string, which will be encoded as UTF-8, or an ASCII byte string. If a comment is given, then it will be preceded by one that identifies the file as having been dumped by scikit-learn. Note that not all tools grok comments in SVMlight files.
query_idarray-like of shape (n_samples,), default=None
Array containing pairwise preference constraints (qid in svmlight format).
multilabelboolean, default=False
Samples may have several labels each (see https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html). New in version 0.17: parameter multilabel to support multilabel datasets.
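A minimal sketch (the output path is hypothetical):
>>> import numpy as np
>>> from sklearn.datasets import dump_svmlight_file
>>> X = np.array([[0.0, 1.0], [2.0, 0.0]])
>>> y = np.array([0, 1])
>>> dump_svmlight_file(X, y, "/tmp/data.svmlight", zero_based=True) | |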
doc_24276 | Filter on traces of memory blocks. See the fnmatch.fnmatch() function for the syntax of filename_pattern. The '.pyc' file extension is replaced with '.py'. Examples:
Filter(True, subprocess.__file__) only includes traces of the subprocess module
Filter(False, tracemalloc.__file__) excludes traces of the tracemalloc module
Filter(False, "<unknown>") excludes empty tracebacks Changed in version 3.5: The '.pyo' file extension is no longer replaced with '.py'. Changed in version 3.6: Added the domain attribute.
domain
Address space of a memory block (int or None). tracemalloc uses the domain 0 to trace memory allocations made by Python. C extensions can use other domains to trace other resources.
inclusive
If inclusive is True (include), only match memory blocks allocated in a file with a name matching filename_pattern at line number lineno. If inclusive is False (exclude), ignore memory blocks allocated in a file with a name matching filename_pattern at line number lineno.
lineno
Line number (int) of the filter. If lineno is None, the filter matches any line number.
filename_pattern
Filename pattern of the filter (str). Read-only property.
all_frames
If all_frames is True, all frames of the traceback are checked. If all_frames is False, only the most recent frame is checked. This attribute has no effect if the traceback limit is 1. See the get_traceback_limit() function and Snapshot.traceback_limit attribute. | |
doc_24277 |
Return the clipbox. | |
doc_24278 | Query the server’s capabilities as specified in RFC 2449. Returns a dictionary in the form {'name': ['param'...]}. New in version 3.4.
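A minimal sketch (host is hypothetical; the capability set depends on the server):
>>> import poplib
>>> pop = poplib.POP3_SSL("pop.example.org")
>>> caps = pop.capa()  # e.g. {'UIDL': [], 'TOP': [], 'SASL': ['PLAIN'], ...} | |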
doc_24279 |
Return class labels or probabilities for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns
y_predsndarray of shape (n_samples, n_estimators) or (n_samples, n_classes * n_estimators)
Prediction outputs for each estimator. | |
doc_24280 |
Parameters
fpsint, default: 5
Movie frame rate (per second).
codecstr or None, default: rcParams["animation.codec"] (default: 'h264')
The codec to use.
bitrateint, default: rcParams["animation.bitrate"] (default: -1)
The bitrate of the movie, in kilobits per second. Higher values means higher quality movies, but increase the file size. A value of -1 lets the underlying movie encoder select the bitrate.
extra_argslist of str or None, optional
Extra command-line arguments passed to the underlying movie encoder. The default, None, means to use rcParams["animation.[name-of-encoder]_args"] for the builtin writers.
metadatadict[str, str], default: {}
A dictionary of keys and values for metadata to include in the output file. Some keys that may be of use include: title, artist, genre, subject, copyright, srcform, comment.
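A minimal sketch (assuming the ffmpeg binary is available; all values are illustrative):
import matplotlib.animation as animation
writer = animation.FFMpegWriter(fps=15, bitrate=1800, metadata={'title': 'demo', 'artist': 'me'}) | |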
doc_24281 | The full name of a template to use for generating a text/html multipart email with the password reset link. By default, HTML email is not sent. | |
doc_24282 |
Cast to PeriodArray/Index at a particular frequency. Converts DatetimeArray/Index to PeriodArray/Index. Parameters
freq:str or Offset, optional
One of pandas’ offset strings or an Offset object. Will be inferred by default. Returns
PeriodArray/Index
Raises
ValueError
When converting a DatetimeArray/Index with non-regular values, so that a frequency cannot be inferred. See also PeriodIndex
Immutable ndarray holding ordinal values. DatetimeIndex.to_pydatetime
Return DatetimeIndex as object. Examples
>>> df = pd.DataFrame({"y": [1, 2, 3]},
... index=pd.to_datetime(["2000-03-31 00:00:00",
... "2000-05-31 00:00:00",
... "2000-08-31 00:00:00"]))
>>> df.index.to_period("M")
PeriodIndex(['2000-03', '2000-05', '2000-08'],
dtype='period[M]')
Infer the daily frequency
>>> idx = pd.date_range("2017-01-01", periods=2)
>>> idx.to_period()
PeriodIndex(['2017-01-01', '2017-01-02'],
dtype='period[D]') | |
doc_24283 |
Pass a KeyEvent to all functions connected to key_press_event. | |
doc_24284 |
Add one Chebyshev series to another. Returns the sum of two Chebyshev series c1 + c2. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series T_0 + 2*T_1 + 3*T_2. Parameters
c1, c2array_like
1-D arrays of Chebyshev series coefficients ordered from low to high. Returns
outndarray
Array representing the Chebyshev series of their sum. See also
chebsub, chebmulx, chebmul, chebdiv, chebpow
Notes Unlike multiplication, division, etc., the sum of two Chebyshev series is a Chebyshev series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component-wise.” Examples >>> from numpy.polynomial import chebyshev as C
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> C.chebadd(c1,c2)
array([4., 4., 4.]) | |
doc_24285 | See Migration guide for more details. tf.compat.v1.data.experimental.snapshot
tf.data.experimental.snapshot(
path, compression='AUTO', reader_func=None, shard_func=None
)
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md has detailed design documentation of this feature. Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the reader_func and shard_func parameters. shard_func is a user specified function that maps input elements to snapshot shards. Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential shard_func could be written. dataset = ...
dataset = dataset.enumerate()
dataset = dataset.apply(tf.data.experimental.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...))
dataset = dataset.map(lambda x, y: y)
reader_func is a user specified function that accepts a single argument: (1) a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of the shards specified in the shard_func (see above). The function should return a Dataset of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism. Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets: def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.apply(tf.data.experimental.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func))
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
Args
path Required. A directory to use for storing / loading the snapshot to / from.
compression Optional. The type of compression to apply to the snapshot written to disk. Supported options are GZIP, SNAPPY, AUTO or None. Defaults to AUTO, which attempts to pick an appropriate compression algorithm for the dataset.
reader_func Optional. A function to control how to read data from snapshot shards.
shard_func Optional. A function to control how to shard data when writing a snapshot.
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. | |
doc_24286 | Write the given bytes-like object, b, and return the number of bytes written (always equal to the length of b in bytes, since if the write fails an OSError will be raised). Depending on the actual implementation, these bytes may be readily written to the underlying stream, or held in a buffer for performance and latency reasons. When in non-blocking mode, a BlockingIOError is raised if the data needed to be written to the raw stream but it couldn’t accept all the data without blocking. The caller may release or mutate b after this method returns, so the implementation should only access b during the method call.
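A minimal sketch using an in-memory stream, which implements the same contract:
>>> import io
>>> buf = io.BytesIO()
>>> buf.write(b"hello")
5 | |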
doc_24287 |
Set slider value to val. Parameters
valfloat | |
doc_24288 | Default configuration parameters. | |
doc_24289 |
Evaluate a 2-D Chebyshev series at points (x, y). This function returns the values: \[p(x,y) = \sum_{i,j} c_{i,j} * T_i(x) * T_j(y)\] The parameters x and y are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars and must have the same shape after conversion. In either case, either x and y or their elements must support multiplication and addition both with themselves and with the elements of c. If c is a 1-D array, a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters
x, yarray_like, compatible objects
The two dimensional series is evaluated at the points (x, y), where x and y must have the same shape. If x or y is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar.
carray_like
Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in c[i,j]. If c has dimension greater than 2 the remaining indices enumerate multiple sets of coefficients. Returns
valuesndarray, compatible object
The values of the two dimensional Chebyshev series at points formed from pairs of corresponding values from x and y. See also
chebval, chebgrid2d, chebval3d, chebgrid3d
Notes New in version 1.7.0.
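Examples A minimal sketch evaluating p(x, y) = T_0(x)*T_0(y) + 2*T_1(x)*T_1(y):
>>> from numpy.polynomial import chebyshev as C
>>> c = [[1, 0], [0, 2]]
>>> C.chebval2d(1.0, 1.0, c)  # 1 + 2*1*1
3.0 | |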
doc_24290 |
Return Multiplication of series and other, element-wise (binary operator mul). Equivalent to series * other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters
other:Series or scalar value
fill_value:None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
level:int or name
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
Series
The result of the operation. See also Series.rmul
Reverse of the Multiplication operator, see Python documentation for more details. Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.multiply(b, fill_value=0)
a 1.0
b 0.0
c 0.0
d 0.0
e NaN
dtype: float64 | |
doc_24291 | If proxy = True, a model which subclasses another model will be treated as a proxy model.
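A minimal sketch (model names are hypothetical):
from django.db import models
class Person(models.Model):
    name = models.CharField(max_length=100)
class PersonProxy(Person):
    class Meta:
        proxy = True  # same database table as Person, different Python-level behavior | |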
doc_24292 | The default port for the HTTP protocol (always 80). | |
doc_24293 |
Return a flattened matrix. Refer to numpy.ravel for more documentation. Parameters
order{‘C’, ‘F’, ‘A’, ‘K’}, optional
The elements of m are read using this index order. ‘C’ means to index the elements in C-like order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. ‘A’ means to read the elements in Fortran-like index order if m is Fortran contiguous in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used. Returns
retmatrix
Return the matrix flattened to shape (1, N) where N is the number of elements in the original matrix. A copy is made only if necessary. See also matrix.flatten
returns a similar output matrix but always a copy matrix.flat
a flat iterator on the array. numpy.ravel
related function which returns an ndarray
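Examples A minimal sketch:
>>> import numpy as np
>>> m = np.matrix([[1, 2], [3, 4]])
>>> m.ravel()
matrix([[1, 2, 3, 4]]) | |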
doc_24294 | When this namespace is specified, the name string is a fully-qualified domain name. | |
doc_24295 |
Returns the average of the matrix elements along the given axis. Refer to numpy.mean for full documentation. See also numpy.mean
Notes Same as ndarray.mean except that, where that returns an ndarray, this returns a matrix object. Examples >>> x = np.matrix(np.arange(12).reshape((3, 4)))
>>> x
matrix([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> x.mean()
5.5
>>> x.mean(0)
matrix([[4., 5., 6., 7.]])
>>> x.mean(1)
matrix([[ 1.5],
[ 5.5],
[ 9.5]]) | |
doc_24296 |
Like Artist.get_window_extent, but includes any clipping. Parameters
rendererRendererBase subclass
renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer()) Returns
Bbox
The enclosing bounding box (in figure pixel coordinates). | |
doc_24297 |
Convert strings in the Series/Index to be swapcased. Equivalent to str.swapcase(). Returns
Series or Index of object
See also Series.str.lower
Converts all characters to lowercase. Series.str.upper
Converts all characters to uppercase. Series.str.title
Converts first character of each word to uppercase and remaining to lowercase. Series.str.capitalize
Converts first character to uppercase and remaining to lowercase. Series.str.swapcase
Converts uppercase to lowercase and lowercase to uppercase. Series.str.casefold
Removes all case distinctions in the string. Examples
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object | |
doc_24298 | See Migration guide for more details. tf.compat.v1.app.flags.text_wrap
tf.compat.v1.flags.text_wrap(
text, length=None, indent='', firstline_indent=None
)
It turns lines that only contain whitespace into empty lines, keeps new lines, and expands tabs using 4 spaces.
Args
text str, text to wrap.
length int, maximum length of a line, includes indentation. If this is None then use get_help_width()
indent str, indent for all but first line.
firstline_indent str, indent for first line; if None, fall back to indent.
Returns str, the wrapped text.
Raises
ValueError Raised if indent or firstline_indent is not shorter than length. | |
doc_24299 | See Migration guide for more details. tf.compat.v1.data.experimental.make_saveable_from_iterator
tf.data.experimental.make_saveable_from_iterator(
iterator, external_state_policy='fail'
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: make_saveable_from_iterator is intended for use in TF1 with tf.compat.v1.Saver. In TF2, use tf.train.Checkpoint instead.
Args
iterator Iterator.
external_state_policy A string that identifies how to handle input pipelines that depend on external state. Possible values are 'ignore': The external state is silently ignored. 'warn': The external state is ignored, logging a warning. 'fail': The operation fails upon encountering external state. By default we set it to 'fail'.
Returns A SaveableObject for saving/restoring iterator state using Saver.
Raises
ValueError If iterator does not support checkpointing.
ValueError If external_state_policy is not one of 'warn', 'ignore' or 'fail'. For example: with tf.Graph().as_default():
ds = tf.data.Dataset.range(10)
iterator = ds.make_initializable_iterator()
# Build the iterator SaveableObject.
saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator)
# Add the SaveableObject to the SAVEABLE_OBJECTS collection so
# it can be automatically saved using Saver.
tf.compat.v1.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable_obj)
saver = tf.compat.v1.train.Saver()
while continue_training:
... Perform training ...
if should_save_checkpoint:
saver.save()
Note: When restoring the iterator, the existing iterator state is completely discarded. This means that any changes you may have made to the Dataset graph will be discarded as well! This includes the new Dataset graph that you may have built during validation. So, while running validation, make sure to run the initializer for the validation input pipeline after restoring the checkpoint.
Note: Not all iterators support checkpointing yet. Attempting to save the state of an unsupported iterator will throw an error. |