| _id | text | title |
|---|---|---|
doc_29000 |
Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback, remove_callback. | |
doc_29001 | Returns the address of the memory buffer as integer. obj must be an instance of a ctypes type. Raises an auditing event ctypes.addressof with argument obj. | |
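A minimal sketch of the behavior described above, using ctypes from the stdlib (variable names are illustrative):

```python
import ctypes

# addressof only accepts instances of ctypes types.
value = ctypes.c_int(42)
address = ctypes.addressof(value)

# The address is returned as a plain Python int.
print(hex(address))

# Passing a non-ctypes object raises TypeError.
try:
    ctypes.addressof(42)
except TypeError:
    print("addressof requires a ctypes instance")
```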
doc_29002 |
Return whether the Artist has an explicitly set transform. This is True after set_transform has been called. | |
doc_29003 | Load a plist from a bytes object. See load() for an explanation of the keyword arguments. New in version 3.4. | |
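For example, a small XML plist serialized as bytes can be parsed directly (the keys below are illustrative):

```python
import plistlib

# A tiny XML plist as a bytes object.
data = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>name</key>
    <string>example</string>
    <key>count</key>
    <integer>3</integer>
</dict>
</plist>
"""

obj = plistlib.loads(data)
print(obj)  # {'name': 'example', 'count': 3}
```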
doc_29004 |
Create a QAT module from a float module or qparams_dict. Args: mod: a float module, either produced by torch.quantization utilities or directly from the user. | |
doc_29005 |
DateOffset subclass representing custom business days excluding holidays. Parameters
n:int, default 1
normalize:bool, default False
Normalize start/end dates to midnight before generating date range.
weekmask:str, default 'Mon Tue Wed Thu Fri'
Weekmask of valid business days, passed to numpy.busdaycalendar.
holidays:list
List/array of dates to exclude from the set of valid business days, passed to numpy.busdaycalendar.
calendar:pd.HolidayCalendar or np.busdaycalendar
offset:timedelta, default timedelta(0)
Attributes
base Returns a copy of the calling offset object with n=1 and all other attributes equal.
offset Alias for self._offset.
calendar
freqstr
holidays
kwds
n
name
nanos
normalize
rule_code
weekmask
Methods
__call__(*args, **kwargs) Call self as a function.
rollback Roll provided date backward to next offset only if not on offset.
rollforward Roll provided date forward to next offset only if not on offset.
apply
apply_index
copy
isAnchored
is_anchored
is_month_end
is_month_start
is_on_offset
is_quarter_end
is_quarter_start
is_year_end
is_year_start
onOffset | |
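Since weekmask and holidays are forwarded to numpy.busdaycalendar, the underlying behavior can be sketched with NumPy alone (the dates below are illustrative):

```python
import numpy as np

# Business-day calendar: Mon-Fri weekmask, one holiday excluded.
cal = np.busdaycalendar(weekmask="Mon Tue Wed Thu Fri",
                        holidays=["2024-07-04"])

# Step one business day forward from Wednesday 2024-07-03;
# Thursday 2024-07-04 is a holiday, so we land on Friday.
nxt = np.busday_offset("2024-07-03", 1, roll="forward", busdaycal=cal)
print(nxt)  # 2024-07-05
```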
doc_29006 |
Return the string width (including kerning) and string height as a (w, h) tuple. | |
doc_29007 |
Alias for set_linestyle. | |
doc_29008 |
Set the facecolor(s) of the collection. c can be a color (all patches have the same color), or a sequence of colors; if it is a sequence, the patches will cycle through the sequence. If c is 'none', the patch will not be filled. Parameters
c : color or list of colors | |
doc_29009 | See Migration guide for more details. tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth
tf.keras.layers.experimental.preprocessing.RandomWidth(
factor, interpolation='bilinear', seed=None, name=None, **kwargs
)
Adjusts the width of a batch of images by a random factor. The input should be a 4-D tensor in the "channels_last" image data format. By default, this layer is inactive during inference.
Arguments
factor A positive float (fraction of original width), or a tuple of size 2 representing lower and upper bound for resizing horizontally. When represented as a single float, this value is used for both the upper and lower bound. For instance, factor=(0.2, 0.3) results in an output with width changed by a random amount in the range [20%, 30%]. factor=(-0.2, 0.3) results in an output with width changed by a random amount in the range [-20%, +30%]. factor=0.2 results in an output with width changed by a random amount in the range [-20%, +20%].
interpolation String, the interpolation method. Defaults to bilinear. Supports bilinear, nearest, bicubic, area, lanczos3, lanczos5, gaussian, mitchellcubic.
seed Integer. Used to create a random seed.
name A string, the name of the layer.
Input shape: 4D tensor with shape: (samples, height, width, channels) (data_format='channels_last'). Output shape: 4D tensor with shape: (samples, height, random_width, channels). Methods adapt View source
adapt(
data, reset_state=True
)
Fits the state of the preprocessing layer to the data being passed.
Arguments
data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt, or whether to start from the existing state. This argument may not be relevant to all preprocessing layers: a subclass of PreprocessingLayer may choose to throw if 'reset_state' is set to False. | |
doc_29010 | This implementation polls process file descriptors (pidfds) to await child process termination. In some respects, PidfdChildWatcher is a “Goldilocks” child watcher implementation. It doesn’t require signals or threads, doesn’t interfere with any processes launched outside the event loop, and scales linearly with the number of subprocesses launched by the event loop. The main disadvantage is that pidfds are specific to Linux, and only work on recent (5.3+) kernels. New in version 3.9. | |
doc_29011 | This exception collects exceptions that are raised during a multi-file operation. For copytree(), the exception argument is a list of 3-tuples (srcname, dstname, exception). | |
doc_29012 |
Returns element-wise base array raised to power from second array. This is the masked array version of numpy.power. For details see numpy.power. See also numpy.power
Notes The out argument to numpy.power is not supported, third has to be None. | |
doc_29013 |
Return the picking behavior of the artist. The possible values are described in set_picker. See also
set_picker, pickable, pick | |
doc_29014 | See torch.greater_equal(). | |
doc_29015 |
Set the bottom coord of the rectangle. Parameters
y : float | |
doc_29016 |
Return value >> self. | |
doc_29017 | Do a plural-forms lookup of a message id. singular is used as the message id for purposes of lookup in the catalog, while n is used to determine which plural form to use. If the message id for context is not found in the catalog, and a fallback is specified, the request is forwarded to the fallback’s npgettext() method. Otherwise, when n is 1 singular is returned, and plural is returned in all other cases. New in version 3.8. | |
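With no catalog installed, the fallback behavior described above (singular when n is 1, plural otherwise) can be seen with NullTranslations; the context and strings here are illustrative:

```python
import gettext

t = gettext.NullTranslations()

# No catalog: n == 1 returns the singular, anything else the plural.
print(t.npgettext("fruit", "apple", "apples", 1))  # apple
print(t.npgettext("fruit", "apple", "apples", 3))  # apples
```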
doc_29018 | This method returns a list of column names. Immediately after a query, it is the first member of each tuple in Cursor.description. | |
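For example, with sqlite3 (any DB-API cursor works the same way; the column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.execute("SELECT 1 AS a, 'x' AS b")

# The column name is the first member of each tuple in description.
names = [d[0] for d in cur.description]
print(names)  # ['a', 'b']
```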
doc_29019 |
Exit "raster" mode. All of the drawing that was done since the last start_rasterizing call will be copied to the vector backend by calling draw_image. | |
doc_29020 | Return a logical OR of all video attributes supported by the terminal. This information is useful when a curses program needs complete control over the appearance of the screen. | |
doc_29021 |
Apply 2D matrix transform. Parameters
coords : (N, 2) array
x, y coordinates to transform
matrix : (3, 3) array
Homogeneous transformation matrix. Returns
coords : (N, 2) array
Transformed coordinates. | |
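A minimal NumPy sketch of applying a (3, 3) homogeneous matrix to (N, 2) coordinates; the helper name is hypothetical:

```python
import numpy as np

def matrix_transform(coords, matrix):
    """Apply a (3, 3) homogeneous transform to (N, 2) coordinates."""
    coords = np.asarray(coords, dtype=float)
    # Append a column of ones to work in homogeneous coordinates.
    homo = np.column_stack([coords, np.ones(len(coords))])
    out = homo @ np.asarray(matrix).T
    # Divide by the homogeneous coordinate and drop it.
    return out[:, :2] / out[:, 2:3]

# Translation by (2, 3).
T = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
print(matrix_transform([[0, 0], [1, 1]], T))  # [[2. 3.], [3. 4.]]
```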
doc_29022 |
Row and column indices of the i’th bicluster. Only works if rows_ and columns_ attributes exist. Parameters
i : int
The index of the cluster. Returns
row_ind : ndarray, dtype=np.intp
Indices of rows in the dataset that belong to the bicluster.
col_ind : ndarray, dtype=np.intp
Indices of columns in the dataset that belong to the bicluster. | |
doc_29023 |
Gathers picklable objects from the whole group in a single process. Similar to gather(), but Python objects can be passed in. Note that the object must be picklable in order to be gathered. Parameters
obj (Any) – Input object. Must be picklable.
object_gather_list (list[Any]) – Output list. On the dst rank, it should be correctly sized as the size of the group for this collective and will contain the output. Must be None on non-dst ranks. (default is None)
dst (int, optional) – Destination rank. (default is 0)
group – (ProcessGroup, optional): The process group to work on. If None, the default process group will be used. Default is None. Returns
None. On the dst rank, object_gather_list will contain the output of the collective. Note Note that this API differs slightly from the gather collective since it does not provide an async_op handle and thus will be a blocking call. Note Note that this API is not supported when using the NCCL backend. Warning gather_object() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Example::
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> # Assumes world_size of 3.
>>> gather_objects = ["foo", 12, {1: 2}] # any picklable object
>>> output = [None for _ in gather_objects]
>>> dist.gather_object(
...     gather_objects[dist.get_rank()],
...     output if dist.get_rank() == 0 else None,
...     dst=0
... )
>>> # On rank 0
>>> output
['foo', 12, {1: 2}] | |
doc_29024 | Send AUTHINFO commands with the user name and password. If user and password are None and usenetrc is true, credentials from ~/.netrc will be used if possible. Unless intentionally delayed, login is normally performed during the NNTP object initialization and separately calling this function is unnecessary. To force authentication to be delayed, you must not set user or password when creating the object, and must set usenetrc to False. New in version 3.2. | |
doc_29025 | Create a CLI runner for testing CLI commands. See Testing CLI Commands. Returns an instance of test_cli_runner_class, by default FlaskCliRunner. The Flask app object is passed as the first argument. Changelog New in version 1.0. Parameters
kwargs (Any) – Return type
FlaskCliRunner | |
doc_29026 | Spec: https://cyber.harvard.edu/rss/rss.html | |
doc_29027 |
Bases: matplotlib.ticker.Formatter Probability formatter (using Math text). Parameters
use_overline : bool, default: False
If x > 1/2, with x = 1 - v, indicate if x should be displayed as $\overline{v}$. The default is to display $1 - v$.
one_half : str, default: r"$\frac{1}{2}$"
The string used to represent 1/2.
minor : bool, default: False
Indicate if the formatter is formatting minor ticks or not. Basically, minor ticks are not labelled, except when only few ticks are provided; then the ticks with the most space from neighboring ticks are labelled. See other parameters to change the default behavior.
minor_threshold : int, default: 25
Maximum number of locs for labelling some minor ticks. This parameter has no effect if minor is False.
minor_number : int, default: 6
Number of ticks which are labelled when the number of ticks is below the threshold. format_data_short(value)[source]
Return a short string version of the tick value. Defaults to the position-independent long value.
set_locs(locs)[source]
Set the locations of the ticks. This method is called before computing the tick labels because some formatters need to know all tick locations to do so.
set_minor_number(minor_number)[source]
Set the number of minor ticks to label when some minor ticks are labelled. Parameters
minor_number : int
Number of ticks which are labelled when the number of ticks is below the threshold.
set_minor_threshold(minor_threshold)[source]
Set the threshold for labelling minor ticks. Parameters
minor_threshold : int
Maximum number of locations for labelling some minor ticks. This parameter has no effect if minor is False.
set_one_half(one_half)[source]
Set the way one half is displayed. one_half : str, default: r"$\frac{1}{2}$"
The string used to represent 1/2.
use_overline(use_overline)[source]
Switch display mode with overline for labelling p > 1/2. Parameters
use_overline : bool, default: False
If x > 1/2, with x = 1 - v, indicate if x should be displayed as $\overline{v}$. The default is to display $1 - v$. | |
doc_29028 |
Return an instance of a GraphicsContextBase. | |
doc_29029 |
Return local gradient of an image (i.e. local maximum - local minimum). Parameters
image : ([P,] M, N) ndarray (uint8, uint16)
Input image.
selem : ndarray
The neighborhood expressed as an ndarray of 1's and 0's.
out : ([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
mask : ndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_z : int
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out : ([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import gradient
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = gradient(img, disk(5))
>>> out_vol = gradient(volume, ball(5)) | |
doc_29030 | Returns the mirrored property assigned to the character chr as integer. Returns 1 if the character has been identified as a “mirrored” character in bidirectional text, 0 otherwise. | |
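For example, with unicodedata from the stdlib:

```python
import unicodedata

# Parentheses are mirrored in bidirectional text; letters are not.
print(unicodedata.mirrored("("))  # 1
print(unicodedata.mirrored("A"))  # 0
```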
doc_29031 | This code is executed when the module is run as a script or with -m but not when it is imported: if __name__ == "__main__":
# execute only if run as a script
main()
For a package, the same effect can be achieved by including a __main__.py module, the contents of which will be executed when the module is run with -m. | |
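The package case can be sketched with runpy, which mimics python -m; the package name below is made up:

```python
import os
import runpy
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Build a throwaway package with a __main__.py module.
    pkg = os.path.join(d, "demo_pkg")
    os.mkdir(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "__main__.py"), "w") as f:
        f.write("result = __name__\n")

    sys.path.insert(0, d)
    try:
        # Equivalent to "python -m demo_pkg": __main__.py runs
        # with __name__ set to "__main__".
        ns = runpy.run_module("demo_pkg", run_name="__main__")
    finally:
        sys.path.remove(d)

print(ns["result"])  # __main__
```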
doc_29032 | Assert that the mock was called at least once. >>> mock = Mock()
>>> mock.method()
<Mock name='mock.method()' id='...'>
>>> mock.method.assert_called()
New in version 3.6. | |
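The same behavior, including the failure mode before any call, can be seen in a short script (the attribute name is illustrative):

```python
from unittest import mock

m = mock.Mock()

# Before any call, assert_called raises AssertionError.
try:
    m.method.assert_called()
    print("unexpected")
except AssertionError:
    print("not called yet")

m.method()
m.method.assert_called()  # passes silently after at least one call
print("called")
```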
doc_29033 | move_to_back(sprite) -> None Moves the sprite to the bottom layer, moving it behind all other layers and adding one additional layer. | |
doc_29034 | Sets the user’s password to the given raw string, taking care of the password hashing. Doesn’t save the User object. When the raw_password is None, the password will be set to an unusable password, as if set_unusable_password() were used. | |
doc_29035 |
Fills the input Tensor with the scalar value 0. Parameters
tensor – an n-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.zeros_(w) | |
doc_29036 |
Alias for set_facecolor. | |
doc_29037 | Enqueues the record on the queue using put_nowait(); you may want to override this if you want to use blocking behaviour, or a timeout, or a customized queue implementation. | |
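A sketch of the default put_nowait() behavior using logging.handlers.QueueHandler; the logger name is illustrative:

```python
import logging
import logging.handlers
import queue

q = queue.Queue()
handler = logging.handlers.QueueHandler(q)

logger = logging.getLogger("queue_demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("hello")

# The handler enqueued the prepared record with put_nowait().
record = q.get_nowait()
print(record.getMessage())  # hello
```

Overriding enqueue (e.g. to use q.put with a timeout) changes only how the record reaches the queue, not how it is prepared.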
doc_29038 | See Migration guide for more details. tf.compat.v1.linalg.matrix_rank
tf.linalg.matrix_rank(
a, tol=None, validate_args=False, name=None
)
Arguments
a (Batch of) float-like matrix-shaped Tensor(s) which are to be pseudo-inverted.
tol Threshold below which the singular value is counted as 'zero'. Default value: None (i.e., eps * max(rows, cols) * max(singular_val)).
validate_args When True, additional assertions might be embedded in the graph. Default value: False (i.e., no graph assertions are added).
name Python str prefixed to ops created by this function. Default value: 'matrix_rank'.
Returns
matrix_rank (Batch of) int32 scalars representing the number of non-zero singular values. | |
doc_29039 |
Return the visibility. | |
doc_29040 | Return bytes containing the entire contents of the buffer. | |
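For example, io.BytesIO.getvalue returns the whole buffer regardless of the current seek position:

```python
import io

buf = io.BytesIO()
buf.write(b"hello ")
buf.write(b"world")
buf.seek(0)

# getvalue returns everything written, independent of position.
print(buf.getvalue())  # b'hello world'
```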
doc_29041 | See Migration guide for more details. tf.compat.v1.distribute.has_strategy
tf.distribute.has_strategy()
assert not tf.distribute.has_strategy()
with strategy.scope():
assert tf.distribute.has_strategy()
Returns True if inside a with strategy.scope():. | |
doc_29042 |
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
X : array-like of shape (n_samples,)
Left argument of the returned kernel k(X, Y). Returns
K_diag : ndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X) | |
doc_29043 |
For each element in self, return a copy of the string with all occurrences of substring old replaced by new. See also char.replace | |
doc_29044 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance. | |
doc_29045 | When decompressing, the value of the last modification time field in the most recently read header may be read from this attribute, as an integer. The initial value before reading any headers is None. All gzip compressed streams are required to contain this timestamp field. Some programs, such as gunzip, make use of the timestamp. The format is the same as the return value of time.time() and the st_mtime attribute of the object returned by os.stat(). | |
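A sketch of reading the timestamp back after decompression; the fixed mtime value is illustrative:

```python
import gzip
import io

# Write a gzip stream with a fixed modification time.
raw = io.BytesIO()
with gzip.GzipFile(fileobj=raw, mode="wb", mtime=1609459200) as f:
    f.write(b"payload")

# Before any header is read, mtime is None; reading sets it.
raw.seek(0)
g = gzip.GzipFile(fileobj=raw, mode="rb")
print(g.mtime)  # None
g.read()
print(g.mtime)  # 1609459200
```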
doc_29046 |
Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor input[i, j]) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. See Dropout3d for details. Parameters
p – probability of a channel to be zeroed. Default: 0.5
training – apply dropout if is True. Default: True
inplace – If set to True, will do this operation in-place. Default: False | |
doc_29047 |
Add a colorbar to a plot. Parameters
mappable
The matplotlib.cm.ScalarMappable (i.e., AxesImage, ContourSet, etc.) described by this colorbar. This argument is mandatory for the Figure.colorbar method but optional for the pyplot.colorbar function, which sets the default to the current image. Note that one can create a ScalarMappable "on-the-fly" to generate colorbars not attached to a previously drawn artist, e.g. fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
cax : Axes, optional
Axes into which the colorbar will be drawn.
ax : Axes, list of Axes, optional
One or more parent axes from which space for a new colorbar axes will be stolen, if cax is None. This has no effect if cax is set.
use_gridspec : bool, optional
If cax is None, a new cax is created as an instance of Axes. If ax is an instance of Subplot and use_gridspec is True, cax is created as an instance of Subplot using the gridspec module. Returns
colorbar : Colorbar
Notes Additional keyword arguments are of two kinds: axes properties: location : None or {'left', 'right', 'top', 'bottom'}
The location, relative to the parent axes, where the colorbar axes is created. It also determines the orientation of the colorbar (colorbars on the left and right are vertical, colorbars at the top and bottom are horizontal). If None, the location will come from the orientation if it is set (vertical colorbars on the right, horizontal ones at the bottom), or default to 'right' if orientation is unset. orientation : None or {'vertical', 'horizontal'}
The orientation of the colorbar. It is preferable to set the location of the colorbar, as that also determines the orientation; passing incompatible values for location and orientation raises an exception. fraction : float, default: 0.15
Fraction of original axes to use for colorbar. shrink : float, default: 1.0
Fraction by which to multiply the size of the colorbar. aspect : float, default: 20
Ratio of long to short dimensions. pad : float, default: 0.05 if vertical, 0.15 if horizontal
Fraction of original axes between colorbar and new image axes. anchor : (float, float), optional
The anchor point of the colorbar axes. Defaults to (0.0, 0.5) if vertical; (0.5, 1.0) if horizontal. panchor : (float, float) or False, optional
The anchor point of the colorbar parent axes. If False, the parent axes' anchor will be unchanged. Defaults to (1.0, 0.5) if vertical; (0.5, 0.0) if horizontal. colorbar properties:
Property Description
extend {'neither', 'both', 'min', 'max'} If not 'neither', make pointed end(s) for out-of- range values. These are set for a given colormap using the colormap set_under and set_over methods.
extendfrac {None, 'auto', length, lengths} If set to None, both the minimum and maximum triangular colorbar extensions will have a length of 5% of the interior colorbar length (this is the default setting). If set to 'auto', makes the triangular colorbar extensions the same lengths as the interior boxes (when spacing is set to 'uniform') or the same lengths as the respective adjacent interior boxes (when spacing is set to 'proportional'). If a scalar, indicates the length of both the minimum and maximum triangular colorbar extensions as a fraction of the interior colorbar length. A two-element sequence of fractions may also be given, indicating the lengths of the minimum and maximum colorbar extensions respectively as a fraction of the interior colorbar length.
extendrect bool If False the minimum and maximum colorbar extensions will be triangular (the default). If True the extensions will be rectangular.
spacing {'uniform', 'proportional'} Uniform spacing gives each discrete color the same space; proportional makes the space proportional to the data interval.
ticks None or list of ticks or Locator If None, ticks are determined automatically from the input.
format None or str or Formatter If None, ScalarFormatter is used. If a format string is given, e.g., '%.3f', that is used. An alternative Formatter may be given instead.
drawedges bool Whether to draw lines at color boundaries.
label str The label on the colorbar's long axis. The following will probably be useful only in the context of indexed colors (that is, when the mappable has norm=NoNorm()), or other unusual circumstances.
Property Description
boundaries None or a sequence
values None or a sequence which must be of length 1 less than the sequence of boundaries. For each region delimited by adjacent entries in boundaries, the color mapped to the corresponding value in values will be used. If mappable is a ContourSet, its extend kwarg is included automatically. The shrink kwarg provides a simple way to scale the colorbar with respect to the axes. Note that if cax is specified, it determines the size of the colorbar and shrink and aspect kwargs are ignored. For more precise control, you can manually specify the positions of the axes objects in which the mappable and the colorbar are drawn. In this case, do not use any of the axes properties kwargs. It is known that some vector graphics viewers (svg and pdf) render white gaps between segments of the colorbar. This is due to bugs in the viewers, not Matplotlib. As a workaround, the colorbar can be rendered with overlapping segments: cbar = colorbar()
cbar.solids.set_edgecolor("face")
draw()
However this has negative consequences in other circumstances, e.g. with semi-transparent images (alpha < 1) and colorbar extensions; therefore, this workaround is not used by default (see issue #1188). | |
doc_29048 |
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | |
doc_29049 | Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – parameter name within module on which pruning will act.
args – arguments passed on to a subclass of BasePruningMethod
importance_scores (torch.Tensor) – tensor of importance scores (of same shape as module parameter) used to compute mask for pruning. The values in this tensor indicate the importance of the corresponding elements in the parameter being pruned. If unspecified or None, the parameter will be used in its place.
kwargs – keyword arguments passed on to a subclass of a BasePruningMethod | |
doc_29050 | See Migration guide for more details. tf.compat.v1.raw_ops.Size
tf.raw_ops.Size(
input, out_type=tf.dtypes.int32, name=None
)
This operation returns an integer representing the number of elements in input. For example: # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12
Args
input A Tensor.
out_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name A name for the operation (optional).
Returns A Tensor of type out_type. | |
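The count has the same semantics as NumPy's size attribute; a hedged NumPy sketch of the same example:

```python
import numpy as np

t = np.array([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])

# Total number of elements, matching what tf.raw_ops.Size would return.
print(t.size)  # 12
```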
doc_29051 | The named tuple flags exposes the status of command line flags. The attributes are read only.
attribute flag
debug -d
inspect -i
interactive -i
isolated -I
optimize -O or -OO
dont_write_bytecode -B
no_user_site -s
no_site -S
ignore_environment -E
verbose -v
bytes_warning -b
quiet -q
hash_randomization -R
dev_mode -X dev (Python Development Mode)
utf8_mode -X utf8 Changed in version 3.2: Added quiet attribute for the new -q flag. New in version 3.2.3: The hash_randomization attribute. Changed in version 3.3: Removed obsolete division_warning attribute. Changed in version 3.4: Added isolated attribute for -I isolated flag. Changed in version 3.7: Added the dev_mode attribute for the new Python Development Mode and the utf8_mode attribute for the new -X utf8 flag. | |
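The attributes can be inspected like any named tuple fields, and assignment fails because they are read-only:

```python
import sys

# Each attribute mirrors a command-line flag; values are ints.
print(sys.flags.optimize)             # 0 unless run with -O/-OO
print(sys.flags.dont_write_bytecode)  # 1 only when run with -B

# The attributes are read-only.
try:
    sys.flags.debug = 1
except AttributeError:
    print("read-only")
```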
doc_29052 |
Generate a pie plot. A pie plot is a proportional representation of the numerical data in a column. This function wraps matplotlib.pyplot.pie() for the specified column. If no column reference is passed and subplots=True a pie plot is drawn for each numerical column independently. Parameters
y : int or label, optional
Label or position of the column to plot. If not provided, the subplots=True argument must be passed. **kwargs
Keyword arguments to pass on to DataFrame.plot(). Returns
matplotlib.axes.Axes or np.ndarray of them
A NumPy array is returned when subplots is True. See also Series.plot.pie
Generate a pie plot for a Series. DataFrame.plot
Make plots of a DataFrame. Examples In the example below we have a DataFrame with the information about planet’s mass and radius. We pass the ‘mass’ column to the pie function to get a pie plot.
>>> df = pd.DataFrame({'mass': [0.330, 4.87, 5.97],
... 'radius': [2439.7, 6051.8, 6378.1]},
... index=['Mercury', 'Venus', 'Earth'])
>>> plot = df.plot.pie(y='mass', figsize=(5, 5))
>>> plot = df.plot.pie(subplots=True, figsize=(11, 6)) | |
doc_29053 |
Get the total number of days in the month that this period falls on. Returns
int
See also Period.daysinmonth
Gets the number of days in the month. DatetimeIndex.daysinmonth
Gets the number of days in the month. calendar.monthrange
Returns a tuple containing weekday (0-6 ~ Mon-Sun) and number of days (28-31). Examples
>>> p = pd.Period('2018-2-17')
>>> p.days_in_month
28
>>> pd.Period('2018-03-01').days_in_month
31
Handles the leap year case as well:
>>> p = pd.Period('2016-2-17')
>>> p.days_in_month
29 | |
doc_29054 |
Set the snapping behavior. Snapping aligns positions with the pixel grid, which results in clearer images. For example, if a black line of 1px width was defined at a position in between two pixels, the resulting image would contain the interpolated value of that line in the pixel grid, which would be a grey value on both adjacent pixel positions. In contrast, snapping will move the line to the nearest integer pixel value, so that the resulting image will really contain a 1px wide black line. Snapping is currently only supported by the Agg and MacOSX backends. Parameters
snap : bool or None
Possible values:
True: Snap vertices to the nearest pixel center.
False: Do not modify vertex positions.
None: (auto) If the path contains only rectilinear line segments, round to the nearest pixel center. | |
doc_29055 | tf.initializers.GlorotUniform, tf.initializers.glorot_uniform, tf.keras.initializers.glorot_uniform
tf.keras.initializers.GlorotUniform(
seed=None
)
Also available via the shortcut function tf.keras.initializers.glorot_uniform. Draws samples from a uniform distribution within [-limit, limit], where limit = sqrt(6 / (fan_in + fan_out)) (fan_in is the number of input units in the weight tensor and fan_out is the number of output units). Examples:
# Standalone usage:
initializer = tf.keras.initializers.GlorotUniform()
values = initializer(shape=(2, 2))
# Usage in a Keras layer:
initializer = tf.keras.initializers.GlorotUniform()
layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)
Args
seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: Glorot et al., 2010 (pdf) Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, **kwargs
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)).
**kwargs Additional keyword arguments. | |
doc_29056 |
Update the location of children if necessary and draw them to the given renderer. | |
doc_29057 |
Configures Django by: Loading the settings. Setting up logging. If set_prefix is True, setting the URL resolver script prefix to FORCE_SCRIPT_NAME if defined, or / otherwise. Initializing the application registry. This function is called automatically: When running an HTTP server via Django’s WSGI support. When invoking a management command. It must be called explicitly in other cases, for instance in plain Python scripts. | |
doc_29058 | Makes a PUT request on the provided path and returns a Response object. Useful for testing RESTful interfaces. When data is provided, it is used as the request body, and a Content-Type header is set to content_type. The follow, secure and extra arguments act the same as for Client.get(). | |
doc_29059 |
Called when a pan operation has started. Parameters
x, y : float
The mouse coordinates in display coords.
button : MouseButton
The pressed mouse button. Notes This is intended to be overridden by new projection types. | |
doc_29060 |
Set the alpha value used for blending - not supported on all backends. Parameters
alpha : array-like or scalar or None
All values must be within the 0-1 range, inclusive. Masked values and nans are not supported. | |
doc_29061 | Optional. Either True or False. Default is False. Specifies whether all subdirectories of path should be included. | |
doc_29062 |
Call transform on the estimator with the best found parameters. Only available if the underlying estimator supports transform and refit=True. Parameters
X : indexable, length n_samples
Must fulfill the input assumptions of the underlying estimator. | |
doc_29063 | This is the default widget used by all GeoDjango form fields. template_name is gis/openlayers.html. OpenLayersWidget and OSMWidget use the openlayers.js file hosted on the cdnjs.cloudflare.com content-delivery network. You can subclass these widgets in order to specify your own version of the OpenLayers.js file in the js property of the inner Media class (see Assets as a static definition). | |
doc_29064 | tf.image.stateless_random_flip_left_right(
image, seed
)
Guarantees the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set_seed). Example usage:
image = np.array([[[1], [2]], [[3], [4]]])
seed = (2, 3)
tf.image.stateless_random_flip_left_right(image, seed).numpy().tolist()
[[[2], [1]], [[4], [3]]]
Args
image 4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels].
seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.)
Returns A tensor of the same type and shape as image. | |
doc_29065 |
For each element in self, return the highest index in the string where substring sub is found, such that sub is contained within [start, end]. See also char.rfind | |
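The element-wise search can be sketched with the `numpy.char.rfind` counterpart mentioned in the "See also" (the `rindex` variant raises `ValueError` instead of returning -1 when `sub` is absent):

```python
import numpy as np

a = np.array(["banana", "ananas"])
# highest index where "an" occurs in each element
np.char.rfind(a, "an")          # array([3, 2])
# restrict the search to the slice [start, end]
np.char.rfind(a, "an", 0, 3)
```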
doc_29066 |
The last colorbar associated with this ScalarMappable. May be None. | |
doc_29067 |
Returns whether PyTorch is built with MKL-DNN support. | |
doc_29068 |
Get the extents of the tick labels on either side of the axes. | |
doc_29069 |
Bases: matplotlib.offsetbox.OffsetBox A container to add a padding around an Artist. The PaddedBox contains a FancyBboxPatch that is used to visualize it when rendering. Parameters
childArtist
The contained Artist.
padfloat
The padding in points. This will be scaled with the renderer dpi. In contrast width and height are in pixels and thus not scaled.
draw_framebool
Whether to draw the contained FancyBboxPatch.
patch_attrsdict or None
Additional parameters passed to the contained FancyBboxPatch. draw(renderer)[source]
Update the location of children if necessary and draw them to the given renderer.
draw_frame(renderer)[source]
get_extent_offsets(renderer)[source]
Update offset of the children and return the extent of the box. Parameters
rendererRendererBase subclass
Returns
width
height
xdescent
ydescent
list of (xoffset, yoffset) pairs
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, height=<UNSET>, in_layout=<UNSET>, label=<UNSET>, offset=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
height float
in_layout bool
label object
offset (float, float) or callable
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
width float
zorder float
update_frame(bbox, fontsize=None)[source] | |
doc_29070 |
Call inverse_transform on the estimator with the best found params. Only available if the underlying estimator implements inverse_transform and refit=True. Parameters
Xtindexable, length n_samples
Must fulfill the input assumptions of the underlying estimator. | |
doc_29071 | See Migration guide for more details. tf.compat.v1.raw_ops.LoadTPUEmbeddingMomentumParameters
tf.raw_ops.LoadTPUEmbeddingMomentumParameters(
parameters, momenta, num_shards, shard_id, table_id=-1, table_name='',
config='', name=None
)
An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.
Args
parameters A Tensor of type float32. Value of parameters used in the Momentum optimization algorithm.
momenta A Tensor of type float32. Value of momenta used in the Momentum optimization algorithm.
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns The created Operation. | |
doc_29072 | sklearn.datasets.fetch_openml(name: Optional[str] = None, *, version: Union[str, int] = 'active', data_id: Optional[int] = None, data_home: Optional[str] = None, target_column: Optional[Union[str, List]] = 'default-target', cache: bool = True, return_X_y: bool = False, as_frame: Union[str, bool] = 'auto') [source]
Fetch dataset from openml by name or dataset id. Datasets are uniquely identified by either an integer ID or by a combination of name and version (i.e. there might be multiple versions of the ‘iris’ dataset). Please give either name or data_id (not both). In case a name is given, a version can also be provided. Read more in the User Guide. New in version 0.20. Note EXPERIMENTAL The API is experimental (particularly the return value structure), and might have small backward-incompatible changes without notice or warning in future releases. Parameters
namestr, default=None
String identifier of the dataset. Note that OpenML can have multiple datasets with the same name.
versionint or ‘active’, default=’active’
Version of the dataset. Can only be provided if also name is given. If ‘active’ the oldest version that’s still active is used. Since there may be more than one active version of a dataset, and those versions may fundamentally be different from one another, setting an exact version is highly recommended.
data_idint, default=None
OpenML ID of the dataset. The most specific way of retrieving a dataset. If data_id is not given, name (and potential version) are used to obtain a dataset.
data_homestr, default=None
Specify another download and cache folder for the data sets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
target_columnstr, list or None, default=’default-target’
Specify the column name in the data to use as target. If ‘default-target’, the standard target column as stored on the server is used. If None, all columns are returned as data and the target is None. If list (of strings), all columns with these names are returned as multi-target (Note: not all scikit-learn classifiers can handle all types of multi-output combinations).
cachebool, default=True
Whether to cache downloaded datasets using joblib.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target objects.
as_framebool or ‘auto’, default=’auto’
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target_columns. The Bunch will contain a frame attribute with the target and the data. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described above. If as_frame is ‘auto’, the data and target will be converted to DataFrame or Series as if as_frame is set to True, unless the dataset is stored in sparse format. Changed in version 0.24: The default value of as_frame changed from False to 'auto' in 0.24. Returns
dataBunch
Dictionary-like object, with the following attributes.
datanp.array, scipy.sparse.csr_matrix of floats, or pandas DataFrame
The feature matrix. Categorical features are encoded as ordinals.
targetnp.array, pandas Series or DataFrame
The regression target or classification labels, if applicable. Dtype is float if numeric, and object if categorical. If as_frame is True, target is a pandas object.
DESCRstr
The full description of the dataset
feature_nameslist
The names of the dataset columns target_names: list
The names of the target columns New in version 0.22.
categoriesdict or None
Maps each categorical feature name to a list of values, such that the value encoded as i is ith in the list. If as_frame is True, this is None.
detailsdict
More metadata from OpenML
framepandas DataFrame
Only present when as_frame=True. DataFrame with data and target.
(data, target)tuple if return_X_y is True
Note EXPERIMENTAL This interface is experimental and subsequent releases may change attributes without notice (although there should only be minor changes to data and target). Missing values in the ‘data’ are represented as NaN’s. Missing values in ‘target’ are represented as NaN’s (numerical target) or None (categorical target)
Examples using sklearn.datasets.fetch_openml
Release Highlights for scikit-learn 0.22
Categorical Feature Support in Gradient Boosting
Combine predictors using stacking
Gaussian process regression (GPR) on Mauna Loa CO2 data.
MNIST classification using multinomial logistic + L1
Early stopping of Stochastic Gradient Descent
Poisson regression and non-normal loss
Tweedie regression on insurance claims
Permutation Importance vs Random Forest Feature Importance (MDI)
Common pitfalls in interpretation of coefficients of linear models
Visualizations with Display Objects
Classifier Chain
Approximate nearest neighbors in TSNE
Visualization of MLP weights on MNIST
Column Transformer with Mixed Types
Effect of transforming the targets in regression model | |
doc_29073 | Return the value of the named header field. This is identical to __getitem__() except that optional failobj is returned if the named header is missing (failobj defaults to None). | |
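The failobj behaviour can be sketched with the standard library's `email.message.EmailMessage`:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Quarterly report"

msg.get("Subject")            # 'Quarterly report'
msg.get("X-Missing")          # None -- the default failobj
msg.get("X-Missing", "n/a")   # 'n/a' -- the supplied failobj
```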
doc_29074 |
Call self as a function. | |
doc_29075 | class sklearn.multiclass.OneVsRestClassifier(estimator, *, n_jobs=None) [source]
One-vs-the-rest (OvR) multiclass strategy. Also known as one-vs-all, this strategy consists in fitting one classifier per class. For each classifier, the class is fitted against all the other classes. In addition to its computational efficiency (only n_classes classifiers are needed), one advantage of this approach is its interpretability. Since each class is represented by one and only one classifier, it is possible to gain knowledge about the class by inspecting its corresponding classifier. This is the most commonly used strategy for multiclass classification and is a fair default choice. OneVsRestClassifier can also be used for multilabel classification. To use this feature, provide an indicator matrix for the target y when calling .fit. In other words, the target labels should be formatted as a 2D binary (0/1) matrix, where [i, j] == 1 indicates the presence of label j in sample i. This estimator uses the binary relevance method to perform multilabel classification, which involves training one binary classifier independently for each label. Read more in the User Guide. Parameters
estimatorestimator object
An estimator object implementing fit and one of decision_function or predict_proba.
n_jobsint, default=None
The number of jobs to use for the computation: the n_classes one-vs-rest problems are computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Changed in version v0.20: n_jobs default changed from 1 to None Attributes
estimators_list of n_classes estimators
Estimators used for predictions.
coef_ndarray of shape (1, n_features) or (n_classes, n_features)
Coefficient of the features in the decision function. This attribute exists only if the estimators_ defines coef_. Deprecated since version 0.24: This attribute is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). If you use this attribute in RFE or SelectFromModel, you may pass a callable to the importance_getter parameter that extracts the feature importances from estimators_.
intercept_ndarray of shape (1, 1) or (n_classes, 1)
If y is binary, the shape is (1, 1) else (n_classes, 1). This attribute exists only if the estimators_ defines intercept_. Deprecated since version 0.24: This attribute is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). If you use this attribute in RFE or SelectFromModel, you may pass a callable to the importance_getter parameter that extracts the feature importances from estimators_.
classes_array, shape = [n_classes]
Class labels.
n_classes_int
Number of classes.
label_binarizer_LabelBinarizer object
Object used to transform multiclass labels to binary labels and vice-versa.
multilabel_boolean
Whether this is a multilabel classifier See also
sklearn.multioutput.MultiOutputClassifier
Alternate way of extending an estimator for multilabel classification.
sklearn.preprocessing.MultiLabelBinarizer
Transform iterable of iterables to binary indicator matrix. Examples >>> import numpy as np
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import SVC
>>> X = np.array([
... [10, 10],
... [8, 10],
... [-5, 5.5],
... [-5.4, 5.5],
... [-20, -20],
... [-15, -20]
... ])
>>> y = np.array([0, 0, 1, 1, 2, 2])
>>> clf = OneVsRestClassifier(SVC()).fit(X, y)
>>> clf.predict([[-19, -20], [9, 9], [-5, 5]])
array([2, 0, 1])
Methods
decision_function(X) Returns the distance of each sample from the decision boundary for each class.
fit(X, y) Fit underlying estimators.
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes]) Partially fit underlying estimators
predict(X) Predict multi-class targets using underlying estimators.
predict_proba(X) Probability estimates.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Returns the distance of each sample from the decision boundary for each class. This can only be used with estimators which implement the decision_function method. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Tarray-like of shape (n_samples, n_classes) or (n_samples,) for binary classification.
Changed in version 0.19: output shape changed to (n_samples,) to conform to scikit-learn conventions for binary classification.
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification. Returns
self
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property multilabel_
Whether this is a multilabel classifier
partial_fit(X, y, classes=None) [source]
Partially fit underlying estimators. Should be used when there is insufficient memory to train on all the data at once. Chunks of data can be passed over several iterations. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification.
classesarray, shape (n_classes, )
Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is only required in the first call of partial_fit and can be omitted in the subsequent calls. Returns
self
predict(X) [source]
Predict multi-class targets using underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Predicted multi-class targets.
predict_proba(X) [source]
Probability estimates. The returned estimates for all classes are ordered by label of classes. Note that in the multilabel case, each sample can have any number of labels. This returns the marginal probability that the given sample has the label in question. For example, it is entirely consistent that two labels both have a 90% probability of applying to a given sample. In the single label multiclass case, the rows of the returned matrix sum to 1. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
T(sparse) array-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.multiclass.OneVsRestClassifier
Multilabel classification
Receiver Operating Characteristic (ROC)
Precision-Recall
Classifier Chain | |
doc_29076 |
The day of the week with Monday=0, Sunday=6. | |
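A quick illustration, assuming this is the pandas `dayofweek` accessor on a `DatetimeIndex` (it is also available via `Series.dt`):

```python
import pandas as pd

# 2024-01-01 was a Monday, so the codes run Mon=0, Tue=1, Wed=2
idx = pd.date_range("2024-01-01", periods=3, freq="D")
list(idx.dayofweek)  # [0, 1, 2]
```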
doc_29077 | Returns a dict representation of QueryDict. For every (key, list) pair in QueryDict, dict will have (key, item), where item is one element of the list, using the same logic as QueryDict.__getitem__(): >>> q = QueryDict('a=1&a=3&a=5')
>>> q.dict()
{'a': '5'} | |
doc_29078 | This class is parallel to BytesParser, but handles string input. Changed in version 3.3: Removed the strict argument. Added the policy keyword. Changed in version 3.6: _class defaults to the policy message_factory.
parse(fp, headersonly=False)
Read all the data from the text-mode file-like object fp, parse the resulting text, and return the root message object. fp must support both the readline() and the read() methods on file-like objects. Other than the text mode requirement, this method operates like BytesParser.parse().
parsestr(text, headersonly=False)
Similar to the parse() method, except it takes a string object instead of a file-like object. Calling this method on a string is equivalent to wrapping text in a StringIO instance first and calling parse(). Optional headersonly is as with the parse() method. | |
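The string-input path can be sketched directly with `parsestr`:

```python
from email.parser import Parser

raw = "From: alice@example.com\nSubject: Greetings\n\nHello, world.\n"
msg = Parser().parsestr(raw)
msg["Subject"]      # 'Greetings'
msg.get_payload()   # 'Hello, world.\n'

# headersonly=True stops parsing after the header block
hdrs = Parser().parsestr(raw, headersonly=True)
```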
doc_29079 |
Alias for set_linestyle. | |
doc_29080 | Yields ModuleInfo for all submodules on path, or, if path is None, all top-level modules on sys.path. path should be either None or a list of paths to look for modules in. prefix is a string to output on the front of every module name on output. Note Only works for a finder which defines an iter_modules() method. This interface is non-standard, so the module also provides implementations for importlib.machinery.FileFinder and zipimport.zipimporter. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. | |
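A small sketch using the standard library's `json` package as the search path:

```python
import json
import pkgutil

# submodules of the stdlib json package
names = sorted(m.name for m in pkgutil.iter_modules(json.__path__))
# the prefix string is prepended to every yielded name
prefixed = [m.name for m in pkgutil.iter_modules(json.__path__, prefix="json.")]
```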
doc_29081 | Construct an IPv4 address. An AddressValueError is raised if address is not a valid IPv4 address. The following constitutes a valid IPv4 address: A string in decimal-dot notation, consisting of four decimal integers in the inclusive range 0–255, separated by dots (e.g. 192.168.0.1). Each integer represents an octet (byte) in the address. Leading zeroes are tolerated only for values less than 8 (as there is no ambiguity between the decimal and octal interpretations of such strings). An integer that fits into 32 bits. An integer packed into a bytes object of length 4 (most significant octet first). >>> ipaddress.IPv4Address('192.168.0.1')
IPv4Address('192.168.0.1')
>>> ipaddress.IPv4Address(3232235521)
IPv4Address('192.168.0.1')
>>> ipaddress.IPv4Address(b'\xC0\xA8\x00\x01')
IPv4Address('192.168.0.1')
version
The appropriate version number: 4 for IPv4, 6 for IPv6.
max_prefixlen
The total number of bits in the address representation for this version: 32 for IPv4, 128 for IPv6. The prefix defines the number of leading bits in an address that are compared to determine whether or not an address is part of a network.
compressed
exploded
The string representation in dotted decimal notation. Leading zeroes are never included in the representation. As IPv4 does not define a shorthand notation for addresses with octets set to zero, these two attributes are always the same as str(addr) for IPv4 addresses. Exposing these attributes makes it easier to write display code that can handle both IPv4 and IPv6 addresses.
packed
The binary representation of this address - a bytes object of the appropriate length (most significant octet first). This is 4 bytes for IPv4 and 16 bytes for IPv6.
reverse_pointer
The name of the reverse DNS PTR record for the IP address, e.g.: >>> ipaddress.ip_address("127.0.0.1").reverse_pointer
'1.0.0.127.in-addr.arpa'
>>> ipaddress.ip_address("2001:db8::1").reverse_pointer
'1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa'
This is the name that could be used for performing a PTR lookup, not the resolved hostname itself. New in version 3.5.
is_multicast
True if the address is reserved for multicast use. See RFC 3171 (for IPv4) or RFC 2373 (for IPv6).
is_private
True if the address is allocated for private networks. See iana-ipv4-special-registry (for IPv4) or iana-ipv6-special-registry (for IPv6).
is_global
True if the address is allocated for public networks. See iana-ipv4-special-registry (for IPv4) or iana-ipv6-special-registry (for IPv6). New in version 3.4.
is_unspecified
True if the address is unspecified. See RFC 5735 (for IPv4) or RFC 2373 (for IPv6).
is_reserved
True if the address is otherwise IETF reserved.
is_loopback
True if this is a loopback address. See RFC 3330 (for IPv4) or RFC 2373 (for IPv6).
is_link_local
True if the address is reserved for link-local usage. See RFC 3927. | |
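The attributes above can be exercised on a single address:

```python
import ipaddress

addr = ipaddress.IPv4Address("10.0.0.1")
addr.version           # 4
addr.max_prefixlen     # 32
addr.packed            # b'\n\x00\x00\x01' (most significant octet first)
addr.is_private        # True -- 10.0.0.0/8 is a private network
addr.reverse_pointer   # '1.0.0.10.in-addr.arpa'
```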
doc_29082 | Run the pygame unit test suite run(*args, **kwds) -> tuple Positional arguments (optional): The names of tests to include. If omitted then all tests are run. Test names
need not include the trailing '_test'. Keyword arguments: incomplete - fail incomplete tests (default False)
nosubprocess - run all test suites in the current process
(default False, use separate subprocesses)
dump - dump failures/errors as dict ready to eval (default False)
file - if provided, the name of a file into which to dump failures/errors
timings - if provided, the number of times to run each individual test to
get an average run time (default is run each test once)
exclude - A list of TAG names to exclude from the run
show_output - show silenced stderr/stdout on errors (default False)
all - dump all results, not just errors (default False)
randomize - randomize order of tests (default False)
seed - if provided, a seed randomizer integer
multi_thread - if provided, the number of THREADS in which to run
subprocessed tests
time_out - if subprocess is True then the time limit in seconds before
killing a test (default 30)
fake - if provided, the name of the fake tests package in the
run_tests__tests subpackage to run instead of the normal
pygame tests
python - the path to a python executable to run subprocessed tests
(default sys.executable) Return value: A tuple of total number of tests run, dictionary of error information.
The dictionary is empty if no errors were recorded. By default individual test modules are run in separate subprocesses. This recreates normal pygame usage where pygame.init() and pygame.quit() are called only once per program execution, and avoids unfortunate interactions between test modules. Also, a time limit is placed on test execution, so frozen tests are killed when their time allotment has expired. Use the single process option if threading is not working properly or if tests are taking too long. It is not guaranteed that all tests will pass in single process mode. Tests are run in a randomized order if the randomize argument is True or a seed argument is provided. If no seed integer is provided then the system time is used. Individual test modules may have a __tags__ attribute, a list of tag strings used to selectively omit modules from a run. By default only 'interactive' modules such as cdrom_test are ignored. An interactive module must be run from the console as a Python program. This function can only be called once per Python session. It is not reentrant. | |
doc_29083 | Run the tests associated with this suite, collecting the result into the test result object passed as result. Note that unlike TestCase.run(), TestSuite.run() requires the result object to be passed in. | |
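The required result argument can be demonstrated with a one-test suite:

```python
import unittest

class Smoke(unittest.TestCase):
    def test_passes(self):
        self.assertTrue(True)

suite = unittest.TestSuite([Smoke("test_passes")])
result = unittest.TestResult()   # unlike TestCase.run(), this must be supplied
suite.run(result)
result.testsRun          # 1
result.wasSuccessful()   # True
```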
doc_29084 | Enable a server to accept connections. If backlog is specified, it must be at least 0 (if it is lower, it is set to 0); it specifies the number of unaccepted connections that the system will allow before refusing new connections. If not specified, a default reasonable value is chosen. Changed in version 3.5: The backlog parameter is now optional. | |
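A minimal server/client round-trip showing `listen` with an explicit backlog (since Python 3.5 the argument may be omitted entirely, i.e. `srv.listen()`):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS choose a free port
srv.listen(5)                # allow up to 5 unaccepted connections to queue
port = srv.getsockname()[1]

# a client can now connect
cli = socket.create_connection(("127.0.0.1", port))
cli.close()
srv.close()
```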
doc_29085 |
The character width (WX). | |
doc_29086 |
Convert to float. | |
doc_29087 | clip the area where to draw. Just pass None (default) to reset the clip set_clip(screen_rect=None) -> None | |
doc_29088 | Creates a BRIN index. Set the autosummarize parameter to True to enable automatic summarization to be performed by autovacuum. The pages_per_range argument takes a positive integer. Changed in Django 3.2: Positional argument *expressions was added in order to support functional indexes. | |
doc_29089 |
Bases: mpl_toolkits.axes_grid1.axes_size._Base An instance whose size is a fraction of the ref_size. >>> s = Fraction(0.3, AxesX(ax))
get_size(renderer)[source] | |
doc_29090 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
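The nested `<component>__<parameter>` form can be sketched with a scikit-learn `Pipeline` (the names here are illustrative):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
# update a parameter of a nested component via <component>__<parameter>
pipe.set_params(clf__C=0.5)
pipe.get_params()["clf__C"]   # 0.5
```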
doc_29091 | The object passed as the tzinfo argument to the time constructor, or None if none was passed. | |
doc_29092 | Determine whether code is in tableC.4 (Non-character code points). | |
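This is the `stringprep.in_table_c4` predicate; U+FFFE is one of the non-character code points it matches:

```python
import stringprep

stringprep.in_table_c4("\ufffe")   # True: a non-character code point
stringprep.in_table_c4("a")        # False: an ordinary character
```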
doc_29093 | Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with item(). Tensors are automatically moved to the CPU first if necessary. This operation is not differentiable. Examples: >>> a = torch.randn(2, 2)
>>> a.tolist()
[[0.012766935862600803, 0.5415473580360413],
[-0.08909505605697632, 0.7729271650314331]]
>>> a[0,0].tolist()
0.012766935862600803 | |
doc_29094 | Synchronize and close the persistent dict object. Operations on a closed shelf will fail with a ValueError. | |
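The close-then-fail behaviour can be sketched with a throwaway shelf file:

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store")
db = shelve.open(path)
db["answer"] = 42
db.close()                 # synchronizes and closes the shelf

try:
    db["answer"]           # any operation on a closed shelf fails
except ValueError as exc:
    closed_error = exc
```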
doc_29095 |
Predict classes for X. The predicted class of an input sample is computed as the weighted mean prediction of the classifiers in the ensemble. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. Returns
yndarray of shape (n_samples,)
The predicted classes. | |
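The wording ("weighted mean prediction of the classifiers in the ensemble") matches an AdaBoost-style predict; a sketch assuming scikit-learn's `AdaBoostClassifier`:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier

X, y = load_iris(return_X_y=True)
clf = AdaBoostClassifier(n_estimators=10, random_state=0).fit(X, y)
pred = clf.predict(X[:5])   # weighted vote across the boosted classifiers
```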
doc_29096 | See Migration guide for more details. tf.compat.v1.type_spec_from_value
tf.type_spec_from_value(
value
)
Examples:
tf.type_spec_from_value(tf.constant([1, 2, 3]))
TensorSpec(shape=(3,), dtype=tf.int32, name=None)
tf.type_spec_from_value(np.array([4.0, 5.0], np.float64))
TensorSpec(shape=(2,), dtype=tf.float64, name=None)
tf.type_spec_from_value(tf.ragged.constant([[1, 2], [3, 4, 5]]))
RaggedTensorSpec(TensorShape([2, None]), tf.int32, 1, tf.int64)
example_input = tf.ragged.constant([[1, 2], [3]])
@tf.function(input_signature=[tf.type_spec_from_value(example_input)])
def f(x):
return tf.reduce_sum(x, axis=1)
Args
value A value that can be accepted or returned by TensorFlow APIs. Accepted types for value include tf.Tensor, any value that can be converted to tf.Tensor using tf.convert_to_tensor, and any subclass of CompositeTensor (such as tf.RaggedTensor).
Returns A TypeSpec that is compatible with value.
Raises
TypeError If a TypeSpec cannot be built for value, because its type is not supported. | |
doc_29097 |
Return the Transform instance used by this artist. | |
doc_29098 | See Migration guide for more details. tf.compat.v1.autograph.experimental.Feature These conversion options are experimental. They are subject to change without notice and offer no guarantees. Example Usage optionals = tf.autograph.experimental.Feature.EQUALITY_OPERATORS
@tf.function(experimental_autograph_options=optionals)
def f(i):
if i == 0: # EQUALITY_OPERATORS allows the use of == here.
tf.print('i is zero')
Attributes
ALL Enable all features.
AUTO_CONTROL_DEPS Insert of control dependencies in the generated code.
ASSERT_STATEMENTS Convert Tensor-dependent assert statements to tf.Assert.
BUILTIN_FUNCTIONS Convert builtin functions applied to Tensors to their TF counterparts.
EQUALITY_OPERATORS Whether to convert the comparison operators, like equality. This is soon to be deprecated as support is being added to the Tensor class.
LISTS Convert list idioms, like initializers, slices, append, etc.
NAME_SCOPES Insert name scopes that name ops according to context, like the function they were defined in.
Class Variables
ALL tf.autograph.experimental.Feature
ASSERT_STATEMENTS tf.autograph.experimental.Feature
AUTO_CONTROL_DEPS tf.autograph.experimental.Feature
BUILTIN_FUNCTIONS tf.autograph.experimental.Feature
EQUALITY_OPERATORS tf.autograph.experimental.Feature
LISTS tf.autograph.experimental.Feature
NAME_SCOPES tf.autograph.experimental.Feature | |
doc_29099 | A form mixin that works on ModelForms, rather than a standalone form. Since this is a subclass of SingleObjectMixin, instances of this mixin have access to the model and queryset attributes, describing the type of object that the ModelForm is manipulating. If you specify both the fields and form_class attributes, an ImproperlyConfigured exception will be raised. Mixins django.views.generic.edit.FormMixin django.views.generic.detail.SingleObjectMixin Methods and Attributes
model
A model class. Can be explicitly provided, otherwise will be determined by examining self.object or queryset.
fields
A list of names of fields. This is interpreted the same way as the Meta.fields attribute of ModelForm. This is a required attribute if you are generating the form class automatically (e.g. using model). Omitting this attribute will result in an ImproperlyConfigured exception.
success_url
The URL to redirect to when the form is successfully processed. success_url may contain dictionary string formatting, which will be interpolated against the object’s field attributes. For example, you could use success_url="/polls/{slug}/" to redirect to a URL composed out of the slug field on a model.
get_form_class()
Retrieve the form class to instantiate. If form_class is provided, that class will be used. Otherwise, a ModelForm will be instantiated using the model associated with the queryset, or with the model, depending on which attribute is provided.
get_form_kwargs()
Add the current instance (self.object) to the standard get_form_kwargs().
get_success_url()
Determine the URL to redirect to when the form is successfully validated. Returns django.views.generic.edit.ModelFormMixin.success_url if it is provided; otherwise, attempts to use the get_absolute_url() of the object.
form_valid(form)
Saves the form instance, sets the current object for the view, and redirects to get_success_url().
form_invalid(form)
Renders a response, providing the invalid form as context. |