doc_3600 | See Migration guide for more details. tf.compat.v1.app.flags.multi_flags_validator
tf.compat.v1.flags.multi_flags_validator(
flag_names, message='Flag validation failed',
flag_values=_flagvalues.FLAGS
)
Registers the decorated function as a validator for flag_names, e.g. @flags.multi_flags_validator(['foo', 'bar']) def _CheckFooBar(flags_dict): ... See register_multi_flags_validator() for the specification of the checker function.
Args
flag_names [str], a list of the flag names to be checked.
message str, error text to be shown to the user if checker returns False. If checker raises flags.ValidationError, message from the raised error will be shown.
flag_values flags.FlagValues, optional FlagValues instance to validate against.
Returns A function decorator that registers its function argument as a validator.
Raises
AttributeError Raised when a flag is not registered as a valid flag name. | |
doc_3601 | The type of frame objects such as found in tb.tb_frame if tb is a traceback object. See the language reference for details of the available attributes and operations. | |
doc_3602 |
Bases: object get_attribute_from_ref_artist(attr_name)[source]
get_ref_artist()[source]
Return the underlying artist that actually defines some properties (e.g., color) of this artist. | |
doc_3603 |
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
X : array-like of shape (n_samples, n_features)
Test samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. Returns
score : float
Mean accuracy of self.predict(X) w.r.t. y. | |
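The mean-accuracy metric described above reduces to a simple fraction: the share of samples whose prediction exactly matches the label. A plain-Python sketch of the same computation, with hypothetical labels:

```python
# Mean accuracy: fraction of samples where prediction == true label.
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

score = sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)
print(score)  # 0.75
```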
doc_3604 |
Return the Colormap instance. | |
doc_3605 | Abort a file transfer that is in progress. Using this does not always work, but it’s worth a try. | |
doc_3606 | The reason for this error. It can be a message string or another exception instance. | |
doc_3607 | unlock()
Maildir mailboxes do not support (or require) locking, so these methods do nothing. | |
doc_3608 |
Return the font variant. Values are: 'normal' or 'small-caps'. | |
doc_3609 |
Calculates which of the given dates are valid days, and which are not. New in version 1.7.0. Parameters
dates : array_like of datetime64[D]
The array of dates to process.
weekmask : str or array_like of bool, optional
A seven-element array indicating which of Monday through Sunday are valid days. May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like '1111100'; or a string like "Mon Tue Wed Thu Fri", made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun
holidays : array_like of datetime64[D], optional
An array of dates to consider as invalid dates. They may be specified in any order, and NaT (not-a-time) dates are ignored. This list is saved in a normalized form that is suited for fast calculations of valid days.
busdaycal : busdaycalendar, optional
A busdaycalendar object which specifies the valid days. If this parameter is provided, neither weekmask nor holidays may be provided.
out : array of bool, optional
If provided, this array is filled with the result. Returns
out : array of bool
An array with the same shape as dates, containing True for each valid day, and False for each invalid day. See also busdaycalendar
An object that specifies a custom set of valid days. busday_offset
Applies an offset counted in valid days. busday_count
Counts how many valid days are in a half-open date range. Examples >>> # The weekdays are Friday, Saturday, and Monday
... np.is_busday(['2011-07-01', '2011-07-02', '2011-07-18'],
... holidays=['2011-07-01', '2011-07-04', '2011-07-17'])
array([False, False, True]) | |
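For the common case of a string weekmask plus a holiday list, the semantics can be sketched with the standard library alone. This is a simplified stand-in, not NumPy's implementation (no array broadcasting, no busdaycalendar normalization):

```python
import datetime

def is_busday(date_str, weekmask='1111100', holidays=()):
    # weekmask[i] == '1' marks weekday i (Monday=0) as valid;
    # any date listed in holidays is invalid regardless of the mask.
    d = datetime.date.fromisoformat(date_str)
    return weekmask[d.weekday()] == '1' and date_str not in holidays

holidays = ('2011-07-01', '2011-07-04', '2011-07-17')
result = [is_busday(s, holidays=holidays)
          for s in ('2011-07-01', '2011-07-02', '2011-07-18')]
print(result)  # [False, False, True], matching the np.is_busday example
```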
doc_3610 | stringprep.in_table_a1(code)
Determine whether code is in tableA.1 (Unassigned code points in Unicode 3.2).
stringprep.in_table_b1(code)
Determine whether code is in tableB.1 (Commonly mapped to nothing).
stringprep.map_table_b2(code)
Return the mapped value for code according to tableB.2 (Mapping for case-folding used with NFKC).
stringprep.map_table_b3(code)
Return the mapped value for code according to tableB.3 (Mapping for case-folding used with no normalization).
stringprep.in_table_c11(code)
Determine whether code is in tableC.1.1 (ASCII space characters).
stringprep.in_table_c12(code)
Determine whether code is in tableC.1.2 (Non-ASCII space characters).
stringprep.in_table_c11_c12(code)
Determine whether code is in tableC.1 (Space characters, union of C.1.1 and C.1.2).
stringprep.in_table_c21(code)
Determine whether code is in tableC.2.1 (ASCII control characters).
stringprep.in_table_c22(code)
Determine whether code is in tableC.2.2 (Non-ASCII control characters).
stringprep.in_table_c21_c22(code)
Determine whether code is in tableC.2 (Control characters, union of C.2.1 and C.2.2).
stringprep.in_table_c3(code)
Determine whether code is in tableC.3 (Private use).
stringprep.in_table_c4(code)
Determine whether code is in tableC.4 (Non-character code points).
stringprep.in_table_c5(code)
Determine whether code is in tableC.5 (Surrogate codes).
stringprep.in_table_c6(code)
Determine whether code is in tableC.6 (Inappropriate for plain text).
stringprep.in_table_c7(code)
Determine whether code is in tableC.7 (Inappropriate for canonical representation).
stringprep.in_table_c8(code)
Determine whether code is in tableC.8 (Change display properties or are deprecated).
stringprep.in_table_c9(code)
Determine whether code is in tableC.9 (Tagging characters).
stringprep.in_table_d1(code)
Determine whether code is in tableD.1 (Characters with bidirectional property “R” or “AL”).
stringprep.in_table_d2(code)
Determine whether code is in tableD.2 (Characters with bidirectional property “L”). | |
doc_3611 | Rename mailbox named oldmailbox to newmailbox. | |
doc_3612 |
Return (status, output) of executed command. Deprecated since version 1.17: use subprocess.Popen instead. Parameters
command : str
A concatenated string of executable and arguments.
execute_in : str
Before running command, cd into execute_in; afterwards, cd back (cd -).
use_shell : {bool, None}, optional
If True, execute sh -c command. Default None (True).
use_tee : {bool, None}, optional
If True, use tee. Default None (True). Returns
res : str
Both stdout and stderr messages. Notes On NT and DOS systems the returned status is correct for external commands. Wild cards will not work on non-posix systems or when use_shell=0. | |
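Since exec_command is deprecated in favor of subprocess, the same (status, output) pair can be obtained with subprocess.run:

```python
import subprocess
import sys

# Run a command and collect both its exit status and output,
# roughly what exec_command returned.
res = subprocess.run([sys.executable, '-c', 'print("hello")'],
                     capture_output=True, text=True)
status, output = res.returncode, res.stdout.strip()
print(status, output)  # 0 hello
```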
doc_3613 | When using class-based views, you can use the UserPassesTestMixin to do this.
test_func()
You have to override the test_func() method of the class to provide the test that is performed. Furthermore, you can set any of the parameters of AccessMixin to customize the handling of unauthorized users: from django.contrib.auth.mixins import UserPassesTestMixin
class MyView(UserPassesTestMixin, View):
def test_func(self):
return self.request.user.email.endswith('@example.com')
get_test_func()
You can also override the get_test_func() method to have the mixin use a differently named function for its checks (instead of test_func()).
Stacking UserPassesTestMixin Due to the way UserPassesTestMixin is implemented, you cannot stack them in your inheritance list. The following does NOT work: class TestMixin1(UserPassesTestMixin):
def test_func(self):
return self.request.user.email.endswith('@example.com')
class TestMixin2(UserPassesTestMixin):
def test_func(self):
return self.request.user.username.startswith('django')
class MyView(TestMixin1, TestMixin2, View):
...
If TestMixin1 would call super() and take that result into account, TestMixin1 wouldn’t work standalone anymore. | |
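The stacking limitation is ordinary Python method resolution: the first test_func found in the MRO shadows the rest. A framework-free sketch with plain classes standing in for the mixins:

```python
# Plain-Python stand-ins for stacked mixins: only the first
# test_func found in the MRO ever runs.
class TestMixin1:
    def test_func(self):
        return 'mixin1'

class TestMixin2:
    def test_func(self):
        return 'mixin2'

class MyView(TestMixin1, TestMixin2):
    pass

print(MyView().test_func())  # mixin1; TestMixin2's check is silently skipped
```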
doc_3614 |
Return specified diagonals. If a is 2-D, returns the diagonal of a with the given offset, i.e., the collection of elements of the form a[i, i+offset]. If a has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-array whose diagonal is returned. The shape of the resulting array can be determined by removing axis1 and axis2 and appending an index to the right equal to the size of the resulting diagonals. In versions of NumPy prior to 1.7, this function always returned a new, independent array containing a copy of the values in the diagonal. In NumPy 1.7 and 1.8, it continues to return a copy of the diagonal, but depending on this fact is deprecated. Writing to the resulting array continues to work as it used to, but a FutureWarning is issued. Starting in NumPy 1.9 it returns a read-only view on the original array. Attempting to write to the resulting array will produce an error. In some future release, it will return a read/write view and writing to the returned array will alter your original array. The returned array will have the same type as the input array. If you don’t write to the array returned by this function, then you can just ignore all of the above. If you depend on the current behavior, then we suggest copying the returned array explicitly, i.e., use np.diagonal(a).copy() instead of just np.diagonal(a). This will work with both past and future versions of NumPy. Parameters
a : array_like
Array from which the diagonals are taken.
offset : int, optional
Offset of the diagonal from the main diagonal. Can be positive or negative. Defaults to main diagonal (0).
axis1 : int, optional
Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0).
axis2 : int, optional
Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis (1). Returns
array_of_diagonals : ndarray
If a is 2-D, then a 1-D array containing the diagonal and of the same type as a is returned unless a is a matrix, in which case a 1-D array rather than a (2-D) matrix is returned in order to maintain backward compatibility. If a.ndim > 2, then the dimensions specified by axis1 and axis2 are removed, and a new axis inserted at the end corresponding to the diagonal. Raises
ValueError
If the dimension of a is less than 2. See also diag
MATLAB work-a-like for 1-D and 2-D arrays. diagflat
Create diagonal arrays. trace
Sum along diagonals. Examples >>> a = np.arange(4).reshape(2,2)
>>> a
array([[0, 1],
[2, 3]])
>>> a.diagonal()
array([0, 3])
>>> a.diagonal(1)
array([1])
A 3-D example: >>> a = np.arange(8).reshape(2,2,2); a
array([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> a.diagonal(0, # Main diagonals of two arrays created by skipping
... 0, # across the outer(left)-most axis last and
... 1) # the "middle" (row) axis first.
array([[0, 6],
[1, 7]])
The sub-arrays whose main diagonals we just obtained; note that each corresponds to fixing the right-most (column) axis, and that the diagonals are “packed” in rows. >>> a[:,:,0] # main diagonal is [0 6]
array([[0, 2],
[4, 6]])
>>> a[:,:,1] # main diagonal is [1 7]
array([[1, 3],
[5, 7]])
The anti-diagonal can be obtained by reversing the order of elements using either numpy.flipud or numpy.fliplr. >>> a = np.arange(9).reshape(3, 3)
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> np.fliplr(a).diagonal() # Horizontal flip
array([2, 4, 6])
>>> np.flipud(a).diagonal() # Vertical flip
array([6, 4, 2])
Note that the order in which the diagonal is retrieved varies depending on the flip function. | |
doc_3615 |
Calls str.decode element-wise. See also char.decode | |
doc_3616 | See Migration guide for more details. tf.compat.v1.nn.relu6
tf.nn.relu6(
features, name=None
)
Args
features A Tensor with type float, double, int32, int64, uint8, int16, or int8.
name A name for the operation (optional).
Returns A Tensor with the same type as features.
References: Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010 (pdf) | |
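relu6 computes min(max(features, 0), 6) element-wise. A scalar sketch of the same clamp in plain Python:

```python
def relu6(x):
    # Clamp x into [0, 6]: negative inputs become 0, values above 6 saturate.
    return min(max(x, 0.0), 6.0)

print([relu6(v) for v in (-1.0, 3.0, 10.0)])  # [0.0, 3.0, 6.0]
```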
doc_3617 | In-place version of nextafter() | |
doc_3618 |
Return sample standard deviation over requested axis. Normalized by N-1 by default. This can be changed using the ddof argument. Parameters
axis : int, optional, default None
Axis for the function to be applied on.
ddof : int, default 1
Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA. Returns
Timedelta | |
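The N - ddof divisor above is the usual sample-variance correction; a plain-Python version of the same formula:

```python
import math

def sample_std(xs, ddof=1):
    # Divisor is N - ddof; ddof=1 gives the unbiased sample estimate.
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - ddof))

print(sample_std([1, 2, 3, 4]))  # ~1.291 (vs. ~1.118 with ddof=0)
```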
doc_3619 | Complex number with zero real part and NaN imaginary part. Equivalent to complex(0.0, float('nan')). New in version 3.6. | |
doc_3620 |
Return a scalar result of performing the reduction operation. Parameters
name : str
Name of the function; supported values are: {any, all, min, max, sum, mean, median, prod, std, var, sem, kurt, skew}.
skipna : bool, default True
If True, skip NaN values. **kwargs
Additional keyword arguments passed to the reduction function. Currently, ddof is the only supported kwarg. Returns
scalar
Raises
TypeError : if the subclass does not define reductions | |
doc_3621 | The walk() method is an all-purpose generator which can be used to iterate over all the parts and subparts of a message object tree, in depth-first traversal order. You will typically use walk() as the iterator in a for loop; each iteration returns the next subpart. Here’s an example that prints the MIME type of every part of a multipart message structure: >>> for part in msg.walk():
... print(part.get_content_type())
multipart/report
text/plain
message/delivery-status
text/plain
text/plain
message/rfc822
text/plain
walk iterates over the subparts of any part where is_multipart() returns True, even though msg.get_content_maintype() == 'multipart' may return False. We can see this in our example by making use of the _structure debug helper function: >>> for part in msg.walk():
... print(part.get_content_maintype() == 'multipart',
... part.is_multipart())
True True
False False
False True
False False
False False
False True
False False
>>> _structure(msg)
multipart/report
text/plain
message/delivery-status
text/plain
text/plain
message/rfc822
text/plain
Here the message parts are not multiparts, but they do contain subparts. is_multipart() returns True and walk descends into the subparts. | |
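A self-contained variant of walk() using email.message.EmailMessage, with a hypothetical two-part message rather than the delivery report from the example above:

```python
from email.message import EmailMessage

# Build a small multipart message and walk its tree depth-first.
msg = EmailMessage()
msg.set_content('plain body')
msg.add_attachment(b'\x00\x01', maintype='application',
                   subtype='octet-stream', filename='blob.bin')

for part in msg.walk():
    print(part.get_content_type())
# multipart/mixed
# text/plain
# application/octet-stream
```

The container itself is yielded first, then each subpart in order, exactly as in the multipart/report example.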
doc_3622 |
Return a Bbox that contains all of the given bboxes. | |
doc_3623 | tf.nn.erosion2d(
value, filters, strides, padding, data_format, dilations, name=None
)
The value tensor has shape [batch, in_height, in_width, depth] and the filters tensor has shape [filters_height, filters_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format. In detail, the grayscale morphological 2-D erosion is given by: output[b, y, x, c] =
min_{dy, dx} value[b,
strides[1] * y - dilations[1] * dy,
strides[2] * x - dilations[2] * dx,
c] -
filters[dy, dx, c]
Duality: The erosion of value by the filters is equal to the negation of the dilation of -value by the reflected filters.
Args
value A Tensor. 4-D with shape [batch, in_height, in_width, depth].
filters A Tensor. Must have the same type as value. 3-D with shape [filters_height, filters_width, depth].
strides A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
data_format A string, only "NHWC" is currently supported.
dilations A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
name A name for the operation (optional). If not specified "erosion2d" is used.
Returns A Tensor. Has the same type as value. 4-D with shape [batch, out_height, out_width, depth].
Raises
ValueError If the value depth does not match filters' shape, or if padding is other than 'VALID' or 'SAME'. | |
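The erosion formula above reduces, in one spatial dimension with a single channel and unit stride/dilation, to a sliding minimum of value minus filter. A plain-Python sketch of that special case (not the TensorFlow op):

```python
def erode1d(values, filt):
    # output[y] = min over dy of values[y - dy] - filt[dy],
    # i.e. grayscale erosion with unit stride/dilation and "VALID" padding.
    k = len(filt)
    return [min(values[y - dy] - filt[dy] for dy in range(k))
            for y in range(k - 1, len(values))]

print(erode1d([5, 3, 4, 6], [0, 1]))  # [3, 2, 3]
```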
doc_3624 | Get channel binding data for current connection, as a bytes object. Returns None if not connected or the handshake has not been completed. The cb_type parameter allow selection of the desired channel binding type. Valid channel binding types are listed in the CHANNEL_BINDING_TYPES list. Currently only the ‘tls-unique’ channel binding, defined by RFC 5929, is supported. ValueError will be raised if an unsupported channel binding type is requested. New in version 3.3. | |
doc_3625 |
template_name: 'django/forms/widgets/checkbox_select.html'
option_template_name: 'django/forms/widgets/checkbox_option.html'
Similar to SelectMultiple, but rendered as a list of checkboxes: <div>
<div><input type="checkbox" name="..." ></div>
...
</div>
The outer <div> container receives the id attribute of the widget, if defined, or BoundField.auto_id otherwise. Changed in Django 4.0: So they are announced more concisely by screen readers, checkboxes were changed to render in <div> tags. | |
doc_3626 | An optional list/tuple of years to use in the “year” select box. The default is a list containing the current year and the next 9 years. | |
doc_3627 | This attribute is a tuple of classes that are considered when looking for base classes during method resolution. | |
doc_3628 | Returns the absolute value of x. | |
doc_3629 |
Akaike information criterion for the current model on the input X. Parameters
X : array of shape (n_samples, n_dimensions)
Returns
aic : float
The lower the better. | |
doc_3630 | A boolean instructing the field to accept Unicode letters in addition to ASCII letters. Defaults to False. | |
doc_3631 | The FileStorage class is a thin wrapper over incoming files. It is used by the request object to represent uploaded files. All the attributes of the wrapper stream are proxied by the file storage so it’s possible to do storage.read() instead of the long form storage.stream.read().
stream
The input stream for the uploaded file. This usually points to an open temporary file.
filename
The filename of the file on the client.
name
The name of the form field.
headers
The multipart headers as Headers object. This usually contains irrelevant information but in combination with custom multipart requests the raw headers might be interesting. Changelog New in version 0.6.
close()
Close the underlying file if possible.
property content_length
The content-length sent in the header. Usually not available.
property content_type
The content-type sent in the header. Usually not available.
property mimetype
Like content_type, but without parameters (e.g., without charset, type etc.) and always lowercase. For example if the content type is text/HTML; charset=utf-8 the mimetype would be 'text/html'. Changelog New in version 0.7.
property mimetype_params
The mimetype parameters as dict. For example if the content type is text/html; charset=utf-8 the params would be {'charset': 'utf-8'}. Changelog New in version 0.7.
save(dst, buffer_size=16384)
Save the file to a destination path or file object. If the destination is a file object you have to close it yourself after the call. The buffer size is the number of bytes held in memory during the copy process. It defaults to 16KB. For secure file saving also have a look at secure_filename(). Parameters
dst – a filename, os.PathLike, or open file object to write to.
buffer_size – Passed as the length parameter of shutil.copyfileobj(). Changelog Changed in version 1.0: Supports pathlib. | |
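The mimetype/mimetype_params split described above is plain Content-Type parsing. A minimal sketch using the header value from the docs (not FileStorage's actual implementation, which handles quoting and edge cases):

```python
# Split a Content-Type header into a lowercase mimetype and its parameters.
ctype = 'text/HTML; charset=utf-8'

mimetype, _, rest = ctype.partition(';')
mimetype = mimetype.strip().lower()
params = dict(p.strip().split('=', 1) for p in rest.split(';') if p.strip())

print(mimetype)  # text/html
print(params)    # {'charset': 'utf-8'}
```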
doc_3632 | Return a list of the weeks in the month month of the year as full weeks. Weeks are lists of seven tuples of day numbers and weekday numbers. | |
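This matches calendar.Calendar.monthdays2calendar in the standard library; days that fall outside the month appear with day number 0:

```python
import calendar

# January 2024 starts on a Monday, so the first week needs no padding.
cal = calendar.Calendar()  # firstweekday=0, i.e. Monday
weeks = cal.monthdays2calendar(2024, 1)

print(weeks[0])  # [(1, 0), (2, 1), (3, 2), (4, 3), (5, 4), (6, 5), (7, 6)]
print(all(len(week) == 7 for week in weeks))  # True
```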
doc_3633 | Sends the signal signal to the child process. Note On Windows, SIGTERM is an alias for terminate(). CTRL_C_EVENT and CTRL_BREAK_EVENT can be sent to processes started with a creationflags parameter which includes CREATE_NEW_PROCESS_GROUP. | |
doc_3634 |
Autoscale the axis view to the data (toggle). Convenience method for simple axis view autoscaling. It turns autoscaling on or off, and then, if autoscaling for either axis is on, it performs the autoscaling on the specified axis or Axes. Parameters
enable : bool or None, default: True
True turns autoscaling on, False turns it off. None leaves the autoscaling state unchanged.
axis : {'both', 'x', 'y'}, default: 'both'
Which axis to operate on.
tight : bool or None, default: None
If True, first set the margins to zero. Then, this argument is forwarded to autoscale_view (regardless of its value); see the description of its behavior there. | |
doc_3635 |
Return the cross product of two (arrays of) vectors. The cross product of a and b in \(R^3\) is a vector perpendicular to both a and b. If a and b are arrays of vectors, the vectors are defined by the last axis of a and b by default, and these axes can have dimensions 2 or 3. Where the dimension of either a or b is 2, the third component of the input vector is assumed to be zero and the cross product calculated accordingly. In cases where both input vectors have dimension 2, the z-component of the cross product is returned. Parameters
a : array_like
Components of the first vector(s).
b : array_like
Components of the second vector(s).
axisa : int, optional
Axis of a that defines the vector(s). By default, the last axis.
axisb : int, optional
Axis of b that defines the vector(s). By default, the last axis.
axisc : int, optional
Axis of c containing the cross product vector(s). Ignored if both input vectors have dimension 2, as the return is scalar. By default, the last axis.
axis : int, optional
If defined, the axis of a, b and c that defines the vector(s) and cross product(s). Overrides axisa, axisb and axisc. Returns
c : ndarray
Vector cross product(s). Raises
ValueError
When the dimension of the vector(s) in a and/or b does not equal 2 or 3. See also inner
Inner product outer
Outer product. ix_
Construct index arrays. Notes New in version 1.9.0. Supports full broadcasting of the inputs. Examples Vector cross-product. >>> x = [1, 2, 3]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([-3, 6, -3])
One vector with dimension 2. >>> x = [1, 2]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([12, -6, -3])
Equivalently: >>> x = [1, 2, 0]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([12, -6, -3])
Both vectors with dimension 2. >>> x = [1,2]
>>> y = [4,5]
>>> np.cross(x, y)
array(-3)
Multiple vector cross-products. Note that the direction of the cross product vector is defined by the right-hand rule. >>> x = np.array([[1,2,3], [4,5,6]])
>>> y = np.array([[4,5,6], [1,2,3]])
>>> np.cross(x, y)
array([[-3, 6, -3],
[ 3, -6, 3]])
The orientation of c can be changed using the axisc keyword. >>> np.cross(x, y, axisc=0)
array([[-3, 3],
[ 6, -6],
[-3, 3]])
Change the vector definition of x and y using axisa and axisb. >>> x = np.array([[1,2,3], [4,5,6], [7, 8, 9]])
>>> y = np.array([[7, 8, 9], [4,5,6], [1,2,3]])
>>> np.cross(x, y)
array([[ -6, 12, -6],
[ 0, 0, 0],
[ 6, -12, 6]])
>>> np.cross(x, y, axisa=0, axisb=0)
array([[-24, 48, -24],
[-30, 60, -30],
[-36, 72, -36]]) | |
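For the 3-D case the examples rely on, the component formula is simple enough to write out directly; a plain-Python sketch:

```python
def cross3(a, b):
    # c = a x b for 3-D vectors, component by component.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

print(cross3([1, 2, 3], [4, 5, 6]))  # [-3, 6, -3]
```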
doc_3636 |
Compute the (weighted) graph of neighbors for points in X. Neighborhoods are restricted to points at a distance lower than radius. Parameters
X : array-like of shape (n_samples, n_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
radius : float, default=None
Radius of neighborhoods. The default is the value passed to the constructor.
mode : {'connectivity', 'distance'}, default='connectivity'
Type of returned matrix: 'connectivity' will return the connectivity matrix with ones and zeros; in 'distance' the edges are Euclidean distances between points.
sort_results : bool, default=False
If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode='distance'. New in version 0.22. Returns
A : sparse matrix of shape (n_queries, n_samples_fit)
n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is of format CSR. See also
kneighbors_graph
Examples >>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.5)
>>> neigh.fit(X)
NearestNeighbors(radius=1.5)
>>> A = neigh.radius_neighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 0.],
[1., 0., 1.]]) | |
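With 1-D points and mode='connectivity', the graph is just a thresholded distance matrix. A dense plain-Python sketch reproducing the example above (the real API returns a sparse CSR matrix):

```python
def radius_graph(points, radius):
    # A[i][j] = 1.0 when point j lies within `radius` of point i
    # (dense connectivity matrix for 1-D points).
    return [[1.0 if abs(pi - pj) <= radius else 0.0 for pj in points]
            for pi in points]

print(radius_graph([0, 3, 1], 1.5))
# [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
```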
doc_3637 | get the number of axes on a Joystick get_numaxes() -> int Returns the number of input axes are on a Joystick. There will usually be two for the position. Controls like rudders and throttles are treated as additional axes. The pygame.JOYAXISMOTION events will be in the range from -1.0 to 1.0. A value of 0.0 means the axis is centered. Gamepad devices will usually be -1, 0, or 1 with no values in between. Older analog joystick axes will not always use the full -1 to 1 range, and the centered value will be some area around 0. Analog joysticks usually have a bit of noise in their axis, which will generate a lot of rapid small motion events. | |
doc_3638 | See Migration guide for more details. tf.compat.v1.RaggedTensorSpec
tf.RaggedTensorSpec(
shape=None, dtype=tf.dtypes.float32, ragged_rank=None,
row_splits_dtype=tf.dtypes.int64, flat_values_spec=None
)
Args
shape The shape of the RaggedTensor, or None to allow any shape. If a shape is specified, then all ragged dimensions must have size None.
dtype tf.DType of values in the RaggedTensor.
ragged_rank Python integer, the number of times the RaggedTensor's flat_values is partitioned. Defaults to shape.ndims - 1.
row_splits_dtype dtype for the RaggedTensor's row_splits tensor. One of tf.int32 or tf.int64.
flat_values_spec TypeSpec for flat_values of the RaggedTensor. It shall be provided when the flat_values is a CompositeTensor rather than Tensor. If both dtype and flat_values_spec are provided, dtype must be the same as flat_values_spec.dtype. (experimental)
Attributes
dtype The tf.dtypes.DType specified by this type for the RaggedTensor.
rt = tf.ragged.constant([["a"], ["b", "c"]], dtype=tf.string)
tf.type_spec_from_value(rt).dtype
tf.string
flat_values_spec The TypeSpec of the flat_values of RaggedTensor.
ragged_rank The number of times the RaggedTensor's flat_values is partitioned. Defaults to shape.ndims - 1.
values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])
tf.type_spec_from_value(values).ragged_rank
1
rt1 = tf.RaggedTensor.from_uniform_row_length(values, 2)
tf.type_spec_from_value(rt1).ragged_rank
2
row_splits_dtype The tf.dtypes.DType of the RaggedTensor's row_splits.
rt = tf.ragged.constant([[1, 2, 3], [4]], row_splits_dtype=tf.int64)
tf.type_spec_from_value(rt).row_splits_dtype
tf.int64
shape The statically known shape of the RaggedTensor.
rt = tf.ragged.constant([[0], [1, 2]])
tf.type_spec_from_value(rt).shape
TensorShape([2, None])
rt = tf.ragged.constant([[[0, 1]], [[1, 2], [3, 4]]], ragged_rank=1)
tf.type_spec_from_value(rt).shape
TensorShape([2, None, 2])
value_type The Python type for values that are compatible with this TypeSpec. In particular, all values that are compatible with this TypeSpec must be an instance of this type.
Methods from_value View source
@classmethod
from_value(
value
)
is_compatible_with View source
is_compatible_with(
spec_or_value
)
Returns true if spec_or_value is compatible with this TypeSpec. most_specific_compatible_type View source
most_specific_compatible_type(
other
)
Returns the most specific TypeSpec compatible with self and other.
Args
other A TypeSpec.
Raises
ValueError If there is no TypeSpec that is compatible with both self and other. __eq__ View source
__eq__(
other
)
Return self==value. __ne__ View source
__ne__(
other
)
Return self!=value. | |
doc_3639 | Set the default content type. ctype should either be text/plain or message/rfc822, although this is not enforced. The default content type is not stored in the Content-Type header. | |
doc_3640 | Raises an HTTPException for the given status code or WSGI application. If a status code is given, it will be looked up in the list of exceptions and will raise that exception. If passed a WSGI application, it will wrap it in a proxy WSGI exception and raise that: abort(404) # 404 Not Found
abort(Response('Hello World'))
Parameters
status (Union[int, Response]) –
args (Any) –
kwargs (Any) – Return type
NoReturn | |
doc_3641 | create a Font object from the system fonts SysFont(name, size, bold=False, italic=False) -> Font Return a new Font object that is loaded from the system fonts. The font will match the requested bold and italic flags. Pygame uses a small set of common font aliases. If the specific font you ask for is not available, a reasonable alternative may be used. If a suitable system font is not found this will fall back on loading the default pygame font. The font name can also be an iterable of font names, a string of comma-separated font names, or a bytes of comma-separated font names, in which case the set of names will be searched in order. New in pygame 2.0.1: Accept an iterable of font names. | |
doc_3642 | Return the list of objects that directly refer to any of objs. This function will only locate those containers which support garbage collection; extension types which do refer to other objects but do not support garbage collection will not be found. Note that objects which have already been dereferenced, but which live in cycles and have not yet been collected by the garbage collector can be listed among the resulting referrers. To get only currently live objects, call collect() before calling get_referrers(). Warning Care must be taken when using objects returned by get_referrers() because some of them could still be under construction and hence in a temporarily invalid state. Avoid using get_referrers() for any purpose other than debugging. Raises an auditing event gc.get_referrers with argument objs. | |
doc_3643 | Join a base URL and a possibly relative URL to form an absolute interpretation of the latter. Parameters
base (Union[str, Tuple[str, str, str, str, str]]) – the base URL for the join operation.
url (Union[str, Tuple[str, str, str, str, str]]) – the URL to join.
allow_fragments (bool) – indicates whether fragments should be allowed. Return type
str | |
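The standard library's urllib.parse.urljoin performs the same base-plus-relative join; for example:

```python
from urllib.parse import urljoin

# Relative paths are resolved against the base URL's path.
print(urljoin('https://example.com/a/b.html', '../c.html'))
# https://example.com/c.html

# A scheme-relative URL keeps the base scheme but replaces the host.
print(urljoin('https://example.com/a/b.html', '//cdn.example.com/x.js'))
# https://cdn.example.com/x.js
```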
doc_3644 | Add a new header tuple to the list. Keyword arguments can specify additional parameters for the header value, with underscores converted to dashes: >>> d = Headers()
>>> d.add('Content-Type', 'text/plain')
>>> d.add('Content-Disposition', 'attachment', filename='foo.png')
The keyword argument dumping uses dump_options_header() behind the scenes. Changelog New in version 0.4.1: keyword arguments were added for wsgiref compatibility. | |
doc_3645 | If locale is given and not None, setlocale() modifies the locale setting for the category. The available categories are listed in the data description below. locale may be a string, or an iterable of two strings (language code and encoding). If it’s an iterable, it’s converted to a locale name using the locale aliasing engine. An empty string specifies the user’s default settings. If the modification of the locale fails, the exception Error is raised. If successful, the new locale setting is returned. If locale is omitted or None, the current setting for category is returned. setlocale() is not thread-safe on most systems. Applications typically start with a call of import locale
locale.setlocale(locale.LC_ALL, '')
This sets the locale for all categories to the user’s default setting (typically specified in the LANG environment variable). If the locale is not changed thereafter, using multithreading should not cause problems. | |
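Calling setlocale with no locale argument only queries the current setting. Since changing to '' depends on the host environment, a sketch that reads, attempts the change defensively, and restores:

```python
import locale

# Query the current LC_ALL setting without modifying it.
current = locale.setlocale(locale.LC_ALL)
print(isinstance(current, str))  # True

try:
    locale.setlocale(locale.LC_ALL, '')   # user's default settings
except locale.Error:
    pass  # host has no usable default locale configured
finally:
    locale.setlocale(locale.LC_ALL, current)  # restore the saved setting
```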
doc_3646 |
Bases: mpl_toolkits.mplot3d.art3d.Patch3D 3D PathPatch object. The following kwarg properties are supported
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha unknown
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float do_3d_projection(renderer=<deprecated parameter>)[source]
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, capstyle=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, color=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, fill=<UNSET>, gid=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
3d_properties unknown
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float
set_3d_properties(path, zs=0, zdir='z')[source]
Examples using mpl_toolkits.mplot3d.art3d.PathPatch3D
Draw flat objects in 3D plot | |
doc_3647 | Return True if both the real and imaginary parts of x are finite, and False otherwise. New in version 3.2. | |
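A quick illustration: the check fails as soon as either the real or imaginary part is infinite or NaN.

```python
import cmath

assert cmath.isfinite(complex(1.0, -2.0))            # both parts finite
assert not cmath.isfinite(complex(float("inf"), 0))  # infinite real part
assert not cmath.isfinite(complex(0, float("nan")))  # NaN imaginary part
```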
doc_3648 |
Remove a toolitem from the ToolContainer. This method must be implemented per backend. Called when ToolManager emits a tool_removed_event. Parameters
namestr
Name of the tool to remove. | |
doc_3649 | A dictionary of context data that will be added to the default context data passed to the template. | |
doc_3650 |
Compute an approximation of the bounding box obtained by applying transform_xy to the box delimited by (x1, y1, x2, y2). The intended use is to have (x1, y1, x2, y2) in axes coordinates, and have transform_xy be the transform from axes coordinates to data coordinates; this method then returns the range of data coordinates that span the actual axes. The computation is done by sampling nx * ny equispaced points in the (x1, y1, x2, y2) box and finding the resulting points with extremal coordinates; then adding some padding to take into account the finite sampling. As each sampling step covers a relative range of 1/nx or 1/ny, the padding is computed by expanding the span covered by the extremal coordinates by these fractions. | |
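The sampling scheme described above can be sketched independently with NumPy (approx_bbox is a hypothetical name, not the matplotlib method; transform_xy is any vectorized callable mapping coordinate arrays to data coordinates):

```python
import numpy as np

def approx_bbox(transform_xy, x1, y1, x2, y2, nx=20, ny=20):
    # Sample an nx-by-ny grid of equispaced points inside the box.
    xs, ys = np.meshgrid(np.linspace(x1, x2, nx), np.linspace(y1, y2, ny))
    tx, ty = transform_xy(xs.ravel(), ys.ravel())
    # Find the extremal transformed coordinates.
    xmin, xmax = tx.min(), tx.max()
    ymin, ymax = ty.min(), ty.max()
    # Pad by one sampling step's relative span (1/nx, 1/ny) to account
    # for the finite sampling resolution.
    dx = (xmax - xmin) / nx
    dy = (ymax - ymin) / ny
    return xmin - dx, ymin - dy, xmax + dx, ymax + dy

# Identity transform: the result is just the padded input box.
bbox = approx_bbox(lambda x, y: (x, y), 0.0, 0.0, 1.0, 1.0)
```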
doc_3651 | See Migration guide for more details. tf.compat.v1.raw_ops.Sum
tf.raw_ops.Sum(
input, axis, keep_dims=False, name=None
)
Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. The tensor to reduce.
axis A Tensor. Must be one of the following types: int32, int64. The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
keep_dims An optional bool. Defaults to False. If true, retain reduced dimensions with length 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
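The axis and keep_dims semantics can be illustrated with a NumPy analog (np.sum plays the role of the op here; this is not TensorFlow code):

```python
import numpy as np

x = np.arange(6, dtype=np.float32).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

s = np.sum(x, axis=1)                      # rank drops by one: shape (2,)
s_keep = np.sum(x, axis=1, keepdims=True)  # reduced dim kept: shape (2, 1)

assert s.shape == (2,) and s_keep.shape == (2, 1)
assert s.tolist() == [3.0, 12.0]
```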
doc_3652 | Rounding occurred though possibly no information was lost. Signaled whenever rounding discards digits; even if those digits are zero (such as rounding 5.00 to 5.0). If not trapped, returns the result unchanged. This signal is used to detect loss of significant digits. | |
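A short stdlib illustration of when the Rounded flag is raised, including the lossless case mentioned above:

```python
from decimal import Context, Decimal, Rounded

ctx = Context(prec=2)
# Rounding 5.00 to 5.0 discards only a zero digit: no information is
# lost, but Rounded is still signaled.
ctx.plus(Decimal("5.00"))
assert ctx.flags[Rounded]

ctx2 = Context(prec=3)
ctx2.divide(Decimal("2.5"), Decimal("3"))  # 0.833..., digits discarded
assert ctx2.flags[Rounded]
```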
doc_3653 | See Migration guide for more details. tf.compat.v1.data.experimental.TFRecordWriter
tf.data.experimental.TFRecordWriter(
filename, compression_type=None
)
The elements of the dataset must be scalar strings. To serialize dataset elements as strings, you can use the tf.io.serialize_tensor function. dataset = tf.data.Dataset.range(3)
dataset = dataset.map(tf.io.serialize_tensor)
writer = tf.data.experimental.TFRecordWriter("/path/to/file.tfrecord")
writer.write(dataset)
To read back the elements, use TFRecordDataset. dataset = tf.data.TFRecordDataset("/path/to/file.tfrecord")
dataset = dataset.map(lambda x: tf.io.parse_tensor(x, tf.int64))
To shard a dataset across multiple TFRecord files: dataset = ... # dataset to be written
def reduce_func(key, dataset):
filename = tf.strings.join([PATH_PREFIX, tf.strings.as_string(key)])
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(dataset.map(lambda _, x: x))
return tf.data.Dataset.from_tensors(filename)
dataset = dataset.enumerate()
dataset = dataset.apply(tf.data.experimental.group_by_window(
lambda i, _: i % NUM_SHARDS, reduce_func, tf.int64.max
))
Args
filename a string path indicating where to write the TFRecord data.
compression_type (Optional.) a string indicating what type of compression to use when writing the file. See tf.io.TFRecordCompressionType for what types of compression are available. Defaults to None. Methods write View source
write(
dataset
)
Writes a dataset to a TFRecord file. An operation that writes the content of the specified dataset to the file specified in the constructor. If the file exists, it will be overwritten.
Args
dataset a tf.data.Dataset whose elements are to be written to a file
Returns In graph mode, this returns an operation which when executed performs the write. In eager mode, the write is performed by the method itself and there is no return value.
Raises TypeError: if dataset is not a tf.data.Dataset. TypeError: if the elements produced by the dataset are not scalar strings. | |
doc_3654 |
Whether or not the index values only consist of dates. | |
doc_3655 |
An AdaBoost classifier. An AdaBoost [1] classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases. This class implements the algorithm known as AdaBoost-SAMME [2]. Read more in the User Guide. New in version 0.14. Parameters
base_estimatorobject, default=None
The base estimator from which the boosted ensemble is built. Support for sample weighting is required, as well as proper classes_ and n_classes_ attributes. If None, then the base estimator is DecisionTreeClassifier initialized with max_depth=1.
n_estimatorsint, default=50
The maximum number of estimators at which boosting is terminated. In case of perfect fit, the learning procedure is stopped early.
learning_ratefloat, default=1.
Learning rate shrinks the contribution of each classifier by learning_rate. There is a trade-off between learning_rate and n_estimators.
algorithm{‘SAMME’, ‘SAMME.R’}, default=’SAMME.R’
If ‘SAMME.R’ then use the SAMME.R real boosting algorithm. base_estimator must support calculation of class probabilities. If ‘SAMME’ then use the SAMME discrete boosting algorithm. The SAMME.R algorithm typically converges faster than SAMME, achieving a lower test error with fewer boosting iterations.
random_stateint, RandomState instance or None, default=None
Controls the random seed given at each base_estimator at each boosting iteration. Thus, it is only used when base_estimator exposes a random_state. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
base_estimator_estimator
The base estimator from which the ensemble is grown.
estimators_list of classifiers
The collection of fitted sub-estimators.
classes_ndarray of shape (n_classes,)
The classes labels.
n_classes_int
The number of classes.
estimator_weights_ndarray of floats
Weights for each estimator in the boosted ensemble.
estimator_errors_ndarray of floats
Classification error for each estimator in the boosted ensemble.
feature_importances_ndarray of shape (n_features,)
The impurity-based feature importances. See also
AdaBoostRegressor
An AdaBoost regressor that begins by fitting a regressor on the original dataset and then fits additional copies of the regressor on the same dataset but where the weights of instances are adjusted according to the error of the current prediction.
GradientBoostingClassifier
GB builds an additive model in a forward stage-wise fashion. Regression trees are fit on the negative gradient of the binomial or multinomial deviance loss function. Binary classification is a special case where only a single regression tree is induced.
sklearn.tree.DecisionTreeClassifier
A non-parametric supervised learning method used for classification. Creates a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. References
1
Y. Freund, R. Schapire, “A Decision-Theoretic Generalization of on-Line Learning and an Application to Boosting”, 1995.
2
Zhu, H. Zou, S. Rosset, T. Hastie, “Multi-class AdaBoost”, 2009. Examples >>> from sklearn.ensemble import AdaBoostClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, n_features=4,
... n_informative=2, n_redundant=0,
... random_state=0, shuffle=False)
>>> clf = AdaBoostClassifier(n_estimators=100, random_state=0)
>>> clf.fit(X, y)
AdaBoostClassifier(n_estimators=100, random_state=0)
>>> clf.predict([[0, 0, 0, 0]])
array([1])
>>> clf.score(X, y)
0.983...
Methods
decision_function(X) Compute the decision function of X.
fit(X, y[, sample_weight]) Build a boosted classifier from the training set (X, y).
get_params([deep]) Get parameters for this estimator.
predict(X) Predict classes for X.
predict_log_proba(X) Predict class log-probabilities for X.
predict_proba(X) Predict class probabilities for X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
staged_decision_function(X) Compute decision function of X for each boosting iteration.
staged_predict(X) Return staged predictions for X.
staged_predict_proba(X) Predict class probabilities for X.
staged_score(X, y[, sample_weight]) Return staged scores for X, y.
decision_function(X) [source]
Compute the decision function of X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. Returns
scorendarray of shape (n_samples, k)
The decision function of the input samples. The order of outputs is the same as that of the classes_ attribute. Binary classification is a special case with k == 1; otherwise k == n_classes. For binary classification, values closer to -1 or 1 mean more like the first or second class in classes_, respectively.
property feature_importances_
The impurity-based feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance. Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See sklearn.inspection.permutation_importance as an alternative. Returns
feature_importances_ndarray of shape (n_features,)
The feature importances.
fit(X, y, sample_weight=None) [source]
Build a boosted classifier from the training set (X, y). Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR.
yarray-like of shape (n_samples,)
The target values (class labels).
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, the sample weights are initialized to 1 / n_samples. Returns
selfobject
Fitted estimator.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict classes for X. The predicted class of an input sample is computed as the weighted mean prediction of the classifiers in the ensemble. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. Returns
yndarray of shape (n_samples,)
The predicted classes.
predict_log_proba(X) [source]
Predict class log-probabilities for X. The predicted class log-probabilities of an input sample is computed as the weighted mean predicted class log-probabilities of the classifiers in the ensemble. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. Returns
pndarray of shape (n_samples, n_classes)
The class probabilities of the input samples. The order of outputs is the same as that of the classes_ attribute.
predict_proba(X) [source]
Predict class probabilities for X. The predicted class probabilities of an input sample is computed as the weighted mean predicted class probabilities of the classifiers in the ensemble. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. Returns
pndarray of shape (n_samples, n_classes)
The class probabilities of the input samples. The order of outputs is the same as that of the classes_ attribute.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
staged_decision_function(X) [source]
Compute decision function of X for each boosting iteration. This method allows monitoring (i.e. determine error on testing set) after each boosting iteration. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. Yields
scoregenerator of ndarray of shape (n_samples, k)
The decision function of the input samples. The order of outputs is the same as that of the classes_ attribute. Binary classification is a special case with k == 1; otherwise k == n_classes. For binary classification, values closer to -1 or 1 mean more like the first or second class in classes_, respectively.
staged_predict(X) [source]
Return staged predictions for X. The predicted class of an input sample is computed as the weighted mean prediction of the classifiers in the ensemble. This generator method yields the ensemble prediction after each iteration of boosting and therefore allows monitoring, such as to determine the prediction on a test set after each boost. Parameters
Xarray-like of shape (n_samples, n_features)
The input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. Yields
ygenerator of ndarray of shape (n_samples,)
The predicted classes.
staged_predict_proba(X) [source]
Predict class probabilities for X. The predicted class probabilities of an input sample is computed as the weighted mean predicted class probabilities of the classifiers in the ensemble. This generator method yields the ensemble predicted class probabilities after each iteration of boosting and therefore allows monitoring, such as to determine the predicted class probabilities on a test set after each boost. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. Yields
pgenerator of ndarray of shape (n_samples,)
The class probabilities of the input samples. The order of outputs is the same as that of the classes_ attribute.
staged_score(X, y, sample_weight=None) [source]
Return staged scores for X, y. This generator method yields the ensemble score after each iteration of boosting and therefore allows monitoring, such as to determine the score on a test set after each boost. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR.
yarray-like of shape (n_samples,)
Labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Yields
zfloat | |
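The staged_* methods above exist for monitoring the ensemble iteration by iteration; a brief sketch, assuming scikit-learn is installed (the dataset here is synthetic and purely illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = AdaBoostClassifier(n_estimators=10, random_state=0).fit(X, y)

# staged_score yields one training-set accuracy per boosting iteration
# (fewer if the fit became perfect early and boosting stopped).
scores = list(clf.staged_score(X, y))
assert 1 <= len(scores) <= 10
assert all(0.0 <= s <= 1.0 for s in scores)
```

In practice one would pass a held-out test set to staged_score to pick the iteration count with the best generalization.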
doc_3656 |
Construct a GraphModule. Parameters
root (Union[torch.nn.Module, Dict[str, Any]) – root can either be an nn.Module instance or a Dict mapping strings to any attribute type. In the case that root is a Module, any references to Module-based objects (via qualified name) in the Graph’s Nodes’ target field will be copied over from the respective place within root’s Module hierarchy into the GraphModule’s module hierarchy. In the case that root is a dict, the qualified name found in a Node’s target will be looked up directly in the dict’s keys. The object mapped to by the Dict will be copied over into the appropriate place within the GraphModule’s module hierarchy.
graph (Graph) – graph contains the nodes this GraphModule should use for code generation
name (str) – name denotes the name of this GraphModule for debugging purposes. If it’s unset, all error messages will report as originating from GraphModule. It may be helpful to set this to root’s original name or a name that makes sense within the context of your transform. | |
doc_3657 | Files which are in both a and b, but could not be compared. | |
doc_3658 | An optional method which, when called, should invalidate any internal cache used by the finder. Used by importlib.invalidate_caches() when invalidating the caches of all finders on sys.meta_path. Changed in version 3.4: Returns None when called instead of NotImplemented. | |
doc_3659 | Remove the specified section from the configuration. If the section in fact existed, return True. Otherwise return False. | |
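A minimal stdlib example of the two return cases:

```python
import configparser

cp = configparser.ConfigParser()
cp.read_string("[server]\nport = 8080\n")

assert cp.remove_section("server") is True   # section existed: removed
assert cp.remove_section("server") is False  # already gone
assert not cp.has_section("server")
```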
doc_3660 |
Update the location of children if necessary and draw them to the given renderer. | |
doc_3661 | See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedConv2DWithBias
tf.raw_ops.QuantizedConv2DWithBias(
input, filter, bias, min_input, max_input, min_filter, max_filter, strides,
padding, out_type=tf.dtypes.qint32, dilations=[1, 1, 1, 1], padding_list=[],
name=None
)
Args
input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16.
filter A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16.
bias A Tensor of type float32.
min_input A Tensor of type float32.
max_input A Tensor of type float32.
min_filter A Tensor of type float32.
max_filter A Tensor of type float32.
strides A list of ints.
padding A string from: "SAME", "VALID".
out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.qint32.
dilations An optional list of ints. Defaults to [1, 1, 1, 1].
padding_list An optional list of ints. Defaults to [].
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor of type out_type.
min_output A Tensor of type float32.
max_output A Tensor of type float32. | |
doc_3662 | class sklearn.linear_model.MultiTaskLassoCV(*, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=None, random_state=None, selection='cyclic') [source]
Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer. See glossary entry for cross-validation estimator. The optimization objective for MultiTaskLasso is: (1 / (2 * n_samples)) * ||Y - XW||^Fro_2 + alpha * ||W||_21
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norm of each row. Read more in the User Guide. New in version 0.15. Parameters
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasarray-like, default=None
List of alphas where to compute the models. If not provided, set automatically.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
cvint, cross-validation generator or iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation; int, to specify the number of folds; a CV splitter; an iterable yielding (train, test) splits as arrays of indices. For int/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
verbosebool or int, default=False
Amount of verbosity.
n_jobsint, default=None
Number of CPUs to use during the cross validation. Note that this is used only if multiple values for l1_ratio are given. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes
intercept_ndarray of shape (n_tasks,)
Independent term in decision function.
coef_ndarray of shape (n_tasks, n_features)
Parameter vector (W in the cost function formula). Note that coef_ stores the transpose of W, W.T.
alpha_float
The amount of penalization chosen by cross validation.
mse_path_ndarray of shape (n_alphas, n_folds)
Mean square error for the test set on each fold, varying alpha.
alphas_ndarray of shape (n_alphas,)
The grid of alphas used for fitting.
n_iter_int
Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha.
dual_gap_float
The dual gap at the end of the optimization for the optimal alpha. See also
MultiTaskElasticNet
ElasticNetCV
MultiTaskElasticNetCV
Notes The algorithm used to fit the model is coordinate descent. To avoid unnecessary memory duplication the X and y arguments of the fit method should be directly passed as Fortran-contiguous numpy arrays. Examples >>> from sklearn.linear_model import MultiTaskLassoCV
>>> from sklearn.datasets import make_regression
>>> from sklearn.metrics import r2_score
>>> X, y = make_regression(n_targets=2, noise=4, random_state=0)
>>> reg = MultiTaskLassoCV(cv=5, random_state=0).fit(X, y)
>>> r2_score(y, reg.predict(X))
0.9994...
>>> reg.alpha_
0.5713...
>>> reg.predict(X[:1,])
array([[153.7971..., 94.9015...]])
Methods
fit(X, y) Fit linear model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute Lasso path with coordinate descent
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit linear model with coordinate descent. Fit is on grid of alphas and best alpha estimated by cross-validation. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
static path(*args, **kwargs) [source]
Compute Lasso path with coordinate descent The Lasso optimization function varies for mono and multi-outputs. For mono-output tasks it is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norm of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3
n_alphasint, default=100
Number of alphas along the regularization path
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
**paramskwargs
keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See also
lars_path
Lasso
LassoLars
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Note that in certain cases the Lars solver may be significantly faster at implementing this functionality. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path. Examples Comparing lasso_path and lars_path with interpolation: >>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0. 0. 0.46874778]
[0.2159048 0.4425765 0.23689075]]
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
... coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0. 0. 0.46915237]
[0.2159048 0.4425765 0.23668876]]
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
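The nested `<component>__<parameter>` form described above can be sketched with a small scikit-learn pipeline (assuming scikit-learn is installed; the Pipeline, StandardScaler and Lasso components are purely illustrative):

```python
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("model", Lasso(alpha=1.0))])
# Nested parameters use the <component>__<parameter> form.
pipe.set_params(model__alpha=0.1)
assert pipe.get_params()["model__alpha"] == 0.1
```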
doc_3663 | This dictionary maps the HTTP 1.1 status codes to the W3C names. Example: http.client.responses[http.client.NOT_FOUND] is 'Not Found'. | |
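The lookup above can be exercised directly; the mapping accepts either the bare integer status code or the module-level constant:

```python
import http.client

# Integer code and module constant resolve to the same name.
assert http.client.responses[404] == "Not Found"
assert http.client.responses[http.client.NOT_FOUND] == "Not Found"
assert http.client.responses[200] == "OK"
```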
doc_3664 |
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array. | |
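A brief sketch of fit_transform on a concrete transformer (assuming scikit-learn is available; StandardScaler is just one example transformer — the method is equivalent to fit(X).transform(X) in one call):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0], [2.0], [4.0]])
# Fit the scaler to X and return the transformed version in one step.
X_new = StandardScaler().fit_transform(X)
assert abs(X_new.mean()) < 1e-12           # centered to zero mean
assert abs(X_new.std() - 1.0) < 1e-12      # scaled to unit variance
```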
doc_3665 |
Convert string s to vertices and codes using the provided ttf font. | |
doc_3666 |
Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters
levelfloat | |
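A minimal sketch of setting the zorder on an artist (assuming matplotlib is installed; the Agg backend is used here only so the example runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1], [0, 1])
line.set_zorder(5)  # this artist will be drawn above lower-zorder artists
assert line.get_zorder() == 5
plt.close(fig)
```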
doc_3667 |
An array class with possibly masked values. Masked values of True exclude the corresponding element from any computation. Construction: x = MaskedArray(data, mask=nomask, dtype=None, copy=False, subok=True,
ndmin=0, fill_value=None, keep_mask=True, hard_mask=None,
shrink=True, order=None)
Parameters
dataarray_like
Input data.
masksequence, optional
Mask. Must be convertible to an array of booleans with the same shape as data. True indicates a masked (i.e. invalid) entry.
dtypedtype, optional
Data type of the output. If dtype is None, the type of the data argument (data.dtype) is used. If dtype is not None and different from data.dtype, a copy is performed.
copybool, optional
Whether to copy the input data (True), or to use a reference instead. Default is False.
subokbool, optional
Whether to return a subclass of MaskedArray if possible (True) or a plain MaskedArray. Default is True.
ndminint, optional
Minimum number of dimensions. Default is 0.
fill_valuescalar, optional
Value used to fill in the masked values when necessary. If None, a default based on the data-type is used.
keep_maskbool, optional
Whether to combine mask with the mask of the input data, if any (True), or to use only mask for the output (False). Default is True.
hard_maskbool, optional
Whether to use a hard mask or not. With a hard mask, masked values cannot be unmasked. Default is False.
shrinkbool, optional
Whether to force compression of an empty mask. Default is True.
order{‘C’, ‘F’, ‘A’}, optional
Specify the order of the array. If order is ‘C’, then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). If order is ‘A’ (default), then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous), unless a copy is required, in which case it will be C-contiguous. Examples The mask can be initialized with an array of boolean values with the same shape as data. >>> data = np.arange(6).reshape((2, 3))
>>> np.ma.MaskedArray(data, mask=[[False, True, False],
... [False, False, True]])
masked_array(
data=[[0, --, 2],
[3, 4, --]],
mask=[[False, True, False],
[False, False, True]],
fill_value=999999)
Alternatively, the mask can be initialized to a homogeneous boolean array with the same shape as data by passing in a scalar boolean value: >>> np.ma.MaskedArray(data, mask=False)
masked_array(
data=[[0, 1, 2],
[3, 4, 5]],
mask=[[False, False, False],
[False, False, False]],
fill_value=999999)
>>> np.ma.MaskedArray(data, mask=True)
masked_array(
data=[[--, --, --],
[--, --, --]],
mask=[[ True, True, True],
[ True, True, True]],
fill_value=999999,
dtype=int64)
Note The recommended practice for initializing mask with a scalar boolean value is to use True/False rather than np.True_/np.False_. The reason is nomask is represented internally as np.False_. >>> np.False_ is np.ma.nomask
True | |
doc_3668 | tf.experimental.numpy.ones_like(
a, dtype=None
)
Unsupported arguments: order, subok, shape. See the NumPy documentation for numpy.ones_like. | |
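The tf.experimental.numpy function mirrors NumPy semantics, so a plain-NumPy sketch illustrates the behavior: shape and dtype are taken from the input unless dtype is given.

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
ones = np.ones_like(a)
# Same shape and dtype as the input, filled with ones.
assert ones.shape == a.shape and ones.dtype == a.dtype
assert np.array_equal(ones, [[1, 1], [1, 1]])
# An explicit dtype overrides the input's dtype.
assert np.ones_like(a, dtype=float).dtype == np.float64
```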
doc_3669 |
Return x: no detrending. Parameters
xany object
An object containing the data
axisint
This parameter is ignored. It is included for compatibility with detrend_mean. See also detrend_mean
Another detrend algorithm. detrend_linear
Another detrend algorithm. detrend
A wrapper around all the detrend algorithms. | |
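A minimal stand-in (not the matplotlib.mlab implementation) makes the contract concrete: the input is returned unchanged and the axis argument is ignored.

```python
import numpy as np

# Illustrative stand-in for the no-op detrend described above.
def detrend_none(x, axis=None):
    return x  # no detrending; axis is ignored

x = np.arange(5.0)
assert detrend_none(x, axis=0) is x  # the very same object comes back
```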
doc_3670 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
bbox_to_anchor unknown
child unknown
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
height float
in_layout bool
label object
offset (float, float) or callable
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
width float
zorder float | |
doc_3671 |
Return the Transform instance used by this artist offset. | |
doc_3672 | See Migration guide for more details. tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular
tf.linalg.LinearOperatorBlockLowerTriangular(
operators, is_non_singular=None, is_self_adjoint=None,
is_positive_definite=None, is_square=None,
name='LinearOperatorBlockLowerTriangular'
)
This operator is initialized with a nested list of linear operators, which are combined into a new LinearOperator whose underlying matrix representation is square and has each operator on or below the main diagonal, and zeros elsewhere. Each element of the outer list is a list of LinearOperators corresponding to a row-partition of the blockwise structure. The number of LinearOperators in row-partition i must be equal to i. For example, a blockwise 3 x 3 LinearOperatorBlockLowerTriangular is initialized with the list [[op_00], [op_10, op_11], [op_20, op_21, op_22]], where the op_ij, i < 3, j <= i, are LinearOperator instances. The LinearOperatorBlockLowerTriangular behaves as the following blockwise matrix, where 0 represents appropriately-sized [batch] matrices of zeros: [[op_00, 0, 0],
[op_10, op_11, 0],
[op_20, op_21, op_22]]
Each op_jj on the diagonal is required to represent a square matrix, and hence will have shape batch_shape_j + [M_j, M_j]. LinearOperators in row j of the blockwise structure must have range_dimension equal to that of op_jj, and LinearOperators in column j must have domain_dimension equal to that of op_jj. If each op_jj on the diagonal has shape batch_shape_j + [M_j, M_j], then the combined operator has shape broadcast_batch_shape + [sum M_j, sum M_j], where broadcast_batch_shape is the mutual broadcast of batch_shape_j, j = 0, 1, ..., J, assuming the intermediate batch shapes broadcast. Even if the combined shape is well defined, the combined operator's methods may fail due to lack of broadcasting ability in the defining operators' methods. For example, to create a 4 x 4 linear operator combined of three 2 x 2 operators: >>> operator_0 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
>>> operator_1 = tf.linalg.LinearOperatorFullMatrix([[1., 0.], [0., 1.]])
>>> operator_2 = tf.linalg.LinearOperatorLowerTriangular([[5., 6.], [7., 8]])
>>> operator = LinearOperatorBlockLowerTriangular(
... [[operator_0], [operator_1, operator_2]])
operator.to_dense()
<tf.Tensor: shape=(4, 4), dtype=float32, numpy=
array([[1., 2., 0., 0.],
[3., 4., 0., 0.],
[1., 0., 5., 0.],
[0., 1., 7., 8.]], dtype=float32)>
operator.shape
TensorShape([4, 4])
operator.log_abs_determinant()
<tf.Tensor: shape=(), dtype=float32, numpy=4.3820267>
x0 = [[1., 6.], [-3., 4.]]
x1 = [[0., 2.], [4., 0.]]
x = tf.concat([x0, x1], 0) # Shape [2, 4] Tensor
operator.matmul(x)
<tf.Tensor: shape=(4, 2), dtype=float32, numpy=
array([[-5., 14.],
[-9., 34.],
[ 1., 16.],
[29., 18.]], dtype=float32)>
The above matmul is equivalent to: >>> tf.concat([operator_0.matmul(x0),
... operator_1.matmul(x0) + operator_2.matmul(x1)], axis=0)
<tf.Tensor: shape=(4, 2), dtype=float32, numpy=
array([[-5., 14.],
[-9., 34.],
[ 1., 16.],
[29., 18.]], dtype=float32)>
Shape compatibility This operator acts on [batch] matrix with compatible shape. x is a batch matrix with compatible shape for matmul and solve if operator.shape = [B1,...,Bb] + [M, N], with b >= 0
x.shape = [B1,...,Bb] + [N, R], with R >= 0.
For example: Create a [2, 3] batch of 4 x 4 linear operators: >>> matrix_44 = tf.random.normal(shape=[2, 3, 4, 4])
>>> operator_44 = tf.linalg.LinearOperatorFullMatrix(matrix_44)
Create a [1, 3] batch of 5 x 4 linear operators: >>> matrix_54 = tf.random.normal(shape=[1, 3, 5, 4])
>>> operator_54 = tf.linalg.LinearOperatorFullMatrix(matrix_54)
Create a [1, 3] batch of 5 x 5 linear operators: >>> matrix_55 = tf.random.normal(shape=[1, 3, 5, 5])
>>> operator_55 = tf.linalg.LinearOperatorFullMatrix(matrix_55)
Combine to create a [2, 3] batch of 9 x 9 operators: >>> operator_99 = LinearOperatorBlockLowerTriangular(
... [[operator_44], [operator_54, operator_55]])
>>> operator_99.shape
TensorShape([2, 3, 9, 9])
Create a shape [2, 1, 9] batch of vectors and apply the operator to it. >>> x = tf.random.normal(shape=[2, 1, 9])
>>> y = operator_99.matvec(x)
>>> y.shape
TensorShape([2, 3, 9])
Create a blockwise list of vectors and apply the operator to it. A blockwise list is returned. >>> x4 = tf.random.normal(shape=[2, 1, 4])
>>> x5 = tf.random.normal(shape=[2, 3, 5])
>>> y_blockwise = operator_99.matvec([x4, x5])
>>> y_blockwise[0].shape
TensorShape([2, 3, 4])
>>> y_blockwise[1].shape
TensorShape([2, 3, 5])
Performance Suppose operator is a LinearOperatorBlockLowerTriangular consisting of D row-partitions and D column-partitions, such that the total number of operators is N = D * (D + 1) // 2.
operator.matmul has complexity equal to the sum of the matmul complexities of the individual operators.
operator.solve has complexity equal to the sum of the solve complexities of the operators on the diagonal and the matmul complexities of the operators off the diagonal.
operator.determinant has complexity equal to the sum of the determinant complexities of the operators on the diagonal. Matrix property hints This LinearOperator is initialized with boolean flags of the form is_X, for X = non_singular, self_adjoint, positive_definite, square. These have the following meaning: If is_X == True, callers should expect the operator to have the property X. This is a promise that should be fulfilled, but is not a runtime assert. For example, finite floating point precision may result in these promises being violated. If is_X == False, callers should expect the operator to not have X. If is_X == None (the default), callers should have no expectation either way.
Args
operators Iterable of iterables of LinearOperator objects, each with the same dtype. Each element of operators corresponds to a row-partition, in top-to-bottom order. The operators in each row-partition are filled in left-to-right. For example, operators = [[op_0], [op_1, op_2], [op_3, op_4, op_5]] creates a LinearOperatorBlockLowerTriangular with full block structure [[op_0, 0, 0], [op_1, op_2, 0], [op_3, op_4, op_5]]. The number of operators in the ith row must be equal to i, such that each operator falls on or below the diagonal of the blockwise structure. LinearOperators that fall on the diagonal (the last elements of each row) must be square. The other LinearOperators must have domain dimension equal to the domain dimension of the LinearOperators in the same column-partition, and range dimension equal to the range dimension of the LinearOperators in the same row-partition.
is_non_singular Expect that this operator is non-singular.
is_self_adjoint Expect that this operator is equal to its hermitian transpose.
is_positive_definite Expect that this operator is positive definite, meaning the quadratic form x^H A x has positive real part for all nonzero x. Note that we do not require the operator to be self-adjoint to be positive-definite. See: https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices
is_square Expect that this operator acts like square [batch] matrices. This will raise a ValueError if set to False.
name A name for this LinearOperator.
Raises
TypeError If all operators do not have the same dtype.
ValueError If operators is empty, contains an erroneous number of elements, or contains operators with incompatible shapes.
Attributes
H Returns the adjoint of the current LinearOperator. Given A representing this LinearOperator, return A*. Note that calling self.adjoint() and self.H are equivalent.
batch_shape TensorShape of batch dimensions of this LinearOperator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns TensorShape([B1,...,Bb]), equivalent to A.shape[:-2]
domain_dimension Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns N.
dtype The DType of Tensors handled by this LinearOperator.
graph_parents List of graph dependencies of this LinearOperator. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Do not call graph_parents.
is_non_singular
is_positive_definite
is_self_adjoint
is_square Return True/False depending on if this operator is square.
operators
parameters Dictionary of parameters used to instantiate this LinearOperator.
range_dimension Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns M.
shape TensorShape of this LinearOperator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns TensorShape([B1,...,Bb, M, N]), equivalent to A.shape.
tensor_rank Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns b + 2.
Methods add_to_tensor View source
add_to_tensor(
x, name='add_to_tensor'
)
Add matrix represented by this operator to x. Equivalent to A + x.
Args
x Tensor with same dtype and shape broadcastable to self.shape.
name A name to give this Op.
Returns A Tensor with broadcast shape and same dtype as self.
adjoint View source
adjoint(
name='adjoint'
)
Returns the adjoint of the current LinearOperator. Given A representing this LinearOperator, return A*. Note that calling self.adjoint() and self.H are equivalent.
Args
name A name for this Op.
Returns LinearOperator which represents the adjoint of this LinearOperator.
assert_non_singular View source
assert_non_singular(
name='assert_non_singular'
)
Returns an Op that asserts this operator is non singular. This operator is considered non-singular if ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
Args
name A string name to prepend to created ops.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is singular.
assert_positive_definite View source
assert_positive_definite(
name='assert_positive_definite'
)
Returns an Op that asserts this operator is positive definite. Here, positive definite means that the quadratic form x^H A x has positive real part for all nonzero x. Note that we do not require the operator to be self-adjoint to be positive definite.
Args
name A name to give this Op.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is not positive definite.
assert_self_adjoint View source
assert_self_adjoint(
name='assert_self_adjoint'
)
Returns an Op that asserts this operator is self-adjoint. Here we check that this operator is exactly equal to its hermitian transpose.
Args
name A string name to prepend to created ops.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is not self-adjoint.
batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of batch dimensions of this operator, determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns a Tensor holding [B1,...,Bb].
Args
name A name for this Op.
Returns int32 Tensor
cholesky View source
cholesky(
name='cholesky'
)
Returns a Cholesky factor as a LinearOperator. Given A representing this LinearOperator, if A is positive definite self-adjoint, return L, where A = L L^T, i.e. the cholesky decomposition.
Args
name A name for this Op.
Returns LinearOperator which represents the lower triangular matrix in the Cholesky decomposition.
Raises
ValueError When the LinearOperator is not hinted to be positive definite and self adjoint. cond View source
cond(
name='cond'
)
Returns the condition number of this linear operator.
Args
name A name for this Op.
Returns Shape [B1,...,Bb] Tensor of same dtype as self.
determinant View source
determinant(
name='det'
)
Determinant for every batch member.
Args
name A name for this Op.
Returns Tensor with shape self.batch_shape and same dtype as self.
Raises
NotImplementedError If self.is_square is False. diag_part View source
diag_part(
name='diag_part'
)
Efficiently get the [batch] diagonal part of this operator. If this operator has shape [B1,...,Bb, M, N], this returns a Tensor diagonal, of shape [B1,...,Bb, min(M, N)], where diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]. my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
Args
name A name for this Op.
Returns
diag_part A Tensor of same dtype as self. domain_dimension_tensor View source
domain_dimension_tensor(
name='domain_dimension_tensor'
)
Dimension (in the sense of vector spaces) of the domain of this operator. Determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns N.
Args
name A name for this Op.
Returns int32 Tensor
eigvals View source
eigvals(
name='eigvals'
)
Returns the eigenvalues of this linear operator. If the operator is marked as self-adjoint (via is_self_adjoint) this computation can be more efficient.
Note: This currently only supports self-adjoint operators.
Args
name A name for this Op.
Returns Shape [B1,...,Bb, N] Tensor of same dtype as self.
inverse View source
inverse(
name='inverse'
)
Returns the Inverse of this LinearOperator. Given A representing this LinearOperator, return a LinearOperator representing A^-1.
Args
name A name scope to use for ops added by this method.
Returns LinearOperator representing inverse of this matrix.
Raises
ValueError When the LinearOperator is not hinted to be non_singular. log_abs_determinant View source
log_abs_determinant(
name='log_abs_det'
)
Log absolute value of determinant for every batch member.
Args
name A name for this Op.
Returns Tensor with shape self.batch_shape and same dtype as self.
Raises
NotImplementedError If self.is_square is False. matmul View source
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
Transform [batch] matrix x with left multiplication: x --> Ax. # Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[j, r]
Args
x LinearOperator, Tensor with compatible shape and same dtype as self, or a blockwise iterable of LinearOperators or Tensors. See class docstring for definition of shape compatibility.
adjoint Python bool. If True, left multiply by the adjoint: A^H x.
adjoint_arg Python bool. If True, compute A x^H where x^H is the hermitian transpose (transposition and complex conjugation).
name A name for this Op.
Returns A LinearOperator or Tensor with shape [..., M, R] and same dtype as self, or if x is blockwise, a list of Tensors with shapes that concatenate to [..., M, R].
matvec View source
matvec(
x, adjoint=False, name='matvec'
)
Transform [batch] vector x with left multiplication: x --> Ax. # Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
Args
x Tensor with compatible shape and same dtype as self, or an iterable of Tensors. Tensors are treated as [batch] vectors, meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility.
adjoint Python bool. If True, left multiply by the adjoint: A^H x.
name A name for this Op.
Returns A Tensor with shape [..., M] and same dtype as self.
range_dimension_tensor View source
range_dimension_tensor(
name='range_dimension_tensor'
)
Dimension (in the sense of vector spaces) of the range of this operator. Determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns M.
Args
name A name for this Op.
Returns int32 Tensor
shape_tensor View source
shape_tensor(
name='shape_tensor'
)
Shape of this LinearOperator, determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns a Tensor holding [B1,...,Bb, M, N], equivalent to tf.shape(A).
Args
name A name for this Op.
Returns int32 Tensor
solve View source
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
Solve (exact or approx) R (batch) systems of equations: A X = rhs. The returned Tensor will be close to an exact solution if A is well conditioned. Otherwise closeness will vary. See class docstring for details. Given the blockwise n + 1-by-n + 1 linear operator: op = [[A_00 0 ... 0 ... 0], [A_10 A_11 ... 0 ... 0], ... [A_k0 A_k1 ... A_kk ... 0], ... [A_n0 A_n1 ... A_nk ... A_nn]] we find x = op.solve(y) by observing that y_k = A_k0.matmul(x_0) + A_k1.matmul(x_1) + ... + A_kk.matmul(x_k) and therefore x_k = A_kk.solve(y_k - A_k0.matmul(x_0) - ... - A_k(k-1).matmul(x_(k-1))) where x_k and y_k are the kth blocks obtained by decomposing x and y along their appropriate axes. We first solve x_0 = A_00.solve(y_0). Proceeding inductively, we solve for x_k, k = 1..n, given x_0..x_(k-1). The adjoint case is solved similarly, beginning with x_n = A_nn.solve(y_n, adjoint=True) and proceeding backwards. Examples: # Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
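The forward-substitution scheme described above can be sketched in plain NumPy for a 2 x 2-blockwise operator (illustrative matrices; not the TensorFlow implementation):

```python
import numpy as np

# Blocks of a lower-triangular blockwise matrix [[A00, 0], [A10, A11]].
A00 = np.array([[2.0, 0.0], [0.0, 2.0]])
A10 = np.array([[1.0, 0.0], [0.0, 1.0]])
A11 = np.array([[4.0, 0.0], [0.0, 4.0]])

y = np.array([2.0, 4.0, 5.0, 10.0])
y0, y1 = y[:2], y[2:]

# x_0 = A00^{-1} y_0, then x_1 = A11^{-1} (y_1 - A10 x_0).
x0 = np.linalg.solve(A00, y0)
x1 = np.linalg.solve(A11, y1 - A10 @ x0)
x = np.concatenate([x0, x1])

# Verify against the dense blockwise matrix.
A = np.block([[A00, np.zeros((2, 2))], [A10, A11]])
assert np.allclose(A @ x, y)
```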
Args
rhs Tensor with same dtype as this operator and compatible shape, or a list of Tensors. Tensors are treated like [batch] matrices, meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility.
adjoint Python bool. If True, solve the system involving the adjoint of this LinearOperator: A^H X = rhs.
adjoint_arg Python bool. If True, solve A X = rhs^H where rhs^H is the hermitian transpose (transposition and complex conjugation).
name A name scope to use for ops added by this method.
Returns Tensor with shape [...,N, R] and same dtype as rhs.
Raises
NotImplementedError If self.is_non_singular or is_square is False. solvevec View source
solvevec(
rhs, adjoint=False, name='solve'
)
Solve single equation with best effort: A X = rhs. The returned Tensor will be close to an exact solution if A is well conditioned. Otherwise closeness will vary. See class docstring for details. Examples: # Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
Args
rhs Tensor with same dtype as this operator, or list of Tensors (for blockwise operators). Tensors are treated as [batch] vectors, meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions.
adjoint Python bool. If True, solve the system involving the adjoint of this LinearOperator: A^H X = rhs.
name A name scope to use for ops added by this method.
Returns Tensor with shape [...,N] and same dtype as rhs.
Raises
NotImplementedError If self.is_non_singular or is_square is False. tensor_rank_tensor View source
tensor_rank_tensor(
name='tensor_rank_tensor'
)
Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns b + 2.
Args
name A name for this Op.
Returns int32 Tensor, determined at runtime.
to_dense View source
to_dense(
name='to_dense'
)
Return a dense (batch) matrix representing this operator. trace View source
trace(
name='trace'
)
Trace of the linear operator, equal to sum of self.diag_part(). If the operator is square, this is also the sum of the eigenvalues.
Args
name A name for this Op.
Returns Shape [B1,...,Bb] Tensor of same dtype as self.
__matmul__ View source
__matmul__(
other
) | |
doc_3673 |
Get the transformation used for drawing x-axis labels, ticks and gridlines. The x-direction is in data coordinates and the y-direction is in axis coordinates. Note This transformation is primarily used by the Axis class, and is meant to be overridden by new kinds of projections that may need to place axis elements in different locations. | |
doc_3674 | Returns a string identifying the Python implementation SCM branch. | |
doc_3675 |
Returns
transformTransform
The transform used for drawing x-axis labels, which will add pad_points of padding (in points) between the axis and the label. The x-direction is in data coordinates and the y-direction is in axis coordinates.
valign{'center', 'top', 'bottom', 'baseline', 'center_baseline'}
The text vertical alignment.
halign{'center', 'left', 'right'}
The text horizontal alignment. Notes This transformation is primarily used by the Axis class, and is meant to be overridden by new kinds of projections that may need to place axis elements in different locations. | |
doc_3676 | Returns True if x is a quiet NaN; otherwise returns False. | |
doc_3677 |
Set a label that will be displayed in the legend. Parameters
sobject
s will be converted to a string by calling str. | |
doc_3678 | Return the inode number of the entry. The result is cached on the os.DirEntry object. Use os.stat(entry.path, follow_symlinks=False).st_ino to fetch up-to-date information. On the first, uncached call, a system call is required on Windows but not on Unix. | |
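The equivalence stated above can be checked with a throwaway temporary directory:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "f.txt"), "w").close()
    with os.scandir(d) as it:
        entry = next(it)
        # The cached inode matches a fresh lstat of the same path.
        assert entry.inode() == os.stat(entry.path, follow_symlinks=False).st_ino
```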
doc_3679 | Return the digest of the bytes passed to the update() method so far. This bytes object will be the same length as the digest_size of the digest given to the constructor. It may contain non-ASCII bytes, including NUL bytes. Warning When comparing the output of digest() to an externally-supplied digest during a verification routine, it is recommended to use the compare_digest() function instead of the == operator to reduce the vulnerability to timing attacks. | |
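A short sketch of the digest-size guarantee and the recommended constant-time comparison (using hmac.compare_digest, one of the stdlib homes of compare_digest):

```python
import hashlib
import hmac

h = hashlib.sha256(b"payload")
digest = h.digest()
# The digest length equals digest_size (32 bytes for SHA-256).
assert len(digest) == h.digest_size == 32

# Constant-time comparison for verification, as recommended above.
expected = hashlib.sha256(b"payload").digest()
assert hmac.compare_digest(digest, expected)
```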
doc_3680 | Returns context data for displaying the object. The base implementation of this method requires that the self.object attribute be set by the view (even if None). Be sure to do this if you are using this mixin without one of the built-in views that does so. It returns a dictionary with these contents:
object: The object that this view is displaying (self.object).
context_object_name: self.object will also be stored under the name returned by get_context_object_name(), which defaults to the lowercased version of the model name. Context variables override values from template context processors Any variables from get_context_data() take precedence over context variables from context processors. For example, if your view sets the model attribute to User, the default context object name of user would override the user variable from the django.contrib.auth.context_processors.auth() context processor. Use get_context_object_name() to avoid a clash. | |
doc_3681 | Return an iterator for the month month in the year year similar to itermonthdates(), but not restricted by the datetime.date range. Days returned will be tuples consisting of a year, a month, a day of the month, and a day of the week numbers. New in version 3.7. | |
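A quick illustration of the 4-tuples this iterator yields, including the padding days from adjacent months:

```python
import calendar

c = calendar.Calendar()  # weeks start on Monday by default
days = list(c.itermonthdays4(2021, 1))

# Full weeks are returned, padded with days from adjacent months.
assert len(days) % 7 == 0
assert days[0][3] == 0            # the first tuple falls on a Monday
assert (2021, 1, 1, 4) in days    # Jan 1 2021 was a Friday
```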
doc_3682 |
Calculate the absolute value element-wise. np.abs is a shorthand for this function. Parameters
xarray_like
Input array.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
absolutendarray
An ndarray containing the absolute value of each element in x. For complex input, a + ib, the absolute value is \(\sqrt{ a^2 + b^2 }\). This is a scalar if x is a scalar. Examples >>> x = np.array([-1.2, 1.2])
>>> np.absolute(x)
array([ 1.2, 1.2])
>>> np.absolute(1.2 + 1j)
1.5620499351813308
Plot the function over [-10, 10]: >>> import matplotlib.pyplot as plt
>>> x = np.linspace(start=-10, stop=10, num=101)
>>> plt.plot(x, np.absolute(x))
>>> plt.show()
Plot the function over the complex plane: >>> xx = x + 1j * x[:, np.newaxis]
>>> plt.imshow(np.abs(xx), extent=[-10, 10, -10, 10], cmap='gray')
>>> plt.show()
The abs function can be used as a shorthand for np.absolute on ndarrays. >>> x = np.array([-1.2, 1.2])
>>> abs(x)
array([1.2, 1.2]) | |
doc_3683 | Provides functionality to topologically sort a graph of hashable nodes. A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes before vertex v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this example, a topological ordering is just a valid sequence for the tasks. A complete topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph. If the optional graph argument is provided it must be a dictionary representing a directed acyclic graph where the keys are nodes and the values are iterables of all predecessors of that node in the graph (the nodes that have edges that point to the value in the key). Additional nodes can be added to the graph using the add() method. In the general case, the steps required to perform the sorting of a given graph are as follows: Create an instance of the TopologicalSorter with an optional initial graph. Add additional nodes to the graph. Call prepare() on the graph. While is_active() is True, iterate over the nodes returned by get_ready() and process them. Call done() on each node as it finishes processing. In case just an immediate sorting of the nodes in the graph is required and no parallelism is involved, the convenience method TopologicalSorter.static_order() can be used directly: >>> graph = {"D": {"B", "C"}, "C": {"A"}, "B": {"A"}}
>>> ts = TopologicalSorter(graph)
>>> tuple(ts.static_order())
('A', 'C', 'B', 'D')
The class is designed to easily support parallel processing of the nodes as they become ready. For instance: topological_sorter = TopologicalSorter()
# Add nodes to 'topological_sorter'...
topological_sorter.prepare()
while topological_sorter.is_active():
for node in topological_sorter.get_ready():
# Worker threads or processes take nodes to work on off the
# 'task_queue' queue.
task_queue.put(node)
# When the work for a node is done, workers put the node in
# 'finalized_tasks_queue' so we can get more nodes to work on.
# The definition of 'is_active()' guarantees that, at this point, at
# least one node has been placed on 'task_queue' that hasn't yet
# been passed to 'done()', so this blocking 'get()' must (eventually)
# succeed. After calling 'done()', we loop back to call 'get_ready()'
# again, so put newly freed nodes on 'task_queue' as soon as
# logically possible.
node = finalized_tasks_queue.get()
topological_sorter.done(node)
add(node, *predecessors)
Add a new node and its predecessors to the graph. Both the node and all elements in predecessors must be hashable. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. It is possible to add a node with no dependencies (predecessors is not provided) or to provide a dependency twice. If a node that has not been provided before is included among predecessors it will be automatically added to the graph with no predecessors of its own. Raises ValueError if called after prepare().
prepare()
Mark the graph as finished and check for cycles in the graph. If any cycle is detected, CycleError will be raised, but get_ready() can still be used to obtain as many nodes as possible until cycles block more progress. After a call to this function, the graph cannot be modified, and therefore no more nodes can be added using add().
is_active()
Returns True if more progress can be made and False otherwise. Progress can be made if cycles do not block the resolution and either there are still nodes ready that haven’t yet been returned by TopologicalSorter.get_ready() or the number of nodes marked TopologicalSorter.done() is less than the number that have been returned by TopologicalSorter.get_ready(). The __bool__() method of this class defers to this function, so instead of: if ts.is_active():
...
it is possible to simply do: if ts:
...
Raises ValueError if called without calling prepare() previously.
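The prepare()/is_active()/get_ready()/done() cycle can also be exercised without any parallelism; a single-threaded sketch using the graph from the earlier static_order() example:

```python
from graphlib import TopologicalSorter

ts = TopologicalSorter({"D": {"B", "C"}, "C": {"A"}, "B": {"A"}})
ts.prepare()
order = []
while ts.is_active():
    ready = ts.get_ready()   # nodes whose predecessors are all done
    order.extend(ready)      # "process" them (here: just record them)
    ts.done(*ready)          # unblock their successors
print(order)                 # 'A' comes first, 'D' last; 'B'/'C' order may vary
```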
done(*nodes)
Marks a set of nodes returned by TopologicalSorter.get_ready() as processed, unblocking any successor of each node in nodes for being returned in the future by a call to TopologicalSorter.get_ready(). Raises ValueError if any node in nodes has already been marked as processed by a previous call to this method or if a node was not added to the graph by using TopologicalSorter.add(), if called without calling prepare() or if node has not yet been returned by get_ready().
get_ready()
Returns a tuple with all the nodes that are ready. Initially it returns all nodes with no predecessors, and once those are marked as processed by calling TopologicalSorter.done(), further calls will return all new nodes that have all their predecessors already processed. Once no more progress can be made, empty tuples are returned. Raises ValueError if called without calling prepare() previously.
static_order()
Returns an iterable of nodes in a topological order. Using this method does not require to call TopologicalSorter.prepare() or TopologicalSorter.done(). This method is equivalent to: def static_order(self):
self.prepare()
while self.is_active():
node_group = self.get_ready()
yield from node_group
self.done(*node_group)
The particular order that is returned may depend on the specific order in which the items were inserted in the graph. For example: >>> ts = TopologicalSorter()
>>> ts.add(3, 2, 1)
>>> ts.add(1, 0)
>>> print([*ts.static_order()])
[2, 0, 1, 3]
>>> ts2 = TopologicalSorter()
>>> ts2.add(1, 0)
>>> ts2.add(3, 2, 1)
>>> print([*ts2.static_order()])
[0, 2, 1, 3]
This is due to the fact that “0” and “2” are in the same level in the graph (they would have been returned in the same call to get_ready()) and the order between them is determined by the order of insertion. If any cycle is detected, CycleError will be raised.
New in version 3.9. | |
doc_3684 |
Fit the gradient boosting model. Parameters
Xarray-like of shape (n_samples, n_features)
The input samples.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights of training data. New in version 0.23. Returns
selfobject | |
doc_3685 |
Move ticks and ticklabels (if present) to the right of the axes. | |
doc_3686 | class sklearn.gaussian_process.kernels.Hyperparameter(name, value_type, bounds, n_elements=1, fixed=None) [source]
A kernel hyperparameter’s specification in form of a namedtuple. New in version 0.18. Attributes
namestr
The name of the hyperparameter. Note that a kernel using a hyperparameter with name “x” must have the attributes self.x and self.x_bounds
value_typestr
The type of the hyperparameter. Currently, only “numeric” hyperparameters are supported.
boundspair of floats >= 0 or “fixed”
The lower and upper bound on the parameter. If n_elements>1, a pair of 1d arrays with n_elements each may be given alternatively. If the string “fixed” is passed as bounds, the hyperparameter’s value cannot be changed.
n_elementsint, default=1
The number of elements of the hyperparameter value. Defaults to 1, which corresponds to a scalar hyperparameter. n_elements > 1 corresponds to a hyperparameter which is vector-valued, such as, e.g., anisotropic length-scales.
fixedbool, default=None
Whether the value of this hyperparameter is fixed, i.e., cannot be changed during hyperparameter tuning. If None is passed, “fixed” is derived from the given bounds. Examples >>> from sklearn.gaussian_process.kernels import ConstantKernel
>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import Hyperparameter
>>> X, y = make_friedman2(n_samples=50, noise=0, random_state=0)
>>> kernel = ConstantKernel(constant_value=1.0,
... constant_value_bounds=(0.0, 10.0))
We can access each hyperparameter: >>> for hyperparameter in kernel.hyperparameters:
... print(hyperparameter)
Hyperparameter(name='constant_value', value_type='numeric',
bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
>>> params = kernel.get_params()
>>> for key in sorted(params): print(f"{key} : {params[key]}")
constant_value : 1.0
constant_value_bounds : (0.0, 10.0)
Methods
count(value, /) Return number of occurrences of value.
index(value[, start, stop]) Return first index of value.
__call__(*args, **kwargs)
Call self as a function.
bounds
Alias for field number 2
count(value, /)
Return number of occurrences of value.
fixed
Alias for field number 4
index(value, start=0, stop=sys.maxsize, /)
Return first index of value. Raises ValueError if the value is not present.
n_elements
Alias for field number 3
name
Alias for field number 0
value_type
Alias for field number 1
Examples using sklearn.gaussian_process.kernels.Hyperparameter
Gaussian processes on discrete data structures | |
doc_3687 |
Display a message on the toolbar or in the status bar. | |
doc_3688 | alias of torch.distributions.constraints._Cat | |
doc_3689 |
This is a sequential container which calls the Conv1d, BatchNorm1d, and ReLU modules. During quantization this will be replaced with the corresponding fused module. | |
doc_3690 | The exit status or error message that is passed to the constructor. (Defaults to None.) | |
doc_3691 |
Convert strings in value to floats using mapping information stored in the unit object. Parameters
valuestr or iterable
Value or list of values to be converted.
unitUnitData
An object mapping strings to integers.
axisAxis
The axis on which the converted value is plotted. Note axis is unused. Returns
float or ndarray[float] | |
doc_3692 |
Applies element-wise the function SoftSign(x) = x / (1 + |x|). See Softsign for more details. | |
doc_3693 |
Create a new backend-specific subclass of Timer. This is useful for getting periodic events through the backend's native event loop. Implemented only for backends with GUIs. Parameters
intervalint
Timer interval in milliseconds.
callbackslist[tuple[callable, tuple, dict]]
Sequence of (func, args, kwargs) where func(*args, **kwargs) will be executed by the timer every interval. Callbacks which return False or 0 will be removed from the timer. Examples >>> timer = fig.canvas.new_timer(callbacks=[(f1, (1,), {'a': 3})]) | |
doc_3694 |
Like decorator_from_middleware, but returns a function that accepts the arguments to be passed to the middleware_class. For example, the cache_page() decorator is created from the CacheMiddleware like this: cache_page = decorator_from_middleware_with_args(CacheMiddleware)
@cache_page(3600)
def my_view(request):
pass | |
doc_3695 | See Migration guide for more details. tf.compat.v1.raw_ops.LoadTPUEmbeddingAdagradParametersGradAccumDebug
tf.raw_ops.LoadTPUEmbeddingAdagradParametersGradAccumDebug(
parameters, accumulators, gradient_accumulators, num_shards, shard_id,
table_id=-1, table_name='', config='', name=None
)
An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.
Args
parameters A Tensor of type float32. Value of parameters used in the Adagrad optimization algorithm.
accumulators A Tensor of type float32. Value of accumulators used in the Adagrad optimization algorithm.
gradient_accumulators A Tensor of type float32. Value of gradient_accumulators used in the Adagrad optimization algorithm.
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns The created Operation. | |
doc_3696 | See Migration guide for more details. tf.compat.v1.raw_ops.Select
tf.raw_ops.Select(
condition, x, y, name=None
)
The x, and y tensors must all have the same shape, and the output will also have that shape. The condition tensor must be a scalar if x and y are scalars. If x and y are vectors or higher rank, then condition must be either a scalar, a vector with size matching the first dimension of x, or must have the same shape as x. The condition tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from x (if true) or y (if false). If condition is a vector and x and y are higher rank matrices, then it chooses which row (outer dimension) to copy from x and y. If condition has the same shape as x and y, then it chooses which element to copy from x and y. For example: # 'condition' tensor is [[True, False]
# [False, True]]
# 't' is [[1, 2],
# [3, 4]]
# 'e' is [[5, 6],
# [7, 8]]
select(condition, t, e) # => [[1, 6], [7, 4]]
# 'condition' tensor is [True, False]
# 't' is [[1, 2],
# [3, 4]]
# 'e' is [[5, 6],
# [7, 8]]
select(condition, t, e) ==> [[1, 2],
[7, 8]]
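The same selection semantics can be sketched with NumPy's np.where; note that for the vector-condition case NumPy needs the condition expanded to a column explicitly, whereas Select broadcasts it over the rows:

```python
import numpy as np

t = np.array([[1, 2], [3, 4]])
e = np.array([[5, 6], [7, 8]])

# Elementwise selection when the condition has the same shape:
cond = np.array([[True, False], [False, True]])
print(np.where(cond, t, e))  # values [[1, 6], [7, 4]]

# Row selection with a vector condition: expand it to a column so
# each True/False picks the whole row from t or e respectively.
cond_v = np.array([True, False])
print(np.where(cond_v[:, None], t, e))  # values [[1, 2], [7, 8]]
```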
Args
condition A Tensor of type bool.
x A Tensor which may have the same shape as condition. If condition is rank 1, x may have higher rank, but its first dimension must match the size of condition.
y A Tensor with the same type and shape as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_3697 |
Alias for set_linestyle. | |
doc_3698 | Checks for any printable ASCII character which is not a space or an alphanumeric character. | |
doc_3699 | See Migration guide for more details. tf.compat.v1.debugging.is_nan, tf.compat.v1.is_nan, tf.compat.v1.math.is_nan
tf.math.is_nan(
x, name=None
)
Example: x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf])
tf.math.is_nan(x) ==> [False, True, False, True, False]
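Per the NumPy compatibility note in this entry, the op is equivalent to np.isnan; a quick check of the same input without TensorFlow:

```python
import numpy as np

x = np.array([5.0, np.nan, 6.8, np.nan, np.inf])
# np.isnan flags only the NaN entries; np.inf is not NaN.
print(list(np.isnan(x)))  # [False, True, False, True, False]
```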
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64.
name A name for the operation (optional).
Returns A Tensor of type bool.
Numpy Compatibility Equivalent to np.isnan | |