doc_24100 | Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle | |
doc_24101 |
Convert c to an RGB color, silently dropping the alpha channel.
Examples using matplotlib.colors.to_rgb
List of named colors | |
doc_24102 |
Connect func as callback function to changes of the slider value. Parameters
func : callable
Function to call when slider is changed. The function must accept a numpy array with shape (2,) as its argument. Returns
int
Connection id (which can be used to disconnect func). | |
doc_24103 |
Return self>>value. | |
doc_24104 | See Migration guide for more details. tf.compat.v1.keras.regularizers.get
tf.keras.regularizers.get(
identifier
) | |
doc_24105 | The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use Exception). If str() is called on an instance of this class, the representation of the argument(s) to the instance are returned, or the empty string when there were no arguments.
args
The tuple of arguments given to the exception constructor. Some built-in exceptions (like OSError) expect a certain number of arguments and assign a special meaning to the elements of this tuple, while others are usually called only with a single string giving an error message.
with_traceback(tb)
This method sets tb as the new traceback for the exception and returns the exception object. It is usually used in exception handling code like this:
try:
    ...
except SomeException:
    tb = sys.exc_info()[2]
    raise OtherException(...).with_traceback(tb) | |
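The pattern above can be run end to end; SomeException and OtherException below are concrete stand-ins for the placeholder names in the snippet:

```python
import sys

class SomeException(Exception):
    pass

class OtherException(Exception):
    pass

def risky():
    raise SomeException("original failure")

try:
    try:
        risky()
    except SomeException:
        tb = sys.exc_info()[2]  # traceback of the exception currently being handled
        raise OtherException("wrapped failure").with_traceback(tb)
except OtherException as exc:
    # Walk the traceback chain attached by with_traceback(): the deepest
    # frame is risky(), preserved from the original exception.
    frames = []
    t = exc.__traceback__
    while t is not None:
        frames.append(t.tb_frame.f_code.co_name)
        t = t.tb_next
    print(frames)  # the last entry is 'risky'
```

Because the original traceback is reattached, the re-raised exception still points at the frame where the first failure occurred.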
doc_24106 |
Return a random sample of items from an axis of object. You can use random_state for reproducibility. Parameters
n:int, optional
Number of items from axis to return. Cannot be used with frac. Default = 1 if frac = None.
frac:float, optional
Fraction of axis items to return. Cannot be used with n.
replace:bool, default False
Allow or disallow sampling of the same row more than once.
weights:str or ndarray-like, optional
Default ‘None’ results in equal probability weighting. If passed a Series, will align with target object on index. Index values in weights not found in sampled object will be ignored and index values in sampled object not in weights will be assigned weights of zero. If called on a DataFrame, will accept the name of a column when axis = 0. Unless weights are a Series, weights must be same length as axis being sampled. If weights do not sum to 1, they will be normalized to sum to 1. Missing values in the weights column will be treated as zero. Infinite values not allowed.
random_state:int, array-like, BitGenerator, np.random.RandomState, np.random.Generator, optional
If int, array-like, or BitGenerator, seed for random number generator. If np.random.RandomState or np.random.Generator, use as given. Changed in version 1.1.0: array-like and BitGenerator object now passed to np.random.RandomState() as seed Changed in version 1.4.0: np.random.Generator objects now accepted
axis:{0 or ‘index’, 1 or ‘columns’, None}, default None
Axis to sample. Accepts axis number or name. Default is stat axis for given data type (0 for Series and DataFrames).
ignore_index:bool, default False
If True, the resulting index will be labeled 0, 1, …, n - 1. New in version 1.3.0. Returns
Series or DataFrame
A new object of same type as caller containing n items randomly sampled from the caller object. See also DataFrameGroupBy.sample
Generates random samples from each group of a DataFrame object. SeriesGroupBy.sample
Generates random samples from each group of a Series object. numpy.random.choice
Generates a random sample from a given 1-D numpy array. Notes If frac > 1, replacement should be set to True. Examples
>>> df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
... 'num_wings': [2, 0, 0, 0],
... 'num_specimen_seen': [10, 2, 1, 8]},
... index=['falcon', 'dog', 'spider', 'fish'])
>>> df
num_legs num_wings num_specimen_seen
falcon 2 2 10
dog 4 0 2
spider 8 0 1
fish 0 0 8
Extract 3 random elements from the Series df['num_legs']. Note that we use random_state to ensure the reproducibility of the examples:
>>> df['num_legs'].sample(n=3, random_state=1)
fish 0
spider 8
falcon 2
Name: num_legs, dtype: int64
A random 50% sample of the DataFrame with replacement:
>>> df.sample(frac=0.5, replace=True, random_state=1)
num_legs num_wings num_specimen_seen
dog 4 0 2
fish 0 0 8
An upsampled sample of the DataFrame with replacement. Note that the replace parameter has to be True when frac > 1:
>>> df.sample(frac=2, replace=True, random_state=1)
num_legs num_wings num_specimen_seen
dog 4 0 2
fish 0 0 8
falcon 2 2 10
falcon 2 2 10
fish 0 0 8
dog 4 0 2
fish 0 0 8
dog 4 0 2
Using a DataFrame column as weights. Rows with larger value in the num_specimen_seen column are more likely to be sampled.
>>> df.sample(n=2, weights='num_specimen_seen', random_state=1)
num_legs num_wings num_specimen_seen
falcon 2 2 10
fish 0 0 8 | |
doc_24107 | The path to the templates folder, relative to root_path, to add to the template loader. None if templates should not be added. | |
doc_24108 | Raises a ValidationError with a code of 'max_value' if value is greater than limit_value, which may be a callable. | |
doc_24109 | See torch.tanh() | |
doc_24110 |
Axis supplied was invalid. This is raised whenever an axis parameter is specified that is larger than the number of array dimensions. For compatibility with code written against older numpy versions, which raised a mixture of ValueError and IndexError for this situation, this exception subclasses both to ensure that except ValueError and except IndexError statements continue to catch AxisError. New in version 1.13. Parameters
axis : int or str
The out of bounds axis or a custom exception message. If an axis is provided, then ndim should be specified as well.
ndim : int, optional
The number of array dimensions.
msg_prefix : str, optional
A prefix for the exception message. Examples
>>> array_1d = np.arange(10)
>>> np.cumsum(array_1d, axis=1)
Traceback (most recent call last):
...
numpy.AxisError: axis 1 is out of bounds for array of dimension 1
Negative axes are preserved:
>>> np.cumsum(array_1d, axis=-2)
Traceback (most recent call last):
...
numpy.AxisError: axis -2 is out of bounds for array of dimension 1
The class constructor generally takes the axis and arrays’ dimensionality as arguments:
>>> print(np.AxisError(2, 1, msg_prefix='error'))
error: axis 2 is out of bounds for array of dimension 1
Alternatively, a custom exception message can be passed:
>>> print(np.AxisError('Custom error message'))
Custom error message
Attributes
axis : int, optional
The out of bounds axis or None if a custom exception message was provided. This should be the axis as passed by the user, before any normalization to resolve negative indices. New in version 1.22.
ndim : int, optional
The number of array dimensions or None if a custom exception message was provided. New in version 1.22. | |
doc_24111 |
Create a recarray from a list of records in text form. Parameters
recList : sequence
data in the same field may be heterogeneous - they will be promoted to the highest data type.
dtype : data-type, optional
valid dtype for all arrays
shape : int or tuple of ints, optional
shape of each array. formats, names, titles, aligned, byteorder :
If dtype is None, these arguments are passed to numpy.format_parser to construct a dtype. See that function for detailed documentation. If both formats and dtype are None, then this will auto-detect formats. Use list of tuples rather than list of lists for faster processing. Returns
np.recarray
record array consisting of given recList rows. Examples
>>> r = np.core.records.fromrecords([(456, 'dbe', 1.2), (2, 'de', 1.3)],
...                                 names='col1,col2,col3')
>>> print(r[0])
(456, 'dbe', 1.2)
>>> r.col1
array([456, 2])
>>> r.col2
array(['dbe', 'de'], dtype='<U3')
>>> import pickle
>>> pickle.loads(pickle.dumps(r))
rec.array([(456, 'dbe', 1.2), ( 2, 'de', 1.3)],
dtype=[('col1', '<i8'), ('col2', '<U3'), ('col3', '<f8')]) | |
doc_24112 | Translate an Internet service name and protocol name to a port number for that service. The optional protocol name, if given, should be 'tcp' or 'udp', otherwise any protocol will match. Raises an auditing event socket.getservbyname with arguments servicename, protocolname. | |
doc_24113 | Werkzeug implements WSGI, the standard Python interface between applications and servers.
Jinja is a template language that renders the pages your application serves.
MarkupSafe comes with Jinja. It escapes untrusted input when rendering templates to avoid injection attacks.
ItsDangerous securely signs data to ensure its integrity. This is used to protect Flask’s session cookie.
Click is a framework for writing command line applications. It provides the flask command and allows adding custom management commands. Optional dependencies These distributions will not be installed automatically. Flask will detect and use them if you install them.
Blinker provides support for Signals.
python-dotenv enables support for Environment Variables From dotenv when running flask commands.
Watchdog provides a faster, more efficient reloader for the development server.
Virtual environments
Use a virtual environment to manage the dependencies for your project, both in development and in production.
What problem does a virtual environment solve? The more Python projects you have, the more likely it is that you need to work with different versions of Python libraries, or even Python itself. Newer versions of libraries for one project can break compatibility in another project.
Virtual environments are independent groups of Python libraries, one for each project. Packages installed for one project will not affect other projects or the operating system’s packages.
Python comes bundled with the venv module to create virtual environments.
Create an environment
Create a project folder and a venv folder within:
macOS/Linux:
$ mkdir myproject
$ cd myproject
$ python3 -m venv venv
Windows:
> mkdir myproject
> cd myproject
> py -3 -m venv venv
Activate the environment Before you work on your project, activate the corresponding environment:
macOS/Linux:
$ . venv/bin/activate
Windows:
> venv\Scripts\activate
Your shell prompt will change to show the name of the activated environment.
Install Flask
Within the activated environment, use the following command to install Flask:
$ pip install Flask
Flask is now installed. Check out the Quickstart or go to the Documentation Overview. | |
doc_24114 | Register subclass as a “virtual subclass” of this ABC. For example:
from abc import ABC

class MyABC(ABC):
    pass

MyABC.register(tuple)
assert issubclass(tuple, MyABC)
assert isinstance((), MyABC)
Changed in version 3.3: Returns the registered subclass, to allow usage as a class decorator. Changed in version 3.4: To detect calls to register(), you can use the get_cache_token() function. | |
doc_24115 | If matching the URL failed, this is the exception that will be raised / was raised as part of the request handling. This is usually a NotFound exception or something similar. | |
doc_24116 | tf.math.confusion_matrix(
labels, predictions, num_classes=None, weights=None, dtype=tf.dtypes.int32,
name=None
)
The matrix columns represent the prediction labels and the rows represent the real labels. The confusion matrix is always a 2-D array of shape [n, n], where n is the number of valid labels for a given classification task. Both prediction and labels must be 1-D arrays of the same shape in order for this function to work. If num_classes is None, then num_classes will be set to one plus the maximum value in either predictions or labels. Class labels are expected to start at 0. For example, if num_classes is 3, then the possible labels would be [0, 1, 2]. If weights is not None, then each prediction contributes its corresponding weight to the total value of the confusion matrix cell. For example:
tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==>
[[0 0 0 0 0]
[0 0 1 0 0]
[0 0 1 0 0]
[0 0 0 0 0]
[0 0 0 0 1]]
Note that the possible labels are assumed to be [0, 1, 2, 3, 4], resulting in a 5x5 confusion matrix.
Args
labels 1-D Tensor of real labels for the classification task.
predictions 1-D Tensor of predictions for a given classification.
num_classes The possible number of labels the classification task can have. If this value is not provided, it will be calculated using both predictions and labels array.
weights An optional Tensor whose shape matches predictions.
dtype Data type of the confusion matrix.
name Scope name.
Returns A Tensor of type dtype with shape [n, n] representing the confusion matrix, where n is the number of possible labels in the classification task.
Raises
ValueError If both predictions and labels are not 1-D vectors and have mismatched shapes, or if weights is not None and its shape doesn't match predictions. | |
doc_24117 | Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters
fn (Module -> None) – function to be applied to each submodule Returns
self Return type
Module Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>> print(m)
>>> if type(m) == nn.Linear:
>>> m.weight.fill_(1.0)
>>> print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
) | |
doc_24118 |
Set the SubplotSpec instance. | |
doc_24119 |
Return the Figure instance the artist belongs to. | |
doc_24120 | Test whether the filename string matches the pattern string, returning True or False. Both parameters are case-normalized using os.path.normcase(). fnmatchcase() can be used to perform a case-sensitive comparison, regardless of whether that’s standard for the operating system. This example will print all file names in the current directory with the extension .txt:
import fnmatch
import os

for file in os.listdir('.'):
    if fnmatch.fnmatch(file, '*.txt'):
        print(file) | |
doc_24121 | If you are running an entropy-gathering daemon (EGD) somewhere, and path is the pathname of a socket connection open to it, this will read 256 bytes of randomness from the socket, and add it to the SSL pseudo-random number generator to increase the security of generated secret keys. This is typically only necessary on systems without better sources of randomness. See http://egd.sourceforge.net/ or http://prngd.sourceforge.net/ for sources of entropy-gathering daemons. Availability: not available with LibreSSL and OpenSSL > 1.1.0. | |
doc_24122 |
Predict class probabilities for X. The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction of samples of the same class in a leaf. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns
p : ndarray of shape (n_samples, n_classes), or a list of n_outputs such arrays if n_outputs > 1
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. | |
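A small sketch of the call using scikit-learn's RandomForestClassifier on made-up toy data (the values are arbitrary, chosen only to illustrate the output shape):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Tiny two-class toy problem.
X = np.array([[0.0], [0.2], [0.8], [1.0]])
y = np.array([0, 0, 1, 1])

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

proba = clf.predict_proba(X)
print(proba.shape)        # (n_samples, n_classes) -> (4, 2)
print(proba.sum(axis=1))  # each row of probabilities sums to 1
```

Column order in the returned array follows clf.classes_, so column 0 holds the probability of the first class label.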
doc_24123 | Returns the matrix raised to the power n for square matrices. For batch of matrices, each individual matrix is raised to the power n. If n is negative, then the inverse of the matrix (if invertible) is raised to the power n. For a batch of matrices, the batched inverse (if invertible) is raised to the power n. If n is 0, then an identity matrix is returned. Parameters
input (Tensor) – the input tensor.
n (int) – the power to raise the matrix to Example:
>>> a = torch.randn(2, 2, 2)
>>> a
tensor([[[-1.9975, -1.9610],
[ 0.9592, -2.3364]],
[[-1.2534, -1.3429],
[ 0.4153, -1.4664]]])
>>> torch.matrix_power(a, 3)
tensor([[[ 3.9392, -23.9916],
[ 11.7357, -0.2070]],
[[ 0.2468, -6.7168],
[ 2.0774, -0.8187]]]) | |
doc_24124 |
[Deprecated] Notes Deprecated since version 3.5: | |
doc_24125 |
Return a Series/DataFrame with absolute numeric value of each element. This function only applies to elements that are all numeric. Returns
abs
Series/DataFrame containing the absolute value of each element. See also numpy.absolute
Calculate the absolute value element-wise. Notes For complex inputs, 1.2 + 1j, the absolute value is \(\sqrt{ a^2 + b^2 }\). Examples Absolute numeric values in a Series.
>>> s = pd.Series([-1.10, 2, -3.33, 4])
>>> s.abs()
0 1.10
1 2.00
2 3.33
3 4.00
dtype: float64
Absolute numeric values in a Series with complex numbers.
>>> s = pd.Series([1.2 + 1j])
>>> s.abs()
0 1.56205
dtype: float64
Absolute numeric values in a Series with a Timedelta element.
>>> s = pd.Series([pd.Timedelta('1 days')])
>>> s.abs()
0 1 days
dtype: timedelta64[ns]
Select rows with data closest to certain value using argsort (from StackOverflow).
>>> df = pd.DataFrame({
... 'a': [4, 5, 6, 7],
... 'b': [10, 20, 30, 40],
... 'c': [100, 50, -30, -50]
... })
>>> df
a b c
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
>>> df.loc[(df.c - 43).abs().argsort()]
a b c
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50 | |
doc_24126 | See Migration guide for more details. tf.compat.v1.feature_column.embedding_column
tf.feature_column.embedding_column(
categorical_column, dimension, combiner='mean', initializer=None,
ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True,
use_safe_embedding_lookup=True
)
Use this when your inputs are sparse, but you want to convert them to a dense representation (e.g., to feed to a DNN). Inputs must be a CategoricalColumn created by any of the categorical_column_* functions. Here is an example of using embedding_column with DNNClassifier:
video_id = categorical_column_with_identity(
    key='video_id', num_buckets=1000000, default_value=0)
columns = [embedding_column(video_id, 9), ...]
estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)

label_column = ...
def input_fn():
  features = tf.io.parse_example(
      ..., features=make_parse_example_spec(columns + [label_column]))
  labels = features.pop(label_column.name)
  return features, labels

estimator.train(input_fn=input_fn, steps=100)
Here is an example using embedding_column with model_fn:
def model_fn(features, ...):
  video_id = categorical_column_with_identity(
      key='video_id', num_buckets=1000000, default_value=0)
  columns = [embedding_column(video_id, 9), ...]
  dense_tensor = input_layer(features, columns)
  # Form DNN layers, calculate loss, and return EstimatorSpec.
  ...
Args
categorical_column A CategoricalColumn created by a categorical_column_with_* function. This column produces the sparse IDs that are inputs to the embedding lookup.
dimension An integer specifying dimension of the embedding, must be > 0.
combiner A string specifying how to reduce if there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of this can be thought as example level normalizations on the column. For more information, see tf.embedding_lookup_sparse.
initializer A variable initializer function to be used in embedding variable initialization. If not specified, defaults to truncated_normal_initializer with mean 0.0 and standard deviation 1/sqrt(dimension).
ckpt_to_load_from String representing checkpoint name/pattern from which to restore column weights. Required if tensor_name_in_ckpt is not None.
tensor_name_in_ckpt Name of the Tensor in ckpt_to_load_from from which to restore the column weights. Required if ckpt_to_load_from is not None.
max_norm If not None, embedding values are l2-normalized to this value.
trainable Whether or not the embedding is trainable. Default is True.
use_safe_embedding_lookup If true, uses safe_embedding_lookup_sparse instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. This only applies to rank 2 (NxM) shaped input tensors. Defaults to true, consider turning off if the above checks are not needed. Note that having empty rows will not trigger any error though the output result might be 0 or omitted.
Returns DenseColumn that converts from sparse input.
Raises
ValueError if dimension not > 0.
ValueError if exactly one of ckpt_to_load_from and tensor_name_in_ckpt is specified.
ValueError if initializer is specified and is not callable.
RuntimeError If eager execution is enabled. | |
doc_24127 | See Migration guide for more details. tf.compat.v1.repeat
tf.repeat(
input, repeats, axis=None, name=None
)
See also tf.concat, tf.stack, tf.tile.
Args
input An N-dimensional Tensor.
repeats An 1-D int Tensor. The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis. len(repeats) must equal input.shape[axis] if axis is not None.
axis An int. The axis along which to repeat values. By default (axis=None), use the flattened input array, and return a flat output array.
name A name for the operation.
Returns A Tensor which has the same shape as input, except along the given axis. If axis is None then the output array is flattened to match the flattened input array.
Example usage:
repeat(['a', 'b', 'c'], repeats=[3, 0, 2], axis=0)
<tf.Tensor: shape=(5,), dtype=string,
numpy=array([b'a', b'a', b'a', b'c', b'c'], dtype=object)>
repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=0)
<tf.Tensor: shape=(5, 2), dtype=int32, numpy=
array([[1, 2],
[1, 2],
[3, 4],
[3, 4],
[3, 4]], dtype=int32)>
repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=1)
<tf.Tensor: shape=(2, 5), dtype=int32, numpy=
array([[1, 1, 2, 2, 2],
[3, 3, 4, 4, 4]], dtype=int32)>
repeat(3, repeats=4)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([3, 3, 3, 3], dtype=int32)>
repeat([[1,2], [3,4]], repeats=2)
<tf.Tensor: shape=(8,), dtype=int32,
numpy=array([1, 1, 2, 2, 3, 3, 4, 4], dtype=int32)> | |
doc_24128 | Manually clean up the data in the locals for this context. Call this at the end of the request or use make_middleware(). Return type
None | |
doc_24129 |
Roll provided date forward to next offset only if not on offset. Returns
TimeStamp
Rolled timestamp if not on offset, otherwise unchanged timestamp. | |
doc_24130 |
Alias for set_linewidth. | |
doc_24131 | Return True if the queue is empty, False otherwise. | |
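A short sketch with the standard queue module; note that empty() reflects a single instant in time and is not a reliable guard in multi-threaded code, where get()/put() with exceptions or timeouts should be preferred:

```python
import queue

q = queue.Queue()
print(q.empty())  # True: nothing has been put yet

q.put('job')
print(q.empty())  # False: one item is waiting

q.get()
print(q.empty())  # True again after the item is consumed
```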
doc_24132 |
Bases: matplotlib.blocking_input.BlockingInput
Callable for retrieving mouse clicks in a blocking way. This class will also retrieve keypresses and map them to mouse clicks: delete and backspace are a right click, enter is like a middle click, and all others are like a left click.
add_click(event)
Add the coordinates of an event to the list of clicks. Parameters
event : MouseEvent
button_add = 1
button_pop = 3
button_stop = 2
cleanup(event=None)
Parameters
event : MouseEvent, optional
Not used.
key_event()
Process a key press event, mapping keys to appropriate mouse clicks.
mouse_event()
Process a mouse click event.
mouse_event_add(event)
Process a button-1 event (add a click if inside axes). Parameters
event : MouseEvent
mouse_event_pop(event)
Process a button-3 event (remove the last click). Parameters
event : MouseEvent
mouse_event_stop(event)
Process a button-2 event (end blocking input). Parameters
event : MouseEvent
pop(event, index=-1)
Remove a click and the associated event from the list of clicks. Defaults to the last click.
pop_click(event, index=-1)
Remove a click (by default, the last) from the list of clicks. Parameters
event : MouseEvent
post_event()
Process an event. | |
doc_24133 | Time of most recent content modification expressed in seconds. | |
doc_24134 |
Return the clip path. | |
doc_24135 | Closes all files. If you put real file objects into the files dict you can call this method to automatically close them all in one go. Return type
None | |
doc_24136 | Remove (delete) the directory path. If the directory does not exist or is not empty, a FileNotFoundError or an OSError is raised respectively. In order to remove whole directory trees, shutil.rmtree() can be used. This function can support paths relative to directory descriptors. Raises an auditing event os.rmdir with arguments path, dir_fd. New in version 3.3: The dir_fd parameter. Changed in version 3.6: Accepts a path-like object. | |
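A minimal sketch using a throwaway temporary directory:

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, 'child')
os.mkdir(target)

os.rmdir(target)               # succeeds: the directory is empty
print(os.path.exists(target))  # False

try:
    os.rmdir(os.path.join(base, 'missing'))
except FileNotFoundError:
    print('cannot remove a directory that does not exist')

os.rmdir(base)                 # clean up the temporary base directory
```

Attempting to remove a non-empty directory raises OSError instead; use shutil.rmtree() for whole trees.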
doc_24137 | Casts all parameters and buffers to dst_type. Parameters
dst_type (type or string) – the desired type Returns
self Return type
Module | |
doc_24138 | Finally send the headers to the output stream and flush the internal headers buffer. New in version 3.3. | |
doc_24139 | See torch.mode() | |
doc_24140 |
The .. mathmpl:: directive, as documented in the module's docstring. | |
doc_24141 | Append the last nelements items of history to a file. The default filename is ~/.history. The file must already exist. This calls append_history() in the underlying library. This function only exists if Python was compiled for a version of the library that supports it. New in version 3.5. | |
doc_24142 |
Return the month names of the DateTimeIndex with specified locale. Parameters
locale:str, optional
Locale determining the language in which to return the month name. Default is English locale. Returns
Index
Index of month names. Examples
>>> idx = pd.date_range(start='2018-01', freq='M', periods=3)
>>> idx
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'],
dtype='datetime64[ns]', freq='M')
>>> idx.month_name()
Index(['January', 'February', 'March'], dtype='object') | |
doc_24143 | Get tree status. Returns
tree_stats: tuple of int
(number of trims, number of leaves, number of splits) | |
doc_24144 | Closes the PixelArray and releases the Surface lock. close() -> None This method is for explicitly closing the PixelArray and releasing a lock on the Surface. New in pygame 1.9.4. | |
doc_24145 |
Context manager for floating-point error handling. Using an instance of errstate as a context manager allows statements in that context to execute with a known error handling behavior. Upon entering the context the error handling is set with seterr and seterrcall, and upon exiting it is reset to what it was before. Changed in version 1.17.0: errstate is also usable as a function decorator, saving a level of indentation if an entire function is wrapped. See contextlib.ContextDecorator for more information. Parameters
kwargs : {divide, over, under, invalid}
Keyword arguments. The valid keywords are the possible floating-point exceptions. Each keyword should have a string value that defines the treatment for the particular error. Possible values are {‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}. See also
seterr, geterr, seterrcall, geterrcall
Notes For complete documentation of the types of floating-point exceptions and treatment options, see seterr. Examples
>>> olderr = np.seterr(all='ignore')  # Set error handling to known state.
>>> np.arange(3) / 0.
array([nan, inf, inf])
>>> with np.errstate(divide='warn'):
... np.arange(3) / 0.
array([nan, inf, inf])
>>> np.sqrt(-1)
nan
>>> with np.errstate(invalid='raise'):
... np.sqrt(-1)
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
FloatingPointError: invalid value encountered in sqrt
Outside the context the error handling behavior has not changed:
>>> np.geterr()
{'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid': 'ignore'}
Methods
__call__(func) Call self as a function. | |
doc_24146 | Create and return a new database name, initialize it with schema, and set the properties ProductName, ProductCode, ProductVersion, and Manufacturer. schema must be a module object containing tables and _Validation_records attributes; typically, msilib.schema should be used. The database will contain just the schema and the validation records when this function returns. | |
doc_24147 | See Migration guide for more details. tf.compat.v1.raw_ops.BesselK1
tf.raw_ops.BesselK1(
x, name=None
)
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_24148 |
A reference to the array that is iterated over. Examples
>>> x = np.arange(5)
>>> fl = x.flat
>>> fl.base is x
True | |
doc_24149 | Returns the sum of the elements of the diagonal of the input 2-D matrix. Example: >>> x = torch.arange(1., 10.).view(3, 3)
>>> x
tensor([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
>>> torch.trace(x)
tensor(15.) | |
doc_24150 |
Calculate slice bound that corresponds to given label. Returns leftmost (one-past-the-rightmost if side=='right') position of given label. Parameters
label:object
side:{‘left’, ‘right’}
kind:{‘loc’, ‘getitem’} or None
Deprecated since version 1.4.0. Returns
int
Index of label. | |
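A small sketch on a hypothetical sorted Index:

```python
import pandas as pd

idx = pd.Index(['a', 'b', 'c', 'd', 'e'])

# Leftmost position of 'c'.
print(idx.get_slice_bound('c', 'left'))   # 2

# One past the rightmost position of 'c'.
print(idx.get_slice_bound('c', 'right'))  # 3
```

The two bounds together delimit the half-open slice covering the label, so idx[left:right] returns exactly the matching entries.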
doc_24151 | Get the mapping of live local variables in generator to their current values. A dictionary is returned that maps from variable names to values. This is the equivalent of calling locals() in the body of the generator, and all the same caveats apply. If generator is a generator with no currently associated frame, then an empty dictionary is returned. TypeError is raised if generator is not a Python generator object. CPython implementation detail: This function relies on the generator exposing a Python stack frame for introspection, which isn’t guaranteed to be the case in all implementations of Python. In such cases, this function will always return an empty dictionary. New in version 3.3. | |
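A short sketch with a made-up generator:

```python
import inspect

def counter(limit):
    total = 0
    for i in range(limit):
        total += i
        yield total

gen = counter(3)
next(gen)  # advance so the generator has a live frame

# Maps live local variable names to their current values.
print(inspect.getgeneratorlocals(gen))  # e.g. {'limit': 3, 'total': 0, 'i': 0}

gen.close()  # no associated frame afterwards
print(inspect.getgeneratorlocals(gen))  # {}
```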
doc_24152 | Deprecated and no longer used. | |
doc_24153 | Return a ctypes object allocated from shared memory which is a copy of the ctypes object obj. | |
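A minimal sketch (the value 3.14 is arbitrary):

```python
import ctypes
from multiprocessing.sharedctypes import copy

original = ctypes.c_double(3.14)
shared = copy(original)  # allocated from shared memory

print(shared.value)    # 3.14: the copy starts with the same value
shared.value = 2.71    # mutating the copy leaves the original untouched
print(original.value)  # still 3.14
```

Because the copy lives in shared memory, it can be passed to child processes, unlike the plain ctypes original.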
doc_24154 | PostgreSQL specific aggregation functions General-purpose aggregation functions Aggregate functions for statistics Usage examples
PostgreSQL specific database constraints ExclusionConstraint
PostgreSQL specific query expressions ArraySubquery() expressions
PostgreSQL specific model fields Indexing these fields ArrayField CIText fields HStoreField Range Fields
PostgreSQL specific form fields and widgets Fields Widgets
PostgreSQL specific database functions RandomUUID TransactionNow
PostgreSQL specific model indexes BloomIndex BrinIndex BTreeIndex GinIndex GistIndex HashIndex SpGistIndex OpClass() expressions
PostgreSQL specific lookups Trigram similarity Unaccent
Database migration operations Creating extension using migrations CreateExtension BloomExtension BtreeGinExtension BtreeGistExtension CITextExtension CryptoExtension HStoreExtension TrigramExtension UnaccentExtension Managing collations using migrations Concurrent index operations Adding constraints without enforcing validation
Full text search The search lookup SearchVector SearchQuery SearchRank SearchHeadline Changing the search configuration Weighting queries Performance Trigram similarity
Validators KeysValidator Range validators | |
doc_24155 | REST_FRAMEWORK = {
    'DEFAULT_PARSER_CLASSES': [
'rest_framework.parsers.JSONParser',
]
}
You can also set the parsers used for an individual view, or viewset, using the APIView class-based views. from rest_framework.parsers import JSONParser
from rest_framework.response import Response
from rest_framework.views import APIView
class ExampleView(APIView):
"""
A view that can accept POST requests with JSON content.
"""
parser_classes = [JSONParser]
def post(self, request, format=None):
return Response({'received data': request.data})
Or, if you're using the @api_view decorator with function based views. from rest_framework.decorators import api_view
from rest_framework.decorators import parser_classes
from rest_framework.parsers import JSONParser
@api_view(['POST'])
@parser_classes([JSONParser])
def example_view(request, format=None):
"""
A view that can accept POST requests with JSON content.
"""
return Response({'received data': request.data})
API Reference
JSONParser
Parses JSON request content. request.data will be populated with a dictionary of data. .media_type: application/json
FormParser
Parses HTML form content. request.data will be populated with a QueryDict of data. You will typically want to use both FormParser and MultiPartParser together in order to fully support HTML form data. .media_type: application/x-www-form-urlencoded
MultiPartParser
Parses multipart HTML form content, which supports file uploads. request.data will be populated with a QueryDict. You will typically want to use both FormParser and MultiPartParser together in order to fully support HTML form data. .media_type: multipart/form-data
FileUploadParser
Parses raw file upload content. The request.data property will be a dictionary with a single key 'file' containing the uploaded file. If the view used with FileUploadParser is called with a filename URL keyword argument, then that argument will be used as the filename. If it is called without a filename URL keyword argument, then the client must set the filename in the Content-Disposition HTTP header. For example Content-Disposition: attachment; filename=upload.jpg. .media_type: */*
Notes: The FileUploadParser is for usage with native clients that can upload the file as a raw data request. For web-based uploads, or for native clients with multipart upload support, you should use the MultiPartParser instead. Since this parser's media_type matches any content type, FileUploadParser should generally be the only parser set on an API view.
FileUploadParser respects Django's standard FILE_UPLOAD_HANDLERS setting, and the request.upload_handlers attribute. See the Django documentation for more details. Basic usage example: # views.py
class FileUploadView(views.APIView):
parser_classes = [FileUploadParser]
def put(self, request, filename, format=None):
file_obj = request.data['file']
# ...
# do some stuff with uploaded file
# ...
return Response(status=204)
# urls.py
urlpatterns = [
# ...
re_path(r'^upload/(?P<filename>[^/]+)$', FileUploadView.as_view())
]
Custom parsers To implement a custom parser, you should override BaseParser, set the .media_type property, and implement the .parse(self, stream, media_type, parser_context) method. The method should return the data that will be used to populate the request.data property. The arguments passed to .parse() are: stream A stream-like object representing the body of the request. media_type Optional. If provided, this is the media type of the incoming request content. Depending on the request's Content-Type: header, this may be more specific than the renderer's media_type attribute, and may include media type parameters. For example "text/plain; charset=utf-8". parser_context Optional. If supplied, this argument will be a dictionary containing any additional context that may be required to parse the request content. By default this will include the following keys: view, request, args, kwargs. Example The following is an example plaintext parser that will populate the request.data property with a string representing the body of the request. class PlainTextParser(BaseParser):
"""
Plain text parser.
"""
media_type = 'text/plain'
def parse(self, stream, media_type=None, parser_context=None):
"""
Simply return a string representing the body of the request.
"""
return stream.read()
Third party packages The following third party packages are also available. YAML REST framework YAML provides YAML parsing and rendering support. It was previously included directly in the REST framework package, and is now instead supported as a third-party package. Installation & configuration Install using pip. $ pip install djangorestframework-yaml
Modify your REST framework settings. REST_FRAMEWORK = {
'DEFAULT_PARSER_CLASSES': [
'rest_framework_yaml.parsers.YAMLParser',
],
'DEFAULT_RENDERER_CLASSES': [
'rest_framework_yaml.renderers.YAMLRenderer',
],
}
XML REST Framework XML provides a simple informal XML format. It was previously included directly in the REST framework package, and is now instead supported as a third-party package. Installation & configuration Install using pip. $ pip install djangorestframework-xml
Modify your REST framework settings. REST_FRAMEWORK = {
'DEFAULT_PARSER_CLASSES': [
'rest_framework_xml.parsers.XMLParser',
],
'DEFAULT_RENDERER_CLASSES': [
'rest_framework_xml.renderers.XMLRenderer',
],
}
MessagePack MessagePack is a fast, efficient binary serialization format. Juan Riaza maintains the djangorestframework-msgpack package which provides MessagePack renderer and parser support for REST framework. CamelCase JSON djangorestframework-camel-case provides camel case JSON renderers and parsers for REST framework. This allows serializers to use Python-style underscored field names, but be exposed in the API as Javascript-style camel case field names. It is maintained by Vitaly Babiy. parsers.py | |
doc_24156 |
Bases: object A class that converts strings to paths. DPI=72
FONT_SCALE=100.0
get_glyphs_mathtext(prop, s, glyph_map=None, return_new_glyphs_only=False)[source]
Parse mathtext string s and convert it to a (vertices, codes) pair.
get_glyphs_tex(prop, s, glyph_map=None, return_new_glyphs_only=False)[source]
Convert the string s to vertices and codes using usetex mode.
get_glyphs_with_font(font, s, glyph_map=None, return_new_glyphs_only=False)[source]
Convert string s to vertices and codes using the provided ttf font.
get_texmanager()[source]
Return the cached TexManager instance.
get_text_path(prop, s, ismath=False)[source]
Convert text s to path (a tuple of vertices and codes for matplotlib.path.Path). Parameters
propFontProperties
The font properties for the text.
sstr
The text to be converted.
ismath{False, True, "TeX"}
If True, use mathtext parser. If "TeX", use tex for rendering. Returns
vertslist
A list of numpy arrays containing the x and y coordinates of the vertices.
codeslist
A list of path codes. Examples Create a list of vertices and codes from a text, and create a Path from those: from matplotlib.path import Path
from matplotlib.textpath import TextToPath
from matplotlib.font_manager import FontProperties
fp = FontProperties(family="Humor Sans", style="italic")
verts, codes = TextToPath().get_text_path(fp, "ABC")
path = Path(verts, codes, closed=False)
Also see TextPath for a more direct way to create a path from a text.
get_text_width_height_descent(s, prop, ismath)[source] | |
doc_24157 |
Bases: matplotlib.backend_managers.ToolEvent Event to inform that a tool has been triggered. | |
doc_24158 | See Migration guide for more details. tf.compat.v1.compat.as_text
tf.compat.as_text(
bytes_or_text, encoding='utf-8'
)
Returns the input as a unicode string. Uses utf-8 encoding for text by default.
Args
bytes_or_text A bytes, str, or unicode object.
encoding A string indicating the charset for decoding unicode.
Returns A unicode (Python 2) or str (Python 3) object.
Raises
TypeError If bytes_or_text is not a binary or unicode string. | |
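The semantics described in the entry above can be approximated in plain Python. This is a hedged sketch of the documented behaviour, not TensorFlow's implementation; the function name is invented for illustration:

```python
def as_text_sketch(bytes_or_text, encoding="utf-8"):
    """Rough stand-in for tf.compat.as_text: decode bytes, pass str through."""
    if isinstance(bytes_or_text, str):
        return bytes_or_text
    if isinstance(bytes_or_text, (bytes, bytearray)):
        return bytes(bytes_or_text).decode(encoding)
    raise TypeError(f"Expected binary or unicode string, got {bytes_or_text!r}")
```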
doc_24159 | Returns a new tensor with the exponential of the elements of the input tensor input. yi=exiy_{i} = e^{x_{i}}
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.exp(torch.tensor([0, math.log(2.)]))
tensor([ 1., 2.]) | |
doc_24160 | Join adjacent text nodes so that all stretches of text are stored as single Text instances. This simplifies processing text from a DOM tree for many applications. | |
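For example, adjacent Text nodes created programmatically are merged into a single Text instance by normalize():

```python
from xml.dom import minidom

doc = minidom.parseString("<root/>")
root = doc.documentElement
root.appendChild(doc.createTextNode("Hello, "))
root.appendChild(doc.createTextNode("world"))

assert len(root.childNodes) == 2      # two separate Text nodes
doc.normalize()
assert len(root.childNodes) == 1      # merged into one
assert root.firstChild.data == "Hello, world"
```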
doc_24161 |
Set the CapStyle for the collection (for all its elements). Parameters
csCapStyle or {'butt', 'projecting', 'round'} | |
doc_24162 |
Scale features using statistics that are robust to outliers. This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile). Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Median and interquartile range are then stored to be used on later data using the transform method. Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results. New in version 0.17. Read more in the User Guide. Parameters
with_centeringbool, default=True
If True, center the data before scaling. This will cause transform to raise an exception when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
with_scalingbool, default=True
If True, scale the data to interquartile range.
quantile_rangetuple (q_min, q_max), 0.0 < q_min < q_max < 100.0, default=(25.0, 75.0), == (1st quantile, 3rd quantile), == IQR
Quantile range used to calculate scale_. New in version 0.18.
copybool, default=True
If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
unit_variancebool, default=False
If True, scale data so that normally distributed features have a variance of 1. In general, if the difference between the x-values of q_max and q_min for a standard normal distribution is greater than 1, the dataset will be scaled down. If less than 1, the dataset will be scaled up. New in version 0.24. Attributes
center_array of floats
The median value for each feature in the training set.
scale_array of floats
The (scaled) interquartile range for each feature in the training set. New in version 0.17: scale_ attribute. See also
robust_scale
Equivalent function without the estimator API.
PCA
Further removes the linear correlation across features with ‘whiten=True’. Notes For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. https://en.wikipedia.org/wiki/Median https://en.wikipedia.org/wiki/Interquartile_range Examples >>> from sklearn.preprocessing import RobustScaler
>>> X = [[ 1., -2., 2.],
... [ -2., 1., 3.],
... [ 4., 1., -2.]]
>>> transformer = RobustScaler().fit(X)
>>> transformer
RobustScaler()
>>> transformer.transform(X)
array([[ 0. , -2. , 0. ],
[-1. , 0. , 0.4],
[ 1. , 0. , -1.6]])
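The transform() output above can be reproduced by hand: subtract each column's median and divide by its interquartile range. The following is a sketch of that arithmetic only (not scikit-learn's implementation), using linear interpolation between order statistics for the quantiles, as NumPy does by default:

```python
from statistics import median

def percentile(values, q):
    # Linear interpolation between order statistics (NumPy's default method).
    s = sorted(values)
    pos = q / 100 * (len(s) - 1)
    lo = int(pos)
    frac = pos - lo
    return s[lo] + frac * (s[min(lo + 1, len(s) - 1)] - s[lo])

X = [[1.0, -2.0, 2.0],
     [-2.0, 1.0, 3.0],
     [4.0, 1.0, -2.0]]

cols = list(zip(*X))
centers = [median(c) for c in cols]                            # per-column medians
iqrs = [percentile(c, 75) - percentile(c, 25) for c in cols]   # per-column IQRs
scaled = [[(v - m) / s for v, m, s in zip(row, centers, iqrs)] for row in X]
# 'scaled' matches the transform() output shown above
```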
Methods
fit(X[, y]) Compute the median and quantiles to be used for scaling.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Scale back the data to the original representation
set_params(**params) Set the parameters of this estimator.
transform(X) Center and scale the data.
fit(X, y=None) [source]
Compute the median and quantiles to be used for scaling. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to compute the median and quantiles used for later scaling along the features axis.
yNone
Ignored. Returns
selfobject
Fitted scaler.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Scale back the data to the original representation Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The rescaled data to be transformed back. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Center and scale the data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the specified axis. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array. | |
doc_24163 | Return a copy of the string with the leading and trailing characters removed. The chars argument is a string specifying the set of characters to be removed. If omitted or None, the chars argument defaults to removing whitespace. The chars argument is not a prefix or suffix; rather, all combinations of its values are stripped: >>> ' spacious '.strip()
'spacious'
>>> 'www.example.com'.strip('cmowz.')
'example'
The outermost leading and trailing chars argument values are stripped from the string. Characters are removed from the leading end until reaching a string character that is not contained in the set of characters in chars. A similar action takes place on the trailing end. For example: >>> comment_string = '#....... Section 3.2.1 Issue #32 .......'
>>> comment_string.strip('.#! ')
'Section 3.2.1 Issue #32' | |
doc_24164 | os.RTLD_NOW
os.RTLD_GLOBAL
os.RTLD_LOCAL
os.RTLD_NODELETE
os.RTLD_NOLOAD
os.RTLD_DEEPBIND
Flags for use with the setdlopenflags() and getdlopenflags() functions. See the Unix manual page dlopen(3) for what the different flags mean. New in version 3.3. | |
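A sketch of how these flags are typically combined with sys.setdlopenflags() before importing an extension module (Unix-only; the commented import site is a placeholder):

```python
import os
import sys

# Only meaningful on Unix, where the interpreter loads extensions via dlopen(3).
old_flags = sys.getdlopenflags()
try:
    # Ask for eager symbol resolution and global symbol visibility.
    sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
    # ... import an extension module here; it will be dlopen()ed
    # with the flags set above ...
finally:
    sys.setdlopenflags(old_flags)      # restore the previous behaviour
```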
doc_24165 | Return a byte representation of the message corresponding to key, or raise a KeyError exception if no such message exists. New in version 3.2. | |
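For example, with an mbox mailbox (the path here is a throwaway temporary file, used only for illustration):

```python
import email.message
import mailbox
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.mbox")
box = mailbox.mbox(path)

msg = email.message.Message()
msg["Subject"] = "hello"
msg.set_payload("body text\n")
key = box.add(msg)

raw = box.get_bytes(key)       # raw byte representation of the stored message
assert b"Subject: hello" in raw
box.close()
```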
doc_24166 |
Color the text in a gradient style. The text color is determined according to the data in each column, row or frame, or by a given gradient map. Requires matplotlib. Parameters
cmap:str or colormap
Matplotlib colormap.
low:float
Compress the color range at the low end. This is a multiple of the data range to extend below the minimum; good values usually in [0, 1], defaults to 0.
high:float
Compress the color range at the high end. This is a multiple of the data range to extend above the maximum; good values usually in [0, 1], defaults to 0.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
text_color_threshold:float or int
This argument is ignored (only used in background_gradient). Luminance threshold for determining text color in [0, 1]. Facilitates text visibility across varying background colors. All text is dark if 0, and light if 1, defaults to 0.408.
vmin:float, optional
Minimum data value that corresponds to colormap minimum value. If not specified the minimum value of the data (or gmap) will be used. New in version 1.0.0.
vmax:float, optional
Maximum data value that corresponds to colormap maximum value. If not specified the maximum value of the data (or gmap) will be used. New in version 1.0.0.
gmap:array-like, optional
Gradient map for determining the text colors. If not supplied will use the underlying data from rows, columns or frame. If given as an ndarray or list-like must be an identical shape to the underlying data considering axis and subset. If given as DataFrame or Series must have same index and column labels considering axis and subset. If supplied, vmin and vmax should be given relative to this gradient map. New in version 1.3.0. Returns
self:Styler
See also Styler.background_gradient
Color the background in a gradient style. Notes When using low and high the range of the gradient, given by the data if gmap is not given or by gmap, is extended at the low end effectively by map.min - low * map.range and at the high end by map.max + high * map.range before the colors are normalized and determined. If combining with vmin and vmax the map.min, map.max and map.range are replaced by values according to the values derived from vmin and vmax. This method will preselect numeric columns and ignore non-numeric columns unless a gmap is supplied in which case no preselection occurs. Examples
>>> df = pd.DataFrame(columns=["City", "Temp (c)", "Rain (mm)", "Wind (m/s)"],
... data=[["Stockholm", 21.6, 5.0, 3.2],
... ["Oslo", 22.4, 13.3, 3.1],
... ["Copenhagen", 24.5, 0.0, 6.7]])
Shading the values column-wise, with axis=0, preselecting numeric columns
>>> df.style.text_gradient(axis=0)
Shading all values collectively using axis=None
>>> df.style.text_gradient(axis=None)
Compress the color map from the both low and high ends
>>> df.style.text_gradient(axis=None, low=0.75, high=1.0)
Manually setting vmin and vmax gradient thresholds
>>> df.style.text_gradient(axis=None, vmin=6.7, vmax=21.6)
Setting a gmap and applying to all columns with another cmap
>>> df.style.text_gradient(axis=0, gmap=df['Temp (c)'], cmap='YlOrRd')
...
Setting the gradient map for a dataframe (i.e. axis=None), we need to explicitly state subset to match the gmap shape
>>> gmap = np.array([[1,2,3], [2,3,4], [3,4,5]])
>>> df.style.text_gradient(axis=None, gmap=gmap,
... cmap='YlOrRd', subset=['Temp (c)', 'Rain (mm)', 'Wind (m/s)']
... ) | |
doc_24167 | Return the name of the (text or binary) file in which an object was defined. This will fail with a TypeError if the object is a built-in module, class, or function. | |
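For instance (the printed path varies by installation):

```python
import inspect
import os
import sys

print(inspect.getfile(os))        # e.g. '/usr/lib/python3.x/os.py'

try:
    inspect.getfile(sys)          # sys is a built-in module
except TypeError as exc:
    print("TypeError:", exc)
```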
doc_24168 |
Return log-probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. | |
doc_24169 |
Convenience method for simple axis view autoscaling. See matplotlib.axes.Axes.autoscale() for full explanation. Note that this function behaves the same, but for all three axes. Therefore, 'z' can be passed for axis, and 'both' applies to all three axes. | |
doc_24170 | class sklearn.linear_model.HuberRegressor(*, epsilon=1.35, max_iter=100, alpha=0.0001, warm_start=False, fit_intercept=True, tol=1e-05) [source]
Linear regression model that is robust to outliers. The Huber Regressor optimizes the squared loss for the samples where |(y - X'w) / sigma| < epsilon and the absolute loss for the samples where |(y - X'w) / sigma| > epsilon, where w and sigma are parameters to be optimized. The parameter sigma makes sure that if y is scaled up or down by a certain factor, one does not need to rescale epsilon to achieve the same robustness. Note that this does not take into account the fact that the different features of X may be of different scales. This makes sure that the loss function is not heavily influenced by the outliers while not completely ignoring their effect. Read more in the User Guide New in version 0.18. Parameters
epsilonfloat, greater than 1.0, default=1.35
The parameter epsilon controls the number of samples that should be classified as outliers. The smaller the epsilon, the more robust it is to outliers.
max_iterint, default=100
Maximum number of iterations that scipy.optimize.minimize(method="L-BFGS-B") should run for.
alphafloat, default=0.0001
Regularization parameter.
warm_startbool, default=False
This is useful if the stored attributes of a previously used model has to be reused. If set to False, then the coefficients will be rewritten for every call to fit. See the Glossary.
fit_interceptbool, default=True
Whether or not to fit the intercept. This can be set to False if the data is already centered around the origin.
tolfloat, default=1e-05
The iteration will stop when max{|proj g_i | i = 1, ..., n} <= tol where pg_i is the i-th component of the projected gradient. Attributes
coef_array, shape (n_features,)
Features got by optimizing the Huber loss.
intercept_float
Bias.
scale_float
The value by which |y - X'w - c| is scaled down.
n_iter_int
Number of iterations that scipy.optimize.minimize(method="L-BFGS-B") has run for. Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed max_iter. n_iter_ will now report at most max_iter.
outliers_array, shape (n_samples,)
A boolean mask which is set to True where the samples are identified as outliers. References
1
Peter J. Huber, Elvezio M. Ronchetti, Robust Statistics Concomitant scale estimates, pg 172
2
Art B. Owen (2006), A robust hybrid of lasso and ridge regression. https://statweb.stanford.edu/~owen/reports/hhu.pdf Examples >>> import numpy as np
>>> from sklearn.linear_model import HuberRegressor, LinearRegression
>>> from sklearn.datasets import make_regression
>>> rng = np.random.RandomState(0)
>>> X, y, coef = make_regression(
... n_samples=200, n_features=2, noise=4.0, coef=True, random_state=0)
>>> X[:4] = rng.uniform(10, 20, (4, 2))
>>> y[:4] = rng.uniform(10, 20, 4)
>>> huber = HuberRegressor().fit(X, y)
>>> huber.score(X, y)
-7.284...
>>> huber.predict(X[:1,])
array([806.7200...])
>>> linear = LinearRegression().fit(X, y)
>>> print("True coefficients:", coef)
True coefficients: [20.4923... 34.1698...]
>>> print("Huber coefficients:", huber.coef_)
Huber coefficients: [17.7906... 31.0106...]
>>> print("Linear Regression coefficients:", linear.coef_)
Linear Regression coefficients: [-1.9221... 7.0226...]
Methods
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
Xarray-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like, shape (n_samples,)
Target vector relative to X.
sample_weightarray-like, shape (n_samples,)
Weight given to each sample. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred)
** 2).sum() and \(v\) is the total sum of squares ((y_true -
y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.linear_model.HuberRegressor
HuberRegressor vs Ridge on dataset with strong outliers
Robust linear estimator fitting | |
doc_24171 |
[Deprecated] Convert UNIX time to days since Matplotlib epoch. Parameters
elist of floats
Time in seconds since 1970-01-01. Returns
numpy.array
Time in days since Matplotlib epoch (see get_epoch()). Notes Deprecated since version 3.5. | |
doc_24172 | word will usually be a user’s password as typed at a prompt or in a graphical interface. The optional salt is either a string as returned from mksalt(), one of the crypt.METHOD_* values (though not all may be available on all platforms), or a full encrypted password including salt, as returned by this function. If salt is not provided, the strongest method will be used (as returned by methods()). Checking a password is usually done by passing the plain-text password as word and the full results of a previous crypt() call, which should be the same as the results of this call. salt (either a random 2 or 16 character string, possibly prefixed with $digit$ to indicate the method) which will be used to perturb the encryption algorithm. The characters in salt must be in the set [./a-zA-Z0-9], with the exception of Modular Crypt Format which prefixes a $digit$. Returns the hashed password as a string, which will be composed of characters from the same alphabet as the salt. Since a few crypt(3) extensions allow different values, with different sizes in the salt, it is recommended to use the full crypted password as salt when checking for a password. Changed in version 3.3: Accept crypt.METHOD_* values in addition to strings for salt. | |
doc_24173 | See Migration guide for more details. tf.compat.v1.nn.ctc_unique_labels
tf.nn.ctc_unique_labels(
labels, name=None
)
For use with the tf.nn.ctc_loss optional argument unique: this op can be used to preprocess labels in the input pipeline for better speed/memory use when computing the CTC loss on TPU. Example: ctc_unique_labels([[3, 4, 4, 3]]) -> unique labels padded with 0: [[3, 4, 0, 0]] indices of original labels in unique: [0, 1, 1, 0]
Args
labels tensor of shape [batch_size, max_label_length] padded with 0.
name A name for this Op. Defaults to "ctc_unique_labels".
Returns tuple of unique labels, tensor of shape [batch_size, max_label_length]
indices into unique labels, shape [batch_size, max_label_length] | |
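The transformation in the example above can be sketched in plain Python. This illustrates the op's semantics only (order-preserving unique labels, zero-padded, plus the index of each original label in the unique list), not TensorFlow's kernel:

```python
def ctc_unique_labels_sketch(labels):
    """For each row: order-preserving unique labels (0-padded to the row
    length) plus the index of each original label within that unique list."""
    unique_rows, index_rows = [], []
    for row in labels:
        seen = []
        idx = []
        for label in row:
            if label not in seen:
                seen.append(label)
            idx.append(seen.index(label))
        unique_rows.append(seen + [0] * (len(row) - len(seen)))
        index_rows.append(idx)
    return unique_rows, index_rows

uniq, idx = ctc_unique_labels_sketch([[3, 4, 4, 3]])
assert uniq == [[3, 4, 0, 0]]
assert idx == [[0, 1, 1, 0]]
```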
doc_24174 | A subclass of SSLError raised when trying to read or write and the SSL connection has been closed cleanly. Note that this doesn’t mean that the underlying transport (read TCP) has been closed. New in version 3.3. | |
doc_24175 | See Migration guide for more details. tf.compat.v1.raw_ops.Lu
tf.raw_ops.Lu(
input, output_idx_type=tf.dtypes.int32, name=None
)
The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The input has to be invertible. The output consists of two tensors LU and P containing the LU decomposition of all input submatrices [..., :, :]. LU encodes the lower triangular and upper triangular factors. For each input submatrix of shape [M, M], L is a lower triangular matrix of shape [M, M] with unit diagonal whose entries correspond to the strictly lower triangular part of LU. U is an upper triangular matrix of shape [M, M] whose entries correspond to the upper triangular part, including the diagonal, of LU. P represents a permutation matrix encoded as a list of indices each between 0 and M-1, inclusive. If P_mat denotes the permutation matrix corresponding to P, then L, U and P satisfy P_mat * input = L * U.
Args
input A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. A tensor of shape [..., M, M] whose inner-most 2 dimensions form matrices of size [M, M].
output_idx_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name A name for the operation (optional).
Returns A tuple of Tensor objects (lu, p). lu A Tensor. Has the same type as input.
p A Tensor of type output_idx_type. | |
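The factorization contract above can be checked with a small NumPy sketch (Doolittle elimination with partial pivoting; illustrative only, not TensorFlow's kernel, and the function name is invented):

```python
import numpy as np

def lu_packed(a):
    """Return (lu, p): packed L/U factors and permutation indices such that
    eye(m)[p] @ a == L @ U, mirroring the encoding described above."""
    a = np.array(a, dtype=float)
    m = a.shape[0]
    p = list(range(m))
    for k in range(m - 1):
        pivot = k + int(np.argmax(np.abs(a[k:, k])))
        if pivot != k:
            a[[k, pivot]] = a[[pivot, k]]          # swap rows k and pivot
            p[k], p[pivot] = p[pivot], p[k]
        a[k + 1:, k] /= a[k, k]                     # multipliers -> strict lower part
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])
    return a, p

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, p = lu_packed(A)
L = np.tril(lu, -1) + np.eye(2)    # unit diagonal + strictly lower part of lu
U = np.triu(lu)                    # upper part of lu, including the diagonal
assert np.allclose(np.eye(2)[p] @ A, L @ U)
```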
doc_24176 |
Compare images generated by the test with those specified in baseline_images, which must correspond, else an ImageComparisonFailure exception will be raised. Parameters
baseline_imageslist or None
A list of strings specifying the names of the images generated by calls to Figure.savefig. If None, the test function must use the baseline_images fixture, either as a parameter or with pytest.mark.usefixtures. This value is only allowed when using pytest.
extensionsNone or list of str
The list of extensions to test, e.g. ['png', 'pdf']. If None, defaults to all supported extensions: png, pdf, and svg. When testing a single extension, it can be directly included in the names passed to baseline_images. In that case, extensions must not be set. In order to keep the size of the test suite from ballooning, we only include the svg or pdf outputs if the test is explicitly exercising a feature dependent on that backend (see also the check_figures_equal decorator for that purpose).
tolfloat, default: 0
The RMS threshold above which the test is considered failed. Due to expected small differences in floating-point calculations, on 32-bit systems an additional 0.06 is added to this threshold.
freetype_versionstr or tuple
The expected freetype version or range of versions for this test to pass.
remove_textbool
Remove the title and tick text from the figure before comparison. This is useful to make the baseline images independent of variations in text rendering between different versions of FreeType. This does not remove other, more deliberate, text, such as legends and annotations.
savefig_kwargdict
Optional arguments that are passed to the savefig method.
stylestr, dict, or list
The optional style(s) to apply to the image test. The test itself can also apply additional styles if desired. Defaults to ["classic",
"_classic_test_patch"]. | |
doc_24177 | Assert that running the interpreter with args and optional environment variables env_vars fails (rc != 0) and return a (return code,
stdout, stderr) tuple. See assert_python_ok() for more options. Changed in version 3.9: The function no longer strips whitespaces from stderr. | |
doc_24178 |
Trim values at input threshold(s). Assigns values outside boundary to boundary values. Thresholds can be singular values or array like, and in the latter case the clipping is performed element-wise in the specified axis. Parameters
lower : float or array-like, default None
Minimum threshold value. All values below this threshold will be set to it. A missing threshold (e.g. NA) will not clip the value.
upper : float or array-like, default None
Maximum threshold value. All values above this threshold will be set to it. A missing threshold (e.g. NA) will not clip the value.
axis : int or str axis name, optional
Align object with lower and upper along the given axis.
inplace : bool, default False
Whether to perform the operation in place on the data.
*args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with numpy. Returns
Series or DataFrame or None
Same type as calling object with the values outside the clip boundaries replaced or None if inplace=True. See also Series.clip
Trim values at input threshold in series. DataFrame.clip
Trim values at input threshold in dataframe. numpy.clip
Clip (limit) the values in an array. Examples
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
>>> df = pd.DataFrame(data)
>>> df
col_0 col_1
0 9 -2
1 -3 -7
2 0 6
3 -1 8
4 5 -5
Clips per column using lower and upper thresholds:
>>> df.clip(-4, 6)
col_0 col_1
0 6 -2
1 -3 -4
2 0 6
3 -1 6
4 5 -4
Clips using specific lower and upper thresholds per column element:
>>> t = pd.Series([2, -4, -1, 6, 3])
>>> t
0 2
1 -4
2 -1
3 6
4 3
dtype: int64
>>> df.clip(t, t + 4, axis=0)
col_0 col_1
0 6 2
1 -3 -4
2 0 3
3 6 8
4 5 3
Clips using specific lower threshold per column element, with missing values:
>>> t = pd.Series([2, -4, np.NaN, 6, 3])
>>> t
0 2.0
1 -4.0
2 NaN
3 6.0
4 3.0
dtype: float64
>>> df.clip(t, axis=0)
col_0 col_1
0 9 2
1 -3 -4
2 0 6
3 6 8
4 5 3 | |
doc_24179 |
Refines the dimension names of self according to names. Refining is a special case of renaming that “lifts” unnamed dimensions. A None dim can be refined to have any name; a named dim can only be refined to have the same name. Because named tensors can coexist with unnamed tensors, refining names gives a nice way to write named-tensor-aware code that works with both named and unnamed tensors. names may contain up to one Ellipsis (...). The Ellipsis is expanded greedily; it is expanded in-place to fill names to the same length as self.dim() using names from the corresponding indices of self.names. Python 2 does not support Ellipsis but one may use a string literal instead ('...'). Parameters
names (iterable of str) – The desired names of the output tensor. May contain up to one Ellipsis. Examples: >>> imgs = torch.randn(32, 3, 128, 128)
>>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W')
>>> named_imgs.names
('N', 'C', 'H', 'W')
>>> tensor = torch.randn(2, 3, 5, 7, 11)
>>> tensor = tensor.refine_names('A', ..., 'B', 'C')
>>> tensor.names
('A', None, None, 'B', 'C')
Warning The named tensor API is experimental and subject to change. | |
doc_24180 |
Open a grouping element with label s and gid (if set) as id. Only used by the SVG renderer. | |
doc_24181 |
Identity function. If p is the returned series, then p(x) == x for all values of x. Parameters
domain : {None, array_like}, optional
If given, the array must be of the form [beg, end], where beg and end are the endpoints of the domain. If None is given then the class domain is used. The default is None.
window : {None, array_like}, optional
If given, the resulting array must be of the form [beg, end], where beg and end are the endpoints of the window. If None is given then the class window is used. The default is None. Returns
new_series : series
Series representing the identity. | |
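A short usage sketch with NumPy's Legendre class (identity is a classmethod available on all of NumPy's polynomial series classes; the returned series satisfies p(x) == x regardless of the domain/window chosen):

```python
import numpy as np
from numpy.polynomial import Legendre

# p is the series such that p(x) == x for all x
p = Legendre.identity()
print(p(0.5))   # 0.5
print(p(-3.0))  # -3.0

# A custom domain/window pair still represents the identity map;
# the coefficients absorb the linear change of variables.
q = Legendre.identity(domain=[0, 10], window=[-1, 1])
print(q(7.0))   # 7.0
```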
doc_24182 | Function to be used to compare method names when sorting them in getTestCaseNames() and all the loadTestsFrom*() methods. | |
doc_24183 | See Migration guide for more details. tf.compat.v1.raw_ops.BatchMatMulV2
tf.raw_ops.BatchMatMulV2(
x, y, adj_x=False, adj_y=False, name=None
)
Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True, which are by default False. The input tensors x and y are 2-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y]. The output tensor is 2-D or higher with shape [..., r_o, c_o], where: r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y
It is computed as: output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
Note: BatchMatMulV2 supports broadcasting in the batch dimensions. More about broadcasting here.
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int16, int32, int64, complex64, complex128. 2-D or higher with shape [..., r_x, c_x].
y A Tensor. Must have the same type as x. 2-D or higher with shape [..., r_y, c_y].
adj_x An optional bool. Defaults to False. If True, adjoint the slices of x.
adj_y An optional bool. Defaults to False. If True, adjoint the slices of y.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
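The broadcasting batch-matmul semantics can be reproduced in plain NumPy (a sketch of the op's behavior, not TensorFlow code; "adjoint" means conjugate-transposing the last two axes):

```python
import numpy as np

def batch_matmul_v2(x, y, adj_x=False, adj_y=False):
    """Multiply the trailing 2-D slices of x and y, broadcasting batch dims."""
    if adj_x:
        x = np.conj(np.swapaxes(x, -1, -2))
    if adj_y:
        y = np.conj(np.swapaxes(y, -1, -2))
    # np.matmul already broadcasts the leading (batch) dimensions,
    # mirroring BatchMatMulV2's broadcasting support.
    return np.matmul(x, y)

x = np.arange(8).reshape(2, 2, 2)  # a batch of two 2x2 matrices
y = np.eye(2)                      # broadcast against the whole batch
out = batch_matmul_v2(x, y)
print(out.shape)  # (2, 2, 2)
```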
doc_24184 | tf.strings.reduce_join(
inputs, axis=None, keepdims=False, separator='', name=None
)
tf.strings.reduce_join([['abc','123'],
['def','456']]).numpy()
b'abc123def456'
tf.strings.reduce_join([['abc','123'],
['def','456']], axis=-1).numpy()
array([b'abc123', b'def456'], dtype=object)
tf.strings.reduce_join([['abc','123'],
['def','456']],
axis=-1,
separator=" ").numpy()
array([b'abc 123', b'def 456'], dtype=object)
Args
inputs A tf.string tensor.
axis Which axis to join along. The default behavior is to join all elements, producing a scalar.
keepdims If true, retains reduced dimensions with length 1.
separator A string added between each string being joined.
name A name for the operation (optional).
Returns A tf.string tensor. | |
doc_24185 | tf.experimental.numpy.multiply(
x1, x2
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.multiply. | |
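The tf.experimental.numpy variant mirrors NumPy's element-wise multiplication with broadcasting, shown here in plain NumPy:

```python
import numpy as np

a = np.array([[1, 2, 3]])   # shape (1, 3)
b = np.array([[10], [20]])  # shape (2, 1)
# Shapes (1, 3) and (2, 1) broadcast to (2, 3)
print(np.multiply(a, b))
# [[10 20 30]
#  [20 40 60]]
```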
doc_24186 |
Implements Adamax algorithm (a variant of Adam based on infinity norm). It has been proposed in Adam: A Method for Stochastic Optimization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 2e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | |
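A minimal NumPy sketch of the Adamax update rule, the infinity-norm analogue of Adam's second-moment estimate. This is an illustration following the Kingma & Ba paper, not torch's implementation; the defaults for lr and eps match the parameter list above:

```python
import numpy as np

def adamax_step(param, grad, m, u, t, lr=2e-3, betas=(0.9, 0.999), eps=1e-8):
    """One Adamax update. m: first-moment estimate, u: infinity-norm estimate."""
    beta1, beta2 = betas
    m = beta1 * m + (1 - beta1) * grad             # biased first moment
    u = np.maximum(beta2 * u, np.abs(grad) + eps)  # infinity norm replaces v_t
    # Bias correction is only needed for the first moment
    param = param - (lr / (1 - beta1 ** t)) * m / u
    return param, m, u

p = np.array([1.0])
m = np.zeros_like(p)
u = np.zeros_like(p)
p, m, u = adamax_step(p, np.array([0.5]), m, u, t=1)
```

With a single gradient of 0.5 at t=1, the bias-corrected step is lr * g/|g| = 0.002, so the parameter moves from 1.0 to roughly 0.998.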
doc_24187 | This key is not used in versions of Windows after 98. | |
doc_24188 |
Multiply one Legendre series by another. Returns the product of two Legendre series c1 * c2. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series P_0 + 2*P_1 + 3*P_2. Parameters
c1, c2 : array_like
1-D arrays of Legendre series coefficients ordered from low to high. Returns
out : ndarray
Of Legendre series coefficients representing their product. See also
legadd, legsub, legmulx, legdiv, legpow
Notes In general, the (polynomial) product of two C-series results in terms that are not in the Legendre polynomial basis set. Thus, to express the product as a Legendre series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. Examples >>> from numpy.polynomial import legendre as L
>>> c1 = (1,2,3)
>>> c2 = (3,2)
>>> L.legmul(c1,c2) # multiplication requires "reprojection"
array([ 4.33333333, 10.4 , 11.66666667, 3.6 ]) # may vary | |
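The "reprojection" can be checked by converting both factors to the ordinary power basis, multiplying there, and comparing against the Legendre-basis product converted back (uses NumPy's leg2poly/polymul helpers):

```python
import numpy as np
from numpy.polynomial import legendre as L
from numpy.polynomial import polynomial as P

c1, c2 = (1, 2, 3), (3, 2)
prod_leg = L.legmul(c1, c2)

# Same product computed entirely in the power basis
prod_pow = P.polymul(L.leg2poly(c1), L.leg2poly(c2))

# The two routes agree once expressed in the same basis
print(np.allclose(L.leg2poly(prod_leg), prod_pow))  # True
```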
doc_24189 | Exception raised for errors that are related to the database. | |
doc_24190 | Return a repr of dict with keys sorted. | |
doc_24191 | This method allows you to compare two Charset instances for equality. | |
doc_24192 | See Migration guide for more details. tf.compat.v1.raw_ops.Pad
tf.raw_ops.Pad(
input, paddings, name=None
)
This operation pads an input with zeros according to the paddings you specify. paddings is an integer tensor with shape [Dn, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many zeros to add before the contents of input in that dimension, and paddings[D, 1] indicates how many zeros to add after the contents of input in that dimension. The padded size of each dimension D of the output is: paddings(D, 0) + input.dim_size(D) + paddings(D, 1). For example: # 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# rank of 't' is 2
pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
[0, 0, 1, 1, 0, 0]
[0, 0, 2, 2, 0, 0]
[0, 0, 0, 0, 0, 0]]
Args
input A Tensor.
paddings A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
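The same zero-padding can be expressed with np.pad, a NumPy sketch of the op's semantics using the example above:

```python
import numpy as np

t = np.array([[1, 1], [2, 2]])
paddings = [(1, 1), (2, 2)]  # (before, after) zeros per dimension
out = np.pad(t, paddings, mode="constant", constant_values=0)
print(out)
# [[0 0 0 0 0 0]
#  [0 0 1 1 0 0]
#  [0 0 2 2 0 0]
#  [0 0 0 0 0 0]]
```

Each output dimension has size before + dim_size + after, here (1+2+1, 2+2+2) = (4, 6).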
doc_24193 | tf.compat.v1.lite.toco_convert(
input_data, input_tensors, output_tensors, *args, **kwargs
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use lite.TFLiteConverter instead. Typically this function is used to convert from TensorFlow GraphDef to TFLite. Conversion can be customized by providing arguments that are forwarded to build_toco_convert_protos (see documentation for details). This function has been deprecated. Please use lite.TFLiteConverter instead.
Args
input_data Input data (i.e. often sess.graph_def),
input_tensors List of input tensors. Type and shape are computed using foo.shape and foo.dtype.
output_tensors List of output tensors (only .name is used from this).
*args See build_toco_convert_protos,
**kwargs See build_toco_convert_protos.
Returns The converted data. For example if TFLite was the destination, then this will be a tflite flatbuffer in a bytes array.
Raises Defined in build_toco_convert_protos. | |
doc_24194 |
The transposed array. Same as self.transpose(). See also transpose
Examples >>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1., 2.],
[ 3., 4.]])
>>> x.T
array([[ 1., 3.],
[ 2., 4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1., 2., 3., 4.])
>>> x.T
array([ 1., 2., 3., 4.]) | |
doc_24195 | See Migration guide for more details. tf.compat.v1.raw_ops.OrderedMapUnstage
tf.raw_ops.OrderedMapUnstage(
key, indices, dtypes, capacity=0, memory_limit=0, container='',
shared_name='', name=None
)
Removes and returns the values associated with the key from the underlying container. If the underlying container does not contain this key, the op will block until it does.
Args
key A Tensor of type int64.
indices A Tensor of type int32.
dtypes A list of tf.DTypes that has length >= 1.
capacity An optional int that is >= 0. Defaults to 0.
memory_limit An optional int that is >= 0. Defaults to 0.
container An optional string. Defaults to "".
shared_name An optional string. Defaults to "".
name A name for the operation (optional).
Returns A list of Tensor objects of type dtypes. | |
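A rough single-threaded Python sketch of the unstage semantics, using an ordinary dict to stand in for the underlying container (the real op blocks until the key arrives; this illustration simply pops the values for a key):

```python
staging_area = {}  # key -> tuple of values; stands in for the container

def stage(key, values):
    """Place a tuple of values into the container under key."""
    staging_area[key] = tuple(values)

def unstage(key):
    """Remove and return the values associated with key.

    The real op would block until the key is present; here a missing
    key simply raises KeyError.
    """
    return staging_area.pop(key)

stage(7, ["tensor_a", "tensor_b"])
vals = unstage(7)
print(vals)               # ('tensor_a', 'tensor_b')
print(7 in staging_area)  # False: unstaging removes the entry
```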
doc_24196 | See Migration guide for more details. tf.compat.v1.raw_ops.BatchToSpaceND
tf.raw_ops.BatchToSpaceND(
input, block_shape, crops, name=None
)
This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.
Args
input A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
block_shape A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1.
crops A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0. crops[i] = [crop_start, crop_end] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1].
This operation is equivalent to the following steps:
1. Reshape input to reshaped of shape: [block_shape[0], ..., block_shape[M-1], batch / prod(block_shape), input_shape[1], ..., input_shape[N-1]]
2. Permute dimensions of reshaped to produce permuted of shape [batch / prod(block_shape), input_shape[1], block_shape[0], ..., input_shape[M], block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]
3. Reshape permuted to produce reshaped_permuted of shape [batch / prod(block_shape), input_shape[1] * block_shape[0], ..., input_shape[M] * block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]
4. Crop the start and end of dimensions [1, ..., M] of reshaped_permuted according to crops to produce the output of shape: [batch / prod(block_shape), input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], ..., input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1], ..., input_shape[N-1]]
Some examples: (1) For the following input of shape [4, 1, 1, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
The output tensor has shape [1, 2, 2, 1] and value: x = [[[[1], [2]], [[3], [4]]]]
(2) For the following input of shape [4, 1, 1, 3], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]
The output tensor has shape [1, 2, 2, 3] and value: x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
(3) For the following input of shape [4, 2, 2, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: x = [[[[1], [3]], [[9], [11]]],
[[[2], [4]], [[10], [12]]],
[[[5], [7]], [[13], [15]]],
[[[6], [8]], [[14], [16]]]]
The output tensor has shape [1, 4, 4, 1] and value: x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]],
[[9], [10], [11], [12]],
[[13], [14], [15], [16]]]]
(4) For the following input of shape [8, 1, 3, 1], block_shape = [2, 2], and crops = [[0, 0], [2, 0]]: x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
[[[0], [2], [4]]], [[[0], [10], [12]]],
[[[0], [5], [7]]], [[[0], [13], [15]]],
[[[0], [6], [8]]], [[[0], [14], [16]]]]
The output tensor has shape [2, 2, 4, 1] and value: x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]]],
[[[9], [10], [11], [12]],
[[13], [14], [15], [16]]]]
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
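The reshape/permute/merge/crop steps described above can be sketched directly in NumPy (an illustration of the documented algorithm, not TensorFlow's kernel):

```python
import numpy as np

def batch_to_space_nd(x, block_shape, crops):
    """Follow the documented steps: reshape, permute, merge, crop."""
    M = len(block_shape)
    batch = x.shape[0]
    prod_block = int(np.prod(block_shape))
    # Step 1: split the batch dimension into the block dimensions
    reshaped = x.reshape(
        list(block_shape) + [batch // prod_block] + list(x.shape[1:]))
    # Step 2: interleave each block dimension after its spatial dimension
    perm = [M]
    for i in range(M):
        perm += [M + 1 + i, i]
    perm += list(range(2 * M + 1, reshaped.ndim))
    permuted = reshaped.transpose(perm)
    # Step 3: merge each (spatial, block) pair into one dimension
    new_shape = [batch // prod_block]
    new_shape += [x.shape[1 + i] * block_shape[i] for i in range(M)]
    new_shape += list(x.shape[1 + M:])
    merged = permuted.reshape(new_shape)
    # Step 4: crop the start/end of each spatial dimension
    slices = [slice(None)]
    slices += [slice(crops[i][0], merged.shape[1 + i] - crops[i][1])
               for i in range(M)]
    return merged[tuple(slices)]

# Example (1) from above: shape [4, 1, 1, 1] -> [1, 2, 2, 1]
x = np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])
out = batch_to_space_nd(x, [2, 2], [[0, 0], [0, 0]])
print(out.shape)  # (1, 2, 2, 1)
```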
doc_24197 |
Alias for get_linestyle. | |
doc_24198 | Raised when a specified range of text does not fit into a string. This is not known to be used in the Python DOM implementations, but may be received from DOM implementations not written in Python. | |
doc_24199 |
Predict multi-output variable using a model
trained for each target variable. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Data. Returns
y : {array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets predicted across multiple predictors. Note: Separate models are generated for each predictor. |
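The "one model per target" strategy can be sketched with independent per-column least-squares fits. This is a NumPy illustration of the idea, not scikit-learn's estimator; the class name PerTargetLinear is invented for the example:

```python
import numpy as np

class PerTargetLinear:
    """Fits one independent linear model per output column."""

    def fit(self, X, Y):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add intercept column
        # lstsq with a 2-D target solves one least-squares problem per column,
        # i.e. one model per target variable.
        self.coefs_, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return Xb @ self.coefs_

X = np.array([[0.0], [1.0], [2.0], [3.0]])
Y = np.column_stack([2 * X[:, 0] + 1, -X[:, 0] + 4])  # two linear targets
pred = PerTargetLinear().fit(X, Y).predict(X)
```

Because each target here is exactly linear in X, the per-target fits recover the training targets exactly.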