doc_27900 |
Transform data X according to the fitted model. Changed in version 0.18: doc_topic_distr is now normalized Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix. Returns
doc_topic_distr : ndarray of shape (n_samples, n_components)
Document topic distribution for X. | |
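A short usage sketch (not part of the original docstring) may help; it assumes the estimator is sklearn.decomposition.LatentDirichletAllocation, since the excerpt does not name the fitted model:
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["apples and oranges", "cats and dogs", "apples and dogs"]
X = CountVectorizer().fit_transform(docs)  # document-word matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topic_distr = lda.transform(X)  # shape (n_samples, n_components)
print(doc_topic_distr.sum(axis=1))  # rows are normalized and sum to 1.0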
doc_27901 | Return the daylight saving time (DST) adjustment, as a timedelta object or None if DST information isn’t known. Return timedelta(0) if DST is not in effect. If DST is in effect, return the offset as a timedelta object (see utcoffset() for details). Note that DST offset, if applicable, has already been added to the UTC offset returned by utcoffset(), so there’s no need to consult dst() unless you’re interested in obtaining DST info separately. For example, datetime.timetuple() calls its tzinfo attribute’s dst() method to determine how the tm_isdst flag should be set, and tzinfo.fromutc() calls dst() to account for DST changes when crossing time zones. An instance tz of a tzinfo subclass that models both standard and daylight times must be consistent in this sense: tz.utcoffset(dt) - tz.dst(dt) must return the same result for every datetime dt with dt.tzinfo ==
tz For sane tzinfo subclasses, this expression yields the time zone’s “standard offset”, which should not depend on the date or the time, but only on geographic location. The implementation of datetime.astimezone() relies on this, but cannot detect violations; it’s the programmer’s responsibility to ensure it. If a tzinfo subclass cannot guarantee this, it may be able to override the default implementation of tzinfo.fromutc() to work correctly with astimezone() regardless. Most implementations of dst() will probably look like one of these two: def dst(self, dt):
# a fixed-offset class: doesn't account for DST
return timedelta(0)
or:
def dst(self, dt):
# Code to set dston and dstoff to the time zone's DST
# transition times based on the input dt.year, and expressed
# in standard local time.
if dston <= dt.replace(tzinfo=None) < dstoff:
return timedelta(hours=1)
else:
return timedelta(0)
The default implementation of dst() raises NotImplementedError. Changed in version 3.7: The DST offset is not restricted to a whole number of minutes. | |
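A runnable sketch of the fixed-offset pattern above (the EET class name is illustrative, not from the docs):
from datetime import datetime, timedelta, tzinfo

class EET(tzinfo):  # hypothetical fixed-offset zone, UTC+2, no DST
    def utcoffset(self, dt):
        return timedelta(hours=2)
    def dst(self, dt):
        return timedelta(0)  # fixed offset: DST never in effect
    def tzname(self, dt):
        return "EET"

dt = datetime(2021, 6, 1, tzinfo=EET())
print(dt.utcoffset(), dt.dst())  # 2:00:00 0:00:00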
doc_27902 | The type of unbound class methods of some built-in data types such as dict.__dict__['fromkeys']. New in version 3.7. | |
doc_27903 | Interact with process: send data to stdin (if input is not None); read data from stdout and stderr, until EOF is reached; wait for process to terminate. The optional input argument is the data (bytes object) that will be sent to the child process. Return a tuple (stdout_data, stderr_data). If either BrokenPipeError or ConnectionResetError exception is raised when writing input into stdin, the exception is ignored. This condition occurs when the process exits before all data are written into stdin. If it is desired to send data to the process’ stdin, the process needs to be created with stdin=PIPE. Similarly, to get anything other than None in the result tuple, the process has to be created with stdout=PIPE and/or stderr=PIPE arguments. Note that the data read is buffered in memory, so do not use this method if the data size is large or unlimited. | |
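A minimal sketch of the pattern described above; cat is an arbitrary Unix child program chosen for illustration:
import subprocess

proc = subprocess.Popen(
    ["cat"],  # any program that echoes stdin to stdout
    stdin=subprocess.PIPE,  # required to send data via communicate(input=...)
    stdout=subprocess.PIPE,  # required for stdout_data to be non-None
    stderr=subprocess.PIPE,  # required for stderr_data to be non-None
)
stdout_data, stderr_data = proc.communicate(input=b"hello\n")
print(stdout_data)  # b'hello\n'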
doc_27904 | accessor for ‘no-store’ | |
doc_27905 | exception lzma.LZMAError
This exception is raised when an error occurs during compression or decompression, or while initializing the compressor/decompressor state.
Reading and writing compressed files
lzma.open(filename, mode="rb", *, format=None, check=-1, preset=None, filters=None, encoding=None, errors=None, newline=None)
Open an LZMA-compressed file in binary or text mode, returning a file object. The filename argument can be either an actual file name (given as a str, bytes or path-like object), in which case the named file is opened, or it can be an existing file object to read from or write to. The mode argument can be any of "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for binary mode, or "rt", "wt", "xt", or "at" for text mode. The default is "rb". When opening a file for reading, the format and filters arguments have the same meanings as for LZMADecompressor. In this case, the check and preset arguments should not be used. When opening a file for writing, the format, check, preset and filters arguments have the same meanings as for LZMACompressor. For binary mode, this function is equivalent to the LZMAFile constructor: LZMAFile(filename, mode, ...). In this case, the encoding, errors and newline arguments must not be provided. For text mode, a LZMAFile object is created, and wrapped in an io.TextIOWrapper instance with the specified encoding, error handling behavior, and line ending(s). Changed in version 3.4: Added support for the "x", "xb" and "xt" modes. Changed in version 3.6: Accepts a path-like object.
class lzma.LZMAFile(filename=None, mode="r", *, format=None, check=-1, preset=None, filters=None)
Open an LZMA-compressed file in binary mode. An LZMAFile can wrap an already-open file object, or operate directly on a named file. The filename argument specifies either the file object to wrap, or the name of the file to open (as a str, bytes or path-like object). When wrapping an existing file object, the wrapped file will not be closed when the LZMAFile is closed. The mode argument can be either "r" for reading (default), "w" for overwriting, "x" for exclusive creation, or "a" for appending. These can equivalently be given as "rb", "wb", "xb" and "ab" respectively. If filename is a file object (rather than an actual file name), a mode of "w" does not truncate the file, and is instead equivalent to "a". When opening a file for reading, the input file may be the concatenation of multiple separate compressed streams. These are transparently decoded as a single logical stream. When opening a file for reading, the format and filters arguments have the same meanings as for LZMADecompressor. In this case, the check and preset arguments should not be used. When opening a file for writing, the format, check, preset and filters arguments have the same meanings as for LZMACompressor. LZMAFile supports all the members specified by io.BufferedIOBase, except for detach() and truncate(). Iteration and the with statement are supported. The following method is also provided:
peek(size=-1)
Return buffered data without advancing the file position. At least one byte of data will be returned, unless EOF has been reached. The exact number of bytes returned is unspecified (the size argument is ignored). Note While calling peek() does not change the file position of the LZMAFile, it may change the position of the underlying file object (e.g. if the LZMAFile was constructed by passing a file object for filename).
Changed in version 3.4: Added support for the "x" and "xb" modes. Changed in version 3.5: The read() method now accepts an argument of None. Changed in version 3.6: Accepts a path-like object.
Compressing and decompressing data in memory
class lzma.LZMACompressor(format=FORMAT_XZ, check=-1, preset=None, filters=None)
Create a compressor object, which can be used to compress data incrementally. For a more convenient way of compressing a single chunk of data, see compress(). The format argument specifies what container format should be used. Possible values are:
FORMAT_XZ: The .xz container format.
This is the default format.
FORMAT_ALONE: The legacy .lzma container format.
This format is more limited than .xz – it does not support integrity checks or multiple filters.
FORMAT_RAW: A raw data stream, not using any container format.
This format specifier does not support integrity checks, and requires that you always specify a custom filter chain (for both compression and decompression). Additionally, data compressed in this manner cannot be decompressed using FORMAT_AUTO (see LZMADecompressor). The check argument specifies the type of integrity check to include in the compressed data. This check is used when decompressing, to ensure that the data has not been corrupted. Possible values are:
CHECK_NONE: No integrity check. This is the default (and the only acceptable value) for FORMAT_ALONE and FORMAT_RAW.
CHECK_CRC32: 32-bit Cyclic Redundancy Check.
CHECK_CRC64: 64-bit Cyclic Redundancy Check. This is the default for FORMAT_XZ.
CHECK_SHA256: 256-bit Secure Hash Algorithm. If the specified check is not supported, an LZMAError is raised. The compression settings can be specified either as a preset compression level (with the preset argument), or in detail as a custom filter chain (with the filters argument). The preset argument (if provided) should be an integer between 0 and 9 (inclusive), optionally OR-ed with the constant PRESET_EXTREME. If neither preset nor filters are given, the default behavior is to use PRESET_DEFAULT (preset level 6). Higher presets produce smaller output, but make the compression process slower. Note In addition to being more CPU-intensive, compression with higher presets also requires much more memory (and produces output that needs more memory to decompress). With preset 9 for example, the overhead for an LZMACompressor object can be as high as 800 MiB. For this reason, it is generally best to stick with the default preset. The filters argument (if provided) should be a filter chain specifier. See Specifying custom filter chains for details.
compress(data)
Compress data (a bytes object), returning a bytes object containing compressed data for at least part of the input. Some of data may be buffered internally, for use in later calls to compress() and flush(). The returned data should be concatenated with the output of any previous calls to compress().
flush()
Finish the compression process, returning a bytes object containing any data stored in the compressor’s internal buffers. The compressor cannot be used after this method has been called.
class lzma.LZMADecompressor(format=FORMAT_AUTO, memlimit=None, filters=None)
Create a decompressor object, which can be used to decompress data incrementally. For a more convenient way of decompressing an entire compressed stream at once, see decompress(). The format argument specifies the container format that should be used. The default is FORMAT_AUTO, which can decompress both .xz and .lzma files. Other possible values are FORMAT_XZ, FORMAT_ALONE, and FORMAT_RAW. The memlimit argument specifies a limit (in bytes) on the amount of memory that the decompressor can use. When this argument is used, decompression will fail with an LZMAError if it is not possible to decompress the input within the given memory limit. The filters argument specifies the filter chain that was used to create the stream being decompressed. This argument is required if format is FORMAT_RAW, but should not be used for other formats. See Specifying custom filter chains for more information about filter chains. Note This class does not transparently handle inputs containing multiple compressed streams, unlike decompress() and LZMAFile. To decompress a multi-stream input with LZMADecompressor, you must create a new decompressor for each stream.
decompress(data, max_length=-1)
Decompress data (a bytes-like object), returning uncompressed data as bytes. Some of data may be buffered internally, for use in later calls to decompress(). The returned data should be concatenated with the output of any previous calls to decompress(). If max_length is nonnegative, returns at most max_length bytes of decompressed data. If this limit is reached and further output can be produced, the needs_input attribute will be set to False. In this case, the next call to decompress() may provide data as b'' to obtain more of the output. If all of the input data was decompressed and returned (either because this was less than max_length bytes, or because max_length was negative), the needs_input attribute will be set to True. Attempting to decompress data after the end of stream is reached raises an EOFError. Any data found after the end of the stream is ignored and saved in the unused_data attribute. Changed in version 3.5: Added the max_length parameter.
check
The ID of the integrity check used by the input stream. This may be CHECK_UNKNOWN until enough of the input has been decoded to determine what integrity check it uses.
eof
True if the end-of-stream marker has been reached.
unused_data
Data found after the end of the compressed stream. Before the end of the stream is reached, this will be b"".
needs_input
False if the decompress() method can provide more decompressed data before requiring new uncompressed input. New in version 3.5.
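A short sketch of memory-bounded incremental decompression, using decompress(), needs_input and eof as described above:
import lzma

compressed = lzma.compress(b"x" * 100000)
lzd = lzma.LZMADecompressor()
chunks = []
while not lzd.eof:
    # Feed input only when the decompressor asks for it; otherwise pass
    # b"" to drain output buffered because of max_length.
    data = compressed if lzd.needs_input else b""
    compressed = b""  # the whole input is handed over on the first call
    chunks.append(lzd.decompress(data, max_length=16384))
result = b"".join(chunks)
assert result == b"x" * 100000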
lzma.compress(data, format=FORMAT_XZ, check=-1, preset=None, filters=None)
Compress data (a bytes object), returning the compressed data as a bytes object. See LZMACompressor above for a description of the format, check, preset and filters arguments.
lzma.decompress(data, format=FORMAT_AUTO, memlimit=None, filters=None)
Decompress data (a bytes object), returning the uncompressed data as a bytes object. If data is the concatenation of multiple distinct compressed streams, decompress all of these streams, and return the concatenation of the results. See LZMADecompressor above for a description of the format, memlimit and filters arguments.
Miscellaneous
lzma.is_check_supported(check)
Return True if the given integrity check is supported on this system. CHECK_NONE and CHECK_CRC32 are always supported. CHECK_CRC64 and CHECK_SHA256 may be unavailable if you are using a version of liblzma that was compiled with a limited feature set.
Specifying custom filter chains A filter chain specifier is a sequence of dictionaries, where each dictionary contains the ID and options for a single filter. Each dictionary must contain the key "id", and may contain additional keys to specify filter-dependent options. Valid filter IDs are as follows:
Compression filters:
FILTER_LZMA1 (for use with FORMAT_ALONE)
FILTER_LZMA2 (for use with FORMAT_XZ and FORMAT_RAW)
Delta filter:
FILTER_DELTA
Branch-Call-Jump (BCJ) filters:
FILTER_X86 FILTER_IA64 FILTER_ARM FILTER_ARMTHUMB FILTER_POWERPC FILTER_SPARC A filter chain can consist of up to 4 filters, and cannot be empty. The last filter in the chain must be a compression filter, and any other filters must be delta or BCJ filters. Compression filters support the following options (specified as additional entries in the dictionary representing the filter):
preset: A compression preset to use as a source of default values for options that are not specified explicitly.
dict_size: Dictionary size in bytes. This should be between 4 KiB and 1.5 GiB (inclusive).
lc: Number of literal context bits.
lp: Number of literal position bits. The sum lc + lp must be at most 4.
pb: Number of position bits; must be at most 4.
mode: MODE_FAST or MODE_NORMAL.
nice_len: What should be considered a “nice length” for a match. This should be 273 or less.
mf: What match finder to use – MF_HC3, MF_HC4, MF_BT2, MF_BT3, or MF_BT4.
depth: Maximum search depth used by match finder. 0 (default) means to select automatically based on other filter options. The delta filter stores the differences between bytes, producing more repetitive input for the compressor in certain circumstances. It supports one option, dist. This indicates the distance between bytes to be subtracted. The default is 1, i.e. take the differences between adjacent bytes. The BCJ filters are intended to be applied to machine code. They convert relative branches, calls and jumps in the code to use absolute addressing, with the aim of increasing the redundancy that can be exploited by the compressor. These filters support one option, start_offset. This specifies the address that should be mapped to the beginning of the input data. The default is 0.
Examples
Reading in a compressed file:
import lzma
with lzma.open("file.xz") as f:
file_content = f.read()
Creating a compressed file:
import lzma
data = b"Insert Data Here"
with lzma.open("file.xz", "w") as f:
f.write(data)
Compressing data in memory:
import lzma
data_in = b"Insert Data Here"
data_out = lzma.compress(data_in)
Incremental compression:
import lzma
lzc = lzma.LZMACompressor()
out1 = lzc.compress(b"Some data\n")
out2 = lzc.compress(b"Another piece of data\n")
out3 = lzc.compress(b"Even more data\n")
out4 = lzc.flush()
# Concatenate all the partial results:
result = b"".join([out1, out2, out3, out4])
Writing compressed data to an already-open file:
import lzma
with open("file.xz", "wb") as f:
f.write(b"This data will not be compressed\n")
with lzma.open(f, "w") as lzf:
lzf.write(b"This *will* be compressed\n")
f.write(b"Not compressed\n")
Creating a compressed file using a custom filter chain:
import lzma
my_filters = [
{"id": lzma.FILTER_DELTA, "dist": 5},
{"id": lzma.FILTER_LZMA2, "preset": 7 | lzma.PRESET_EXTREME},
]
with lzma.open("file.xz", "w", filters=my_filters) as f:
f.write(b"blah blah blah") | |
doc_27906 | Return POSIX timestamp corresponding to the datetime instance. The return value is a float similar to that returned by time.time(). Naive datetime instances are assumed to represent local time and this method relies on the platform C mktime() function to perform the conversion. Since datetime supports a wider range of values than mktime() on many platforms, this method may raise OverflowError for times far in the past or far in the future. For aware datetime instances, the return value is computed as:
(dt - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds()
New in version 3.3. Changed in version 3.6: The timestamp() method uses the fold attribute to disambiguate the times during a repeated interval. Note There is no method to obtain the POSIX timestamp directly from a naive datetime instance representing UTC time. If your application uses this convention and your system timezone is not set to UTC, you can obtain the POSIX timestamp by supplying tzinfo=timezone.utc:
timestamp = dt.replace(tzinfo=timezone.utc).timestamp()
or by calculating the timestamp directly:
timestamp = (dt - datetime(1970, 1, 1)) / timedelta(seconds=1) | |
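A short sketch of the aware vs. naive-UTC cases described above:
from datetime import datetime, timezone

aware = datetime(2021, 1, 1, tzinfo=timezone.utc)
print(aware.timestamp())  # 1609459200.0

naive_utc = datetime(2021, 1, 1)  # naive, but known to represent UTC
print(naive_utc.replace(tzinfo=timezone.utc).timestamp())  # 1609459200.0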
doc_27907 |
Return the list of file formats that compare_images can compare on this system. Returns
list of str
E.g. ['png', 'pdf', 'svg', 'eps']. | |
doc_27908 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance. | |
doc_27909 | Returns a tuple (lhs_string, lhs_params), as returned by compiler.compile(lhs). This method can be overridden to tune how the lhs is processed. compiler is an SQLCompiler object, to be used like compiler.compile(lhs) for compiling lhs. The connection can be used for compiling vendor specific SQL. If lhs is not None, use it as the processed lhs instead of self.lhs. | |
doc_27910 |
Predict the class labels for the provided data. Parameters
X : array-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’
Test samples. Returns
y : ndarray of shape (n_queries,) or (n_queries, n_outputs)
Class labels for each data sample. | |
doc_27911 | Open for binary reading the resource within package. package is either a name or a module object which conforms to the Package requirements. resource is the name of the resource to open within package; it may not contain path separators and it may not have sub-resources (i.e. it cannot be a directory). This function returns a typing.BinaryIO instance, a binary I/O stream open for reading. | |
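A hedged sketch, assuming this is importlib.resources.open_binary (the excerpt does not name the module); 'mypackage' and 'data.bin' are hypothetical:
from importlib import resources

with resources.open_binary("mypackage", "data.bin") as fh:  # hypothetical package/resource
    payload = fh.read()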
doc_27912 |
Convert the PeriodArray to the specified frequency freq. Equivalent to applying pandas.Period.asfreq() with the given arguments to each Period in this PeriodArray. Parameters
freq : str
A frequency.
how : str {‘E’, ‘S’}, default ‘E’
Whether the elements should be aligned to the end or start within a period. ‘E’, ‘END’, or ‘FINISH’ for end, ‘S’, ‘START’, or ‘BEGIN’ for start. January 31st (‘END’) vs. January 1st (‘START’) for example. Returns
PeriodArray
The transformed PeriodArray with the new frequency. See also pandas.arrays.PeriodArray.asfreq
Convert each Period in a PeriodArray to the given frequency. Period.asfreq
Convert a Period object to the given frequency. Examples
>>> pidx = pd.period_range('2010-01-01', '2015-01-01', freq='A')
>>> pidx
PeriodIndex(['2010', '2011', '2012', '2013', '2014', '2015'],
dtype='period[A-DEC]')
>>> pidx.asfreq('M')
PeriodIndex(['2010-12', '2011-12', '2012-12', '2013-12', '2014-12',
'2015-12'], dtype='period[M]')
>>> pidx.asfreq('M', how='S')
PeriodIndex(['2010-01', '2011-01', '2012-01', '2013-01', '2014-01',
'2015-01'], dtype='period[M]') | |
doc_27913 | MultiPoint objects may be instantiated by passing in Point objects as arguments, or a single sequence of Point objects:
>>> mp = MultiPoint(Point(0, 0), Point(1, 1))
>>> mp = MultiPoint( (Point(0, 0), Point(1, 1)) ) | |
doc_27914 |
The day of the week with Monday=0, Sunday=6. Return the day of the week. It is assumed the week starts on Monday, which is denoted by 0 and ends on Sunday which is denoted by 6. This method is available on both Series with datetime values (using the dt accessor) or DatetimeIndex. Returns
Series or Index
Containing integers indicating the day number. See also Series.dt.dayofweek
Alias. Series.dt.weekday
Alias. Series.dt.day_name
Returns the name of the day of the week. Examples
>>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series()
>>> s.dt.dayofweek
2016-12-31 5
2017-01-01 6
2017-01-02 0
2017-01-03 1
2017-01-04 2
2017-01-05 3
2017-01-06 4
2017-01-07 5
2017-01-08 6
Freq: D, dtype: int64 | |
doc_27915 | By default, when reading from /dev/random, getrandom() blocks if no random bytes are available, and when reading from /dev/urandom, it blocks if the entropy pool has not yet been initialized. If the GRND_NONBLOCK flag is set, then getrandom() does not block in these cases, but instead immediately raises BlockingIOError. New in version 3.6. | |
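A sketch of the flag described above (os.getrandom is only available on platforms with the getrandom() syscall, e.g. Linux):
import os

try:
    data = os.getrandom(16, os.GRND_NONBLOCK)  # raise instead of blocking
except BlockingIOError:
    data = None  # entropy pool not initialized yet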
doc_27916 | Remove all items from the dictionary. | |
doc_27917 | operator.__ifloordiv__(a, b)
a = ifloordiv(a, b) is equivalent to a //= b. | |
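Equivalent forms, per the description above:
from operator import ifloordiv

a = 7
a = ifloordiv(a, 2)  # same as: a //= 2
print(a)  # 3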
doc_27918 |
Return an Index of values for requested level. This is primarily useful to get an individual level of values from a MultiIndex, but is provided on Index as well for compatibility. Parameters
level : int or str
It is either the integer position or the name of the level. Returns
Index
Calling object, as there is only one level in the Index. See also MultiIndex.get_level_values
Get values for a level of a MultiIndex. Notes For Index, level should be 0, since there are no multiple levels. Examples
>>> idx = pd.Index(list('abc'))
>>> idx
Index(['a', 'b', 'c'], dtype='object')
Get level values by supplying level as integer:
>>> idx.get_level_values(0)
Index(['a', 'b', 'c'], dtype='object') | |
doc_27919 | Gets or sets the blue value of the Color. b -> int The blue value of the Color. | |
doc_27920 | See Migration guide for more details. tf.compat.v1.raw_ops.TruncateMod
tf.raw_ops.TruncateMod(
x, y, name=None
)
The result here is consistent with a truncating divide. E.g. truncate(x / y) * y + truncate_mod(x, y) = x.
Note: TruncateMod supports broadcasting.
Args
x A Tensor. Must be one of the following types: int32, int64, bfloat16, half, float32, float64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
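A small sketch of the truncating semantics; note the result takes the sign of x, unlike Python's floor-style %:
import tensorflow as tf

x = tf.constant([7, -7], dtype=tf.int32)
y = tf.constant([5, 5], dtype=tf.int32)
print(tf.raw_ops.TruncateMod(x=x, y=y))  # [ 2 -2]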
doc_27921 | Concrete class for urldefrag() results containing bytes data. The decode() method returns a DefragResult instance. New in version 3.2. | |
doc_27922 |
Feature-wise transformation of the data. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the features axis. If a sparse matrix is provided, it will be converted into a sparse csc_matrix. Additionally, the sparse matrix needs to be nonnegative if ignore_implicit_zeros is False. Returns
Xt : {ndarray, sparse matrix} of shape (n_samples, n_features)
The projected data. | |
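A hedged usage sketch, assuming the estimator is sklearn.preprocessing.QuantileTransformer (the ignore_implicit_zeros parameter mentioned above belongs to it):
import numpy as np
from sklearn.preprocessing import QuantileTransformer

X = np.random.RandomState(0).rand(100, 3)
qt = QuantileTransformer(n_quantiles=50).fit(X)
Xt = qt.transform(X)  # same shape; features scaled along the axis described above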
doc_27923 | assertIsNotNone(expr, msg=None)
Test that expr is (or is not) None. New in version 3.1. | |
doc_27924 | The height of the source in pixels (Y-axis).
>>> GDALRaster({'width': 10, 'height': 20, 'srid': 4326}).height
20 | |
doc_27925 | Django view for the page that shows the modification history for a given model instance. | |
doc_27926 | See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedMatMul
tf.raw_ops.QuantizedMatMul(
a, b, min_a, max_a, min_b, max_b, Toutput=tf.dtypes.qint32, transpose_a=False,
transpose_b=False, Tactivation=tf.dtypes.quint8, name=None
)
The inputs must be two-dimensional matrices and the inner dimension of a (after being transposed if transpose_a is non-zero) must match the outer dimension of b (after being transposed if transpose_b is non-zero).
Args
a A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. Must be a two-dimensional tensor.
b A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. Must be a two-dimensional tensor.
min_a A Tensor of type float32. The float value that the lowest quantized a value represents.
max_a A Tensor of type float32. The float value that the highest quantized a value represents.
min_b A Tensor of type float32. The float value that the lowest quantized b value represents.
max_b A Tensor of type float32. The float value that the highest quantized b value represents.
Toutput An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.qint32.
transpose_a An optional bool. Defaults to False. If true, a is transposed before multiplication.
transpose_b An optional bool. Defaults to False. If true, b is transposed before multiplication.
Tactivation An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.quint8. The type of output produced by activation function following this operation.
name A name for the operation (optional).
Returns A tuple of Tensor objects (out, min_out, max_out). out A Tensor of type Toutput.
min_out A Tensor of type float32.
max_out A Tensor of type float32. | |
doc_27927 | Specifies the type of each element in the array. | |
doc_27928 | create an object to help track time Clock() -> Clock Creates a new Clock object that can be used to track an amount of time. The clock also provides several functions to help control a game's framerate. tick()
update the clock tick(framerate=0) -> milliseconds This method should be called once per frame. It will compute how many milliseconds have passed since the previous call. If you pass the optional framerate argument the function will delay to keep the game running slower than the given ticks per second. This can be used to help limit the runtime speed of a game. By calling Clock.tick(40) once per frame, the program will never run at more than 40 frames per second. Note that this function uses SDL_Delay function which is not accurate on every platform, but does not use much CPU. Use tick_busy_loop if you want an accurate timer, and don't mind chewing CPU.
tick_busy_loop()
update the clock tick_busy_loop(framerate=0) -> milliseconds This method should be called once per frame. It will compute how many milliseconds have passed since the previous call. If you pass the optional framerate argument the function will delay to keep the game running slower than the given ticks per second. This can be used to help limit the runtime speed of a game. By calling Clock.tick_busy_loop(40) once per frame, the program will never run at more than 40 frames per second. Note that this function uses pygame.time.delay(), which uses lots of CPU in a busy loop to make sure that timing is more accurate. New in pygame 1.8.
get_time()
time used in the previous tick get_time() -> milliseconds The number of milliseconds that passed between the previous two calls to Clock.tick().
get_rawtime()
actual time used in the previous tick get_rawtime() -> milliseconds Similar to Clock.get_time(), but does not include any time used while Clock.tick() was delaying to limit the framerate.
get_fps()
compute the clock framerate get_fps() -> float Compute your game's framerate (in frames per second). It is computed by averaging the last ten calls to Clock.tick(). | |
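A minimal sketch of the frame-limiting loop described above:
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # ... update game state and draw here ...
    clock.tick(40)  # never run faster than 40 frames per second
pygame.quit()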
doc_27929 |
Scalar method identical to the corresponding array attribute. Please see ndarray.dump. | |
doc_27930 |
K-Means clustering. Read more in the User Guide. Parameters
n_clusters : int, default=8
The number of clusters to form as well as the number of centroids to generate.
init : {‘k-means++’, ‘random’}, callable or array-like of shape (n_clusters, n_features), default=’k-means++’
Method for initialization: ‘k-means++’ : selects initial cluster centers for k-mean clustering in a smart way to speed up convergence. See section Notes in k_init for more details. ‘random’: choose n_clusters observations (rows) at random from data for the initial centroids. If an array is passed, it should be of shape (n_clusters, n_features) and gives the initial centers. If a callable is passed, it should take arguments X, n_clusters and a random state and return an initialization.
n_init : int, default=10
Number of times the k-means algorithm will be run with different centroid seeds. The final results will be the best output of n_init consecutive runs in terms of inertia.
max_iter : int, default=300
Maximum number of iterations of the k-means algorithm for a single run.
tol : float, default=1e-4
Relative tolerance with regard to the Frobenius norm of the difference in the cluster centers of two consecutive iterations to declare convergence.
precompute_distances : {‘auto’, True, False}, default=’auto’
Precompute distances (faster but takes more memory). ‘auto’ : do not precompute distances if n_samples * n_clusters > 12 million. This corresponds to about 100MB overhead per job using double precision. True : always precompute distances. False : never precompute distances. Deprecated since version 0.23: ‘precompute_distances’ was deprecated in version 0.22 and will be removed in 1.0 (renaming of 0.25). It has no effect.
verbose : int, default=0
Verbosity mode.
random_state : int, RandomState instance or None, default=None
Determines random number generation for centroid initialization. Use an int to make the randomness deterministic. See Glossary.
copy_x : bool, default=True
When pre-computing distances it is more numerically accurate to center the data first. If copy_x is True (default), then the original data is not modified. If False, the original data is modified, and put back before the function returns, but small numerical differences may be introduced by subtracting and then adding the data mean. Note that if the original data is not C-contiguous, a copy will be made even if copy_x is False. If the original data is sparse, but not in CSR format, a copy will be made even if copy_x is False.
n_jobs : int, default=None
The number of OpenMP threads to use for the computation. Parallelism is sample-wise on the main cython loop which assigns each sample to its closest center. None or -1 means using all processors. Deprecated since version 0.23: n_jobs was deprecated in version 0.23 and will be removed in 1.0 (renaming of 0.25).
algorithm : {“auto”, “full”, “elkan”}, default=”auto”
K-means algorithm to use. The classical EM-style algorithm is “full”. The “elkan” variation is more efficient on data with well-defined clusters, by using the triangle inequality. However it’s more memory intensive due to the allocation of an extra array of shape (n_samples, n_clusters). For now “auto” (kept for backward compatibility) chooses “elkan” but it might change in the future for a better heuristic. Changed in version 0.18: Added Elkan algorithm Attributes
cluster_centers_ : ndarray of shape (n_clusters, n_features)
Coordinates of cluster centers. If the algorithm stops before fully converging (see tol and max_iter), these will not be consistent with labels_.
labels_ : ndarray of shape (n_samples,)
Labels of each point
inertia_ : float
Sum of squared distances of samples to their closest cluster center.
n_iter_ : int
Number of iterations run. See also
MiniBatchKMeans
Alternative online implementation that does incremental updates of the centers positions using mini-batches. For large scale learning (say n_samples > 10k) MiniBatchKMeans is probably much faster than the default batch implementation. Notes The k-means problem is solved using either Lloyd’s or Elkan’s algorithm. The average complexity is given by O(k n T), where n is the number of samples and T is the number of iterations. The worst case complexity is given by O(n^(k+2/p)) with n = n_samples, p = n_features. (D. Arthur and S. Vassilvitskii, ‘How slow is the k-means method?’ SoCG2006) In practice, the k-means algorithm is very fast (one of the fastest clustering algorithms available), but it falls in local minima. That’s why it can be useful to restart it several times. If the algorithm stops before fully converging (because of tol or max_iter), labels_ and cluster_centers_ will not be consistent, i.e. the cluster_centers_ will not be the means of the points in each cluster. Also, the estimator will reassign labels_ after the last iteration to make labels_ consistent with predict on the training set. Examples
>>> from sklearn.cluster import KMeans
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
... [10, 2], [10, 4], [10, 0]])
>>> kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
>>> kmeans.labels_
array([1, 1, 1, 0, 0, 0], dtype=int32)
>>> kmeans.predict([[0, 0], [12, 3]])
array([1, 0], dtype=int32)
>>> kmeans.cluster_centers_
array([[10., 2.],
[ 1., 2.]])
Methods
fit(X[, y, sample_weight]) Compute k-means clustering.
fit_predict(X[, y, sample_weight]) Compute cluster centers and predict cluster index for each sample.
fit_transform(X[, y, sample_weight]) Compute clustering and transform X to cluster-distance space.
get_params([deep]) Get parameters for this estimator.
predict(X[, sample_weight]) Predict the closest cluster each sample in X belongs to.
score(X[, y, sample_weight]) Opposite of the value of X on the K-means objective.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform X to a cluster-distance space.
fit(X, y=None, sample_weight=None)
Compute k-means clustering. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training instances to cluster. It must be noted that the data will be converted to C ordering, which will cause a memory copy if the given data is not C-contiguous. If a sparse matrix is passed, a copy will be made if it’s not in CSR format.
y : Ignored
Not used, present here for API consistency by convention.
sample_weight : array-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight. New in version 0.20. Returns
self
Fitted estimator.
fit_predict(X, y=None, sample_weight=None)
Compute cluster centers and predict cluster index for each sample. Convenience method; equivalent to calling fit(X) followed by predict(X). Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data to transform.
y : Ignored
Not used, present here for API consistency by convention.
sample_weight : array-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight. Returns
labels : ndarray of shape (n_samples,)
Index of the cluster each sample belongs to.
fit_transform(X, y=None, sample_weight=None)
Compute clustering and transform X to cluster-distance space. Equivalent to fit(X).transform(X), but more efficiently implemented. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data to transform.
y : Ignored
Not used, present here for API consistency by convention.
sample_weight : array-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight. Returns
X_new : ndarray of shape (n_samples, n_clusters)
X transformed in the new space.
get_params(deep=True)
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
predict(X, sample_weight=None)
Predict the closest cluster each sample in X belongs to. In the vector quantization literature, cluster_centers_ is called the code book and each value returned by predict is the index of the closest code in the code book. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data to predict.
sample_weight : array-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight. Returns
labels : ndarray of shape (n_samples,)
Index of the cluster each sample belongs to.
score(X, y=None, sample_weight=None)
Opposite of the value of X on the K-means objective. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data.
y : Ignored
Not used, present here for API consistency by convention.
sample_weight : array-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight. Returns
score : float
Opposite of the value of X on the K-means objective.
set_params(**params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance.
transform(X)
Transform X to a cluster-distance space. In the new space, each dimension is the distance to the cluster centers. Note that even if X is sparse, the array returned by transform will typically be dense. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data to transform. Returns
X_new : ndarray of shape (n_samples, n_clusters)
X transformed in the new space. | |
doc_27931 | Determine whether code is in table C.5 (Surrogate codes). | |
doc_27932 | Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs). Note If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to:
to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor
Returns a Tensor with the specified dtype Args:
memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor
Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion. Args:
memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
to(other, non_blocking=False, copy=False) → Tensor
Returns a Tensor with same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
Example:
>>> tensor = torch.randn(2, 2)  # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64)
>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], device='cuda:0')
>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0') | |
doc_27933 | A dictionary representing the string environment at the time the interpreter was started. Keys and values are bytes on Unix and str on Windows. For example, environ[b'HOME'] (environ['HOME'] on Windows) is the pathname of your home directory, equivalent to getenv("HOME") in C. Modifying this dictionary does not affect the string environment passed on by execv(), popen() or system(); if you need to change the environment, pass environ to execve() or add variable assignments and export statements to the command string for system() or popen(). Changed in version 3.2: On Unix, keys and values are bytes. Note The os module provides an alternate implementation of environ which updates the environment on modification. Note also that updating os.environ will render this dictionary obsolete. Use of the os module version of this is recommended over direct access to the posix module. | |
doc_27934 | See Migration guide for more details. tf.compat.v1.keras.layers.RNN
tf.keras.layers.RNN(
cell, return_sequences=False, return_state=False, go_backwards=False,
stateful=False, unroll=False, time_major=False, **kwargs
)
See the Keras RNN API guide for details about the usage of RNN API.
Arguments
cell A RNN cell instance or a list of RNN cell instances. A RNN cell is a class that has: A call(input_at_t, states_at_t) method, returning (output_at_t, states_at_t_plus_1). The call method of the cell can also take the optional argument constants, see section "Note on passing external constants" below. A state_size attribute. This can be a single integer (single state) in which case it is the size of the recurrent state. This can also be a list/tuple of integers (one size per state). The state_size can also be TensorShape or tuple/list of TensorShape, to represent high dimension state. A output_size attribute. This can be a single integer or a TensorShape, which represent the shape of the output. For backward compatible reason, if this attribute is not available for the cell, the value will be inferred by the first element of the state_size. A get_initial_state(inputs=None, batch_size=None, dtype=None) method that creates a tensor meant to be fed to call() as the initial state, if the user didn't specify any initial state via other means. The returned initial state should have a shape of [batch_size, cell.state_size]. The cell might choose to create a tensor full of zeros, or full of other values based on the cell's implementation. inputs is the input tensor to the RNN layer, which should contain the batch size as its shape[0], and also dtype. Note that the shape[0] might be None during the graph construction. Either the inputs or the pair of batch_size and dtype are provided. batch_size is a scalar tensor that represents the batch size of the inputs. dtype is tf.DType that represents the dtype of the inputs. For backward compatible reason, if this method is not implemented by the cell, the RNN layer will create a zero filled tensor with the size of [batch_size, cell.state_size]. In the case that cell is a list of RNN cell instances, the cells will be stacked on top of each other in the RNN, resulting in an efficient stacked RNN.
return_sequences Boolean (default False). Whether to return the last output in the output sequence, or the full sequence.
return_state Boolean (default False). Whether to return the last state in addition to the output.
go_backwards Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
unroll Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
time_major The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
zero_output_for_mask Boolean (default False). Whether the output should use zeros for the masked timesteps. Note that this field is only used when return_sequences is True and mask is provided. It can be useful if you want to reuse the raw output sequence of the RNN without interference from the masked timesteps, e.g., merging bidirectional RNNs. Call arguments:
inputs: Input tensor.
mask: Binary tensor of shape [batch_size, timesteps] indicating whether a given timestep should be masked.
training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is for use with cells that use dropout.
initial_state: List of initial state tensors to be passed to the first call of the cell.
constants: List of constant tensors to be passed to the cell at each timestep. Input shape: N-D tensor with shape [batch_size, timesteps, ...] or [timesteps, batch_size, ...] when time_major is True. Output shape: If return_state: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each with shape [batch_size, state_size], where state_size could be a high dimension tensor shape. If return_sequences: N-D tensor with shape [batch_size, timesteps, output_size], where output_size could be a high dimension tensor shape, or [timesteps, batch_size, output_size] when time_major is True. Else, N-D tensor with shape [batch_size, output_size], where output_size could be a high dimension tensor shape. Masking: This layer supports masking for input data with a variable number of timesteps. To introduce masks to your data, use a tf.keras.layers.Embedding layer with the mask_zero parameter set to True. Note on using statefulness in RNNs: You can set RNN layers to be 'stateful', which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches. To enable statefulness:
- Specify `stateful=True` in the layer constructor.
- Specify a fixed batch size for your model, by passing
If sequential model:
`batch_input_shape=(...)` to the first layer in your model.
Else for functional model with 1 or more Input layers:
`batch_shape=(...)` to all the first layers in your model.
This is the expected shape of your inputs
*including the batch size*.
It should be a tuple of integers, e.g. `(32, 10, 100)`.
- Specify `shuffle=False` when calling `fit()`.
To reset the states of your model, call .reset_states() on either a specific layer, or on your entire model. Note on specifying the initial state of RNNs: You can specify the initial state of RNN layers symbolically by calling them with the keyword argument initial_state. The value of initial_state should be a tensor or list of tensors representing the initial state of the RNN layer. You can specify the initial state of RNN layers numerically by calling reset_states with the keyword argument states. The value of states should be a numpy array or list of numpy arrays representing the initial state of the RNN layer. Note on passing external constants to RNNs: You can pass "external" constants to the cell using the constants keyword argument of RNN.__call__ (as well as RNN.call) method. This requires that the cell.call method accepts the same keyword argument constants. Such constants can be used to condition the cell transformation on additional static inputs (not changing over time), a.k.a. an attention mechanism. Examples:
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import RNN

# First, let's define a RNN Cell, as a layer subclass.
class MinimalRNNCell(keras.layers.Layer):
def __init__(self, units, **kwargs):
self.units = units
self.state_size = units
super(MinimalRNNCell, self).__init__(**kwargs)
def build(self, input_shape):
self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
initializer='uniform',
name='kernel')
self.recurrent_kernel = self.add_weight(
shape=(self.units, self.units),
initializer='uniform',
name='recurrent_kernel')
self.built = True
def call(self, inputs, states):
prev_output = states[0]
h = K.dot(inputs, self.kernel)
output = h + K.dot(prev_output, self.recurrent_kernel)
return output, [output]
# Let's use this cell in a RNN layer:
cell = MinimalRNNCell(32)
x = keras.Input((None, 5))
layer = RNN(cell)
y = layer(x)
# Here's how to use the cell to build a stacked RNN:
cells = [MinimalRNNCell(32), MinimalRNNCell(64)]
x = keras.Input((None, 5))
layer = RNN(cells)
y = layer(x)
Attributes
states
Methods
reset_states
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer. Can only be used when the RNN layer is constructed with stateful = True. Args: states: Numpy arrays that contain the value for the initial state, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.
Raises
AttributeError When the RNN layer is not stateful.
ValueError When the batch size of the RNN layer is unknown.
ValueError When the input numpy array is not compatible with the RNN layer state, either size wise or dtype wise. | |
doc_27935 | get the current button state get_button(button) -> bool Returns the current state of a joystick button. | |
doc_27936 |
Returns a dictionary of CUDA memory allocator statistics for a given device. The return value of this function is a dictionary of statistics, each of which is a non-negative integer. Core statistics:
"allocated.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of allocation requests received by the memory allocator.
"allocated_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of allocated memory.
"segment.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of reserved segments from cudaMalloc().
"reserved_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of reserved memory.
"active.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of active memory blocks.
"active_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of active memory.
"inactive_split.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of inactive, non-releasable memory blocks.
"inactive_split_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of inactive, non-releasable memory. For these core statistics, values are broken down as follows. Pool type:
all: combined statistics across all memory pools.
large_pool: statistics for the large allocation pool (as of October 2019, for size >= 1MB allocations).
small_pool: statistics for the small allocation pool (as of October 2019, for size < 1MB allocations). Metric type:
current: current value of this metric.
peak: maximum value of this metric.
allocated: historical total increase in this metric.
freed: historical total decrease in this metric. In addition to the core statistics, we also provide some simple event counters:
"num_alloc_retries": number of failed cudaMalloc calls that result in a cache flush and retry.
"num_ooms": number of out-of-memory errors thrown. Parameters
device (torch.device or int, optional) – selected device. Returns statistics for the current device, given by current_device(), if device is None (default). Note See Memory management for more details about GPU memory management. | |
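A short sketch of reading two of the core statistics listed above (guarded, since it needs a CUDA device):
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    stats = torch.cuda.memory_stats()  # current device by default
    print(stats["allocated_bytes.all.current"])  # bytes currently allocated
    print(stats["num_alloc_retries"])  # cudaMalloc retries after cache flush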
doc_27937 |
Return whether the colors c1 and c2 are the same. c1, c2 can be single colors or lists/arrays of colors. | |
doc_27938 | Generic version of list. Useful for annotating return types. To annotate arguments it is preferred to use an abstract collection type such as Sequence or Iterable. This type may be used as follows:
from typing import List, Sequence, TypeVar

T = TypeVar('T', int, float)
def vec2(x: T, y: T) -> List[T]:
return [x, y]
def keep_positives(vector: Sequence[T]) -> List[T]:
return [item for item in vector if item > 0]
Deprecated since version 3.9: builtins.list now supports []. See PEP 585 and Generic Alias Type. | |
doc_27939 | terminates outgoing messages immediately abort() -> None The caller should immediately close the output port; this call may result in transmission of a partial midi message. There is no abort for Midi input because the user can simply ignore messages in the buffer and close an input device at any time. | |
doc_27940 |
Alias for set_fontweight. | |
doc_27941 |
Grid of parameters with a discrete number of values for each. Can be used to iterate over parameter value combinations with the Python built-in function iter. The order of the generated parameter combinations is deterministic. Read more in the User Guide. Parameters
param_grid : dict of str to sequence, or sequence of such
The parameter grid to explore, as a dictionary mapping estimator parameters to sequences of allowed values. An empty dict signifies default parameters. A sequence of dicts signifies a sequence of grids to search, and is useful to avoid exploring parameter combinations that make no sense or have no effect. See the examples below. See also
GridSearchCV
Uses ParameterGrid to perform a full parallelized parameter search. Examples
>>> from sklearn.model_selection import ParameterGrid
>>> param_grid = {'a': [1, 2], 'b': [True, False]}
>>> list(ParameterGrid(param_grid)) == (
... [{'a': 1, 'b': True}, {'a': 1, 'b': False},
... {'a': 2, 'b': True}, {'a': 2, 'b': False}])
True
>>> grid = [{'kernel': ['linear']}, {'kernel': ['rbf'], 'gamma': [1, 10]}]
>>> list(ParameterGrid(grid)) == [{'kernel': 'linear'},
... {'kernel': 'rbf', 'gamma': 1},
... {'kernel': 'rbf', 'gamma': 10}]
True
>>> ParameterGrid(grid)[1] == {'kernel': 'rbf', 'gamma': 1}
True | |
doc_27942 | Test if a rule would match. Works like match but returns True if the URL matches, or False if it does not exist. Parameters
path_info (Optional[str]) – the path info to use for matching. Overrides the path info specified on binding.
method (Optional[str]) – the HTTP method used for matching. Overrides the method specified on binding. Return type
bool | |
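A hedged sketch, assuming this is werkzeug.routing.MapAdapter.test (the signature above matches it); the rule is illustrative:
from werkzeug.routing import Map, Rule

urls = Map([Rule("/about", endpoint="about")]).bind("example.com")
print(urls.test("/about"))  # True
print(urls.test("/missing"))  # False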
doc_27943 |
Bases: object A simple pyparsing-based parser for fontconfig patterns.
parse(pattern)
Parse the given fontconfig pattern and return a dictionary of key/value pairs useful for initializing a font_manager.FontProperties object. | |
doc_27944 | Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place. | |
doc_27945 |
A random forest regressor. A random forest is a meta estimator that fits a number of classifying decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the max_samples parameter if bootstrap=True (default), otherwise the whole dataset is used to build each tree. Read more in the User Guide. Parameters
n_estimators : int, default=100
The number of trees in the forest. Changed in version 0.22: The default value of n_estimators changed from 10 to 100 in 0.22.
criterion : {“mse”, “mae”}, default=”mse”
The function to measure the quality of a split. Supported criteria are “mse” for the mean squared error, which is equal to variance reduction as feature selection criterion, and “mae” for the mean absolute error. New in version 0.18: Mean Absolute Error (MAE) criterion.
max_depth : int, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
min_samples_split : int or float, default=2
The minimum number of samples required to split an internal node: If int, then consider min_samples_split as the minimum number. If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split. Changed in version 0.18: Added float values for fractions.
min_samples_leafint or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression. If int, then consider min_samples_leaf as the minimum number. If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node. Changed in version 0.18: Added float values for fractions.
min_weight_fraction_leaffloat, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_features{“auto”, “sqrt”, “log2”}, int or float, default=”auto”
The number of features to consider when looking for the best split: If int, then consider max_features features at each split. If float, then max_features is a fraction and round(max_features * n_features) features are considered at each split. If “auto”, then max_features=n_features. If “sqrt”, then max_features=sqrt(n_features). If “log2”, then max_features=log2(n_features). If None, then max_features=n_features. Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if doing so requires effectively inspecting more than max_features features.
max_leaf_nodesint, default=None
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
min_impurity_decreasefloat, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following: N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed. New in version 0.19.
min_impurity_splitfloat, default=None
Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf. Deprecated since version 0.19: min_impurity_split has been deprecated in favor of min_impurity_decrease in 0.19. The default value of min_impurity_split has changed from 1e-7 to 0 in 0.23 and it will be removed in 1.0 (renaming of 0.25). Use min_impurity_decrease instead.
bootstrapbool, default=True
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
oob_scorebool, default=False
Whether to use out-of-bag samples to estimate the R^2 on unseen data.
n_jobsint, default=None
The number of jobs to run in parallel. fit, predict, decision_path and apply are all parallelized over the trees. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_stateint, RandomState instance or None, default=None
Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features). See Glossary for details.
verboseint, default=0
Controls the verbosity when fitting and predicting.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest. See the Glossary.
ccp_alphanon-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed. See Minimal Cost-Complexity Pruning for details. New in version 0.22.
max_samplesint or float, default=None
If bootstrap is True, the number of samples to draw from X to train each base estimator. If None (default), then draw X.shape[0] samples. If int, then draw max_samples samples. If float, then draw max_samples * X.shape[0] samples. Thus, max_samples should be in the interval (0, 1). New in version 0.22. Attributes
base_estimator_DecisionTreeRegressor
The child estimator template used to create the collection of fitted sub-estimators.
estimators_list of DecisionTreeRegressor
The collection of fitted sub-estimators.
feature_importances_ndarray of shape (n_features,)
The impurity-based feature importances.
n_features_int
The number of features when fit is performed.
n_outputs_int
The number of outputs when fit is performed.
oob_score_float
Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when oob_score is True.
oob_prediction_ndarray of shape (n_samples,)
Prediction computed with out-of-bag estimate on the training set. This attribute exists only when oob_score is True. See also
DecisionTreeRegressor, ExtraTreesRegressor
Notes The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf, etc.) lead to fully grown and unpruned trees which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values. The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data, max_features=n_features and bootstrap=False, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, random_state has to be fixed. The default value max_features="auto" uses n_features rather than n_features / 3. The latter was originally suggested in [1], whereas the former was more recently justified empirically in [2]. References
1
Breiman, “Random Forests”, Machine Learning, 45(1), 5-32, 2001.
2
P. Geurts, D. Ernst., and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006. Examples >>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=4, n_informative=2,
... random_state=0, shuffle=False)
>>> regr = RandomForestRegressor(max_depth=2, random_state=0)
>>> regr.fit(X, y)
RandomForestRegressor(...)
>>> print(regr.predict([[0, 0, 0, 0]]))
[-8.32987858]
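As a hedged follow-up sketch (not part of the original docstring; it reuses the X, y from the example above), oob_score=True yields a free generalization estimate from the bootstrap leftovers; the printed values depend on the data:
# Requires bootstrap=True (the default).
regr_oob = RandomForestRegressor(oob_score=True, random_state=0)
regr_oob.fit(X, y)
print(regr_oob.oob_score_)            # R^2 estimated on out-of-bag samples
print(regr_oob.feature_importances_)  # impurity-based importances; sums to 1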
Methods
apply(X) Apply trees in the forest to X, return leaf indices.
decision_path(X) Return the decision path in the forest.
fit(X, y[, sample_weight]) Build a forest of trees from the training set (X, y).
get_params([deep]) Get parameters for this estimator.
predict(X) Predict regression target for X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
apply(X) [source]
Apply trees in the forest to X, return leaf indices. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns
X_leavesndarray of shape (n_samples, n_estimators)
For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
decision_path(X) [source]
Return the decision path in the forest. New in version 0.18. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns
indicatorsparse matrix of shape (n_samples, n_nodes)
Return a node indicator matrix where non-zero elements indicate that the samples go through the nodes. The matrix is of CSR format.
n_nodes_ptrndarray of shape (n_estimators + 1,)
The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] gives the indicator value for the i-th estimator.
property feature_importances_
The impurity-based feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance. Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See sklearn.inspection.permutation_importance as an alternative. Returns
feature_importances_ndarray of shape (n_features,)
The values of this array sum to 1, unless all trees are single node trees consisting of only the root node, in which case it will be an array of zeros.
fit(X, y, sample_weight=None) [source]
Build a forest of trees from the training set (X, y). Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csc_matrix.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict regression target for X. The predicted regression target of an input sample is computed as the mean predicted regression targets of the trees in the forest. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns
yndarray of shape (n_samples,) or (n_samples, n_outputs)
The predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_27946 |
Return the directory that contains the fortranobject.c and .h files. Note This function is not needed when building an extension with numpy.distutils directly from .f and/or .pyf files in one go. Python extension modules built with f2py-generated code need to use fortranobject.c as a source file, and include the fortranobject.h header. This function can be used to obtain the directory containing both of these files. Returns
include_pathstr
Absolute path to the directory containing fortranobject.c and fortranobject.h. See also numpy.get_include
function that returns the numpy include directory Notes New in version 1.22.0. Unless the build system you are using has specific support for f2py, building a Python extension using a .pyf signature file is a two-step process. For a module mymod: Step 1: run python -m numpy.f2py mymod.pyf --quiet. This generates _mymodmodule.c and (if needed) _mymod-f2pywrappers.f files next to mymod.pyf.
Step 2: build your Python extension module. This requires the following source files: _mymodmodule.c
_mymod-f2pywrappers.f (if it was generated in step 1) fortranobject.c | |
doc_27947 | See Migration guide for more details. tf.compat.v1.raw_ops.ReaderReadUpTo
tf.raw_ops.ReaderReadUpTo(
reader_handle, queue_handle, num_records, name=None
)
Will dequeue from the input queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file). It may return fewer than num_records even before the last batch.
Args
reader_handle A Tensor of type mutable string. Handle to a Reader.
queue_handle A Tensor of type mutable string. Handle to a Queue, with string work items.
num_records A Tensor of type int64. number of records to read from Reader.
name A name for the operation (optional).
Returns A tuple of Tensor objects (keys, values). keys A Tensor of type string.
values A Tensor of type string. | |
doc_27948 | checks if a frame is ready query_image() -> bool If an image is ready to get, it returns true. Otherwise it returns false. Note that some webcams will always return False and will only queue a frame when called with a blocking function like get_image(). This is useful to separate the framerate of the game from that of the camera without having to use threading. | |
doc_27949 |
Set the y position of the text. Parameters
yfloat | |
doc_27950 |
Return whether the Artist has an explicitly set transform. This is True after set_transform has been called. | |
doc_27951 |
Return the array of values, that are mapped to colors. The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the array. | |
doc_27952 | controls the position of the candidate list set_text_input_rect(Rect) -> None This sets the rectangle used for typing with an IME. It controls where the candidate list will open, if supported. New in pygame 2.0.0. | |
doc_27953 |
Calculate approximate perplexity for data X. Perplexity is defined as exp(-1. * log-likelihood per word) Changed in version 0.19: doc_topic_distr argument has been deprecated and is ignored because user no longer has access to unnormalized distribution Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
sub_samplingbool
Do sub-sampling or not. Returns
scorefloat
Perplexity score.
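A hedged sketch with toy count data (not from the scikit-learn docs):
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.RandomState(0)
X = rng.randint(0, 5, size=(20, 10))   # toy document-word count matrix
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
print(lda.perplexity(X))               # lower is better; ideally use held-out docs
| |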
doc_27954 |
Get help information for a function, class, or module. Parameters
objectobject or str, optional
Input object or name to get information about. If object is a numpy object, its docstring is given. If it is a string, available modules are searched for matching objects. If None, information about info itself is returned.
maxwidthint, optional
Printing width.
outputfile like object, optional
File like object that the output is written to, default is stdout. The object has to be opened in ‘w’ or ‘a’ mode.
toplevelstr, optional
Start search at this level. See also
source, lookfor
Notes When used interactively with an object, np.info(obj) is equivalent to help(obj) on the Python prompt or obj? on the IPython prompt. Examples >>> np.info(np.polyval)
polyval(p, x)
Evaluate the polynomial p at x.
...
When using a string for object it is possible to get multiple results. >>> np.info('fft')
*** Found in numpy ***
Core FFT routines
...
*** Found in numpy.fft ***
fft(a, n=None, axis=-1)
...
*** Repeat reference found in numpy.fft.fftpack ***
*** Total of 3 references found. *** | |
doc_27955 |
Move back up the view lim stack. For convenience of being directly connected as a GUI callback, which often get passed additional parameters, this method accepts arbitrary parameters, but does not use them. | |
doc_27956 |
Resample time-series data. Convenience method for frequency conversion and resampling of time series. The object must have a datetime-like index (DatetimeIndex, PeriodIndex, or TimedeltaIndex), or the caller must pass the label of a datetime-like series/index to the on/level keyword parameter. Parameters
rule:DateOffset, Timedelta or str
The offset string or object representing target conversion.
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
Which axis to use for up- or down-sampling. For Series this will default to 0, i.e. along the rows. Must be DatetimeIndex, TimedeltaIndex or PeriodIndex.
closed:{‘right’, ‘left’}, default None
Which side of bin interval is closed. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
label:{‘right’, ‘left’}, default None
Which bin edge label to label bucket with. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
convention:{‘start’, ‘end’, ‘s’, ‘e’}, default ‘start’
For PeriodIndex only, controls whether to use the start or end of rule.
kind:{‘timestamp’, ‘period’}, optional, default None
Pass ‘timestamp’ to convert the resulting index to a DateTimeIndex or ‘period’ to convert it to a PeriodIndex. By default the input representation is retained.
loffset:timedelta, default None
Adjust the resampled time labels. Deprecated since version 1.1.0: You should add the loffset to the df.index after the resample. See below.
base:int, default 0
For frequencies that evenly subdivide 1 day, the “origin” of the aggregated intervals. For example, for ‘5min’ frequency, base could range from 0 through 4. Defaults to 0. Deprecated since version 1.1.0: The new arguments that you should use are ‘offset’ or ‘origin’.
on:str, optional
For a DataFrame, column to use instead of index for resampling. Column must be datetime-like.
level:str or int, optional
For a MultiIndex, level (name or number) to use for resampling. level must be datetime-like.
origin:Timestamp or str, default ‘start_day’
The timestamp on which to adjust the grouping. The timezone of origin must match the timezone of the index. If string, must be one of the following: ‘epoch’: origin is 1970-01-01 ‘start’: origin is the first value of the timeseries ‘start_day’: origin is the first day at midnight of the timeseries New in version 1.1.0. ‘end’: origin is the last value of the timeseries ‘end_day’: origin is the ceiling midnight of the last day New in version 1.3.0.
offset:Timedelta or str, default None
An offset timedelta added to the origin. New in version 1.1.0. Returns
pandas.core.Resampler
Resampler object. See also Series.resample
Resample a Series. DataFrame.resample
Resample a DataFrame. groupby
Group Series by mapping, function, label, or list of labels. asfreq
Reindex a Series with the given frequency without grouping. Notes See the user guide for more. To learn more about the offset strings, please see the pandas documentation on offset aliases. Examples Start by creating a series with 9 one-minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels. For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval as illustrated in the example below this one.
>>> series.resample('3T', label='right').sum()
2000-01-01 00:03:00 3
2000-01-01 00:06:00 12
2000-01-01 00:09:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
>>> series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
Freq: 3T, dtype: int64
Upsample the series into 30 second bins.
>>> series.resample('30S').asfreq()[0:5] # Select first 5 rows
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 1.0
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
Freq: 30S, dtype: float64
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Pass a custom function via apply
>>> def custom_resampler(arraylike):
... return np.sum(arraylike) + 5
...
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
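As an additional hedged sketch (not in the original pandas example set), several aggregations can be applied at once with agg; the output formatting shown is approximate:
>>> series.resample('3T').agg(['sum', 'mean'])
sum mean
2000-01-01 00:00:00 3 1.0
2000-01-01 00:03:00 12 4.0
2000-01-01 00:06:00 21 7.0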
For a Series with a PeriodIndex, the keyword convention can be used to control whether to use the start or end of rule. Resample a year by quarter using ‘start’ convention. Values are assigned to the first quarter of the period.
>>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01',
... freq='A',
... periods=2))
>>> s
2012 1
2013 2
Freq: A-DEC, dtype: int64
>>> s.resample('Q', convention='start').asfreq()
2012Q1 1.0
2012Q2 NaN
2012Q3 NaN
2012Q4 NaN
2013Q1 2.0
2013Q2 NaN
2013Q3 NaN
2013Q4 NaN
Freq: Q-DEC, dtype: float64
Resample quarters by month using ‘end’ convention. Values are assigned to the last month of the period.
>>> q = pd.Series([1, 2, 3, 4], index=pd.period_range('2018-01-01',
... freq='Q',
... periods=4))
>>> q
2018Q1 1
2018Q2 2
2018Q3 3
2018Q4 4
Freq: Q-DEC, dtype: int64
>>> q.resample('M', convention='end').asfreq()
2018-03 1.0
2018-04 NaN
2018-05 NaN
2018-06 2.0
2018-07 NaN
2018-08 NaN
2018-09 3.0
2018-10 NaN
2018-11 NaN
2018-12 4.0
Freq: M, dtype: float64
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resampling.
>>> d = {'price': [10, 11, 9, 13, 14, 18, 17, 19],
... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}
>>> df = pd.DataFrame(d)
>>> df['week_starting'] = pd.date_range('01/01/2018',
... periods=8,
... freq='W')
>>> df
price volume week_starting
0 10 50 2018-01-07
1 11 60 2018-01-14
2 9 40 2018-01-21
3 13 100 2018-01-28
4 14 50 2018-02-04
5 18 100 2018-02-11
6 17 40 2018-02-18
7 19 50 2018-02-25
>>> df.resample('M', on='week_starting').mean()
price volume
week_starting
2018-01-31 10.75 62.5
2018-02-28 17.00 60.0
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling needs to take place.
>>> days = pd.date_range('1/1/2000', periods=4, freq='D')
>>> d2 = {'price': [10, 11, 9, 13, 14, 18, 17, 19],
... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}
>>> df2 = pd.DataFrame(
... d2,
... index=pd.MultiIndex.from_product(
... [days, ['morning', 'afternoon']]
... )
... )
>>> df2
price volume
2000-01-01 morning 10 50
afternoon 11 60
2000-01-02 morning 9 40
afternoon 13 100
2000-01-03 morning 14 50
afternoon 18 100
2000-01-04 morning 17 40
afternoon 19 50
>>> df2.resample('D', level=0).sum()
price volume
2000-01-01 21 110
2000-01-02 22 140
2000-01-03 32 150
2000-01-04 36 90
If you want to adjust the start of the bins based on a fixed timestamp:
>>> start, end = '2000-10-01 23:30:00', '2000-10-02 00:30:00'
>>> rng = pd.date_range(start, end, freq='7min')
>>> ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
>>> ts
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
>>> ts.resample('17min').sum()
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', origin='epoch').sum()
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', origin='2000-01-01').sum()
2000-10-01 23:24:00 3
2000-10-01 23:41:00 15
2000-10-01 23:58:00 45
2000-10-02 00:15:00 45
Freq: 17T, dtype: int64
If you want to adjust the start of the bins with an offset Timedelta, the two following lines are equivalent:
>>> ts.resample('17min', origin='start').sum()
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', offset='23h30min').sum()
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If you want to take the largest Timestamp as the end of the bins:
>>> ts.resample('17min', origin='end').sum()
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In contrast with the start_day, you can use end_day to take the ceiling midnight of the largest Timestamp as the end of the bins and drop the bins not containing data:
>>> ts.resample('17min', origin='end_day').sum()
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
To replace the use of the deprecated base argument, you can now use offset, in this example it is equivalent to have base=2:
>>> ts.resample('17min', offset='2min').sum()
2000-10-01 23:16:00 0
2000-10-01 23:33:00 9
2000-10-01 23:50:00 36
2000-10-02 00:07:00 39
2000-10-02 00:24:00 24
Freq: 17T, dtype: int64
To replace the use of the deprecated loffset argument:
>>> from pandas.tseries.frequencies import to_offset
>>> loffset = '19min'
>>> ts_out = ts.resample('17min').sum()
>>> ts_out.index = ts_out.index + to_offset(loffset)
>>> ts_out
2000-10-01 23:33:00 0
2000-10-01 23:50:00 9
2000-10-02 00:07:00 21
2000-10-02 00:24:00 54
2000-10-02 00:41:00 24
Freq: 17T, dtype: int64 | |
doc_27957 |
Render the Figure. It is important that this method actually walks the artist tree even if no output is produced, because doing so triggers deferred work (like computing auto-limits and tick values) that users may want access to before saving to disk.
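A hedged sketch of why this matters (not from the Matplotlib docs): reading autoscaled limits before a draw may return stale values.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.canvas.draw()        # walks the artist tree and resolves deferred work
print(ax.get_xlim())     # auto-limits are now final
| |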
doc_27958 |
Applies the element-wise function: \(\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if } x \le -3, \\ 1 & \text{if } x \ge +3, \\ x/6 + 1/2 & \text{otherwise} \end{cases}\)
Parameters
inplace – can optionally do the operation in-place. Default: False Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions. Output: \((N, *)\), same shape as the input. Examples: >>> m = nn.Hardsigmoid()
>>> input = torch.randn(2)
>>> output = m(input) | |
doc_27959 |
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
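A hedged sketch of the intended workflow (not from the scikit-learn docs):
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

X, y = load_iris(return_X_y=True)
clf = SGDClassifier(penalty="l1", random_state=0).fit(X, y)
clf.sparsify()
print(type(clf.coef_))   # now a scipy.sparse matrix rather than an ndarray
| |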
doc_27960 |
Check if estimator adheres to scikit-learn conventions. This estimator will run an extensive test-suite for input validation, shapes, etc, making sure that the estimator complies with scikit-learn conventions as detailed in Rolling your own estimator. Additional tests for classifiers, regressors, clustering or transformers will be run if the Estimator class inherits from the corresponding mixin from sklearn.base. Setting generate_only=True returns a generator that yields (estimator, check) tuples where the check can be called independently from each other, i.e. check(estimator). This allows all checks to be run independently and report the checks that are failing. scikit-learn provides a pytest specific decorator, parametrize_with_checks, making it easier to test multiple estimators. Parameters
Estimatorestimator object
Estimator instance to check. Changed in version 0.24: Passing a class was deprecated in version 0.23, and support for classes was removed in 0.24.
generate_onlybool, default=False
When False, checks are evaluated when check_estimator is called. When True, check_estimator returns a generator that yields (estimator, check) tuples. The check is run by calling check(estimator). New in version 0.22. Returns
checks_generatorgenerator
Generator that yields (estimator, check) tuples. Returned when generate_only=True.
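A minimal hedged usage sketch (not from the scikit-learn docs):
from sklearn.linear_model import LogisticRegression
from sklearn.utils.estimator_checks import check_estimator

check_estimator(LogisticRegression())   # raises an informative error if a check fails

# Or inspect the individual checks without running them eagerly:
for estimator, check in check_estimator(LogisticRegression(), generate_only=True):
    print(check)
| |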
doc_27961 |
The number of input dimensions of this transform. Must be overridden (with integers) in the subclass. | |
doc_27962 |
Return the requested number of words for PRNG seeding. A BitGenerator should call this method in its constructor with an appropriate n_words parameter to properly seed itself. Parameters
n_wordsint
dtypenp.uint32 or np.uint64, optional
The size of each word. This should only be either uint32 or uint64. Strings (‘uint32’, ‘uint64’) are fine. Note that requesting uint64 will draw twice as many bits as uint32 for the same n_words. This is a convenience for BitGenerators that express their states as uint64 arrays. Returns
stateuint32 or uint64 array, shape=(n_words,)
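A hedged sketch (not from the NumPy docs; SeedSequence is the usual entropy source here):
import numpy as np

ss = np.random.SeedSequence(12345)
print(ss.generate_state(4))                   # four uint32 seeding words
print(ss.generate_state(2, dtype=np.uint64))  # two uint64 words (twice the bits each)
| |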
doc_27963 |
Other Members
AUTOTUNE -1
INFINITE_CARDINALITY -1
UNKNOWN_CARDINALITY -2 | |
doc_27964 | See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyCenteredRMSProp
tf.raw_ops.SparseApplyCenteredRMSProp(
var, mg, ms, mom, lr, rho, momentum, epsilon, grad, indices, use_locking=False,
name=None
)
The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory. Note that in dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero. mean_square = decay * mean_square + (1-decay) * gradient ** 2 mean_grad = decay * mean_grad + (1-decay) * gradient Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2) ms <- rho * ms_{t-1} + (1-rho) * grad * grad mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) var <- var - mom
Args
var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable().
mg A mutable Tensor. Must have the same type as var. Should be from a Variable().
ms A mutable Tensor. Must have the same type as var. Should be from a Variable().
mom A mutable Tensor. Must have the same type as var. Should be from a Variable().
lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar.
rho A Tensor. Must have the same type as var. Decay rate. Must be a scalar.
momentum A Tensor. Must have the same type as var.
epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar.
grad A Tensor. Must have the same type as var. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var, ms and mom.
use_locking An optional bool. Defaults to False. If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as var.
| |
doc_27965 |
Return whether the artist uses clipping. | |
doc_27966 |
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_27967 | Token value for "<". | |
doc_27968 |
Return the Transform instance used by this artist. | |
doc_27969 |
Applies Batch Normalization for each channel across a batch of data. See BatchNorm1d, BatchNorm2d, BatchNorm3d for details. | |
doc_27970 | Base class of the dialog controls. dlg is the dialog object the control belongs to, and name is the control’s name.
event(event, argument, condition=1, ordering=None)
Make an entry into the ControlEvent table for this control.
mapping(event, attribute)
Make an entry into the EventMapping table for this control.
condition(action, condition)
Make an entry into the ControlCondition table for this control. | |
doc_27971 | bytearray.islower()
Return True if there is at least one lowercase ASCII character in the sequence and no uppercase ASCII characters, False otherwise. For example: >>> b'hello world'.islower()
True
>>> b'Hello world'.islower()
False
Lowercase ASCII characters are those byte values in the sequence b'abcdefghijklmnopqrstuvwxyz'. Uppercase ASCII characters are those byte values in the sequence b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'. | |
doc_27972 |
A dynamic quantized LSTM module with floating point tensor as inputs and outputs. We adopt the same interface as torch.nn.LSTM, please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM for documentation. Examples: >>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0)) | |
doc_27973 |
Set the linewidth(s) for the collection. lw can be a scalar or a sequence; if it is a sequence the patches will cycle through the sequence Parameters
lwfloat or list of floats | |
doc_27974 |
Return a set of CPU features that implied by ‘names’ Parameters
names: str or sequence of str
CPU feature name(s) in uppercase. keep_origins: bool
If False (default), then the returned set will not contain any features from ‘names’. This case happens only when two features imply each other.
{'SSE', 'SSE2'}
>>> self.feature_implies("SSE2")
{'SSE'}
>>> self.feature_implies("SSE2", keep_origins=True)
# 'SSE2' found here since 'SSE' and 'SSE2' imply each other
{'SSE', 'SSE2'} | |
doc_27975 |
Get a mask, or integer index, of the features selected Parameters
indicesbool, default=False
If True, the return value will be an array of integers, rather than a boolean mask. Returns
supportarray
An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
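A hedged sketch with a concrete selector (not part of the original docstring):
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)
selector = SelectKBest(chi2, k=2).fit(X, y)
print(selector.get_support())               # boolean mask over the 4 input features
print(selector.get_support(indices=True))   # integer indices of the kept features
| |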
doc_27976 | Computes the element-wise remainder of division. The dividend and divisor may be integers or floating point numbers. The remainder has the same sign as the divisor other. Supports broadcasting to a common shape, type promotion, and integer and float inputs. Note Complex inputs are not supported. In some cases, it is not mathematically possible to satisfy the definition of a modulo operation with complex numbers. See torch.fmod() for how division by zero is handled. Parameters
input (Tensor) – the dividend
other (Tensor or Scalar) – the divisor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.remainder(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
tensor([ 1., 0., 1., 1., 0., 1.])
>>> torch.remainder(torch.tensor([1, 2, 3, 4, 5]), 1.5)
tensor([ 1.0000, 0.5000, 0.0000, 1.0000, 0.5000])
See also torch.fmod(), which computes the element-wise remainder of division equivalently to the C library function fmod(). | |
doc_27977 |
Remove a prefix from an object series. If the prefix is not present, the original string will be returned. Parameters
prefix:str
Remove the prefix of the string. Returns
Series/Index: object
The Series or Index with given prefix removed. See also Series.str.removesuffix
Remove a suffix from an object series. Examples
>>> s = pd.Series(["str_foo", "str_bar", "no_prefix"])
>>> s
0 str_foo
1 str_bar
2 no_prefix
dtype: object
>>> s.str.removeprefix("str_")
0 foo
1 bar
2 no_prefix
dtype: object
>>> s = pd.Series(["foo_str", "bar_str", "no_suffix"])
>>> s
0 foo_str
1 bar_str
2 no_suffix
dtype: object
>>> s.str.removesuffix("_str")
0 foo
1 bar
2 no_suffix
dtype: object | |
doc_27978 |
Set the colormap for luminance data. Parameters
cmapColormap or str or None | |
doc_27979 | Return the result of the Future. If the Future is done and has a result set by the set_result() method, the result value is returned. If the Future is done and has an exception set by the set_exception() method, this method raises the exception. If the Future has been cancelled, this method raises a CancelledError exception. If the Future’s result isn’t yet available, this method raises a InvalidStateError exception. | |
doc_27980 |
Return a list of the row axis labels. | |
doc_27981 | numpy.float128[source]
Alias for numpy.longdouble, named after its size in bits. The existence of these aliases depends on the platform. | |
doc_27982 | class socketserver.ThreadingMixIn
Forking and threading versions of each type of server can be created using these mix-in classes. For instance, ThreadingUDPServer is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer):
pass
The mix-in class comes first, since it overrides a method defined in UDPServer. Setting the various attributes also changes the behavior of the underlying server mechanism. ForkingMixIn and the Forking classes mentioned below are only available on POSIX platforms that support fork(). socketserver.ForkingMixIn.server_close() waits until all child processes complete, except if socketserver.ForkingMixIn.block_on_close attribute is false. socketserver.ThreadingMixIn.server_close() waits until all non-daemon threads complete, except if socketserver.ThreadingMixIn.block_on_close attribute is false. Use daemonic threads by setting ThreadingMixIn.daemon_threads to True to not wait until threads complete. Changed in version 3.7: socketserver.ForkingMixIn.server_close() and socketserver.ThreadingMixIn.server_close() now waits until all child processes and non-daemonic threads complete. Add a new socketserver.ForkingMixIn.block_on_close class attribute to opt-in for the pre-3.7 behaviour. | |
doc_27983 |
Fill missing values introduced by upsampling. In statistics, imputation is the process of replacing missing data with substituted values [1]. When resampling data, missing values may appear (e.g., when the resampling frequency is higher than the original frequency). Missing values that existed in the original data will not be modified. Parameters
method:{‘pad’, ‘backfill’, ‘ffill’, ‘bfill’, ‘nearest’}
Method to use for filling holes in resampled data ‘pad’ or ‘ffill’: use previous valid observation to fill gap (forward fill). ‘backfill’ or ‘bfill’: use next valid observation to fill gap. ‘nearest’: use nearest valid observation to fill gap.
limit:int, optional
Limit of how many consecutive missing values to fill. Returns
Series or DataFrame
An upsampled Series or DataFrame with missing values filled. See also bfill
Backward fill NaN values in the resampled data. ffill
Forward fill NaN values in the resampled data. nearest
Fill NaN values in the resampled data with nearest neighbor starting from center. interpolate
Fill NaN values using interpolation. Series.fillna
Fill NaN values in the Series using the specified method, which can be ‘bfill’ and ‘ffill’. DataFrame.fillna
Fill NaN values in the DataFrame using the specified method, which can be ‘bfill’ and ‘ffill’. References 1
https://en.wikipedia.org/wiki/Imputation_(statistics) Examples Resampling a Series:
>>> s = pd.Series([1, 2, 3],
... index=pd.date_range('20180101', periods=3, freq='h'))
>>> s
2018-01-01 00:00:00 1
2018-01-01 01:00:00 2
2018-01-01 02:00:00 3
Freq: H, dtype: int64
Without filling the missing values you get:
>>> s.resample("30min").asfreq()
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 01:00:00 2.0
2018-01-01 01:30:00 NaN
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
>>> s.resample('30min').fillna("backfill")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('15min').fillna("backfill", limit=2)
2018-01-01 00:00:00 1.0
2018-01-01 00:15:00 NaN
2018-01-01 00:30:00 2.0
2018-01-01 00:45:00 2.0
2018-01-01 01:00:00 2.0
2018-01-01 01:15:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 01:45:00 3.0
2018-01-01 02:00:00 3.0
Freq: 15T, dtype: float64
>>> s.resample('30min').fillna("pad")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 1
2018-01-01 01:00:00 2
2018-01-01 01:30:00 2
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('30min').fillna("nearest")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
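As a hedged aside (not in the original examples), fillna("pad") above is equivalent to calling the ffill method directly:
>>> s.resample('30min').ffill()
2018-01-01 00:00:00 1
2018-01-01 00:30:00 1
2018-01-01 01:00:00 2
2018-01-01 01:30:00 2
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64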
Missing values present before the upsampling are not affected.
>>> sm = pd.Series([1, None, 3],
... index=pd.date_range('20180101', periods=3, freq='h'))
>>> sm
2018-01-01 00:00:00 1.0
2018-01-01 01:00:00 NaN
2018-01-01 02:00:00 3.0
Freq: H, dtype: float64
>>> sm.resample('30min').fillna('backfill')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
>>> sm.resample('30min').fillna('pad')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 1.0
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 NaN
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
>>> sm.resample('30min').fillna('nearest')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
DataFrame resampling is done column-wise. All the same options are available.
>>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},
... index=pd.date_range('20180101', periods=3,
... freq='h'))
>>> df
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 01:00:00 NaN 3
2018-01-01 02:00:00 6.0 5
>>> df.resample('30min').fillna("bfill")
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 00:30:00 NaN 3
2018-01-01 01:00:00 NaN 3
2018-01-01 01:30:00 6.0 5
2018-01-01 02:00:00 6.0 5 | |
doc_27984 |
Create a new BboxTransform that linearly transforms points from boxin to boxout. | |
doc_27985 | Touch each location in the window that has been touched in any of its ancestor windows. This routine is called by refresh(), so it should almost never be necessary to call it manually. | |
doc_27986 | See Migration guide for more details. tf.compat.v1.lite.OpsSet Warning: Experimental interface, subject to change.
Class Variables
EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8 tf.lite.OpsSet
SELECT_TF_OPS tf.lite.OpsSet
TFLITE_BUILTINS tf.lite.OpsSet
TFLITE_BUILTINS_INT8 tf.lite.OpsSet | |
doc_27987 |
Groupby iterator. Returns
Generator yielding sequence of (name, subsetted object)
for each group | |
doc_27988 | Return a list of all valid maps. The domain argument allows overriding the NIS domain used for the lookup. If unspecified, lookup is in the default NIS domain. | |
doc_27989 |
Bases: skimage.viewer.utils.core.LinearColormap Color map that varies linearly from alpha = 0 to 1
__init__(rgb, max_alpha=1, name='clear_color') [source]
Create color map from linear mapping segments segmentdata argument is a dictionary with a red, green and blue entries. Each entry should be a list of x, y0, y1 tuples, forming rows in a table. Entries for alpha are optional. Example: suppose you want red to increase from 0 to 1 over the bottom half, green to do the same over the middle half, and blue over the top half. Then you would use: cdict = {'red': [(0.0, 0.0, 0.0),
(0.5, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'green': [(0.0, 0.0, 0.0),
(0.25, 0.0, 0.0),
(0.75, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'blue': [(0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)]}
Each row in the table for a given color is a sequence of x, y0, y1 tuples. In each sequence, x must increase monotonically from 0 to 1. For any input value z falling between x[i] and x[i+1], the output value of a given color will be linearly interpolated between y1[i] and y0[i+1]: row i: x y0 y1
row i+1: x y0 y1
Hence y0 in the first row and y1 in the last row are never used. See also
LinearSegmentedColormap.from_list
Static method; factory function for generating a smoothly-varying LinearSegmentedColormap.
makeMappingArray
For information about making a mapping array. | |
doc_27990 | See Migration guide for more details. tf.compat.v1.raw_ops.EluGrad
tf.raw_ops.EluGrad(
gradients, outputs, name=None
)
Args
gradients A Tensor. Must be one of the following types: half, bfloat16, float32, float64. The backpropagated gradients to the corresponding Elu operation.
outputs A Tensor. Must have the same type as gradients. The outputs of the corresponding Elu operation.
name A name for the operation (optional).
Returns A Tensor. Has the same type as gradients. | |
doc_27991 |
Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters
bbool | |
doc_27992 | Inserts the key-value pair into the store based on the supplied key and value. If key already exists in the store, it will overwrite the old value with the new supplied value. Parameters
key (str) – The key to be added to the store.
value (str) – The value associated with key to be added to the store. Example:
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key", "first_value")
>>> # Should return "first_value"
>>> store.get("first_key") | |
doc_27993 | See Migration guide for more details. tf.compat.v1.raw_ops.RandomStandardNormal
tf.raw_ops.RandomStandardNormal(
shape, dtype, seed=0, seed2=0, name=None
)
The generated values will have mean 0 and standard deviation 1.
Args
shape A Tensor. Must be one of the following types: int32, int64. The shape of the output tensor.
dtype A tf.DType from: tf.half, tf.bfloat16, tf.float32, tf.float64. The type of the output.
seed An optional int. Defaults to 0. If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2 An optional int. Defaults to 0. A second seed to avoid seed collision.
name A name for the operation (optional).
Returns A Tensor of type dtype. | |
doc_27994 |
Return the figure's background patch visibility, i.e. whether the figure background will be drawn. Equivalent to Figure.patch.get_visible(). | |
doc_27995 | A subclass of FileDialog that creates a dialog window for selecting a destination file.
ok_command()
Test whether or not the selection points to a valid file that is not a directory. Confirmation is required if an already existing file is selected. | |
doc_27996 |
Validation curve. Determine training and test scores for varying parameter values. Compute scores for an estimator with different values of a specified parameter. This is similar to grid search with one parameter. However, this will also compute training scores and is merely a utility for plotting the results. Read more in the User Guide. Parameters
estimatorobject type that implements the “fit” and “predict” methods
An object of that type which is cloned for each validation.
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,) or (n_samples, n_outputs) or None
Target relative to X for classification or regression; None for unsupervised learning.
param_namestr
Name of the parameter that will be varied.
param_rangearray-like of shape (n_values,)
The values of the parameter that will be evaluated.
groupsarray-like of shape (n_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold).
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross validation, int, to specify the number of folds in a (Stratified)KFold,
CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
scoringstr or callable, default=None
A str (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y).
n_jobsint, default=None
Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the combinations of each parameter value and each cross-validation split. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
pre_dispatchint or str, default=’all’
Number of predispatched jobs for parallel execution (default is all). The option can reduce the allocated memory. The str can be an expression like ‘2*n_jobs’.
verboseint, default=0
Controls the verbosity: the higher, the more messages.
fit_paramsdict, default=None
Parameters to pass to the fit method of the estimator. New in version 0.24.
error_score‘raise’ or numeric, default=np.nan
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. New in version 0.20. Returns
train_scoresarray of shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scoresarray of shape (n_ticks, n_cv_folds)
Scores on test set. Notes See Plotting Validation Curves.
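A minimal hedged sketch (not from the scikit-learn docs; the estimator and parameter range are illustrative):
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_range = np.logspace(-6, -1, 5)
train_scores, test_scores = validation_curve(
    SVC(), X, y, param_name="gamma", param_range=param_range, cv=5)
print(train_scores.shape)   # (5, 5): one row per gamma value, one column per fold
| |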
doc_27997 |
Predict multi-class targets using underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
ynumpy array of shape [n_samples]
Predicted multi-class targets. | |
doc_27998 |
Get the image as ARGB bytes. draw must be called at least once before this function will work and to update the renderer for any subsequent changes to the Figure. | |
doc_27999 |
Set the artist's visibility. Parameters
bbool | |