doc_30200 |
input_type: 'text'
template_name: 'django/forms/widgets/datetime.html'
Renders as: <input type="text" ...>
Takes same arguments as TextInput, with one more optional argument:
format
The format in which this field’s initial value will be displayed.
If no format argument is provided, the default format is the first format found in DATETIME_INPUT_FORMATS and respects Format localization. By default, the microseconds part of the time value is always set to 0. If microseconds are required, use a subclass with the supports_microseconds attribute set to True. | |
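The entries in DATETIME_INPUT_FORMATS are strftime-style format strings, so the display behavior can be illustrated with the standard library alone. A minimal sketch (the format string below is an assumption standing in for the first default entry, not taken from a live Django settings module):

```python
from datetime import datetime

# Hypothetical illustration of how a DateTimeInput-style widget formats its
# initial value; DATETIME_INPUT_FORMATS entries use strftime codes.
fmt = "%Y-%m-%d %H:%M:%S"  # assumed stand-in for the first default entry

initial = datetime(2023, 5, 17, 14, 30, 15, 123456)
# Microseconds are dropped (set to 0) unless supports_microseconds is True.
displayed = initial.replace(microsecond=0).strftime(fmt)
print(displayed)  # 2023-05-17 14:30:15
```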
doc_30201 | Returns the median of the values in input, ignoring NaN values. This function is identical to torch.median() when there are no NaN values in input. When input has one or more NaN values, torch.median() will always return NaN, while this function will return the median of the non-NaN elements in input. If all the elements in input are NaN it will also return NaN. Parameters
input (Tensor) – the input tensor. Example:
>>> a = torch.tensor([1, float('nan'), 3, 2])
>>> a.median()
tensor(nan)
>>> a.nanmedian()
tensor(2.)
torch.nanmedian(input, dim=-1, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values contains the median of each row of input in the dimension dim, ignoring NaN values, and indices contains the index of the median values found in the dimension dim. This function is identical to torch.median() when there are no NaN values in a reduced row. When a reduced row has one or more NaN values, torch.median() will always reduce it to NaN, while this function will reduce it to the median of the non-NaN elements. If all the elements in a reduced row are NaN then it will be reduced to NaN, too. Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out ((Tensor, Tensor), optional) – The first tensor will be populated with the median values and the second tensor, which must have dtype long, with their indices in the dimension dim of input. Example:
>>> a = torch.tensor([[2, 3, 1], [float('nan'), 1, float('nan')]])
>>> a
tensor([[2., 3., 1.],
[nan, 1., nan]])
>>> a.median(0)
torch.return_types.median(values=tensor([nan, 1., nan]), indices=tensor([1, 1, 1]))
>>> a.nanmedian(0)
torch.return_types.nanmedian(values=tensor([2., 1., 1.]), indices=tensor([0, 1, 0])) | |
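The NaN-skipping semantics described above can be mirrored in plain Python. This is a sketch of the scalar case only, not the torch implementation:

```python
import math
from statistics import median

def nanmedian(values):
    """Median of the non-NaN entries; NaN only if every entry is NaN."""
    finite = [v for v in values if not math.isnan(v)]
    return median(finite) if finite else float("nan")

print(nanmedian([1.0, float("nan"), 3.0, 2.0]))  # 2.0
print(nanmedian([float("nan")]))                 # nan
```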
doc_30202 |
Set whether the artist is to be included in layout calculations, e.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). Parameters
in_layout : bool | |
doc_30203 |
Subtract one Laguerre series from another. Returns the difference of two Laguerre series c1 - c2. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series P_0 + 2*P_1 + 3*P_2. Parameters
c1, c2 : array_like
1-D arrays of Laguerre series coefficients ordered from low to high. Returns
out : ndarray
Of Laguerre series coefficients representing their difference. See also
lagadd, lagmulx, lagmul, lagdiv, lagpow
Notes Unlike multiplication, division, etc., the difference of two Laguerre series is a Laguerre series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” Examples
>>> from numpy.polynomial.laguerre import lagsub
>>> lagsub([1, 2, 3, 4], [1, 2, 3])
array([0., 0., 0., 4.]) | |
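The "component-wise" note above can be made concrete without NumPy: pad the shorter coefficient sequence with zeros and subtract element-wise. A minimal sketch (the helper name is hypothetical):

```python
from itertools import zip_longest

def coef_sub(c1, c2):
    """Component-wise difference of two coefficient sequences,
    padding the shorter one with zeros, as lagsub's result suggests."""
    return [a - b for a, b in zip_longest(c1, c2, fillvalue=0)]

print(coef_sub([1, 2, 3, 4], [1, 2, 3]))  # [0, 0, 0, 4]
```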
doc_30204 |
Run score function on (X, y) and get the appropriate features. Parameters
X : array-like of shape (n_samples, n_features)
The training input samples.
y : array-like of shape (n_samples,)
The target values (class labels in classification, real numbers in regression). Returns
self : object | |
doc_30205 | This exception is derived from RuntimeError. In user defined base classes, abstract methods should raise this exception when they require derived classes to override the method, or while the class is being developed to indicate that the real implementation still needs to be added. Note It should not be used to indicate that an operator or method is not meant to be supported at all – in that case either leave the operator / method undefined or, if a subclass, set it to None. Note NotImplementedError and NotImplemented are not interchangeable, even though they have similar names and purposes. See NotImplemented for details on when to use it. | |
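The pattern described above can be sketched as follows (the class and method names are hypothetical, purely for illustration):

```python
import json

class Exporter:
    """Abstract base: subclasses are required to override export()."""
    def export(self, data):
        raise NotImplementedError("subclasses must implement export()")

class JsonExporter(Exporter):
    def export(self, data):
        return json.dumps(data)

print(JsonExporter().export({"a": 1}))  # {"a": 1}
try:
    Exporter().export({})
except NotImplementedError as exc:
    print("not implemented:", exc)
```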
doc_30206 | Create a new element in the current theme, of the given etype which is expected to be either “image”, “from” or “vsapi”. The latter is only available in Tk 8.6a for Windows XP and Vista and is not described here. If “image” is used, args should contain the default image name followed by statespec/value pairs (this is the imagespec), and kw may have the following options:
border=padding
padding is a list of up to four integers, specifying the left, top, right, and bottom borders, respectively.
height=height
Specifies a minimum height for the element. If less than zero, the base image’s height is used as a default.
padding=padding
Specifies the element’s interior padding. Defaults to border’s value if not specified.
sticky=spec
Specifies how the image is placed within the final parcel. spec contains zero or more characters “n”, “s”, “w”, or “e”.
width=width
Specifies a minimum width for the element. If less than zero, the base image’s width is used as a default. If “from” is used as the value of etype, element_create() will clone an existing element. args is expected to contain a themename, from which the element will be cloned, and optionally an element to clone from. If this element to clone from is not specified, an empty element will be used. kw is discarded. | |
doc_30207 | Specifies the UUID layout given in RFC 4122. | |
doc_30208 | Set to the top level directory for the test package. | |
doc_30209 |
Test whether mouseevent occurred on the line. An event is deemed to have occurred "on" the line if it is less than self.pickradius (default: 5 points) away from it. Use get_pickradius or set_pickradius to get or set the pick radius. Parameters
mouseevent : matplotlib.backend_bases.MouseEvent
Returns
contains : bool
Whether any values are within the radius.
details : dict
A dictionary {'ind': pointlist}, where pointlist is a list of points of the line that are within the pickradius around the event position. TODO: sort returned indices by distance | |
doc_30210 | Return a Maildir instance representing the folder whose name is folder. A NoSuchMailboxError exception is raised if the folder does not exist. | |
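A minimal sketch of the folder lookup, using a throwaway temporary mailbox (the folder names here are purely illustrative):

```python
import mailbox
import tempfile

# Create a Maildir, add a subfolder, retrieve it with get_folder();
# asking for a missing folder raises NoSuchMailboxError.
with tempfile.TemporaryDirectory() as tmp:
    box = mailbox.Maildir(tmp, create=True)
    box.add_folder("archive")
    archive = box.get_folder("archive")          # a Maildir instance
    is_maildir = isinstance(archive, mailbox.Maildir)
    try:
        box.get_folder("missing")
        missing_raised = False
    except mailbox.NoSuchMailboxError:
        missing_raised = True

print(is_maildir, missing_raised)  # True True
```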
doc_30211 | Return True if the object argument appears callable, False if not. If this returns True, it is still possible that a call fails, but if it is False, calling object will never succeed. Note that classes are callable (calling a class returns a new instance); instances are callable if their class has a __call__() method. New in version 3.2: This function was first removed in Python 3.0 and then brought back in Python 3.2. | |
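The cases mentioned above (builtins, classes, instances with `__call__`) can be checked directly:

```python
class Greeter:
    def __call__(self):
        return "hi"

print(callable(len))        # True  (builtin function)
print(callable(Greeter))    # True  (classes are callable)
print(callable(Greeter()))  # True  (instance whose class defines __call__)
print(callable(42))         # False (int instances are not callable)
```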
doc_30212 | accessor for ‘no-cache’ | |
doc_30213 | Returns the client address. Changed in version 3.3: Previously, a name lookup was performed. To avoid name resolution delays, it now always returns the IP address. | |
doc_30214 |
Convert y using the unit type of the yaxis. If the artist is not contained in an Axes or if the yaxis does not have units, y itself is returned.
doc_30215 | tf.compat.v1.lite.TFLiteConverter(
graph_def, input_tensors, output_tensors, input_arrays_with_shape=None,
output_arrays=None, experimental_debug_info_func=None
)
This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras model into either a TFLite FlatBuffer or graph visualization. Example usage: # Converting a GraphDef from session.
converter = tf.compat.v1.lite.TFLiteConverter.from_session(
sess, in_tensors, out_tensors)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a GraphDef from file.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a SavedModel.
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(
saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a tf.keras model.
converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(
keras_model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Args
graph_def Frozen TensorFlow GraphDef.
input_tensors List of input tensors. Type and shape are computed using foo.shape and foo.dtype.
output_tensors List of output tensors (only .name is used from this).
input_arrays_with_shape Tuple of strings representing input tensor names and list of integers representing input shapes (e.g., [("foo" : [1, 16, 16, 3])]). Use only when graph cannot be loaded into TensorFlow and when input_tensors and output_tensors are None. (default None)
output_arrays List of output tensors to freeze graph with. Use only when graph cannot be loaded into TensorFlow and when input_tensors and output_tensors are None. (default None)
experimental_debug_info_func An experimental function to retrieve the graph debug info for a set of nodes from the graph_def.
Raises
ValueError Invalid arguments.
Attributes
inference_type Target data type of real-number arrays in the output file. Must be {tf.float32, tf.uint8}. If optimizations are provided, this parameter is ignored. (default tf.float32)
inference_input_type Target data type of real-number input arrays. Allows for a different type for input arrays. If an integer type is provided and optimizations are not used, quantized_input_stats must be provided. If inference_type is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained input model, then inference_input_type defaults to tf.uint8. In all other cases, inference_input_type defaults to tf.float32. Must be {tf.float32, tf.uint8, tf.int8}
inference_output_type Target data type of real-number output arrays. Allows for a different type for output arrays. If inference_type is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained output model, then inference_output_type defaults to tf.uint8. In all other cases, inference_output_type must be tf.float32; an error will be thrown otherwise. Must be {tf.float32, tf.uint8, tf.int8}
output_format Output file format. Currently must be {TFLITE, GRAPHVIZ_DOT}. (default TFLITE)
quantized_input_stats Dict of strings representing input tensor names mapped to tuple of floats representing the mean and standard deviation of the training data (e.g., {"foo" : (0., 1.)}). Only needed if inference_input_type is QUANTIZED_UINT8. real_input_value = (quantized_input_value - mean_value) / std_dev_value. (default {})
default_ranges_stats Tuple of integers representing (min, max) range values for all arrays without a specified range. Intended for experimenting with quantization via "dummy quantization". (default None)
drop_control_dependency Boolean indicating whether to drop control dependencies silently. This is due to TFLite not supporting control dependencies. (default True)
reorder_across_fake_quant Boolean indicating whether to reorder FakeQuant nodes in unexpected locations. Used when the location of the FakeQuant nodes is preventing graph transformations necessary to convert the graph. Results in a graph that differs from the quantized training graph, potentially causing differing arithmetic behavior. (default False)
change_concat_input_ranges Boolean to change behavior of min/max ranges for inputs and outputs of the concat operator for quantized models. Changes the ranges of concat operator overlap when true. (default False)
allow_custom_ops Boolean indicating whether to allow custom operations. When false any unknown operation is an error. When true, custom ops are created for any op that is unknown. The developer will need to provide these to the TensorFlow Lite runtime with a custom resolver. (default False)
post_training_quantize Deprecated. Please specify [Optimize.DEFAULT] for optimizations instead. Boolean indicating whether to quantize the weights of the converted float model. Model size will be reduced and there will be latency improvements (at the cost of accuracy). (default False)
dump_graphviz_dir Full filepath of folder to dump the graphs at various stages of processing GraphViz .dot files. Preferred over --output_format=GRAPHVIZ_DOT in order to keep the requirements of the output file. (default None)
dump_graphviz_video Boolean indicating whether to dump the graph after every graph transformation. (default False)
conversion_summary_dir A string indicating the path to the generated conversion logs.
target_ops Deprecated. Please specify target_spec.supported_ops instead. Set of OpsSet options indicating which converter to use. (default set([OpsSet.TFLITE_BUILTINS]))
target_spec Experimental flag, subject to change. Specification of target device.
optimizations Experimental flag, subject to change. A list of optimizations to apply when converting the model. E.g. [Optimize.DEFAULT]
representative_dataset A representative dataset that can be used to generate input and output samples for the model. The converter can use the dataset to evaluate different optimizations.
experimental_new_converter Experimental flag, subject to change. Enables MLIR-based conversion instead of TOCO conversion. (default True) Methods convert
convert()
Converts a TensorFlow GraphDef based on instance variables.
Returns The converted data in serialized format. Either a TFLite Flatbuffer or a Graphviz graph depending on value in output_format.
Raises
ValueError Input shape is not specified. None value for dimension in input_tensor. from_frozen_graph
@classmethod
from_frozen_graph(
graph_def_file, input_arrays, output_arrays, input_shapes=None
)
Creates a TFLiteConverter class from a file containing a frozen GraphDef.
Args
graph_def_file Full filepath of file containing frozen GraphDef.
input_arrays List of input tensors to freeze graph with.
output_arrays List of output tensors to freeze graph with.
input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None)
Returns TFLiteConverter class.
Raises
IOError File not found. Unable to parse input file.
ValueError The graph is not frozen. input_arrays or output_arrays contains an invalid tensor name. input_shapes is not correctly defined when required. from_keras_model_file
@classmethod
from_keras_model_file(
model_file, input_arrays=None, input_shapes=None, output_arrays=None,
custom_objects=None
)
Creates a TFLiteConverter class from a tf.keras model file.
Args
model_file Full filepath of HDF5 file containing the tf.keras model.
input_arrays List of input tensors to freeze graph with. Uses input arrays from SignatureDef when none are provided. (default None)
input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None)
output_arrays List of output tensors to freeze graph with. Uses output arrays from SignatureDef when none are provided. (default None)
custom_objects Dict mapping names (strings) to custom classes or functions to be considered during model deserialization. (default None)
Returns TFLiteConverter class.
from_saved_model
@classmethod
from_saved_model(
saved_model_dir, input_arrays=None, input_shapes=None, output_arrays=None,
tag_set=None, signature_key=None
)
Creates a TFLiteConverter class from a SavedModel.
Args
saved_model_dir SavedModel directory to convert.
input_arrays List of input tensors to freeze graph with. Uses input arrays from SignatureDef when none are provided. (default None)
input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None)
output_arrays List of output tensors to freeze graph with. Uses output arrays from SignatureDef when none are provided. (default None)
tag_set Set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be present. (default set("serve"))
signature_key Key identifying SignatureDef containing inputs and outputs. (default DEFAULT_SERVING_SIGNATURE_DEF_KEY)
Returns TFLiteConverter class.
from_session
@classmethod
from_session(
sess, input_tensors, output_tensors
)
Creates a TFLiteConverter class from a TensorFlow Session.
Args
sess TensorFlow Session.
input_tensors List of input tensors. Type and shape are computed using foo.shape and foo.dtype.
output_tensors List of output tensors (only .name is used from this).
Returns TFLiteConverter class.
get_input_arrays
get_input_arrays()
Returns a list of the names of the input tensors.
Returns List of strings. | |
doc_30216 |
Set the norm limits for image scaling. Parameters
vmin, vmax : float
The limits. The limits may also be passed as a tuple (vmin, vmax) as a single positional argument. | |
doc_30217 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance. | |
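The `<component>__<parameter>` routing described above can be sketched in plain Python: split each key on the first `__` and delegate to the named sub-object. Everything below (the free function, `Scaler`, `Pipeline`) is a hypothetical stand-in, not the scikit-learn implementation:

```python
def set_params(obj, **params):
    """Route dotted-style '<component>__<parameter>' keys to nested objects."""
    for key, value in params.items():
        if "__" in key:
            component, _, sub_key = key.partition("__")
            set_params(getattr(obj, component), **{sub_key: value})
        else:
            setattr(obj, key, value)
    return obj

class Scaler:          # hypothetical pipeline step
    factor = 1

class Pipeline:        # hypothetical nested estimator
    def __init__(self):
        self.scaler = Scaler()

pipe = set_params(Pipeline(), scaler__factor=10)
print(pipe.scaler.factor)  # 10
```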
doc_30218 | A data structure of functions to call to pass extra context values when rendering templates, in the format {scope: [functions]}. The scope key is the name of a blueprint the functions are active for, or None for all requests. To register a function, use the context_processor() decorator. This data structure is internal. It should not be modified directly and its format may change at any time. | |
doc_30219 |
Return selected slices of this array along given axis. Refer to numpy.compress for full documentation. See also numpy.compress
equivalent function | |
doc_30220 |
Return a list of URLs, one for each element of the collection. The list contains None for elements without a URL. See Hyperlinks for an example. | |
doc_30221 |
Return the marker edge width in points. See also set_markeredgewidth. | |
doc_30222 |
Return the label used for this artist in the legend. | |
doc_30223 |
Reset the axes stack. | |
doc_30224 |
Slice substrings from each element in the Series or Index. Parameters
start : int, optional
Start position for slice operation.
stop : int, optional
Stop position for slice operation.
step : int, optional
Step size for slice operation. Returns
Series or Index of object
Series or Index from sliced substring from original string object. See also Series.str.slice_replace
Replace a slice with a string. Series.str.get
Return element at position. Equivalent to Series.str.slice(start=i, stop=i+1) with i being the position. Examples
>>> s = pd.Series(["koala", "dog", "chameleon"])
>>> s
0 koala
1 dog
2 chameleon
dtype: object
>>> s.str.slice(start=1)
0 oala
1 og
2 hameleon
dtype: object
>>> s.str.slice(start=-1)
0 a
1 g
2 n
dtype: object
>>> s.str.slice(stop=2)
0 ko
1 do
2 ch
dtype: object
>>> s.str.slice(step=2)
0 kaa
1 dg
2 caeen
dtype: object
>>> s.str.slice(start=0, stop=5, step=3)
0 kl
1 d
2 cm
dtype: object
Equivalent behaviour to:
>>> s.str[0:5:3]
0 kl
1 d
2 cm
dtype: object | |
doc_30225 | Mock objects limit the results of dir(some_mock) to useful results. For mocks with a spec this includes all the permitted attributes for the mock. See FILTER_DIR for what this filtering does, and how to switch it off. | |
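The spec-based filtering can be seen directly with the standard library (the `Service` class is a hypothetical example):

```python
from unittest import mock

class Service:
    def fetch(self): ...

# With a spec, dir() on the mock is limited to the spec's attributes,
# and attributes outside the spec are rejected.
m = mock.Mock(spec=Service)
print("fetch" in dir(m))   # True
print(hasattr(m, "push"))  # False: not part of the spec
```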
doc_30226 | mmap.MADV_RANDOM
mmap.MADV_SEQUENTIAL
mmap.MADV_WILLNEED
mmap.MADV_DONTNEED
mmap.MADV_REMOVE
mmap.MADV_DONTFORK
mmap.MADV_DOFORK
mmap.MADV_HWPOISON
mmap.MADV_MERGEABLE
mmap.MADV_UNMERGEABLE
mmap.MADV_SOFT_OFFLINE
mmap.MADV_HUGEPAGE
mmap.MADV_NOHUGEPAGE
mmap.MADV_DONTDUMP
mmap.MADV_DODUMP
mmap.MADV_FREE
mmap.MADV_NOSYNC
mmap.MADV_AUTOSYNC
mmap.MADV_NOCORE
mmap.MADV_CORE
mmap.MADV_PROTECT
These options can be passed to mmap.madvise(). Not every option will be present on every system. Availability: Systems with the madvise() system call. New in version 3.8. | |
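Since not every constant is present on every system, portable code probes for them before use. A minimal sketch, guarded so it also runs where madvise() is unavailable:

```python
import mmap

# Collect whichever MADV_* constants this platform exposes.
available = sorted(name for name in dir(mmap) if name.startswith("MADV_"))
print(available)

# Advise the kernel only if both the method and the constant exist here.
if hasattr(mmap, "madvise") and hasattr(mmap, "MADV_SEQUENTIAL"):
    m = mmap.mmap(-1, mmap.PAGESIZE)     # anonymous page-sized mapping
    m.madvise(mmap.MADV_SEQUENTIAL)      # hint: expect sequential access
    m.close()
```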
doc_30227 | tf.compat.v1.map_fn(
fn, elems, dtype=None, parallel_iterations=None, back_prop=True,
swap_memory=False, infer_shape=True, name=None, fn_output_signature=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (dtype). They will be removed in a future version. Instructions for updating: Use fn_output_signature instead See also tf.scan. map_fn unstacks elems on axis 0 to obtain a sequence of elements; calls fn to transform each element; and then stacks the transformed values back together. Mapping functions with single-Tensor inputs and outputs If elems is a single tensor and fn's signature is tf.Tensor->tf.Tensor, then map_fn(fn, elems) is equivalent to tf.stack([fn(elem) for elem in tf.unstack(elems)]). E.g.:
tf.map_fn(fn=lambda t: tf.range(t, t + 3), elems=tf.constant([3, 5, 2]))
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[3, 4, 5],
[5, 6, 7],
[2, 3, 4]], dtype=int32)>
map_fn(fn, elems).shape = [elems.shape[0]] + fn(elems[0]).shape. Mapping functions with multi-arity inputs and outputs map_fn also supports functions with multi-arity inputs and outputs: If elems is a tuple (or nested structure) of tensors, then those tensors must all have the same outer-dimension size (num_elems); and fn is used to transform each tuple (or structure) of corresponding slices from elems. E.g., if elems is a tuple (t1, t2, t3), then fn is used to transform each tuple of slices (t1[i], t2[i], t3[i]) (where 0 <= i < num_elems). If fn returns a tuple (or nested structure) of tensors, then the result is formed by stacking corresponding elements from those structures. Specifying fn's output signature If fn's input and output signatures are different, then the output signature must be specified using fn_output_signature. (The input and output signatures differ if their structures, dtypes, or tensor types do not match). E.g.:
tf.map_fn(fn=tf.strings.length, # input & output have different dtypes
elems=tf.constant(["hello", "moon"]),
fn_output_signature=tf.int32)
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([5, 4], dtype=int32)>
tf.map_fn(fn=tf.strings.join, # input & output have different structures
elems=[tf.constant(['The', 'A']), tf.constant(['Dog', 'Cat'])],
fn_output_signature=tf.string)
<tf.Tensor: shape=(2,), dtype=string,
numpy=array([b'TheDog', b'ACat'], dtype=object)>
fn_output_signature can be specified using any of the following: A tf.DType or tf.TensorSpec (to describe a tf.Tensor) A tf.RaggedTensorSpec (to describe a tf.RaggedTensor) A tf.SparseTensorSpec (to describe a tf.sparse.SparseTensor) A (possibly nested) tuple, list, or dict containing the above types. RaggedTensors map_fn supports tf.RaggedTensor inputs and outputs. In particular:
If elems is a RaggedTensor, then fn will be called with each row of that ragged tensor. If elems has only one ragged dimension, then the values passed to fn will be tf.Tensors. If elems has multiple ragged dimensions, then the values passed to fn will be tf.RaggedTensors with one fewer ragged dimension.
If the result of map_fn should be a RaggedTensor, then use a tf.RaggedTensorSpec to specify fn_output_signature. If fn returns tf.Tensors with varying sizes, then use a tf.RaggedTensorSpec with ragged_rank=0 to combine them into a single ragged tensor (which will have ragged_rank=1). If fn returns tf.RaggedTensors, then use a tf.RaggedTensorSpec with the same ragged_rank.
# Example: RaggedTensor input
rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])
tf.map_fn(tf.reduce_sum, rt, fn_output_signature=tf.int32)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([6, 0, 9, 6], dtype=int32)>
# Example: RaggedTensor output
elems = tf.constant([3, 5, 0, 2])
tf.map_fn(tf.range, elems,
fn_output_signature=tf.RaggedTensorSpec(shape=[None],
dtype=tf.int32))
<tf.RaggedTensor [[0, 1, 2], [0, 1, 2, 3, 4], [], [0, 1]]>
Note: map_fn should only be used if you need to map a function over the rows of a RaggedTensor. If you wish to map a function over the individual values, then you should use:
tf.ragged.map_flat_values(fn, rt) (if fn is expressible as TensorFlow ops)
rt.with_flat_values(map_fn(fn, rt.flat_values)) (otherwise) E.g.:
rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])
tf.ragged.map_flat_values(lambda x: x + 2, rt)
<tf.RaggedTensor [[3, 4, 5], [], [6, 7], [8]]>
SparseTensors map_fn supports tf.sparse.SparseTensor inputs and outputs. In particular: If elems is a SparseTensor, then fn will be called with each row of that sparse tensor. In particular, the value passed to fn will be a tf.sparse.SparseTensor with one fewer dimension than elems. If the result of map_fn should be a SparseTensor, then use a tf.SparseTensorSpec to specify fn_output_signature. The individual SparseTensors returned by fn will be stacked into a single SparseTensor with one more dimension.
# Example: SparseTensor input
st = tf.sparse.SparseTensor([[0, 0], [2, 0], [2, 1]], [2, 3, 4], [4, 4])
tf.map_fn(tf.sparse.reduce_sum, st, fn_output_signature=tf.int32)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([2, 0, 7, 0], dtype=int32)>
# Example: SparseTensor output
tf.sparse.to_dense(
tf.map_fn(tf.sparse.eye, tf.constant([2, 3]),
fn_output_signature=tf.SparseTensorSpec(None, tf.float32)))
<tf.Tensor: shape=(2, 3, 3), dtype=float32, numpy=
array([[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 0.]],
[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]], dtype=float32)>
Note: map_fn should only be used if you need to map a function over the rows of a SparseTensor. If you wish to map a function over the nonzero values, then you should use:
If the function is expressible as TensorFlow ops, use: tf.sparse.SparseTensor(st.indices, fn(st.values), st.dense_shape)
Otherwise, use: tf.sparse.SparseTensor(st.indices, tf.map_fn(fn, st.values),
st.dense_shape)
map_fn vs. vectorized operations map_fn will apply the operations used by fn to each element of elems, resulting in O(elems.shape[0]) total operations. This is somewhat mitigated by the fact that map_fn can process elements in parallel. However, a transform expressed using map_fn is still typically less efficient than an equivalent transform expressed using vectorized operations. map_fn should typically only be used if one of the following is true: It is difficult or expensive to express the desired transform with vectorized operations.
fn creates large intermediate values, so an equivalent vectorized transform would take too much memory. Processing elements in parallel is more efficient than an equivalent vectorized transform. Efficiency of the transform is not critical, and using map_fn is more readable. E.g., the example given above that maps fn=lambda t: tf.range(t, t + 3) across elems could be rewritten more efficiently using vectorized ops:
elems = tf.constant([3, 5, 2])
tf.range(3) + tf.expand_dims(elems, 1)
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[3, 4, 5],
[5, 6, 7],
[2, 3, 4]], dtype=int32)>
In some cases, tf.vectorized_map can be used to automatically convert a function to a vectorized equivalent. Eager execution When executing eagerly, map_fn does not execute in parallel even if parallel_iterations is set to a value > 1. You can still get the performance benefits of running a function in parallel by using the tf.function decorator:
fn=lambda t: tf.range(t, t + 3)
@tf.function
def func(elems):
return tf.map_fn(fn, elems, parallel_iterations=3)
func(tf.constant([3, 5, 2]))
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[3, 4, 5],
[5, 6, 7],
[2, 3, 4]], dtype=int32)>
Note: if you use the tf.function decorator, any non-TensorFlow Python code that you may have written in your function won't get executed. See tf.function for more details. The recommendation would be to debug without tf.function but switch to it to get performance benefits of running map_fn in parallel.
Args
fn The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems. Its output must have the same structure as fn_output_signature if one is provided; otherwise it must have the same structure as elems.
elems A tensor or (possibly nested) sequence of tensors, each of which will be unstacked along their first dimension. fn will be applied to the nested sequence of the resulting slices. elems may include ragged and sparse tensors. elems must consist of at least one tensor.
dtype Deprecated: Equivalent to fn_output_signature.
parallel_iterations (optional) The number of iterations allowed to run in parallel. When graph building, the default value is 10. While executing eagerly, the default value is set to 1.
back_prop (optional) False disables support for back propagation.
swap_memory (optional) True enables GPU-CPU memory swapping.
infer_shape (optional) False disables tests for consistent output shapes.
name (optional) Name prefix for the returned tensors.
fn_output_signature The output signature of fn. Must be specified if fn's input and output signatures are different (i.e., if their structures, dtypes, or tensor types do not match). fn_output_signature can be specified using any of the following: A tf.DType or tf.TensorSpec (to describe a tf.Tensor) A tf.RaggedTensorSpec (to describe a tf.RaggedTensor) A tf.SparseTensorSpec (to describe a tf.sparse.SparseTensor) A (possibly nested) tuple, list, or dict containing the above types.
Returns A tensor or (possibly nested) sequence of tensors. Each tensor stacks the results of applying fn to tensors unstacked from elems along the first dimension, from first to last. The result may include ragged and sparse tensors.
Raises
TypeError if fn is not callable or the structure of the output of fn and fn_output_signature do not match.
ValueError if the lengths of the output of fn and fn_output_signature do not match, or if the elems does not contain any tensor. Examples:
elems = np.array([1, 2, 3, 4, 5, 6])
tf.map_fn(lambda x: x * x, elems)
<tf.Tensor: shape=(6,), dtype=int64, numpy=array([ 1, 4, 9, 16, 25, 36])>
elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
tf.map_fn(lambda x: x[0] * x[1], elems, fn_output_signature=tf.int64)
<tf.Tensor: shape=(3,), dtype=int64, numpy=array([-1, 2, -3])>
elems = np.array([1, 2, 3])
tf.map_fn(lambda x: (x, -x), elems,
fn_output_signature=(tf.int64, tf.int64))
(<tf.Tensor: shape=(3,), dtype=int64, numpy=array([1, 2, 3])>,
<tf.Tensor: shape=(3,), dtype=int64, numpy=array([-1, -2, -3])>) | |
doc_30228 | See Migration guide for more details. tf.compat.v1.strings.reduce_join
tf.compat.v1.reduce_join(
inputs, axis=None, keep_dims=None, separator='', name=None,
reduction_indices=None, keepdims=None
)
tf.strings.reduce_join([['abc','123'],
['def','456']]).numpy()
b'abc123def456'
tf.strings.reduce_join([['abc','123'],
['def','456']], axis=-1).numpy()
array([b'abc123', b'def456'], dtype=object)
tf.strings.reduce_join([['abc','123'],
['def','456']],
axis=-1,
separator=" ").numpy()
array([b'abc 123', b'def 456'], dtype=object)
Args
inputs A tf.string tensor.
axis Which axis to join along. The default behavior is to join all elements, producing a scalar.
keepdims If true, retains reduced dimensions with length 1.
separator a string added between each string being joined.
name A name for the operation (optional).
Returns A tf.string tensor. | |
doc_30229 | Computes the eigenvalues and eigenvectors of a real square matrix. Note Since eigenvalues and eigenvectors might be complex, backward pass is supported only if eigenvalues and eigenvectors are all real valued. When input is on CUDA, torch.eig() causes host-device synchronization. Parameters
input (Tensor) – the square matrix of shape (n × n) for which the eigenvalues and eigenvectors will be computed
eigenvectors (bool) – True to compute both eigenvalues and eigenvectors; otherwise, only eigenvalues will be computed Keyword Arguments
out (tuple, optional) – the output tensors Returns
A namedtuple (eigenvalues, eigenvectors) containing
eigenvalues (Tensor): Shape (n × 2). Each row is an eigenvalue of input, where the first element is the real part and the second element is the imaginary part. The eigenvalues are not necessarily ordered.
eigenvectors (Tensor): If eigenvectors=False, it’s an empty tensor. Otherwise, this tensor of shape (n × n) can be used to compute normalized (unit length) eigenvectors of corresponding eigenvalues as follows. If the corresponding eigenvalues[j] is a real number, column eigenvectors[:, j] is the eigenvector corresponding to eigenvalues[j]. If the corresponding eigenvalues[j] and eigenvalues[j + 1] form a complex conjugate pair, then the true eigenvectors can be computed as true_eigenvector[j] = eigenvectors[:, j] + i × eigenvectors[:, j + 1], true_eigenvector[j + 1] = eigenvectors[:, j] - i × eigenvectors[:, j + 1]. Return type
(Tensor, Tensor) Example: Trivial example with a diagonal matrix. By default, only eigenvalues are computed:
>>> a = torch.diag(torch.tensor([1, 2, 3], dtype=torch.double))
>>> e, v = torch.eig(a)
>>> e
tensor([[1., 0.],
[2., 0.],
[3., 0.]], dtype=torch.float64)
>>> v
tensor([], dtype=torch.float64)
Compute also the eigenvectors:
>>> e, v = torch.eig(a, eigenvectors=True)
>>> e
tensor([[1., 0.],
[2., 0.],
[3., 0.]], dtype=torch.float64)
>>> v
tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]], dtype=torch.float64) | |
doc_30230 |
Return (a % values), that is, pre-Python 2.6 string formatting (interpolation), element-wise for a pair of array_likes of str or unicode. Parameters
a : array_like of str or unicode
values : array_like of values
These values will be element-wise interpolated into the string. Returns
out : ndarray
Output array of str or unicode, depending on input types. See also str.__mod__
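As a quick sketch, the element-wise interpolation described above can be seen with np.char.mod (the arrays here are illustrative):

```python
import numpy as np

# Each format string in `a` is interpolated with the matching
# element of `values`, i.e. a[i] % values[i].
a = np.array(['%d items', '%d errors'])
values = np.array([3, 0])
result = np.char.mod(a, values)
print(result)  # ['3 items' '0 errors']
```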
doc_30231 | Any value error related to the address. | |
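Assuming this describes ipaddress.AddressValueError (an inference from the wording, which matches that exception's docstring), a minimal sketch:

```python
import ipaddress

# AddressValueError is a ValueError subclass raised for malformed
# address values; 999 is not a valid IPv4 octet.
try:
    ipaddress.IPv4Address('999.0.0.1')
except ipaddress.AddressValueError as exc:
    print('invalid address:', exc)
```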
doc_30232 | Convert an integer number to an octal string prefixed with “0o”. The result is a valid Python expression. If x is not a Python int object, it has to define an __index__() method that returns an integer. For example: >>> oct(8)
'0o10'
>>> oct(-56)
'-0o70'
If you want to convert an integer number to an octal string either with the prefix “0o” or not, you can use either of the following ways. >>> '%#o' % 10, '%o' % 10
('0o12', '12')
>>> format(10, '#o'), format(10, 'o')
('0o12', '12')
>>> f'{10:#o}', f'{10:o}'
('0o12', '12')
See also format() for more information. | |
doc_30233 |
If prop is None, return a list of strings of all settable properties and their valid values. If prop is not None, it must be a valid property name, and that property will be returned as a string of the form property : valid values.
doc_30234 | Building URLs works pretty much the other way round. Instead of match you call build and pass it the endpoint and a dict of arguments for the placeholders. The build function also accepts an argument called force_external which, if you set it to True will force external URLs. Per default external URLs (include the server name) will only be used if the target URL is on a different subdomain. >>> m = Map([
... Rule('/', endpoint='index'),
... Rule('/downloads/', endpoint='downloads/index'),
... Rule('/downloads/<int:id>', endpoint='downloads/show')
... ])
>>> urls = m.bind("example.com", "/")
>>> urls.build("index", {})
'/'
>>> urls.build("downloads/show", {'id': 42})
'/downloads/42'
>>> urls.build("downloads/show", {'id': 42}, force_external=True)
'http://example.com/downloads/42'
Because URLs cannot contain non-ASCII data, you will always get bytes back. Non-ASCII characters are urlencoded with the charset defined on the map instance. Additional values are converted to strings and appended to the URL as URL querystring parameters: >>> urls.build("index", {'q': 'My Searchstring'})
'/?q=My+Searchstring'
When processing those additional values, lists are furthermore interpreted as multiple values (as per werkzeug.datastructures.MultiDict): >>> urls.build("index", {'q': ['a', 'b', 'c']})
'/?q=a&q=b&q=c'
Passing a MultiDict will also add multiple values: >>> urls.build("index", MultiDict((('p', 'z'), ('q', 'a'), ('q', 'b'))))
'/?p=z&q=a&q=b'
If a rule does not exist when building, a BuildError exception is raised. The build method accepts an argument called method which allows you to specify the method you want a URL built for if you have different methods for the same endpoint specified. Parameters
endpoint (str) – the endpoint of the URL to build.
values (Optional[Mapping[str, Any]]) – the values for the URL to build. Unhandled values are appended to the URL as query parameters.
method (Optional[str]) – the HTTP method for the rule if there are different URLs for different methods on the same endpoint.
force_external (bool) – enforce full canonical external URLs. If the URL scheme is not provided, this will generate a protocol-relative URL.
append_unknown (bool) – unknown parameters are appended to the generated URL as query string argument. Disable this if you want the builder to ignore those.
url_scheme (Optional[str]) – Scheme to use in place of the bound url_scheme. Return type
str Changed in version 2.0: Added the url_scheme parameter. Changelog New in version 0.6: Added the append_unknown parameter. | |
doc_30235 |
Bases: matplotlib.dviread.Dvi A virtual font (*.vf file) containing subroutines for dvi files. Parameters
filename : str or path-like
Notes The virtual font format is a derivative of dvi: http://mirrors.ctan.org/info/knuth/virtual-fonts This class reuses some of the machinery of Dvi but replaces the _read loop and dispatch mechanism. Examples vf = Vf(filename)
glyph = vf[code]
glyph.text, glyph.boxes, glyph.width
Read the data from the file named filename and convert TeX's internal units to units of dpi per inch. dpi only sets the units and does not limit the resolution. Use None to return TeX's internal units. | |
doc_30236 |
Test whether input is an instance of MaskedArray. This function returns True if x is an instance of MaskedArray and returns False otherwise. Any object is accepted as input. Parameters
x : object
Object to test. Returns
result : bool
True if x is a MaskedArray. See also isMA
Alias to isMaskedArray. isarray
Alias to isMaskedArray. Examples >>> import numpy.ma as ma
>>> a = np.eye(3, 3)
>>> a
array([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
>>> m = ma.masked_values(a, 0)
>>> m
masked_array(
data=[[1.0, --, --],
[--, 1.0, --],
[--, --, 1.0]],
mask=[[False, True, True],
[ True, False, True],
[ True, True, False]],
fill_value=0.0)
>>> ma.isMaskedArray(a)
False
>>> ma.isMaskedArray(m)
True
>>> ma.isMaskedArray([0, 1, 2])
False | |
doc_30237 |
Calculate the euclidean distances in the presence of missing values. Compute the euclidean distance between each pair of samples in X and Y, where Y=X is assumed if Y=None. When calculating the distance between a pair of samples, this formulation ignores feature coordinates with a missing value in either sample and scales up the weight of the remaining coordinates: dist(x,y) = sqrt(weight * sq. distance from present coordinates) where, weight = Total # of coordinates / # of present coordinates For example, the distance between [3, na, na, 6] and [1, na, 4, 5] is: \[\sqrt{\frac{4}{2}((3-1)^2 + (6-5)^2)}\] If all the coordinates are missing or if there are no common present coordinates then NaN is returned for that pair. Read more in the User Guide. New in version 0.22. Parameters
X : array-like of shape (n_samples_X, n_features)
Y : array-like of shape (n_samples_Y, n_features), default=None
squared : bool, default=False
Return squared Euclidean distances.
missing_values : np.nan or int, default=np.nan
Representation of missing value.
copy : bool, default=True
Make and use a deep copy of X and Y (if Y exists). Returns
distances : ndarray of shape (n_samples_X, n_samples_Y)
See also
paired_distances
Distances between pairs of elements of X and Y. References John K. Dixon, “Pattern Recognition with Partly Missing Data”, IEEE Transactions on Systems, Man, and Cybernetics, Volume: 9, Issue: 10, pp. 617 - 621, Oct. 1979. http://ieeexplore.ieee.org/abstract/document/4310090/
Examples >>> from sklearn.metrics.pairwise import nan_euclidean_distances
>>> nan = float("NaN")
>>> X = [[0, 1], [1, nan]]
>>> nan_euclidean_distances(X, X) # distance between rows of X
array([[0. , 1.41421356],
[1.41421356, 0. ]])
>>> # get distance to origin
>>> nan_euclidean_distances(X, [[0, 0]])
array([[1. ],
[1.41421356]]) | |
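The worked formula from the text (the pair [3, nan, nan, 6] vs. [1, nan, 4, 5]) can also be checked numerically; a sketch, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.metrics.pairwise import nan_euclidean_distances

nan = float("nan")
# Present-in-both coordinates: indices 0 and 3, so weight = 4 / 2.
d = nan_euclidean_distances([[3, nan, nan, 6]], [[1, nan, 4, 5]])
print(d[0, 0])  # sqrt(2 * ((3-1)**2 + (6-5)**2)) = sqrt(10), about 3.1623
```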
doc_30238 |
Chebyshev series whose graph is a straight line. Parameters
off, scl : scalars
The specified line is given by off + scl*x. Returns
y : ndarray
This module’s representation of the Chebyshev series for off + scl*x. See also numpy.polynomial.polynomial.polyline
numpy.polynomial.legendre.legline
numpy.polynomial.laguerre.lagline
numpy.polynomial.hermite.hermline
numpy.polynomial.hermite_e.hermeline
Examples >>> import numpy.polynomial.chebyshev as C
>>> C.chebline(3,2)
array([3, 2])
>>> C.chebval(-3, C.chebline(3,2)) # should be -3
-3.0 | |
doc_30239 | When passed a quoted tag it will check if this tag is part of the set. If the tag is weak it is checked against weak and strong tags, otherwise strong only. | |
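A short sketch, assuming this describes ETags.contains_raw from werkzeug.datastructures (an inference from the wording; the tag values are illustrative):

```python
from werkzeug.datastructures import ETags

etags = ETags(strong_etags=['abc'], weak_etags=['xyz'])

print(etags.contains_raw('"abc"'))    # strong quoted tag: checked against strong set
print(etags.contains_raw('W/"xyz"'))  # weak tag: checked against weak and strong sets
print(etags.contains_raw('"xyz"'))    # strong form of a weak-only tag: strong check fails
```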
doc_30240 | Alias for clamp_(). | |
doc_30241 | Display url using the browser handled by this controller. If new is 1, a new browser window is opened if possible. If new is 2, a new browser page (“tab”) is opened if possible. | |
doc_30242 | A generic version of contextlib.AbstractContextManager. New in version 3.5.4. New in version 3.6.0. Deprecated since version 3.9: contextlib.AbstractContextManager now supports []. See PEP 585 and Generic Alias Type. | |
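A brief sketch of the replacement the deprecation note points to: since Python 3.9, contextlib.AbstractContextManager itself supports subscription (PEP 585), so the typing alias is no longer needed:

```python
from contextlib import AbstractContextManager

# Equivalent to the deprecated typing.ContextManager[int]
Alias = AbstractContextManager[int]
print(Alias)
```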
doc_30243 | A dictionary or other mapping object used to store an object’s (writable) attributes. | |
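A minimal illustration of the attribute dictionary (the class here is made up for the example):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# Writable instance attributes live in __dict__ ...
print(p.__dict__)  # {'x': 1, 'y': 2}
# ... and writing to __dict__ creates an attribute.
p.__dict__['z'] = 3
print(p.z)  # 3
```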
doc_30244 | Command that was used to spawn the child process. | |
doc_30245 | From the AuthenticationMiddleware: An instance of AUTH_USER_MODEL representing the currently logged-in user. If the user isn’t currently logged in, user will be set to an instance of AnonymousUser. You can tell them apart with is_authenticated, like so: if request.user.is_authenticated:
... # Do something for logged-in users.
else:
... # Do something for anonymous users. | |
doc_30246 | asyncio.isfuture(obj)
Return True if obj is any of: an instance of asyncio.Future, an instance of asyncio.Task, or a Future-like object with a _asyncio_future_blocking attribute. New in version 3.5.
asyncio.ensure_future(obj, *, loop=None)
Return:
obj argument as is, if obj is a Future, a Task, or a Future-like object (isfuture() is used for the test).
a Task object wrapping obj, if obj is a coroutine (iscoroutine() is used for the test); in this case the coroutine will be scheduled by ensure_future().
a Task object that would await on obj, if obj is an awaitable (inspect.isawaitable() is used for the test).
If obj is neither of the above, a TypeError is raised. Important See also the create_task() function which is the preferred way for creating new Tasks. Changed in version 3.5.1: The function accepts any awaitable object.
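The first two cases above can be sketched as follows (the coroutine name is made up for the example):

```python
import asyncio

async def coro():
    return 42

async def main():
    # A coroutine is wrapped in a Task and scheduled ...
    task = asyncio.ensure_future(coro())
    assert asyncio.isfuture(task)

    # ... while a Future-like object is returned as is.
    fut = asyncio.get_running_loop().create_future()
    assert asyncio.ensure_future(fut) is fut
    fut.cancel()

    return await task

print(asyncio.run(main()))  # 42
```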
asyncio.wrap_future(future, *, loop=None)
Wrap a concurrent.futures.Future object in a asyncio.Future object.
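A small sketch of bridging a thread-pool future into a coroutine with wrap_future (the worker function is made up for the example):

```python
import asyncio
import concurrent.futures

def blocking_sum():
    return sum(range(10))

async def main():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        cf = pool.submit(blocking_sum)          # concurrent.futures.Future
        result = await asyncio.wrap_future(cf)  # awaitable asyncio.Future
    return result

print(asyncio.run(main()))  # 45
```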
Future Object
class asyncio.Future(*, loop=None)
A Future represents an eventual result of an asynchronous operation. Not thread-safe. Future is an awaitable object. Coroutines can await on Future objects until they either have a result or an exception set, or until they are cancelled. Typically Futures are used to enable low-level callback-based code (e.g. in protocols implemented using asyncio transports) to interoperate with high-level async/await code. The rule of thumb is to never expose Future objects in user-facing APIs, and the recommended way to create a Future object is to call loop.create_future(). This way alternative event loop implementations can inject their own optimized implementations of a Future object. Changed in version 3.7: Added support for the contextvars module.
result()
Return the result of the Future. If the Future is done and has a result set by the set_result() method, the result value is returned. If the Future is done and has an exception set by the set_exception() method, this method raises the exception. If the Future has been cancelled, this method raises a CancelledError exception. If the Future’s result isn’t yet available, this method raises an InvalidStateError exception.
set_result(result)
Mark the Future as done and set its result. Raises an InvalidStateError error if the Future is already done.
set_exception(exception)
Mark the Future as done and set an exception. Raises an InvalidStateError error if the Future is already done.
done()
Return True if the Future is done. A Future is done if it was cancelled or if it has a result or an exception set with set_result() or set_exception() calls.
cancelled()
Return True if the Future was cancelled. The method is usually used to check if a Future is not cancelled before setting a result or an exception for it: if not fut.cancelled():
fut.set_result(42)
add_done_callback(callback, *, context=None)
Add a callback to be run when the Future is done. The callback is called with the Future object as its only argument. If the Future is already done when this method is called, the callback is scheduled with loop.call_soon(). An optional keyword-only context argument allows specifying a custom contextvars.Context for the callback to run in. The current context is used when no context is provided. functools.partial() can be used to pass parameters to the callback, e.g.: # Call 'print("Future:", fut)' when "fut" is done.
fut.add_done_callback(
functools.partial(print, "Future:"))
Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.
remove_done_callback(callback)
Remove callback from the callbacks list. Returns the number of callbacks removed, which is typically 1, unless a callback was added more than once.
cancel(msg=None)
Cancel the Future and schedule callbacks. If the Future is already done or cancelled, return False. Otherwise, change the Future’s state to cancelled, schedule the callbacks, and return True. Changed in version 3.9: Added the msg parameter.
exception()
Return the exception that was set on this Future. The exception (or None if no exception was set) is returned only if the Future is done. If the Future has been cancelled, this method raises a CancelledError exception. If the Future isn’t done yet, this method raises an InvalidStateError exception.
get_loop()
Return the event loop the Future object is bound to. New in version 3.7.
This example creates a Future object, creates and schedules an asynchronous Task to set result for the Future, and waits until the Future has a result: async def set_after(fut, delay, value):
# Sleep for *delay* seconds.
await asyncio.sleep(delay)
# Set *value* as a result of *fut* Future.
fut.set_result(value)
async def main():
# Get the current event loop.
loop = asyncio.get_running_loop()
# Create a new Future object.
fut = loop.create_future()
# Run "set_after()" coroutine in a parallel Task.
# We are using the low-level "loop.create_task()" API here because
# we already have a reference to the event loop at hand.
# Otherwise we could have just used "asyncio.create_task()".
loop.create_task(
set_after(fut, 1, '... world'))
print('hello ...')
# Wait until *fut* has a result (1 second) and print it.
print(await fut)
asyncio.run(main())
Important The Future object was designed to mimic concurrent.futures.Future. Key differences include: unlike asyncio Futures, concurrent.futures.Future instances cannot be awaited.
asyncio.Future.result() and asyncio.Future.exception() do not accept the timeout argument.
asyncio.Future.result() and asyncio.Future.exception() raise an InvalidStateError exception when the Future is not done. Callbacks registered with asyncio.Future.add_done_callback() are not called immediately. They are scheduled with loop.call_soon() instead. asyncio Future is not compatible with the concurrent.futures.wait() and concurrent.futures.as_completed() functions.
asyncio.Future.cancel() accepts an optional msg argument, but concurrent.futures.Future.cancel() does not.
doc_30247 | Return a bytes object which is a printable representation of the character ch. Control characters are represented as a caret followed by the character, for example as b'^C'. Printing characters are left as they are. | |
doc_30248 | With a context of {'first_name': 'John', 'last_name': 'Doe'}, this template renders to: My first name is John. My last name is Doe.
Dictionary lookup, attribute lookup and list-index lookups are implemented with a dot notation: {{ my_dict.key }}
{{ my_object.attribute }}
{{ my_list.0 }}
If a variable resolves to a callable, the template system will call it with no arguments and use its result instead of the callable. Tags Tags provide arbitrary logic in the rendering process. This definition is deliberately vague. For example, a tag can output content, serve as a control structure e.g. an “if” statement or a “for” loop, grab content from a database, or even enable access to other template tags. Tags are surrounded by {% and %} like this: {% csrf_token %}
Most tags accept arguments: {% cycle 'odd' 'even' %}
Some tags require beginning and ending tags: {% if user.is_authenticated %}Hello, {{ user.username }}.{% endif %}
A reference of built-in tags is available as well as instructions for writing custom tags. Filters Filters transform the values of variables and tag arguments. They look like this: {{ django|title }}
With a context of {'django': 'the web framework for perfectionists with deadlines'}, this template renders to: The Web Framework For Perfectionists With Deadlines
Some filters take an argument: {{ my_date|date:"Y-m-d" }}
A reference of built-in filters is available as well as instructions for writing custom filters. Comments Comments look like this: {# this won't be rendered #}
A {% comment %} tag provides multi-line comments. Components About this section This is an overview of the Django template language’s APIs. For details see the API reference. Engine django.template.Engine encapsulates an instance of the Django template system. The main reason for instantiating an Engine directly is to use the Django template language outside of a Django project. django.template.backends.django.DjangoTemplates is a thin wrapper adapting django.template.Engine to Django’s template backend API. Template django.template.Template represents a compiled template. Templates are obtained with Engine.get_template() or Engine.from_string(). Likewise django.template.backends.django.Template is a thin wrapper adapting django.template.Template to the common template API. Context django.template.Context holds some metadata in addition to the context data. It is passed to Template.render() for rendering a template. django.template.RequestContext is a subclass of Context that stores the current HttpRequest and runs template context processors. The common API doesn’t have an equivalent concept. Context data is passed in a plain dict and the current HttpRequest is passed separately if needed. Loaders Template loaders are responsible for locating templates, loading them, and returning Template objects. Django provides several built-in template loaders and supports custom template loaders. Context processors Context processors are functions that receive the current HttpRequest as an argument and return a dict of data to be added to the rendering context. Their main use is to add common data shared by all templates to the context without repeating code in every view. Django provides many built-in context processors, and you can implement your own additional context processors, too. Support for template engines Configuration Templates engines are configured with the TEMPLATES setting. It’s a list of configurations, one for each engine. The default value is empty. 
The settings.py generated by the startproject command defines a more useful value: TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
# ... some options here ...
},
},
]
BACKEND is a dotted Python path to a template engine class implementing Django’s template backend API. The built-in backends are django.template.backends.django.DjangoTemplates and django.template.backends.jinja2.Jinja2. Since most engines load templates from files, the top-level configuration for each engine contains two common settings:
DIRS defines a list of directories where the engine should look for template source files, in search order.
APP_DIRS tells whether the engine should look for templates inside installed applications. Each backend defines a conventional name for the subdirectory inside applications where its templates should be stored. While uncommon, it’s possible to configure several instances of the same backend with different options. In that case you should define a unique NAME for each engine. OPTIONS contains backend-specific settings. Usage The django.template.loader module defines two functions to load templates.
get_template(template_name, using=None)
This function loads the template with the given name and returns a Template object. The exact type of the return value depends on the backend that loaded the template. Each backend has its own Template class. get_template() tries each template engine in order until one succeeds. If the template cannot be found, it raises TemplateDoesNotExist. If the template is found but contains invalid syntax, it raises TemplateSyntaxError. How templates are searched and loaded depends on each engine’s backend and configuration. If you want to restrict the search to a particular template engine, pass the engine’s NAME in the using argument.
select_template(template_name_list, using=None)
select_template() is just like get_template(), except it takes a list of template names. It tries each name in order and returns the first template that exists.
If loading a template fails, the following two exceptions, defined in django.template, may be raised:
exception TemplateDoesNotExist(msg, tried=None, backend=None, chain=None)
This exception is raised when a template cannot be found. It accepts the following optional arguments for populating the template postmortem on the debug page:
backend The template backend instance from which the exception originated.
tried A list of sources that were tried when finding the template. This is formatted as a list of tuples containing (origin, status), where origin is an origin-like object and status is a string with the reason the template wasn’t found.
chain A list of intermediate TemplateDoesNotExist exceptions raised when trying to load a template. This is used by functions, such as get_template(), that try to load a given template from multiple engines.
exception TemplateSyntaxError(msg)
This exception is raised when a template was found but contains errors.
Template objects returned by get_template() and select_template() must provide a render() method with the following signature:
Template.render(context=None, request=None)
Renders this template with a given context. If context is provided, it must be a dict. If it isn’t provided, the engine will render the template with an empty context. If request is provided, it must be an HttpRequest. Then the engine must make it, as well as the CSRF token, available in the template. How this is achieved is up to each backend.
Here’s an example of the search algorithm. For this example the TEMPLATES setting is: TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
'/home/html/example.com',
'/home/html/default',
],
},
{
'BACKEND': 'django.template.backends.jinja2.Jinja2',
'DIRS': [
'/home/html/jinja2',
],
},
]
If you call get_template('story_detail.html'), here are the files Django will look for, in order:
/home/html/example.com/story_detail.html ('django' engine)
/home/html/default/story_detail.html ('django' engine)
/home/html/jinja2/story_detail.html ('jinja2' engine) If you call select_template(['story_253_detail.html', 'story_detail.html']), here’s what Django will look for:
/home/html/example.com/story_253_detail.html ('django' engine)
/home/html/default/story_253_detail.html ('django' engine)
/home/html/jinja2/story_253_detail.html ('jinja2' engine)
/home/html/example.com/story_detail.html ('django' engine)
/home/html/default/story_detail.html ('django' engine)
/home/html/jinja2/story_detail.html ('jinja2' engine) When Django finds a template that exists, it stops looking. Tip You can use select_template() for flexible template loading. For example, if you’ve written a news story and want some stories to have custom templates, use something like select_template(['story_%s_detail.html' % story.id, 'story_detail.html']). That’ll allow you to use a custom template for an individual story, with a fallback template for stories that don’t have custom templates. It’s possible – and preferable – to organize templates in subdirectories inside each directory containing templates. The convention is to make a subdirectory for each Django app, with subdirectories within those subdirectories as needed. Do this for your own sanity. Storing all templates in the root level of a single directory gets messy. To load a template that’s within a subdirectory, use a slash, like so: get_template('news/story_detail.html')
Using the same TEMPLATES option as above, this will attempt to load the following templates:
/home/html/example.com/news/story_detail.html ('django' engine)
/home/html/default/news/story_detail.html ('django' engine)
/home/html/jinja2/news/story_detail.html ('jinja2' engine) In addition, to cut down on the repetitive nature of loading and rendering templates, Django provides a shortcut function which automates the process.
render_to_string(template_name, context=None, request=None, using=None)
render_to_string() loads a template like get_template() and calls its render() method immediately. It takes the following arguments.
template_name The name of the template to load and render. If it’s a list of template names, Django uses select_template() instead of get_template() to find the template.
context A dict to be used as the template’s context for rendering.
request An optional HttpRequest that will be available during the template’s rendering process.
using An optional template engine NAME. The search for the template will be restricted to that engine. Usage example: from django.template.loader import render_to_string
rendered = render_to_string('my_template.html', {'foo': 'bar'})
See also the render() shortcut which calls render_to_string() and feeds the result into an HttpResponse suitable for returning from a view. Finally, you can use configured engines directly:
engines
Template engines are available in django.template.engines: from django.template import engines
django_engine = engines['django']
template = django_engine.from_string("Hello {{ name }}!")
The lookup key — 'django' in this example — is the engine’s NAME.
Built-in backends
class DjangoTemplates
Set BACKEND to 'django.template.backends.django.DjangoTemplates' to configure a Django template engine. When APP_DIRS is True, DjangoTemplates engines look for templates in the templates subdirectory of installed applications. This generic name was kept for backwards-compatibility. DjangoTemplates engines accept the following OPTIONS:
'autoescape': a boolean that controls whether HTML autoescaping is enabled. It defaults to True. Warning Only set it to False if you’re rendering non-HTML templates!
'context_processors': a list of dotted Python paths to callables that are used to populate the context when a template is rendered with a request. These callables take a request object as their argument and return a dict of items to be merged into the context. It defaults to an empty list. See RequestContext for more information.
'debug': a boolean that turns on/off template debug mode. If it is True, the fancy error page will display a detailed report for any exception raised during template rendering. This report contains the relevant snippet of the template with the appropriate line highlighted. It defaults to the value of the DEBUG setting.
'loaders': a list of dotted Python paths to template loader classes. Each Loader class knows how to import templates from a particular source. Optionally, a tuple can be used instead of a string. The first item in the tuple should be the Loader class name, and subsequent items are passed to the Loader during initialization. The default depends on the values of DIRS and APP_DIRS. See Loader types for details.
'string_if_invalid': the output, as a string, that the template system should use for invalid (e.g. misspelled) variables. It defaults to an empty string. See How invalid variables are handled for details.
'file_charset': the charset used to read template files on disk. It defaults to 'utf-8'.
'libraries': A dictionary of labels and dotted Python paths of template tag modules to register with the template engine. This can be used to add new libraries or provide alternate labels for existing ones. For example: OPTIONS={
'libraries': {
'myapp_tags': 'path.to.myapp.tags',
'admin.urls': 'django.contrib.admin.templatetags.admin_urls',
},
}
Libraries can be loaded by passing the corresponding dictionary key to the {% load %} tag.
'builtins': A list of dotted Python paths of template tag modules to add to built-ins. For example: OPTIONS={
'builtins': ['myapp.builtins'],
}
Tags and filters from built-in libraries can be used without first calling the {% load %} tag.
class Jinja2
Requires Jinja2 to be installed: $ python -m pip install Jinja2
...\> py -m pip install Jinja2
Set BACKEND to 'django.template.backends.jinja2.Jinja2' to configure a Jinja2 engine. When APP_DIRS is True, Jinja2 engines look for templates in the jinja2 subdirectory of installed applications. The most important entry in OPTIONS is 'environment'. It’s a dotted Python path to a callable returning a Jinja2 environment. It defaults to 'jinja2.Environment'. Django invokes that callable and passes other options as keyword arguments. Furthermore, Django adds defaults that differ from Jinja2’s for a few options:
'autoescape': True
'loader': a loader configured for DIRS and APP_DIRS
'auto_reload': settings.DEBUG
'undefined': DebugUndefined if settings.DEBUG else Undefined
Jinja2 engines also accept the following OPTIONS:
'context_processors': a list of dotted Python paths to callables that are used to populate the context when a template is rendered with a request. These callables take a request object as their argument and return a dict of items to be merged into the context. It defaults to an empty list. Using context processors with Jinja2 templates is discouraged. Context processors are useful with Django templates because Django templates don’t support calling functions with arguments. Since Jinja2 doesn’t have that limitation, it’s recommended to put the function that you would use as a context processor in the global variables available to the template using jinja2.Environment as described below. You can then call that function in the template: {{ function(request) }}
Some Django templates context processors return a fixed value. For Jinja2 templates, this layer of indirection isn’t necessary since you can add constants directly in jinja2.Environment. The original use case for adding context processors for Jinja2 involved: Making an expensive computation that depends on the request. Needing the result in every template. Using the result multiple times in each template. Unless all of these conditions are met, passing a function to the template is more in line with the design of Jinja2. The default configuration is purposefully kept to a minimum. If a template is rendered with a request (e.g. when using render()), the Jinja2 backend adds the globals request, csrf_input, and csrf_token to the context. Apart from that, this backend doesn’t create a Django-flavored environment. It doesn’t know about Django filters and tags. In order to use Django-specific APIs, you must configure them into the environment. For example, you can create myproject/jinja2.py with this content: from django.templatetags.static import static
from django.urls import reverse
from jinja2 import Environment
def environment(**options):
env = Environment(**options)
env.globals.update({
'static': static,
'url': reverse,
})
return env
and set the 'environment' option to 'myproject.jinja2.environment'. Then you could use the following constructs in Jinja2 templates: <img src="{{ static('path/to/company-logo.png') }}" alt="Company Logo">
<a href="{{ url('admin:index') }}">Administration</a>
The concepts of tags and filters exist both in the Django template language and in Jinja2 but they’re used differently. Since Jinja2 supports passing arguments to callables in templates, many features that require a template tag or filter in Django templates can be achieved by calling a function in Jinja2 templates, as shown in the example above. Jinja2’s global namespace removes the need for template context processors. The Django template language doesn’t have an equivalent of Jinja2 tests. | |
doc_30249 |
Return whether method object o is an alias for another method. | |
doc_30250 | See Migration guide for more details. tf.compat.v1.raw_ops.Merge
tf.raw_ops.Merge(
inputs, name=None
)
Merge waits for at least one of the tensors in inputs to become available. It is usually combined with Switch to implement branching. Merge forwards the first tensor to become available to output, and sets value_index to its index in inputs.
Args
inputs A list of at least 1 Tensor objects with the same type. The input tensors, exactly one of which will become available.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, value_index). output A Tensor. Has the same type as inputs.
value_index A Tensor of type int32. | |
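The forwarding behavior described above can be sketched in plain Python (an illustrative model, not TensorFlow's implementation; "available" is modeled here as an input being non-None):

```python
# Illustrative sketch of Merge semantics: forward the first available input
# and report its index in the input list.
def merge(inputs):
    """Return (output, value_index) for the first available input."""
    for i, tensor in enumerate(inputs):
        if tensor is not None:  # stand-in for "tensor has become available"
            return tensor, i
    raise ValueError("no input is available")

output, value_index = merge([None, 42, None])
```

Here `output` is `42` and `value_index` is `1`, mirroring how the op sets `value_index` to the position of the first available tensor.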
doc_30251 | See Migration guide for more details. tf.compat.v1.raw_ops.StatsAggregatorHandleV2
tf.raw_ops.StatsAggregatorHandleV2(
container='', shared_name='', name=None
)
Args
container An optional string. Defaults to "".
shared_name An optional string. Defaults to "".
name A name for the operation (optional).
Returns A Tensor of type resource. | |
doc_30252 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_30253 | Returns True if x is subnormal; otherwise returns False. | |
doc_30254 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_30255 | See Migration guide for more details. tf.compat.v1.raw_ops.ParseTensor
tf.raw_ops.ParseTensor(
serialized, out_type, name=None
)
Args
serialized A Tensor of type string. A scalar string containing a serialized TensorProto proto.
out_type A tf.DType. The type of the serialized tensor. The provided type must match the type of the serialized tensor and no implicit conversion will take place.
name A name for the operation (optional).
Returns A Tensor of type out_type. | |
doc_30256 |
Make a violin plot. Make a violin plot for each column of dataset or each vector in sequence dataset. Each filled area extends to represent the entire data range, with optional lines at the mean, the median, the minimum, the maximum, and user-specified quantiles. Parameters
datasetArray or a sequence of vectors.
The input data.
positionsarray-like, default: [1, 2, ..., n]
The positions of the violins. The ticks and limits are automatically set to match the positions.
vertbool, default: True.
If true, creates a vertical violin plot. Otherwise, creates a horizontal violin plot.
widthsarray-like, default: 0.5
Either a scalar or a vector that sets the maximal width of each violin. The default is 0.5, which uses about half of the available horizontal space.
showmeansbool, default: False
If True, will toggle rendering of the means.
showextremabool, default: True
If True, will toggle rendering of the extrema.
showmediansbool, default: False
If True, will toggle rendering of the medians.
quantilesarray-like, default: None
If not None, set a list of floats in interval [0, 1] for each violin, which stands for the quantiles that will be rendered for that violin.
pointsint, default: 100
Defines the number of points to evaluate each of the gaussian kernel density estimations at.
bw_methodstr, scalar or callable, optional
The method used to calculate the estimator bandwidth. This can be 'scott', 'silverman', a scalar constant or a callable. If a scalar, this will be used directly as kde.factor. If a callable, it should take a GaussianKDE instance as its only parameter and return a scalar. If None (default), 'scott' is used.
dataindexable object, optional
If given, the following parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception): dataset Returns
dict
A dictionary mapping each component of the violinplot to a list of the corresponding collection instances created. The dictionary has the following keys:
bodies: A list of the PolyCollection instances containing the filled area of each violin.
cmeans: A LineCollection instance that marks the mean values of each violin's distribution.
cmins: A LineCollection instance that marks the bottom of each violin's distribution.
cmaxes: A LineCollection instance that marks the top of each violin's distribution.
cbars: A LineCollection instance that marks the centers of each violin's distribution.
cmedians: A LineCollection instance that marks the median values of each violin's distribution.
cquantiles: A LineCollection instance created to identify the quantile values of each violin's distribution. | 
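A minimal example of the parameters and return value described above (the random data and figure setup are illustrative; the Agg backend is used so no display is needed):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
data = [rng.normal(0, std, 200) for std in (1, 2, 3)]

fig, ax = plt.subplots()
parts = ax.violinplot(data, positions=[1, 2, 3], showmeans=True,
                      showmedians=True, quantiles=[[0.25, 0.75]] * 3)
# 'parts' maps component names ('bodies', 'cmeans', 'cmedians', ...) to
# the collection instances that were drawn, one 'bodies' entry per violin.
```

With `showextrema` left at its default of True, `parts` also contains the `cbars`, `cmins`, and `cmaxes` line collections.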
doc_30257 | Is raised for non-fatal errors when using TarFile.extract(), but only if TarFile.errorlevel == 2. | 
doc_30258 |
Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters
levelfloat | |
doc_30259 |
Return this Axis' tick lines as a list of Line2Ds.
Examples using matplotlib.axis.Axis.get_ticklines
Fig Axes Customize Simple
Artist tutorial | |
doc_30260 | Raised when trying to run an operation without the adequate access rights - for example filesystem permissions. Corresponds to errno EACCES and EPERM. | |
doc_30261 | Enable keyboard traversal for a toplevel window containing this notebook. This will extend the bindings for the toplevel window containing the notebook as follows:
Control-Tab: selects the tab following the currently selected one.
Shift-Control-Tab: selects the tab preceding the currently selected one.
Alt-K: where K is the mnemonic (underlined) character of any tab, will select that tab. Multiple notebooks in a single toplevel may be enabled for traversal, including nested notebooks. However, notebook traversal only works properly if all panes have the notebook they are in as master. | |
doc_30262 | Return a list of all the message’s field values. | |
doc_30263 | See torch.mv() | |
doc_30264 | A subscript, such as l[1]. value is the subscripted object (usually sequence or mapping). slice is an index, slice or key. It can be a Tuple and contain a Slice. ctx is Load, Store or Del according to the action performed with the subscript. >>> print(ast.dump(ast.parse('l[1:2, 3]', mode='eval'), indent=4))
Expression(
body=Subscript(
value=Name(id='l', ctx=Load()),
slice=Tuple(
elts=[
Slice(
lower=Constant(value=1),
upper=Constant(value=2)),
Constant(value=3)],
ctx=Load()),
ctx=Load())) | |
doc_30265 |
Number of elements in the array. Equal to np.prod(a.shape), i.e., the product of the array’s dimensions. Notes a.size returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested np.prod(a.shape), which returns an instance of np.int_), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. Examples >>> x = np.zeros((3, 5, 2), dtype=np.complex128)
>>> x.size
30
>>> np.prod(x.shape)
30 | |
doc_30266 |
Return self//=value. | |
doc_30267 |
Return boolean flag, True if artist is included in layout calculations. E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). | |
doc_30268 |
Bases: dateutil.rrule.rrulebase That's the base of the rrule operation. It accepts all the keywords defined in the RFC as its constructor parameters (except byday, which was renamed to byweekday) and more. The constructor prototype is: rrule(freq)
Where freq must be one of YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY, or SECONDLY. Note Per RFC section 3.3.10, recurrence instances falling on invalid dates and times are ignored rather than coerced: Recurrence rules may generate recurrence instances with an invalid date (e.g., February 30) or nonexistent local time (e.g., 1:30 AM on a day where the local time is moved forward by an hour at 1:00 AM). Such recurrence instances MUST be ignored and MUST NOT be counted as part of the recurrence set. This can lead to possibly surprising behavior when, for example, the start date occurs at the end of the month: >>> from dateutil.rrule import rrule, MONTHLY
>>> from datetime import datetime
>>> start_date = datetime(2014, 12, 31)
>>> list(rrule(freq=MONTHLY, count=4, dtstart=start_date))
[datetime.datetime(2014, 12, 31, 0, 0),
datetime.datetime(2015, 1, 31, 0, 0),
datetime.datetime(2015, 3, 31, 0, 0),
datetime.datetime(2015, 5, 31, 0, 0)]
Additionally, it supports the following keyword arguments: Parameters
dtstart -- The recurrence start. Besides being the base for the recurrence, missing parameters in the final recurrence instances will also be extracted from this date. If not given, datetime.now() will be used instead.
interval -- The interval between each freq iteration. For example, when using YEARLY, an interval of 2 means once every two years, but with HOURLY, it means once every two hours. The default interval is 1.
wkst -- The week start day. Must be one of the MO, TU, WE constants, or an integer, specifying the first day of the week. This will affect recurrences based on weekly periods. The default week start is obtained from calendar.firstweekday(), and may be modified by calendar.setfirstweekday().
count --
If given, this determines how many occurrences will be generated. Note As of version 2.5.0, the use of the keyword until in conjunction with count is deprecated, to make sure dateutil is fully compliant with RFC-5545 Sec. 3.3.10. Therefore, until and count must not occur in the same call to rrule.
until --
If given, this must be a datetime instance specifying the upper-bound limit of the recurrence. The last recurrence in the rule is the greatest datetime that is less than or equal to the value specified in the until parameter. Note As of version 2.5.0, the use of the keyword until in conjunction with count is deprecated, to make sure dateutil is fully compliant with RFC-5545 Sec. 3.3.10. Therefore, until and count must not occur in the same call to rrule.
bysetpos -- If given, it must be either an integer, or a sequence of integers, positive or negative. Each given integer will specify an occurrence number, corresponding to the nth occurrence of the rule inside the frequency period. For example, a bysetpos of -1 if combined with a MONTHLY frequency, and a byweekday of (MO, TU, WE, TH, FR), will result in the last work day of every month.
bymonth -- If given, it must be either an integer, or a sequence of integers, meaning the months to apply the recurrence to.
bymonthday -- If given, it must be either an integer, or a sequence of integers, meaning the month days to apply the recurrence to.
byyearday -- If given, it must be either an integer, or a sequence of integers, meaning the year days to apply the recurrence to.
byeaster -- If given, it must be either an integer, or a sequence of integers, positive or negative. Each integer will define an offset from the Easter Sunday. Passing the offset 0 to byeaster will yield the Easter Sunday itself. This is an extension to the RFC specification.
byweekno -- If given, it must be either an integer, or a sequence of integers, meaning the week numbers to apply the recurrence to. Week numbers have the meaning described in ISO8601, that is, the first week of the year is that containing at least four days of the new year.
byweekday -- If given, it must be either an integer (0 == MO), a sequence of integers, one of the weekday constants (MO, TU, etc), or a sequence of these constants. When given, these variables will define the weekdays where the recurrence will be applied. It's also possible to use an argument n for the weekday instances, which will mean the nth occurrence of this weekday in the period. For example, with MONTHLY, or with YEARLY and BYMONTH, using FR(+1) in byweekday will specify the first friday of the month where the recurrence happens. Notice that in the RFC documentation, this is specified as BYDAY, but was renamed to avoid the ambiguity of that keyword.
byhour -- If given, it must be either an integer, or a sequence of integers, meaning the hours to apply the recurrence to.
byminute -- If given, it must be either an integer, or a sequence of integers, meaning the minutes to apply the recurrence to.
bysecond -- If given, it must be either an integer, or a sequence of integers, meaning the seconds to apply the recurrence to.
cache -- If given, it must be a boolean value specifying to enable or disable caching of results. If you will use the same rrule instance multiple times, enabling caching will improve the performance considerably. replace(**kwargs)
Return new rrule with same attributes except for those attributes given new values by whichever keyword arguments are specified. | |
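The "invalid dates are ignored" rule quoted above can be reproduced with the standard library alone (a sketch, not dateutil's implementation): a MONTHLY recurrence anchored on day 31 simply skips months that have no 31st, which is why dtstart=2014-12-31 yields Jan 31, Mar 31, and May 31 in the example.

```python
import calendar
from datetime import date

def monthly_on_day(start, count):
    """Yield dates on start.day of each month, skipping months too short."""
    results, year, month = [], start.year, start.month
    while len(results) < count:
        # calendar.monthrange gives (first weekday, number of days)
        if start.day <= calendar.monthrange(year, month)[1]:
            results.append(date(year, month, start.day))
        month += 1
        if month > 12:
            month, year = 1, year + 1
    return results

monthly_on_day(date(2014, 12, 31), 4)
```

February and April are skipped rather than coerced to their last day, matching the rrule output shown above.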
doc_30269 |
Set the Figure instance the artist belongs to. Parameters
figFigure | |
doc_30270 | Return True if the string is a valid identifier according to the language definition, section Identifiers and keywords. Call keyword.iskeyword() to test whether string s is a reserved identifier, such as def and class. Example: >>> from keyword import iskeyword
>>> 'hello'.isidentifier(), iskeyword('hello')
(True, False)
>>> 'def'.isidentifier(), iskeyword('def')
(True, True) | 
doc_30271 | Returns the aggregation period for date_list. Returns date_list_period by default. | |
doc_30272 | See Migration guide for more details. tf.compat.v1.raw_ops.MatrixSolveLs
tf.raw_ops.MatrixSolveLs(
matrix, rhs, l2_regularizer, fast=True, name=None
)
matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form real or complex matrices of size [M, N]. Rhs is a tensor of the same type as matrix and shape [..., M, K]. The output is a tensor shape [..., N, K] where each output matrix solves each of the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense. We use the following notation for (complex) matrix and right-hand sides in the batch: matrix=\(A \in \mathbb{C}^{m \times n}\), rhs=\(B \in \mathbb{C}^{m \times k}\), output=\(X \in \mathbb{C}^{n \times k}\), l2_regularizer=\(\lambda \in \mathbb{R}\). If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^H A + \lambda I)^{-1} A^H B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k} } ||A Z - B||_F^2 + \lambda ||Z||_F^2\). If \(m \lt n\) then output is computed as \(X = A^H (A A^H + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \mathbb{C}^{n \times k} } ||Z||_F^2 \), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach} } }\) or \(\lambda\) is sufficiently large. If fast is False an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False then l2_regularizer is ignored.
Args
matrix A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. Shape is [..., M, N].
rhs A Tensor. Must have the same type as matrix. Shape is [..., M, K].
l2_regularizer A Tensor of type float64. Scalar tensor.
fast An optional bool. Defaults to True.
name A name for the operation (optional).
Returns A Tensor. Has the same type as matrix.
Numpy Compatibility Equivalent to np.linalg.lstsq | |
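The fast path described above can be checked in NumPy (an illustrative sketch with random, almost surely full-rank data): for m >= n the regularized normal equations give X = (A^H A + lambda I)^{-1} A^H B, and with lambda = 0 this agrees with np.linalg.lstsq, the stated NumPy equivalent.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # m=6 >= n=3
B = rng.standard_normal((6, 2))
lam = 0.0                         # l2_regularizer

# Fast path: solve the (regularized) normal equations.
X_fast = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ B)

# Reference least-squares solution.
X_ref = np.linalg.lstsq(A, B, rcond=None)[0]
assert np.allclose(X_fast, X_ref)
```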
doc_30273 | alias of werkzeug.routing.Rule | |
doc_30274 | See Migration guide for more details. tf.compat.v1.conj, tf.compat.v1.math.conj
tf.math.conj(
x, name=None
)
Given a tensor x of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in x. The complex numbers in x must be of the form \(a + bj\), where a is the real part and b is the imaginary part. The complex conjugate returned by this operation is of the form \(a - bj\). For example:
x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])
tf.math.conj(x)
<tf.Tensor: shape=(2,), dtype=complex128,
numpy=array([-2.25-4.75j, 3.25-5.75j])>
If x is real, it is returned unchanged. For example:
x = tf.constant([-2.25, 3.25])
tf.math.conj(x)
<tf.Tensor: shape=(2,), dtype=float32,
numpy=array([-2.25, 3.25], dtype=float32)>
Args
x Tensor to conjugate. Must have numeric or variant type.
name A name for the operation (optional).
Returns A Tensor that is the conjugate of x (with the same type).
Raises
TypeError If x is not a numeric tensor. Numpy Compatibility Equivalent to numpy.conj. | |
doc_30275 |
Perform standardization by centering and scaling Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the features axis.
copybool, default=None
Copy the input X or not. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array. | |
doc_30276 |
Define the picking behavior of the artist. Parameters
pickerNone or bool or float or callable
This can be one of the following:
None: Picking is disabled for this artist (default). A boolean: If True then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event
A function: If picker is callable, it is a user supplied function which determines whether the artist is hit by the mouse event: hit, props = picker(artist, mouseevent)
to determine the hit test. If the mouse event is over the artist, return hit=True, and props is a dictionary of properties you want added to the PickEvent attributes. | 
doc_30277 | A return statement. >>> print(ast.dump(ast.parse('return 4'), indent=4))
Module(
body=[
Return(
value=Constant(value=4))],
type_ignores=[]) | |
doc_30278 |
Perform classification on samples in X. For a one-class model, +1 or -1 is returned. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
y_predndarray of shape (n_samples,)
Class labels for samples in X. | |
doc_30279 |
Filename of the image. str: Filename of the image to use in a Toolbar. If None, the name is used as a label in the toolbar button. | |
doc_30280 | In-place version of unsqueeze() | |
doc_30281 | See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyAdagrad
tf.raw_ops.ResourceApplyAdagrad(
var, accum, lr, grad, use_locking=False, update_slots=True, name=None
)
accum += grad * grad
var -= lr * grad * (1 / sqrt(accum))
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
update_slots An optional bool. Defaults to True.
name A name for the operation (optional).
Returns The created Operation. | |
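The update rule above can be sketched in NumPy (an illustrative model of the math, not the TensorFlow resource op itself):

```python
import numpy as np

def adagrad_step(var, accum, lr, grad):
    """One Adagrad update: accum += grad**2; var -= lr * grad / sqrt(accum)."""
    accum += grad * grad
    var -= lr * grad / np.sqrt(accum)
    return var, accum

var = np.array([1.0, 2.0])
accum = np.zeros(2)
grad = np.array([0.5, -0.5])
var, accum = adagrad_step(var, accum, lr=0.1, grad=grad)
```

With these numbers accum becomes 0.25 per coordinate, so each step size is lr * grad / 0.5, giving var = [0.9, 2.1].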
doc_30282 |
Return the scalar type of highest precision of the same kind as the input. Parameters
tdtype or dtype specifier
The input data type. This can be a dtype object or an object that is convertible to a dtype. Returns
outdtype
The highest precision data type of the same kind (dtype.kind) as t. See also
obj2sctype, mintypecode, sctype2char
dtype
Examples >>> np.maximum_sctype(int)
<class 'numpy.int64'>
>>> np.maximum_sctype(np.uint8)
<class 'numpy.uint64'>
>>> np.maximum_sctype(complex)
<class 'numpy.complex256'> # may vary
>>> np.maximum_sctype(str)
<class 'numpy.str_'>
>>> np.maximum_sctype('i2')
<class 'numpy.int64'>
>>> np.maximum_sctype('f4')
<class 'numpy.float128'> # may vary | |
doc_30283 |
Set new codes on MultiIndex. Defaults to returning new index. Parameters
codes:sequence or list of sequence
New codes to apply.
level:int, level name, or sequence of int/level names (default None)
Level(s) to set (None for all levels).
inplace:bool
If True, mutates in place. Deprecated since version 1.2.0.
verify_integrity:bool, default True
If True, checks that levels and codes are compatible. Returns
new index (of same type and class…etc) or None
The same type as the caller or None if inplace=True. Examples
>>> idx = pd.MultiIndex.from_tuples(
... [(1, "one"), (1, "two"), (2, "one"), (2, "two")], names=["foo", "bar"]
... )
>>> idx
MultiIndex([(1, 'one'),
(1, 'two'),
(2, 'one'),
(2, 'two')],
names=['foo', 'bar'])
>>> idx.set_codes([[1, 0, 1, 0], [0, 0, 1, 1]])
MultiIndex([(2, 'one'),
(1, 'one'),
(2, 'two'),
(1, 'two')],
names=['foo', 'bar'])
>>> idx.set_codes([1, 0, 1, 0], level=0)
MultiIndex([(2, 'one'),
(1, 'two'),
(2, 'one'),
(1, 'two')],
names=['foo', 'bar'])
>>> idx.set_codes([0, 0, 1, 1], level='bar')
MultiIndex([(1, 'one'),
(1, 'one'),
(2, 'two'),
(2, 'two')],
names=['foo', 'bar'])
>>> idx.set_codes([[1, 0, 1, 0], [0, 0, 1, 1]], level=[0, 1])
MultiIndex([(2, 'one'),
(1, 'one'),
(2, 'two'),
(1, 'two')],
names=['foo', 'bar']) | |
doc_30284 |
Compute standard deviation of groups, excluding missing values. For multiple groupings, the result index will be a MultiIndex. Parameters
ddof:int, default 1
Degrees of freedom.
engine:str, default None
'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.4.0.
engine_kwargs:dict, default None
For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {{'nopython': True, 'nogil': False, 'parallel': False}} New in version 1.4.0. Returns
Series or DataFrame
Standard deviation of values within each group. See also Series.groupby
Apply a function groupby to a Series. DataFrame.groupby
Apply a function groupby to each row or column of a DataFrame. | |
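A minimal example of the default behavior (ddof=1, Cython engine; the frame below is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b"],
                   "val": [1.0, 3.0, 2.0, 6.0]})
# Sample standard deviation (ddof=1) within each group.
result = df.groupby("key")["val"].std()
```

Group "a" has values [1, 3], so its sample standard deviation is sqrt(2); group "b" gets sqrt(8).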
doc_30285 | Touch all locations in ancestors of the window that have been changed in the window. | |
doc_30286 |
Parameters
nx, nx1int
Integers specifying the column position of the cell. When nx1 is None, a single column at position nx is specified. Otherwise, the location of columns spanning from nx to nx1 (excluding the nx1-th column) is specified.
ny, ny1int
Same as nx and nx1, but for row positions. axes
renderer | |
doc_30287 |
Return an array formed from the elements of a at the given indices. Refer to numpy.take for full documentation. See also numpy.take
equivalent function | |
doc_30288 | Read and return a list of lines from the stream. hint can be specified to control the number of lines read: no more lines will be read if the total size (in bytes/characters) of all lines so far exceeds hint. Note that it’s already possible to iterate on file objects using for line in file: ... without calling file.readlines(). | 
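The hint semantics can be seen with an in-memory stream (characters, since this is a text stream): reading stops once the total size of the lines read so far exceeds hint, but the line in progress is still completed.

```python
import io

stream = io.StringIO("one\ntwo\nthree\nfour\n")
# "one\n" is 4 chars (not yet > 5), so "two\n" is also read (total 8 > 5),
# and reading stops there.
lines = stream.readlines(5)
```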
doc_30289 |
Get a serializable descriptor from the dtype. The .descr attribute of a dtype object cannot be round-tripped through the dtype() constructor. Simple types, like dtype(‘float32’), have a descr which looks like a record array with one field with ‘’ as a name. The dtype() constructor interprets this as a request to give a default name. Instead, we construct descriptor that can be passed to dtype(). Parameters
dtypedtype
The dtype of the array that will be written to disk. Returns
descrobject
An object that can be passed to numpy.dtype() in order to replicate the input dtype. | |
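The round-trip property described above can be demonstrated directly (for a simple dtype the descriptor is the plain type string rather than a one-field record):

```python
import numpy as np
from numpy.lib.format import dtype_to_descr

dt = np.dtype("float32")
descr = dtype_to_descr(dt)   # e.g. '<f4', unlike dt.descr == [('', '<f4')]
# Unlike the raw .descr attribute, this descriptor round-trips:
assert np.dtype(descr) == dt
```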
doc_30290 |
Fills the 2D input Tensor as a sparse matrix, where the non-zero elements will be drawn from the normal distribution \(\mathcal{N}(0, 0.01)\), as described in Deep learning via Hessian-free optimization - Martens, J. (2010). Parameters
tensor – an n-dimensional torch.Tensor
sparsity – The fraction of elements in each column to be set to zero
std – the standard deviation of the normal distribution used to generate the non-zero values Examples >>> w = torch.empty(3, 5)
>>> nn.init.sparse_(w, sparsity=0.1) | |
doc_30291 | Run all scheduled events. This method will wait (using the delayfunc() function passed to the constructor) for the next event, then execute it and so on until there are no more scheduled events. If blocking is false executes the scheduled events due to expire soonest (if any) and then return the deadline of the next scheduled call in the scheduler (if any). Either action or delayfunc can raise an exception. In either case, the scheduler will maintain a consistent state and propagate the exception. If an exception is raised by action, the event will not be attempted in future calls to run(). If a sequence of events takes longer to run than the time available before the next event, the scheduler will simply fall behind. No events will be dropped; the calling code is responsible for canceling events which are no longer pertinent. Changed in version 3.3: blocking parameter was added. | |
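A small self-contained example of run() (the fake clock and no-op-style delayfunc are illustrative stand-ins for time.monotonic and time.sleep, so the events fire immediately in time order):

```python
import sched

log = []
clock = [0]
s = sched.scheduler(timefunc=lambda: clock[0],
                    delayfunc=lambda n: clock.__setitem__(0, clock[0] + n))
# Enqueue out of order; run() executes by scheduled time.
s.enter(2, 1, log.append, argument=("second",))
s.enter(1, 1, log.append, argument=("first",))
s.run()
```

After run() returns, `log` is `["first", "second"]`: the scheduler waited (via delayfunc, which here just advances the fake clock) for each event's time before executing it.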
doc_30292 | Converts a frequency into a MIDI note. Rounds to the closest midi note. frequency_to_midi(frequency) -> midi_note example: frequency_to_midi(27.5) == 21 New in pygame 1.9.5. | 
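The conversion follows the standard MIDI tuning formula, midi = round(69 + 12 * log2(f / 440)), with A4 = 440 Hz mapped to note 69 (a sketch of the math; pygame's own implementation may differ in edge cases):

```python
import math

def frequency_to_midi(frequency):
    """Round a frequency in Hz to the nearest MIDI note number."""
    return int(round(69 + 12 * math.log2(frequency / 440.0)))

frequency_to_midi(27.5)  # A0, four octaves below A4
```

27.5 Hz is 440 / 16, i.e. exactly four octaves (48 semitones) below note 69, giving note 21 as in the example above.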
doc_30293 |
Test whether the mouse event occurred within the image. | |
doc_30294 |
Return local mean of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> avg = mean(img, disk(5))
>>> avg_vol = mean(volume, ball(5)) | |
doc_30295 | returns a vector with the same direction but length 1. normalize() -> Vector2 Returns a new vector that has length equal to 1 and the same direction as self. | |
doc_30296 |
The transposed array. Same as self.transpose(). See also transpose
Examples >>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1., 2.],
[ 3., 4.]])
>>> x.T
array([[ 1., 3.],
[ 2., 4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1., 2., 3., 4.])
>>> x.T
array([ 1., 2., 3., 4.]) | |
doc_30297 | This is a subclass derived from IMAP4 that connects to the stdin/stdout file descriptors created by passing command to subprocess.Popen(). | |
doc_30298 |
Compute the linear kernel between X and Y. Read more in the User Guide. Parameters
Xndarray of shape (n_samples_X, n_features)
Yndarray of shape (n_samples_Y, n_features), default=None
dense_outputbool, default=True
Whether to return dense output even when the input is sparse. If False, the output is sparse if both input arrays are sparse. New in version 0.20. Returns
Gram matrixndarray of shape (n_samples_X, n_samples_Y) | |
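The linear kernel is simply the matrix of pairwise dot products; a NumPy sketch of what this computes for dense inputs (the arrays are illustrative):

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0]])           # (n_samples_X, n_features)
Y = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # (n_samples_Y, n_features)

# Gram matrix of pairwise dot products, shape (n_samples_X, n_samples_Y).
gram = X @ Y.T
```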
doc_30299 | Asserts that two URLs are the same, ignoring the order of query string parameters except for parameters with the same name. For example, /path/?x=1&y=2 is equal to /path/?y=2&x=1, but /path/?a=1&a=2 isn’t equal to /path/?a=2&a=1. |
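The comparison rule described above can be sketched with the standard library (this mimics, rather than reuses, Django's assertion; the helper name is hypothetical): query parameter order is ignored across different names but preserved among parameters sharing a name.

```python
from urllib.parse import urlsplit, parse_qsl

def urls_equal(url1, url2):
    """Compare URLs, ignoring order of differently-named query parameters."""
    p1, p2 = urlsplit(url1), urlsplit(url2)
    if (p1.scheme, p1.netloc, p1.path, p1.fragment) != \
       (p2.scheme, p2.netloc, p2.path, p2.fragment):
        return False

    def grouped(query):
        # Group values by name, preserving order within each name.
        groups = {}
        for key, value in parse_qsl(query):
            groups.setdefault(key, []).append(value)
        return groups

    return grouped(p1.query) == grouped(p2.query)
```

With this definition, `/path/?x=1&y=2` equals `/path/?y=2&x=1`, but `/path/?a=1&a=2` does not equal `/path/?a=2&a=1`, matching the examples above.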