language stringclasses 1 value | repo stringclasses 346 values | path stringlengths 6 201 | class_span dict | source stringlengths 21 2.38M | target stringlengths 1 96 |
|---|---|---|---|---|---|
python | tensorflow__tensorflow | tensorflow/python/ops/distributions/bijector_impl.py | {
"start": 4512,
"end": 43050
} | class ____(metaclass=abc.ABCMeta):
r"""Interface for transformations of a `Distribution` sample.
Bijectors can be used to represent any differentiable and injective
(one to one) function defined on an open subset of `R^n`. Some non-injective
transformations are also supported (see "Non Injective Transforms" below).
#### Mathematical Details
A `Bijector` implements a [smooth covering map](
https://en.wikipedia.org/wiki/Local_diffeomorphism), i.e., a local
diffeomorphism such that every point in the target has a neighborhood evenly
covered by a map ([see also](
https://en.wikipedia.org/wiki/Covering_space#Covering_of_a_manifold)).
A `Bijector` is used by `TransformedDistribution` but can be generally used
for transforming a `Distribution` generated `Tensor`. A `Bijector` is
characterized by three operations:
1. Forward
Useful for turning one random outcome into another random outcome from a
different distribution.
2. Inverse
Useful for "reversing" a transformation to compute one probability in
terms of another.
3. `log_det_jacobian(x)`
"The log of the absolute value of the determinant of the matrix of all
first-order partial derivatives of the inverse function."
Useful for inverting a transformation to compute one probability in terms
of another. Geometrically, the Jacobian determinant is the volume of the
transformation and is used to scale the probability.
We take the absolute value of the determinant before log to avoid NaN
values. Geometrically, a negative determinant corresponds to an
orientation-reversing transformation. It is ok for us to discard the sign
of the determinant because we only integrate everywhere-nonnegative
functions (probability densities) and the correct orientation is always the
one that produces a nonnegative integrand.
By convention, transformations of random variables are named in terms of the
forward transformation. The forward transformation creates samples, the
inverse is useful for computing probabilities.
#### Example Uses
- Basic properties:
```python
x = ... # A tensor.
# Evaluate forward transformation.
fwd_x = my_bijector.forward(x)
x == my_bijector.inverse(fwd_x)
x != my_bijector.forward(fwd_x) # Not equal because x != g(g(x)).
```
- Computing a log-likelihood:
```python
def transformed_log_prob(bijector, log_prob, x):
return (bijector.inverse_log_det_jacobian(x, event_ndims=0) +
log_prob(bijector.inverse(x)))
```
- Transforming a random outcome:
```python
def transformed_sample(bijector, x):
return bijector.forward(x)
```
#### Example Bijectors
- "Exponential"
```none
Y = g(X) = exp(X)
X ~ Normal(0, 1) # Univariate.
```
Implies:
```none
g^{-1}(Y) = log(Y)
|Jacobian(g^{-1})(y)| = 1 / y
Y ~ LogNormal(0, 1), i.e.,
prob(Y=y) = |Jacobian(g^{-1})(y)| * prob(X=g^{-1}(y))
= (1 / y) Normal(log(y); 0, 1)
```
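As a quick numeric sanity check (a numpy-only sketch, independent of the `Bijector` API), the change-of-variables formula above can be verified against the closed-form LogNormal density:

```python
import numpy as np

def normal_pdf(x):
    """Standard Normal(0, 1) density."""
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def lognormal_pdf(y):
    """Closed-form LogNormal(0, 1) density, for y > 0."""
    return np.exp(-0.5 * np.log(y)**2) / (y * np.sqrt(2.0 * np.pi))

y = np.linspace(0.1, 5.0, 50)
# prob(Y=y) = |Jacobian(g^{-1})(y)| * prob(X=g^{-1}(y)) = (1/y) * N(log(y); 0, 1)
assert np.allclose((1.0 / y) * normal_pdf(np.log(y)), lognormal_pdf(y))
```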
Here is an example of how one might implement the `Exp` bijector:
```python
class Exp(Bijector):
def __init__(self, validate_args=False, name="exp"):
super(Exp, self).__init__(
validate_args=validate_args,
forward_min_event_ndims=0,
name=name)
def _forward(self, x):
return math_ops.exp(x)
def _inverse(self, y):
return math_ops.log(y)
def _inverse_log_det_jacobian(self, y):
return -self._forward_log_det_jacobian(self._inverse(y))
def _forward_log_det_jacobian(self, x):
# Notice that we needn't do any reducing, even when`event_ndims > 0`.
# The base Bijector class will handle reducing for us; it knows how
# to do so because we called `super` `__init__` with
# `forward_min_event_ndims = 0`.
return x
```
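A numpy-only sketch (not the TF API) of how the `Exp` methods above relate: the `_inverse_log_det_jacobian` defined by negation agrees with the closed form `-log(y)`:

```python
import numpy as np

# Plain-numpy stand-ins for the Exp bijector's methods above.
forward = np.exp
inverse = np.log
forward_log_det_jacobian = lambda x: x            # log|d exp(x)/dx| = x
inverse_log_det_jacobian = lambda y: -np.log(y)   # log|d log(y)/dy| = -log(y)

y = np.array([0.5, 1.0, 2.0])
# The negation pattern used in _inverse_log_det_jacobian above:
assert np.allclose(inverse_log_det_jacobian(y),
                   -forward_log_det_jacobian(inverse(y)))
assert np.allclose(forward(inverse(y)), y)
```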
- "Affine"
```none
Y = g(X) = sqrtSigma * X + mu
X ~ MultivariateNormal(0, I_d)
```
Implies:
```none
g^{-1}(Y) = inv(sqrtSigma) * (Y - mu)
|Jacobian(g^{-1})(y)| = det(inv(sqrtSigma))
Y ~ MultivariateNormal(mu, sqrtSigma), i.e.,

prob(Y=y) = |Jacobian(g^{-1})(y)| * prob(X=g^{-1}(y))
= det(sqrtSigma)^(-d) *
MultivariateNormal(inv(sqrtSigma) * (y - mu); 0, I_d)
```
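The affine formula above can be checked numerically in `d = 2` (a numpy sketch; `A` plays the role of `sqrtSigma` and is assumed invertible):

```python
import numpy as np

A = np.array([[2.0, 0.3], [0.0, 1.5]])   # sqrtSigma (assumed invertible)
mu = np.array([1.0, -1.0])

def std_normal_pdf(x):
    """N(0, I_2) density."""
    return np.exp(-0.5 * x @ x) / (2.0 * np.pi)

y = np.array([0.7, 0.2])
x = np.linalg.solve(A, y - mu)            # g^{-1}(y) = inv(sqrtSigma) @ (y - mu)
p_y = std_normal_pdf(x) / abs(np.linalg.det(A))

# Compare against the closed-form MultivariateNormal(mu, A @ A.T) density.
cov = A @ A.T
diff = y - mu
ref = np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / (
    2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
assert np.allclose(p_y, ref)
```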
#### Min_event_ndims and Naming
Bijectors are named for the dimensionality of data they act on (i.e. without
broadcasting). We can think of a bijector as having an intrinsic
`min_event_ndims`, which is the minimum number of dimensions the bijector
acts on. For
instance, a Cholesky decomposition requires a matrix, and hence
`min_event_ndims=2`.
Some examples:
`AffineScalar: min_event_ndims=0`
`Affine: min_event_ndims=1`
`Cholesky: min_event_ndims=2`
`Exp: min_event_ndims=0`
`Sigmoid: min_event_ndims=0`
`SoftmaxCentered: min_event_ndims=1`
Note the difference between `Affine` and `AffineScalar`. `AffineScalar`
operates on scalar events, whereas `Affine` operates on vector-valued events.
More generally, there is a `forward_min_event_ndims` and an
`inverse_min_event_ndims`. In most cases, these will be the same.
However, for some shape changing bijectors, these will be different
(e.g. a bijector which pads an extra dimension at the end, might have
`forward_min_event_ndims=0` and `inverse_min_event_ndims=1`).
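For intuition, here is a hypothetical shape-changing "pad" bijector sketched in numpy (the names are illustrative, not part of this module): forward appends a zero, so a scalar event becomes a length-2 vector event, giving `forward_min_event_ndims=0` but `inverse_min_event_ndims=1`.

```python
import numpy as np

def pad_forward(x):
    """Operates on scalar events (forward_min_event_ndims=0)."""
    x = np.asarray(x, dtype=float)
    return np.stack([x, np.zeros_like(x)], axis=-1)

def pad_inverse(y):
    """Operates on vector events (inverse_min_event_ndims=1)."""
    return np.asarray(y)[..., 0]

x = np.array([1.5, -2.0])         # batch of 2 scalar events
y = pad_forward(x)                # shape (2, 2): one extra event dimension
assert y.shape == (2, 2)
assert np.allclose(pad_inverse(y), x)
```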
#### Jacobian Determinant
The Jacobian determinant is a reduction over `event_ndims - min_event_ndims`
(`forward_min_event_ndims` for `forward_log_det_jacobian` and
`inverse_min_event_ndims` for `inverse_log_det_jacobian`).
To see this, consider the `Exp` `Bijector` applied to a `Tensor` which has
sample, batch, and event (S, B, E) shape semantics. Suppose the `Tensor`'s
partitioned-shape is `(S=[4], B=[2], E=[3, 3])`. The shape of the `Tensor`
returned by `forward` and `inverse` is unchanged, i.e., `[4, 2, 3, 3]`.
However the shape returned by `inverse_log_det_jacobian` is `[4, 2]` because
the Jacobian determinant is a reduction over the event dimensions.
Another example is the `Affine` `Bijector`. Because `min_event_ndims = 1`, the
Jacobian determinant reduction is over `event_ndims - 1`.
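The reduction described above can be sketched in plain numpy for the `Exp` case (min_event_ndims=0), using the same `(S=[4], B=[2], E=[3, 3])` partitioned shape and `event_ndims=2`:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 4 * 2 * 3 * 3).reshape(4, 2, 3, 3)
y = np.exp(x)
elementwise_ildj = -np.log(y)               # Exp's minimal (per-element) ildj
ildj = elementwise_ildj.sum(axis=(-2, -1))  # reduce over event_ndims - 0 dims
assert ildj.shape == (4, 2)                 # one scalar per (sample, batch)
assert np.allclose(ildj, -x.sum(axis=(-2, -1)))
```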
It is sometimes useful to implement the inverse Jacobian determinant as the
negative forward Jacobian determinant. For example,
```python
def _inverse_log_det_jacobian(self, y):
return -self._forward_log_det_jac(self._inverse(y)) # Note negation.
```
The correctness of this approach can be seen from the following claim.
- Claim:
Assume `Y = g(X)` is a bijection whose derivative exists and is nonzero
for its domain, i.e., `dY/dX = d/dX g(X) != 0`. Then:
```none
(log o det o jacobian o g^{-1})(Y) = -(log o det o jacobian o g)(X)
```
- Proof:
From the bijective, nonzero differentiability of `g`, the
[inverse function theorem](
https://en.wikipedia.org/wiki/Inverse_function_theorem)
implies `g^{-1}` is differentiable in the image of `g`.
Applying the chain rule to `y = g(x) = g(g^{-1}(y))` yields
`I = g'(g^{-1}(y))*g^{-1}'(y)`.
The same theorem also implies `g^{-1}'` is non-singular therefore:
`inv[ g'(g^{-1}(y)) ] = g^{-1}'(y)`.
The claim follows from [properties of determinant](
https://en.wikipedia.org/wiki/Determinant#Multiplicativity_and_matrix_groups).
Generally it's preferable to directly implement the inverse Jacobian
determinant. This should have superior numerical stability and will often
share subgraphs with the `_inverse` implementation.
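The claim can also be checked numerically; here is a sketch using the sigmoid function as `g` (numpy only, illustrative):

```python
import numpy as np

def g(x):
    """Sigmoid: g(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def fldj(x):
    """log|g'(x)| = log(g(x)) + log(1 - g(x))."""
    return np.log(g(x)) + np.log(1.0 - g(x))

def ildj(y):
    """log|(g^{-1})'(y)| = -log(y) - log(1 - y)."""
    return -np.log(y) - np.log(1.0 - y)

y = np.array([0.1, 0.5, 0.9])
x = np.log(y) - np.log(1.0 - y)    # g^{-1}(y), the logit function
assert np.allclose(ildj(y), -fldj(x))
```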
#### Is_constant_jacobian
Certain bijectors will have constant Jacobian matrices. For instance, the
`Affine` bijector encodes multiplication by a matrix plus a shift, whose
Jacobian matrix is that same constant matrix.
`is_constant_jacobian` encodes the fact that the jacobian matrix is constant.
The semantics of this argument are the following:
* Repeated calls to "log_det_jacobian" functions with the same
`event_ndims` (but not necessarily same input), will return the first
computed jacobian (because the matrix is constant, and hence is input
independent).
* `log_det_jacobian` implementations are merely broadcastable to the true
`log_det_jacobian` (because, again, the jacobian matrix is input
independent). Specifically, `log_det_jacobian` is implemented as the
log jacobian determinant for a single input.
```python
class Identity(Bijector):
def __init__(self, validate_args=False, name="identity"):
super(Identity, self).__init__(
is_constant_jacobian=True,
validate_args=validate_args,
forward_min_event_ndims=0,
name=name)
def _forward(self, x):
return x
def _inverse(self, y):
return y
def _inverse_log_det_jacobian(self, y):
return -self._forward_log_det_jacobian(self._inverse(y))
def _forward_log_det_jacobian(self, x):
# The full log jacobian determinant would be array_ops.zeros_like(x).
# However, we circumvent materializing that, since the jacobian
# calculation is input independent, and we specify it for one input.
return constant_op.constant(0., x.dtype.base_dtype)
```
#### Subclass Requirements
- Subclasses typically implement:
- `_forward`,
- `_inverse`,
- `_inverse_log_det_jacobian`,
- `_forward_log_det_jacobian` (optional).
The `_forward_log_det_jacobian` is called when the bijector is inverted via
the `Invert` bijector. If undefined, a slightly less efficient
calculation, `-1 * _inverse_log_det_jacobian`, is used.
If the bijector changes the shape of the input, you must also implement:
- `_forward_event_shape_tensor`,
- `_forward_event_shape` (optional),
- `_inverse_event_shape_tensor`,
- `_inverse_event_shape` (optional).
By default the event-shape is assumed unchanged from input.
- If the `Bijector`'s use is limited to `TransformedDistribution` (or friends
like `QuantizedDistribution`) then depending on your use, you may not need
to implement all of `_forward` and `_inverse` functions.
Examples:
1. Sampling (e.g., `sample`) only requires `_forward`.
2. Probability functions (e.g., `prob`, `cdf`, `survival`) only require
`_inverse` (and related).
3. Only calling probability functions on the output of `sample` means
`_inverse` can be implemented as a cache lookup.
See "Example Uses" [above] which shows how these functions are used to
transform a distribution. (Note: `_forward` could theoretically be
implemented as a cache lookup but this would require controlling the
underlying sample generation mechanism.)
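Point 3 above can be sketched with a minimal, hypothetical dict-based cache (illustrative only; the real class keys mappings more carefully via `_Mapping`):

```python
import numpy as np

class CachedExp:
    """Toy Exp bijector whose inverse is a cache lookup when possible."""

    def __init__(self):
        self._from_y = {}                  # maps id(y) -> x (illustrative only)

    def forward(self, x):
        y = np.exp(x)
        self._from_y[id(y)] = x            # remember the preimage of y
        return y

    def inverse(self, y):
        if id(y) in self._from_y:          # cache hit: no log() computed
            return self._from_y[id(y)]
        return np.log(y)                   # fall back to the actual inverse

b = CachedExp()
x = np.array([0.0, 1.0])
y = b.forward(x)
assert b.inverse(y) is x                          # served from the cache
assert np.allclose(b.inverse(np.array([1.0])), 0.0)  # computed directly
```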
#### Non Injective Transforms
**WARNING** Handling of non-injective transforms is subject to change.
Non injective maps `g` are supported, provided their domain `D` can be
partitioned into `k` disjoint subsets, `Union{D1, ..., Dk}`, such that,
ignoring sets of measure zero, the restriction of `g` to each subset is a
differentiable bijection onto `g(D)`. In particular, this implies that for
`y in g(D)`, the set inverse, i.e. `g^{-1}(y) = {x in D : g(x) = y}`, always
contains exactly `k` distinct points.
The property `_is_injective` is set to `False` to indicate that the bijector
is not injective, yet satisfies the above condition.
The usual bijector API is modified in the case `_is_injective is False` (see
method docstrings for specifics). Here we show by example the `AbsoluteValue`
bijector. In this case, the domain `D = (-inf, inf)`, can be partitioned
into `D1 = (-inf, 0)`, `D2 = {0}`, and `D3 = (0, inf)`. Let `gi` be the
restriction of `g` to `Di`, then both `g1` and `g3` are bijections onto
`(0, inf)`, with `g1^{-1}(y) = -y`, and `g3^{-1}(y) = y`. We will use
`g1` and `g3` to define bijector methods over `D1` and `D3`. `D2 = {0}` is
an oddball in that `g2` is one to one, and the derivative is not well defined.
Fortunately, when considering transformations of probability densities
(e.g. in `TransformedDistribution`), sets of measure zero have no effect in
theory, and only a small effect in 32 or 64 bit precision. For that reason,
we define `inverse(0)` and `inverse_log_det_jacobian(0)` both as `[0, 0]`,
which is convenient and results in a left-semicontinuous pdf.
```python
abs = tfp.distributions.bijectors.AbsoluteValue()
abs.forward(-1.)
==> 1.
abs.forward(1.)
==> 1.
abs.inverse(1.)
==> (-1., 1.)
# The |dX/dY| is constant, == 1. So Log|dX/dY| == 0.
abs.inverse_log_det_jacobian(1., event_ndims=0)
==> (0., 0.)
# Special case handling of 0.
abs.inverse(0.)
==> (0., 0.)
abs.inverse_log_det_jacobian(0., event_ndims=0)
==> (0., 0.)
```
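Summing over the `k = 2` inverse points recovers the transformed density; a numpy-only sketch for `Y = |X|` with `X ~ Normal(0, 1)` (whose density is the half-normal for `y > 0`):

```python
import numpy as np

def normal_pdf(x):
    """Standard Normal(0, 1) density."""
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

y = np.array([0.5, 1.0, 2.0])
inverse_points = (-y, y)                 # the set inverse g^{-1}(y)
ildjs = (0.0, 0.0)                       # log|dX/dY| == 0 on each branch
p_y = sum(np.exp(ildj) * normal_pdf(x)
          for x, ildj in zip(inverse_points, ildjs))
# Half-normal density: p_Y(y) = p_X(-y) + p_X(y) = 2 * p_X(y) by symmetry.
assert np.allclose(p_y, 2.0 * normal_pdf(y))
```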
"""
@abc.abstractmethod
def __init__(self,
graph_parents=None,
is_constant_jacobian=False,
validate_args=False,
dtype=None,
forward_min_event_ndims=None,
inverse_min_event_ndims=None,
name=None):
"""Constructs Bijector.
A `Bijector` transforms random variables into new random variables.
Examples:
```python
# Create the Y = g(X) = X transform.
identity = Identity()
# Create the Y = g(X) = exp(X) transform.
exp = Exp()
```
See `Bijector` subclass docstring for more details and specific examples.
Args:
graph_parents: Python list of graph prerequisites of this `Bijector`.
is_constant_jacobian: Python `bool` indicating that the Jacobian matrix is
not a function of the input.
validate_args: Python `bool`, default `False`. Whether to validate input
with asserts. If `validate_args` is `False`, and the inputs are invalid,
correct behavior is not guaranteed.
dtype: `tf.dtype` supported by this `Bijector`. `None` means dtype is not
enforced.
forward_min_event_ndims: Python `integer` indicating the minimum number of
dimensions `forward` operates on.
inverse_min_event_ndims: Python `integer` indicating the minimum number of
dimensions `inverse` operates on. Will be set to
`forward_min_event_ndims` by default, if no value is provided.
name: The name to give Ops created by the initializer.
Raises:
ValueError: If neither `forward_min_event_ndims` nor
`inverse_min_event_ndims` is specified, or if either of them is
negative.
ValueError: If a member of `graph_parents` is not a `Tensor`.
"""
self._graph_parents = graph_parents or []
if forward_min_event_ndims is None and inverse_min_event_ndims is None:
raise ValueError("Must specify at least one of `forward_min_event_ndims` "
"and `inverse_min_event_ndims`.")
elif inverse_min_event_ndims is None:
inverse_min_event_ndims = forward_min_event_ndims
elif forward_min_event_ndims is None:
forward_min_event_ndims = inverse_min_event_ndims
if not isinstance(forward_min_event_ndims, int):
raise TypeError("Expected forward_min_event_ndims to be of "
"type int, got {}".format(
type(forward_min_event_ndims).__name__))
if not isinstance(inverse_min_event_ndims, int):
raise TypeError("Expected inverse_min_event_ndims to be of "
"type int, got {}".format(
type(inverse_min_event_ndims).__name__))
if forward_min_event_ndims < 0:
raise ValueError("forward_min_event_ndims must be a non-negative "
"integer.")
if inverse_min_event_ndims < 0:
raise ValueError("inverse_min_event_ndims must be a non-negative "
"integer.")
self._forward_min_event_ndims = forward_min_event_ndims
self._inverse_min_event_ndims = inverse_min_event_ndims
self._is_constant_jacobian = is_constant_jacobian
self._constant_ildj_map = {}
self._validate_args = validate_args
self._dtype = dtype
# These dicts can only be accessed using _Mapping.x_key or _Mapping.y_key
self._from_y = {}
self._from_x = {}
if name:
self._name = name
else:
# We want the default convention to be snake_case rather than CamelCase
# since `Chain` uses bijector.name as the kwargs dictionary key.
def camel_to_snake(name):
s1 = re.sub("(.)([A-Z][a-z]+)", r"\1_\2", name)
return re.sub("([a-z0-9])([A-Z])", r"\1_\2", s1).lower()
self._name = camel_to_snake(type(self).__name__.lstrip("_"))
for i, t in enumerate(self._graph_parents):
if t is None or not tensor_util.is_tf_type(t):
raise ValueError("Graph parent item %d is not a Tensor; %s." % (i, t))
@property
def graph_parents(self):
"""Returns this `Bijector`'s graph_parents as a Python list."""
return self._graph_parents
@property
def forward_min_event_ndims(self):
"""Returns the minimal number of dimensions bijector.forward operates on."""
return self._forward_min_event_ndims
@property
def inverse_min_event_ndims(self):
"""Returns the minimal number of dimensions bijector.inverse operates on."""
return self._inverse_min_event_ndims
@property
def is_constant_jacobian(self):
"""Returns true iff the Jacobian matrix is not a function of x.
Note: Jacobian matrix is either constant for both forward and inverse or
neither.
Returns:
is_constant_jacobian: Python `bool`.
"""
return self._is_constant_jacobian
@property
def _is_injective(self):
"""Returns true iff the forward map `g` is injective (one-to-one function).
**WARNING** This hidden property and its behavior are subject to change.
Note: Non-injective maps `g` are supported, provided their domain `D` can
be partitioned into `k` disjoint subsets, `Union{D1, ..., Dk}`, such that,
ignoring sets of measure zero, the restriction of `g` to each subset is a
differentiable bijection onto `g(D)`.
Returns:
is_injective: Python `bool`.
"""
return True
@property
def validate_args(self):
"""Returns True if Tensor arguments will be validated."""
return self._validate_args
@property
def dtype(self):
"""dtype of `Tensor`s transformable by this distribution."""
return self._dtype
@property
def name(self):
"""Returns the string name of this `Bijector`."""
return self._name
def _forward_event_shape_tensor(self, input_shape):
"""Subclass implementation for `forward_event_shape_tensor` function."""
# By default, we assume event_shape is unchanged.
return input_shape
def forward_event_shape_tensor(self,
input_shape,
name="forward_event_shape_tensor"):
"""Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
Args:
input_shape: `Tensor`, `int32` vector indicating event-portion shape
passed into `forward` function.
name: name to give to the op
Returns:
forward_event_shape_tensor: `Tensor`, `int32` vector indicating
event-portion shape after applying `forward`.
"""
with self._name_scope(name, [input_shape]):
input_shape = ops.convert_to_tensor(input_shape, dtype=dtypes.int32,
name="input_shape")
return self._forward_event_shape_tensor(input_shape)
def _forward_event_shape(self, input_shape):
"""Subclass implementation for `forward_event_shape` public function."""
# By default, we assume event_shape is unchanged.
return input_shape
def forward_event_shape(self, input_shape):
"""Shape of a single sample from a single batch as a `TensorShape`.
Same meaning as `forward_event_shape_tensor`. May be only partially defined.
Args:
input_shape: `TensorShape` indicating event-portion shape passed into
`forward` function.
Returns:
forward_event_shape_tensor: `TensorShape` indicating event-portion shape
after applying `forward`. Possibly unknown.
"""
return self._forward_event_shape(tensor_shape.TensorShape(input_shape))
def _inverse_event_shape_tensor(self, output_shape):
"""Subclass implementation for `inverse_event_shape_tensor` function."""
# By default, we assume event_shape is unchanged.
return output_shape
def inverse_event_shape_tensor(self,
output_shape,
name="inverse_event_shape_tensor"):
"""Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
Args:
output_shape: `Tensor`, `int32` vector indicating event-portion shape
passed into `inverse` function.
name: name to give to the op
Returns:
inverse_event_shape_tensor: `Tensor`, `int32` vector indicating
event-portion shape after applying `inverse`.
"""
with self._name_scope(name, [output_shape]):
output_shape = ops.convert_to_tensor(output_shape, dtype=dtypes.int32,
name="output_shape")
return self._inverse_event_shape_tensor(output_shape)
def _inverse_event_shape(self, output_shape):
"""Subclass implementation for `inverse_event_shape` public function."""
# By default, we assume event_shape is unchanged.
return tensor_shape.TensorShape(output_shape)
def inverse_event_shape(self, output_shape):
"""Shape of a single sample from a single batch as a `TensorShape`.
Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
Args:
output_shape: `TensorShape` indicating event-portion shape passed into
`inverse` function.
Returns:
inverse_event_shape_tensor: `TensorShape` indicating event-portion shape
after applying `inverse`. Possibly unknown.
"""
return self._inverse_event_shape(output_shape)
def _forward(self, x):
"""Subclass implementation for `forward` public function."""
raise NotImplementedError("forward not implemented.")
def _call_forward(self, x, name, **kwargs):
with self._name_scope(name, [x]):
x = ops.convert_to_tensor(x, name="x")
self._maybe_assert_dtype(x)
if not self._is_injective: # No caching for non-injective
return self._forward(x, **kwargs)
mapping = self._lookup(x=x, kwargs=kwargs)
if mapping.y is not None:
return mapping.y
mapping = mapping.merge(y=self._forward(x, **kwargs))
self._cache(mapping)
return mapping.y
def forward(self, x, name="forward"):
"""Returns the forward `Bijector` evaluation, i.e., X = g(Y).
Args:
x: `Tensor`. The input to the "forward" evaluation.
name: The name to give this op.
Returns:
`Tensor`.
Raises:
TypeError: if `self.dtype` is specified and `x.dtype` is not
`self.dtype`.
NotImplementedError: if `_forward` is not implemented.
"""
return self._call_forward(x, name)
def _inverse(self, y):
"""Subclass implementation for `inverse` public function."""
raise NotImplementedError("inverse not implemented.")
def _call_inverse(self, y, name, **kwargs):
with self._name_scope(name, [y]):
y = ops.convert_to_tensor(y, name="y")
self._maybe_assert_dtype(y)
if not self._is_injective: # No caching for non-injective
return self._inverse(y, **kwargs)
mapping = self._lookup(y=y, kwargs=kwargs)
if mapping.x is not None:
return mapping.x
mapping = mapping.merge(x=self._inverse(y, **kwargs))
self._cache(mapping)
return mapping.x
def inverse(self, y, name="inverse"):
"""Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
Args:
y: `Tensor`. The input to the "inverse" evaluation.
name: The name to give this op.
Returns:
`Tensor`, if this bijector is injective.
If not injective, returns the k-tuple containing the unique
`k` points `(x1, ..., xk)` such that `g(xi) = y`.
Raises:
TypeError: if `self.dtype` is specified and `y.dtype` is not
`self.dtype`.
NotImplementedError: if `_inverse` is not implemented.
"""
return self._call_inverse(y, name)
def _inverse_log_det_jacobian(self, y):
"""Subclass implementation of `inverse_log_det_jacobian` public function.
In particular, this method differs from the public function, in that it
does not take `event_ndims`. Thus, this implements the minimal Jacobian
determinant calculation (i.e. over `inverse_min_event_ndims`).
Args:
y: `Tensor`. The input to the "inverse_log_det_jacobian" evaluation.
Returns:
inverse_log_det_jacobian: `Tensor`, if this bijector is injective.
If not injective, returns the k-tuple containing jacobians for the
unique `k` points `(x1, ..., xk)` such that `g(xi) = y`.
"""
raise NotImplementedError("inverse_log_det_jacobian not implemented.")
def _call_inverse_log_det_jacobian(self, y, event_ndims, name, **kwargs):
with self._name_scope(name, [y]):
if event_ndims in self._constant_ildj_map:
return self._constant_ildj_map[event_ndims]
y = ops.convert_to_tensor(y, name="y")
self._maybe_assert_dtype(y)
with ops.control_dependencies(self._check_valid_event_ndims(
min_event_ndims=self.inverse_min_event_ndims,
event_ndims=event_ndims)):
if not self._is_injective: # No caching for non-injective
try:
ildjs = self._inverse_log_det_jacobian(y, **kwargs)
return tuple(self._reduce_jacobian_det_over_event(
y, ildj, self.inverse_min_event_ndims, event_ndims)
for ildj in ildjs)
except NotImplementedError as original_exception:
try:
x = self._inverse(y, **kwargs)
fldjs = self._forward_log_det_jacobian(x, **kwargs)
return tuple(self._reduce_jacobian_det_over_event(
x, -fldj, self.forward_min_event_ndims, event_ndims)
for fldj in fldjs)
except NotImplementedError:
raise original_exception
mapping = self._lookup(y=y, kwargs=kwargs)
if mapping.ildj_map is not None and event_ndims in mapping.ildj_map:
return mapping.ildj_map[event_ndims]
try:
x = None # Not needed; leave cache as is.
ildj = self._inverse_log_det_jacobian(y, **kwargs)
ildj = self._reduce_jacobian_det_over_event(
y, ildj, self.inverse_min_event_ndims, event_ndims)
except NotImplementedError as original_exception:
try:
x = (mapping.x if mapping.x is not None
else self._inverse(y, **kwargs))
ildj = -self._forward_log_det_jacobian(x, **kwargs)
ildj = self._reduce_jacobian_det_over_event(
x, ildj, self.forward_min_event_ndims, event_ndims)
except NotImplementedError:
raise original_exception
mapping = mapping.merge(x=x, ildj_map={event_ndims: ildj})
self._cache(mapping)
if self.is_constant_jacobian:
self._constant_ildj_map[event_ndims] = ildj
return ildj
def inverse_log_det_jacobian(
self, y, event_ndims, name="inverse_log_det_jacobian"):
"""Returns the (log o det o Jacobian o inverse)(y).
Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
Note that `forward_log_det_jacobian` is the negative of this function,
evaluated at `g^{-1}(y)`.
Args:
y: `Tensor`. The input to the "inverse" Jacobian determinant evaluation.
event_ndims: Number of dimensions in the probabilistic events being
transformed. Must be greater than or equal to
`self.inverse_min_event_ndims`. The result is summed over the final
dimensions to produce a scalar Jacobian determinant for each event,
i.e. it has `y.shape.ndims - event_ndims` dimensions.
name: The name to give this op.
Returns:
`Tensor`, if this bijector is injective.
If not injective, returns the tuple of local log det
Jacobians, `log(det(Dg_i^{-1}(y)))`, where `g_i` is the restriction
of `g` to the `ith` partition `Di`.
Raises:
TypeError: if `self.dtype` is specified and `y.dtype` is not
`self.dtype`.
NotImplementedError: if `_inverse_log_det_jacobian` is not implemented.
"""
return self._call_inverse_log_det_jacobian(y, event_ndims, name)
def _forward_log_det_jacobian(self, x):
"""Subclass implementation of `forward_log_det_jacobian` public function.
In particular, this method differs from the public function, in that it
does not take `event_ndims`. Thus, this implements the minimal Jacobian
determinant calculation (i.e. over `forward_min_event_ndims`).
Args:
x: `Tensor`. The input to the "forward_log_det_jacobian" evaluation.
Returns:
forward_log_det_jacobian: `Tensor`, if this bijector is injective.
If not injective, returns the k-tuple containing jacobians for the
unique `k` points `(x1, ..., xk)` such that `g(xi) = y`.
"""
raise NotImplementedError(
"forward_log_det_jacobian not implemented.")
def _call_forward_log_det_jacobian(self, x, event_ndims, name, **kwargs):
if not self._is_injective:
raise NotImplementedError(
"forward_log_det_jacobian cannot be implemented for non-injective "
"transforms.")
with self._name_scope(name, [x]):
with ops.control_dependencies(self._check_valid_event_ndims(
min_event_ndims=self.forward_min_event_ndims,
event_ndims=event_ndims)):
if event_ndims in self._constant_ildj_map:
# Need "-1. *" to avoid invalid-unary-operand-type linter warning.
return -1. * self._constant_ildj_map[event_ndims]
x = ops.convert_to_tensor(x, name="x")
self._maybe_assert_dtype(x)
if not self._is_injective: # No caching for non-injective
try:
fldjs = self._forward_log_det_jacobian(x, **kwargs) # No caching.
return tuple(self._reduce_jacobian_det_over_event(
x, fldj, self.forward_min_event_ndims, event_ndims)
for fldj in fldjs)
except NotImplementedError as original_exception:
try:
y = self._forward(x, **kwargs)
ildjs = self._inverse_log_det_jacobian(y, **kwargs)
return tuple(self._reduce_jacobian_det_over_event(
y, -ildj, self.inverse_min_event_ndims, event_ndims)
for ildj in ildjs)
except NotImplementedError:
raise original_exception
mapping = self._lookup(x=x, kwargs=kwargs)
if mapping.ildj_map is not None and event_ndims in mapping.ildj_map:
return -mapping.ildj_map[event_ndims]
try:
y = None # Not needed; leave cache as is.
ildj = -self._forward_log_det_jacobian(x, **kwargs)
ildj = self._reduce_jacobian_det_over_event(
x, ildj, self.forward_min_event_ndims, event_ndims)
except NotImplementedError as original_exception:
try:
y = (mapping.y if mapping.y is not None
else self._forward(x, **kwargs))
ildj = self._inverse_log_det_jacobian(y, **kwargs)
ildj = self._reduce_jacobian_det_over_event(
y, ildj, self.inverse_min_event_ndims, event_ndims)
except NotImplementedError:
raise original_exception
mapping = mapping.merge(y=y, ildj_map={event_ndims: ildj})
self._cache(mapping)
if self.is_constant_jacobian:
self._constant_ildj_map[event_ndims] = ildj
return -ildj
def forward_log_det_jacobian(
self, x, event_ndims, name="forward_log_det_jacobian"):
"""Returns both the forward_log_det_jacobian.
Args:
x: `Tensor`. The input to the "forward" Jacobian determinant evaluation.
event_ndims: Number of dimensions in the probabilistic events being
transformed. Must be greater than or equal to
`self.forward_min_event_ndims`. The result is summed over the final
dimensions to produce a scalar Jacobian determinant for each event,
i.e. it has `x.shape.ndims - event_ndims` dimensions.
name: The name to give this op.
Returns:
`Tensor`, if this bijector is injective.
If not injective this is not implemented.
Raises:
TypeError: if `self.dtype` is specified and `x.dtype` is not
`self.dtype`.
NotImplementedError: if neither `_forward_log_det_jacobian`
nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented, or
this is a non-injective bijector.
"""
return self._call_forward_log_det_jacobian(x, event_ndims, name)
@contextlib.contextmanager
def _name_scope(self, name=None, values=None):
"""Helper function to standardize op scope."""
with ops.name_scope(self.name):
with ops.name_scope(
name, values=(values or []) + self.graph_parents) as scope:
yield scope
def _maybe_assert_dtype(self, x):
"""Helper to check dtype when self.dtype is known."""
if self.dtype is not None and self.dtype.base_dtype != x.dtype.base_dtype:
raise TypeError("Input had dtype %s but expected %s." %
(x.dtype, self.dtype))
def _cache(self, mapping):
"""Helper which stores mapping info in forward/inverse dicts."""
# Merging from lookup is an added check that we're not overwriting anything
# which is not None.
mapping = mapping.merge(mapping=self._lookup(
mapping.x, mapping.y, mapping.kwargs))
if mapping.x is None and mapping.y is None:
raise ValueError("Caching expects at least one of (x,y) to be known, "
"i.e., not None.")
self._from_x[mapping.x_key] = mapping
self._from_y[mapping.y_key] = mapping
def _lookup(self, x=None, y=None, kwargs=None):
"""Helper which retrieves mapping info from forward/inverse dicts."""
mapping = _Mapping(x=x, y=y, kwargs=kwargs)
# Since _cache requires both x,y to be set, we only need to do one cache
# lookup since the mapping is always in both or neither.
if mapping.x is not None:
return self._from_x.get(mapping.x_key, mapping)
if mapping.y is not None:
return self._from_y.get(mapping.y_key, mapping)
return mapping
def _reduce_jacobian_det_over_event(
self, y, ildj, min_event_ndims, event_ndims):
"""Reduce jacobian over event_ndims - min_event_ndims."""
# In this case, we need to tile the Jacobian over the event and reduce.
y_rank = array_ops.rank(y)
y_shape = array_ops.shape(y)[
y_rank - event_ndims : y_rank - min_event_ndims]
ones = array_ops.ones(y_shape, ildj.dtype)
reduced_ildj = math_ops.reduce_sum(
ones * ildj,
axis=self._get_event_reduce_dims(min_event_ndims, event_ndims))
# The multiplication by ones can change the inferred static shape so we try
# to recover as much as possible.
event_ndims_ = self._maybe_get_static_event_ndims(event_ndims)
if (event_ndims_ is not None and
y.shape.ndims is not None and
ildj.shape.ndims is not None):
y_shape = y.shape[y.shape.ndims - event_ndims_ :
y.shape.ndims - min_event_ndims]
broadcast_shape = array_ops.broadcast_static_shape(ildj.shape, y_shape)
reduced_ildj.set_shape(
broadcast_shape[: broadcast_shape.ndims - (
event_ndims_ - min_event_ndims)])
return reduced_ildj
def _get_event_reduce_dims(self, min_event_ndims, event_ndims):
"""Compute the reduction dimensions given event_ndims."""
event_ndims_ = self._maybe_get_static_event_ndims(event_ndims)
if event_ndims_ is not None:
return [-index for index in range(1, event_ndims_ - min_event_ndims + 1)]
else:
reduce_ndims = event_ndims - min_event_ndims
return math_ops.range(-reduce_ndims, 0)
def _check_valid_event_ndims(self, min_event_ndims, event_ndims):
"""Check whether event_ndims is at least min_event_ndims."""
event_ndims = ops.convert_to_tensor(event_ndims, name="event_ndims")
event_ndims_ = tensor_util.constant_value(event_ndims)
assertions = []
if not event_ndims.dtype.is_integer:
raise ValueError("Expected integer dtype, got dtype {}".format(
event_ndims.dtype))
if event_ndims_ is not None:
if event_ndims.shape.ndims != 0:
raise ValueError("Expected scalar event_ndims, got shape {}".format(
event_ndims.shape))
if min_event_ndims > event_ndims_:
raise ValueError("event_ndims ({}) must be at least "
"min_event_ndims ({})".format(
event_ndims_, min_event_ndims))
elif self.validate_args:
assertions += [
check_ops.assert_greater_equal(event_ndims, min_event_ndims)]
if event_ndims.shape.is_fully_defined():
if event_ndims.shape.ndims != 0:
raise ValueError("Expected scalar shape, got ndims {}".format(
event_ndims.shape.ndims))
elif self.validate_args:
assertions += [
check_ops.assert_rank(event_ndims, 0, message="Expected scalar.")]
return assertions
def _maybe_get_static_event_ndims(self, event_ndims):
"""Helper which tries to return an integer static value."""
event_ndims_ = distribution_util.maybe_get_static_value(event_ndims)
if isinstance(event_ndims_, (np.generic, np.ndarray)):
if event_ndims_.dtype not in (np.int32, np.int64):
raise ValueError("Expected integer dtype, got dtype {}".format(
event_ndims_.dtype))
if isinstance(event_ndims_, np.ndarray) and len(event_ndims_.shape):
raise ValueError("Expected a scalar integer, got {}".format(
event_ndims_))
event_ndims_ = int(event_ndims_)
return event_ndims_
| Bijector |
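`_get_event_reduce_dims` above returns the trailing negative axes `[-1, ..., -(event_ndims - min_event_ndims)]` used to sum the Jacobian over event dimensions. A minimal, framework-free sketch of that axis computation (the helper name is illustrative, not TensorFlow's API):

```python
def event_reduce_dims(min_event_ndims: int, event_ndims: int) -> list[int]:
    """Trailing axes to reduce over, mirroring the static branch above."""
    if event_ndims < min_event_ndims:
        raise ValueError("event_ndims must be at least min_event_ndims")
    return [-i for i in range(1, event_ndims - min_event_ndims + 1)]

# Reducing a rank-3 ILDJ down to min_event_ndims=1 sums the last two axes.
print(event_reduce_dims(1, 3))  # -> [-1, -2]
```

When `event_ndims == min_event_ndims` the list is empty and no reduction happens, matching the bijector's fast path.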
python | getsentry__sentry | src/sentry/models/rule.py | {
"start": 1014,
"end": 5479
} | class ____(Model):
__relocation_scope__ = RelocationScope.Organization
DEFAULT_CONDITION_MATCH = "all" # any, all
DEFAULT_FILTER_MATCH = "all" # match to apply on filters
DEFAULT_FREQUENCY = 30 # minutes
project = FlexibleForeignKey("sentry.Project")
environment_id = BoundedPositiveIntegerField(null=True)
label = models.CharField(max_length=256)
# `data` contain all the specifics of the rule - conditions, actions, frequency, etc.
data = LegacyTextJSONField(default=dict)
status = BoundedPositiveIntegerField(
default=ObjectStatus.ACTIVE,
choices=((ObjectStatus.ACTIVE, "Active"), (ObjectStatus.DISABLED, "Disabled")),
db_index=True,
)
# source is currently used as a way to distinguish rules created specifically
# for use in other parts of the product (e.g. cron monitor alerting rules)
source = BoundedPositiveIntegerField(
default=RuleSource.ISSUE,
db_default=RuleSource.ISSUE,
choices=RuleSource.as_choices(),
)
owner_user_id = HybridCloudForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete="SET_NULL")
owner_team = FlexibleForeignKey("sentry.Team", null=True, on_delete=models.SET_NULL)
date_added = models.DateTimeField(default=timezone.now)
objects: ClassVar[BaseManager[Self]] = BaseManager(cache_fields=("pk",))
class Meta:
db_table = "sentry_rule"
app_label = "sentry"
indexes = (
models.Index(fields=("project", "status", "owner_team")),
models.Index(fields=("project", "status", "owner_user_id")),
)
constraints = (
models.CheckConstraint(
condition=(
models.Q(owner_user_id__isnull=True, owner_team__isnull=False)
| models.Q(owner_user_id__isnull=False, owner_team__isnull=True)
| models.Q(owner_user_id__isnull=True, owner_team__isnull=True)
),
name="rule_owner_user_or_team_check",
),
)
__repr__ = sane_repr("project_id", "label")
@classmethod
def get_for_project(cls, project_id):
cache_key = f"project:{project_id}:rules"
rules_list = cache.get(cache_key)
if rules_list is None:
rules_list = list(cls.objects.filter(project=project_id, status=ObjectStatus.ACTIVE))
cache.set(cache_key, rules_list, 60)
return rules_list
@property
def created_by_id(self):
try:
created_activity = RuleActivity.objects.get(
rule=self, type=RuleActivityType.CREATED.value
)
return created_activity.user_id
except RuleActivity.DoesNotExist:
pass
return None
@property
def owner(self) -> Actor | None:
"""Part of ActorOwned Protocol"""
return Actor.from_id(user_id=self.owner_user_id, team_id=self.owner_team_id)
@owner.setter
def owner(self, actor: Actor | None) -> None:
"""Part of ActorOwned Protocol"""
self.owner_team_id = None
self.owner_user_id = None
if actor and actor.is_user:
self.owner_user_id = actor.id
if actor and actor.is_team:
self.owner_team_id = actor.id
def delete(self, *args, **kwargs):
rv = super().delete(*args, **kwargs)
self._clear_project_rule_cache()
return rv
def save(self, *args, **kwargs):
rv = super().save(*args, **kwargs)
self._clear_project_rule_cache()
return rv
def _clear_project_rule_cache(self) -> None:
cache_key = f"project:{self.project_id}:rules"
cache.delete(cache_key)
def get_audit_log_data(self):
return {
"label": self.label,
"data": self.data,
"status": self.status,
"environment": self.environment_id,
}
def get_rule_action_details_by_uuid(self, rule_action_uuid: str) -> dict[str, Any] | None:
actions = self.data.get("actions", None)
if not actions:
return None
for action in actions:
action_uuid = action.get("uuid", None)
if action_uuid is None:
# This should not happen, but because the data object is a dictionary, it's better to be safe
continue
if action_uuid == rule_action_uuid:
return action
return None
| Rule |
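`Rule.get_for_project` above is a cache-aside read: try the cache, fall back to the database on a miss, then populate the cache with a short TTL. A generic sketch of the same pattern with an in-memory dict standing in for Django's cache backend (all names here are illustrative):

```python
import time

class TTLCache:
    """Tiny stand-in for a cache backend with per-key expiry."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

    def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

    def delete(self, key):
        self._store.pop(key, None)

cache = TTLCache()

def get_for_project(project_id, load_from_db):
    key = f"project:{project_id}:rules"
    rules = cache.get(key)
    if rules is None:                 # cache miss: hit the database once
        rules = load_from_db(project_id)
        cache.set(key, rules, ttl=60)
    return rules
```

As in `Rule.save`/`Rule.delete` above, every write path must also `cache.delete(key)` so the next read repopulates fresh data instead of serving a stale list.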
python | huggingface__transformers | src/transformers/models/d_fine/modular_d_fine.py | {
"start": 49945,
"end": 50650
} | class ____(RTDetrConvNormLayer):
def __init__(
self,
config: DFineConfig,
in_channels: int,
out_channels: int,
kernel_size: int,
stride: int,
groups: int = 1,
padding: Optional[int] = None,
activation: Optional[str] = None,
):
super().__init__(config, in_channels, out_channels, kernel_size, stride, padding=None, activation=activation)
self.conv = nn.Conv2d(
in_channels,
out_channels,
kernel_size,
stride,
groups=groups,
padding=(kernel_size - 1) // 2 if padding is None else padding,
bias=False,
)
| DFineConvNormLayer |
python | mlflow__mlflow | mlflow/cli/genai_eval_utils.py | {
"start": 824,
"end": 1108
} | class ____:
"""
Structured cell data for table display with metadata.
"""
value: str
"""The formatted display value for the cell"""
assessment: Assessment | None = None
"""The assessment data for this cell, if it represents an assessment"""
@dataclass
| Cell |
python | davidhalter__jedi | jedi/inference/value/dynamic_arrays.py | {
"start": 5000,
"end": 6300
} | class ____(HelperValueMixin):
"""
Used for the usage of set() and list().
This is definitely a hack, but a good one :-)
It makes it possible to use set/list conversions.
This is not a proper context, because it doesn't have to be. It's not used
in the wild, it's just used within typeshed as an argument to `__init__`
for set/list and never used in any other place.
"""
def __init__(self, instance, arguments):
self._instance = instance
self._arguments = arguments
def py__class__(self):
tuple_, = self._instance.inference_state.builtins_module.py__getattribute__('tuple')
return tuple_
def py__iter__(self, contextualized_node=None):
arguments = self._arguments
try:
_, lazy_value = next(arguments.unpack())
except StopIteration:
pass
else:
yield from lazy_value.infer().iterate()
from jedi.inference.arguments import TreeArguments
if isinstance(arguments, TreeArguments):
additions = _internal_check_array_additions(arguments.context, self._instance)
yield from additions
def iterate(self, contextualized_node=None, is_async=False):
return self.py__iter__(contextualized_node)
| _DynamicArrayAdditions |
python | tensorflow__tensorflow | tensorflow/python/distribute/combinations_test.py | {
"start": 7556,
"end": 8056
} | class ____(test.TestCase, parameterized.TestCase):
@combinations.generate(
combinations.combine(
tf_function_1=combinations.tf_function,
tf_function_2=combinations.no_tf_function,
mode="eager",
))
def testFunc(self, tf_function_1, tf_function_2):
@tf_function_1
def foo():
self.assertFalse(context.executing_eagerly())
@tf_function_2
def bar():
self.assertTrue(context.executing_eagerly())
foo()
bar()
| TfFunctionTest |
python | facebook__pyre-check | tools/incremental_test/specification.py | {
"start": 9969,
"end": 10456
} | class ____(SingleUpdate):
patch: str
patch_flags: str
def update(self, environment: Environment, working_directory: Path) -> None:
environment.checked_run(
working_directory=working_directory,
command=f"patch {self.patch_flags}",
stdin=self.patch,
)
def to_json(self) -> Dict[str, Any]:
return {"kind": "patch", "patch": self.patch, "patch_flags": self.patch_flags}
@dataclass(frozen=True)
| PatchRepositoryUpdate |
python | numba__numba | numba/experimental/jitclass/base.py | {
"start": 1325,
"end": 2554
} | class ____(models.StructModel):
def __init__(self, dmm, fe_typ):
clsty = fe_typ.class_type
members = [(_mangle_attr(k), v) for k, v in clsty.struct.items()]
super(InstanceDataModel, self).__init__(dmm, fe_typ, members)
default_manager.register(types.ClassInstanceType, InstanceModel)
default_manager.register(types.ClassDataType, InstanceDataModel)
default_manager.register(types.ClassType, models.OpaqueModel)
def _mangle_attr(name):
"""
Mangle attributes.
The resulting name does not startswith an underscore '_'.
"""
return 'm_' + name
##############################################################################
# Class object
_ctor_template = """
def ctor({args}):
return __numba_cls_({args})
"""
def _getargs(fn_sig):
"""
Returns list of positional and keyword argument names in order.
"""
params = fn_sig.parameters
args = []
for k, v in params.items():
if (v.kind & v.POSITIONAL_OR_KEYWORD) == v.POSITIONAL_OR_KEYWORD:
args.append(k)
else:
msg = "%s argument type unsupported in jitclass" % v.kind
raise errors.UnsupportedError(msg)
return args
@disable_pickling
| InstanceDataModel |
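`_getargs` above walks `fn_sig.parameters` and accepts only positional-or-keyword parameters, rejecting `*args`/`**kwargs`. The same check can be sketched with the standard `inspect` module (a standalone illustration, not numba's internals):

```python
import inspect

def positional_args(fn):
    """Return positional-or-keyword parameter names in order; reject the rest."""
    args = []
    for name, param in inspect.signature(fn).parameters.items():
        if param.kind is inspect.Parameter.POSITIONAL_OR_KEYWORD:
            args.append(name)
        else:
            raise TypeError(f"{param.kind} argument type unsupported")
    return args

def ctor(self, x, y=0):
    pass

print(positional_args(ctor))  # -> ['self', 'x', 'y']
```

Defaults are allowed (only the parameter *kind* matters), which is why `y=0` passes while `*args` would raise.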
python | getsentry__sentry | tests/sentry/users/api/bases/test_user.py | {
"start": 4943,
"end": 5093
} | class ____(BaseUserEndpointTest):
endpoint = UserEndpoint()
# TODO(HC): Delete this once region silo by default changes land
| ControlUserEndpointTest |
python | coleifer__peewee | tests/regressions.py | {
"start": 34910,
"end": 35008
} | class ____(TestModel):
site = ForeignKeyField(Site, backref='pages')
title = TextField()
| Page |
python | doocs__leetcode | solution/0800-0899/0834.Sum of Distances in Tree/Solution.py | {
"start": 0,
"end": 728
} | class ____:
def sumOfDistancesInTree(self, n: int, edges: List[List[int]]) -> List[int]:
def dfs1(i: int, fa: int, d: int):
ans[0] += d
size[i] = 1
for j in g[i]:
if j != fa:
dfs1(j, i, d + 1)
size[i] += size[j]
def dfs2(i: int, fa: int, t: int):
ans[i] = t
for j in g[i]:
if j != fa:
dfs2(j, i, t - size[j] + n - size[j])
g = defaultdict(list)
for a, b in edges:
g[a].append(b)
g[b].append(a)
ans = [0] * n
size = [0] * n
dfs1(0, -1, 0)
dfs2(0, -1, ans[0])
return ans
| Solution |
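The two passes above are the standard rerooting technique: the first DFS computes subtree sizes and the answer for node 0, and the second propagates it with `ans[child] = ans[parent] - size[child] + (n - size[child])` — moving the root one edge brings `size[child]` nodes closer by 1 and pushes the remaining `n - size[child]` nodes farther by 1. A self-contained sketch:

```python
from collections import defaultdict

def sum_of_distances(n, edges):
    g = defaultdict(list)
    for a, b in edges:
        g[a].append(b)
        g[b].append(a)

    ans = [0] * n
    size = [1] * n

    def down(i, parent, depth):      # pass 1: subtree sizes + root answer
        ans[0] += depth
        for j in g[i]:
            if j != parent:
                down(j, i, depth + 1)
                size[i] += size[j]

    def up(i, parent):               # pass 2: reroot parent -> child
        for j in g[i]:
            if j != parent:
                ans[j] = ans[i] - size[j] + (n - size[j])
                up(j, i)

    down(0, -1, 0)
    up(0, -1)
    return ans

print(sum_of_distances(6, [[0, 1], [0, 2], [2, 3], [2, 4], [2, 5]]))
# -> [8, 12, 6, 10, 10, 10]
```

Both passes visit each edge once, so the whole computation is O(n) instead of the O(n²) of running a BFS from every node.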
python | pytorch__pytorch | torch/onnx/_internal/torchscript_exporter/registration.py | {
"start": 6595,
"end": 11182
} | class ____:
"""Registry for symbolic functions.
The registry maintains a mapping from qualified names to symbolic functions.
It is used to register new symbolic functions and to dispatch calls to
the appropriate function.
"""
def __init__(self) -> None:
self._registry: dict[str, _SymbolicFunctionGroup] = {}
def register(
self, name: str, opset: OpsetVersion, func: Callable, custom: bool = False
) -> None:
"""Registers a symbolic function.
Args:
name: The qualified name of the function to register. In the form of 'domain::op'.
E.g. 'aten::add'.
opset: The opset version of the function to register.
func: The symbolic function to register.
custom: Whether the function is a custom function that overrides existing ones.
Raises:
ValueError: If the separator '::' is not in the name.
"""
if "::" not in name:
raise ValueError(
f"The name must be in the form of 'domain::op', not '{name}'"
)
symbolic_functions = self._registry.setdefault(
name, _SymbolicFunctionGroup(name)
)
if custom:
symbolic_functions.add_custom(func, opset)
else:
symbolic_functions.add(func, opset)
def unregister(self, name: str, opset: OpsetVersion) -> None:
"""Unregisters a symbolic function.
Args:
name: The qualified name of the function to unregister.
opset: The opset version of the function to unregister.
"""
if name not in self._registry:
return
self._registry[name].remove_custom(opset)
def get_function_group(self, name: str) -> Optional[_SymbolicFunctionGroup]:
"""Returns the function group for the given name."""
return self._registry.get(name)
def is_registered_op(self, name: str, version: int) -> bool:
"""Returns whether the given op is registered for the given opset version."""
functions = self.get_function_group(name)
if functions is None:
return False
return functions.get(version) is not None
def all_functions(self) -> set[str]:
"""Returns the set of all registered function names."""
return set(self._registry)
def onnx_symbolic(
name: str,
opset: Union[OpsetVersion, Sequence[OpsetVersion]],
decorate: Optional[Sequence[Callable]] = None,
custom: bool = False,
) -> Callable:
"""Registers a symbolic function.
Usage::
```
@onnx_symbolic(
"aten::symbolic_b",
opset=10,
decorate=[quantized_aten_handler(scale=1 / 128, zero_point=0)],
)
@symbolic_helper.parse_args("v", "v", "b")
def symbolic_b(g: _C.Graph, x: _C.Value, y: _C.Value, arg1: bool) -> _C.Value: ...
```
Args:
name: The qualified name of the function in the form of 'domain::op'.
E.g. 'aten::add'.
opset: The opset versions of the function to register at.
decorate: A sequence of decorators to apply to the function.
custom: Whether the function is a custom symbolic function.
Raises:
ValueError: If the separator '::' is not in the name.
"""
def wrapper(func: Callable[_P, _R]) -> Callable[_P, _R]:
decorated = func
if decorate is not None:
for decorate_func in decorate:
decorated = decorate_func(decorated)
global registry
nonlocal opset
if isinstance(opset, OpsetVersion):
opset = (opset,)
for opset_version in opset:
registry.register(name, opset_version, decorated, custom=custom)
# Return the original function because the decorators in "decorate" are only
# specific to the instance being registered.
return func
return wrapper
def custom_onnx_symbolic(
name: str,
opset: Union[OpsetVersion, Sequence[OpsetVersion]],
decorate: Optional[Sequence[Callable]] = None,
) -> Callable:
"""Registers a custom symbolic function.
Args:
name: the qualified name of the function.
opset: the opset version of the function.
decorate: a sequence of decorators to apply to the function.
Returns:
The decorator.
Raises:
ValueError: If the separator '::' is not in the name.
"""
return onnx_symbolic(name, opset, decorate, custom=True)
# The registry for all symbolic functions.
registry = SymbolicRegistry()
| SymbolicRegistry |
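The registry above keys symbolic functions by `'domain::op'` and dispatches per opset version. A stripped-down sketch of the same name-validation and lookup logic, without the per-opset groups (illustrative only, not the ONNX exporter's API):

```python
class MiniRegistry:
    """Maps 'domain::op' names to callables, mirroring the validation above."""
    def __init__(self):
        self._registry = {}

    def register(self, name, func):
        if "::" not in name:
            raise ValueError(
                f"The name must be in the form of 'domain::op', not '{name}'"
            )
        self._registry[name] = func

    def is_registered_op(self, name):
        return name in self._registry

    def all_functions(self):
        return set(self._registry)

reg = MiniRegistry()
reg.register("aten::add", lambda g, x, y: (x, y))
print(reg.is_registered_op("aten::add"))  # -> True
```

The real registry layers `_SymbolicFunctionGroup` on top of this so that one name can resolve to different functions depending on the requested opset version.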
python | apache__airflow | providers/google/tests/unit/google/cloud/operators/test_bigquery.py | {
"start": 97318,
"end": 101982
} | class ____:
@pytest.mark.parametrize(
("check_type", "check_value", "check_result"),
[
("equal_to", 0, 0),
("greater_than", 0, 1),
("less_than", 0, -1),
("geq_to", 0, 1),
("geq_to", 0, 0),
("leq_to", 0, 0),
("leq_to", 0, -1),
],
)
@mock.patch("airflow.providers.google.cloud.operators.bigquery._BigQueryHookWithFlexibleProjectId")
@mock.patch("airflow.providers.google.cloud.hooks.bigquery.BigQueryJob")
def test_bigquery_column_check_operator_succeeds(
self, mock_job, mock_hook, check_type, check_value, check_result, create_task_instance_of_operator
):
mock_job.result.return_value.to_dataframe.return_value = pd.DataFrame(
{"col_name": ["col1"], "check_type": ["min"], "check_result": [check_result]}
)
mock_hook.return_value.insert_job.return_value = mock_job
ti = create_task_instance_of_operator(
BigQueryColumnCheckOperator,
dag_id="dag_id",
task_id="check_column_succeeds",
table=TEST_TABLE_ID,
use_legacy_sql=False,
column_mapping={
"col1": {"min": {check_type: check_value}},
},
)
ti.task.execute(MagicMock())
@pytest.mark.parametrize(
("check_type", "check_value", "check_result"),
[
("equal_to", 0, 1),
("greater_than", 0, -1),
("less_than", 0, 1),
("geq_to", 0, -1),
("leq_to", 0, 1),
],
)
@mock.patch("airflow.providers.google.cloud.operators.bigquery._BigQueryHookWithFlexibleProjectId")
@mock.patch("airflow.providers.google.cloud.hooks.bigquery.BigQueryJob")
def test_bigquery_column_check_operator_fails(
self, mock_job, mock_hook, check_type, check_value, check_result, create_task_instance_of_operator
):
mock_job.result.return_value.to_dataframe.return_value = pd.DataFrame(
{"col_name": ["col1"], "check_type": ["min"], "check_result": [check_result]}
)
mock_hook.return_value.insert_job.return_value = mock_job
ti = create_task_instance_of_operator(
BigQueryColumnCheckOperator,
dag_id="dag_id",
task_id="check_column_fails",
table=TEST_TABLE_ID,
use_legacy_sql=False,
column_mapping={
"col1": {"min": {check_type: check_value}},
},
)
with pytest.raises(AirflowException):
ti.task.execute(MagicMock())
@pytest.mark.parametrize(
("check_type", "check_value", "check_result"),
[
("equal_to", 0, 0),
("greater_than", 0, 1),
("less_than", 0, -1),
],
)
@mock.patch("airflow.providers.google.cloud.operators.bigquery._BigQueryHookWithFlexibleProjectId")
@mock.patch("airflow.providers.google.cloud.hooks.bigquery.BigQueryJob")
def test_encryption_configuration(self, mock_job, mock_hook, check_type, check_value, check_result):
encryption_configuration = {
"kmsKeyName": "projects/PROJECT/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY",
}
mock_job.result.return_value.to_dataframe.return_value = pd.DataFrame(
{"col_name": ["col1"], "check_type": ["min"], "check_result": [check_result]}
)
mock_hook.return_value.insert_job.return_value = mock_job
mock_hook.return_value.project_id = TEST_GCP_PROJECT_ID
operator = BigQueryColumnCheckOperator(
task_id="TASK_ID",
encryption_configuration=encryption_configuration,
table=f"{TEST_DATASET}.{TEST_TABLE_ID}",
column_mapping={"col1": {"min": {check_type: check_value}}},
location=TEST_DATASET_LOCATION,
)
operator.execute(MagicMock())
mock_hook.return_value.insert_job.assert_called_with(
configuration={
"query": {
"query": f"""SELECT col_name, check_type, check_result FROM (
SELECT 'col1' AS col_name, 'min' AS check_type, col1_min AS check_result
FROM (SELECT MIN(col1) AS col1_min FROM {TEST_DATASET}.{TEST_TABLE_ID} ) AS sq
) AS check_columns""",
"useLegacySql": True,
"destinationEncryptionConfiguration": encryption_configuration,
}
},
project_id=TEST_GCP_PROJECT_ID,
location=TEST_DATASET_LOCATION,
job_id="",
nowait=False,
)
| TestBigQueryColumnCheckOperator |
python | pytorch__pytorch | torch/fx/experimental/meta_tracer.py | {
"start": 1915,
"end": 3590
} | class ____(torch.fx.Proxy):
def install_tensor_meta(self, tensor_meta):
self._tensor_meta = tensor_meta
def size(self, dim=None):
if hasattr(self, "_tensor_meta") and self._tensor_meta is not None:
return self._tensor_meta.size(*[dim] if dim else [])
return self.tracer.create_proxy(
"call_method", "size", (self, dim) if dim else (self,), {}
)
def dim(self):
if hasattr(self, "_tensor_meta") and self._tensor_meta is not None:
return self._tensor_meta.dim()
return self.tracer.create_proxy("call_method", "dim", (self,), {})
@property
def shape(self):
if hasattr(self, "_tensor_meta") and self._tensor_meta is not None:
return self._tensor_meta.shape
return self.tracer.create_proxy(
"call_function", builtins.getattr, (self, "shape"), {}
)
@property
def dtype(self):
if hasattr(self, "_tensor_meta") and self._tensor_meta is not None:
return self._tensor_meta.dtype
return self.tracer.create_proxy(
"call_function", builtins.getattr, (self, "dtype"), {}
)
@property
def device(self):
# Hack so we can track when devices are used. During meta-tensor propagation,
# replace these values with a constant 'meta'
return MetaDeviceAttribute(self, "device")
def __getattr__(self, k):
if k == "_tensor_meta":
return self.__getattribute__(k)
# note: not added to the graph yet, if this is a method call
# we peephole optimize to the method invocation
return MetaAttribute(self, k)
| MetaProxy |
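`__getattr__` above defers unknown attribute access by returning an attribute object instead of recording a graph node immediately, so that `proxy.size(2)` can be peephole-optimized into a single method-call node. A minimal sketch of that defer-then-resolve pattern (names are illustrative):

```python
class DeferredAttribute:
    """Records `obj.name` without resolving it until it is actually called."""
    def __init__(self, root, name):
        self.root = root
        self.name = name

    def __call__(self, *args, **kwargs):
        # Only now do we commit to a single "call_method" operation.
        self.root.ops.append(("call_method", self.name, args))
        return self.root

class RecordingProxy:
    def __init__(self):
        self.ops = []

    def __getattr__(self, name):
        # Only reached when normal lookup fails, so `ops` stays accessible.
        return DeferredAttribute(self, name)

p = RecordingProxy()
p.size(2)
print(p.ops)  # -> [('call_method', 'size', (2,))]
```

If the deferred attribute is never called, nothing is recorded — which is exactly what lets the tracer avoid emitting a `getattr` node for every method access.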
python | pdm-project__pdm | src/pdm/termui.py | {
"start": 3141,
"end": 3590
} | class ____:
if is_legacy_windows():
SUCC = "v"
FAIL = "x"
LOCK = " "
POPPER = " "
ELLIPSIS = "..."
ARROW_SEPARATOR = ">"
else:
SUCC = ":heavy_check_mark:"
FAIL = ":heavy_multiplication_x:"
LOCK = ":lock:"
POPPER = ":party_popper:"
ELLIPSIS = "…"
ARROW_SEPARATOR = "➤"
if is_legacy_windows():
SPINNER = "line"
else:
SPINNER = "dots"
| Emoji |
python | ApeWorX__ape | src/ape/api/transactions.py | {
"start": 10158,
"end": 22311
} | class ____(ExtraAttributesMixin, BaseInterfaceModel):
"""
An abstract class to represent a transaction receipt. The receipt
contains information about the transaction, such as the status
and required confirmations.
**NOTE**: Use a ``required_confirmations`` of ``0`` in your transaction
to not wait for confirmations.
Get a receipt by making transactions in ``ape``, such as interacting with
a :class:`ape.contracts.base.ContractInstance`.
"""
contract_address: Optional[AddressType] = None
block_number: HexInt
gas_used: HexInt
logs: list[dict] = []
status: HexInt
txn_hash: HexStr
transaction: TransactionAPI
_error: Optional[TransactionError] = None
@log_instead_of_fail(default="<ReceiptAPI>")
def __repr__(self) -> str:
cls_name = getattr(self.__class__, "__name__", ReceiptAPI.__name__)
return f"<{cls_name} {self.txn_hash}>"
def __ape_extra_attributes__(self) -> Iterator[ExtraModelAttributes]:
yield ExtraModelAttributes(name="transaction", attributes=lambda: vars(self.transaction))
@field_validator("transaction", mode="before")
@classmethod
def _validate_transaction(cls, value):
if not isinstance(value, dict):
# Already a `TransactionAPI`.
return value
# Attempt to create a transaction model for the data.
if provider := cls.network_manager.active_provider:
ecosystem = provider.network.ecosystem
else:
logger.warning(
"Given raw-transaction data when not connected to any provider. "
"Network is unknown. Assuming EVM-like transaction model."
)
ecosystem = cls.network_manager.ethereum
return ecosystem.create_transaction(**value)
@cached_property
def debug_logs_typed(self) -> list[tuple[Any]]:
"""Return any debug log data outputted by the transaction."""
return []
@cached_property
def debug_logs_lines(self) -> list[str]:
"""
Return any debug log data outputted by the transaction as strings suitable for printing
"""
return [" ".join(map(str, ln)) for ln in self.debug_logs_typed]
@property
def error(self) -> Optional[TransactionError]:
return self._error
@error.setter
def error(self, value: TransactionError):
self._error = value
def show_debug_logs(self):
"""
Output debug logs to logging system
"""
for ln in self.debug_logs_lines:
logger.info(f"[DEBUG-LOG] {ln}")
@property
def failed(self) -> bool:
"""
Whether the receipt represents a failing transaction.
Ecosystem plugins override this property when their receipts
are able to be failing.
"""
return False
@property
def confirmed(self) -> bool:
"""
``True`` when the number of confirmations is equal or greater
to the required amount of confirmations.
"""
return self._confirmations_occurred == self.required_confirmations
@property
@abstractmethod
def total_fees_paid(self) -> int:
"""
The total amount of fees paid for the transaction.
"""
@property
@abstractmethod
def ran_out_of_gas(self) -> bool:
"""
Check if a transaction ran out of gas and failed.
Returns:
bool: ``True`` when the transaction failed and used the
same amount of gas as the given ``gas_limit``.
"""
@cached_property
def trace(self) -> "TraceAPI":
"""
The :class:`~ape.api.trace.TraceAPI` of the transaction.
"""
return self.provider.get_transaction_trace(self.txn_hash)
@property
def _explorer(self) -> Optional["ExplorerAPI"]:
return self.provider.network.explorer
@property
def _block_time(self) -> int:
return self.provider.network.block_time
@property
def _confirmations_occurred(self) -> int:
latest_block = self.provider.get_block("latest")
if latest_block.number is None:
return 0
return latest_block.number - self.block_number
@cached_property
def block(self) -> "BlockAPI":
return self.chain_manager.blocks[self.block_number]
@property
def timestamp(self) -> int:
return self.block.timestamp
@property
def datetime(self) -> "datetime_type":
return self.block.datetime
@cached_property
def events(self) -> "ContractLogContainer":
"""
All the events that were emitted from this call.
"""
return self.decode_logs() # Decodes all logs by default.
@abstractmethod
def decode_logs(
self,
abi: Optional[
Union[list[Union["EventABI", "ContractEvent"]], Union["EventABI", "ContractEvent"]]
] = None,
) -> "ContractLogContainer":
"""
Decode the logs on the receipt.
Args:
abi (``EventABI``): The ABI of the event to decode into logs.
Returns:
list[:class:`~ape.types.ContractLog`]
"""
def raise_for_status(self) -> Optional[NoReturn]:
"""
Handle provider-specific errors regarding a non-successful
:class:`~api.providers.TransactionStatusEnum`.
"""
def await_confirmations(self) -> "ReceiptAPI":
"""
Wait for a transaction to be considered confirmed.
Returns:
:class:`~ape.api.ReceiptAPI`: The receipt that is now confirmed.
"""
# NOTE: Even when required_confirmations is `0`, we want to wait for the nonce to
# increment. Otherwise, users may end up with invalid nonce errors in tests.
self._await_sender_nonce_increment()
if self.required_confirmations == 0 or self._check_error_status() or self.confirmed:
return self
# Confirming now.
self._log_submission()
self._await_confirmations()
return self
def _await_sender_nonce_increment(self):
if not self.sender:
return
iterations_timeout = 20
iteration = 0
sender_nonce = self.provider.get_nonce(self.sender)
while sender_nonce == self.nonce:
time.sleep(1)
sender_nonce = self.provider.get_nonce(self.sender)
iteration += 1
if iteration != iterations_timeout:
continue
tx_err = TransactionError("Timeout waiting for sender's nonce to increase.")
self.error = tx_err
if self.transaction.raise_on_revert:
raise tx_err
else:
break
def _log_submission(self):
if explorer_url := self._explorer and self._explorer.get_transaction_url(self.txn_hash):
log_message = f"Submitted {explorer_url}"
else:
log_message = f"Submitted {self.txn_hash}"
logger.info(log_message)
def _check_error_status(self) -> bool:
try:
self.raise_for_status()
except TransactionError:
# Skip waiting for confirmations when the transaction has failed.
return True
return False
def _await_confirmations(self):
if self.required_confirmations <= 0:
return
with ConfirmationsProgressBar(self.required_confirmations) as progress_bar:
while not self.confirmed:
confirmations_occurred = self._confirmations_occurred
if confirmations_occurred >= self.required_confirmations:
break
progress_bar.confs = confirmations_occurred
time_to_sleep = int(self._block_time / 2)
time.sleep(time_to_sleep)
@property
def method_called(self) -> Optional["MethodABI"]:
"""
The method ABI of the method called to produce this receipt.
"""
return None
@cached_property
def return_value(self) -> Any:
"""
Obtain the final return value of the call. Requires tracing to function,
since this is not available from the receipt object.
"""
if trace := self.trace:
ret_val = trace.return_value
return ret_val[0] if isinstance(ret_val, tuple) and len(ret_val) == 1 else ret_val
return None
@property
@raises_not_implemented
def source_traceback(self) -> "SourceTraceback": # type: ignore[empty-body]
"""
A Pythonic style traceback for both failing and non-failing receipts.
Requires a provider that implements
:meth:~ape.api.providers.ProviderAPI.get_transaction_trace`.
"""
@raises_not_implemented
def show_trace(self, verbose: bool = False, file: IO[str] = sys.stdout):
"""
Display the complete sequence of contracts and methods called during
the transaction.
Args:
verbose (bool): Set to ``True`` to include more information.
file (IO[str]): The file to send output to. Defaults to stdout.
"""
@raises_not_implemented
def show_gas_report(self, file: IO[str] = sys.stdout):
"""
Display a gas report for the calls made in this transaction.
"""
@raises_not_implemented
def show_source_traceback(self):
"""
Show a receipt traceback mapping to lines in the source code.
Only works when the contract type and source code are both available,
like in local projects.
"""
@raises_not_implemented
def show_events(self):
"""
Show the events from the receipt.
"""
def track_gas(self):
"""
Track this receipt's gas in the on-going session gas-report.
Requires using a provider that supports transaction traces
to get full data. Else, is limited to receipt-level data.
This gets called when running tests with the ``--gas`` flag.
"""
address = self.receiver or self.contract_address
if not address or not self._test_runner:
return
if self.provider.supports_tracing and (trace := self.trace):
tracker = self._test_runner.gas_tracker
tracker.append_gas(trace, address)
elif (
(contract_type := self.chain_manager.contracts.get(address))
and contract_type.source_id
and (method := self.method_called)
):
# Can only track top-level gas.
if contract := self.local_project._create_contract_source(contract_type):
self._test_runner.gas_tracker.append_toplevel_gas(contract, method, self.gas_used)
def track_coverage(self):
"""
Track this receipt's source code coverage in the on-going
session coverage report. Requires using a provider that supports
transaction traces to track full coverage. Else, is limited
to receipt-level tracking. This gets called when running tests with
the ``--coverage`` flag.
"""
if not self.network_manager.active_provider or not self._test_runner:
return
if not (address := self.receiver):
# NOTE: Deploy txns are currently not tracked!
return
tracker = self._test_runner.coverage_tracker
if self.provider.supports_tracing and (traceback := self.source_traceback):
if len(traceback) > 0:
tracker.cover(traceback)
elif method := self.method_called:
# Unable to track detailed coverage like statement or branch
# The user will receive a warning at the end regarding this.
# At the very least, we can track function coverage.
contract_type = self.chain_manager.contracts.get(address)
if not contract_type or not contract_type.source_id:
return
if contract := self.local_project._create_contract_source(contract_type):
tracker.hit_function(contract, method)
| ReceiptAPI |
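`_await_confirmations` above polls until `latest_block - receipt_block >= required`, sleeping between checks. A provider-agnostic sketch of that loop with an injectable block source and sleep function (these names are assumptions for illustration, not ape's API):

```python
import time

def await_confirmations(get_latest_block, receipt_block, required,
                        poll_interval=0.0, sleep=time.sleep):
    """Block until the chain has produced `required` blocks past the receipt."""
    if required <= 0:
        return 0
    while True:
        confirmations = get_latest_block() - receipt_block
        if confirmations >= required:
            return confirmations
        sleep(poll_interval)

# Simulate a chain that advances one block per poll.
height = iter(range(100, 110))
print(await_confirmations(lambda: next(height), 100, 3))  # -> 3
```

Injecting `get_latest_block` and `sleep` keeps the loop testable without a live provider, the same way the class above isolates chain access behind `self.provider`.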
python | scipy__scipy | benchmarks/benchmarks/fft_basic.py | {
"start": 8805,
"end": 10009
} | class ____(Benchmark):
params = [
['100x100', '1000x100', '256x256', '512x512'],
[1, 8, 32, 100],
['workers', 'threading']
]
param_names = ['size', 'num_transforms', 'method']
def setup(self, size, num_transforms, method):
if not has_scipy_fft:
raise NotImplementedError
size = list(map(int, size.split("x")))
self.xs = [(random(size)+1j*random(size)).astype(np.complex128)
for _ in range(num_transforms)]
if method == 'threading':
self.pool = futures.ThreadPoolExecutor(os.cpu_count())
def map_thread(self, func):
f = []
for x in self.xs:
f.append(self.pool.submit(func, x))
futures.wait(f)
def time_fft(self, size, num_transforms, method):
if method == 'threading':
self.map_thread(scipy_fft.fft)
else:
for x in self.xs:
scipy_fft.fft(x, workers=-1)
def time_fftn(self, size, num_transforms, method):
if method == 'threading':
self.map_thread(scipy_fft.fftn)
else:
for x in self.xs:
scipy_fft.fftn(x, workers=-1)
| FftThreading |
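The benchmark's `map_thread` fans one task per array out to a thread pool and blocks on `futures.wait`. The same pattern in miniature, with a trivial payload in place of the FFT:

```python
from concurrent import futures

def map_thread(pool, func, items):
    """Submit one task per item and wait for all of them, as in the benchmark."""
    fs = [pool.submit(func, x) for x in items]
    futures.wait(fs)
    return [f.result() for f in fs]

with futures.ThreadPoolExecutor(max_workers=4) as pool:
    print(map_thread(pool, lambda x: x * x, [1, 2, 3]))  # -> [1, 4, 9]
```

Collecting `f.result()` in submission order keeps outputs aligned with inputs regardless of which thread finishes first.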
python | tensorflow__tensorflow | tensorflow/python/data/util/options_test.py | {
"start": 1268,
"end": 1413
} | class ____(options.OptionsBase):
opts = options.create_option(
name="opts", ty=_TestOptions, docstring="nested options")
| _NestedTestOptions |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/github_schema.py | {
"start": 78208,
"end": 78555
} | class ____(sgqlc.types.Enum):
"""The possible target states when updating a pull request.
Enumeration Choices:
* `CLOSED`: A pull request that has been closed without being
merged.
* `OPEN`: A pull request that is still open.
"""
__schema__ = github_schema
__choices__ = ("CLOSED", "OPEN")
| PullRequestUpdateState |
python | nmslib__hnswlib | tests/python/bindings_test_metadata.py | {
"start": 54,
"end": 1584
} | class ____(unittest.TestCase):
def testMetadata(self):
dim = 16
num_elements = 10000
# Generating sample data
data = np.float32(np.random.random((num_elements, dim)))
# Declaring index
p = hnswlib.Index(space='l2', dim=dim) # possible options are l2, cosine or ip
    # Initializing the index
# max_elements - the maximum number of elements, should be known beforehand
# (probably will be made optional in the future)
#
# ef_construction - controls index search speed/build speed tradeoff
    # M - is tightly connected with the internal dimensionality of the data and
    #     strongly affects the memory consumption
p.init_index(max_elements=num_elements, ef_construction=100, M=16)
# Controlling the recall by setting ef:
# higher ef leads to better accuracy, but slower search
p.set_ef(100)
p.set_num_threads(4) # by default using all available cores
print("Adding all elements (%d)" % (len(data)))
p.add_items(data)
# test methods
self.assertEqual(p.get_max_elements(), num_elements)
self.assertEqual(p.get_current_count(), num_elements)
# test properties
self.assertEqual(p.space, 'l2')
self.assertEqual(p.dim, dim)
self.assertEqual(p.M, 16)
self.assertEqual(p.ef_construction, 100)
self.assertEqual(p.max_elements, num_elements)
self.assertEqual(p.element_count, num_elements)
| RandomSelfTestCase |
python | scipy__scipy | benchmarks/benchmarks/go_benchmark_functions/go_funcs_T.py | {
"start": 5935,
"end": 7091
} | class ____(Benchmark):
r"""
Three Hump Camel objective function.
This class defines the Three Hump Camel [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{ThreeHumpCamel}}(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6}
+ x_1x_2 + x_2^2
with :math:`x_i \in [-5, 5]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [0, 0]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-5.0] * self.N, [5.0] * self.N))
self.custom_bounds = [(-2, 2), (-1.5, 1.5)]
self.global_optimum = [[0.0, 0.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return (2.0 * x[0] ** 2.0 - 1.05 * x[0] ** 4.0 + x[0] ** 6 / 6.0
+ x[0] * x[1] + x[1] ** 2.0)
| ThreeHumpCamel |
python | pypa__pip | src/pip/_internal/exceptions.py | {
"start": 25591,
"end": 26133
} | class ____(DiagnosticPipError):
reference = "uninstall-distutils-installed-package"
def __init__(self, *, distribution: BaseDistribution) -> None:
super().__init__(
message=Text(f"Cannot uninstall {distribution}"),
context=(
"It is a distutils installed project and thus we cannot accurately "
"determine which files belong to it which would lead to only a partial "
"uninstall."
),
hint_stmt=None,
)
| LegacyDistutilsInstall |
python | getsentry__sentry | tests/sentry/issue_detection/test_performance_detection.py | {
"start": 22346,
"end": 28503
} | class ____(TestCase):
def test_save_and_fetch(self) -> None:
event = Event(self.project.id, "something")
problem = PerformanceProblem(
"test",
"db",
"something bad happened",
PerformanceNPlusOneGroupType,
["1"],
["2", "3", "4"],
["4", "5", "6"],
{},
[],
)
EventPerformanceProblem(event, problem).save()
found = EventPerformanceProblem.fetch(event, problem.fingerprint)
assert found is not None
assert found.problem == problem
def test_fetch_multi(self) -> None:
event_1 = Event(self.project.id, "something")
event_1_problems = [
PerformanceProblem(
"test",
"db",
"something bad happened",
PerformanceNPlusOneGroupType,
["1"],
["2", "3", "4"],
["4", "5", "6"],
{},
[],
),
PerformanceProblem(
"test_2",
"db",
"something horrible happened",
PerformanceSlowDBQueryGroupType,
["234"],
["67", "87686", "786"],
["4", "5", "6"],
{},
[],
),
]
event_2 = Event(self.project.id, "something else")
event_2_problems = [
PerformanceProblem(
"event_2_test",
"db",
"something happened",
PerformanceNPlusOneGroupType,
["1"],
["a", "b", "c"],
["d", "e", "f"],
{},
[],
),
PerformanceProblem(
"event_2_test_2",
"db",
"hello",
PerformanceSlowDBQueryGroupType,
["234"],
["fdgh", "gdhgf", "gdgh"],
["gdf", "yu", "kjl"],
{},
[],
),
]
all_event_problems = [
(event, problem)
for event, problems in ((event_1, event_1_problems), (event_2, event_2_problems))
for problem in problems
]
for event, problem in all_event_problems:
EventPerformanceProblem(event, problem).save()
unsaved_problem = PerformanceProblem(
"fake_fingerprint",
"db",
"hello",
PerformanceSlowDBQueryGroupType,
["234"],
["fdgh", "gdhgf", "gdgh"],
["gdf", "yu", "kjl"],
{},
[],
)
result = EventPerformanceProblem.fetch_multi(
[
(event, problem.fingerprint)
for event, problem in all_event_problems + [(event, unsaved_problem)]
]
)
assert [r.problem if r else None for r in result] == [
problem for _, problem in all_event_problems
] + [None]
@pytest.mark.parametrize(
"spans, duration",
[
pytest.param(
[
{
"start_timestamp": 0,
"timestamp": 0.011,
}
],
11,
),
pytest.param(
[
{
"start_timestamp": 0,
"timestamp": 0.011,
},
{
"start_timestamp": 0,
"timestamp": 0.011,
},
],
11,
id="parallel spans",
),
pytest.param(
[
{
"start_timestamp": 0,
"timestamp": 0.011,
},
{
"start_timestamp": 1.0,
"timestamp": 1.011,
},
],
22,
id="separate spans",
),
pytest.param(
[
{
"start_timestamp": 0,
"timestamp": 0.011,
},
{
"start_timestamp": 0.005,
"timestamp": 0.016,
},
],
16,
id="overlapping spans",
),
pytest.param(
[
{
"start_timestamp": 0,
"timestamp": 0.011,
},
{
"start_timestamp": 0.005,
"timestamp": 0.016,
},
{
"start_timestamp": 0.015,
"timestamp": 0.032,
},
],
32,
id="multiple overlapping spans",
),
pytest.param(
[
{
"start_timestamp": 0,
"timestamp": 0.011,
},
{
"start_timestamp": 0.011,
"timestamp": 0.022,
},
{
"start_timestamp": 0.022,
"timestamp": 0.033,
},
],
33,
id="multiple overlapping touching spans",
),
pytest.param(
[
{
"start_timestamp": 0,
"timestamp": 0.011,
},
{
"start_timestamp": 0.005,
"timestamp": 0.022,
},
{
"start_timestamp": 0.033,
"timestamp": 0.045,
},
{
"start_timestamp": 0.045,
"timestamp": 0.055,
},
],
44,
id="multiple overlapping spans with gaps",
),
],
)
def test_total_span_time(spans: list[Span], duration: float) -> None:
assert total_span_time(spans) == pytest.approx(duration, 0.01)
| EventPerformanceProblemTest |
python | pypa__pip | tests/unit/test_cli_spinners.py | {
"start": 661,
"end": 1922
} | class ____:
@pytest.mark.parametrize(
"status, func",
[
("done", lambda: None),
("error", lambda: 1 / 0),
("canceled", Mock(side_effect=KeyboardInterrupt)),
],
)
def test_finish(self, status: str, func: Callable[[], None]) -> None:
"""
Check that the spinner finish message is set correctly depending
on how the spinner came to a stop.
"""
stream = StringIO()
try:
with patch_logger_level(logging.INFO):
with open_rich_spinner("working", Console(file=stream)):
func()
except BaseException:
pass
output = stream.getvalue()
assert output == f"working ... {status}"
@pytest.mark.parametrize(
"level, visible",
[(logging.ERROR, False), (logging.INFO, True), (logging.DEBUG, True)],
)
def test_verbosity(self, level: int, visible: bool) -> None:
"""Is the spinner hidden at the appropriate verbosity?"""
stream = StringIO()
with patch_logger_level(level):
with open_rich_spinner("working", Console(file=stream)):
pass
assert bool(stream.getvalue()) == visible
| TestRichSpinner |
python | chroma-core__chroma | chromadb/api/types.py | {
"start": 16357,
"end": 16553
} | class ____(TypedDict):
ids: IDs
embeddings: Embeddings
metadatas: Optional[Metadatas]
documents: Optional[Documents]
uris: Optional[URIs]
# Add result doesn't exist.
| AddRequest |
python | dask__dask | dask/dataframe/dask_expr/_expr.py | {
"start": 112961,
"end": 113100
} | class ____(MaybeAlignPartitions):
_parameters = ["frame", "other", "func", "fill_value"]
_expr_cls = CombineSeries
| CombineSeriesAlign |
python | django__django | tests/sites_tests/tests.py | {
"start": 9738,
"end": 13146
} | class ____(TestCase):
databases = {"default", "other"}
@classmethod
def setUpTestData(cls):
# Delete the site created as part of the default migration process.
Site.objects.all().delete()
def setUp(self):
self.app_config = apps.get_app_config("sites")
def test_basic(self):
"""
#15346, #15573 - create_default_site() creates an example site only if
none exist.
"""
with captured_stdout() as stdout:
create_default_site(self.app_config)
self.assertEqual(Site.objects.count(), 1)
self.assertIn("Creating example.com", stdout.getvalue())
with captured_stdout() as stdout:
create_default_site(self.app_config)
self.assertEqual(Site.objects.count(), 1)
self.assertEqual("", stdout.getvalue())
@override_settings(DATABASE_ROUTERS=[JustOtherRouter()])
def test_multi_db_with_router(self):
"""
#16353, #16828 - The default site creation should respect db routing.
"""
create_default_site(self.app_config, using="default", verbosity=0)
create_default_site(self.app_config, using="other", verbosity=0)
self.assertFalse(Site.objects.using("default").exists())
self.assertTrue(Site.objects.using("other").exists())
def test_multi_db(self):
create_default_site(self.app_config, using="default", verbosity=0)
create_default_site(self.app_config, using="other", verbosity=0)
self.assertTrue(Site.objects.using("default").exists())
self.assertTrue(Site.objects.using("other").exists())
def test_save_another(self):
"""
#17415 - Another site can be created right after the default one.
On some backends the sequence needs to be reset after saving with an
explicit ID. There shouldn't be a sequence collisions by saving another
site. This test is only meaningful with databases that use sequences
for automatic primary keys such as PostgreSQL and Oracle.
"""
create_default_site(self.app_config, verbosity=0)
Site(domain="example2.com", name="example2.com").save()
def test_signal(self):
"""
#23641 - Sending the ``post_migrate`` signal triggers creation of the
default site.
"""
post_migrate.send(
sender=self.app_config, app_config=self.app_config, verbosity=0
)
self.assertTrue(Site.objects.exists())
@override_settings(SITE_ID=35696)
def test_custom_site_id(self):
"""
#23945 - The configured ``SITE_ID`` should be respected.
"""
create_default_site(self.app_config, verbosity=0)
self.assertEqual(Site.objects.get().pk, 35696)
@override_settings() # Restore original ``SITE_ID`` afterward.
def test_no_site_id(self):
"""
#24488 - The pk should default to 1 if no ``SITE_ID`` is configured.
"""
del settings.SITE_ID
create_default_site(self.app_config, verbosity=0)
self.assertEqual(Site.objects.get().pk, 1)
def test_unavailable_site_model(self):
"""
#24075 - A Site shouldn't be created if the model isn't available.
"""
apps = Apps()
create_default_site(self.app_config, verbosity=0, apps=apps)
self.assertFalse(Site.objects.exists())
| CreateDefaultSiteTests |
python | pytorch__pytorch | tools/code_coverage/package/util/setting.py | {
"start": 1115,
"end": 1323
} | class ____:
need_build: bool = False
need_run: bool = False
need_merge: bool = False
need_export: bool = False
need_summary: bool = False
need_pytest: bool = False
# test platform
| Option |
python | walkccc__LeetCode | solutions/2567. Minimum Score by Changing Two Elements/2567.py | {
"start": 0,
"end": 475
} | class ____:
def minimizeSum(self, nums: list[int]) -> int:
nums.sort()
# Can always change the number to any other number in `nums`, so `low` becomes 0.
# Thus, rephrase the problem as finding the minimum `high`.
highOfChangingTwoMins = nums[-1] - nums[2]
highOfChangingTwoMaxs = nums[-3] - nums[0]
highOfChangingMinAndMax = nums[-2] - nums[1]
return min(highOfChangingTwoMins, highOfChangingTwoMaxs,
highOfChangingMinAndMax)
| Solution |
python | langchain-ai__langchain | libs/core/langchain_core/structured_query.py | {
"start": 3026,
"end": 3094
} | class ____(Expr, ABC):
"""Filtering expression."""
| FilterDirective |
python | doocs__leetcode | solution/2400-2499/2491.Divide Players Into Teams of Equal Skill/Solution.py | {
"start": 0,
"end": 351
} | class ____:
def dividePlayers(self, skill: List[int]) -> int:
skill.sort()
t = skill[0] + skill[-1]
i, j = 0, len(skill) - 1
ans = 0
while i < j:
if skill[i] + skill[j] != t:
return -1
ans += skill[i] * skill[j]
i, j = i + 1, j - 1
return ans
| Solution |
python | keras-team__keras | keras/src/layers/activations/softmax_test.py | {
"start": 115,
"end": 2911
} | class ____(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_softmax(self):
self.run_layer_test(
softmax.Softmax,
init_kwargs={},
input_shape=(2, 3, 4),
supports_masking=True,
assert_built_after_instantiation=True,
)
def test_softmax_correctness(self):
softmax_layer = softmax.Softmax()
input = np.array([[1.0, 2.0, 1.0], [1.0, 2.0, 1.0]])
expected_output = np.array(
[
[0.21194157, 0.5761169, 0.21194157],
[0.21194157, 0.5761169, 0.21194157],
]
)
result = softmax_layer(input)
self.assertAllClose(result, expected_output)
def test_softmax_correctness_with_mask(self):
softmax_layer = softmax.Softmax(axis=(1, 0))
input = np.array([[1.0, 2.0, 1.0], [1.0, 2.0, 1.0]])
mask = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
expected_output = np.array(
[[0.21194154, 0.0, 0.21194154], [0.0, 0.57611686, 0.0]]
)
result = softmax_layer(input, mask=mask)
self.assertAllClose(result, expected_output)
def test_softmax_correctness_with_axis(self):
softmax_layer = softmax.Softmax(axis=(1))
input = np.array([[1.0, 2.0, 1.0], [1.0, 2.0, 1.0]])
expected_output = np.array(
[
[0.21194157, 0.5761169, 0.21194157],
[0.21194157, 0.5761169, 0.21194157],
]
)
result = softmax_layer(input)
self.assertAllClose(result, expected_output)
def test_softmax_masked_values_are_zero_including_fully_masked(self):
"""
Tests softmax with mask on default axis (-1).
Ensures output is 0 where mask is False.
Includes a row where all elements are masked.
"""
softmax_layer = softmax.Softmax() # Default axis = -1
input = np.array(
[
[1.0, 2.0, 5.0, 1.0],
[1.0, 1.0, 1.0, 1.0],
[3.0, 1.0, 2.0, 4.0],
],
dtype=np.float32,
)
mask = np.array(
[
[True, True, False, False], # Partially masked
[False, False, False, False], # Fully masked
[True, True, True, True], # Not masked
],
dtype=bool,
)
expected_output = np.array(
[
[0.268941, 0.731059, 0.0, 0.0], # last two masked
[0.0, 0.0, 0.0, 0.0], # Fully masked row should be all zeros
[0.236883, 0.032059, 0.087144, 0.643914],
]
)
result = softmax_layer(input, mask=mask)
self.assertAllClose(result, expected_output)
| SoftmaxTest |
python | jmcnamara__XlsxWriter | xlsxwriter/test/drawing/test_write_c_nv_pr.py | {
"start": 341,
"end": 1548
} | class ____(unittest.TestCase):
"""
Test the Drawing _write_c_nv_pr() method.
"""
def setUp(self):
self.fh = StringIO()
self.drawing = Drawing()
self.drawing._set_filehandle(self.fh)
def test_write_c_nv_pr(self):
"""Test the _write_c_nv_pr() method"""
drawing_info = DrawingInfo()
self.drawing._write_c_nv_pr(2, drawing_info, "Chart 1")
exp = """<xdr:cNvPr id="2" name="Chart 1"/>"""
got = self.fh.getvalue()
self.assertEqual(exp, got)
def test_write_c_nv_pr_with_hyperlink(self):
"""Test the _write_c_nv_pr() method with a hyperlink"""
url = Url("https://test")
url.tip = "tip"
url._rel_index = 1
drawing_info = DrawingInfo()
drawing_info._tip = "tip"
drawing_info._rel_index = 1
drawing_info._url = url
self.drawing._write_c_nv_pr(2, drawing_info, "Chart 1")
exp = """<xdr:cNvPr id="2" name="Chart 1"><a:hlinkClick xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" r:id="rId1" tooltip="tip"/></xdr:cNvPr>"""
got = self.fh.getvalue()
self.assertEqual(exp, got)
| TestWriteXdrcNvPr |
python | apache__airflow | providers/google/src/airflow/providers/google/cloud/operators/vertex_ai/experiment_service.py | {
"start": 7145,
"end": 10457
} | class ____(GoogleCloudBaseOperator):
"""
    Use the Vertex AI SDK to create an experiment run.
:param project_id: Required. The ID of the Google Cloud project that the service belongs to.
:param location: Required. The ID of the Google Cloud location that the service belongs to.
:param experiment_name: Required. The name of the evaluation experiment.
:param experiment_run_name: Required. The specific run name or ID for this experiment.
:param experiment_run_tensorboard: Optional. A backing TensorBoard resource to enable and store time series
metrics logged to this experiment run using log_time_series_metrics.
    :param run_after_creation: Optional. If True, the experiment run will be created in the running state.
:param gcp_conn_id: The connection ID to use connecting to Google Cloud.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with first
account from the list granting this role to the originating account (templated).
"""
template_fields = (
"location",
"project_id",
"impersonation_chain",
"experiment_name",
"experiment_run_name",
)
def __init__(
self,
*,
project_id: str,
location: str,
experiment_name: str,
experiment_run_name: str,
experiment_run_tensorboard: str | None = None,
run_after_creation: bool = False,
gcp_conn_id: str = "google_cloud_default",
impersonation_chain: str | Sequence[str] | None = None,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.project_id = project_id
self.location = location
self.experiment_name = experiment_name
self.experiment_run_name = experiment_run_name
self.experiment_run_tensorboard = experiment_run_tensorboard
self.run_after_creation = run_after_creation
self.gcp_conn_id = gcp_conn_id
self.impersonation_chain = impersonation_chain
def execute(self, context: Context) -> None:
self.hook = ExperimentRunHook(
gcp_conn_id=self.gcp_conn_id,
impersonation_chain=self.impersonation_chain,
)
try:
self.hook.create_experiment_run(
project_id=self.project_id,
location=self.location,
experiment_name=self.experiment_name,
experiment_run_name=self.experiment_run_name,
experiment_run_tensorboard=self.experiment_run_tensorboard,
run_after_creation=self.run_after_creation,
)
except exceptions.AlreadyExists:
            raise AirflowException(f"Experiment Run with name {self.experiment_run_name} already exists")
self.log.info("Created experiment run: %s", self.experiment_run_name)
| CreateExperimentRunOperator |
python | crytic__slither | slither/slithir/variables/local_variable.py | {
"start": 287,
"end": 2528
} | class ____(
LocalVariable, SlithIRVariable
): # pylint: disable=too-many-instance-attributes
def __init__(self, local_variable: LocalVariable) -> None:
assert isinstance(local_variable, LocalVariable)
super().__init__()
# initiate ChildContract
self.set_function(local_variable.function)
# initiate Variable
self._name = local_variable.name
self._initial_expression = local_variable.expression
self._type = local_variable.type
self._initialized = local_variable.initialized
self._visibility = local_variable.visibility
self._is_constant = local_variable.is_constant
# initiate LocalVariable
self._location = local_variable.location
self._is_storage = local_variable.is_storage
self._index = 0
# Additional field
# points to state variables
self._refers_to: Set[StateIRVariable] = set()
# keep un-ssa version
if isinstance(local_variable, LocalIRVariable):
self._non_ssa_version = local_variable.non_ssa_version
else:
self._non_ssa_version = local_variable
@property
def index(self) -> int:
return self._index
@index.setter
def index(self, idx: int) -> None:
self._index = idx
@property
def refers_to(self):
"""
        Return the aliases for local variables that are storage pointers
"""
if self.is_storage:
return self._refers_to
return set()
@refers_to.setter
def refers_to(self, variables):
self._refers_to = variables
@property
def non_ssa_version(self) -> LocalVariable:
return self._non_ssa_version
def add_refers_to(self, variable: StateIRVariable) -> None:
        # It is a TemporaryVariable if it's the return of a new ..
# ex: string[] memory dynargs = new string[](1);
assert isinstance(variable, (SlithIRVariable, TemporaryVariable))
self._refers_to.add(variable)
@property
def ssa_name(self):
if self.is_storage:
return f"{self._name}_{self.index} (-> {[v.name for v in self.refers_to]})"
return f"{self._name}_{self.index}"
| LocalIRVariable |
python | spyder-ide__spyder | spyder/plugins/updatemanager/workers.py | {
"start": 8349,
"end": 8445
} | class ____(Exception):
    """Error occurred while downloading a file"""
pass
| UpdateDownloadError |
python | charliermarsh__ruff | crates/ruff_linter/resources/test/fixtures/flake8_bugbear/B018.py | {
"start": 0,
"end": 28
} | class ____:
"""abc"""
| Foo1 |
python | pandas-dev__pandas | pandas/tests/window/test_numba.py | {
"start": 13805,
"end": 20114
} | class ____:
def test_table_series_valueerror(self):
def f(x):
return np.sum(x, axis=0) + 1
with pytest.raises(
ValueError, match="method='table' not applicable for Series objects."
):
Series(range(1)).rolling(1, method="table").apply(
f, engine="numba", raw=True
)
def test_table_method_rolling_methods(
self,
nogil,
parallel,
nopython,
arithmetic_numba_supported_operators,
step,
):
method, kwargs = arithmetic_numba_supported_operators
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
df = DataFrame(np.eye(3))
roll_table = df.rolling(2, method="table", min_periods=0, step=step)
if method in ("var", "std"):
with pytest.raises(NotImplementedError, match=f"{method} not supported"):
getattr(roll_table, method)(
engine_kwargs=engine_kwargs, engine="numba", **kwargs
)
else:
roll_single = df.rolling(2, method="single", min_periods=0, step=step)
result = getattr(roll_table, method)(
engine_kwargs=engine_kwargs, engine="numba", **kwargs
)
expected = getattr(roll_single, method)(
engine_kwargs=engine_kwargs, engine="numba", **kwargs
)
tm.assert_frame_equal(result, expected)
def test_table_method_rolling_apply(self, nogil, parallel, nopython, step):
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
def f(x):
return np.sum(x, axis=0) + 1
df = DataFrame(np.eye(3))
result = df.rolling(2, method="table", min_periods=0, step=step).apply(
f, raw=True, engine_kwargs=engine_kwargs, engine="numba"
)
expected = df.rolling(2, method="single", min_periods=0, step=step).apply(
f, raw=True, engine_kwargs=engine_kwargs, engine="numba"
)
tm.assert_frame_equal(result, expected)
def test_table_method_rolling_apply_col_order(self):
# GH#59666
def f(x):
return np.nanmean(x[:, 0] - x[:, 1])
df = DataFrame(
{
"a": [1, 2, 3, 4, 5, 6],
"b": [6, 7, 8, 5, 6, 7],
}
)
result = df.rolling(3, method="table", min_periods=0)[["a", "b"]].apply(
f, raw=True, engine="numba"
)
expected = DataFrame(
{
"a": [-5, -5, -5, -3.66667, -2.33333, -1],
"b": [-5, -5, -5, -3.66667, -2.33333, -1],
}
)
tm.assert_almost_equal(result, expected)
result = df.rolling(3, method="table", min_periods=0)[["b", "a"]].apply(
f, raw=True, engine="numba"
)
expected = DataFrame(
{
"b": [5, 5, 5, 3.66667, 2.33333, 1],
"a": [5, 5, 5, 3.66667, 2.33333, 1],
}
)
tm.assert_almost_equal(result, expected)
def test_table_method_rolling_weighted_mean(self, step):
def weighted_mean(x):
arr = np.ones((1, x.shape[1]))
arr[:, :2] = (x[:, :2] * x[:, 2]).sum(axis=0) / x[:, 2].sum()
return arr
df = DataFrame([[1, 2, 0.6], [2, 3, 0.4], [3, 4, 0.2], [4, 5, 0.7]])
result = df.rolling(2, method="table", min_periods=0, step=step).apply(
weighted_mean, raw=True, engine="numba"
)
expected = DataFrame(
[
[1.0, 2.0, 1.0],
[1.8, 2.0, 1.0],
[3.333333, 2.333333, 1.0],
[1.555556, 7, 1.0],
]
)[::step]
tm.assert_frame_equal(result, expected)
def test_table_method_expanding_apply(self, nogil, parallel, nopython):
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
def f(x):
return np.sum(x, axis=0) + 1
df = DataFrame(np.eye(3))
result = df.expanding(method="table").apply(
f, raw=True, engine_kwargs=engine_kwargs, engine="numba"
)
expected = df.expanding(method="single").apply(
f, raw=True, engine_kwargs=engine_kwargs, engine="numba"
)
tm.assert_frame_equal(result, expected)
def test_table_method_expanding_methods(
self, nogil, parallel, nopython, arithmetic_numba_supported_operators
):
method, kwargs = arithmetic_numba_supported_operators
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
df = DataFrame(np.eye(3))
expand_table = df.expanding(method="table")
if method in ("var", "std"):
with pytest.raises(NotImplementedError, match=f"{method} not supported"):
getattr(expand_table, method)(
engine_kwargs=engine_kwargs, engine="numba", **kwargs
)
else:
expand_single = df.expanding(method="single")
result = getattr(expand_table, method)(
engine_kwargs=engine_kwargs, engine="numba", **kwargs
)
expected = getattr(expand_single, method)(
engine_kwargs=engine_kwargs, engine="numba", **kwargs
)
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("data", [np.eye(3), np.ones((2, 3)), np.ones((3, 2))])
@pytest.mark.parametrize("method", ["mean", "sum"])
def test_table_method_ewm(self, data, method, nogil, parallel, nopython):
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
df = DataFrame(data)
result = getattr(df.ewm(com=1, method="table"), method)(
engine_kwargs=engine_kwargs, engine="numba"
)
expected = getattr(df.ewm(com=1, method="single"), method)(
engine_kwargs=engine_kwargs, engine="numba"
)
tm.assert_frame_equal(result, expected)
@td.skip_if_no("numba")
def test_npfunc_no_warnings():
df = DataFrame({"col1": [1, 2, 3, 4, 5]})
with tm.assert_produces_warning(False):
df.col1.rolling(2).apply(np.prod, raw=True, engine="numba")
| TestTableMethod |
python | readthedocs__readthedocs.org | readthedocs/projects/migrations/0010_migrate_domain_data.py | {
"start": 1338,
"end": 1888
} | class ____(migrations.Migration):
safe = Safe.after_deploy()
dependencies = [
("projects", "0009_add_domain_field"),
]
operations = [
migrations.RunPython(migrate_url),
migrations.AlterField(
model_name="domain",
name="domain",
field=models.CharField(
unique=True,
max_length=255,
verbose_name="Domain",
validators=[readthedocs.projects.validators.DomainNameValidator()],
),
),
]
| Migration |
python | airbytehq__airbyte | airbyte-integrations/bases/connector-acceptance-test/connector_acceptance_test/tests/test_core.py | {
"start": 3178,
"end": 33461
} | class ____(BaseTest):
@pytest.fixture(name="skip_backward_compatibility_tests")
async def skip_backward_compatibility_tests_fixture(
self,
inputs: SpecTestConfig,
previous_connector_docker_runner: ConnectorRunner,
previous_connector_spec: ConnectorSpecification,
actual_connector_spec: ConnectorSpecification,
) -> bool:
if actual_connector_spec == previous_connector_spec:
pytest.skip("The previous and actual specifications are identical.")
if previous_connector_docker_runner is None:
pytest.skip("The previous connector image could not be retrieved.")
# Get the real connector version in case 'latest' is used in the config:
previous_connector_version = await previous_connector_docker_runner.get_container_label("io.airbyte.version")
if previous_connector_version == inputs.backward_compatibility_tests_config.disable_for_version:
pytest.skip(f"Backward compatibility tests are disabled for version {previous_connector_version}.")
return False
@pytest.fixture(name="skip_oauth_default_method_test")
def skip_oauth_default_method_test_fixture(self, inputs: SpecTestConfig):
if inputs.auth_default_method and not inputs.auth_default_method.oauth:
pytest.skip(f"Skipping OAuth is default method test: {inputs.auth_default_method.bypass_reason}")
return False
def test_config_match_spec(self, actual_connector_spec: ConnectorSpecification, connector_config: Optional[SecretDict]):
"""Check that config matches the actual schema from the spec call"""
if not connector_config:
pytest.skip("Config is not provided")
# Getting rid of technical variables that start with an underscore
config = {key: value for key, value in connector_config.data.items() if not key.startswith("_")}
try:
jsonschema.validate(instance=config, schema=actual_connector_spec.connectionSpecification)
except jsonschema.exceptions.ValidationError as err:
pytest.fail(f"Config invalid: {err}")
except jsonschema.exceptions.SchemaError as err:
pytest.fail(f"Spec is invalid: {err}")
def test_match_expected(self, connector_spec: ConnectorSpecification, actual_connector_spec: ConnectorSpecification):
"""Check that spec call returns a spec equals to expected one"""
if connector_spec:
assert actual_connector_spec == connector_spec, "Spec should be equal to the one in spec.yaml or spec.json file"
else:
pytest.skip("The spec.yaml or spec.json does not exist. Hence, comparison with the actual one can't be performed")
def test_enum_usage(self, actual_connector_spec: ConnectorSpecification):
"""Check that enum lists in specs contain distinct values."""
docs_url = "https://docs.airbyte.io/connector-development/connector-specification-reference"
docs_msg = f"See specification reference at {docs_url}."
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
enum_paths = schema_helper.find_nodes(keys=["enum"])
for path in enum_paths:
enum_list = schema_helper.get_node(path)
assert len(set(enum_list)) == len(
enum_list
), f"Enum lists should not contain duplicate values. Misconfigured enum array: {enum_list}. {docs_msg}"
def test_oneof_usage(self, actual_connector_spec: ConnectorSpecification):
"""Check that if spec contains oneOf it follows the rules according to reference
https://docs.airbyte.io/connector-development/connector-specification-reference
"""
docs_url = "https://docs.airbyte.io/connector-development/connector-specification-reference"
docs_msg = f"See specification reference at {docs_url}."
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
variant_paths = schema_helper.find_nodes(keys=["oneOf", "anyOf"])
for variant_path in variant_paths:
top_level_obj = schema_helper.get_node(variant_path[:-1])
assert (
top_level_obj.get("type") == "object"
), f"The top-level definition in a `oneOf` block should have type: object. misconfigured object: {top_level_obj}. {docs_msg}"
variants = schema_helper.get_node(variant_path)
for variant in variants:
assert "properties" in variant, f"Each item in the oneOf array should be a property with type object. {docs_msg}"
oneof_path = ".".join(map(str, variant_path))
variant_props = [set(v["properties"].keys()) for v in variants]
common_props = set.intersection(*variant_props)
assert common_props, f"There should be at least one common property for {oneof_path} subobjects. {docs_msg}"
const_common_props = set()
enum_common_props = set()
for common_prop in common_props:
if all(["const" in variant["properties"][common_prop] for variant in variants]):
const_common_props.add(common_prop)
if all(["enum" in variant["properties"][common_prop] for variant in variants]):
enum_common_props.add(common_prop)
assert len(const_common_props) == 1 or (
len(const_common_props) == 0 and len(enum_common_props) == 1
), f"There should be exactly one common property with 'const' keyword (or equivalent) for {oneof_path} subobjects. {docs_msg}"
const_common_prop = const_common_props.pop() if const_common_props else enum_common_props.pop()
for n, variant in enumerate(variants):
prop_obj = variant["properties"][const_common_prop]
prop_info = f"common property {oneof_path}[{n}].{const_common_prop}. It's recommended to just use `const`."
if "const" in prop_obj:
const_value = prop_obj["const"]
assert (
"default" not in prop_obj or prop_obj["default"] == const_value
), f"'default' needs to be identical to 'const' in {prop_info}. {docs_msg}"
assert "enum" not in prop_obj or prop_obj["enum"] == [
const_value
], f"'enum' needs to be an array with a single item identical to 'const' in {prop_info}. {docs_msg}"
else:
assert (
"enum" in prop_obj and "default" in prop_obj and prop_obj["enum"] == [prop_obj["default"]]
), f"'enum' needs to be an array with a single item identical to 'default' in {prop_info}. {docs_msg}"
def test_required(self):
"""Check that connector will fail if any required field is missing"""
def test_optional(self):
"""Check that connector can work without any optional field"""
def test_has_secret(self):
"""Check that spec has a secret. Not sure if this should be always the case"""
def test_secret_never_in_the_output(self):
"""This test should be injected into any docker command it needs to know current config and spec"""
@staticmethod
def _is_spec_property_name_secret(path: str, secret_property_names) -> Tuple[Optional[str], bool]:
"""
Given a path to a type field, extract the field name and decide whether it is the name of a secret
based on a provided list of secret names.
Split the path by `/`, drop the last item and reverse the list.
Then iterate over it and find the first item that's not a reserved keyword or an index.
Example:
properties/credentials/oneOf/1/properties/api_key/type -> [api_key, properties, 1, oneOf, credentials, properties] -> api_key
"""
reserved_keywords = ("anyOf", "oneOf", "allOf", "not", "properties", "items", "type", "prefixItems")
for part in reversed(path.split("/")[:-1]):
if part.isdigit() or part in reserved_keywords:
continue
return part, part.lower() in secret_property_names
return None, False
@staticmethod
def _property_can_store_secret(prop: dict) -> bool:
"""
Some fields cannot hold a secret by design, others can.
Null and boolean types cannot hold a secret value.
A string, a number or an integer type can always store secrets.
Secret objects and arrays cannot be rendered correctly in the UI.
A field with a constant value cannot hold a secret either.
"""
unsecure_types = {"string", "integer", "number"}
type_ = prop["type"]
is_property_constant_value = bool(prop.get("const"))
can_store_secret = any(
[
isinstance(type_, str) and type_ in unsecure_types,
isinstance(type_, list) and (set(type_) & unsecure_types),
]
)
if not can_store_secret:
return False
# if a property can store a secret, additional check should be done if it's a constant value
return not is_property_constant_value
def test_secret_is_properly_marked(self, connector_spec_dict: dict, detailed_logger, secret_property_names):
"""
Each field has a type, therefore we can make a flat list of fields from the returned specification.
Iterate over the list, check if a field name is a secret name, can potentially hold a secret value
and make sure it is marked as `airbyte_secret`.
"""
secrets_exposed = []
non_secrets_hidden = []
spec_properties = connector_spec_dict["connectionSpecification"]["properties"]
for type_path, type_value in dpath.util.search(spec_properties, "**/type", yielded=True):
_, is_property_name_secret = self._is_spec_property_name_secret(type_path, secret_property_names)
if not is_property_name_secret:
continue
absolute_path = f"/{type_path}"
property_path, _ = absolute_path.rsplit(sep="/", maxsplit=1)
property_definition = dpath.util.get(spec_properties, property_path)
marked_as_secret = property_definition.get("airbyte_secret", False)
possibly_a_secret = self._property_can_store_secret(property_definition)
if marked_as_secret and not possibly_a_secret:
non_secrets_hidden.append(property_path)
if not marked_as_secret and possibly_a_secret:
secrets_exposed.append(property_path)
if non_secrets_hidden:
properties = "\n".join(non_secrets_hidden)
pytest.fail(
f"""Some properties are marked with `airbyte_secret` although they probably should not be.
Please double check them. If they're okay, please fix this test.
{properties}"""
)
if secrets_exposed:
properties = "\n".join(secrets_exposed)
pytest.fail(
f"""The following properties should be marked with `airbyte_secret!`
{properties}"""
)
def _fail_on_errors(self, errors: List[str]):
if len(errors) > 0:
pytest.fail("\n".join(errors))
def test_property_type_is_not_array(self, actual_connector_spec: ConnectorSpecification):
"""
Each field has one or multiple types, but the UI only supports a single type and optionally "null" as a second type.
"""
errors = []
for type_path, type_value in dpath.util.search(actual_connector_spec.connectionSpecification, "**/properties/*/type", yielded=True):
if isinstance(type_value, List):
number_of_types = len(type_value)
if number_of_types != 2 and number_of_types != 1:
errors.append(
f"{type_path} is not either a simple type or an array of a simple type plus null: {type_value} (for example: type: [string, null])"
)
if number_of_types == 2 and type_value[1] != "null":
errors.append(
f"Second type of {type_path} is not null: {type_value}. Type can either be a simple type or an array of a simple type plus null (for example: type: [string, null])"
)
self._fail_on_errors(errors)
def test_object_not_empty(self, actual_connector_spec: ConnectorSpecification):
"""
Each object field needs to have at least one property as the UI won't be able to show them otherwise.
If the whole spec is empty, it's allowed to have a single empty object at the top level
"""
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
errors = []
for type_path, type_value in dpath.util.search(actual_connector_spec.connectionSpecification, "**/type", yielded=True):
if type_path == "type":
# allow empty root object
continue
if type_value == "object":
property = schema_helper.get_parent(type_path)
if "oneOf" not in property and ("properties" not in property or len(property["properties"]) == 0):
errors.append(
f"{type_path} is an empty object which will not be represented correctly in the UI. Either remove or add specific properties"
)
self._fail_on_errors(errors)
def test_array_type(self, actual_connector_spec: ConnectorSpecification):
"""
Each array has one or multiple types for its items, but the UI only supports a single type, which can be object, string, number, integer, or an enum
"""
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
errors = []
for type_path, type_type in dpath.util.search(actual_connector_spec.connectionSpecification, "**/type", yielded=True):
property_definition = schema_helper.get_parent(type_path)
if type_type != "array":
# unrelated "items", not an array definition
continue
items_value = property_definition.get("items", None)
if items_value is None:
continue
elif isinstance(items_value, List):
errors.append(f"{type_path} is not just a single item type: {items_value}")
elif items_value.get("type") not in ["object", "string", "number", "integer"] and "enum" not in items_value:
errors.append(f"Items of {type_path} has to be either object or string or define an enum")
self._fail_on_errors(errors)
def test_forbidden_complex_types(self, actual_connector_spec: ConnectorSpecification):
"""
not, anyOf, patternProperties, prefixItems, allOf, if, then, else, dependentSchemas and dependentRequired are not allowed
"""
forbidden_keys = [
"not",
"anyOf",
"patternProperties",
"prefixItems",
"allOf",
"if",
"then",
"else",
"dependentSchemas",
"dependentRequired",
]
found_keys = set()
for forbidden_key in forbidden_keys:
for path, value in dpath.util.search(actual_connector_spec.connectionSpecification, f"**/{forbidden_key}", yielded=True):
found_keys.add(path)
for forbidden_key in forbidden_keys:
# remove forbidden keys if they are used as properties directly
for path, _value in dpath.util.search(
actual_connector_spec.connectionSpecification, f"**/properties/{forbidden_key}", yielded=True
):
found_keys.remove(path)
if len(found_keys) > 0:
key_list = ", ".join(found_keys)
pytest.fail(f"Found the following disallowed JSON schema features: {key_list}")
def test_date_pattern(self, actual_connector_spec: ConnectorSpecification, detailed_logger):
"""
Properties with format date or date-time should always have a pattern defined how the date/date-time should be formatted
that corresponds with the format the datepicker component is creating.
"""
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
for format_path, format in dpath.util.search(actual_connector_spec.connectionSpecification, "**/format", yielded=True):
if not isinstance(format, str):
# format is not a format definition here but a property named format
continue
property_definition = schema_helper.get_parent(format_path)
pattern = property_definition.get("pattern")
if format == "date" and not pattern == DATE_PATTERN:
detailed_logger.warning(
f"{format_path} is defining a date format without the corresponding pattern. Consider setting the pattern to {DATE_PATTERN} to make it easier for users to edit this field in the UI."
)
if format == "date-time" and not pattern == DATETIME_PATTERN:
detailed_logger.warning(
f"{format_path} is defining a date-time format without the corresponding pattern Consider setting the pattern to {DATETIME_PATTERN} to make it easier for users to edit this field in the UI."
)
def test_date_format(self, actual_connector_spec: ConnectorSpecification, detailed_logger):
"""
Properties with a pattern that looks like a date should have their format set to date or date-time.
"""
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
for pattern_path, pattern in dpath.util.search(actual_connector_spec.connectionSpecification, "**/pattern", yielded=True):
if not isinstance(pattern, str):
# pattern is not a pattern definition here but a property named pattern
continue
if pattern == DATE_PATTERN or pattern == DATETIME_PATTERN:
property_definition = schema_helper.get_parent(pattern_path)
format = property_definition.get("format")
if not format == "date" and pattern == DATE_PATTERN:
detailed_logger.warning(
f"{pattern_path} is defining a pattern that looks like a date without setting the format to `date`. Consider specifying the format to make it easier for users to edit this field in the UI."
)
if not format == "date-time" and pattern == DATETIME_PATTERN:
detailed_logger.warning(
f"{pattern_path} is defining a pattern that looks like a date-time without setting the format to `date-time`. Consider specifying the format to make it easier for users to edit this field in the UI."
)
def test_duplicate_order(self, actual_connector_spec: ConnectorSpecification):
"""
Custom ordering of field (via the "order" property defined in the field) is not allowed to have duplicates within the same group.
`{ "a": { "order": 1 }, "b": { "order": 1 } }` is invalid because there are two fields with order 1
`{ "a": { "order": 1 }, "b": { "order": 1, "group": "x" } }` is valid because the fields with the same order are in different groups
"""
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
errors = []
for properties_path, properties in dpath.util.search(actual_connector_spec.connectionSpecification, "**/properties", yielded=True):
definition = schema_helper.get_parent(properties_path)
if definition.get("type") != "object":
# unrelated "properties", not an actual object definition
continue
used_orders: Dict[str, Set[int]] = {}
for property in properties.values():
if "order" not in property:
continue
order = property.get("order")
group = property.get("group", "")
if group not in used_orders:
used_orders[group] = set()
orders_for_group = used_orders[group]
if order in orders_for_group:
errors.append(f"{properties_path} has duplicate order: {order}")
orders_for_group.add(order)
self._fail_on_errors(errors)
def test_nested_group(self, actual_connector_spec: ConnectorSpecification):
"""
Groups can only be defined on the top level properties
`{ "a": { "group": "x" }}` is valid because field "a" is a top level field
`{ "a": { "oneOf": [{ "type": "object", "properties": { "b": { "group": "x" } } }] }}` is invalid because field "b" is nested in a oneOf
"""
errors = []
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
for result in dpath.util.search(actual_connector_spec.connectionSpecification, "/properties/**/group", yielded=True):
group_path = result[0]
parent_path = schema_helper.get_parent_path(group_path)
is_property_named_group = parent_path.endswith("properties")
grandparent_path = schema_helper.get_parent_path(parent_path)
if grandparent_path != "/properties" and not is_property_named_group:
errors.append(f"Groups can only be defined on top level, is defined at {group_path}")
self._fail_on_errors(errors)
def test_display_type(self, actual_connector_spec: ConnectorSpecification):
"""
The display_type property can only be set on fields which have a oneOf property, and must be either "dropdown" or "radio"
"""
errors = []
schema_helper = JsonSchemaHelper(actual_connector_spec.connectionSpecification)
for result in dpath.util.search(actual_connector_spec.connectionSpecification, "/properties/**/display_type", yielded=True):
display_type_path = result[0]
parent_path = schema_helper.get_parent_path(display_type_path)
is_property_named_display_type = parent_path.endswith("properties")
if is_property_named_display_type:
continue
parent_object = schema_helper.get_parent(display_type_path)
if "oneOf" not in parent_object:
errors.append(f"display_type is only allowed on fields which have a oneOf property, but is set on {parent_path}")
display_type_value = parent_object.get("display_type")
if display_type_value != "dropdown" and display_type_value != "radio":
errors.append(
f"display_type must be either 'dropdown' or 'radio', but is set to '{display_type_value}' at {display_type_path}"
)
self._fail_on_errors(errors)
def test_defined_refs_exist_in_json_spec_file(self, connector_spec_dict: dict):
"""Checking for the presence of unresolved `$ref`s values within each json spec file"""
check_result = list(find_all_values_for_key_in_schema(connector_spec_dict, "$ref"))
assert not check_result, "Found unresolved `$refs` value in spec.json file"
def test_oauth_flow_parameters(self, actual_connector_spec: ConnectorSpecification):
"""Check if connector has correct oauth flow parameters according to
https://docs.airbyte.io/connector-development/connector-specification-reference
"""
advanced_auth = actual_connector_spec.advanced_auth
if not advanced_auth:
return
spec_schema = actual_connector_spec.connectionSpecification
paths_to_validate = set()
if advanced_auth.predicate_key:
paths_to_validate.add("/" + "/".join(advanced_auth.predicate_key))
oauth_config_specification = advanced_auth.oauth_config_specification
if oauth_config_specification:
if oauth_config_specification.oauth_user_input_from_connector_config_specification:
paths_to_validate.update(
get_paths_in_connector_config(
oauth_config_specification.oauth_user_input_from_connector_config_specification["properties"]
)
)
if oauth_config_specification.complete_oauth_output_specification:
paths_to_validate.update(
get_paths_in_connector_config(oauth_config_specification.complete_oauth_output_specification["properties"])
)
if oauth_config_specification.complete_oauth_server_output_specification:
paths_to_validate.update(
get_paths_in_connector_config(oauth_config_specification.complete_oauth_server_output_specification["properties"])
)
diff = paths_to_validate - set(get_expected_schema_structure(spec_schema))
assert diff == set(), f"Specified oauth fields are missing from spec schema: {diff}"
def test_oauth_is_default_method(self, skip_oauth_default_method_test: bool, actual_connector_spec: ConnectorSpecification):
"""
OAuth is default check.
If credentials have a oneOf: we check that OAuth is listed first.
If there is no oneOf, OAuth is the only option to authenticate the source and no check is needed.
"""
advanced_auth = actual_connector_spec.advanced_auth
if not advanced_auth:
pytest.skip("Source does not have OAuth method.")
if not advanced_auth.predicate_key:
pytest.skip("Advanced Auth object does not have predicate_key, only one option to authenticate.")
spec_schema = actual_connector_spec.connectionSpecification
credentials = advanced_auth.predicate_key[0]
try:
one_of_default_method = dpath.util.get(spec_schema, f"/**/{credentials}/oneOf/0")
except KeyError:  # KeyError when oneOf is not in the credentials object
pytest.skip("Credentials object does not have oneOf option.")
path_in_credentials = "/".join(advanced_auth.predicate_key[1:])
auth_method_predicate_const = dpath.util.get(one_of_default_method, f"/**/{path_in_credentials}/const")
assert (
auth_method_predicate_const == advanced_auth.predicate_value
), f"Oauth method should be a default option. Current default method is {auth_method_predicate_const}."
@pytest.mark.default_timeout(ONE_MINUTE)
@pytest.mark.backward_compatibility
def test_backward_compatibility(
self,
skip_backward_compatibility_tests: bool,
actual_connector_spec: ConnectorSpecification,
previous_connector_spec: ConnectorSpecification,
number_of_configs_to_generate: int = 100,
):
"""Check if the current spec is backward_compatible with the previous one"""
assert isinstance(actual_connector_spec, ConnectorSpecification) and isinstance(previous_connector_spec, ConnectorSpecification)
checker = SpecDiffChecker(previous=previous_connector_spec.dict(), current=actual_connector_spec.dict())
checker.assert_is_backward_compatible()
validate_previous_configs(previous_connector_spec, actual_connector_spec, number_of_configs_to_generate)
def test_additional_properties_is_true(self, actual_connector_spec: ConnectorSpecification):
"""Check that value of the "additionalProperties" field is always true.
A spec declaring "additionalProperties": false introduces the risk of accidental breaking changes.
Specifically, when removing a property from the spec, existing connector configs will no longer be valid.
Read https://github.com/airbytehq/airbyte/issues/14196 for more details"""
additional_properties_values = find_all_values_for_key_in_schema(
actual_connector_spec.connectionSpecification, "additionalProperties"
)
if additional_properties_values:
assert all(
[additional_properties_value is True for additional_properties_value in additional_properties_values]
), "When set, additionalProperties field value must be true for backward compatibility."
# This test should not be part of TestSpec because it's testing the connector's docker image content, not the spec itself
# But it's cumbersome to declare a separate, non configurable, test class
# See https://github.com/airbytehq/airbyte/issues/15551
async def test_image_labels(self, docker_runner: ConnectorRunner, connector_metadata: dict):
"""Check that connector's docker image has required labels"""
assert (
await docker_runner.get_container_label("io.airbyte.name") == connector_metadata["data"]["dockerRepository"]
), "io.airbyte.name must be equal to dockerRepository in metadata.yaml"
assert (
await docker_runner.get_container_label("io.airbyte.version") == connector_metadata["data"]["dockerImageTag"]
), "io.airbyte.version must be equal to dockerImageTag in metadata.yaml"
# This test should not be part of TestSpec because it's testing the connector's docker image content, not the spec itself
# But it's cumbersome to declare a separate, non configurable, test class
# See https://github.com/airbytehq/airbyte/issues/15551
async def test_image_environment_variables(self, docker_runner: ConnectorRunner):
"""Check that connector's docker image has required envs"""
assert await docker_runner.get_container_env_variable_value("AIRBYTE_ENTRYPOINT"), "AIRBYTE_ENTRYPOINT must be set in dockerfile"
assert await docker_runner.get_container_env_variable_value("AIRBYTE_ENTRYPOINT") == await docker_runner.get_container_entrypoint()
@pytest.mark.default_timeout(ONE_MINUTE)
| TestSpec |
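The `_is_spec_property_name_secret` helper above walks a JSON-schema path backwards to recover a field name. A minimal standalone sketch of that extraction (`field_name_from_path` is a hypothetical helper, not part of the test suite; the real method additionally checks the name against a secret list):

```python
# Standalone sketch of the path-walking logic in _is_spec_property_name_secret.
RESERVED = ("anyOf", "oneOf", "allOf", "not", "properties", "items", "type", "prefixItems")

def field_name_from_path(path: str):
    # Drop the trailing segment (e.g. "type"), walk backwards, and return the
    # first segment that is neither an array index nor a JSON-schema keyword.
    for part in reversed(path.split("/")[:-1]):
        if part.isdigit() or part in RESERVED:
            continue
        return part
    return None

print(field_name_from_path("properties/credentials/oneOf/1/properties/api_key/type"))  # api_key
```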
python | huggingface__transformers | src/transformers/models/roberta/modeling_roberta.py | {
"start": 34560,
"end": 38117
} | class ____(RobertaPreTrainedModel):
_tied_weights_keys = {
"lm_head.decoder.weight": "roberta.embeddings.word_embeddings.weight",
"lm_head.decoder.bias": "lm_head.bias",
}
def __init__(self, config):
super().__init__(config)
if config.is_decoder:
logger.warning(
"If you want to use `RobertaForMaskedLM` make sure `config.is_decoder=False` for "
"bi-directional self-attention."
)
self.roberta = RobertaModel(config, add_pooling_layer=False)
self.lm_head = RobertaLMHead(config)
# Initialize weights and apply final processing
self.post_init()
def get_output_embeddings(self):
return self.lm_head.decoder
def set_output_embeddings(self, new_embeddings):
self.lm_head.decoder = new_embeddings
@can_return_tuple
@auto_docstring
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
token_type_ids: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
encoder_attention_mask: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
**kwargs: Unpack[TransformersKwargs],
) -> Union[tuple[torch.Tensor], MaskedLMOutput]:
r"""
token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,1]`:
- 0 corresponds to a *sentence A* token,
- 1 corresponds to a *sentence B* token.
This parameter can only be used when the model is initialized with `type_vocab_size` parameter with value
>= 2. All values in this tensor should always be < type_vocab_size.
[What are token type IDs?](../glossary#token-type-ids)
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
"""
outputs = self.roberta(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
inputs_embeds=inputs_embeds,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
return_dict=True,
**kwargs,
)
sequence_output = outputs[0]
prediction_scores = self.lm_head(sequence_output)
masked_lm_loss = None
if labels is not None:
# move labels to correct device
labels = labels.to(prediction_scores.device)
loss_fct = CrossEntropyLoss()
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
return MaskedLMOutput(
loss=masked_lm_loss,
logits=prediction_scores,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
| RobertaForMaskedLM |
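The `labels` convention in the masked-LM head above (indices set to `-100` are ignored, loss is averaged over the rest) can be illustrated without torch. A minimal plain-Python stand-in for `CrossEntropyLoss` with its default `ignore_index=-100` (values here are made up for illustration):

```python
import math

# Sketch of the masked-LM loss convention: positions labelled -100 are
# skipped; loss is the mean negative log-likelihood over the remaining ones.
def masked_lm_loss(logits, labels):
    total, n = 0.0, 0
    for row, label in zip(logits, labels):
        if label == -100:  # ignored position, e.g. an unmasked token
            continue
        z = max(row)  # stabilise the log-sum-exp
        log_probs = [x - z - math.log(sum(math.exp(r - z) for r in row)) for x in row]
        total -= log_probs[label]
        n += 1
    return total / n

loss = masked_lm_loss([[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]], [0, -100, 1])
# loss ≈ 0.41 (only positions 0 and 2 contribute)
```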
python | cython__cython | Cython/Compiler/Interpreter.py | {
"start": 253,
"end": 1831
} | class ____:
def lookup(self, name):
return None
empty_scope = EmptyScope()
def interpret_compiletime_options(optlist, optdict, type_env=None, type_args=()):
"""
Tries to interpret a list of compile time option nodes.
The result will be a tuple (optlist, optdict) but where
all expression nodes have been interpreted. The result is
in the form of tuples (value, pos).
optlist is a list of nodes, while optdict is a DictNode (the
result optdict is a dict)
If type_env is set, all type nodes will be analysed and the resulting
type set. Otherwise only interpretateable ExprNodes
are allowed, other nodes raises errors.
A CompileError will be raised if there are problems.
"""
def interpret(node, ix):
if ix in type_args:
if type_env:
type = node.analyse_as_type(type_env)
if not type:
raise CompileError(node.pos, "Invalid type.")
return (type, node.pos)
else:
raise CompileError(node.pos, "Type not allowed here.")
return (node.compile_time_value(empty_scope), node.pos)
if optlist:
optlist = [interpret(x, ix) for ix, x in enumerate(optlist)]
if optdict:
assert isinstance(optdict, DictNode)
new_optdict = {}
for item in optdict.key_value_pairs:
new_key, dummy = interpret(item.key, None)
new_optdict[new_key] = interpret(item.value, item.key.value)
optdict = new_optdict
return (optlist, optdict)
| EmptyScope |
python | dask__distributed | distributed/tests/test_worker_memory.py | {
"start": 37193,
"end": 37811
} | class ____(UserDict):
def __getitem__(self, k):
raise AssertionError()
@gen_cluster(client=True, nthreads=[("", 1)], worker_kwargs={"data": WriteOnlyBuffer})
async def test_delete_spilled_keys(c, s, a):
"""Test that freeing an in-memory key that has been spilled to disk does not
accidentally unspill it
"""
x = c.submit(inc, 1, key="x")
await wait_for_state("x", "memory", a)
assert a.data.keys() == {"x"}
with pytest.raises(AssertionError):
a.data["x"]
x.release()
await async_poll_for(lambda: not a.data, timeout=2)
assert not a.state.tasks
| WriteOnlyBuffer |
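The `WriteOnlyBuffer` trick above is a reusable pattern: subclass `UserDict` and make reads raise, so a test can prove a buffer is written but never read back. A minimal standalone sketch (names are illustrative):

```python
from collections import UserDict

# A mapping that accepts writes but forbids reads; any __getitem__ is an error.
class WriteOnly(UserDict):
    def __getitem__(self, k):
        raise AssertionError(f"unexpected read of {k!r}")

buf = WriteOnly()
buf["x"] = 2          # writes go through UserDict.__setitem__ untouched
assert buf.keys() == {"x"}
try:
    buf["x"]          # any read raises
    read_blocked = False
except AssertionError:
    read_blocked = True
```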
python | aimacode__aima-python | csp.py | {
"start": 353,
"end": 20241
} | class ____(search.Problem):
"""This class describes finite-domain Constraint Satisfaction Problems.
A CSP is specified by the following inputs:
variables A list of variables; each is atomic (e.g. int or string).
domains A dict of {var:[possible_value, ...]} entries.
neighbors A dict of {var:[var,...]} that for each variable lists
the other variables that participate in constraints.
constraints A function f(A, a, B, b) that returns true if neighbors
A, B satisfy the constraint when they have values A=a, B=b
In the textbook and in most mathematical definitions, the
constraints are specified as explicit pairs of allowable values,
but the formulation here is easier to express and more compact for
most cases (for example, the n-Queens problem can be represented
in O(n) space using this notation, instead of O(n^4) for the
explicit representation). In terms of describing the CSP as a
problem, that's all there is.
However, the class also supports data structures and methods that help you
solve CSPs by calling a search function on the CSP. Methods and slots are
as follows, where the argument 'a' represents an assignment, which is a
dict of {var:val} entries:
assign(var, val, a) Assign a[var] = val; do other bookkeeping
unassign(var, a) Do del a[var], plus other bookkeeping
nconflicts(var, val, a) Return the number of other variables that
conflict with var=val
curr_domains[var] Slot: remaining consistent values for var
Used by constraint propagation routines.
The following methods are used only by graph_search and tree_search:
actions(state) Return a list of actions
result(state, action) Return a successor of state
goal_test(state) Return true if all constraints satisfied
The following are just for debugging purposes:
nassigns Slot: tracks the number of assignments made
display(a) Print a human-readable representation
"""
def __init__(self, variables, domains, neighbors, constraints):
"""Construct a CSP problem. If variables is empty, it becomes domains.keys()."""
super().__init__(())
variables = variables or list(domains.keys())
self.variables = variables
self.domains = domains
self.neighbors = neighbors
self.constraints = constraints
self.curr_domains = None
self.nassigns = 0
def assign(self, var, val, assignment):
"""Add {var: val} to assignment; Discard the old value if any."""
assignment[var] = val
self.nassigns += 1
def unassign(self, var, assignment):
"""Remove {var: val} from assignment.
DO NOT call this if you are changing a variable to a new value;
just call assign for that."""
if var in assignment:
del assignment[var]
def nconflicts(self, var, val, assignment):
"""Return the number of conflicts var=val has with other variables."""
# Subclasses may implement this more efficiently
def conflict(var2):
return var2 in assignment and not self.constraints(var, val, var2, assignment[var2])
return count(conflict(v) for v in self.neighbors[var])
def display(self, assignment):
"""Show a human-readable representation of the CSP."""
# Subclasses can print in a prettier way, or display with a GUI
print(assignment)
# These methods are for the tree and graph-search interface:
def actions(self, state):
"""Return a list of applicable actions: non conflicting
assignments to an unassigned variable."""
if len(state) == len(self.variables):
return []
else:
assignment = dict(state)
var = first([v for v in self.variables if v not in assignment])
return [(var, val) for val in self.domains[var]
if self.nconflicts(var, val, assignment) == 0]
def result(self, state, action):
"""Perform an action and return the new state."""
(var, val) = action
return state + ((var, val),)
def goal_test(self, state):
"""The goal is to assign all variables, with all constraints satisfied."""
assignment = dict(state)
return (len(assignment) == len(self.variables)
and all(self.nconflicts(variables, assignment[variables], assignment) == 0
for variables in self.variables))
# These are for constraint propagation
def support_pruning(self):
"""Make sure we can prune values from domains. (We want to pay
for this only if we use it.)"""
if self.curr_domains is None:
self.curr_domains = {v: list(self.domains[v]) for v in self.variables}
def suppose(self, var, value):
"""Start accumulating inferences from assuming var=value."""
self.support_pruning()
removals = [(var, a) for a in self.curr_domains[var] if a != value]
self.curr_domains[var] = [value]
return removals
def prune(self, var, value, removals):
"""Rule out var=value."""
self.curr_domains[var].remove(value)
if removals is not None:
removals.append((var, value))
def choices(self, var):
"""Return all values for var that aren't currently ruled out."""
return (self.curr_domains or self.domains)[var]
def infer_assignment(self):
"""Return the partial assignment implied by the current inferences."""
self.support_pruning()
return {v: self.curr_domains[v][0]
for v in self.variables if 1 == len(self.curr_domains[v])}
def restore(self, removals):
"""Undo a supposition and all inferences from it."""
for B, b in removals:
self.curr_domains[B].append(b)
# This is for min_conflicts search
def conflicted_vars(self, current):
"""Return a list of variables in current assignment that are in conflict"""
return [var for var in self.variables
if self.nconflicts(var, current[var], current) > 0]
# ______________________________________________________________________________
# Constraint Propagation with AC3
def no_arc_heuristic(csp, queue):
return queue
def dom_j_up(csp, queue):
return SortedSet(queue, key=lambda t: neg(len(csp.curr_domains[t[1]])))
def AC3(csp, queue=None, removals=None, arc_heuristic=dom_j_up):
"""[Figure 6.3]"""
if queue is None:
queue = {(Xi, Xk) for Xi in csp.variables for Xk in csp.neighbors[Xi]}
csp.support_pruning()
queue = arc_heuristic(csp, queue)
checks = 0
while queue:
(Xi, Xj) = queue.pop()
revised, checks = revise(csp, Xi, Xj, removals, checks)
if revised:
if not csp.curr_domains[Xi]:
return False, checks # CSP is inconsistent
for Xk in csp.neighbors[Xi]:
if Xk != Xj:
queue.add((Xk, Xi))
return True, checks # CSP is satisfiable
def revise(csp, Xi, Xj, removals, checks=0):
"""Return true if we remove a value."""
revised = False
for x in csp.curr_domains[Xi][:]:
# If Xi=x conflicts with Xj=y for every possible y, eliminate Xi=x
# if all(not csp.constraints(Xi, x, Xj, y) for y in csp.curr_domains[Xj]):
conflict = True
for y in csp.curr_domains[Xj]:
if csp.constraints(Xi, x, Xj, y):
conflict = False
checks += 1
if not conflict:
break
if conflict:
csp.prune(Xi, x, removals)
revised = True
return revised, checks
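`AC3` and `revise` above prune each variable's domain down to the values that still have support in every neighbour. A minimal standalone sketch of the same idea on a two-variable CSP with the constraint X < Y (plain dict domains, not the `CSP` class API above):

```python
# Simplified arc-consistency loop: domains as a plain dict, one binary constraint.
def revise_pair(domains, xi, xj, constraint):
    # Drop values of xi that have no supporting value in xj's domain.
    revised = False
    for x in domains[xi][:]:
        if not any(constraint(xi, x, xj, y) for y in domains[xj]):
            domains[xi].remove(x)
            revised = True
    return revised

domains = {'X': [1, 2, 3], 'Y': [1, 2, 3]}
lt = lambda A, a, B, b: a < b if A == 'X' else b < a  # encodes X < Y
queue = [('X', 'Y'), ('Y', 'X')]
while queue:
    xi, xj = queue.pop()
    if revise_pair(domains, xi, xj, lt):
        queue.extend((xk, xi) for xk in domains if xk not in (xi, xj))
# domains -> {'X': [1, 2], 'Y': [2, 3]}
```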
# Constraint Propagation with AC3b: an improved version
# of AC3 with double-support domain-heuristic
def AC3b(csp, queue=None, removals=None, arc_heuristic=dom_j_up):
if queue is None:
queue = {(Xi, Xk) for Xi in csp.variables for Xk in csp.neighbors[Xi]}
csp.support_pruning()
queue = arc_heuristic(csp, queue)
checks = 0
while queue:
(Xi, Xj) = queue.pop()
# Si_p values are all known to be supported by Xj
# Sj_p values are all known to be supported by Xi
# Dj - Sj_p = Sj_u values are unknown, as yet, to be supported by Xi
Si_p, Sj_p, Sj_u, checks = partition(csp, Xi, Xj, checks)
if not Si_p:
return False, checks # CSP is inconsistent
revised = False
for x in set(csp.curr_domains[Xi]) - Si_p:
csp.prune(Xi, x, removals)
revised = True
if revised:
for Xk in csp.neighbors[Xi]:
if Xk != Xj:
queue.add((Xk, Xi))
if (Xj, Xi) in queue:
if isinstance(queue, set):
# or queue -= {(Xj, Xi)} or queue.remove((Xj, Xi))
queue.difference_update({(Xj, Xi)})
else:
                queue.remove((Xj, Xi))
# the elements in D_j which are supported by Xi are given by the union of Sj_p with the set of those
# elements of Sj_u which further processing will show to be supported by some vi_p in Si_p
for vj_p in Sj_u:
for vi_p in Si_p:
conflict = True
if csp.constraints(Xj, vj_p, Xi, vi_p):
conflict = False
Sj_p.add(vj_p)
checks += 1
if not conflict:
break
revised = False
for x in set(csp.curr_domains[Xj]) - Sj_p:
csp.prune(Xj, x, removals)
revised = True
if revised:
for Xk in csp.neighbors[Xj]:
if Xk != Xi:
queue.add((Xk, Xj))
return True, checks # CSP is satisfiable
def partition(csp, Xi, Xj, checks=0):
Si_p = set()
Sj_p = set()
Sj_u = set(csp.curr_domains[Xj])
for vi_u in csp.curr_domains[Xi]:
conflict = True
# now, in order to establish support for a value vi_u in Di it seems better to try to find a support among
# the values in Sj_u first, because for each vj_u in Sj_u the check (vi_u, vj_u) is a double-support check
        # and it is just as likely that any vj_u in Sj_u supports vi_u as it is that any vj_p in Sj_p does...
for vj_u in Sj_u - Sj_p:
# double-support check
if csp.constraints(Xi, vi_u, Xj, vj_u):
conflict = False
Si_p.add(vi_u)
Sj_p.add(vj_u)
checks += 1
if not conflict:
break
# ... and only if no support can be found among the elements in Sj_u, should the elements vj_p in Sj_p be used
# for single-support checks (vi_u, vj_p)
if conflict:
for vj_p in Sj_p:
# single-support check
if csp.constraints(Xi, vi_u, Xj, vj_p):
conflict = False
Si_p.add(vi_u)
checks += 1
if not conflict:
break
return Si_p, Sj_p, Sj_u - Sj_p, checks
# Constraint Propagation with AC4
def AC4(csp, queue=None, removals=None, arc_heuristic=dom_j_up):
if queue is None:
queue = {(Xi, Xk) for Xi in csp.variables for Xk in csp.neighbors[Xi]}
csp.support_pruning()
queue = arc_heuristic(csp, queue)
support_counter = Counter()
variable_value_pairs_supported = defaultdict(set)
unsupported_variable_value_pairs = []
checks = 0
# construction and initialization of support sets
while queue:
(Xi, Xj) = queue.pop()
revised = False
for x in csp.curr_domains[Xi][:]:
for y in csp.curr_domains[Xj]:
if csp.constraints(Xi, x, Xj, y):
support_counter[(Xi, x, Xj)] += 1
variable_value_pairs_supported[(Xj, y)].add((Xi, x))
checks += 1
if support_counter[(Xi, x, Xj)] == 0:
csp.prune(Xi, x, removals)
revised = True
unsupported_variable_value_pairs.append((Xi, x))
if revised:
if not csp.curr_domains[Xi]:
return False, checks # CSP is inconsistent
# propagation of removed values
while unsupported_variable_value_pairs:
Xj, y = unsupported_variable_value_pairs.pop()
for Xi, x in variable_value_pairs_supported[(Xj, y)]:
revised = False
            if x in csp.curr_domains[Xi]:
support_counter[(Xi, x, Xj)] -= 1
if support_counter[(Xi, x, Xj)] == 0:
csp.prune(Xi, x, removals)
revised = True
unsupported_variable_value_pairs.append((Xi, x))
if revised:
if not csp.curr_domains[Xi]:
return False, checks # CSP is inconsistent
return True, checks # CSP is satisfiable
# ______________________________________________________________________________
# CSP Backtracking Search
# Variable ordering
def first_unassigned_variable(assignment, csp):
"""The default variable order."""
return first([var for var in csp.variables if var not in assignment])
def mrv(assignment, csp):
"""Minimum-remaining-values heuristic."""
return argmin_random_tie([v for v in csp.variables if v not in assignment],
key=lambda var: num_legal_values(csp, var, assignment))
def num_legal_values(csp, var, assignment):
if csp.curr_domains:
return len(csp.curr_domains[var])
else:
return count(csp.nconflicts(var, val, assignment) == 0 for val in csp.domains[var])
# Value ordering
def unordered_domain_values(var, assignment, csp):
"""The default value order."""
return csp.choices(var)
def lcv(var, assignment, csp):
"""Least-constraining-values heuristic."""
return sorted(csp.choices(var), key=lambda val: csp.nconflicts(var, val, assignment))
# Inference
def no_inference(csp, var, value, assignment, removals):
return True
def forward_checking(csp, var, value, assignment, removals):
"""Prune neighbor values inconsistent with var=value."""
csp.support_pruning()
for B in csp.neighbors[var]:
if B not in assignment:
for b in csp.curr_domains[B][:]:
if not csp.constraints(var, value, B, b):
csp.prune(B, b, removals)
if not csp.curr_domains[B]:
return False
return True
def mac(csp, var, value, assignment, removals, constraint_propagation=AC3b):
"""Maintain arc consistency."""
return constraint_propagation(csp, {(X, var) for X in csp.neighbors[var]}, removals)
# The search, proper
def backtracking_search(csp, select_unassigned_variable=first_unassigned_variable,
order_domain_values=unordered_domain_values, inference=no_inference):
"""[Figure 6.5]"""
def backtrack(assignment):
if len(assignment) == len(csp.variables):
return assignment
var = select_unassigned_variable(assignment, csp)
for value in order_domain_values(var, assignment, csp):
if 0 == csp.nconflicts(var, value, assignment):
csp.assign(var, value, assignment)
removals = csp.suppose(var, value)
if inference(csp, var, value, assignment, removals):
result = backtrack(assignment)
if result is not None:
return result
csp.restore(removals)
csp.unassign(var, assignment)
return None
result = backtrack({})
assert result is None or csp.goal_test(result)
return result
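The backtrack loop above can be sketched standalone on a map-coloring problem (the map and colors here are illustrative, not part of this module; no heuristics or inference, just the bare recursion):

```python
# Minimal backtracking search on a map-coloring CSP, mirroring the
# structure of backtrack() above: pick an unassigned variable, try each
# consistent value, recurse, and undo the assignment on failure.

neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
colors = ["red", "green", "blue"]

def backtrack(assignment):
    if len(assignment) == len(neighbors):
        return assignment
    var = next(v for v in neighbors if v not in assignment)
    for value in colors:
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]
    return None

solution = backtrack({})
print(solution is not None)  # True: the Australia map is 3-colorable
```

The full `backtracking_search` differs only in that variable order, value order, and inference are pluggable parameters.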
# ______________________________________________________________________________
# Min-conflicts Hill Climbing search for CSPs
def min_conflicts(csp, max_steps=100000):
"""Solve a CSP by stochastic Hill Climbing on the number of conflicts."""
# Generate a complete assignment for all variables (probably with conflicts)
csp.current = current = {}
for var in csp.variables:
val = min_conflicts_value(csp, var, current)
csp.assign(var, val, current)
# Now repeatedly choose a random conflicted variable and change it
for i in range(max_steps):
conflicted = csp.conflicted_vars(current)
if not conflicted:
return current
var = random.choice(conflicted)
val = min_conflicts_value(csp, var, current)
csp.assign(var, val, current)
return None
def min_conflicts_value(csp, var, current):
"""Return the value that will give var the least number of conflicts.
If there is a tie, choose at random."""
return argmin_random_tie(csp.domains[var], key=lambda val: csp.nconflicts(var, val, current))
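A self-contained version of the same repair loop on n-queens (a sketch with its own conflict counter, not the CSP-class machinery above; row-per-column encoding is assumed):

```python
import random

# Min-conflicts on n-queens, following the loop above: start from a
# random complete assignment, then repeatedly move the queen in a
# conflicted column to a least-conflicted row (ties broken at random).

def conflicts(rows, col, row):
    """Number of queens attacking a queen placed at (col, row)."""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_queens(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows
        col = rng.choice(conflicted)
        best = min(conflicts(rows, col, r) for r in range(n))
        rows[col] = rng.choice(
            [r for r in range(n) if conflicts(rows, col, r) == best])
    return None

solution = min_conflicts_queens(8)
```

Like `min_conflicts` above, this returns `None` only if no conflict-free state is reached within `max_steps`; any non-`None` result is a valid placement by construction.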
# ______________________________________________________________________________
def tree_csp_solver(csp):
"""[Figure 6.11]"""
assignment = {}
root = csp.variables[0]
X, parent = topological_sort(csp, root)
csp.support_pruning()
for Xj in reversed(X[1:]):
if not make_arc_consistent(parent[Xj], Xj, csp):
return None
assignment[root] = csp.curr_domains[root][0]
for Xi in X[1:]:
assignment[Xi] = assign_value(parent[Xi], Xi, csp, assignment)
if not assignment[Xi]:
return None
return assignment
def topological_sort(X, root):
"""Returns the topological sort of X starting from the root.
    Input:
    X is a CSP object; X.neighbors is the dictionary with the neighbors of each node
    root denotes the root of the graph.
Output:
stack is a list with the nodes topologically sorted
parents is a dictionary pointing to each node's parent
Other:
visited shows the state (visited - not visited) of nodes
"""
neighbors = X.neighbors
visited = defaultdict(lambda: False)
stack = []
parents = {}
build_topological(root, None, neighbors, visited, stack, parents)
return stack, parents
def build_topological(node, parent, neighbors, visited, stack, parents):
"""Build the topological sort and the parents of each node in the graph."""
visited[node] = True
for n in neighbors[node]:
if not visited[n]:
build_topological(n, node, neighbors, visited, stack, parents)
parents[node] = parent
stack.insert(0, node)
def make_arc_consistent(Xj, Xk, csp):
"""Make arc between parent (Xj) and child (Xk) consistent under the csp's constraints,
by removing the possible values of Xj that cause inconsistencies."""
# csp.curr_domains[Xj] = []
for val1 in csp.domains[Xj]:
keep = False # Keep or remove val1
for val2 in csp.domains[Xk]:
if csp.constraints(Xj, val1, Xk, val2):
# Found a consistent assignment for val1, keep it
keep = True
break
if not keep:
# Remove val1
csp.prune(Xj, val1, None)
return csp.curr_domains[Xj]
def assign_value(Xj, Xk, csp, assignment):
"""Assign a value to Xk given Xj's (Xk's parent) assignment.
Return the first value that satisfies the constraints."""
parent_assignment = assignment[Xj]
for val in csp.curr_domains[Xk]:
if csp.constraints(Xj, parent_assignment, Xk, val):
return val
# No consistent assignment available
return None
# ______________________________________________________________________________
# Map Coloring CSP Problems
| CSP |
python | sqlalchemy__sqlalchemy | test/orm/inheritance/test_single.py | {
"start": 63796,
"end": 68289
} | class ____(fixtures.MappedTest, AssertsCompiledSQL):
__dialect__ = "default"
@classmethod
def define_tables(cls, metadata):
Table(
"parent",
metadata,
Column(
"id", Integer, primary_key=True, test_needs_autoincrement=True
),
)
Table(
"m2m",
metadata,
Column(
"parent_id", Integer, ForeignKey("parent.id"), primary_key=True
),
Column(
"child_id", Integer, ForeignKey("child.id"), primary_key=True
),
)
Table(
"child",
metadata,
Column(
"id", Integer, primary_key=True, test_needs_autoincrement=True
),
Column("discriminator", String(20)),
Column("name", String(20)),
)
@classmethod
def setup_classes(cls):
class Parent(cls.Comparable):
pass
class Child(cls.Comparable):
pass
class SubChild1(Child):
pass
class SubChild2(Child):
pass
@classmethod
def setup_mappers(cls):
cls.mapper_registry.map_imperatively(
cls.classes.Parent,
cls.tables.parent,
properties={
"s1": relationship(
cls.classes.SubChild1,
secondary=cls.tables.m2m,
uselist=False,
),
"s2": relationship(
cls.classes.SubChild2, secondary=cls.tables.m2m
),
},
)
cls.mapper_registry.map_imperatively(
cls.classes.Child,
cls.tables.child,
polymorphic_on=cls.tables.child.c.discriminator,
)
cls.mapper_registry.map_imperatively(
cls.classes.SubChild1,
inherits=cls.classes.Child,
polymorphic_identity="sub1",
)
cls.mapper_registry.map_imperatively(
cls.classes.SubChild2,
inherits=cls.classes.Child,
polymorphic_identity="sub2",
)
@classmethod
def insert_data(cls, connection):
Parent = cls.classes.Parent
SubChild1 = cls.classes.SubChild1
SubChild2 = cls.classes.SubChild2
s = Session(connection)
s.add_all(
[
Parent(
s1=SubChild1(name="sc1_1"),
s2=[SubChild2(name="sc2_1"), SubChild2(name="sc2_2")],
)
]
)
s.commit()
def test_eager_join(self):
Parent = self.classes.Parent
SubChild1 = self.classes.SubChild1
s = fixture_session()
p1 = s.query(Parent).options(joinedload(Parent.s1)).all()[0]
eq_(p1.__dict__["s1"], SubChild1(name="sc1_1"))
def test_manual_join(self):
Parent = self.classes.Parent
Child = self.classes.Child
SubChild1 = self.classes.SubChild1
s = fixture_session()
p1, c1 = s.query(Parent, Child).outerjoin(Parent.s1).all()[0]
eq_(c1, SubChild1(name="sc1_1"))
def test_assert_join_sql(self):
Parent = self.classes.Parent
Child = self.classes.Child
s = fixture_session()
self.assert_compile(
s.query(Parent, Child).outerjoin(Parent.s1),
"SELECT parent.id AS parent_id, child.id AS child_id, "
"child.discriminator AS child_discriminator, "
"child.name AS child_name "
"FROM parent LEFT OUTER JOIN (m2m AS m2m_1 "
"JOIN child ON child.id = m2m_1.child_id "
"AND child.discriminator IN (__[POSTCOMPILE_discriminator_1])) "
"ON parent.id = m2m_1.parent_id",
)
def test_assert_joinedload_sql(self):
Parent = self.classes.Parent
s = fixture_session()
self.assert_compile(
s.query(Parent).options(joinedload(Parent.s1)),
"SELECT parent.id AS parent_id, child_1.id AS child_1_id, "
"child_1.discriminator AS child_1_discriminator, "
"child_1.name AS child_1_name "
"FROM parent LEFT OUTER JOIN "
"(m2m AS m2m_1 JOIN child AS child_1 "
"ON child_1.id = m2m_1.child_id AND child_1.discriminator "
"IN (__[POSTCOMPILE_discriminator_1])) "
"ON parent.id = m2m_1.parent_id",
)
| ManyToManyToSingleTest |
python | HypothesisWorks__hypothesis | hypothesis-python/src/hypothesis/internal/conjecture/choice.py | {
"start": 1253,
"end": 1354
} | class ____(TypedDict):
intervals: IntervalSet
min_size: int
max_size: int
| StringConstraints |
python | pypa__pip | src/pip/_vendor/urllib3/exceptions.py | {
"start": 1417,
"end": 1657
} | class ____(HTTPError):
"""Raised when something unexpected happens mid-request/response."""
pass
#: Renamed to ProtocolError but aliased for backwards compatibility.
ConnectionError = ProtocolError
# Leaf Exceptions
| ProtocolError |
python | pandas-dev__pandas | pandas/tests/indexes/period/test_indexing.py | {
"start": 25751,
"end": 27107
} | class ____:
def test_contains(self):
# GH 17717
p0 = Period("2017-09-01")
p1 = Period("2017-09-02")
p2 = Period("2017-09-03")
p3 = Period("2017-09-04")
ps0 = [p0, p1, p2]
idx0 = PeriodIndex(ps0)
for p in ps0:
assert p in idx0
assert str(p) in idx0
# GH#31172
# Higher-resolution period-like are _not_ considered as contained
key = "2017-09-01 00:00:01"
assert key not in idx0
with pytest.raises(KeyError, match=key):
idx0.get_loc(key)
assert "2017-09" in idx0
assert p3 not in idx0
def test_contains_freq_mismatch(self):
rng = period_range("2007-01", freq="M", periods=10)
assert Period("2007-01", freq="M") in rng
assert Period("2007-01", freq="D") not in rng
assert Period("2007-01", freq="2M") not in rng
def test_contains_nat(self):
# see gh-13582
idx = period_range("2007-01", freq="M", periods=10)
assert NaT not in idx
assert None not in idx
assert float("nan") not in idx
assert np.nan not in idx
idx = PeriodIndex(["2011-01", "NaT", "2011-02"], freq="M")
assert NaT in idx
assert None in idx
assert float("nan") in idx
assert np.nan in idx
| TestContains |
python | prabhupant__python-ds | data_structures/bst/print_ancestor.py | {
"start": 0,
"end": 379
} | class ____():
def __init__(self, val):
self.val = val
self.left = None
self.right = None
def print_ancestor_recursive(root, key):
if not root:
return False
if root.val == key:
return True
if print_ancestor_recursive(root.left, key) or print_ancestor_recursive(root.right, key):
        return root.val
return False
| Node |
python | getsentry__sentry | src/sentry/workflow_engine/migrations/0094_backfill_issue_stream_detector_workflows.py | {
"start": 2310,
"end": 3770
} | class ____(CheckedMigration):
# This flag is used to mark that a migration shouldn't be automatically run in production.
# This should only be used for operations where it's safe to run the migration after your
# code has deployed. So this should not be used for most operations that alter the schema
# of a table.
# Here are some things that make sense to mark as post deployment:
# - Large data migrations. Typically we want these to be run manually so that they can be
# monitored and not block the deploy for a long period of time while they run.
# - Adding indexes to large tables. Since this can take a long time, we'd generally prefer to
# run this outside deployments so that we don't block them. Note that while adding an index
# is a schema change, it's completely safe to run the operation after the code has deployed.
# Once deployed, run these manually via: https://develop.sentry.dev/database-migrations/#migration-deployment
is_post_deployment = True
dependencies = [
("workflow_engine", "0093_add_action_config_index"),
]
operations = [
migrations.RunPython(
backfill_issue_stream_detector_workflows,
migrations.RunPython.noop,
hints={
"tables": [
"workflow_engine_detector",
"workflow_engine_detectorworkflow",
]
},
),
]
| Migration |
python | tensorflow__tensorflow | tensorflow/tools/ci_build/linux/mkl/set-build-env.py | {
"start": 4454,
"end": 5059
} | class ____(IntelPlatform):
def __init__(self):
IntelPlatform.__init__(self, 4, 8)
def get_bazel_gcc_flags(self):
HASWELL_ARCH_OLD = "core-avx2" # Only missing the POPCNT instruction
HASWELL_ARCH_NEW = "haswell"
POPCNT_FLAG = "popcnt"
if self.use_old_arch_names(4, 9):
ret_val = self.BAZEL_PREFIX_ + self.ARCH_PREFIX_ + \
HASWELL_ARCH_OLD + " "
return ret_val + self.BAZEL_PREFIX_ + self.FLAG_PREFIX_ + \
POPCNT_FLAG + " "
else:
return self.BAZEL_PREFIX_ + self.ARCH_PREFIX_ + \
HASWELL_ARCH_NEW + " "
| HaswellPlatform |
python | has2k1__plotnine | tests/test_ggsave.py | {
"start": 796,
"end": 3440
} | class ____:
def test_default_filename(self):
p.save(verbose=False)
fn = p._save_filename("pdf")
assert_exist_and_clean(fn, "default filename")
def test_save_method(self):
fn = next(filename_gen)
with pytest.warns(PlotnineWarning) as record:
p.save(fn)
assert_exist_and_clean(fn, "save method")
res = ("saving" in str(item.message).lower() for item in record)
assert any(res)
res = ("filename" in str(item.message).lower() for item in record)
assert any(res)
# verbose
fn = next(filename_gen)
with warnings.catch_warnings(record=True) as record:
p.save(fn, verbose=False)
assert_exist_and_clean(fn, "save method")
assert not record, "Issued an unexpected warning"
def test_filename_plot_path(self):
fn = next(filename_gen)
p.save(fn, path=".", verbose=False)
assert_exist_and_clean(fn, "fn, plot and path")
def test_format_png(self):
p.save(format="png", verbose=False)
fn = p._save_filename("png")
assert_exist_and_clean(fn, "format png")
def test_dpi(self):
fn = next(filename_gen)
p.save(fn, dpi=100, verbose=False)
assert_exist_and_clean(fn, "dpi = 100")
def test_ggsave(self):
ggsave(p, verbose=False)
fn = p._save_filename("pdf")
assert_exist_and_clean(fn, "default filename")
def test_save_big(self):
fn = next(filename_gen)
# supplying the ggplot object will work without
# printing it first! 26 is the current limit, just go
# over it to not use too much memory
p.save(fn, width=26, height=26, limitsize=False, verbose=False)
assert_exist_and_clean(fn, "big height and width")
# Using the global option
fn = next(filename_gen)
set_option("limitsize", False)
p.save(fn, width=26, height=26, verbose=False)
set_option("limitsize", True)
assert_exist_and_clean(fn, "big height and width")
def test_dpi_theme_xkcd(self):
fn1 = next(filename_gen)
fn2 = next(filename_gen)
data = pd.DataFrame({"x": range(4), "y": range(4), "b": list("aabb")})
p = (
ggplot(data)
+ geom_point(aes("x", "y"))
+ facet_wrap("b")
+ theme_xkcd()
)
p.save(fn1, verbose=False)
assert_exist_and_clean(fn1, "Saving with theme_xkcd and dpi (1)")
p.save(fn2, dpi=72, verbose=False)
assert_exist_and_clean(fn2, "Saving with theme_xkcd and dpi (2)")
| TestArguments |
python | django__django | django/contrib/gis/db/models/lookups.py | {
"start": 6601,
"end": 6699
} | class ____(GISLookup):
lookup_name = "contains"
@BaseSpatialField.register_lookup
| ContainsLookup |
python | allegroai__clearml | clearml/backend_config/bucket_config.py | {
"start": 645,
"end": 3326
} | class ____(object):
"""Configuration for an S3 bucket"""
bucket = attrib(type=str, converter=_url_stripper, default="")
subdir = attrib(type=str, converter=_url_stripper, default="")
host = attrib(type=str, converter=_none_to_empty_string, default="")
key = attrib(type=str, converter=_none_to_empty_string, default="")
secret = attrib(type=str, converter=_none_to_empty_string, default="")
token = attrib(type=str, converter=_none_to_empty_string, default="")
multipart = attrib(type=bool, default=True)
acl = attrib(type=str, converter=_none_to_empty_string, default="")
secure = attrib(type=bool, default=True)
region = attrib(type=str, converter=_none_to_empty_string, default="")
verify = attrib(type=bool, default=None)
use_credentials_chain = attrib(type=bool, default=False)
extra_args = attrib(type=dict, default=None)
profile = attrib(type=str, default="")
def update(
self,
key: str = "",
secret: str = "",
multipart: bool = True,
region: str = None,
use_credentials_chain: bool = False,
token: str = "",
extra_args: dict = None,
secure: bool = True,
profile: str = "",
) -> None:
self.key = key
self.secret = secret
self.token = token
self.multipart = multipart
self.region = region
self.use_credentials_chain = use_credentials_chain
self.extra_args = extra_args
self.secure = secure
self.profile = profile
def is_valid(self) -> bool:
return (self.key and self.secret) or self.use_credentials_chain
def get_bucket_host(self) -> Tuple[str, str]:
return self.bucket, self.host
@classmethod
def from_list(
cls,
dict_list: Union[Tuple[Dict], List[Dict]],
log: Optional[logging.Logger] = None,
) -> List["S3BucketConfig"]:
if not isinstance(dict_list, (tuple, list)) or not all(isinstance(x, dict) for x in dict_list):
raise ValueError("Expecting a list of configurations dictionaries")
configs = [cls(**entry) for entry in dict_list]
valid_configs = [conf for conf in configs if conf.is_valid()]
if log and len(valid_configs) < len(configs):
log.warning(
"Invalid bucket configurations detected for {}".format(
", ".join(
"/".join((config.host, config.bucket)) for config in configs if config not in valid_configs
)
)
)
return valid_configs
BucketConfig = S3BucketConfig
@six.add_metaclass(abc.ABCMeta)
| S3BucketConfig |
python | mlflow__mlflow | mlflow/types/responses_helpers.py | {
"start": 1124,
"end": 1231
} | class ____(BaseModel):
file_id: str
index: int
type: str = "file_citation"
| AnnotationFileCitation |
python | doocs__leetcode | solution/1000-1099/1081.Smallest Subsequence of Distinct Characters/Solution.py | {
"start": 0,
"end": 410
} | class ____:
def smallestSubsequence(self, s: str) -> str:
last = {c: i for i, c in enumerate(s)}
stk = []
vis = set()
for i, c in enumerate(s):
if c in vis:
continue
while stk and stk[-1] > c and last[stk[-1]] > i:
vis.remove(stk.pop())
stk.append(c)
vis.add(c)
return "".join(stk)
| Solution |
python | vyperlang__vyper | vyper/codegen/memory_allocator.py | {
"start": 137,
"end": 1003
} | class ____:
__slots__ = ("position", "size")
def __init__(self, position: int, size: int) -> None:
self.position = position
self.size = size
def __repr__(self):
return f"(FreeMemory: pos={self.position}, size={self.size})"
def partially_allocate(self, size: int) -> int:
"""
Reduce the size of the free memory by allocating from the initial offset.
Arguments
---------
size : int
Number of bytes to allocate
Returns
-------
int
Position of the newly allocated memory
"""
if size >= self.size: # pragma: nocover
raise CompilerPanic("Attempted to allocate more memory than available")
position = self.position
self.position += size
self.size -= size
return position
| FreeMemory |
python | huggingface__transformers | src/transformers/models/sam_hq/modular_sam_hq.py | {
"start": 9759,
"end": 9810
} | class ____(SamFeedForward):
pass
| SamHQFeedForward |
python | PrefectHQ__prefect | src/prefect/server/api/ui/flows.py | {
"start": 917,
"end": 5902
} | class ____(PrefectBaseModel):
id: UUID = Field(default=..., description="The flow run id.")
flow_id: UUID = Field(default=..., description="The flow id.")
name: str = Field(default=..., description="The flow run name")
state_name: str = Field(default=..., description="The state name.")
state_type: StateType = Field(default=..., description="The state type.")
next_scheduled_start_time: DateTime = Field(
default=..., description="The next scheduled start time"
)
@field_validator("next_scheduled_start_time", mode="before")
@classmethod
def validate_next_scheduled_start_time(cls, v: DateTime | datetime) -> DateTime:
if isinstance(v, datetime):
return create_datetime_instance(v)
return v
@router.post("/count-deployments")
async def count_deployments_by_flow(
flow_ids: List[UUID] = Body(default=..., embed=True, max_items=200),
db: PrefectDBInterface = Depends(provide_database_interface),
) -> Dict[UUID, int]:
"""
Get deployment counts by flow id.
"""
async with db.session_context() as session:
query = (
sa.select(
db.Deployment.flow_id,
sa.func.count(db.Deployment.id).label("deployment_count"),
)
.where(db.Deployment.flow_id.in_(flow_ids))
.group_by(db.Deployment.flow_id)
)
results = await session.execute(query)
deployment_counts_by_flow = {
flow_id: deployment_count for flow_id, deployment_count in results.all()
}
return {
flow_id: deployment_counts_by_flow.get(flow_id, 0) for flow_id in flow_ids
}
def _get_postgres_next_runs_query(flow_ids: List[UUID]):
# Here we use the raw query because CROSS LATERAL JOINS are very
# difficult to express correctly in sqlalchemy.
raw_query = sa.text(
"""
    SELECT fr.id, fr.name, fr.flow_id, fr.state_name, fr.state_type, fr.next_scheduled_start_time
FROM (
SELECT DISTINCT flow_id FROM flow_run
WHERE flow_id IN :flow_ids
AND state_type = 'SCHEDULED'
) AS unique_flows
CROSS JOIN LATERAL (
SELECT *
FROM flow_run fr
WHERE fr.flow_id = unique_flows.flow_id
AND fr.state_type = 'SCHEDULED'
ORDER BY fr.next_scheduled_start_time ASC
LIMIT 1
) fr;
"""
)
bindparams = [
sa.bindparam(
"flow_ids",
flow_ids,
expanding=True,
type_=UUIDTypeDecorator,
),
]
query = raw_query.bindparams(*bindparams)
return query
def _get_sqlite_next_runs_query(flow_ids: List[UUID]):
raw_query = sa.text(
"""
WITH min_times AS (
SELECT flow_id, MIN(next_scheduled_start_time) AS min_next_scheduled_start_time
FROM flow_run
WHERE flow_id IN :flow_ids
AND state_type = 'SCHEDULED'
GROUP BY flow_id
)
SELECT fr.id, fr.name, fr.flow_id, fr.state_name, fr.state_type, fr.next_scheduled_start_time
FROM flow_run fr
JOIN min_times mt ON fr.flow_id = mt.flow_id AND fr.next_scheduled_start_time = mt.min_next_scheduled_start_time
WHERE fr.state_type = 'SCHEDULED';
"""
)
bindparams = [
sa.bindparam(
"flow_ids",
flow_ids,
expanding=True,
type_=UUIDTypeDecorator,
),
]
query = raw_query.bindparams(*bindparams)
return query
@router.post("/next-runs")
async def next_runs_by_flow(
flow_ids: List[UUID] = Body(default=..., embed=True, max_items=200),
db: PrefectDBInterface = Depends(provide_database_interface),
) -> Dict[UUID, Optional[SimpleNextFlowRun]]:
"""
Get the next flow run by flow id.
"""
async with db.session_context() as session:
if db.dialect.name == "postgresql":
query = _get_postgres_next_runs_query(flow_ids=flow_ids)
else:
query = _get_sqlite_next_runs_query(flow_ids=flow_ids)
results = await session.execute(query)
results_by_flow_id = {
UUID(str(result.flow_id)): SimpleNextFlowRun(
id=result.id,
flow_id=result.flow_id,
name=result.name,
state_name=result.state_name,
state_type=result.state_type,
next_scheduled_start_time=parse_datetime(
result.next_scheduled_start_time
).replace(tzinfo=ZoneInfo("UTC"))
if isinstance(result.next_scheduled_start_time, str)
else result.next_scheduled_start_time,
)
for result in results.all()
}
response = {
flow_id: results_by_flow_id.get(flow_id, None) for flow_id in flow_ids
}
return response
| SimpleNextFlowRun |
python | tensorflow__tensorflow | tensorflow/python/keras/metrics.py | {
"start": 118936,
"end": 129453
} | class ____(SumOverBatchSize):
"""Wraps a function with the `SumOverBatchSizeMetricWrapper` metric."""
def __init__(self, fn, name=None, dtype=None, **kwargs):
"""Creates a `SumOverBatchSizeMetricWrapper` instance.
Args:
fn: The metric function to wrap, with signature `fn(y_true, y_pred,
**kwargs)`.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
**kwargs: The keyword arguments that are passed on to `fn`.
"""
super(SumOverBatchSizeMetricWrapper, self).__init__(name=name, dtype=dtype)
self._fn = fn
self._fn_kwargs = kwargs
def update_state(self, y_true, y_pred, sample_weight=None):
y_true = math_ops.cast(y_true, self._dtype)
y_pred = math_ops.cast(y_pred, self._dtype)
y_pred, y_true = losses_utils.squeeze_or_expand_dimensions(
y_pred, y_true)
ag_fn = autograph.tf_convert(self._fn, ag_ctx.control_status_ctx())
matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
return super(SumOverBatchSizeMetricWrapper, self).update_state(
matches, sample_weight=sample_weight)
def get_config(self):
config = {}
for k, v in self._fn_kwargs.items():
config[k] = backend.eval(v) if is_tensor_or_variable(v) else v
base_config = super(SumOverBatchSizeMetricWrapper, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def accuracy(y_true, y_pred):
[y_pred, y_true], _ = \
metrics_utils.ragged_assert_compatible_and_get_flat_values(
[y_pred, y_true])
y_true.shape.assert_is_compatible_with(y_pred.shape)
if y_true.dtype != y_pred.dtype:
y_pred = math_ops.cast(y_pred, y_true.dtype)
return math_ops.cast(math_ops.equal(y_true, y_pred), backend.floatx())
@dispatch.add_dispatch_support
def binary_accuracy(y_true, y_pred, threshold=0.5):
"""Calculates how often predictions match binary labels.
Standalone usage:
>>> y_true = [[1], [1], [0], [0]]
>>> y_pred = [[1], [1], [0], [0]]
>>> m = tf.keras.metrics.binary_accuracy(y_true, y_pred)
>>> assert m.shape == (4,)
>>> m.numpy()
array([1., 1., 1., 1.], dtype=float32)
Args:
y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.
y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.
threshold: (Optional) Float representing the threshold for deciding whether
prediction values are 1 or 0.
Returns:
Binary accuracy values. shape = `[batch_size, d0, .. dN-1]`
"""
y_pred = tensor_conversion.convert_to_tensor_v2_with_dispatch(y_pred)
threshold = math_ops.cast(threshold, y_pred.dtype)
y_pred = math_ops.cast(y_pred > threshold, y_pred.dtype)
return backend.mean(math_ops.equal(y_true, y_pred), axis=-1)
@dispatch.add_dispatch_support
def categorical_accuracy(y_true, y_pred):
"""Calculates how often predictions match one-hot labels.
Standalone usage:
>>> y_true = [[0, 0, 1], [0, 1, 0]]
>>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]
>>> m = tf.keras.metrics.categorical_accuracy(y_true, y_pred)
>>> assert m.shape == (2,)
>>> m.numpy()
array([0., 1.], dtype=float32)
You can provide logits of classes as `y_pred`, since argmax of
logits and probabilities are same.
Args:
y_true: One-hot ground truth values.
y_pred: The prediction values.
Returns:
Categorical accuracy values.
"""
return math_ops.cast(
math_ops.equal(
math_ops.argmax(y_true, axis=-1), math_ops.argmax(y_pred, axis=-1)),
backend.floatx())
@dispatch.add_dispatch_support
def sparse_categorical_accuracy(y_true, y_pred):
"""Calculates how often predictions match integer labels.
Standalone usage:
>>> y_true = [2, 1]
>>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]
>>> m = tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred)
>>> assert m.shape == (2,)
>>> m.numpy()
array([0., 1.], dtype=float32)
You can provide logits of classes as `y_pred`, since argmax of
logits and probabilities are same.
Args:
y_true: Integer ground truth values.
y_pred: The prediction values.
Returns:
Sparse categorical accuracy values.
"""
y_pred = tensor_conversion.convert_to_tensor_v2_with_dispatch(y_pred)
y_true = tensor_conversion.convert_to_tensor_v2_with_dispatch(y_true)
y_pred_rank = y_pred.shape.ndims
y_true_rank = y_true.shape.ndims
# If the shape of y_true is (num_samples, 1), squeeze to (num_samples,)
if (y_true_rank is not None) and (y_pred_rank is not None) and (len(
backend.int_shape(y_true)) == len(backend.int_shape(y_pred))):
y_true = array_ops.squeeze(y_true, [-1])
y_pred = math_ops.argmax(y_pred, axis=-1)
# If the predicted output and actual output types don't match, force cast them
# to match.
if backend.dtype(y_pred) != backend.dtype(y_true):
y_pred = math_ops.cast(y_pred, backend.dtype(y_true))
return math_ops.cast(math_ops.equal(y_true, y_pred), backend.floatx())
@dispatch.add_dispatch_support
def top_k_categorical_accuracy(y_true, y_pred, k=5):
"""Computes how often targets are in the top `K` predictions.
Standalone usage:
>>> y_true = [[0, 0, 1], [0, 1, 0]]
>>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]
>>> m = tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=3)
>>> assert m.shape == (2,)
>>> m.numpy()
array([1., 1.], dtype=float32)
Args:
y_true: The ground truth values.
y_pred: The prediction values.
k: (Optional) Number of top elements to look at for computing accuracy.
Defaults to 5.
Returns:
Top K categorical accuracy value.
"""
return math_ops.cast(
nn.in_top_k(
y_pred, math_ops.argmax(y_true, axis=-1), k), backend.floatx())
@dispatch.add_dispatch_support
def sparse_top_k_categorical_accuracy(y_true, y_pred, k=5):
"""Computes how often integer targets are in the top `K` predictions.
Standalone usage:
>>> y_true = [2, 1]
>>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]
>>> m = tf.keras.metrics.sparse_top_k_categorical_accuracy(
... y_true, y_pred, k=3)
>>> assert m.shape == (2,)
>>> m.numpy()
array([1., 1.], dtype=float32)
Args:
y_true: tensor of true targets.
y_pred: tensor of predicted targets.
k: (Optional) Number of top elements to look at for computing accuracy.
Defaults to 5.
Returns:
Sparse top K categorical accuracy value.
"""
y_pred_rank = tensor_conversion.convert_to_tensor_v2_with_dispatch(
y_pred
).shape.ndims
y_true_rank = tensor_conversion.convert_to_tensor_v2_with_dispatch(
y_true
).shape.ndims
  # Flatten y_pred to (num_samples, num_classes) and y_true to (num_samples,)
if (y_true_rank is not None) and (y_pred_rank is not None):
if y_pred_rank > 2:
y_pred = array_ops.reshape(y_pred, [-1, y_pred.shape[-1]])
if y_true_rank > 1:
y_true = array_ops.reshape(y_true, [-1])
return math_ops.cast(
nn.in_top_k(y_pred, math_ops.cast(y_true, 'int32'), k), backend.floatx())
def cosine_proximity(y_true, y_pred, axis=-1):
"""Computes the cosine similarity between labels and predictions.
Args:
y_true: The ground truth values.
y_pred: The prediction values.
axis: (Optional) Defaults to -1. The dimension along which the cosine
similarity is computed.
Returns:
Cosine similarity value.
"""
y_true = nn.l2_normalize(y_true, axis=axis)
y_pred = nn.l2_normalize(y_pred, axis=axis)
return math_ops.reduce_sum(y_true * y_pred, axis=axis)
# Aliases
acc = ACC = accuracy
bce = BCE = binary_crossentropy
mse = MSE = mean_squared_error
mae = MAE = mean_absolute_error
mape = MAPE = mean_absolute_percentage_error
msle = MSLE = mean_squared_logarithmic_error
cosine_similarity = cosine_proximity
log_cosh = logcosh
def clone_metric(metric):
"""Returns a clone of the metric if stateful, otherwise returns it as is."""
if isinstance(metric, Metric):
with ops.init_scope():
return metric.__class__.from_config(metric.get_config())
return metric
def clone_metrics(metrics):
"""Clones the given metric list/dict."""
return nest.map_structure(clone_metric, metrics)
def serialize(metric):
"""Serializes metric function or `Metric` instance.
Args:
metric: A Keras `Metric` instance or a metric function.
Returns:
Metric configuration dictionary.
"""
return serialize_keras_object(metric)
def deserialize(config, custom_objects=None):
"""Deserializes a serialized metric class/function instance.
Args:
config: Metric configuration.
custom_objects: Optional dictionary mapping names (strings) to custom
objects (classes and functions) to be considered during deserialization.
Returns:
A Keras `Metric` instance or a metric function.
"""
return deserialize_keras_object(
config,
module_objects=globals(),
custom_objects=custom_objects,
printable_module_name='metric function')
def get(identifier):
"""Retrieves a Keras metric as a `function`/`Metric` class instance.
The `identifier` may be the string name of a metric function or class.
>>> metric = tf.keras.metrics.get("categorical_crossentropy")
>>> type(metric)
<class 'function'>
>>> metric = tf.keras.metrics.get("CategoricalCrossentropy")
>>> type(metric)
<class '...keras.metrics.CategoricalCrossentropy'>
  You can also specify the `config` of the metric to this function by passing a
  dict containing `class_name` and `config` as an identifier. Note that the
  `class_name` must map to a `Metric` class.
>>> identifier = {"class_name": "CategoricalCrossentropy",
... "config": {"from_logits": True}}
>>> metric = tf.keras.metrics.get(identifier)
>>> type(metric)
<class '...keras.metrics.CategoricalCrossentropy'>
Args:
identifier: A metric identifier. One of None or string name of a metric
function/class or metric configuration dictionary or a metric function or
a metric class instance
Returns:
A Keras metric as a `function`/ `Metric` class instance.
Raises:
ValueError: If `identifier` cannot be interpreted.
"""
if isinstance(identifier, dict):
return deserialize(identifier)
elif isinstance(identifier, str):
return deserialize(str(identifier))
elif callable(identifier):
return identifier
else:
raise ValueError(
'Could not interpret metric function identifier: {}'.format(identifier))
def is_built_in(cls):
return cls.__module__ == Metric.__module__
| SumOverBatchSizeMetricWrapper |
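Stripped of Keras's dispatch, dtype-casting, and shape-squeezing machinery, the two sparse accuracy metrics above reduce to a few NumPy operations. A minimal sketch of the core logic (note: `np.argsort` breaks ties differently than `tf.nn.in_top_k`, so rows with tied scores can disagree):

```python
import numpy as np

def sparse_categorical_accuracy(y_true, y_pred):
    # 1.0 where the argmax class of each prediction row equals the integer label.
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return (np.argmax(y_pred, axis=-1) == y_true).astype(np.float32)

def sparse_top_k_categorical_accuracy(y_true, y_pred, k=5):
    # 1.0 where the integer label is among the k highest-scoring classes.
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    top_k = np.argsort(y_pred, axis=-1)[:, -k:]
    return np.any(top_k == y_true[:, None], axis=-1).astype(np.float32)
```

On the docstring example (`y_true=[2, 1]`, `y_pred=[[0.1, 0.9, 0.8], [0.05, 0.95, 0]]`) this reproduces `[0., 1.]` for plain accuracy and `[1., 1.]` for `k=3`.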
python | huggingface__transformers | src/transformers/models/idefics2/modeling_idefics2.py | {
"start": 26241,
"end": 29057
} | class ____(nn.Module):
def __init__(self, config, layer_idx: int):
super().__init__()
self.hidden_size = config.hidden_size
self.n_latents = config.resampler_n_latents
self.depth = config.resampler_depth
self.rms_norm_eps = config.rms_norm_eps
self.input_latents_norm = Idefics2RMSNorm(self.hidden_size, eps=self.rms_norm_eps)
self.input_context_norm = Idefics2RMSNorm(self.hidden_size, eps=self.rms_norm_eps)
self.self_attn = Idefics2PerceiverAttention(config, layer_idx=layer_idx)
self.post_attention_layernorm = Idefics2RMSNorm(self.hidden_size, eps=self.rms_norm_eps)
self.mlp = Idefics2MLP(
hidden_size=config.hidden_size,
intermediate_size=config.hidden_size * 4,
output_size=config.hidden_size,
hidden_act=config.hidden_act,
)
def forward(
self,
latents: torch.Tensor,
context: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Cache] = None,
**kwargs: Unpack[TransformersKwargs],
) -> torch.FloatTensor:
"""
Args:
latents (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
context (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, sequence_length)` where padding elements are indicated by 0.
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
returned tensors for more detail.
use_cache (`bool`, *optional*):
If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
(see `past_key_values`).
past_key_values (`Cache`, *optional*): cached past key and value projection states
"""
residual = latents
latents = self.input_latents_norm(latents)
context = self.input_context_norm(context)
latents, _ = self.self_attn(
latents=latents,
context=context,
attention_mask=attention_mask,
**kwargs,
)
latents = residual + latents
residual = latents
latents = self.post_attention_layernorm(latents)
latents = self.mlp(latents)
latents = residual + latents
return latents
@auto_docstring(
custom_intro="""
Idefics2 perceiver resampler model that performs `depth` blocks of cross-attention with a fixed
"""
)
| Idefics2PerceiverLayer |
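The perceiver layer above follows the pre-norm residual pattern: RMS-normalize, apply a sublayer (attention or MLP), then add the result back onto the residual stream. A NumPy sketch of that skeleton (illustrative only; `rms_norm` and `residual_step` are invented stand-ins, not the Hugging Face modules):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Root-mean-square layer norm: scale by the inverse RMS over the last axis,
    # then by a learned per-feature weight (as in Idefics2RMSNorm).
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def residual_step(x, sublayer, weight):
    # Pre-norm residual: normalize the input, apply the sublayer, add it back.
    return x + sublayer(rms_norm(x, weight))
```

One forward pass of the layer is then two such steps: attention over the normalized latents/context, followed by the MLP.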
python | getsentry__sentry | src/sentry/api/serializers/rest_framework/dashboard.py | {
"start": 48073,
"end": 49108
} | class ____(serializers.Serializer):
dashboard_ids = serializers.ListField(child=serializers.IntegerField(), required=True)
def validate_dashboard_ids(self, dashboard_ids):
if len(dashboard_ids) != len(set(dashboard_ids)):
raise serializers.ValidationError("Single dashboard cannot take up multiple positions")
return dashboard_ids
def schedule_update_project_configs(dashboard: Dashboard):
"""
Schedule a task to update project configs for all projects of an organization when a dashboard is updated.
"""
org = dashboard.organization
on_demand_metrics = features.has("organizations:on-demand-metrics-extraction", org)
dashboard_on_demand_metrics = features.has(
"organizations:on-demand-metrics-extraction-experimental", org
)
if not on_demand_metrics or not dashboard_on_demand_metrics:
return
schedule_invalidate_project_config(
trigger="dashboards:create-on-demand-metric", organization_id=org.id
)
| DashboardStarredOrderSerializer |
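The `validate_dashboard_ids` check relies on a `set` collapsing duplicates: a list contains a repeat exactly when converting it to a set shrinks it. In isolation (with a hypothetical helper that also names the offenders, something the serializer above does not do):

```python
from collections import Counter

def validate_unique(ids):
    # A list has duplicates exactly when converting it to a set shrinks it.
    if len(ids) != len(set(ids)):
        raise ValueError("Single dashboard cannot take up multiple positions")
    return ids

def find_duplicates(ids):
    # Returns the IDs that appear more than once, for a friendlier error message.
    return [item for item, count in Counter(ids).items() if count > 1]
```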
python | openai__openai-python | src/openai/resources/beta/threads/runs/steps.py | {
"start": 7896,
"end": 14899
} | class ____(AsyncAPIResource):
@cached_property
def with_raw_response(self) -> AsyncStepsWithRawResponse:
"""
This property can be used as a prefix for any HTTP method call to return
the raw response object instead of the parsed content.
For more information, see https://www.github.com/openai/openai-python#accessing-raw-response-data-eg-headers
"""
return AsyncStepsWithRawResponse(self)
@cached_property
def with_streaming_response(self) -> AsyncStepsWithStreamingResponse:
"""
An alternative to `.with_raw_response` that doesn't eagerly read the response body.
For more information, see https://www.github.com/openai/openai-python#with_streaming_response
"""
return AsyncStepsWithStreamingResponse(self)
@typing_extensions.deprecated("The Assistants API is deprecated in favor of the Responses API")
async def retrieve(
self,
step_id: str,
*,
thread_id: str,
run_id: str,
include: List[RunStepInclude] | Omit = omit,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = not_given,
) -> RunStep:
"""
Retrieves a run step.
Args:
include: A list of additional fields to include in the response. Currently the only
supported value is `step_details.tool_calls[*].file_search.results[*].content`
to fetch the file search result content.
See the
[file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search#customizing-file-search-settings)
for more information.
extra_headers: Send extra headers
extra_query: Add additional query parameters to the request
extra_body: Add additional JSON properties to the request
timeout: Override the client-level default timeout for this request, in seconds
"""
if not thread_id:
raise ValueError(f"Expected a non-empty value for `thread_id` but received {thread_id!r}")
if not run_id:
raise ValueError(f"Expected a non-empty value for `run_id` but received {run_id!r}")
if not step_id:
raise ValueError(f"Expected a non-empty value for `step_id` but received {step_id!r}")
extra_headers = {"OpenAI-Beta": "assistants=v2", **(extra_headers or {})}
return await self._get(
f"/threads/{thread_id}/runs/{run_id}/steps/{step_id}",
options=make_request_options(
extra_headers=extra_headers,
extra_query=extra_query,
extra_body=extra_body,
timeout=timeout,
query=await async_maybe_transform({"include": include}, step_retrieve_params.StepRetrieveParams),
),
cast_to=RunStep,
)
@typing_extensions.deprecated("The Assistants API is deprecated in favor of the Responses API")
def list(
self,
run_id: str,
*,
thread_id: str,
after: str | Omit = omit,
before: str | Omit = omit,
include: List[RunStepInclude] | Omit = omit,
limit: int | Omit = omit,
order: Literal["asc", "desc"] | Omit = omit,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = not_given,
) -> AsyncPaginator[RunStep, AsyncCursorPage[RunStep]]:
"""
Returns a list of run steps belonging to a run.
Args:
after: A cursor for use in pagination. `after` is an object ID that defines your place
in the list. For instance, if you make a list request and receive 100 objects,
ending with obj_foo, your subsequent call can include after=obj_foo in order to
fetch the next page of the list.
before: A cursor for use in pagination. `before` is an object ID that defines your place
in the list. For instance, if you make a list request and receive 100 objects,
starting with obj_foo, your subsequent call can include before=obj_foo in order
to fetch the previous page of the list.
include: A list of additional fields to include in the response. Currently the only
supported value is `step_details.tool_calls[*].file_search.results[*].content`
to fetch the file search result content.
See the
[file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search#customizing-file-search-settings)
for more information.
limit: A limit on the number of objects to be returned. Limit can range between 1 and
100, and the default is 20.
order: Sort order by the `created_at` timestamp of the objects. `asc` for ascending
order and `desc` for descending order.
extra_headers: Send extra headers
extra_query: Add additional query parameters to the request
extra_body: Add additional JSON properties to the request
timeout: Override the client-level default timeout for this request, in seconds
"""
if not thread_id:
raise ValueError(f"Expected a non-empty value for `thread_id` but received {thread_id!r}")
if not run_id:
raise ValueError(f"Expected a non-empty value for `run_id` but received {run_id!r}")
extra_headers = {"OpenAI-Beta": "assistants=v2", **(extra_headers or {})}
return self._get_api_list(
f"/threads/{thread_id}/runs/{run_id}/steps",
page=AsyncCursorPage[RunStep],
options=make_request_options(
extra_headers=extra_headers,
extra_query=extra_query,
extra_body=extra_body,
timeout=timeout,
query=maybe_transform(
{
"after": after,
"before": before,
"include": include,
"limit": limit,
"order": order,
},
step_list_params.StepListParams,
),
),
model=RunStep,
)
| AsyncSteps |
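The `after`/`limit` parameters documented above implement cursor pagination: `after` is an object ID marking your place, and the next page starts just past it. A minimal in-memory sketch of those semantics (object IDs and shapes here are made up for illustration, not the OpenAI wire format):

```python
def list_page(items, after=None, limit=20):
    # items: sequence of dicts, each with a unique "id".
    start = 0
    if after is not None:
        ids = [item["id"] for item in items]
        start = ids.index(after) + 1  # raises ValueError for an unknown cursor
    page = items[start:start + limit]
    has_more = start + limit < len(items)
    return page, has_more
```

Iterating then means feeding the last ID of each page back in as `after` until `has_more` is false, which is what `AsyncCursorPage` automates.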
python | facebookresearch__faiss | tests/test_merge_index.py | {
"start": 4433,
"end": 7975
} | class ____(unittest.TestCase):
def do_flat_codes_test(self, factory_key):
ds = SyntheticDataset(32, 300, 300, 100)
index1 = faiss.index_factory(ds.d, factory_key)
index1.train(ds.get_train())
index1.add(ds.get_database())
_, Iref = index1.search(ds.get_queries(), 5)
index1.reset()
index2 = faiss.clone_index(index1)
index1.add(ds.get_database()[:100])
index2.add(ds.get_database()[100:])
index1.merge_from(index2)
_, Inew = index1.search(ds.get_queries(), 5)
np.testing.assert_array_equal(Inew, Iref)
def test_merge_IndexFlat(self):
self.do_flat_codes_test("Flat")
def test_merge_IndexPQ(self):
self.do_flat_codes_test("PQ8np")
def test_merge_IndexLSH(self):
self.do_flat_codes_test("LSHr")
def test_merge_IndexScalarQuantizer(self):
self.do_flat_codes_test("SQ4")
def test_merge_PreTransform(self):
self.do_flat_codes_test("PCA16,SQ4")
def do_fast_scan_test(self, factory_key, size1, with_add_id=False):
ds = SyntheticDataset(110, 1000, 1000, 100)
index_trained = faiss.index_factory(ds.d, factory_key)
index_trained.train(ds.get_train())
# test both clone and index_read/write
if True:
index1 = faiss.deserialize_index(
faiss.serialize_index(index_trained))
else:
index1 = faiss.clone_index(index_trained)
# assert index1.aq.qnorm.ntotal == index_trained.aq.qnorm.ntotal
index1.add(ds.get_database())
_, Iref = index1.search(ds.get_queries(), 5)
index1.reset()
index2 = faiss.clone_index(index_trained)
index1.add(ds.get_database()[:size1])
index2.add(ds.get_database()[size1:])
if with_add_id:
index1.merge_from(index2, add_id=index1.ntotal)
else:
index1.merge_from(index2)
_, Inew = index1.search(ds.get_queries(), 5)
np.testing.assert_array_equal(Inew, Iref)
def test_merge_IndexFastScan_complete_block(self):
self.do_fast_scan_test("PQ5x4fs", 320)
def test_merge_IndexFastScan_not_complete_block(self):
self.do_fast_scan_test("PQ11x4fs", 310)
def test_merge_IndexFastScan_even_M(self):
self.do_fast_scan_test("PQ10x4fs", 500)
def test_merge_IndexAdditiveQuantizerFastScan(self):
self.do_fast_scan_test("RQ10x4fs_32_Nrq2x4", 330)
def test_merge_IVFFastScan(self):
self.do_fast_scan_test("IVF20,PQ5x4fs", 123, with_add_id=True)
def do_test_with_ids(self, factory_key):
ds = SyntheticDataset(32, 300, 300, 100)
rs = np.random.RandomState(123)
ids = rs.choice(10000, ds.nb, replace=False).astype('int64')
index1 = faiss.index_factory(ds.d, factory_key)
index1.train(ds.get_train())
index1.add_with_ids(ds.get_database(), ids)
_, Iref = index1.search(ds.get_queries(), 5)
index1.reset()
index2 = faiss.clone_index(index1)
index1.add_with_ids(ds.get_database()[:100], ids[:100])
index2.add_with_ids(ds.get_database()[100:], ids[100:])
index1.merge_from(index2)
_, Inew = index1.search(ds.get_queries(), 5)
np.testing.assert_array_equal(Inew, Iref)
if "IDMap2" in factory_key:
index1.check_consistency()
def test_merge_IDMap(self):
self.do_test_with_ids("Flat,IDMap")
def test_merge_IDMap2(self):
self.do_test_with_ids("Flat,IDMap2")
| TestMerge2 |
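Every test in the class above checks one invariant: an index built from all the data must return the same search results as two indexes built from disjoint halves and then merged. The pattern, reduced to plain Python with a sorted list standing in for a Faiss index (names and stand-ins are invented for illustration):

```python
def check_merge_invariant(data, split, build, merge, query):
    # Reference: one structure built from everything.
    ref = build(data)
    # Candidate: two structures from disjoint halves, merged.
    left, right = build(data[:split]), build(data[split:])
    merged = merge(left, right)
    assert query(merged) == query(ref), "merge changed search results"

# Stand-ins: the "index" is a sorted list, "search" returns the 3 smallest.
check_merge_invariant(
    data=[5, 1, 4, 2, 8, 7, 3],
    split=4,
    build=sorted,
    merge=lambda a, b: sorted(a + b),
    query=lambda idx: idx[:3],
)
```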
python | apache__airflow | airflow-core/src/airflow/api_fastapi/core_api/datamodels/common.py | {
"start": 1240,
"end": 1400
} | class ____(str, enum.Enum):
"""Bulk Action to be performed on the used model."""
CREATE = "create"
DELETE = "delete"
UPDATE = "update"
| BulkAction |
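Mixing `str` into the `Enum` bases makes each member a real string, so it compares equal to raw request values and serializes with `json.dumps` without a custom encoder. A quick sketch of the behavior this buys (the enum is redeclared here purely to demonstrate; on Python 3.11+ `enum.StrEnum` offers the same semantics):

```python
import enum
import json

class BulkAction(str, enum.Enum):
    CREATE = "create"
    DELETE = "delete"
    UPDATE = "update"

# Members compare equal to their plain-string values...
assert BulkAction.CREATE == "create"
# ...round-trip from raw strings...
assert BulkAction("update") is BulkAction.UPDATE
# ...and json.dumps accepts them directly, because they *are* strings.
assert json.dumps({"action": BulkAction.DELETE}) == '{"action": "delete"}'
```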
python | scrapy__scrapy | tests/test_downloadermiddleware.py | {
"start": 727,
"end": 1985
} | class ____:
settings_dict = None
# should be a fixture but async fixtures that use Futures are problematic with pytest-twisted
@asynccontextmanager
async def get_mwman(self) -> AsyncGenerator[DownloaderMiddlewareManager]:
crawler = get_crawler(Spider, self.settings_dict)
crawler.spider = crawler._create_spider("foo")
mwman = DownloaderMiddlewareManager.from_crawler(crawler)
crawler.engine = crawler._create_engine()
await crawler.engine.open_spider_async()
try:
yield mwman
finally:
await crawler.engine.close_spider_async()
@staticmethod
async def _download(
mwman: DownloaderMiddlewareManager,
request: Request,
response: Response | None = None,
) -> Response | Request:
"""Executes downloader mw manager's download method and returns
the result (Request or Response) or raises exception in case of
failure.
"""
if not response:
response = Response(request.url)
def download_func(request: Request) -> Deferred[Response]:
return succeed(response)
return await maybe_deferred_to_future(mwman.download(download_func, request))
| TestManagerBase |
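`get_mwman` uses `@asynccontextmanager` with `try`/`finally` so that teardown (`close_spider_async`) runs even when the test body raises. The shape of that pattern, stripped of the Scrapy specifics (all names here are invented for illustration):

```python
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def managed_resource():
    events.append("open")          # setup, like open_spider_async()
    try:
        yield "resource"           # hand the object to the test body
    finally:
        events.append("close")     # teardown runs even on failure

async def main():
    async with managed_resource() as res:
        events.append(f"use {res}")
    # Teardown still runs when the body raises:
    try:
        async with managed_resource():
            raise RuntimeError("test body failed")
    except RuntimeError:
        pass

asyncio.run(main())
```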
python | rapidsai__cudf | python/cudf/cudf/core/accessors/lists.py | {
"start": 760,
"end": 14310
} | class ____(BaseAccessor):
"""
List methods for Series
"""
_column: ListColumn
def __init__(self, parent: Series | Index):
if not is_dtype_obj_list(parent.dtype):
raise AttributeError(
"Can only use .list accessor with a 'list' dtype"
)
super().__init__(parent=parent)
def __getitem__(self, key) -> Series | Index:
if isinstance(key, slice):
return self.slice(start=key.start, stop=key.stop, step=key.step) # type: ignore[attr-defined]
else:
return self.get(key)
def get(
self,
index: int | ColumnLike,
default: ScalarLike | ColumnLike | None = None,
) -> Series | Index:
"""
Extract element at the given index from each list in a Series of lists.
``index`` can be an integer or a sequence of integers. If
``index`` is an integer, the element at position ``index`` is
extracted from each list. If ``index`` is a sequence, it must
be of the same length as the Series, and ``index[i]``
specifies the position of the element to extract from the
``i``-th list in the Series.
If the index is out of bounds for any list, return <NA> or, if
provided, ``default``. Thus, this method never raises an
``IndexError``.
Parameters
----------
index : int or sequence of ints
default : scalar, optional
Returns
-------
Series or Index
Examples
--------
>>> s = cudf.Series([[1, 2, 3], [3, 4, 5], [4, 5, 6]])
>>> s.list.get(-1)
0 3
1 5
2 6
dtype: int64
>>> s = cudf.Series([[1, 2], [3, 4, 5], [4, 5, 6]])
>>> s.list.get(2)
0 <NA>
1 5
2 6
dtype: int64
>>> s.list.get(2, default=0)
0 0
1 5
2 6
dtype: int64
>>> s.list.get([0, 1, 2])
0 1
1 4
2 6
dtype: int64
"""
if isinstance(index, int):
out = self._column.extract_element_scalar(index)
else:
index = as_column(index)
out = self._column.extract_element_column(index)
if not (default is None or default is pd.NA):
# determine rows for which `index` is out-of-bounds
lengths = self._column.count_elements()
out_of_bounds_mask = ((-1 * index) > lengths) | (index >= lengths)
# replace the value in those rows (should be NA) with `default`
if out_of_bounds_mask.any():
out = out._scatter_by_column(
out_of_bounds_mask,
pa_scalar_to_plc_scalar(pa.scalar(default)),
)
if self._column.element_type != out.dtype:
# libcudf doesn't maintain struct labels so we must transfer over
# manually from the input column if we lost some information
# somewhere. Not doing this unilaterally since the cost is
# non-zero..
out = out._with_type_metadata(self._column.element_type)
return self._return_or_inplace(out)
def contains(self, search_key: ScalarLike) -> Series | Index:
"""
Returns boolean values indicating whether the specified scalar
is an element of each row.
Parameters
----------
search_key : scalar
element being searched for in each row of the list column
Returns
-------
Series or Index
Examples
--------
>>> s = cudf.Series([[1, 2, 3], [3, 4, 5], [4, 5, 6]])
>>> s.list.contains(4)
Series([False, True, True])
dtype: bool
"""
return self._return_or_inplace(
self._column.contains_scalar(pa.scalar(search_key))
)
def index(self, search_key: ScalarLike | ColumnLike) -> Series | Index:
"""
Returns integers representing the index of the search key for each row.
If ``search_key`` is a sequence, it must be the same length as the
Series and ``search_key[i]`` represents the search key for the
``i``-th row of the Series.
        If the search key is not contained in a row, -1 is returned. If either
        the row or the search key is null, <NA> is returned. If the search key
is contained multiple times, the smallest matching index is returned.
Parameters
----------
search_key : scalar or sequence of scalars
Element or elements being searched for in each row of the list
column
Returns
-------
Series or Index
Examples
--------
>>> s = cudf.Series([[1, 2, 3], [3, 4, 5], [4, 5, 6]])
>>> s.list.index(4)
0 -1
1 1
2 0
dtype: int32
>>> s = cudf.Series([["a", "b", "c"], ["x", "y", "z"]])
>>> s.list.index(["b", "z"])
0 1
1 2
dtype: int32
>>> s = cudf.Series([[4, 5, 6], None, [-3, -2, -1]])
>>> s.list.index([None, 3, -2])
0 <NA>
1 <NA>
2 1
dtype: int32
"""
if is_scalar(search_key):
result = self._column.index_of_scalar(pa.scalar(search_key))
else:
result = self._column.index_of_column(as_column(search_key))
return self._return_or_inplace(result)
@property
def leaves(self) -> Series | Index:
"""
From a Series of (possibly nested) lists, obtain the elements from
the innermost lists as a flat Series (one value per row).
Returns
-------
Series or Index
Examples
--------
>>> a = cudf.Series([[[1, None], [3, 4]], None, [[5, 6]]])
>>> a.list.leaves
0 1
1 <NA>
2 3
3 4
4 5
5 6
dtype: int64
"""
return self._return_or_inplace(
self._column.leaves(), retain_index=False
)
def len(self) -> Series | Index:
"""
Computes the length of each element in the Series/Index.
Returns
-------
Series or Index
Examples
--------
>>> s = cudf.Series([[1, 2, 3], None, [4, 5]])
>>> s
0 [1, 2, 3]
1 None
2 [4, 5]
dtype: list
>>> s.list.len()
0 3
1 <NA>
2 2
dtype: int32
"""
return self._return_or_inplace(self._column.count_elements())
def take(self, lists_indices: ColumnLike) -> Series | Index:
"""
Collect list elements based on given indices.
Parameters
----------
lists_indices: Series-like of lists
Specifies what to collect from each row
Returns
-------
Series or Index
Examples
--------
>>> s = cudf.Series([[1, 2, 3], None, [4, 5]])
>>> s
0 [1, 2, 3]
1 None
2 [4, 5]
dtype: list
>>> s.list.take([[0, 1], [], []])
0 [1, 2]
1 None
2 []
dtype: list
"""
lists_indices_col = as_column(lists_indices)
if not isinstance(lists_indices_col.dtype, ListDtype):
raise ValueError("lists_indices should be list type array.")
if not lists_indices_col.size == self._column.size:
raise ValueError(
"lists_indices and list column is of different size."
)
if (
not is_dtype_obj_numeric(
lists_indices_col.children[1].dtype, include_decimal=False
)
or lists_indices_col.children[1].dtype.kind not in "iu"
):
raise TypeError(
"lists_indices should be column of values of index types."
)
return self._return_or_inplace(
self._column.segmented_gather(lists_indices_col)
)
def unique(self) -> Series | Index:
"""
Returns the unique elements in each list.
The ordering of elements is not guaranteed.
Returns
-------
Series or Index
Examples
--------
>>> s = cudf.Series([[1, 1, 2, None, None], None, [4, 4], []])
>>> s
0 [1.0, 1.0, 2.0, nan, nan]
1 None
2 [4.0, 4.0]
3 []
dtype: list
>>> s.list.unique() # Order of list element is not guaranteed
0 [1.0, 2.0, nan]
1 None
2 [4.0]
3 []
dtype: list
"""
if isinstance(self._column.children[1].dtype, ListDtype):
raise NotImplementedError("Nested lists unique is not supported.")
return self._return_or_inplace(
self._column.distinct(nulls_equal=True, nans_all_equal=True)
)
def sort_values(
self,
ascending: bool = True,
inplace: bool = False,
kind: str = "quicksort",
na_position: Literal["first", "last"] = "last",
ignore_index: bool = False,
) -> Series | Index:
"""
Sort each list by the values.
Sort the lists in ascending or descending order by some criterion.
Parameters
----------
ascending : bool, default True
If True, sort values in ascending order, otherwise descending.
na_position : {'first', 'last'}, default 'last'
'first' puts nulls at the beginning, 'last' puts nulls at the end.
ignore_index : bool, default False
If True, the resulting axis will be labeled 0, 1, ..., n - 1.
Returns
-------
Series or Index with each list sorted
Examples
--------
>>> s = cudf.Series([[4, 2, None, 9], [8, 8, 2], [2, 1]])
>>> s.list.sort_values(ascending=True, na_position="last")
0 [2.0, 4.0, 9.0, nan]
1 [2.0, 8.0, 8.0]
2 [1.0, 2.0]
dtype: list
.. pandas-compat::
`pandas.Series.list.sort_values`
This method does not exist in pandas but it can be run
as:
>>> import pandas as pd
>>> s = pd.Series([[3, 2, 1], [2, 4, 3]])
>>> print(s.apply(sorted))
0 [1, 2, 3]
1 [2, 3, 4]
dtype: object
"""
if inplace:
raise NotImplementedError("`inplace` not currently implemented.")
if kind != "quicksort":
raise NotImplementedError("`kind` not currently implemented.")
if na_position not in {"first", "last"}:
raise ValueError(f"Unknown `na_position` value {na_position}")
if isinstance(self._column.children[1].dtype, ListDtype):
raise NotImplementedError("Nested lists sort is not supported.")
return self._return_or_inplace(
self._column.sort_lists(ascending, na_position),
retain_index=not ignore_index,
)
def concat(self, dropna: bool = True) -> Series | Index:
"""
For a column with at least one level of nesting, concatenate the
lists in each row.
Parameters
----------
dropna: bool, optional
If True (default), ignores top-level null elements in each row.
If False, and top-level null elements are present, the resulting
row in the output is null.
Returns
-------
Series or Index
Examples
--------
>>> s1
0 [[1.0, 2.0], [3.0, 4.0, 5.0]]
1 [[6.0, None], [7.0], [8.0, 9.0]]
dtype: list
>>> s1.list.concat()
0 [1.0, 2.0, 3.0, 4.0, 5.0]
1 [6.0, None, 7.0, 8.0, 9.0]
dtype: list
Null values at the top-level in each row are dropped by default:
>>> s2
0 [[1.0, 2.0], None, [3.0, 4.0, 5.0]]
1 [[6.0, None], [7.0], [8.0, 9.0]]
dtype: list
>>> s2.list.concat()
0 [1.0, 2.0, 3.0, 4.0, 5.0]
1 [6.0, None, 7.0, 8.0, 9.0]
dtype: list
Use ``dropna=False`` to produce a null instead:
>>> s2.list.concat(dropna=False)
0 None
1 [6.0, nan, 7.0, 8.0, 9.0]
dtype: list
"""
return self._return_or_inplace(
self._column.concatenate_list_elements(dropna)
)
def astype(self, dtype: Dtype) -> Series | Index:
"""
Return a new list Series with the leaf values casted
to the specified data type.
Parameters
----------
dtype: data type to cast leaves values to
Returns
-------
A new Series of lists
Examples
--------
>>> s = cudf.Series([[1, 2], [3, 4]])
>>> s.dtype
ListDtype(int64)
>>> s2 = s.list.astype("float64")
>>> s2.dtype
ListDtype(float64)
"""
return self._return_or_inplace(
self._column._transform_leaves(
lambda col, dtype: col.astype(cudf_dtype(dtype)), dtype
)
)
| ListMethods |
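The `get(index, default)` semantics above — negative indices allowed, out-of-bounds returning a default instead of raising — can be stated precisely in plain Python, one list at a time rather than as the columnar cudf kernel. This mirrors the accessor's out-of-bounds mask `((-1 * index) > lengths) | (index >= lengths)`:

```python
def list_get(row, index, default=None):
    # An index is in bounds when -len(row) <= index < len(row);
    # otherwise return `default` rather than raising IndexError.
    if row is None:
        return None
    if -len(row) <= index < len(row):
        return row[index]
    return default
```

On the docstring data, `list_get([1, 2], 2, default=0)` gives `0` while `list_get([1, 2, 3], -1)` gives `3`.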
python | getsentry__sentry | tests/sentry/sentry_apps/api/endpoints/test_sentry_app_avatar.py | {
"start": 449,
"end": 1758
} | class ____(APITestCase):
endpoint = "sentry-api-0-sentry-app-avatar"
def setUp(self) -> None:
super().setUp()
self.unpublished_app = self.create_sentry_app(name="Meow", organization=self.organization)
SentryAppAvatar.objects.create(sentry_app=self.unpublished_app, color=True, avatar_type=0)
SentryAppAvatar.objects.create(sentry_app=self.unpublished_app, color=False, avatar_type=0)
self.login_as(self.user)
def get_avatar(
self, resp: Response, is_color: bool = True
) -> SentryAppAvatarSerializerResponse:
avatars = resp.data["avatars"]
for avatar in avatars:
if avatar.get("color") == is_color:
return avatar
raise AssertionError("Invariant violation: expect avatar to be returned")
def create_avatar(self, is_color: bool) -> Response:
avatar_photo = (
b64encode(self.load_fixture("rookout-color.png"))
if is_color is True
else b64encode(self.load_fixture("rookout-bw.png"))
)
data = {
"color": is_color,
"avatar_type": "upload",
"avatar_photo": avatar_photo,
}
return self.get_success_response(self.unpublished_app.slug, **data)
@control_silo_test
| SentryAppAvatarTestBase |
python | qdrant__qdrant-client | qdrant_client/async_qdrant_client.py | {
"start": 928,
"end": 105344
} | class ____(AsyncQdrantFastembedMixin):
"""Entry point to communicate with Qdrant service via REST or gRPC API.
It combines interface classes and endpoint implementation.
Additionally, it provides custom implementations for frequently used methods like initial collection upload.
All methods in QdrantClient accept both gRPC and REST structures as an input.
Conversion will be performed automatically.
.. note::
        The methods in this module are wrappers around generated gRPC and REST client code.
        If you need lower-level access to the generated clients, use the following properties:
- :py:attr:`QdrantClient.grpc_points`
- :py:attr:`QdrantClient.grpc_collections`
- :py:attr:`QdrantClient.rest`
.. note::
If you need async, please consider using Async Implementations of QdrantClient.
- :class:`qdrant_client.async_qdrant_client`
Args:
location:
If `":memory:"` - use in-memory Qdrant instance.
If `str` - use it as a `url` parameter.
If `None` - use default values for `host` and `port`.
url: either host or str of "Optional[scheme], host, Optional[port], Optional[prefix]".
Default: `None`
port: Port of the REST API interface. Default: 6333
grpc_port: Port of the gRPC interface. Default: 6334
prefer_grpc: If `true` - use gPRC interface whenever possible in custom methods.
https: If `true` - use HTTPS(SSL) protocol. Default: `None`
api_key: API key for authentication in Qdrant Cloud. Default: `None`
prefix:
If not `None` - add `prefix` to the REST URL path.
Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.
Default: `None`
timeout:
Timeout for REST and gRPC API requests.
Default: 5 seconds for REST and unlimited for gRPC
host: Host name of Qdrant service. If url and host are None, set to 'localhost'.
Default: `None`
path: Persistence path for QdrantLocal. Default: `None`
force_disable_check_same_thread:
For QdrantLocal, force disable check_same_thread. Default: `False`
Only use this if you can guarantee that you can resolve the thread safety outside QdrantClient.
auth_token_provider: Callback function to get Bearer access token. If given, the function will be called before each request to get the token.
check_compatibility: If `true` - check compatibility with the server version. Default: `true`
grpc_options: a mapping of gRPC channel options
cloud_inference: If `true` - do inference of `models.Document` and other models in Qdrant Cloud. Default: `False`.
local_inference_batch_size: inference batch size used by fastembed when using local inference with `models.Document` and other models.
pool_size: Connection pool size. Default: `None`. The gRPC connection pool defaults to 3; the REST default is
inherited from `httpx` (default: 100)
**kwargs: Additional arguments passed directly into REST client initialization
"""
def __init__(
self,
location: Optional[str] = None,
url: Optional[str] = None,
port: Optional[int] = 6333,
grpc_port: int = 6334,
prefer_grpc: bool = False,
https: Optional[bool] = None,
api_key: Optional[str] = None,
prefix: Optional[str] = None,
timeout: Optional[int] = None,
host: Optional[str] = None,
path: Optional[str] = None,
force_disable_check_same_thread: bool = False,
grpc_options: Optional[dict[str, Any]] = None,
auth_token_provider: Optional[
Union[Callable[[], str], Callable[[], Awaitable[str]]]
] = None,
cloud_inference: bool = False,
local_inference_batch_size: Optional[int] = None,
check_compatibility: bool = True,
pool_size: Optional[int] = None,
**kwargs: Any,
):
self._init_options = {
key: value
for (key, value) in locals().items()
if key not in ("self", "__class__", "kwargs")
}
self._init_options.update({k: v for (k, v) in kwargs.items()})
if sum([param is not None for param in (location, url, host, path)]) > 1:
raise ValueError(
"Only one of <location>, <url>, <host> or <path> should be specified."
)
self._client: AsyncQdrantBase
server_version = None
if location == ":memory:":
self._client = AsyncQdrantLocal(
location=location, force_disable_check_same_thread=force_disable_check_same_thread
)
elif path is not None:
self._client = AsyncQdrantLocal(
location=path, force_disable_check_same_thread=force_disable_check_same_thread
)
else:
if location is not None and url is None:
url = location
self._client = AsyncQdrantRemote(
url=url,
port=port,
grpc_port=grpc_port,
prefer_grpc=prefer_grpc,
https=https,
api_key=api_key,
prefix=prefix,
timeout=timeout,
host=host,
grpc_options=grpc_options,
auth_token_provider=auth_token_provider,
check_compatibility=check_compatibility,
pool_size=pool_size,
**kwargs,
)
server_version = self._client.server_version
if isinstance(self._client, AsyncQdrantLocal) and cloud_inference:
raise ValueError(
"Cloud inference is not supported for local Qdrant, consider using FastEmbed or switch to Qdrant Cloud"
)
self.cloud_inference = cloud_inference
self.local_inference_batch_size = local_inference_batch_size
self._inference_inspector = Inspector()
super().__init__(
parser=self._inference_inspector.parser,
is_local_mode=isinstance(self._client, AsyncQdrantLocal),
server_version=server_version,
)
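The constructor snapshots its arguments through `locals()` so they can later be replayed via `init_options`. A minimal sketch of that capture pattern with a toy class (`ToyClient` and its parameters are illustrative, not part of the qdrant-client API):

```python
class ToyClient:
    def __init__(self, url=None, port=6333, **kwargs):
        # locals() here holds exactly the declared parameters; the outermost
        # iterable of the comprehension is evaluated in __init__'s scope.
        # Drop self/kwargs, then fold the extra keyword arguments back in.
        self._init_options = {
            key: value
            for key, value in locals().items()
            if key not in ("self", "__class__", "kwargs")
        }
        self._init_options.update(kwargs)

    @property
    def init_options(self):
        return self._init_options


client = ToyClient(url="http://localhost:6333", timeout=10)
print(client.init_options == {"url": "http://localhost:6333", "port": 6333, "timeout": 10})  # True
```

Because the snapshot happens before any other attribute is assigned, only the constructor's parameters (plus `**kwargs`) end up in the dictionary.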
async def close(self, grpc_grace: Optional[float] = None, **kwargs: Any) -> None:
"""Closes the connection to Qdrant
Args:
grpc_grace: Grace period for gRPC connection close. Default: None
"""
if hasattr(self, "_client"):
await self._client.close(grpc_grace=grpc_grace, **kwargs)
@property
def grpc_collections(self) -> grpc.CollectionsStub:
"""gRPC client for collections methods
Returns:
An instance of raw gRPC client, generated from Protobuf
"""
if isinstance(self._client, AsyncQdrantRemote):
return self._client.grpc_collections
raise NotImplementedError(f"gRPC client is not supported for {type(self._client)}")
@property
def grpc_points(self) -> grpc.PointsStub:
"""gRPC client for points methods
Returns:
An instance of raw gRPC client, generated from Protobuf
"""
if isinstance(self._client, AsyncQdrantRemote):
return self._client.grpc_points
raise NotImplementedError(f"gRPC client is not supported for {type(self._client)}")
@property
def http(self) -> AsyncApis[AsyncApiClient]:
"""REST Client
Returns:
An instance of raw REST API client, generated from OpenAPI schema
"""
if isinstance(self._client, AsyncQdrantRemote):
return self._client.http
raise NotImplementedError(f"REST client is not supported for {type(self._client)}")
@property
def init_options(self) -> dict[str, Any]:
"""`__init__` Options
Returns:
A dictionary of options the client class was instantiated with
"""
return self._init_options
async def query_batch_points(
self,
collection_name: str,
requests: Sequence[types.QueryRequest],
consistency: Optional[types.ReadConsistency] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> list[types.QueryResponse]:
"""Perform search, recommend, discovery, and context search operations in batch to mitigate network overhead
Args:
collection_name: Name of the collection
requests: List of query requests
consistency:
Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:
- int - number of replicas to query, values should be present in all queried replicas
- 'majority' - query all replicas, but return values present in the majority of replicas
- 'quorum' - query the majority of replicas, return values present in all of them
- 'all' - query all replicas, and return values present in all replicas
timeout:
Overrides global timeout for this search. Unit is seconds.
Returns:
List of query responses
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
requests = self._resolve_query_batch_request(requests)
if not self.cloud_inference and self._inference_inspector.inspect(requests):
requests = list(
self._embed_models(
requests, is_query=True, batch_size=self.local_inference_batch_size
)
)
return await self._client.query_batch_points(
collection_name=collection_name,
requests=requests,
consistency=consistency,
timeout=timeout,
**kwargs,
)
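`query_batch_points` exists to fold many queries into one round-trip. A sketch of that pattern with a stub standing in for the network client (the stub only mimics the call shapes; real code would call these methods on `AsyncQdrantClient`):

```python
import asyncio


class StubClient:
    """Stand-in for the async client; each method models one network round-trip."""

    def __init__(self):
        self.round_trips = 0

    async def query_points(self, collection_name, query):
        self.round_trips += 1
        return {"query": query, "points": []}

    async def query_batch_points(self, collection_name, requests):
        self.round_trips += 1  # one trip carries every request
        return [{"query": q, "points": []} for q in requests]


async def main():
    client = StubClient()
    # Three separate queries -> three round-trips:
    for q in ([1.0, 0.0], [0.5, 0.5], [0.0, 1.0]):
        await client.query_points("demo", q)
    sequential_trips = client.round_trips

    # The same three queries batched -> one round-trip:
    client.round_trips = 0
    responses = await client.query_batch_points(
        "demo", [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
    )
    return sequential_trips, client.round_trips, len(responses)


print(asyncio.run(main()))  # (3, 1, 3)
```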
async def query_points(
self,
collection_name: str,
query: Union[
types.PointId,
list[float],
list[list[float]],
types.SparseVector,
types.Query,
types.NumpyArray,
types.Document,
types.Image,
types.InferenceObject,
None,
] = None,
using: Optional[str] = None,
prefetch: Union[types.Prefetch, list[types.Prefetch], None] = None,
query_filter: Optional[types.Filter] = None,
search_params: Optional[types.SearchParams] = None,
limit: int = 10,
offset: Optional[int] = None,
with_payload: Union[bool, Sequence[str], types.PayloadSelector] = True,
with_vectors: Union[bool, Sequence[str]] = False,
score_threshold: Optional[float] = None,
lookup_from: Optional[types.LookupLocation] = None,
consistency: Optional[types.ReadConsistency] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> types.QueryResponse:
"""Universal endpoint to run any available operation, such as search, recommendation, discovery, context search.
Args:
collection_name: Collection to search in
query:
Query for the chosen search type operation.
- If `str` - use string as UUID of the existing point as a search query.
- If `int` - use integer as ID of the existing point as a search query.
- If `list[float]` - use as a dense vector for nearest search.
- If `list[list[float]]` - use as a multi-vector for nearest search.
- If `SparseVector` - use as a sparse vector for nearest search.
- If `Query` - use as a query for specific search type.
- If `NumpyArray` - use as a dense vector for nearest search.
- If `Document` - infer vector from the document text and use it for nearest search (requires `fastembed` package installed).
- If `None` - return first `limit` points from the collection.
prefetch: prefetch queries to make a selection of the data to be used with the main query
query_filter:
- Exclude vectors which don't fit the given conditions.
- If `None` - search among all vectors
search_params: Additional search params
limit: How many results to return
offset:
Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
with_payload:
- Specify which stored payload should be attached to the result.
- If `True` - attach all payload
- If `False` - do not attach any payload
- If List of string - include only specified fields
- If `PayloadSelector` - use explicit rules
with_vectors:
- If `True` - Attach stored vector to the search result.
- If `False` - Do not attach vector.
- If List of string - include only specified fields
- Default: `False`
score_threshold:
Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or lower than the threshold depending
on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
using:
Name of the vectors to use for query.
If `None` - use default vectors or provided in named vector structures.
lookup_from:
Defines a location (collection and vector field name), used to lookup vectors for recommendations,
discovery and context queries.
If `None` - current collection will be used.
consistency:
Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:
- int - number of replicas to query, values should be present in all queried replicas
- 'majority' - query all replicas, but return values present in the majority of replicas
- 'quorum' - query the majority of replicas, return values present in all of them
- 'all' - query all replicas, and return values present in all replicas
shard_key_selector:
This parameter allows specifying which shards should be queried.
If `None` - query all shards. Only works for collections with `custom` sharding method.
timeout:
Overrides global timeout for this search. Unit is seconds.
Examples:
`Search for closest points with a filter`::
qdrant.query_points(
collection_name="test_collection",
query=[1.0, 0.1, 0.2, 0.7],
query_filter=Filter(
must=[
FieldCondition(
key='color',
match=MatchValue(
value="red"
)
)
]
)
)
Returns:
QueryResponse structure containing list of found close points with similarity scores.
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
query = self._resolve_query(query)
if not self.cloud_inference:
if self._inference_inspector.inspect(query) or self._inference_inspector.inspect(
prefetch
):
query = (
next(
iter(
self._embed_models(
query, is_query=True, batch_size=self.local_inference_batch_size
)
)
)
if query is not None
else None
)
if isinstance(prefetch, list):
prefetch = list(
self._embed_models(
prefetch, is_query=True, batch_size=self.local_inference_batch_size
)
)
else:
prefetch = (
next(
iter(
self._embed_models(
prefetch,
is_query=True,
batch_size=self.local_inference_batch_size,
)
)
)
if prefetch is not None
else None
)
return await self._client.query_points(
collection_name=collection_name,
query=query,
prefetch=prefetch,
query_filter=query_filter,
search_params=search_params,
limit=limit,
offset=offset,
with_payload=with_payload,
with_vectors=with_vectors,
score_threshold=score_threshold,
using=using,
lookup_from=lookup_from,
consistency=consistency,
shard_key_selector=shard_key_selector,
timeout=timeout,
**kwargs,
)
async def query_points_groups(
self,
collection_name: str,
group_by: str,
query: Union[
types.PointId,
list[float],
list[list[float]],
types.SparseVector,
types.Query,
types.NumpyArray,
types.Document,
types.Image,
types.InferenceObject,
None,
] = None,
using: Optional[str] = None,
prefetch: Union[types.Prefetch, list[types.Prefetch], None] = None,
query_filter: Optional[types.Filter] = None,
search_params: Optional[types.SearchParams] = None,
limit: int = 10,
group_size: int = 3,
with_payload: Union[bool, Sequence[str], types.PayloadSelector] = True,
with_vectors: Union[bool, Sequence[str]] = False,
score_threshold: Optional[float] = None,
with_lookup: Optional[types.WithLookupInterface] = None,
lookup_from: Optional[types.LookupLocation] = None,
consistency: Optional[types.ReadConsistency] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> types.GroupsResult:
"""Universal endpoint to run any available operation with grouped results, such as search, recommendation, discovery, context search.
Args:
collection_name: Collection to search in
query:
Query for the chosen search type operation.
- If `str` - use string as UUID of the existing point as a search query.
- If `int` - use integer as ID of the existing point as a search query.
- If `list[float]` - use as a dense vector for nearest search.
- If `list[list[float]]` - use as a multi-vector for nearest search.
- If `SparseVector` - use as a sparse vector for nearest search.
- If `Query` - use as a query for specific search type.
- If `NumpyArray` - use as a dense vector for nearest search.
- If `Document` - infer vector from the document text and use it for nearest search (requires `fastembed` package installed).
- If `None` - return first `limit` points from the collection.
prefetch: prefetch queries to make a selection of the data to be used with the main query
query_filter:
- Exclude vectors which don't fit the given conditions.
- If `None` - search among all vectors
search_params: Additional search params
limit: How many results to return
group_size: How many results to return for each group
group_by: Name of the payload field to group by. Field must be of type "keyword" or "integer".
Nested fields are specified using dot notation, e.g. "nested_field.subfield".
with_payload:
- Specify which stored payload should be attached to the result.
- If `True` - attach all payload
- If `False` - do not attach any payload
- If List of string - include only specified fields
- If `PayloadSelector` - use explicit rules
with_vectors:
- If `True` - Attach stored vector to the search result.
- If `False` - Do not attach vector.
- If List of string - include only specified fields
- Default: `False`
score_threshold:
Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or lower than the threshold depending
on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
using:
Name of the vectors to use for query.
If `None` - use default vectors or provided in named vector structures.
with_lookup:
Look for points in another collection using the group ids.
If specified, each group will contain a record from the specified collection
with the same id as the group id. In addition, the parameter allows to specify
which parts of the record should be returned, like in `with_payload` and `with_vectors` parameters.
lookup_from:
Defines a location (collection and vector field name), used to lookup vectors being referenced in the query as IDs.
If `None` - current collection will be used.
consistency:
Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:
- int - number of replicas to query, values should be present in all queried replicas
- 'majority' - query all replicas, but return values present in the majority of replicas
- 'quorum' - query the majority of replicas, return values present in all of them
- 'all' - query all replicas, and return values present in all replicas
shard_key_selector:
This parameter allows specifying which shards should be queried.
If `None` - query all shards. Only works for collections with `custom` sharding method.
timeout:
Overrides global timeout for this search. Unit is seconds.
Examples:
`Search for closest points and group results`::
qdrant.query_points_groups(
collection_name="test_collection",
query=[1.0, 0.1, 0.2, 0.7],
group_by="color",
group_size=3,
)
Returns:
List of groups with no more than `group_size` hits in each group.
Each group also contains an id of the group, which is the value of the payload field.
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
query = self._resolve_query(query)
if not self.cloud_inference:
if self._inference_inspector.inspect(query) or self._inference_inspector.inspect(
prefetch
):
query = (
next(
iter(
self._embed_models(
query, is_query=True, batch_size=self.local_inference_batch_size
)
)
)
if query is not None
else None
)
if isinstance(prefetch, list):
prefetch = list(
self._embed_models(
prefetch, is_query=True, batch_size=self.local_inference_batch_size
)
)
elif prefetch is not None:
prefetch = next(
iter(
self._embed_models(
prefetch, is_query=True, batch_size=self.local_inference_batch_size
)
)
)
return await self._client.query_points_groups(
collection_name=collection_name,
query=query,
prefetch=prefetch,
query_filter=query_filter,
search_params=search_params,
group_by=group_by,
limit=limit,
group_size=group_size,
with_payload=with_payload,
with_vectors=with_vectors,
score_threshold=score_threshold,
using=using,
with_lookup=with_lookup,
consistency=consistency,
shard_key_selector=shard_key_selector,
timeout=timeout,
**kwargs,
)
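The grouping semantics (`limit` groups, at most `group_size` hits each, hits arriving in score order) can be sketched with plain dictionaries; this is an illustration of the behavior, not the server implementation:

```python
def group_hits(hits, group_by, limit=2, group_size=3):
    """Bucket hits by a payload field, keeping at most `group_size` hits per
    group and at most `limit` groups; hits are assumed sorted by score."""
    groups = {}
    for hit in hits:
        key = hit["payload"][group_by]
        if key not in groups and len(groups) >= limit:
            continue  # already have enough groups; ignore new keys
        bucket = groups.setdefault(key, [])
        if len(bucket) < group_size:
            bucket.append(hit)
    return groups


hits = [{"id": i, "payload": {"color": c}}
        for i, c in enumerate(["red", "red", "blue", "red", "green", "blue"])]
result = group_hits(hits, "color", limit=2, group_size=2)
print({k: [h["id"] for h in v] for k, v in result.items()})
# {'red': [0, 1], 'blue': [2, 5]}
```

Note that "green" is dropped entirely: by the time it appears, two groups already exist, and the third "red" hit is also skipped because its bucket is full.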
async def search_matrix_pairs(
self,
collection_name: str,
query_filter: Optional[types.Filter] = None,
limit: int = 3,
sample: int = 10,
using: Optional[str] = None,
consistency: Optional[types.ReadConsistency] = None,
timeout: Optional[int] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.SearchMatrixPairsResponse:
"""
Compute distance matrix for sampled points with a pair-based output format.
Args:
collection_name: Name of the collection.
query_filter: Filter to apply.
limit: How many neighbors per sample to find.
sample: How many points to select and search within.
using: Name of the vectors to use for search. If `None`, use default vectors.
consistency: Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:
- int: Number of replicas to query, values should be present in all queried replicas.
- 'majority': Query all replicas, but return values present in the majority of replicas.
- 'quorum': Query the majority of replicas, return values present in all of them.
- 'all': Query all replicas, and return values present in all replicas.
timeout: Overrides global timeout for this search. Unit is seconds.
shard_key_selector: This parameter allows specifying which shards should be queried.
If `None`, query all shards. Only works for collections with the `custom` sharding method.
Returns:
Distance matrix using a pair-based encoding.
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.search_matrix_pairs(
collection_name=collection_name,
query_filter=query_filter,
limit=limit,
sample=sample,
using=using,
consistency=consistency,
timeout=timeout,
shard_key_selector=shard_key_selector,
**kwargs,
)
async def search_matrix_offsets(
self,
collection_name: str,
query_filter: Optional[types.Filter] = None,
limit: int = 3,
sample: int = 10,
using: Optional[str] = None,
consistency: Optional[types.ReadConsistency] = None,
timeout: Optional[int] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.SearchMatrixOffsetsResponse:
"""
Compute distance matrix for sampled points with an offset-based output format.
Args:
collection_name: Name of the collection.
query_filter: Filter to apply.
limit: How many neighbors per sample to find.
sample: How many points to select and search within.
using: Name of the vectors to use for search. If `None`, use default vectors.
consistency: Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:
- int: Number of replicas to query, values should be present in all queried replicas.
- 'majority': Query all replicas, but return values present in the majority of replicas.
- 'quorum': Query the majority of replicas, return values present in all of them.
- 'all': Query all replicas and return values present in all replicas.
timeout: Overrides global timeout for this search. Unit is seconds.
shard_key_selector: This parameter allows specifying which shards should be queried.
If `None`, query all shards. Only works for collections with the `custom` sharding method.
Returns:
Distance matrix using an offset-based encoding.
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.search_matrix_offsets(
collection_name=collection_name,
query_filter=query_filter,
limit=limit,
sample=sample,
using=using,
consistency=consistency,
timeout=timeout,
shard_key_selector=shard_key_selector,
**kwargs,
)
async def scroll(
self,
collection_name: str,
scroll_filter: Optional[types.Filter] = None,
limit: int = 10,
order_by: Optional[types.OrderBy] = None,
offset: Optional[types.PointId] = None,
with_payload: Union[bool, Sequence[str], types.PayloadSelector] = True,
with_vectors: Union[bool, Sequence[str]] = False,
consistency: Optional[types.ReadConsistency] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> tuple[list[types.Record], Optional[types.PointId]]:
"""Scroll over all (matching) points in the collection.
This method provides a way to iterate over all stored points with some optional filtering condition.
Scroll does not apply any similarity estimations; it returns points sorted by id in ascending order.
Args:
collection_name: Name of the collection
scroll_filter: If provided - only returns points matching filtering conditions
limit: How many points to return
order_by: Order the records by a payload key. If `None` - order by id
offset: If provided - skip points with ids less than given `offset`
with_payload:
- Specify which stored payload should be attached to the result.
- If `True` - attach all payload
- If `False` - do not attach any payload
- If List of string - include only specified fields
- If `PayloadSelector` - use explicit rules
with_vectors:
- If `True` - Attach stored vector to the search result.
- If `False` (default) - Do not attach vector.
- If List of string - include only specified fields
consistency:
Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:
- int - number of replicas to query, values should be present in all queried replicas
- 'majority' - query all replicas, but return values present in the majority of replicas
- 'quorum' - query the majority of replicas, return values present in all of them
- 'all' - query all replicas, and return values present in all replicas
shard_key_selector:
This parameter allows specifying which shards should be queried.
If `None` - query all shards. Only works for collections with `custom` sharding method.
timeout:
Overrides global timeout for this operation. Unit is seconds.
Returns:
A pair of (List of points) and (optional offset for the next scroll request).
If next page offset is `None` - there are no more points in the collection to scroll.
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.scroll(
collection_name=collection_name,
scroll_filter=scroll_filter,
limit=limit,
order_by=order_by,
offset=offset,
with_payload=with_payload,
with_vectors=with_vectors,
consistency=consistency,
shard_key_selector=shard_key_selector,
timeout=timeout,
**kwargs,
)
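Because `scroll` returns the offset for the next page, exhaustive iteration is a loop that feeds each returned offset back in. A sketch of that pattern against a stub client (the stub mimics only the `(records, next_page_offset)` return shape; real code would call `AsyncQdrantClient.scroll`):

```python
import asyncio


class StubScroller:
    """Mimics scroll's (records, next_page_offset) contract over ids 0..9."""

    def __init__(self):
        self.ids = list(range(10))

    async def scroll(self, collection_name, limit, offset=None):
        start = 0 if offset is None else self.ids.index(offset)
        page = self.ids[start:start + limit]
        # The next offset is the first id NOT included in this page.
        nxt = self.ids[start + limit] if start + limit < len(self.ids) else None
        return page, nxt


async def scroll_all(client, collection_name, limit=4):
    records, offset = [], None
    while True:
        page, offset = await client.scroll(collection_name, limit=limit, offset=offset)
        records.extend(page)
        if offset is None:  # no more points to scroll
            return records


print(asyncio.run(scroll_all(StubScroller(), "demo")))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```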
async def count(
self,
collection_name: str,
count_filter: Optional[types.Filter] = None,
exact: bool = True,
shard_key_selector: Optional[types.ShardKeySelector] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> types.CountResult:
"""Count points in the collection.
Count points in the collection matching the given filter.
Args:
collection_name: name of the collection to count points in
count_filter: filtering conditions
exact:
If `True` - provide the exact count of points matching the filter.
If `False` - provide the approximate count of points matching the filter. Works faster.
shard_key_selector:
This parameter allows specifying which shards should be queried.
If `None` - query all shards. Only works for collections with `custom` sharding method.
timeout:
Overrides global timeout for this operation. Unit is seconds.
Returns:
Number of points in the collection matching the filter.
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.count(
collection_name=collection_name,
count_filter=count_filter,
exact=exact,
shard_key_selector=shard_key_selector,
timeout=timeout,
**kwargs,
)
async def facet(
self,
collection_name: str,
key: str,
facet_filter: Optional[types.Filter] = None,
limit: int = 10,
exact: bool = False,
consistency: Optional[types.ReadConsistency] = None,
timeout: Optional[int] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.FacetResponse:
"""Facet counts for the collection. For a specific payload key, returns unique values along with their counts.
Higher counts come first in the results.
Args:
collection_name: Name of the collection
key: Payload field to facet
facet_filter: Filter to apply
limit: Maximum number of hits to return
exact: If `True` - provide the exact count of points matching the filter. If `False` - provide the approximate count of points matching the filter. Works faster.
consistency:
Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:
- int - number of replicas to query, values should be present in all queried replicas
- 'majority' - query all replicas, but return values present in the majority of replicas
- 'quorum' - query the majority of replicas, return values present in all of them
- 'all' - query all replicas, and return values present in all replicas
timeout: Overrides global timeout for this search. Unit is seconds.
shard_key_selector:
This parameter allows specifying which shards should be queried.
If `None` - query all shards. Only works for collections with `custom` sharding method.
Returns:
Unique values in the facet and the number of points they cover.
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.facet(
collection_name=collection_name,
key=key,
facet_filter=facet_filter,
limit=limit,
exact=exact,
consistency=consistency,
timeout=timeout,
shard_key_selector=shard_key_selector,
**kwargs,
)
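Facet semantics boil down to counting unique payload values and returning the most frequent first. A stdlib sketch of the same idea (an illustration, not the server implementation):

```python
from collections import Counter


def facet(points, key, limit=10):
    """Count unique payload values under `key`; highest counts come first."""
    counts = Counter(p["payload"][key] for p in points if key in p["payload"])
    return counts.most_common(limit)


points = [{"payload": {"color": c}}
          for c in ["red", "blue", "red", "green", "red", "blue"]]
print(facet(points, "color"))  # [('red', 3), ('blue', 2), ('green', 1)]
```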
async def upsert(
self,
collection_name: str,
points: types.Points,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
update_filter: Optional[types.Filter] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""
Update or insert a new point into the collection.
If a point with the given ID already exists, it will be overwritten.
Args:
collection_name (str): To which collection to insert
points (Point): Batch or list of points to insert
wait (bool): Wait for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receiving.
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
shard_key_selector:
Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
update_filter: If specified, only points that match this filter will be updated, others will be inserted
Returns:
Operation Result(UpdateResult)
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
if (
not isinstance(points, types.Batch)
and len(points) > 0
and isinstance(points[0], grpc.PointStruct)
):
show_warning_once(
message="\n Usage of `grpc.PointStruct` is deprecated. Please use `models.PointStruct` instead.\n ",
category=DeprecationWarning,
idx="grpc-input",
stacklevel=4,
)
if not self.cloud_inference and self._inference_inspector.inspect(points):
if isinstance(points, types.Batch):
points = next(
iter(
self._embed_models(
points, is_query=False, batch_size=self.local_inference_batch_size
)
)
)
else:
points = list(
self._embed_models(
points, is_query=False, batch_size=self.local_inference_batch_size
)
)
return await self._client.upsert(
collection_name=collection_name,
points=points,
wait=wait,
ordering=ordering,
shard_key_selector=shard_key_selector,
update_filter=update_filter,
**kwargs,
)
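Upsert semantics are those of a keyed overwrite: new IDs are inserted, existing IDs are replaced. Sketched with a plain dict (illustration only, not how Qdrant stores points):

```python
def upsert(store, points):
    """store: dict mapping id -> point; points: iterable of {'id': ..., ...}."""
    for point in points:
        store[point["id"]] = point  # insert a new id or overwrite an existing one
    return store


store = {}
upsert(store, [{"id": 1, "vector": [0.1]}, {"id": 2, "vector": [0.2]}])
upsert(store, [{"id": 2, "vector": [0.9]}])  # id 2 is overwritten
print(sorted(store), store[2]["vector"])  # [1, 2] [0.9]
```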
async def update_vectors(
self,
collection_name: str,
points: Sequence[types.PointVectors],
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
update_filter: Optional[types.Filter] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""Update specified vectors in the collection. Keeps payload and unspecified vectors unchanged.
Args:
collection_name (str): Name of the collection to update vectors in
points (Point): List of (id, vector) pairs to update. Vector might be a list of numbers or a dict of named vectors.
Examples:
- `PointVectors(id=1, vector=[1, 2, 3])`
- `PointVectors(id=2, vector={'vector_1': [1, 2, 3], 'vector_2': [4, 5, 6]})`
wait (bool): Wait for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receiving.
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
shard_key_selector:
Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
update_filter:
If specified, only points that match this filter will be updated
Returns:
Operation Result(UpdateResult)
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
if not self.cloud_inference and self._inference_inspector.inspect(points):
points = list(
self._embed_models(
points, is_query=False, batch_size=self.local_inference_batch_size
)
)
return await self._client.update_vectors(
collection_name=collection_name,
points=points,
wait=wait,
ordering=ordering,
shard_key_selector=shard_key_selector,
update_filter=update_filter,
)
async def delete_vectors(
self,
collection_name: str,
vectors: Sequence[str],
points: types.PointsSelector,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""Delete specified vectors from the collection. Does not affect payload.
Args:
collection_name (str): Name of the collection to delete vector from
vectors: List of names of the vectors to delete. Use `""` to delete the default vector. At least one vector should be specified.
points (Point): Selects points based on list of IDs or filter
Examples:
- `points=[1, 2, 3, "cd3b53f0-11a7-449f-bc50-d06310e7ed90"]`
- `points=Filter(must=[FieldCondition(key='rand_number', range=Range(gte=0.7))])`
wait (bool): Wait for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receiving.
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
shard_key_selector:
Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
Returns:
Operation result
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.delete_vectors(
collection_name=collection_name,
vectors=vectors,
points=points,
wait=wait,
ordering=ordering,
shard_key_selector=shard_key_selector,
)
async def retrieve(
self,
collection_name: str,
ids: Sequence[types.PointId],
with_payload: Union[bool, Sequence[str], types.PayloadSelector] = True,
with_vectors: Union[bool, Sequence[str]] = False,
consistency: Optional[types.ReadConsistency] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> list[types.Record]:
"""Retrieve stored points by IDs
Args:
collection_name: Name of the collection to lookup in
ids: list of IDs to lookup
with_payload:
- Specify which stored payload should be attached to the result.
- If `True` - attach all payload
- If `False` - do not attach any payload
- If List of string - include only specified fields
- If `PayloadSelector` - use explicit rules
with_vectors:
- If `True` - Attach stored vector to the search result.
- If `False` - Do not attach vector.
- If List of string - Attach only specified vectors.
- Default: `False`
consistency:
Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:
- int - number of replicas to query, values should be present in all queried replicas
- 'majority' - query all replicas, but return values present in the majority of replicas
- 'quorum' - query the majority of replicas, return values present in all of them
- 'all' - query all replicas, and return values present in all replicas
shard_key_selector:
This parameter allows specifying which shards should be queried.
If `None` - query all shards. Only works for collections with `custom` sharding method.
timeout:
Overrides global timeout for this operation. Unit is seconds.
Returns:
List of points
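The `with_payload` selection rules can be modeled with plain dicts (an illustrative sketch; the real `PayloadSelector` types offer more explicit rules):

```python
def select_payload(payload: dict, with_payload):
    # True -> attach everything, False -> attach nothing,
    # a list of field names -> attach only those fields
    if with_payload is True:
        return payload
    if with_payload is False:
        return {}
    return {k: v for k, v in payload.items() if k in with_payload}

stored = {"city": "Berlin", "rand_number": 0.9}
```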
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.retrieve(
collection_name=collection_name,
ids=ids,
with_payload=with_payload,
with_vectors=with_vectors,
consistency=consistency,
shard_key_selector=shard_key_selector,
timeout=timeout,
**kwargs,
)
async def delete(
self,
collection_name: str,
points_selector: types.PointsSelector,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""Deletes selected points from collection
Args:
collection_name: Name of the collection
wait: Await for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receipt.
points_selector: Selects points based on list of IDs or filter.
Examples:
- `points=[1, 2, 3, "cd3b53f0-11a7-449f-bc50-d06310e7ed90"]`
- `points=Filter(must=[FieldCondition(key='rand_number', range=Range(gte=0.7))])`
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
shard_key_selector:
Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
Returns:
Operation result
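The two selector forms can be pictured with a toy predicate (a sketch only; real filters are built from `Filter`, `FieldCondition`, and `Range`):

```python
def matches_selector(point_id, payload: dict, selector) -> bool:
    # a list selects points by ID; a (key, gte) pair stands in for
    # Filter(must=[FieldCondition(key=key, range=Range(gte=gte))])
    if isinstance(selector, list):
        return point_id in selector
    key, gte = selector
    return key in payload and payload[key] >= gte

points = {1: {"rand_number": 0.9}, 2: {"rand_number": 0.1}}
deleted = [pid for pid, p in points.items() if matches_selector(pid, p, ("rand_number", 0.7))]
```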
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.delete(
collection_name=collection_name,
points_selector=points_selector,
wait=wait,
ordering=ordering,
shard_key_selector=shard_key_selector,
**kwargs,
)
async def set_payload(
self,
collection_name: str,
payload: types.Payload,
points: types.PointsSelector,
key: Optional[str] = None,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""
Modifies payload of the specified points.
Examples:
`Set payload`::
# Assign payload value with key `"key"` to points 1, 2, 3.
# If payload value with specified key already exists - it will be overwritten
qdrant_client.set_payload(
collection_name="test_collection",
wait=True,
payload={
"key": "value"
},
points=[1, 2, 3]
)
Args:
collection_name: Name of the collection.
wait: Await for the results to be processed.
- If `true`, the result will be returned only when all changes are applied.
- If `false`, the result will be returned immediately after confirmation of receipt.
payload: Key-value pairs of payload to assign.
points: List of affected points, filter, or points selector.
Example:
- `points=[1, 2, 3, "cd3b53f0-11a7-449f-bc50-d06310e7ed90"]`
- `points=Filter(must=[FieldCondition(key='rand_number', range=Range(gte=0.7))])`
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default): Write operations may be reordered, works faster.
- `medium`: Write operations go through a dynamically selected leader, may be inconsistent for a short period of time in case of leader change.
- `strong`: Write operations go through the permanent leader, consistent, but may be unavailable if the leader is down.
shard_key_selector: Defines the shard groups that should be used to write updates into.
If multiple shard keys are provided, the update will be written to each of them.
Only works for collections with the `custom` sharding method.
key: Path to the nested field in the payload to modify. If not specified, modifies the root of the payload.
E.g.::
PointStruct(
id=42,
vector=[...],
payload={
"recipe": {
"fruits": {"apple": "100g"}
}
}
)
qdrant_client.set_payload(
...,
payload={"cinnamon": "2g"},
key="recipe.fruits",
points=[42]
)
PointStruct(
id=42,
vector=[...],
payload={
"recipe": {
"fruits": {
"apple": "100g",
"cinnamon": "2g"
}
}
}
)
Returns:
Operation result.
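The `key` behavior can be modeled as a dotted-path merge (a pure-Python sketch of the semantics, not the server code):

```python
def set_payload_at_key(payload: dict, new_values: dict, key=None) -> dict:
    # walk (or create) the nested dicts along the dotted path, then merge
    target = payload
    if key is not None:
        for part in key.split("."):
            target = target.setdefault(part, {})
    target.update(new_values)
    return payload

doc = {"recipe": {"fruits": {"apple": "100g"}}}
set_payload_at_key(doc, {"cinnamon": "2g"}, key="recipe.fruits")
```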
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.set_payload(
collection_name=collection_name,
payload=payload,
points=points,
wait=wait,
ordering=ordering,
shard_key_selector=shard_key_selector,
key=key,
**kwargs,
)
async def overwrite_payload(
self,
collection_name: str,
payload: types.Payload,
points: types.PointsSelector,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""Overwrites payload of the specified points
After this operation is applied, only the specified payload will be present in the point.
Any existing payload keys not listed in the new payload will be deleted.
Examples:
`Set payload`::
# Overwrite payload value with key `"key"` to points 1, 2, 3.
# If any other valid payload value exists - it will be deleted
qdrant_client.overwrite_payload(
collection_name="test_collection",
wait=True,
payload={
"key": "value"
},
points=[1,2,3]
)
Args:
collection_name: Name of the collection
wait: Await for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receipt.
payload: Key-value pairs of payload to assign
points: List of affected points, filter or points selector.
Example:
- `points=[1, 2, 3, "cd3b53f0-11a7-449f-bc50-d06310e7ed90"]`
- `points=Filter(must=[FieldCondition(key='rand_number', range=Range(gte=0.7))])`
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
shard_key_selector:
Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
Returns:
Operation result
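The difference from `set_payload` can be shown with plain dicts (an illustrative sketch of the semantics, not the server implementation):

```python
def overwrite_payload_of(old: dict, new: dict) -> dict:
    # overwrite_payload: the new payload fully replaces the old one
    return dict(new)

def set_payload_of(old: dict, new: dict) -> dict:
    # set_payload at the root: existing keys survive unless overwritten
    return {**old, **new}

old = {"color": "red", "price": 10}
```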
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.overwrite_payload(
collection_name=collection_name,
payload=payload,
points=points,
wait=wait,
ordering=ordering,
shard_key_selector=shard_key_selector,
**kwargs,
)
async def delete_payload(
self,
collection_name: str,
keys: Sequence[str],
points: types.PointsSelector,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""Remove values from point's payload
Args:
collection_name: Name of the collection
wait: Await for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receipt.
keys: List of payload keys to remove
points: List of affected points, filter or points selector.
Example:
- `points=[1, 2, 3, "cd3b53f0-11a7-449f-bc50-d06310e7ed90"]`
- `points=Filter(must=[FieldCondition(key='rand_number', range=Range(gte=0.7))])`
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
shard_key_selector:
Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
Returns:
Operation result
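The per-point effect amounts to dropping the listed keys (a toy model; vectors and the remaining payload keys are untouched):

```python
def delete_payload_keys(payload: dict, keys: list) -> dict:
    # remove the listed keys, keep everything else
    return {k: v for k, v in payload.items() if k not in keys}

payload = {"city": "Berlin", "rand_number": 0.9}
trimmed = delete_payload_keys(payload, ["rand_number"])
```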
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.delete_payload(
collection_name=collection_name,
keys=keys,
points=points,
wait=wait,
ordering=ordering,
shard_key_selector=shard_key_selector,
**kwargs,
)
async def clear_payload(
self,
collection_name: str,
points_selector: types.PointsSelector,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
shard_key_selector: Optional[types.ShardKeySelector] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""Delete all payload for selected points
Args:
collection_name: Name of the collection
wait: Await for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receipt.
points_selector: List of affected points, filter or points selector. Example:
- `points=[1, 2, 3, "cd3b53f0-11a7-449f-bc50-d06310e7ed90"]`
- `points=Filter(must=[FieldCondition(key='rand_number', range=Range(gte=0.7))])`
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
shard_key_selector:
Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
Returns:
Operation result
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.clear_payload(
collection_name=collection_name,
points_selector=points_selector,
wait=wait,
ordering=ordering,
shard_key_selector=shard_key_selector,
**kwargs,
)
async def batch_update_points(
self,
collection_name: str,
update_operations: Sequence[types.UpdateOperation],
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
**kwargs: Any,
) -> list[types.UpdateResult]:
"""Batch update points in the collection.
Args:
collection_name: Name of the collection
update_operations: List of update operations
wait: Await for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receipt.
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
Returns:
Operation results
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
if not self.cloud_inference and self._inference_inspector.inspect(update_operations):
update_operations = list(
self._embed_models(
update_operations, is_query=False, batch_size=self.local_inference_batch_size
)
)
return await self._client.batch_update_points(
collection_name=collection_name,
update_operations=update_operations,
wait=wait,
ordering=ordering,
**kwargs,
)
async def update_collection_aliases(
self,
change_aliases_operations: Sequence[types.AliasOperations],
timeout: Optional[int] = None,
**kwargs: Any,
) -> bool:
"""Operation for performing changes of collection aliases.
Alias changes are atomic, meaning that no collection modifications can happen between alias operations.
Args:
change_aliases_operations: List of operations to perform
timeout:
Wait for operation commit timeout in seconds.
If timeout is reached - request will return with service error.
Returns:
Operation result
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.update_collection_aliases(
change_aliases_operations=change_aliases_operations, timeout=timeout, **kwargs
)
async def get_collection_aliases(
self, collection_name: str, **kwargs: Any
) -> types.CollectionsAliasesResponse:
"""Get collection aliases
Args:
collection_name: Name of the collection
Returns:
Collection aliases
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.get_collection_aliases(collection_name=collection_name, **kwargs)
async def get_aliases(self, **kwargs: Any) -> types.CollectionsAliasesResponse:
"""Get all aliases
Returns:
All aliases of all collections
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.get_aliases(**kwargs)
async def get_collections(self, **kwargs: Any) -> types.CollectionsResponse:
"""Get list name of all existing collections
Returns:
List of the collections
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.get_collections(**kwargs)
async def get_collection(self, collection_name: str, **kwargs: Any) -> types.CollectionInfo:
"""Get detailed information about specified existing collection
Args:
collection_name: Name of the collection
Returns:
Detailed information about the collection
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.get_collection(collection_name=collection_name, **kwargs)
async def collection_exists(self, collection_name: str, **kwargs: Any) -> bool:
"""Check whether collection already exists
Args:
collection_name: Name of the collection
Returns:
True if collection exists, False if not
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.collection_exists(collection_name=collection_name, **kwargs)
async def update_collection(
self,
collection_name: str,
optimizers_config: Optional[types.OptimizersConfigDiff] = None,
collection_params: Optional[types.CollectionParamsDiff] = None,
vectors_config: Optional[types.VectorsConfigDiff] = None,
hnsw_config: Optional[types.HnswConfigDiff] = None,
quantization_config: Optional[types.QuantizationConfigDiff] = None,
timeout: Optional[int] = None,
sparse_vectors_config: Optional[Mapping[str, types.SparseVectorParams]] = None,
strict_mode_config: Optional[types.StrictModeConfig] = None,
metadata: Optional[types.Payload] = None,
**kwargs: Any,
) -> bool:
"""Update parameters of the collection
Args:
collection_name: Name of the collection
optimizers_config: Override for optimizer configuration
collection_params: Override for collection parameters
vectors_config: Override for vector-specific configuration
hnsw_config: Override for HNSW index params
quantization_config: Override for quantization params
timeout:
Wait for operation commit timeout in seconds.
If timeout is reached - request will return with service error.
sparse_vectors_config: Override for sparse vector-specific configuration
strict_mode_config: Override for strict mode configuration
metadata: Arbitrary JSON-like metadata for the collection, will be merged with already stored metadata
Returns:
Operation result
"""
if "optimizer_config" in kwargs and optimizers_config is not None:
raise ValueError(
"Only one of optimizer_config and optimizers_config should be specified"
)
if "optimizer_config" in kwargs:
optimizers_config = kwargs.pop("optimizer_config")
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.update_collection(
collection_name=collection_name,
optimizers_config=optimizers_config,
collection_params=collection_params,
vectors_config=vectors_config,
hnsw_config=hnsw_config,
quantization_config=quantization_config,
timeout=timeout,
sparse_vectors_config=sparse_vectors_config,
strict_mode_config=strict_mode_config,
metadata=metadata,
**kwargs,
)
async def delete_collection(
self, collection_name: str, timeout: Optional[int] = None, **kwargs: Any
) -> bool:
"""Removes collection and all it's data
Args:
collection_name: Name of the collection to delete
timeout:
Wait for operation commit timeout in seconds.
If timeout is reached - request will return with service error.
Returns:
Operation result
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.delete_collection(
collection_name=collection_name, timeout=timeout, **kwargs
)
async def create_collection(
self,
collection_name: str,
vectors_config: Optional[
Union[types.VectorParams, Mapping[str, types.VectorParams]]
] = None,
sparse_vectors_config: Optional[Mapping[str, types.SparseVectorParams]] = None,
shard_number: Optional[int] = None,
sharding_method: Optional[types.ShardingMethod] = None,
replication_factor: Optional[int] = None,
write_consistency_factor: Optional[int] = None,
on_disk_payload: Optional[bool] = None,
hnsw_config: Optional[types.HnswConfigDiff] = None,
optimizers_config: Optional[types.OptimizersConfigDiff] = None,
wal_config: Optional[types.WalConfigDiff] = None,
quantization_config: Optional[types.QuantizationConfig] = None,
timeout: Optional[int] = None,
strict_mode_config: Optional[types.StrictModeConfig] = None,
metadata: Optional[types.Payload] = None,
**kwargs: Any,
) -> bool:
"""Create empty collection with given parameters
Args:
collection_name: Name of the collection to create
vectors_config:
Configuration of the vector storage. Vector params contains size and distance for the vector storage.
If dict is passed, service will create a vector storage for each key in the dict.
If single VectorParams is passed, service will create a single anonymous vector storage.
sparse_vectors_config:
Configuration of the sparse vector storage.
The service will create a sparse vector storage for each key in the dict.
shard_number: Number of shards in collection. Default is 1, minimum is 1.
sharding_method:
Defines strategy for shard creation.
Option `auto` (default) creates the defined number of shards automatically.
Data will be distributed between shards automatically.
After creation, shards can be additionally replicated, but new shards cannot be created.
Option `custom` allows creating shards manually; each shard should be created with an assigned
unique `shard_key`. Data will be distributed between shards based on the `shard_key` value.
replication_factor:
Replication factor for collection. Default is 1, minimum is 1.
Defines how many copies of each shard will be created.
Has effect only in distributed mode.
write_consistency_factor:
Write consistency factor for collection. Default is 1, minimum is 1.
Defines how many replicas should apply the operation for us to consider it successful.
Increasing this number will make the collection more resilient to inconsistencies, but will
also make it fail if not enough replicas are available.
Does not have any performance impact.
Has effect only in distributed mode.
on_disk_payload:
If true - point's payload will not be stored in memory.
It will be read from the disk every time it is requested.
This setting saves RAM by (slightly) increasing the response time.
Note: those payload values that are involved in filtering and are indexed - remain in RAM.
hnsw_config: Params for HNSW index
optimizers_config: Params for optimizer
wal_config: Params for Write-Ahead-Log
quantization_config: Params for quantization, if None - quantization will be disabled
timeout:
Wait for operation commit timeout in seconds.
If timeout is reached - request will return with service error.
strict_mode_config: Configure limitations for the collection, such as max size, rate limits, etc.
metadata: Arbitrary JSON-like metadata for the collection
Returns:
Operation result
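The two accepted shapes of `vectors_config` can be sketched with plain dicts standing in for `VectorParams` (the vector names and sizes here are illustrative only):

```python
# single anonymous vector storage (stands in for
# VectorParams(size=4, distance=Distance.COSINE))
single = {"size": 4, "distance": "Cosine"}

# named vector storages: one storage is created per key
named = {
    "image": {"size": 512, "distance": "Cosine"},
    "text": {"size": 384, "distance": "Dot"},
}

def storage_names(vectors_config: dict) -> list:
    # toy check: a params-like dict means one unnamed storage,
    # otherwise one storage per named entry
    if "size" in vectors_config:
        return [""]
    return sorted(vectors_config)
```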
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.create_collection(
collection_name=collection_name,
vectors_config=vectors_config,
shard_number=shard_number,
sharding_method=sharding_method,
replication_factor=replication_factor,
write_consistency_factor=write_consistency_factor,
on_disk_payload=on_disk_payload,
hnsw_config=hnsw_config,
optimizers_config=optimizers_config,
wal_config=wal_config,
quantization_config=quantization_config,
timeout=timeout,
sparse_vectors_config=sparse_vectors_config,
strict_mode_config=strict_mode_config,
metadata=metadata,
**kwargs,
)
async def recreate_collection(
self,
collection_name: str,
vectors_config: Union[types.VectorParams, Mapping[str, types.VectorParams]],
sparse_vectors_config: Optional[Mapping[str, types.SparseVectorParams]] = None,
shard_number: Optional[int] = None,
sharding_method: Optional[types.ShardingMethod] = None,
replication_factor: Optional[int] = None,
write_consistency_factor: Optional[int] = None,
on_disk_payload: Optional[bool] = None,
hnsw_config: Optional[types.HnswConfigDiff] = None,
optimizers_config: Optional[types.OptimizersConfigDiff] = None,
wal_config: Optional[types.WalConfigDiff] = None,
quantization_config: Optional[types.QuantizationConfig] = None,
timeout: Optional[int] = None,
strict_mode_config: Optional[types.StrictModeConfig] = None,
metadata: Optional[types.Payload] = None,
**kwargs: Any,
) -> bool:
"""Delete and create empty collection with given parameters
Args:
collection_name: Name of the collection to recreate
vectors_config:
Configuration of the vector storage. Vector params contains size and distance for the vector storage.
If dict is passed, service will create a vector storage for each key in the dict.
If single VectorParams is passed, service will create a single anonymous vector storage.
sparse_vectors_config:
Configuration of the sparse vector storage.
The service will create a sparse vector storage for each key in the dict.
shard_number: Number of shards in collection. Default is 1, minimum is 1.
sharding_method:
Defines strategy for shard creation.
Option `auto` (default) creates the defined number of shards automatically.
Data will be distributed between shards automatically.
After creation, shards can be additionally replicated, but new shards cannot be created.
Option `custom` allows creating shards manually; each shard should be created with an assigned
unique `shard_key`. Data will be distributed between shards based on the `shard_key` value.
replication_factor:
Replication factor for collection. Default is 1, minimum is 1.
Defines how many copies of each shard will be created.
Has effect only in distributed mode.
write_consistency_factor:
Write consistency factor for collection. Default is 1, minimum is 1.
Defines how many replicas should apply the operation for us to consider it successful.
Increasing this number will make the collection more resilient to inconsistencies, but will
also make it fail if not enough replicas are available.
Does not have any performance impact.
Has effect only in distributed mode.
on_disk_payload:
If true - point's payload will not be stored in memory.
It will be read from the disk every time it is requested.
This setting saves RAM by (slightly) increasing the response time.
Note: those payload values that are involved in filtering and are indexed - remain in RAM.
hnsw_config: Params for HNSW index
optimizers_config: Params for optimizer
wal_config: Params for Write-Ahead-Log
quantization_config: Params for quantization, if None - quantization will be disabled
timeout:
Wait for operation commit timeout in seconds.
If timeout is reached - request will return with service error.
strict_mode_config: Configure limitations for the collection, such as max size, rate limits, etc.
metadata: Arbitrary JSON-like metadata for the collection
Returns:
Operation result
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
warnings.warn(
"`recreate_collection` method is deprecated and will be removed in the future. Use `collection_exists` to check collection existence and `create_collection` instead.",
DeprecationWarning,
stacklevel=2,
)
return await self._client.recreate_collection(
collection_name=collection_name,
vectors_config=vectors_config,
shard_number=shard_number,
sharding_method=sharding_method,
replication_factor=replication_factor,
write_consistency_factor=write_consistency_factor,
on_disk_payload=on_disk_payload,
hnsw_config=hnsw_config,
optimizers_config=optimizers_config,
wal_config=wal_config,
quantization_config=quantization_config,
timeout=timeout,
sparse_vectors_config=sparse_vectors_config,
strict_mode_config=strict_mode_config,
metadata=metadata,
**kwargs,
)
def upload_points(
self,
collection_name: str,
points: Iterable[types.PointStruct],
batch_size: int = 64,
parallel: int = 1,
method: Optional[str] = None,
max_retries: int = 3,
wait: bool = False,
shard_key_selector: Optional[types.ShardKeySelector] = None,
update_filter: Optional[types.Filter] = None,
**kwargs: Any,
) -> None:
"""Upload points to the collection
Similar to the `upload_collection` method, but operates on points rather than on vectors and payloads individually.
Args:
collection_name: Name of the collection to upload to
points: Iterator over points to upload
batch_size: How many points to upload per request, Default: 64
parallel: Number of parallel processes of upload
method: Start method for parallel processes, Default: forkserver
max_retries: maximum number of retries in case of a failure
during the upload of a batch
wait:
Await for the results to be applied on the server side.
If `true`, each update request will explicitly wait for the confirmation of completion. Might be slower.
If `false`, each update request will return immediately after the confirmation of receipt.
Default: `false`
shard_key_selector: Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
This parameter overwrites shard keys written in the records.
update_filter: If specified, only points that match this filter will be updated, others will be inserted
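The automatic batching can be sketched as splitting the point iterator into fixed-size chunks (a simplified model; the real client also handles parallelism and retries):

```python
from itertools import islice

def iter_batches(points, batch_size: int):
    # yield lists of at most batch_size points until the iterator is drained
    it = iter(points)
    while batch := list(islice(it, batch_size)):
        yield batch

batches = list(iter_batches(range(10), batch_size=4))
```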
"""
def chain(*iterables: Iterable) -> Iterable:
for iterable in iterables:
yield from iterable
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
if not self.cloud_inference:
iter_points = iter(points)
requires_inference = False
try:
point = next(iter_points)
requires_inference = self._inference_inspector.inspect(point)
points = chain(iter([point]), iter_points)
except (StopIteration, StopAsyncIteration):
points = []
if requires_inference:
points = self._embed_models_strict(
points, parallel=parallel, batch_size=self.local_inference_batch_size
)
return self._client.upload_points(
collection_name=collection_name,
points=points,
batch_size=batch_size,
parallel=parallel,
method=method,
max_retries=max_retries,
wait=wait,
shard_key_selector=shard_key_selector,
update_filter=update_filter,
)
def upload_collection(
self,
collection_name: str,
vectors: Union[
Iterable[types.VectorStruct], dict[str, types.NumpyArray], types.NumpyArray
],
payload: Optional[Iterable[dict[Any, Any]]] = None,
ids: Optional[Iterable[types.PointId]] = None,
batch_size: int = 64,
parallel: int = 1,
method: Optional[str] = None,
max_retries: int = 3,
wait: bool = False,
shard_key_selector: Optional[types.ShardKeySelector] = None,
update_filter: Optional[types.Filter] = None,
**kwargs: Any,
) -> None:
"""Upload vectors and payload to the collection.
This method will perform automatic batching of the data.
If you need to perform a single update, use `upsert` method.
Note: use `upload_points` method if you want to upload multiple vectors with single payload.
Args:
collection_name: Name of the collection to upload to
vectors: np.ndarray or an iterable over vectors to upload. Might be mmaped
payload: Iterable of vectors payload, Optional, Default: None
ids: Iterable of custom vectors ids, Optional, Default: None
batch_size: How many vectors to upload per request, Default: 64
parallel: Number of parallel processes of upload
method: Start method for parallel processes, Default: forkserver
max_retries: maximum number of retries in case of a failure
during the upload of a batch
wait:
Await for the results to be applied on the server side.
If `true`, each update request will explicitly wait for the confirmation of completion. Might be slower.
If `false`, each update request will return immediately after the confirmation of receipt.
Default: `false`
shard_key_selector: Defines the shard groups that should be used to write updates into.
If multiple shard_keys are provided, the update will be written to each of them.
Only works for collections with `custom` sharding method.
update_filter: If specified, only points that match this filter will be updated, others will be inserted
"""
def chain(*iterables: Iterable) -> Iterable:
for iterable in iterables:
yield from iterable
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
if not self.cloud_inference:
if not isinstance(vectors, dict) and (not isinstance(vectors, np.ndarray)):
requires_inference = False
try:
iter_vectors = iter(vectors)
vector = next(iter_vectors)
requires_inference = self._inference_inspector.inspect(vector)
vectors = chain(iter([vector]), iter_vectors)
except (StopIteration, StopAsyncIteration):
vectors = []
if requires_inference:
vectors = self._embed_models_strict(
vectors, parallel=parallel, batch_size=self.local_inference_batch_size
)
return self._client.upload_collection(
collection_name=collection_name,
vectors=vectors,
payload=payload,
ids=ids,
batch_size=batch_size,
parallel=parallel,
method=method,
max_retries=max_retries,
wait=wait,
shard_key_selector=shard_key_selector,
update_filter=update_filter,
)
async def create_payload_index(
self,
collection_name: str,
field_name: str,
field_schema: Optional[types.PayloadSchemaType] = None,
field_type: Optional[types.PayloadSchemaType] = None,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""Creates index for a given payload field.
Indexed fields allow filtered search operations to be performed faster.
Args:
collection_name: Name of the collection
field_name: Name of the payload field
field_schema: Type of data to index
field_type: Same as field_schema, but deprecated
wait: Await for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receiving.
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
Returns:
Operation Result
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.create_payload_index(
collection_name=collection_name,
field_name=field_name,
field_schema=field_schema,
field_type=field_type,
wait=wait,
ordering=ordering,
**kwargs,
)
async def delete_payload_index(
self,
collection_name: str,
field_name: str,
wait: bool = True,
ordering: Optional[types.WriteOrdering] = None,
**kwargs: Any,
) -> types.UpdateResult:
"""Removes index for a given payload field.
Args:
collection_name: Name of the collection
field_name: Name of the payload field
wait: Await for the results to be processed.
- If `true`, result will be returned only when all changes are applied
- If `false`, result will be returned immediately after the confirmation of receiving.
ordering (Optional[WriteOrdering]): Define strategy for ordering of the points. Possible values:
- `weak` (default) - write operations may be reordered, works faster
- `medium` - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change
- `strong` - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down
Returns:
Operation Result
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.delete_payload_index(
collection_name=collection_name,
field_name=field_name,
wait=wait,
ordering=ordering,
**kwargs,
)
async def list_snapshots(
self, collection_name: str, **kwargs: Any
) -> list[types.SnapshotDescription]:
"""List all snapshots for a given collection.
Args:
collection_name: Name of the collection
Returns:
List of snapshots
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.list_snapshots(collection_name=collection_name, **kwargs)
async def create_snapshot(
self, collection_name: str, wait: bool = True, **kwargs: Any
) -> Optional[types.SnapshotDescription]:
"""Create snapshot for a given collection.
Args:
collection_name: Name of the collection
wait: Await for the snapshot to be created.
- If `true`, result will be returned only when a snapshot is created
- If `false`, result will be returned immediately after the confirmation of receiving.
Returns:
Snapshot description
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.create_snapshot(
collection_name=collection_name, wait=wait, **kwargs
)
async def delete_snapshot(
self, collection_name: str, snapshot_name: str, wait: bool = True, **kwargs: Any
) -> Optional[bool]:
"""Delete snapshot for a given collection.
Args:
collection_name: Name of the collection
snapshot_name: Snapshot id
wait: Await for the snapshot to be deleted.
- If `true`, result will be returned only when the snapshot is deleted
- If `false`, result will be returned immediately after the confirmation of receiving.
Returns:
True if snapshot was deleted
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.delete_snapshot(
collection_name=collection_name, snapshot_name=snapshot_name, wait=wait, **kwargs
)
async def list_full_snapshots(self, **kwargs: Any) -> list[types.SnapshotDescription]:
        """List all snapshots for a whole storage.
Returns:
List of snapshots
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.list_full_snapshots(**kwargs)
async def create_full_snapshot(
self, wait: bool = True, **kwargs: Any
) -> Optional[types.SnapshotDescription]:
"""Create snapshot for a whole storage.
Args:
wait: Await for the snapshot to be created.
- If `true`, result will be returned only when the snapshot is created
- If `false`, result will be returned immediately after the confirmation of receiving.
Returns:
Snapshot description
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.create_full_snapshot(wait=wait, **kwargs)
async def delete_full_snapshot(
self, snapshot_name: str, wait: bool = True, **kwargs: Any
) -> Optional[bool]:
"""Delete snapshot for a whole storage.
Args:
snapshot_name: Snapshot name
wait: Await for the snapshot to be deleted.
- If `true`, result will be returned only when the snapshot is deleted
- If `false`, result will be returned immediately after the confirmation of receiving.
Returns:
True if snapshot was deleted
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.delete_full_snapshot(
snapshot_name=snapshot_name, wait=wait, **kwargs
)
async def recover_snapshot(
self,
collection_name: str,
location: str,
api_key: Optional[str] = None,
checksum: Optional[str] = None,
priority: Optional[types.SnapshotPriority] = None,
wait: bool = True,
**kwargs: Any,
) -> Optional[bool]:
"""Recover collection from snapshot.
Args:
collection_name: Name of the collection
location: URL of the snapshot
Example:
- URL `http://localhost:8080/collections/my_collection/snapshots/my_snapshot`
- Local path `file:///qdrant/snapshots/test_collection/test_collection-6194298859870377-2023-11-09-15-17-51.snapshot`
api_key: API key to use for accessing the snapshot on another server.
checksum: Checksum of the snapshot to verify the integrity of the snapshot.
priority: Defines source of truth for snapshot recovery
- `replica` (default) means - prefer existing data over the snapshot
- `no_sync` means - do not sync shard with other shards
- `snapshot` means - prefer snapshot data over the current state
wait: Await for the recovery to be done.
- If `true`, result will be returned only when the recovery is done
- If `false`, result will be returned immediately after the confirmation of receiving.
Returns:
True if snapshot was recovered
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.recover_snapshot(
collection_name=collection_name,
location=location,
api_key=api_key,
checksum=checksum,
priority=priority,
wait=wait,
**kwargs,
)
async def list_shard_snapshots(
self, collection_name: str, shard_id: int, **kwargs: Any
) -> list[types.SnapshotDescription]:
        """List all snapshots of a given shard.
Args:
collection_name: Name of the collection
shard_id: Index of the shard
Returns:
List of snapshots
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.list_shard_snapshots(
collection_name=collection_name, shard_id=shard_id, **kwargs
)
async def create_shard_snapshot(
self, collection_name: str, shard_id: int, wait: bool = True, **kwargs: Any
) -> Optional[types.SnapshotDescription]:
"""Create snapshot for a given shard.
Args:
collection_name: Name of the collection
shard_id: Index of the shard
wait: Await for the snapshot to be created.
- If `true`, result will be returned only when the snapshot is created.
- If `false`, result will be returned immediately after the confirmation of receiving.
Returns:
Snapshot description
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.create_shard_snapshot(
collection_name=collection_name, shard_id=shard_id, wait=wait, **kwargs
)
async def delete_shard_snapshot(
self,
collection_name: str,
shard_id: int,
snapshot_name: str,
wait: bool = True,
**kwargs: Any,
) -> Optional[bool]:
"""Delete snapshot for a given shard.
Args:
collection_name: Name of the collection
shard_id: Index of the shard
snapshot_name: Snapshot id
wait: Await for the snapshot to be deleted.
- If `true`, result will be returned only when the snapshot is deleted
- If `false`, result will be returned immediately after the confirmation of receiving.
Returns:
True if snapshot was deleted
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.delete_shard_snapshot(
collection_name=collection_name,
shard_id=shard_id,
snapshot_name=snapshot_name,
wait=wait,
**kwargs,
)
async def recover_shard_snapshot(
self,
collection_name: str,
shard_id: int,
location: str,
api_key: Optional[str] = None,
checksum: Optional[str] = None,
priority: Optional[types.SnapshotPriority] = None,
wait: bool = True,
**kwargs: Any,
) -> Optional[bool]:
"""Recover shard from snapshot.
Args:
collection_name: Name of the collection
shard_id: Index of the shard
location: URL of the snapshot
Example:
- URL `http://localhost:8080/collections/my_collection/snapshots/my_snapshot`
api_key: API key to use for accessing the snapshot on another server.
checksum: Checksum of the snapshot to verify the integrity of the snapshot.
priority: Defines source of truth for snapshot recovery
- `replica` (default) means - prefer existing data over the snapshot
- `no_sync` means - do not sync shard with other shards
- `snapshot` means - prefer snapshot data over the current state
wait: Await for the recovery to be done.
- If `true`, result will be returned only when the recovery is done
- If `false`, result will be returned immediately after the confirmation of receiving.
Returns:
True if snapshot was recovered
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.recover_shard_snapshot(
collection_name=collection_name,
shard_id=shard_id,
location=location,
api_key=api_key,
checksum=checksum,
priority=priority,
wait=wait,
**kwargs,
)
async def create_shard_key(
self,
collection_name: str,
shard_key: types.ShardKey,
shards_number: Optional[int] = None,
replication_factor: Optional[int] = None,
placement: Optional[list[int]] = None,
**kwargs: Any,
) -> bool:
"""Create shard key for collection.
Only works for collections with `custom` sharding method.
Args:
collection_name: Name of the collection
shard_key: Shard key to create
shards_number: How many shards to create for this key
replication_factor: Replication factor for this key
placement: List of peers to place shards on. If None - place on all peers.
Returns:
Operation result
"""
return await self._client.create_shard_key(
collection_name=collection_name,
shard_key=shard_key,
shards_number=shards_number,
replication_factor=replication_factor,
placement=placement,
**kwargs,
)
async def delete_shard_key(
self, collection_name: str, shard_key: types.ShardKey, **kwargs: Any
) -> bool:
"""Delete shard key for collection.
Only works for collections with `custom` sharding method.
Args:
collection_name: Name of the collection
shard_key: Shard key to delete
Returns:
Operation result
"""
return await self._client.delete_shard_key(
collection_name=collection_name, shard_key=shard_key, **kwargs
)
async def info(self) -> types.VersionInfo:
"""Returns information about the running Qdrant instance like version and commit id
Returns:
Title, version and optionally commit info
"""
return await self._client.info()
async def cluster_collection_update(
self,
collection_name: str,
cluster_operation: types.ClusterOperations,
timeout: Optional[int] = None,
**kwargs: Any,
) -> bool:
"""Updates the cluster configuration for a specified collection.
Args:
collection_name: Name of the collection
cluster_operation: Cluster operation to update
timeout: Timeout in seconds to wait for the operation to complete
Returns:
bool: Operation result
"""
assert len(kwargs) == 0, f"Unknown arguments: {list(kwargs.keys())}"
return await self._client.cluster_collection_update(
collection_name=collection_name,
cluster_operation=cluster_operation,
timeout=timeout,
**kwargs,
)
async def collection_cluster_info(self, collection_name: str) -> types.CollectionClusterInfo:
"""Retrieves cluster details for a specified collection.
Args:
collection_name: Name of the collection
Returns:
types.CollectionClusterInfo: cluster details
"""
return await self._client.collection_cluster_info(collection_name=collection_name)
async def cluster_status(self) -> types.ClusterStatus:
"""Returns information about the cluster's current state and composition.
Returns: types.ClusterStatus
"""
return await self._client.cluster_status()
async def recover_current_peer(self) -> bool:
"""Attempts to restore or synchronize the node's current state with that of its peers.
Returns:
bool: Operation result
"""
return await self._client.recover_current_peer()
async def remove_peer(
self,
peer_id: int,
force: Optional[bool] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> bool:
"""Attempts to remove the node from the cluster. This endpoint returns an error if the node (peer) has
shards on it.
Args:
peer_id: Peer ID
force: If true - removes peer even if it has shards/replicas on it.
timeout: Wait for operation commit timeout in seconds. If timeout is reached - request will fail
Returns:
bool: Operation result
"""
return await self._client.remove_peer(peer_id, force=force, timeout=timeout, **kwargs)
| AsyncQdrantClient |
python | PyCQA__pyflakes | pyflakes/messages.py | {
"start": 4865,
"end": 5210
} | class ____(Message):
"""A `global` or `nonlocal` statement where the name is never reassigned"""
message = '`%s %s` is unused: name is never assigned in scope'
def __init__(self, filename, loc, name):
Message.__init__(self, filename, loc)
self.message_args = (type(loc).__name__.lower(), name)
| UnusedIndirectAssignment |
python | numpy__numpy | numpy/lib/tests/test_function_base.py | {
"start": 143190,
"end": 159177
} | class ____:
# most of this is already tested by TestPercentile
def V(self, x, y, alpha):
# Identification function used in several tests.
return (x >= y) - alpha
def test_max_ulp(self):
x = [0.0, 0.2, 0.4]
a = np.quantile(x, 0.45)
# The default linear method would result in 0 + 0.2 * (0.45/2) = 0.18.
# 0.18 is not exactly representable and the formula leads to a 1 ULP
# different result. Ensure it is this exact within 1 ULP, see gh-20331.
np.testing.assert_array_max_ulp(a, 0.18, maxulp=1)
def test_basic(self):
x = np.arange(8) * 0.5
assert_equal(np.quantile(x, 0), 0.)
assert_equal(np.quantile(x, 1), 3.5)
assert_equal(np.quantile(x, 0.5), 1.75)
def test_correct_quantile_value(self):
a = np.array([True])
tf_quant = np.quantile(True, False)
assert_equal(tf_quant, a[0])
assert_equal(type(tf_quant), a.dtype)
a = np.array([False, True, True])
quant_res = np.quantile(a, a)
assert_array_equal(quant_res, a)
assert_equal(quant_res.dtype, a.dtype)
def test_fraction(self):
# fractional input, integral quantile
x = [Fraction(i, 2) for i in range(8)]
q = np.quantile(x, 0)
assert_equal(q, 0)
assert_equal(type(q), Fraction)
q = np.quantile(x, 1)
assert_equal(q, Fraction(7, 2))
assert_equal(type(q), Fraction)
q = np.quantile(x, .5)
assert_equal(q, 1.75)
assert isinstance(q, float)
q = np.quantile(x, Fraction(1, 2))
assert_equal(q, Fraction(7, 4))
assert_equal(type(q), Fraction)
q = np.quantile(x, [Fraction(1, 2)])
assert_equal(q, np.array([Fraction(7, 4)]))
assert_equal(type(q), np.ndarray)
q = np.quantile(x, [[Fraction(1, 2)]])
assert_equal(q, np.array([[Fraction(7, 4)]]))
assert_equal(type(q), np.ndarray)
# repeat with integral input but fractional quantile
x = np.arange(8)
assert_equal(np.quantile(x, Fraction(1, 2)), Fraction(7, 2))
def test_complex(self):
# gh-22652
arr_c = np.array([0.5 + 3.0j, 2.1 + 0.5j, 1.6 + 2.3j], dtype='G')
assert_raises(TypeError, np.quantile, arr_c, 0.5)
arr_c = np.array([0.5 + 3.0j, 2.1 + 0.5j, 1.6 + 2.3j], dtype='D')
assert_raises(TypeError, np.quantile, arr_c, 0.5)
arr_c = np.array([0.5 + 3.0j, 2.1 + 0.5j, 1.6 + 2.3j], dtype='F')
assert_raises(TypeError, np.quantile, arr_c, 0.5)
def test_no_p_overwrite(self):
# this is worth retesting, because quantile does not make a copy
p0 = np.array([0, 0.75, 0.25, 0.5, 1.0])
p = p0.copy()
np.quantile(np.arange(100.), p, method="midpoint")
assert_array_equal(p, p0)
p0 = p0.tolist()
p = p.tolist()
np.quantile(np.arange(100.), p, method="midpoint")
assert_array_equal(p, p0)
@pytest.mark.parametrize("dtype", np.typecodes["AllInteger"])
def test_quantile_preserve_int_type(self, dtype):
res = np.quantile(np.array([1, 2], dtype=dtype), [0.5],
method="nearest")
assert res.dtype == dtype
@pytest.mark.parametrize("method", quantile_methods)
def test_q_zero_one(self, method):
# gh-24710
arr = [10, 11, 12]
quantile = np.quantile(arr, q=[0, 1], method=method)
assert_equal(quantile, np.array([10, 12]))
@pytest.mark.parametrize("method", quantile_methods)
def test_quantile_monotonic(self, method):
# GH 14685
# test that the return value of quantile is monotonic if p0 is ordered
# Also tests that the boundary values are not mishandled.
p0 = np.linspace(0, 1, 101)
quantile = np.quantile(np.array([0, 1, 1, 2, 2, 3, 3, 4, 5, 5, 1, 1, 9, 9, 9,
8, 8, 7]) * 0.1, p0, method=method)
assert_equal(np.sort(quantile), quantile)
# Also test one where the number of data points is clearly divisible:
quantile = np.quantile([0., 1., 2., 3.], p0, method=method)
assert_equal(np.sort(quantile), quantile)
@hypothesis.given(
arr=arrays(dtype=np.float64,
shape=st.integers(min_value=3, max_value=1000),
elements=st.floats(allow_infinity=False, allow_nan=False,
min_value=-1e300, max_value=1e300)))
def test_quantile_monotonic_hypo(self, arr):
p0 = np.arange(0, 1, 0.01)
quantile = np.quantile(arr, p0)
assert_equal(np.sort(quantile), quantile)
def test_quantile_scalar_nan(self):
a = np.array([[10., 7., 4.], [3., 2., 1.]])
a[0][1] = np.nan
actual = np.quantile(a, 0.5)
assert np.isscalar(actual)
assert_equal(np.quantile(a, 0.5), np.nan)
@pytest.mark.parametrize("weights", [False, True])
@pytest.mark.parametrize("method", quantile_methods)
@pytest.mark.parametrize("alpha", [0.2, 0.5, 0.9])
def test_quantile_identification_equation(self, weights, method, alpha):
# Test that the identification equation holds for the empirical
# CDF:
# E[V(x, Y)] = 0 <=> x is quantile
# with Y the random variable for which we have observed values and
# V(x, y) the canonical identification function for the quantile (at
# level alpha), see
# https://doi.org/10.48550/arXiv.0912.0902
if weights and method not in methods_supporting_weights:
pytest.skip("Weights not supported by method.")
rng = np.random.default_rng(4321)
# We choose n and alpha such that we cover 3 cases:
# - n * alpha is an integer
# - n * alpha is a float that gets rounded down
        # - n * alpha is a float that gets rounded up
n = 102 # n * alpha = 20.4, 51. , 91.8
y = rng.random(n)
w = rng.integers(low=0, high=10, size=n) if weights else None
x = np.quantile(y, alpha, method=method, weights=w)
if method in ("higher",):
# These methods do not fulfill the identification equation.
assert np.abs(np.mean(self.V(x, y, alpha))) > 0.1 / n
elif int(n * alpha) == n * alpha and not weights:
# We can expect exact results, up to machine precision.
assert_allclose(
np.average(self.V(x, y, alpha), weights=w), 0, atol=1e-14,
)
else:
# V = (x >= y) - alpha cannot sum to zero exactly but within
# "sample precision".
assert_allclose(np.average(self.V(x, y, alpha), weights=w), 0,
atol=1 / n / np.amin([alpha, 1 - alpha]))
@pytest.mark.parametrize("weights", [False, True])
@pytest.mark.parametrize("method", quantile_methods)
@pytest.mark.parametrize("alpha", [0.2, 0.5, 0.9])
def test_quantile_add_and_multiply_constant(self, weights, method, alpha):
# Test that
# 1. quantile(c + x) = c + quantile(x)
# 2. quantile(c * x) = c * quantile(x)
# 3. quantile(-x) = -quantile(x, 1 - alpha)
# On empirical quantiles, this equation does not hold exactly.
# Koenker (2005) "Quantile Regression" Chapter 2.2.3 calls these
# properties equivariance.
if weights and method not in methods_supporting_weights:
pytest.skip("Weights not supported by method.")
rng = np.random.default_rng(4321)
# We choose n and alpha such that we have cases for
# - n * alpha is an integer
# - n * alpha is a float that gets rounded down
        # - n * alpha is a float that gets rounded up
n = 102 # n * alpha = 20.4, 51. , 91.8
y = rng.random(n)
w = rng.integers(low=0, high=10, size=n) if weights else None
q = np.quantile(y, alpha, method=method, weights=w)
c = 13.5
# 1
assert_allclose(np.quantile(c + y, alpha, method=method, weights=w),
c + q)
# 2
assert_allclose(np.quantile(c * y, alpha, method=method, weights=w),
c * q)
# 3
if weights:
# From here on, we would need more methods to support weights.
return
q = -np.quantile(-y, 1 - alpha, method=method)
if method == "inverted_cdf":
if (
n * alpha == int(n * alpha)
or np.round(n * alpha) == int(n * alpha) + 1
):
assert_allclose(q, np.quantile(y, alpha, method="higher"))
else:
assert_allclose(q, np.quantile(y, alpha, method="lower"))
elif method == "closest_observation":
if n * alpha == int(n * alpha):
assert_allclose(q, np.quantile(y, alpha, method="higher"))
elif np.round(n * alpha) == int(n * alpha) + 1:
assert_allclose(
q, np.quantile(y, alpha + 1 / n, method="higher"))
else:
assert_allclose(q, np.quantile(y, alpha, method="lower"))
elif method == "interpolated_inverted_cdf":
assert_allclose(q, np.quantile(y, alpha + 1 / n, method=method))
elif method == "nearest":
if n * alpha == int(n * alpha):
assert_allclose(q, np.quantile(y, alpha + 1 / n, method=method))
else:
assert_allclose(q, np.quantile(y, alpha, method=method))
elif method == "lower":
assert_allclose(q, np.quantile(y, alpha, method="higher"))
elif method == "higher":
assert_allclose(q, np.quantile(y, alpha, method="lower"))
else:
# "averaged_inverted_cdf", "hazen", "weibull", "linear",
# "median_unbiased", "normal_unbiased", "midpoint"
assert_allclose(q, np.quantile(y, alpha, method=method))
@pytest.mark.parametrize("method", methods_supporting_weights)
@pytest.mark.parametrize("alpha", [0.2, 0.5, 0.9])
def test_quantile_constant_weights(self, method, alpha):
rng = np.random.default_rng(4321)
# We choose n and alpha such that we have cases for
# - n * alpha is an integer
# - n * alpha is a float that gets rounded down
        # - n * alpha is a float that gets rounded up
n = 102 # n * alpha = 20.4, 51. , 91.8
y = rng.random(n)
q = np.quantile(y, alpha, method=method)
w = np.ones_like(y)
qw = np.quantile(y, alpha, method=method, weights=w)
assert_allclose(qw, q)
w = 8.125 * np.ones_like(y)
qw = np.quantile(y, alpha, method=method, weights=w)
assert_allclose(qw, q)
@pytest.mark.parametrize("method", methods_supporting_weights)
@pytest.mark.parametrize("alpha", [0, 0.2, 0.5, 0.9, 1])
def test_quantile_with_integer_weights(self, method, alpha):
# Integer weights can be interpreted as repeated observations.
rng = np.random.default_rng(4321)
# We choose n and alpha such that we have cases for
# - n * alpha is an integer
# - n * alpha is a float that gets rounded down
        # - n * alpha is a float that gets rounded up
n = 102 # n * alpha = 20.4, 51. , 91.8
y = rng.random(n)
w = rng.integers(low=0, high=10, size=n, dtype=np.int32)
qw = np.quantile(y, alpha, method=method, weights=w)
q = np.quantile(np.repeat(y, w), alpha, method=method)
assert_allclose(qw, q)
@pytest.mark.parametrize("method", methods_supporting_weights)
def test_quantile_with_weights_and_axis(self, method):
rng = np.random.default_rng(4321)
# 1d weight and single alpha
y = rng.random((2, 10, 3))
w = np.abs(rng.random(10))
alpha = 0.5
q = np.quantile(y, alpha, weights=w, method=method, axis=1)
q_res = np.zeros(shape=(2, 3))
for i in range(2):
for j in range(3):
q_res[i, j] = np.quantile(
y[i, :, j], alpha, method=method, weights=w
)
assert_allclose(q, q_res)
# 1d weight and 1d alpha
alpha = [0, 0.2, 0.4, 0.6, 0.8, 1] # shape (6,)
q = np.quantile(y, alpha, weights=w, method=method, axis=1)
q_res = np.zeros(shape=(6, 2, 3))
for i in range(2):
for j in range(3):
q_res[:, i, j] = np.quantile(
y[i, :, j], alpha, method=method, weights=w
)
assert_allclose(q, q_res)
# 1d weight and 2d alpha
alpha = [[0, 0.2], [0.4, 0.6], [0.8, 1]] # shape (3, 2)
q = np.quantile(y, alpha, weights=w, method=method, axis=1)
q_res = q_res.reshape((3, 2, 2, 3))
assert_allclose(q, q_res)
# shape of weights equals shape of y
w = np.abs(rng.random((2, 10, 3)))
alpha = 0.5
q = np.quantile(y, alpha, weights=w, method=method, axis=1)
q_res = np.zeros(shape=(2, 3))
for i in range(2):
for j in range(3):
q_res[i, j] = np.quantile(
y[i, :, j], alpha, method=method, weights=w[i, :, j]
)
assert_allclose(q, q_res)
@pytest.mark.parametrize("method", methods_supporting_weights)
def test_quantile_weights_min_max(self, method):
# Test weighted quantile at 0 and 1 with leading and trailing zero
# weights.
w = [0, 0, 1, 2, 3, 0]
y = np.arange(6)
y_min = np.quantile(y, 0, weights=w, method="inverted_cdf")
y_max = np.quantile(y, 1, weights=w, method="inverted_cdf")
assert y_min == y[2] # == 2
assert y_max == y[4] # == 4
def test_quantile_weights_raises_negative_weights(self):
y = [1, 2]
w = [-0.5, 1]
with pytest.raises(ValueError, match="Weights must be non-negative"):
np.quantile(y, 0.5, weights=w, method="inverted_cdf")
@pytest.mark.parametrize(
"method",
sorted(set(quantile_methods) - set(methods_supporting_weights)),
)
def test_quantile_weights_raises_unsupported_methods(self, method):
y = [1, 2]
w = [0.5, 1]
msg = "Only method 'inverted_cdf' supports weights"
with pytest.raises(ValueError, match=msg):
np.quantile(y, 0.5, weights=w, method=method)
def test_weibull_fraction(self):
arr = [Fraction(0, 1), Fraction(1, 10)]
quantile = np.quantile(arr, [0, ], method='weibull')
assert_equal(quantile, np.array(Fraction(0, 1)))
quantile = np.quantile(arr, [Fraction(1, 2)], method='weibull')
assert_equal(quantile, np.array(Fraction(1, 20)))
def test_closest_observation(self):
# Round ties to nearest even order statistic (see #26656)
m = 'closest_observation'
q = 0.5
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
assert_equal(2, np.quantile(arr[0:3], q, method=m))
assert_equal(2, np.quantile(arr[0:4], q, method=m))
assert_equal(2, np.quantile(arr[0:5], q, method=m))
assert_equal(3, np.quantile(arr[0:6], q, method=m))
assert_equal(4, np.quantile(arr[0:7], q, method=m))
assert_equal(4, np.quantile(arr[0:8], q, method=m))
assert_equal(4, np.quantile(arr[0:9], q, method=m))
assert_equal(5, np.quantile(arr, q, method=m))
def test_quantile_gh_29003_Fraction(self):
r = np.quantile([1, 2], q=Fraction(1))
assert r == Fraction(2)
assert isinstance(r, Fraction)
r = np.quantile([1, 2], q=Fraction(.5))
assert r == Fraction(3, 2)
assert isinstance(r, Fraction)
def test_float16_gh_29003(self):
a = np.arange(50_001, dtype=np.float16)
q = .999
value = np.quantile(a, q)
assert value == q * 50_000
assert value.dtype == np.float16
| TestQuantile |
python | airbytehq__airbyte | airbyte-integrations/connectors/destination-snowflake-cortex/destination_snowflake_cortex/config.py | {
"start": 2290,
"end": 2373
} | class ____(VectorDBConfigModel):
indexing: SnowflakeCortexIndexingModel
| ConfigModel |
python | celery__celery | celery/worker/components.py | {
"start": 925,
"end": 1825
} | class ____(bootsteps.Step):
"""Timer bootstep."""
def create(self, w):
if w.use_eventloop:
# does not use dedicated timer thread.
w.timer = _Timer(max_interval=10.0)
else:
if not w.timer_cls:
# Default Timer is set by the pool, as for example, the
# eventlet pool needs a custom timer implementation.
w.timer_cls = w.pool_cls.Timer
w.timer = self.instantiate(w.timer_cls,
max_interval=w.timer_precision,
on_error=self.on_timer_error,
on_tick=self.on_timer_tick)
def on_timer_error(self, exc):
logger.error('Timer error: %r', exc, exc_info=True)
def on_timer_tick(self, delay):
logger.debug('Timer wake-up! Next ETA %s secs.', delay)
| Timer |
python | kamyu104__LeetCode-Solutions | Python/print-immutable-linked-list-in-reverse.py | {
"start": 1358,
"end": 1718
} | class ____(object):
def printLinkedListInReverse(self, head):
"""
:type head: ImmutableListNode
:rtype: None
"""
tail = None
while head != tail:
curr = head
while curr.getNext() != tail:
curr = curr.getNext()
curr.printValue()
tail = curr
| Solution3 |
python | getsentry__sentry | src/sentry/lang/dart/plugin.py | {
"start": 325,
"end": 1483
} | class ____(Plugin2):
"""
This plugin is responsible for Dart specific processing on events or attachments.
"""
def can_configure_for_project(self, project, **kwargs) -> bool:
return False
def get_event_preprocessors(self, data: Mapping[str, Any]) -> Sequence[EventPreprocessor]:
sdk_name = data.get("sdk", {}).get("name", "")
if sdk_name not in ("sentry.dart", "sentry.dart.flutter"):
return []
debug_ids = get_debug_meta_image_ids(dict(data))
if len(debug_ids) == 0:
return []
# Check if any stacktrace contains native platform frames.
# This indicates that the Flutter build is most likely obfuscated.
has_native_frames = _has_native_frames_in_stacktraces(data)
if not has_native_frames:
return []
return [deobfuscate_exception_type]
def _has_native_frames_in_stacktraces(data):
for stacktrace_info in find_stacktraces_in_data(data):
frames = stacktrace_info.get_frames()
if frames and any(frame.get("platform") == "native" for frame in frames):
return True
return False
| DartPlugin |
python | sqlalchemy__sqlalchemy | lib/sqlalchemy/dialects/mysql/types.py | {
"start": 11361,
"end": 12214
} | class ____(_IntegerType):
"""MySQL TINYINT type."""
__visit_name__ = "TINYINT"
def __init__(self, display_width: Optional[int] = None, **kw: Any):
"""Construct a TINYINT.
:param display_width: Optional, maximum display width for this number.
:param unsigned: a boolean, optional.
:param zerofill: Optional. If true, values will be stored as strings
          left-padded with zeros. Note that this does not affect the values
returned by the underlying database API, which continue to be
numeric.
"""
super().__init__(display_width=display_width, **kw)
def _compare_type_affinity(self, other: TypeEngine[Any]) -> bool:
return (
self._type_affinity is other._type_affinity
or other._type_affinity is sqltypes.Boolean
)
| TINYINT |
python | run-llama__llama_index | llama-index-core/llama_index/core/instrumentation/events/agent.py | {
"start": 371,
"end": 773
} | class ____(BaseEvent):
"""
AgentRunStepStartEvent.
Args:
task_id (str): Task ID.
step (Optional[Any]): Task step.
input (Optional[str]): Optional input.
"""
task_id: str
step: Optional[Any]
input: Optional[str]
@classmethod
def class_name(cls) -> str:
"""Class name."""
return "AgentRunStepStartEvent"
| AgentRunStepStartEvent |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/github_schema.py | {
"start": 1599294,
"end": 1599468
} | class ____(sgqlc.types.Union):
"""Entities that can be sponsored via GitHub Sponsors"""
__schema__ = github_schema
__types__ = (Organization, User)
| SponsorableItem |
python | ansible__ansible | test/lib/ansible_test/_internal/cli/parsers/host_config_parsers.py | {
"start": 1191,
"end": 1758
} | class ____(Parser):
"""Composite argument parser for the origin."""
def parse(self, state: ParserState) -> t.Any:
"""Parse the input from the given state and return the result."""
namespace = OriginConfig()
state.set_namespace(namespace)
parser = OriginKeyValueParser()
parser.parse(state)
return namespace
def document(self, state: DocumentationState) -> t.Optional[str]:
"""Generate and return documentation for this parser."""
return OriginKeyValueParser().document(state)
| OriginParser |
python | numpy__numpy | numpy/matrixlib/tests/test_defmatrix.py | {
"start": 12726,
"end": 14950
} | class ____:
a = np.array([[1], [2]])
m = matrix([[1], [2]])
def test_shape(self):
assert_equal(self.a.shape, (2, 1))
assert_equal(self.m.shape, (2, 1))
def test_numpy_ravel(self):
assert_equal(np.ravel(self.a).shape, (2,))
assert_equal(np.ravel(self.m).shape, (2,))
def test_member_ravel(self):
assert_equal(self.a.ravel().shape, (2,))
assert_equal(self.m.ravel().shape, (1, 2))
def test_member_flatten(self):
assert_equal(self.a.flatten().shape, (2,))
assert_equal(self.m.flatten().shape, (1, 2))
def test_numpy_ravel_order(self):
x = np.array([[1, 2, 3], [4, 5, 6]])
assert_equal(np.ravel(x), [1, 2, 3, 4, 5, 6])
assert_equal(np.ravel(x, order='F'), [1, 4, 2, 5, 3, 6])
assert_equal(np.ravel(x.T), [1, 4, 2, 5, 3, 6])
assert_equal(np.ravel(x.T, order='A'), [1, 2, 3, 4, 5, 6])
x = matrix([[1, 2, 3], [4, 5, 6]])
assert_equal(np.ravel(x), [1, 2, 3, 4, 5, 6])
assert_equal(np.ravel(x, order='F'), [1, 4, 2, 5, 3, 6])
assert_equal(np.ravel(x.T), [1, 4, 2, 5, 3, 6])
assert_equal(np.ravel(x.T, order='A'), [1, 2, 3, 4, 5, 6])
def test_matrix_ravel_order(self):
x = matrix([[1, 2, 3], [4, 5, 6]])
assert_equal(x.ravel(), [[1, 2, 3, 4, 5, 6]])
assert_equal(x.ravel(order='F'), [[1, 4, 2, 5, 3, 6]])
assert_equal(x.T.ravel(), [[1, 4, 2, 5, 3, 6]])
assert_equal(x.T.ravel(order='A'), [[1, 2, 3, 4, 5, 6]])
def test_array_memory_sharing(self):
assert_(np.may_share_memory(self.a, self.a.ravel()))
assert_(not np.may_share_memory(self.a, self.a.flatten()))
def test_matrix_memory_sharing(self):
assert_(np.may_share_memory(self.m, self.m.ravel()))
assert_(not np.may_share_memory(self.m, self.m.flatten()))
def test_expand_dims_matrix(self):
# matrices are always 2d - so expand_dims only makes sense when the
# type is changed away from matrix.
a = np.arange(10).reshape((2, 5)).view(np.matrix)
expanded = np.expand_dims(a, axis=1)
assert_equal(expanded.ndim, 3)
assert_(not isinstance(expanded, np.matrix))
| TestShape |
python | plotly__plotly.py | plotly/graph_objs/layout/_activeselection.py | {
"start": 235,
"end": 3117
} | class ____(_BaseLayoutHierarchyType):
_parent_path_str = "layout"
_path_str = "layout.activeselection"
_valid_props = {"fillcolor", "opacity"}
@property
def fillcolor(self):
"""
Sets the color filling the active selection's interior.
The 'fillcolor' property is a color and may be specified as:
- A hex string (e.g. '#ff0000')
- An rgb/rgba string (e.g. 'rgb(255,0,0)')
- An hsl/hsla string (e.g. 'hsl(0,100%,50%)')
- An hsv/hsva string (e.g. 'hsv(0,100%,100%)')
- A named CSS color: see https://plotly.com/python/css-colors/ for a list
Returns
-------
str
"""
return self["fillcolor"]
@fillcolor.setter
def fillcolor(self, val):
self["fillcolor"] = val
@property
def opacity(self):
"""
Sets the opacity of the active selection.
The 'opacity' property is a number and may be specified as:
- An int or float in the interval [0, 1]
Returns
-------
int|float
"""
return self["opacity"]
@opacity.setter
def opacity(self, val):
self["opacity"] = val
@property
def _prop_descriptions(self):
return """\
fillcolor
Sets the color filling the active selection's interior.
opacity
Sets the opacity of the active selection.
"""
def __init__(self, arg=None, fillcolor=None, opacity=None, **kwargs):
"""
Construct a new Activeselection object
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of
:class:`plotly.graph_objs.layout.Activeselection`
fillcolor
Sets the color filling the active selection's interior.
opacity
Sets the opacity of the active selection.
Returns
-------
Activeselection
"""
super().__init__("activeselection")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError("""\
The first argument to the plotly.graph_objs.layout.Activeselection
constructor must be a dict or
an instance of :class:`plotly.graph_objs.layout.Activeselection`""")
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
self._set_property("fillcolor", arg, fillcolor)
self._set_property("opacity", arg, opacity)
self._process_kwargs(**dict(arg, **kwargs))
self._skip_invalid = False
| Activeselection |
python | astropy__astropy | astropy/units/tests/test_quantity_ufuncs.py | {
"start": 5390,
"end": 12345
} | class ____:
"""
Test trigonometric functions
"""
@pytest.mark.parametrize(
"tc",
(
testcase(
f=np.sin,
q_in=(30.0 * u.degree,),
q_out=(0.5 * u.dimensionless_unscaled,),
),
testcase(
f=np.sin,
q_in=(np.array([0.0, np.pi / 4.0, np.pi / 2.0]) * u.radian,),
q_out=(np.array([0.0, 1.0 / np.sqrt(2.0), 1.0]) * u.one,),
),
testcase(
f=np.arcsin,
q_in=(np.sin(30.0 * u.degree),),
q_out=(np.radians(30.0) * u.radian,),
),
testcase(
f=np.arcsin,
q_in=(np.sin(np.array([0.0, np.pi / 4.0, np.pi / 2.0]) * u.radian),),
q_out=(np.array([0.0, np.pi / 4.0, np.pi / 2.0]) * u.radian,),
),
testcase(
f=np.cos,
q_in=(np.pi / 3.0 * u.radian,),
q_out=(0.5 * u.dimensionless_unscaled,),
),
testcase(
f=np.cos,
q_in=(np.array([0.0, np.pi / 4.0, np.pi / 2.0]) * u.radian,),
q_out=(np.array([1.0, 1.0 / np.sqrt(2.0), 0.0]) * u.one,),
),
testcase(
f=np.arccos,
q_in=(np.cos(np.pi / 3.0 * u.radian),),
q_out=(np.pi / 3.0 * u.radian,),
),
testcase(
f=np.arccos,
q_in=(np.cos(np.array([0.0, np.pi / 4.0, np.pi / 2.0]) * u.radian),),
q_out=(np.array([0.0, np.pi / 4.0, np.pi / 2.0]) * u.radian,),
),
testcase(
f=np.tan,
q_in=(np.pi / 3.0 * u.radian,),
q_out=(np.sqrt(3.0) * u.dimensionless_unscaled,),
),
testcase(
f=np.tan,
q_in=(np.array([0.0, 45.0, 135.0, 180.0]) * u.degree,),
q_out=(np.array([0.0, 1.0, -1.0, 0.0]) * u.dimensionless_unscaled,),
),
testcase(
f=np.arctan,
q_in=(np.tan(np.pi / 3.0 * u.radian),),
q_out=(np.pi / 3.0 * u.radian,),
),
testcase(
f=np.arctan,
q_in=(np.tan(np.array([10.0, 30.0, 70.0, 80.0]) * u.degree),),
q_out=(np.radians(np.array([10.0, 30.0, 70.0, 80.0]) * u.degree),),
),
testcase(
f=np.arctan2,
q_in=(np.array([10.0, 30.0, 70.0, 80.0]) * u.m, 2.0 * u.km),
q_out=(
np.arctan2(np.array([10.0, 30.0, 70.0, 80.0]), 2000.0) * u.radian,
),
),
testcase(
f=np.arctan2,
q_in=((np.array([10.0, 80.0]) * u.m / (2.0 * u.km)).to(u.one), 1.0),
q_out=(np.arctan2(np.array([10.0, 80.0]) / 2000.0, 1.0) * u.radian,),
),
testcase(f=np.deg2rad, q_in=(180.0 * u.degree,), q_out=(np.pi * u.radian,)),
testcase(f=np.radians, q_in=(180.0 * u.degree,), q_out=(np.pi * u.radian,)),
testcase(f=np.deg2rad, q_in=(3.0 * u.radian,), q_out=(3.0 * u.radian,)),
testcase(f=np.radians, q_in=(3.0 * u.radian,), q_out=(3.0 * u.radian,)),
testcase(f=np.rad2deg, q_in=(60.0 * u.degree,), q_out=(60.0 * u.degree,)),
testcase(f=np.degrees, q_in=(60.0 * u.degree,), q_out=(60.0 * u.degree,)),
testcase(f=np.rad2deg, q_in=(np.pi * u.radian,), q_out=(180.0 * u.degree,)),
testcase(f=np.degrees, q_in=(np.pi * u.radian,), q_out=(180.0 * u.degree,)),
),
)
def test_testcases(self, tc):
return test_testcase(tc)
@pytest.mark.parametrize(
"te",
(
testexc(f=np.deg2rad, q_in=(3.0 * u.m,), exc=TypeError, msg=None),
testexc(f=np.radians, q_in=(3.0 * u.m,), exc=TypeError, msg=None),
testexc(f=np.rad2deg, q_in=(3.0 * u.m), exc=TypeError, msg=None),
testexc(f=np.degrees, q_in=(3.0 * u.m), exc=TypeError, msg=None),
testexc(
f=np.sin,
q_in=(3.0 * u.m,),
exc=TypeError,
msg="Can only apply 'sin' function to quantities with angle units",
),
testexc(
f=np.arcsin,
q_in=(3.0 * u.m,),
exc=TypeError,
msg="Can only apply 'arcsin' function to dimensionless quantities",
),
testexc(
f=np.cos,
q_in=(3.0 * u.s,),
exc=TypeError,
msg="Can only apply 'cos' function to quantities with angle units",
),
testexc(
f=np.arccos,
q_in=(3.0 * u.s,),
exc=TypeError,
msg="Can only apply 'arccos' function to dimensionless quantities",
),
testexc(
f=np.tan,
q_in=(np.array([1, 2, 3]) * u.N,),
exc=TypeError,
msg="Can only apply 'tan' function to quantities with angle units",
),
testexc(
f=np.arctan,
q_in=(np.array([1, 2, 3]) * u.N,),
exc=TypeError,
msg="Can only apply 'arctan' function to dimensionless quantities",
),
testexc(
f=np.arctan2,
q_in=(np.array([1, 2, 3]) * u.N, 1.0 * u.s),
exc=u.UnitsError,
msg="compatible dimensions",
),
testexc(
f=np.arctan2,
q_in=(np.array([1, 2, 3]) * u.N, 1.0),
exc=u.UnitsError,
msg="dimensionless quantities when other arg",
),
),
)
def test_testexcs(self, te):
return test_testexc(te)
@pytest.mark.parametrize(
"tw",
(testwarn(f=np.arcsin, q_in=(27.0 * u.pc / (15 * u.kpc),), wfilter="error"),),
)
def test_testwarns(self, tw):
return test_testwarn(tw)
def test_sin_with_quantity_out(self):
# Test for a useful error message - see gh-16873.
# Non-quantity input should be treated as dimensionless and thus cannot
# be converted to radians.
out = u.Quantity(0)
with pytest.raises(
AttributeError,
match=(
"'NoneType' object has no attribute 'get_converter'"
".*\n.*treated as dimensionless"
),
):
np.sin(0.5, out=out)
# Except if we have the right equivalency in place.
with u.add_enabled_equivalencies(u.dimensionless_angles()):
result = np.sin(0.5, out=out)
assert result is out
assert result == np.sin(0.5) * u.dimensionless_unscaled
| TestQuantityTrigonometricFuncs |
python | astropy__astropy | astropy/modeling/powerlaws.py | {
"start": 15034,
"end": 17142
} | class ____(Fittable1DModel):
"""
One dimensional log parabola model (sometimes called curved power law).
Parameters
----------
amplitude : float
Model amplitude
x_0 : float
Reference point
alpha : float
Power law index
beta : float
Power law curvature
See Also
--------
PowerLaw1D, BrokenPowerLaw1D, ExponentialCutoffPowerLaw1D
Notes
-----
Model formula (with :math:`A` for ``amplitude`` and
:math:`\\alpha` for ``alpha`` and :math:`\\beta` for ``beta``):
.. math:: f(x) = A \\left(
\\frac{x}{x_{0}}\\right)^{- \\alpha - \\beta \\log{\\left (\\frac{x}{x_{0}}
\\right )}}
"""
amplitude = Parameter(default=1, description="Peak value of model")
x_0 = Parameter(default=1, description="Reference point")
alpha = Parameter(default=1, description="Power law index")
beta = Parameter(default=0, description="Power law curvature")
@staticmethod
def evaluate(x, amplitude, x_0, alpha, beta):
"""One dimensional log parabola model function."""
xx = x / x_0
exponent = -alpha - beta * np.log(xx)
return amplitude * xx**exponent
@staticmethod
def fit_deriv(x, amplitude, x_0, alpha, beta):
"""One dimensional log parabola derivative with respect to parameters."""
xx = x / x_0
log_xx = np.log(xx)
exponent = -alpha - beta * log_xx
d_amplitude = xx**exponent
d_beta = -amplitude * d_amplitude * log_xx**2
d_x_0 = amplitude * d_amplitude * (beta * log_xx / x_0 - exponent / x_0)
d_alpha = -amplitude * d_amplitude * log_xx
return [d_amplitude, d_x_0, d_alpha, d_beta]
@property
def input_units(self):
if self.x_0.input_unit is None:
return None
return {self.inputs[0]: self.x_0.input_unit}
def _parameter_units_for_data_units(self, inputs_unit, outputs_unit):
return {
"x_0": inputs_unit[self.inputs[0]],
"amplitude": outputs_unit[self.outputs[0]],
}
| LogParabola1D |
python | redis__redis-py | redis/commands/bf/__init__.py | {
"start": 2675,
"end": 3612
} | class ____(CMSCommands, AbstractBloom):
def __init__(self, client, **kwargs):
"""Create a new RedisBloom client."""
# Set the module commands' callbacks
_MODULE_CALLBACKS = {
CMS_INITBYDIM: bool_ok,
CMS_INITBYPROB: bool_ok,
# CMS_INCRBY: spaceHolder,
# CMS_QUERY: spaceHolder,
CMS_MERGE: bool_ok,
}
_RESP2_MODULE_CALLBACKS = {
CMS_INFO: CMSInfo,
}
_RESP3_MODULE_CALLBACKS = {}
self.client = client
self.commandmixin = CMSCommands
self.execute_command = client.execute_command
if get_protocol_version(self.client) in ["3", 3]:
_MODULE_CALLBACKS.update(_RESP3_MODULE_CALLBACKS)
else:
_MODULE_CALLBACKS.update(_RESP2_MODULE_CALLBACKS)
for k, v in _MODULE_CALLBACKS.items():
self.client.set_response_callback(k, v)
| CMSBloom |
python | pandas-dev__pandas | pandas/tests/series/methods/test_convert_dtypes.py | {
"start": 188,
"end": 11486
} | class ____:
@pytest.mark.parametrize(
"data, maindtype, expected_default, expected_other",
[
(
# data
[1, 2, 3],
# original dtype
np.dtype("int32"),
# default expected dtype
"Int32",
# exceptions on expected dtype
{("convert_integer", False): np.dtype("int32")},
),
(
[1, 2, 3],
np.dtype("int64"),
"Int64",
{("convert_integer", False): np.dtype("int64")},
),
(
["x", "y", "z"],
np.dtype("O"),
pd.StringDtype(),
{("convert_string", False): np.dtype("O")},
),
(
[True, False, np.nan],
np.dtype("O"),
pd.BooleanDtype(),
{("convert_boolean", False): np.dtype("O")},
),
(
["h", "i", np.nan],
np.dtype("O"),
pd.StringDtype(),
{("convert_string", False): np.dtype("O")},
),
( # GH32117
["h", "i", 1],
np.dtype("O"),
np.dtype("O"),
{},
),
(
[10, np.nan, 20],
np.dtype("float"),
"Int64",
{
("convert_integer", False, "convert_floating", True): "Float64",
("convert_integer", False, "convert_floating", False): np.dtype(
"float"
),
},
),
(
[np.nan, 100.5, 200],
np.dtype("float"),
"Float64",
{("convert_floating", False): np.dtype("float")},
),
(
[3, 4, 5],
"Int8",
"Int8",
{},
),
(
[[1, 2], [3, 4], [5]],
None,
np.dtype("O"),
{},
),
(
[4, 5, 6],
np.dtype("uint32"),
"UInt32",
{("convert_integer", False): np.dtype("uint32")},
),
(
[-10, 12, 13],
np.dtype("i1"),
"Int8",
{("convert_integer", False): np.dtype("i1")},
),
(
[1.2, 1.3],
np.dtype("float32"),
"Float32",
{("convert_floating", False): np.dtype("float32")},
),
(
[1, 2.0],
object,
"Int64",
{
("convert_integer", False): "Float64",
("convert_integer", False, "convert_floating", False): np.dtype(
"float"
),
("infer_objects", False): np.dtype("object"),
},
),
(
[1, 2.5],
object,
"Float64",
{
("convert_floating", False): np.dtype("float"),
("infer_objects", False): np.dtype("object"),
},
),
(["a", "b"], pd.CategoricalDtype(), pd.CategoricalDtype(), {}),
(
pd.to_datetime(["2020-01-14 10:00", "2020-01-15 11:11"]).as_unit("s"),
pd.DatetimeTZDtype(tz="UTC"),
pd.DatetimeTZDtype(tz="UTC"),
{},
),
(
pd.to_datetime(["2020-01-14 10:00", "2020-01-15 11:11"]).as_unit("ms"),
pd.DatetimeTZDtype(tz="UTC"),
pd.DatetimeTZDtype(tz="UTC"),
{},
),
(
pd.to_datetime(["2020-01-14 10:00", "2020-01-15 11:11"]).as_unit("us"),
pd.DatetimeTZDtype(tz="UTC"),
pd.DatetimeTZDtype(tz="UTC"),
{},
),
(
pd.to_datetime(["2020-01-14 10:00", "2020-01-15 11:11"]).as_unit("ns"),
pd.DatetimeTZDtype(tz="UTC"),
pd.DatetimeTZDtype(tz="UTC"),
{},
),
(
pd.to_datetime(["2020-01-14 10:00", "2020-01-15 11:11"]).as_unit("ns"),
"datetime64[ns]",
np.dtype("datetime64[ns]"),
{},
),
(
pd.to_datetime(["2020-01-14 10:00", "2020-01-15 11:11"]).as_unit("ns"),
object,
np.dtype("datetime64[ns]"),
{("infer_objects", False): np.dtype("object")},
),
(
pd.period_range("1/1/2011", freq="M", periods=3),
None,
pd.PeriodDtype("M"),
{},
),
(
pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)]),
None,
pd.IntervalDtype("int64", "right"),
{},
),
],
)
@pytest.mark.parametrize("params", product(*[(True, False)] * 5))
def test_convert_dtypes(
self,
data,
maindtype,
expected_default,
expected_other,
params,
using_infer_string,
using_nan_is_na,
):
if (
hasattr(data, "dtype")
and lib.is_np_dtype(data.dtype, "M")
and isinstance(maindtype, pd.DatetimeTZDtype)
):
# this astype is deprecated in favor of tz_localize
msg = "Cannot use .astype to convert from timezone-naive dtype"
with pytest.raises(TypeError, match=msg):
pd.Series(data, dtype=maindtype)
return
if maindtype is not None:
series = pd.Series(data, dtype=maindtype)
else:
series = pd.Series(data)
result = series.convert_dtypes(*params)
param_names = [
"infer_objects",
"convert_string",
"convert_integer",
"convert_boolean",
"convert_floating",
]
params_dict = dict(zip(param_names, params, strict=True))
expected_dtype = expected_default
for spec, dtype in expected_other.items():
if all(
params_dict[key] is val
for key, val in zip(spec[::2], spec[1::2], strict=False)
):
expected_dtype = dtype
if (
using_infer_string
and expected_default == "string"
and expected_dtype == object
and params[0]
and not params[1]
):
# If convert_string=False and infer_objects=True, we end up with the
# default string dtype instead of preserving object for string data
expected_dtype = pd.StringDtype(na_value=np.nan)
if (
not using_nan_is_na
and expected_dtype == "Int64"
and isinstance(data[1], float)
and np.isnan(data[1])
):
if params_dict["convert_floating"]:
expected_dtype = "Float64"
else:
expected_dtype = "float64"
expected = pd.Series(data, dtype=expected_dtype)
tm.assert_series_equal(result, expected)
# Test that it is a copy
copy = series.copy(deep=True)
if result.notna().sum() > 0 and result.dtype in ["interval[int64, right]"]:
with pytest.raises(TypeError, match="Invalid value"):
result[result.notna()] = np.nan
else:
result[result.notna()] = pd.NA
# Make sure original not changed
tm.assert_series_equal(series, copy)
def test_convert_string_dtype(self, nullable_string_dtype):
# https://github.com/pandas-dev/pandas/issues/31731 -> converting columns
# that are already string dtype
df = pd.DataFrame(
{"A": ["a", "b", pd.NA], "B": ["ä", "ö", "ü"]}, dtype=nullable_string_dtype
)
result = df.convert_dtypes()
tm.assert_frame_equal(df, result)
def test_convert_bool_dtype(self):
# GH32287
df = pd.DataFrame({"A": pd.array([True])})
tm.assert_frame_equal(df, df.convert_dtypes())
def test_convert_byte_string_dtype(self):
# GH-43183
byte_str = b"binary-string"
df = pd.DataFrame(data={"A": byte_str}, index=[0])
result = df.convert_dtypes()
expected = df
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
"infer_objects, dtype", [(True, "Int64"), (False, "object")]
)
def test_convert_dtype_object_with_na(self, infer_objects, dtype):
# GH#48791
ser = pd.Series([1, pd.NA])
result = ser.convert_dtypes(infer_objects=infer_objects)
expected = pd.Series([1, pd.NA], dtype=dtype)
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
"infer_objects, dtype", [(True, "Float64"), (False, "object")]
)
def test_convert_dtype_object_with_na_float(self, infer_objects, dtype):
# GH#48791
ser = pd.Series([1.5, pd.NA])
result = ser.convert_dtypes(infer_objects=infer_objects)
expected = pd.Series([1.5, pd.NA], dtype=dtype)
tm.assert_series_equal(result, expected)
def test_convert_dtypes_pyarrow_to_np_nullable(self):
# GH 53648
pytest.importorskip("pyarrow")
ser = pd.Series(range(2), dtype="int32[pyarrow]")
result = ser.convert_dtypes(dtype_backend="numpy_nullable")
expected = pd.Series(range(2), dtype="Int32")
tm.assert_series_equal(result, expected)
def test_convert_dtypes_pyarrow_null(self):
# GH#55346
pa = pytest.importorskip("pyarrow")
ser = pd.Series([None, None])
result = ser.convert_dtypes(dtype_backend="pyarrow")
expected = pd.Series([None, None], dtype=pd.ArrowDtype(pa.null()))
tm.assert_series_equal(result, expected)
@td.skip_if_no("pyarrow")
@pytest.mark.parametrize("categories", [None, ["S1", "S2"]])
def test_convert_empty_categorical_to_pyarrow(self, categories):
# GH#59934
ser = pd.Series(pd.Categorical([None] * 5, categories=categories))
converted = ser.convert_dtypes(dtype_backend="pyarrow")
expected = ser
tm.assert_series_equal(converted, expected)
def test_convert_dtype_pyarrow_timezone_preserve(self):
# GH 60237
pytest.importorskip("pyarrow")
ser = pd.Series(
pd.to_datetime(range(5), utc=True, unit="h"),
dtype="timestamp[ns, tz=UTC][pyarrow]",
)
result = ser.convert_dtypes(dtype_backend="pyarrow")
expected = ser.copy()
tm.assert_series_equal(result, expected)
def test_convert_dtypes_complex(self):
# GH 60129
ser = pd.Series([1.5 + 3.0j, 1.5 - 3.0j])
result = ser.convert_dtypes()
tm.assert_series_equal(result, ser)
| TestSeriesConvertDtypes |
python | huggingface__transformers | src/transformers/models/sew/modeling_sew.py | {
"start": 10065,
"end": 13425
} | class ____(nn.Module):
"""Multi-headed attention from 'Attention Is All You Need' paper"""
def __init__(
self,
embed_dim: int,
num_heads: int,
dropout: float = 0.0,
is_decoder: bool = False,
bias: bool = True,
is_causal: bool = False,
config: Optional[SEWConfig] = None,
):
super().__init__()
self.embed_dim = embed_dim
self.num_heads = num_heads
self.dropout = dropout
self.head_dim = embed_dim // num_heads
self.config = config
if (self.head_dim * num_heads) != self.embed_dim:
raise ValueError(
f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
f" and `num_heads`: {num_heads})."
)
self.scaling = self.head_dim**-0.5
self.is_decoder = is_decoder
self.is_causal = is_causal
self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
def forward(
self,
hidden_states: torch.Tensor,
key_value_states: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = False,
# TODO: we need a refactor so that the different attention modules can get their specific kwargs
# ATM, we have mixed things encoder, decoder, and encoder-decoder attn
**kwargs: Unpack[FlashAttentionKwargs],
) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]:
"""Input shape: Batch x Time x Channel"""
# if key_value_states are provided this layer is used as a cross-attention layer
# for the decoder
is_cross_attention = key_value_states is not None
# determine input shapes
bsz, tgt_len = hidden_states.shape[:-1]
src_len = key_value_states.shape[1] if is_cross_attention else tgt_len
q_input_shape = (bsz, tgt_len, -1, self.head_dim)
kv_input_shape = (bsz, src_len, -1, self.head_dim)
# get query proj
query_states = self.q_proj(hidden_states).view(*q_input_shape).transpose(1, 2)
current_states = key_value_states if is_cross_attention else hidden_states
key_states = self.k_proj(current_states).view(*kv_input_shape).transpose(1, 2)
value_states = self.v_proj(current_states).view(*kv_input_shape).transpose(1, 2)
attention_interface: Callable = eager_attention_forward
if self.config._attn_implementation != "eager":
attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
attn_output, attn_weights = attention_interface(
self,
query_states,
key_states,
value_states,
attention_mask,
dropout=0.0 if not self.training else self.dropout,
scaling=self.scaling,
output_attentions=output_attentions,
**kwargs,
)
attn_output = attn_output.reshape(bsz, tgt_len, -1).contiguous()
attn_output = self.out_proj(attn_output)
return attn_output, attn_weights, None
| SEWAttention |
python | pytorch__pytorch | test/distributed/checkpoint/_experimental/test_checkpoint_writer.py | {
"start": 1724,
"end": 7106
} | class ____(TestCase):
def setUp(self):
super().setUp()
# Create a temporary directory for test checkpoints
self.temp_dir = tempfile.mkdtemp()
# Create test objects
self.rank_info = RankInfo(
global_rank=0,
global_world_size=1,
)
self.options = CheckpointWriterConfig()
self.mock_barrier = MagicMock()
self.mock_hook = MockWriterHook()
# Create the checkpoint writer
self.writer = CheckpointWriter(
config=self.options,
rank_info=self.rank_info,
barrier=self.mock_barrier,
commit_hook=self.mock_hook,
)
# Create a test state dictionary
self.state_dict = {
"model": torch.nn.Linear(10, 5).state_dict(),
"optimizer": {"param_groups": [{"lr": 0.01}]},
"epoch": 5,
"step": 1000,
}
def tearDown(self):
# Clean up the temporary directory
shutil.rmtree(self.temp_dir)
def test_write_creates_checkpoint_file(self):
"""Test that write creates a checkpoint file with the correct content."""
# Set up the checkpoint path
checkpoint_path = os.path.join(self.temp_dir, "checkpoint")
# Call write
self.writer.write(checkpoint_path, self.state_dict)
# Verify that the checkpoint file exists
expected_file_path = os.path.join(
checkpoint_path, f"checkpoint_{self.rank_info.global_rank}.pt"
)
self.assertTrue(os.path.exists(expected_file_path))
# Load the checkpoint and verify its contents
loaded_state_dict = torch.load(expected_file_path)
self.assertIn("model", loaded_state_dict)
self.assertIn("optimizer", loaded_state_dict)
self.assertEqual(loaded_state_dict["epoch"], 5)
self.assertEqual(loaded_state_dict["step"], 1000)
def test_write_calls_barrier(self):
"""Test that write calls the barrier with the correct parameters."""
# Set up the checkpoint path
checkpoint_path = os.path.join(self.temp_dir, "checkpoint")
# Call write
self.writer.write(checkpoint_path, self.state_dict)
# Verify that the barrier was called
self.mock_barrier.execute_barrier.assert_called_once()
def test_write_calls_commit_hooks(self):
"""Test that write calls the commit hooks with the correct parameters."""
# Set up the checkpoint path
checkpoint_path = os.path.join(self.temp_dir, "checkpoint")
# Call write with additional kwargs
kwargs = {"extra": "value"}
self.writer.write(checkpoint_path, self.state_dict, **kwargs)
# Verify that the pre_commit hook was called with the correct parameters
self.assertTrue(self.mock_hook.pre_commit_called)
self.assertEqual(self.mock_hook.pre_commit_path, checkpoint_path)
self.assertEqual(
self.mock_hook.pre_commit_kwargs is not None
and self.mock_hook.pre_commit_kwargs["extra"],
"value",
)
# Verify that the commit hook was called with the correct parameters
self.assertTrue(self.mock_hook.commit_called)
self.assertEqual(self.mock_hook.commit_path, checkpoint_path)
self.assertEqual(
self.mock_hook.commit_kwargs is not None
and self.mock_hook.commit_kwargs["extra"],
"value",
)
def test_write_without_barrier(self):
"""Test that write works correctly without a barrier."""
# Create a writer without a barrier
writer = CheckpointWriter(
config=self.options,
rank_info=self.rank_info,
barrier=None,
commit_hook=self.mock_hook,
)
# Set up the checkpoint path
checkpoint_path = os.path.join(self.temp_dir, "checkpoint_no_barrier")
# Call write
writer.write(checkpoint_path, self.state_dict)
# Verify that the checkpoint file exists
expected_file_path = os.path.join(
checkpoint_path, f"checkpoint_{self.rank_info.global_rank}.pt"
)
self.assertTrue(os.path.exists(expected_file_path))
def test_write_without_commit_hook(self):
"""Test that write works correctly without a commit hook."""
# Create a writer without a commit hook
writer = CheckpointWriter(
config=self.options,
rank_info=self.rank_info,
barrier=self.mock_barrier,
commit_hook=None,
)
# Set up the checkpoint path
checkpoint_path = os.path.join(self.temp_dir, "checkpoint_no_hook")
# Call write
writer.write(checkpoint_path, self.state_dict)
# Verify that the checkpoint file exists
expected_file_path = os.path.join(
checkpoint_path, f"checkpoint_{self.rank_info.global_rank}.pt"
)
self.assertTrue(os.path.exists(expected_file_path))
# Verify that the barrier was still called
self.mock_barrier.execute_barrier.assert_called_once()
def test_close(self):
"""Test that close doesn't raise any exceptions."""
# This is a no-op in the base class, so just verify it doesn't raise
self.writer.close()
if __name__ == "__main__":
run_tests()
| TestCheckpointWriter |
python | more-itertools__more-itertools | tests/test_more.py | {
"start": 157661,
"end": 157868
} | class ____:
def __init__(self, value):
self.value = value
def __lt__(self, other):
return self.value < other.value
def __int__(self):
return int(self.value)
| BarelySortable |
python | ray-project__ray | python/ray/air/tests/test_integration_mlflow.py | {
"start": 11085,
"end": 15680
} | class ____(unittest.TestCase):
def setUp(self):
self.dirpath = tempfile.mkdtemp()
import mlflow
mlflow.set_tracking_uri("sqlite:///" + self.dirpath + "/mlflow.sqlite")
mlflow.create_experiment(name="existing_experiment")
self.mlflow_util = _MLflowLoggerUtil()
self.tracking_uri = mlflow.get_tracking_uri()
def tearDown(self):
shutil.rmtree(self.dirpath)
def test_experiment_id(self):
self.mlflow_util.setup_mlflow(tracking_uri=self.tracking_uri, experiment_id="0")
assert self.mlflow_util.experiment_id == "0"
def test_experiment_id_env_var(self):
os.environ["MLFLOW_EXPERIMENT_ID"] = "0"
self.mlflow_util.setup_mlflow(tracking_uri=self.tracking_uri)
assert self.mlflow_util.experiment_id == "0"
del os.environ["MLFLOW_EXPERIMENT_ID"]
def test_experiment_name(self):
self.mlflow_util.setup_mlflow(
tracking_uri=self.tracking_uri, experiment_name="existing_experiment"
)
assert self.mlflow_util.experiment_id == "1"
def test_run_started_with_correct_experiment(self):
experiment_name = "my_experiment_name"
# Make sure run is started under the correct experiment.
self.mlflow_util.setup_mlflow(
tracking_uri=self.tracking_uri, experiment_name=experiment_name
)
run = self.mlflow_util.start_run(set_active=True)
assert (
run.info.experiment_id
== self.mlflow_util._mlflow.get_experiment_by_name(
experiment_name
).experiment_id
)
self.mlflow_util.end_run()
def test_experiment_name_env_var(self):
os.environ["MLFLOW_EXPERIMENT_NAME"] = "existing_experiment"
self.mlflow_util.setup_mlflow(tracking_uri=self.tracking_uri)
assert self.mlflow_util.experiment_id == "1"
del os.environ["MLFLOW_EXPERIMENT_NAME"]
def test_id_precedence(self):
os.environ["MLFLOW_EXPERIMENT_ID"] = "0"
self.mlflow_util.setup_mlflow(
tracking_uri=self.tracking_uri, experiment_name="new_experiment"
)
assert self.mlflow_util.experiment_id == "0"
del os.environ["MLFLOW_EXPERIMENT_ID"]
def test_new_experiment(self):
self.mlflow_util.setup_mlflow(
tracking_uri=self.tracking_uri, experiment_name="new_experiment"
)
assert self.mlflow_util.experiment_id == "2"
def test_setup_fail(self):
with self.assertRaises(ValueError):
self.mlflow_util.setup_mlflow(
tracking_uri=self.tracking_uri,
experiment_name="new_experiment2",
create_experiment_if_not_exists=False,
)
def test_log_params(self):
params = {"a": "a", "x": {"y": "z"}}
self.mlflow_util.setup_mlflow(
tracking_uri=self.tracking_uri, experiment_name="new_experiment"
)
run = self.mlflow_util.start_run()
run_id = run.info.run_id
self.mlflow_util.log_params(params_to_log=params, run_id=run_id)
run = self.mlflow_util._mlflow.get_run(run_id=run_id)
assert run.data.params == flatten_dict(params)
params2 = {"b": "b"}
self.mlflow_util.start_run(set_active=True)
self.mlflow_util.log_params(params_to_log=params2, run_id=run_id)
run = self.mlflow_util._mlflow.get_run(run_id=run_id)
assert run.data.params == flatten_dict(
{
**params,
**params2,
}
)
self.mlflow_util.end_run()
def test_log_metrics(self):
metrics = {"a": 1.0, "x": {"y": 2.0}}
self.mlflow_util.setup_mlflow(
tracking_uri=self.tracking_uri, experiment_name="new_experiment"
)
run = self.mlflow_util.start_run()
run_id = run.info.run_id
self.mlflow_util.log_metrics(metrics_to_log=metrics, run_id=run_id, step=0)
run = self.mlflow_util._mlflow.get_run(run_id=run_id)
assert run.data.metrics == flatten_dict(metrics)
metrics2 = {"b": 1.0}
self.mlflow_util.start_run(set_active=True)
self.mlflow_util.log_metrics(metrics_to_log=metrics2, run_id=run_id, step=0)
assert self.mlflow_util._mlflow.get_run(
run_id=run_id
).data.metrics == flatten_dict(
{
**metrics,
**metrics2,
}
)
self.mlflow_util.end_run()
if __name__ == "__main__":
sys.exit(pytest.main(["-v", __file__]))
| MLflowUtilTest |
python | xlwings__xlwings | xlwings/constants.py | {
"start": 126219,
"end": 126466
} | class ____:
xlWBATChart = -4109 # from enum XlWBATemplate
xlWBATExcel4IntlMacroSheet = 4 # from enum XlWBATemplate
xlWBATExcel4MacroSheet = 3 # from enum XlWBATemplate
xlWBATWorksheet = -4167 # from enum XlWBATemplate
| WBATemplate |
python | ionelmc__pytest-benchmark | tests/test_storage.py | {
"start": 1263,
"end": 1571
} | class ____(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def __getitem__(self, item):
return self.__dict__[item]
def getoption(self, item, default=None):
try:
return self[item]
except KeyError:
return default
| Namespace |
python | eth-brownie__brownie | brownie/utils/docopt.py | {
"start": 13147,
"end": 13854
} | class ____(_BranchPattern):
def match(self, left: list[_Pattern], collected: list[_Pattern] | None = None) -> Any:
assert len(self.children) == 1
collected = [] if collected is None else collected
original_collected = collected
original_left = left
last_left = None
matched = True
times = 0
while matched:
matched, left, collected = self.children[0].match(left, collected)
times += 1 if matched else 0
if last_left == left:
break
last_left = left
if times >= 1:
return True, left, collected
return False, original_left, original_collected
| _OneOrMore |
python | HypothesisWorks__hypothesis | hypothesis-python/tests/django/toystore/forms.py | {
"start": 3456,
"end": 3907
} | class ____(ReprForm):
num_validators = (MinValueValidator(1), MaxValueValidator(5))
_int_one_to_five = forms.IntegerField(validators=num_validators)
_decimal_one_to_five = forms.FloatField(validators=num_validators)
_float_one_to_five = forms.FloatField(validators=num_validators)
len_validators = (MinLengthValidator(5), MaxLengthValidator(10))
_string_five_to_ten = forms.CharField(validators=len_validators)
| WithValidatorsForm |
python | wandb__wandb | wandb/vendor/pygments/lexers/perl.py | {
"start": 550,
"end": 10459
} | class ____(RegexLexer):
"""
For `Perl <http://www.perl.org>`_ source code.
"""
name = 'Perl'
aliases = ['perl', 'pl']
filenames = ['*.pl', '*.pm', '*.t']
mimetypes = ['text/x-perl', 'application/x-perl']
flags = re.DOTALL | re.MULTILINE
# TODO: give this to a perl guy who knows how to parse perl...
tokens = {
'balanced-regex': [
(r'/(\\\\|\\[^\\]|[^\\/])*/[egimosx]*', String.Regex, '#pop'),
(r'!(\\\\|\\[^\\]|[^\\!])*![egimosx]*', String.Regex, '#pop'),
(r'\\(\\\\|[^\\])*\\[egimosx]*', String.Regex, '#pop'),
(r'\{(\\\\|\\[^\\]|[^\\}])*\}[egimosx]*', String.Regex, '#pop'),
(r'<(\\\\|\\[^\\]|[^\\>])*>[egimosx]*', String.Regex, '#pop'),
(r'\[(\\\\|\\[^\\]|[^\\\]])*\][egimosx]*', String.Regex, '#pop'),
(r'\((\\\\|\\[^\\]|[^\\)])*\)[egimosx]*', String.Regex, '#pop'),
(r'@(\\\\|\\[^\\]|[^\\@])*@[egimosx]*', String.Regex, '#pop'),
(r'%(\\\\|\\[^\\]|[^\\%])*%[egimosx]*', String.Regex, '#pop'),
(r'\$(\\\\|\\[^\\]|[^\\$])*\$[egimosx]*', String.Regex, '#pop'),
],
'root': [
(r'\A\#!.+?$', Comment.Hashbang),
(r'\#.*?$', Comment.Single),
(r'^=[a-zA-Z0-9]+\s+.*?\n=cut', Comment.Multiline),
(words((
'case', 'continue', 'do', 'else', 'elsif', 'for', 'foreach',
'if', 'last', 'my', 'next', 'our', 'redo', 'reset', 'then',
'unless', 'until', 'while', 'print', 'new', 'BEGIN',
'CHECK', 'INIT', 'END', 'return'), suffix=r'\b'),
Keyword),
(r'(format)(\s+)(\w+)(\s*)(=)(\s*\n)',
bygroups(Keyword, Text, Name, Text, Punctuation, Text), 'format'),
(r'(eq|lt|gt|le|ge|ne|not|and|or|cmp)\b', Operator.Word),
# common delimiters
(r's/(\\\\|\\[^\\]|[^\\/])*/(\\\\|\\[^\\]|[^\\/])*/[egimosx]*',
String.Regex),
(r's!(\\\\|\\!|[^!])*!(\\\\|\\!|[^!])*![egimosx]*', String.Regex),
(r's\\(\\\\|[^\\])*\\(\\\\|[^\\])*\\[egimosx]*', String.Regex),
(r's@(\\\\|\\[^\\]|[^\\@])*@(\\\\|\\[^\\]|[^\\@])*@[egimosx]*',
String.Regex),
(r's%(\\\\|\\[^\\]|[^\\%])*%(\\\\|\\[^\\]|[^\\%])*%[egimosx]*',
String.Regex),
# balanced delimiters
(r's\{(\\\\|\\[^\\]|[^\\}])*\}\s*', String.Regex, 'balanced-regex'),
(r's<(\\\\|\\[^\\]|[^\\>])*>\s*', String.Regex, 'balanced-regex'),
(r's\[(\\\\|\\[^\\]|[^\\\]])*\]\s*', String.Regex,
'balanced-regex'),
(r's\((\\\\|\\[^\\]|[^\\)])*\)\s*', String.Regex,
'balanced-regex'),
(r'm?/(\\\\|\\[^\\]|[^\\/\n])*/[gcimosx]*', String.Regex),
(r'm(?=[/!\\{<\[(@%$])', String.Regex, 'balanced-regex'),
(r'((?<==~)|(?<=\())\s*/(\\\\|\\[^\\]|[^\\/])*/[gcimosx]*',
String.Regex),
(r'\s+', Text),
(words((
'abs', 'accept', 'alarm', 'atan2', 'bind', 'binmode', 'bless', 'caller', 'chdir',
'chmod', 'chomp', 'chop', 'chown', 'chr', 'chroot', 'close', 'closedir', 'connect',
'continue', 'cos', 'crypt', 'dbmclose', 'dbmopen', 'defined', 'delete', 'die',
'dump', 'each', 'endgrent', 'endhostent', 'endnetent', 'endprotoent',
'endpwent', 'endservent', 'eof', 'eval', 'exec', 'exists', 'exit', 'exp', 'fcntl',
'fileno', 'flock', 'fork', 'format', 'formline', 'getc', 'getgrent', 'getgrgid',
'getgrnam', 'gethostbyaddr', 'gethostbyname', 'gethostent', 'getlogin',
'getnetbyaddr', 'getnetbyname', 'getnetent', 'getpeername', 'getpgrp',
'getppid', 'getpriority', 'getprotobyname', 'getprotobynumber',
'getprotoent', 'getpwent', 'getpwnam', 'getpwuid', 'getservbyname',
'getservbyport', 'getservent', 'getsockname', 'getsockopt', 'glob', 'gmtime',
'goto', 'grep', 'hex', 'import', 'index', 'int', 'ioctl', 'join', 'keys', 'kill', 'last',
'lc', 'lcfirst', 'length', 'link', 'listen', 'local', 'localtime', 'log', 'lstat',
'map', 'mkdir', 'msgctl', 'msgget', 'msgrcv', 'msgsnd', 'my', 'next', 'oct', 'open',
'opendir', 'ord', 'our', 'pack', 'pipe', 'pop', 'pos', 'printf',
'prototype', 'push', 'quotemeta', 'rand', 'read', 'readdir',
'readline', 'readlink', 'readpipe', 'recv', 'redo', 'ref', 'rename',
'reverse', 'rewinddir', 'rindex', 'rmdir', 'scalar', 'seek', 'seekdir',
'select', 'semctl', 'semget', 'semop', 'send', 'setgrent', 'sethostent', 'setnetent',
'setpgrp', 'setpriority', 'setprotoent', 'setpwent', 'setservent',
'setsockopt', 'shift', 'shmctl', 'shmget', 'shmread', 'shmwrite', 'shutdown',
'sin', 'sleep', 'socket', 'socketpair', 'sort', 'splice', 'split', 'sprintf', 'sqrt',
'srand', 'stat', 'study', 'substr', 'symlink', 'syscall', 'sysopen', 'sysread',
'sysseek', 'system', 'syswrite', 'tell', 'telldir', 'tie', 'tied', 'time', 'times', 'tr',
'truncate', 'uc', 'ucfirst', 'umask', 'undef', 'unlink', 'unpack', 'unshift', 'untie',
'utime', 'values', 'vec', 'wait', 'waitpid', 'wantarray', 'warn', 'write'), suffix=r'\b'),
Name.Builtin),
(r'((__(DATA|DIE|WARN)__)|(STD(IN|OUT|ERR)))\b', Name.Builtin.Pseudo),
(r'(<<)([\'"]?)([a-zA-Z_]\w*)(\2;?\n.*?\n)(\3)(\n)',
bygroups(String, String, String.Delimiter, String, String.Delimiter, Text)),
(r'__END__', Comment.Preproc, 'end-part'),
(r'\$\^[ADEFHILMOPSTWX]', Name.Variable.Global),
(r"\$[\\\"\[\]'&`+*.,;=%~?@$!<>(^|/-](?!\w)", Name.Variable.Global),
(r'[$@%#]+', Name.Variable, 'varname'),
(r'0_?[0-7]+(_[0-7]+)*', Number.Oct),
(r'0x[0-9A-Fa-f]+(_[0-9A-Fa-f]+)*', Number.Hex),
(r'0b[01]+(_[01]+)*', Number.Bin),
(r'(?i)(\d*(_\d*)*\.\d+(_\d*)*|\d+(_\d*)*\.\d+(_\d*)*)(e[+-]?\d+)?',
Number.Float),
(r'(?i)\d+(_\d*)*e[+-]?\d+(_\d*)*', Number.Float),
(r'\d+(_\d+)*', Number.Integer),
(r"'(\\\\|\\[^\\]|[^'\\])*'", String),
(r'"(\\\\|\\[^\\]|[^"\\])*"', String),
(r'`(\\\\|\\[^\\]|[^`\\])*`', String.Backtick),
(r'<([^\s>]+)>', String.Regex),
(r'(q|qq|qw|qr|qx)\{', String.Other, 'cb-string'),
(r'(q|qq|qw|qr|qx)\(', String.Other, 'rb-string'),
(r'(q|qq|qw|qr|qx)\[', String.Other, 'sb-string'),
(r'(q|qq|qw|qr|qx)\<', String.Other, 'lt-string'),
(r'(q|qq|qw|qr|qx)([\W_])(.|\n)*?\2', String.Other),
(r'(package)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)',
bygroups(Keyword, Text, Name.Namespace)),
(r'(use|require|no)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)',
bygroups(Keyword, Text, Name.Namespace)),
(r'(sub)(\s+)', bygroups(Keyword, Text), 'funcname'),
(words((
'no', 'package', 'require', 'use'), suffix=r'\b'),
Keyword),
(r'(\[\]|\*\*|::|<<|>>|>=|<=>|<=|={3}|!=|=~|'
r'!~|&&?|\|\||\.{1,3})', Operator),
(r'[-+/*%=<>&^|!\\~]=?', Operator),
(r'[()\[\]:;,<>/?{}]', Punctuation), # yes, there's no shortage
# of punctuation in Perl!
(r'(?=\w)', Name, 'name'),
],
'format': [
(r'\.\n', String.Interpol, '#pop'),
(r'[^\n]*\n', String.Interpol),
],
'varname': [
(r'\s+', Text),
(r'\{', Punctuation, '#pop'), # hash syntax?
(r'\)|,', Punctuation, '#pop'), # argument specifier
(r'\w+::', Name.Namespace),
(r'[\w:]+', Name.Variable, '#pop'),
],
'name': [
(r'[a-zA-Z_]\w*(::[a-zA-Z_]\w*)*(::)?(?=\s*->)', Name.Namespace, '#pop'),
(r'[a-zA-Z_]\w*(::[a-zA-Z_]\w*)*::', Name.Namespace, '#pop'),
(r'[\w:]+', Name, '#pop'),
(r'[A-Z_]+(?=\W)', Name.Constant, '#pop'),
(r'(?=\W)', Text, '#pop'),
],
'funcname': [
(r'[a-zA-Z_]\w*[!?]?', Name.Function),
(r'\s+', Text),
# argument declaration
(r'(\([$@%]*\))(\s*)', bygroups(Punctuation, Text)),
(r';', Punctuation, '#pop'),
(r'.*?\{', Punctuation, '#pop'),
],
'cb-string': [
(r'\\[{}\\]', String.Other),
(r'\\', String.Other),
(r'\{', String.Other, 'cb-string'),
(r'\}', String.Other, '#pop'),
(r'[^{}\\]+', String.Other)
],
'rb-string': [
(r'\\[()\\]', String.Other),
(r'\\', String.Other),
(r'\(', String.Other, 'rb-string'),
(r'\)', String.Other, '#pop'),
(r'[^()]+', String.Other)
],
'sb-string': [
(r'\\[\[\]\\]', String.Other),
(r'\\', String.Other),
(r'\[', String.Other, 'sb-string'),
(r'\]', String.Other, '#pop'),
(r'[^\[\]]+', String.Other)
],
'lt-string': [
(r'\\[<>\\]', String.Other),
(r'\\', String.Other),
(r'\<', String.Other, 'lt-string'),
(r'\>', String.Other, '#pop'),
(r'[^<>]+', String.Other)
],
'end-part': [
(r'.+', Comment.Preproc, '#pop')
]
}
def analyse_text(text):
if shebang_matches(text, r'perl'):
return True
        if re.search(r'(?:my|our)\s+[$@%(]', text):
return 0.9
| PerlLexer |
python | google__pytype | pytype/tests/test_flax_overlay.py | {
"start": 1935,
"end": 9821
} | class ____(test_base.BaseTest):
"""Test dataclass construction in flax.linen.Module subclasses."""
def _setup_linen_pyi(self, d):
d.create_file(
"flax/linen/__init__.pyi",
"""
from .module import Module
""",
)
d.create_file(
"flax/linen/module.pyi",
"""
class Module:
def make_rng(self, x: str) -> None: ...
""",
)
def test_constructor(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
ty = self.Infer(
"""
from flax import linen as nn
class Foo(nn.Module):
x: bool
y: int = 10
""",
pythonpath=[d.path],
module_name="foo",
)
self.assertTypesMatchPytd(
ty,
"""
from flax import linen as nn
from typing import Dict, TypeVar
_TFoo = TypeVar('_TFoo', bound=Foo)
@dataclasses.dataclass
class Foo(nn.module.Module):
x: bool
y: int = ...
def __init__(self, x: bool, y: int = ..., name: str = ..., parent = ...) -> None: ...
def replace(self: _TFoo, **kwargs) -> _TFoo: ...
""",
)
def test_unexported_constructor(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
ty = self.Infer(
"""
from flax.linen import module
class Foo(module.Module):
x: bool
y: int = 10
""",
pythonpath=[d.path],
module_name="foo",
)
self.assertTypesMatchPytd(
ty,
"""
from flax.linen import module
from typing import Dict, TypeVar
_TFoo = TypeVar('_TFoo', bound=Foo)
@dataclasses.dataclass
class Foo(module.Module):
x: bool
y: int = ...
def __init__(self, x: bool, y: int = ..., name: str = ..., parent = ...) -> None: ...
def replace(self: _TFoo, **kwargs) -> _TFoo: ...
""",
)
def test_relative_import_from_package_module(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
ty = self.Infer(
"""
from .module import Module
class Foo(Module):
x: bool
y: int = 10
""",
pythonpath=[d.path],
module_name="flax.linen.foo",
)
self.assertTypesMatchPytd(
ty,
"""
from typing import Dict, Type, TypeVar
import flax.linen.module
Module: Type[flax.linen.module.Module]
_TFoo = TypeVar('_TFoo', bound=Foo)
@dataclasses.dataclass
class Foo(flax.linen.module.Module):
x: bool
y: int = ...
def __init__(self, x: bool, y: int = ..., name: str = ..., parent = ...) -> None: ...
def replace(self: _TFoo, **kwargs) -> _TFoo: ...
""",
)
def test_parent_import_from_package_module(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
ty = self.Infer(
"""
from .. import linen
class Foo(linen.Module):
x: bool
y: int = 10
""",
pythonpath=[d.path],
module_name="flax.linen.foo",
)
self.assertTypesMatchPytd(
ty,
"""
from flax import linen
from typing import Dict, TypeVar
_TFoo = TypeVar('_TFoo', bound=Foo)
@dataclasses.dataclass
class Foo(linen.module.Module):
x: bool
y: int = ...
def __init__(self, x: bool, y: int = ..., name: str = ..., parent = ...) -> None: ...
def replace(self: _TFoo, **kwargs) -> _TFoo: ...
""",
)
def test_self_type(self):
"""Match self: f.l.module.Module even if imported as f.l.Module."""
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
self.Check(
"""
from flax import linen
class Foo(linen.Module):
x: int
a = Foo(10)
b = a.make_rng("a") # called on base class
""",
pythonpath=[d.path],
)
def test_invalid_field(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
errors = self.CheckWithErrors(
"""
from flax import linen as nn
class Foo(nn.Module): # invalid-annotation[e]
x: bool
name: str
""",
pythonpath=[d.path],
)
self.assertErrorRegexes(errors, {"e": r"name.*implicitly"})
def test_setup(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
self.Check(
"""
from flax import linen
class Foo(linen.Module):
x: int
def setup(self):
self.y = 10
a = Foo(10)
b = a.y
""",
pythonpath=[d.path],
)
def test_reingest(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
foo_ty = self.Infer(
"""
from flax import linen
class Foo(linen.Module):
pass
""",
pythonpath=[d.path],
)
d.create_file("foo.pyi", pytd_utils.Print(foo_ty))
ty = self.Infer(
"""
import foo
class Bar(foo.Foo):
x: int
""",
pythonpath=[d.path],
)
self.assertTypesMatchPytd(
ty,
"""
import dataclasses
import foo
from typing import Any, Dict, TypeVar
_TBar = TypeVar('_TBar', bound=Bar)
@dataclasses.dataclass
class Bar(foo.Foo):
x: int
def __init__(
self, x: int, name: str = ..., parent: Any = ...) -> None: ...
def replace(self: _TBar, **kwargs) -> _TBar: ...
""",
)
def test_reingest_and_subclass(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
foo_ty = self.Infer(
"""
from flax import linen
class Foo(linen.Module):
pass
""",
pythonpath=[d.path],
)
d.create_file("foo.pyi", pytd_utils.Print(foo_ty))
ty = self.Infer(
"""
import foo
class Bar(foo.Foo):
pass
class Baz(Bar):
x: int
""",
pythonpath=[d.path],
)
self.assertTypesMatchPytd(
ty,
"""
import dataclasses
import foo
from typing import Any, Dict, TypeVar
_TBar = TypeVar('_TBar', bound=Bar)
@dataclasses.dataclass
class Bar(foo.Foo):
def __init__(self, name: str = ..., parent: Any = ...) -> None: ...
def replace(self: _TBar, **kwargs) -> _TBar: ...
_TBaz = TypeVar('_TBaz', bound=Baz)
@dataclasses.dataclass
class Baz(Bar):
x: int
def __init__(
self, x: int, name: str = ..., parent: Any = ...) -> None: ...
def replace(self: _TBaz, **kwargs) -> _TBaz: ...
""",
)
@test_utils.skipBeforePy((3, 10), "KW_ONLY is new in 3.10")
def test_kwonly(self):
with test_utils.Tempdir() as d:
self._setup_linen_pyi(d)
ty = self.Infer(
"""
import dataclasses
from flax import linen as nn
class C(nn.Module):
_: dataclasses.KW_ONLY
x: int = 0
y: str
""",
pythonpath=[d.path],
)
self.assertTypesMatchPytd(
ty,
"""
import dataclasses
from flax import linen as nn
from typing import Any, TypeVar
_TC = TypeVar('_TC', bound=C)
@dataclasses.dataclass
class C(nn.module.Module):
x: int = ...
y: str
_: dataclasses.KW_ONLY
def __init__(self, *, x: int = ..., y: str, name: str = ..., parent: Any = ...) -> None: ...
def replace(self: _TC, **kwargs) -> _TC: ...
""",
)
if __name__ == "__main__":
test_base.main()
| TestLinenModule |
python | astropy__astropy | astropy/io/ascii/latex.py | {
"start": 3614,
"end": 5425
} | class ____(core.BaseHeader):
"""Class to read the header of Latex Tables."""
header_start = r"\begin{tabular}"
splitter_class = LatexSplitter
def start_line(self, lines):
line = find_latex_line(lines, self.header_start)
if line is not None:
return line + 1
else:
return None
def _get_units(self) -> dict[str, str]:
units = {}
col_units = [col.info.unit for col in self.cols]
for name, unit in zip(self.colnames, col_units):
if unit:
try:
units[name] = unit.to_string(format="latex_inline")
except AttributeError:
units[name] = unit
return units
def write(self, lines):
if "col_align" not in self.latex:
self.latex["col_align"] = len(self.cols) * "c"
if "tablealign" in self.latex:
align = "[" + self.latex["tablealign"] + "]"
else:
align = ""
if self.latex["tabletype"] is not None:
lines.append(r"\begin{" + self.latex["tabletype"] + r"}" + align)
add_dictval_to_list(self.latex, "preamble", lines)
if "caption" in self.latex:
lines.append(r"\caption{" + self.latex["caption"] + "}")
lines.append(self.header_start + r"{" + self.latex["col_align"] + r"}")
add_dictval_to_list(self.latex, "header_start", lines)
lines.append(self.splitter.join(self.colnames))
units = self._get_units()
if "units" in self.latex:
units.update(self.latex["units"])
if units:
lines.append(
self.splitter.join([units.get(name, " ") for name in self.colnames])
)
add_dictval_to_list(self.latex, "header_end", lines)
| LatexHeader |
python | dagster-io__dagster | helm/dagster/schema/schema/charts/utils/kubernetes.py | {
"start": 231,
"end": 493
} | class ____(RootModel[dict[str, str]]):
model_config = {
"json_schema_extra": {
"$ref": create_definition_ref(
"io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta/properties/annotations"
)
}
}
| Annotations |
python | realpython__materials | python-contact-book/source_code_step_5/rpcontacts/views.py | {
"start": 389,
"end": 1953
} | class ____(QMainWindow):
"""Main Window."""
def __init__(self, parent=None):
"""Initializer."""
super().__init__(parent)
self.setWindowTitle("RP Contacts")
self.resize(550, 250)
self.centralWidget = QWidget()
self.setCentralWidget(self.centralWidget)
self.layout = QHBoxLayout()
self.centralWidget.setLayout(self.layout)
self.contactsModel = ContactsModel()
self.setupUI()
def setupUI(self):
"""Setup the main window's GUI."""
# Create the table view widget
self.table = QTableView()
self.table.setModel(self.contactsModel.model)
self.table.setSelectionBehavior(QAbstractItemView.SelectRows)
self.table.resizeColumnsToContents()
# Create buttons
self.addButton = QPushButton("Add...")
self.addButton.clicked.connect(self.openAddDialog)
self.deleteButton = QPushButton("Delete")
self.clearAllButton = QPushButton("Clear All")
# Lay out the GUI
layout = QVBoxLayout()
layout.addWidget(self.addButton)
layout.addWidget(self.deleteButton)
layout.addStretch()
layout.addWidget(self.clearAllButton)
self.layout.addWidget(self.table)
self.layout.addLayout(layout)
def openAddDialog(self):
"""Open the Add Contact dialog."""
dialog = AddDialog(self)
if dialog.exec() == QDialog.Accepted:
self.contactsModel.addContact(dialog.data)
self.table.resizeColumnsToContents()
| Window |
python | ansible__ansible | test/lib/ansible_test/_internal/host_configs.py | {
"start": 13948,
"end": 14082
} | class ____(HostConfig, metaclass=abc.ABCMeta):
"""Base class for network host configuration."""
@dataclasses.dataclass
| NetworkConfig |
python | getsentry__sentry | src/sentry/data_export/processors/discover.py | {
"start": 604,
"end": 701
} | class ____(Protocol):
def __call__(self, offset: int, limit: int) -> dict[str, Any]: ...
| DataFn |