language | repo | path | class_span | source | target
|---|---|---|---|---|---|
python | bokeh__bokeh | src/bokeh/server/callbacks.py | {
"start": 2505,
"end": 2866
} | class ____(SessionCallback):
''' Represent a callback to execute on the next ``IOLoop`` "tick".
'''
def __init__(self, callback: Callback, *, callback_id: ID) -> None:
'''
Args:
callback (callable) :
id (ID) :
'''
super().__init__(callback=callback, callback_id=callback_id)
| NextTickCallback |
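Bokeh's `NextTickCallback` above is a thin named wrapper around "run this on the next `IOLoop` tick". As a rough illustration of that scheduling contract, here is a toy loop with hypothetical names (not Bokeh's or Tornado's implementation): callbacks scheduled during a tick run on the *following* tick, never the current one.

```python
from collections import deque

class ToyLoop:
    """Toy event loop: callbacks added during a tick run on the *next* tick."""
    def __init__(self):
        self._next_tick = deque()

    def add_next_tick_callback(self, callback):
        self._next_tick.append(callback)

    def run_one_tick(self):
        # Snapshot the queue so callbacks scheduled now are deferred to a later tick.
        pending, self._next_tick = self._next_tick, deque()
        for cb in pending:
            cb()

loop = ToyLoop()
order = []
loop.add_next_tick_callback(lambda: (order.append("a"),
                                     loop.add_next_tick_callback(lambda: order.append("b"))))
loop.run_one_tick()   # runs "a"; "b" was scheduled mid-tick, so it waits
loop.run_one_tick()   # runs "b"
```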
python | Farama-Foundation__Gymnasium | gymnasium/envs/mujoco/pusher_v5.py | {
"start": 240,
"end": 15656
} | class ____(MujocoEnv, utils.EzPickle):
r"""
## Description
"Pusher" is a multi-jointed robot arm that is very similar to a human arm.
The goal is to move a target cylinder (called *object*) to a goal position using the robot's end effector (called *fingertip*).
The robot consists of shoulder, elbow, forearm and wrist joints.
## Action Space
```{figure} action_space_figures/pusher.png
:name: pusher
```
The action space is a `Box(-2, 2, (7,), float32)`. An action represents the torques applied at the seven hinge joints.
| Num | Action | Control Min | Control Max | Name (in corresponding XML file) | Joint | Type (Unit) |
|-----|--------------------------------------------------------------------|-------------|-------------|----------------------------------|-------|--------------|
| 0   | Rotation of the shoulder panning joint                             | -2          | 2           | r_shoulder_pan_joint             | hinge | torque (N m) |
| 1   | Rotation of the shoulder lifting joint                             | -2          | 2           | r_shoulder_lift_joint            | hinge | torque (N m) |
| 2   | Rotation of the shoulder rolling joint                             | -2          | 2           | r_upper_arm_roll_joint           | hinge | torque (N m) |
| 3   | Rotation of the hinge joint that flexes the elbow                  | -2          | 2           | r_elbow_flex_joint               | hinge | torque (N m) |
| 4   | Rotation of the hinge joint that rolls the forearm                 | -2          | 2           | r_forearm_roll_joint             | hinge | torque (N m) |
| 5   | Rotation of the wrist flexing joint                                | -2          | 2           | r_wrist_flex_joint               | hinge | torque (N m) |
| 6   | Rotation of the wrist rolling joint                                | -2          | 2           | r_wrist_roll_joint               | hinge | torque (N m) |
## Observation Space
The observation space consists of the following parts (in order):
- *qpos (7 elements):* Position values of the robot's body parts.
- *qvel (7 elements):* The velocities of these individual body parts (their derivatives).
- *xpos (3 elements):* The coordinates of the fingertip of the pusher.
- *xpos (3 elements):* The coordinates of the object to be moved.
- *xpos (3 elements):* The coordinates of the goal position.
The observation space is a `Box(-Inf, Inf, (23,), float64)` where the elements are as follows:
| Num | Observation | Min | Max | Name (in corresponding XML file) | Joint | Type (Unit) |
| --- | -------------------------------------------------------- | ---- | --- | -------------------------------- | -------- | ------------------------ |
| 0   | Rotation of the shoulder panning joint                        | -Inf | Inf | r_shoulder_pan_joint             | hinge    | angle (rad)              |
| 1   | Rotation of the shoulder lifting joint                        | -Inf | Inf | r_shoulder_lift_joint            | hinge    | angle (rad)              |
| 2   | Rotation of the shoulder rolling joint                        | -Inf | Inf | r_upper_arm_roll_joint           | hinge    | angle (rad)              |
| 3   | Rotation of the hinge joint that flexes the elbow             | -Inf | Inf | r_elbow_flex_joint               | hinge    | angle (rad)              |
| 4   | Rotation of the hinge joint that rolls the forearm            | -Inf | Inf | r_forearm_roll_joint             | hinge    | angle (rad)              |
| 5   | Rotation of the wrist flexing joint                           | -Inf | Inf | r_wrist_flex_joint               | hinge    | angle (rad)              |
| 6   | Rotation of the wrist rolling joint                           | -Inf | Inf | r_wrist_roll_joint               | hinge    | angle (rad)              |
| 7   | Rotational velocity of the shoulder panning joint             | -Inf | Inf | r_shoulder_pan_joint             | hinge    | angular velocity (rad/s) |
| 8   | Rotational velocity of the shoulder lifting joint             | -Inf | Inf | r_shoulder_lift_joint            | hinge    | angular velocity (rad/s) |
| 9   | Rotational velocity of the shoulder rolling joint             | -Inf | Inf | r_upper_arm_roll_joint           | hinge    | angular velocity (rad/s) |
| 10  | Rotational velocity of the hinge joint that flexes the elbow  | -Inf | Inf | r_elbow_flex_joint               | hinge    | angular velocity (rad/s) |
| 11  | Rotational velocity of the hinge joint that rolls the forearm | -Inf | Inf | r_forearm_roll_joint             | hinge    | angular velocity (rad/s) |
| 12  | Rotational velocity of the wrist flexing joint                | -Inf | Inf | r_wrist_flex_joint               | hinge    | angular velocity (rad/s) |
| 13  | Rotational velocity of the wrist rolling joint                | -Inf | Inf | r_wrist_roll_joint               | hinge    | angular velocity (rad/s) |
| 14 | x-coordinate of the fingertip of the pusher | -Inf | Inf | tips_arm | slide | position (m) |
| 15 | y-coordinate of the fingertip of the pusher | -Inf | Inf | tips_arm | slide | position (m) |
| 16 | z-coordinate of the fingertip of the pusher | -Inf | Inf | tips_arm | slide | position (m) |
| 17 | x-coordinate of the object to be moved | -Inf | Inf | object (obj_slidex) | slide | position (m) |
| 18 | y-coordinate of the object to be moved | -Inf | Inf | object (obj_slidey) | slide | position (m) |
| 19 | z-coordinate of the object to be moved | -Inf | Inf | object | cylinder | position (m) |
| 20 | x-coordinate of the goal position of the object | -Inf | Inf | goal (goal_slidex) | slide | position (m) |
| 21 | y-coordinate of the goal position of the object | -Inf | Inf | goal (goal_slidey) | slide | position (m) |
| 22 | z-coordinate of the goal position of the object | -Inf | Inf | goal | sphere | position (m) |
To understand the state space, an analogy can be drawn to a human arm, where the words "flex" and "roll" have the same meaning as in human joints.
## Rewards
The total reward is: ***reward*** *=* *reward_dist + reward_ctrl + reward_near*.
- *reward_dist*:
This reward is a measure of how far the object is from the target goal position,
with a more negative value assigned if the object is further away from the target.
It is $-w_{dist} \|(P_{object} - P_{target})\|_2$,
where $w_{dist}$ is the `reward_dist_weight` (default is $1$).
- *reward_ctrl*:
A negative reward to penalize the pusher for taking actions that are too large.
It is measured as the negative squared Euclidean norm of the action, i.e. as $-w_{control} \|action\|_2^2$,
where $w_{control}$ is the `reward_control_weight` (default is $0.1$).
- *reward_near*:
This reward is a measure of how far the *fingertip* of the pusher (the unattached end) is from the object,
with a more negative value assigned when the pusher's *fingertip* is further away from the object.
It is $-w_{near} \|(P_{fingertip} - P_{object})\|_2$,
where $w_{near}$ is the `reward_near_weight` (default is $0.5$).
`info` contains the individual reward terms.
## Starting State
The initial position state of the Pusher arm is $0_{7}$.
The initial position state of the object is $\mathcal{U}_{[[-0.3, -0.2], [0, 0.2]]}$.
The position state of the goal is (permanently) $[0.45, -0.05, -0.323]$.
The initial velocity state of the Pusher arm is $\mathcal{U}_{[-0.005 \times I_{7}, 0.005 \times I_{7}]}$.
The initial velocity state of the object is $0_2$.
The velocity state of the goal is (permanently) $0_3$.
where $\mathcal{U}$ is the multivariate uniform continuous distribution.
Note that the initial position state of the object is sampled until its distance to the goal is $ > 0.17 m$.
The default `frame_skip` is 5, with each frame lasting 0.01 seconds, so *dt = 5 * 0.01 = 0.05* seconds.
## Episode End
### Termination
The Pusher never terminates.
### Truncation
The default duration of an episode is 100 timesteps.
## Arguments
Pusher provides a range of parameters to modify the observation space, reward function, initial state, and termination condition.
These parameters can be applied during `gymnasium.make` in the following way:
```python
import gymnasium as gym
env = gym.make('Pusher-v5', xml_file=...)
```
| Parameter | Type | Default |Description |
|-------------------------|------------|-----------------|----------------------------------------------------------|
| `xml_file` | **str** |`"pusher_v5.xml"`| Path to a MuJoCo model |
| `reward_near_weight` | **float** | `0.5` | Weight for _reward_near_ term (see `Rewards` section) |
| `reward_dist_weight` | **float** | `1` | Weight for _reward_dist_ term (see `Rewards` section) |
| `reward_control_weight` | **float** | `0.1` | Weight for _reward_control_ term (see `Rewards` section) |
## Version History
* v5:
- Minimum `mujoco` version is now 2.3.3.
- Fixed bug: increased the density of the object to be higher than air (related [GitHub issue](https://github.com/Farama-Foundation/Gymnasium/issues/950)).
- Added `default_camera_config` argument, a dictionary for setting the `mj_camera` properties, mainly useful for custom environments.
- Added `frame_skip` argument, used to configure the `dt` (duration of `step()`); the default varies by environment, check the environment documentation pages.
- Added `xml_file` argument.
- Fixed bug: `reward_distance` & `reward_near` were based on the state before the physics step; they are now based on the state after the physics step (related [GitHub issue](https://github.com/Farama-Foundation/Gymnasium/issues/821)).
- Added `reward_near_weight`, `reward_dist_weight`, `reward_control_weight` arguments to configure the reward function (defaults are effectively the same as in `v4`).
- Fixed `info["reward_ctrl"]` not being multiplied by the reward weight.
- Added `info["reward_near"]` which is equal to the reward term `reward_near`.
* v4: All MuJoCo environments now use the MuJoCo bindings in mujoco >= 2.1.3.
- Warning: This version of the environment is not compatible with `mujoco>=3.0.0` (related [GitHub issue](https://github.com/Farama-Foundation/Gymnasium/issues/950)).
* v3: This environment does not have a v3 release. Moved to the [gymnasium-robotics repo](https://github.com/Farama-Foundation/gymnasium-robotics).
* v2: All continuous control environments now use mujoco-py >= 1.50. Moved to the [gymnasium-robotics repo](https://github.com/Farama-Foundation/gymnasium-robotics).
* v1: max_time_steps raised to 1000 for robot based tasks (not including pusher, which has a max_time_steps of 100). Added reward_threshold to environments.
* v0: Initial versions release.
"""
metadata = {
"render_modes": [
"human",
"rgb_array",
"depth_array",
"rgbd_tuple",
],
}
def __init__(
self,
xml_file: str = "pusher_v5.xml",
frame_skip: int = 5,
default_camera_config: dict[str, float | int] = DEFAULT_CAMERA_CONFIG,
reward_near_weight: float = 0.5,
reward_dist_weight: float = 1,
reward_control_weight: float = 0.1,
**kwargs,
):
utils.EzPickle.__init__(
self,
xml_file,
frame_skip,
default_camera_config,
reward_near_weight,
reward_dist_weight,
reward_control_weight,
**kwargs,
)
self._reward_near_weight = reward_near_weight
self._reward_dist_weight = reward_dist_weight
self._reward_control_weight = reward_control_weight
observation_space = Box(low=-np.inf, high=np.inf, shape=(23,), dtype=np.float64)
MujocoEnv.__init__(
self,
xml_file,
frame_skip,
observation_space=observation_space,
default_camera_config=default_camera_config,
**kwargs,
)
self.metadata = {
"render_modes": [
"human",
"rgb_array",
"depth_array",
"rgbd_tuple",
],
"render_fps": int(np.round(1.0 / self.dt)),
}
def step(self, action):
self.do_simulation(action, self.frame_skip)
observation = self._get_obs()
reward, reward_info = self._get_rew(action)
info = reward_info
if self.render_mode == "human":
self.render()
# truncation=False as the time limit is handled by the `TimeLimit` wrapper added during `make`
return observation, reward, False, False, info
def _get_rew(self, action):
vec_1 = self.get_body_com("object") - self.get_body_com("tips_arm")
vec_2 = self.get_body_com("object") - self.get_body_com("goal")
reward_near = -np.linalg.norm(vec_1) * self._reward_near_weight
reward_dist = -np.linalg.norm(vec_2) * self._reward_dist_weight
reward_ctrl = -np.square(action).sum() * self._reward_control_weight
reward = reward_dist + reward_ctrl + reward_near
reward_info = {
"reward_dist": reward_dist,
"reward_ctrl": reward_ctrl,
"reward_near": reward_near,
}
return reward, reward_info
def reset_model(self):
qpos = self.init_qpos
self.goal_pos = np.asarray([0, 0])
while True:
self.cylinder_pos = np.concatenate(
[
self.np_random.uniform(low=-0.3, high=0, size=1),
self.np_random.uniform(low=-0.2, high=0.2, size=1),
]
)
if np.linalg.norm(self.cylinder_pos - self.goal_pos) > 0.17:
break
qpos[-4:-2] = self.cylinder_pos
qpos[-2:] = self.goal_pos
qvel = self.init_qvel + self.np_random.uniform(
low=-0.005, high=0.005, size=self.model.nv
)
qvel[-4:] = 0
self.set_state(qpos, qvel)
return self._get_obs()
def _get_obs(self):
return np.concatenate(
[
self.data.qpos.flatten()[:7],
self.data.qvel.flatten()[:7],
self.get_body_com("tips_arm"),
self.get_body_com("object"),
self.get_body_com("goal"),
]
)
| PusherEnv |
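The reward arithmetic in `_get_rew` above needs no MuJoCo to follow. A plain-Python restatement (the `pusher_reward` helper is hypothetical; the default weights match the class defaults):

```python
import math

def pusher_reward(action, fingertip, obj, goal,
                  w_near=0.5, w_dist=1.0, w_ctrl=0.1):
    """Mirror of PusherEnv._get_rew in plain Python: three negative terms."""
    dist = lambda p, q: math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    reward_near = -dist(obj, fingertip) * w_near   # fingertip-to-object distance
    reward_dist = -dist(obj, goal) * w_dist        # object-to-goal distance
    reward_ctrl = -sum(a * a for a in action) * w_ctrl  # squared action penalty
    return reward_dist + reward_ctrl + reward_near

# All-zero action, object already at the goal and under the fingertip.
r = pusher_reward([0.0] * 7, (0, 0, 0), (0, 0, 0), (0, 0, 0))
```

With everything at the origin and a zero action, all three terms vanish and the total reward is 0.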
python | kamyu104__LeetCode-Solutions | Python/once-twice.py | {
"start": 44,
"end": 535
} | class ____(object):
def onceTwice(self, nums):
"""
:type nums: List[int]
:rtype: List[int]
"""
dp = [0]*3
dp[0] = ~0
for x in nums:
dp = [(x&dp[i-1])|(~x&dp[i]) for i in xrange(3)]
dp2 = [0]*3
dp2[0] = ~0
for x in nums:
if ~x&dp[1] or x&dp[2]:
continue
dp2 = [(x&dp2[i-1])|(~x&dp2[i]) for i in xrange(3)]
return [dp2[1], (dp2[1]^dp[1])|dp[2]]
| Solution |
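The solution above targets Python 2 (`xrange`). Assuming the task is "every value appears three times except one value appearing once and one appearing twice; return both", a Python 3 port of the same bitwise DP is:

```python
class Solution:
    def onceTwice(self, nums):
        # dp[i] has a bit set iff that bit's running count mod 3 equals i.
        dp = [~0, 0, 0]
        for x in nums:
            dp = [(x & dp[i - 1]) | (~x & dp[i]) for i in range(3)]
        # Second pass over only the values consistent with the "appears once"
        # bit pattern: they must contain every dp[1] bit and no dp[2] bit.
        dp2 = [~0, 0, 0]
        for x in nums:
            if ~x & dp[1] or x & dp[2]:
                continue
            dp2 = [(x & dp2[i - 1]) | (~x & dp2[i]) for i in range(3)]
        return [dp2[1], (dp2[1] ^ dp[1]) | dp[2]]
```

Under that assumption, `Solution().onceTwice([1, 1, 1, 2, 2, 3])` returns `[3, 2]`: 3 appears once, 2 appears twice.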
python | airbytehq__airbyte | airbyte-integrations/connectors/source-xero/components.py | {
"start": 2798,
"end": 4003
} | class ____(RecordExtractor):
field_path: List[Union[InterpolatedString, str]]
config: Config
parameters: InitVar[Mapping[str, Any]]
decoder: Decoder = JsonDecoder(parameters={})
def __post_init__(self, parameters: Mapping[str, Any]):
for path_index in range(len(self.field_path)):
if isinstance(self.field_path[path_index], str):
self.field_path[path_index] = InterpolatedString.create(self.field_path[path_index], parameters=parameters)
def extract_records(self, response: requests.Response) -> List[Mapping[str, Any]]:
response_body = self.decoder.decode(response)
if len(self.field_path) == 0:
extracted = response_body
else:
path = [path.eval(self.config) for path in self.field_path]
if "*" in path:
extracted = dpath.util.values(response_body, path)
else:
extracted = dpath.util.get(response_body, path, default=[])
ParseDates.convert_dates(extracted)
if isinstance(extracted, list):
return extracted
elif extracted:
return [extracted]
else:
return []
| CustomExtractor |
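The extractor above leans on `dpath` for path lookup. A stdlib-only sketch of the same field-path semantics (wildcard fan-out, then the list / single-object / empty normalization at the end of `extract_records`); the `extract` name is illustrative, not Airbyte's API:

```python
def extract(body, path):
    """Tiny dpath-like lookup: follow dict keys; '*' fans out over all values."""
    nodes = [body]
    for part in path:
        next_nodes = []
        for node in nodes:
            if not isinstance(node, dict):
                continue
            if part == "*":
                next_nodes.extend(node.values())
            elif part in node:
                next_nodes.append(node[part])
        nodes = next_nodes
    # Normalize like extract_records: a matched list is returned as-is,
    # anything else is returned as the list of matches.
    if len(nodes) == 1 and isinstance(nodes[0], list):
        return nodes[0]
    return nodes

records = extract({"Invoices": [{"InvoiceID": "a"}, {"InvoiceID": "b"}]}, ["Invoices"])
```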
python | networkx__networkx | networkx/utils/configs.py | {
"start": 7370,
"end": 9492
} | class ____(Config, strict=False):
"""Configuration to control automatic conversion to and calling of backends.
Priority is given to backends listed earlier.
Parameters
----------
algos : list of backend names
This controls "algorithms" such as ``nx.pagerank`` that don't return a graph.
generators : list of backend names
This controls "generators" such as ``nx.from_pandas_edgelist`` that return a graph.
classes : list of backend names
This controls graph classes such as ``nx.Graph()``.
kwargs : variadic keyword arguments of function name to list of backend names
This allows each function to be configured separately and will override the config
in ``algos`` or ``generators`` if present. The dispatchable function name may be
gotten from the ``.name`` attribute such as ``nx.pagerank.name`` (it's typically
the same as the name of the function).
"""
algos: list[str]
generators: list[str]
classes: list[str]
def _on_setattr(self, key, value):
from .backends import _registered_algorithms, backend_info
if key in {"algos", "generators", "classes"}:
pass
elif key not in _registered_algorithms:
raise AttributeError(
f"Invalid config name: {key!r}. Expected 'algos', 'generators', "
"'classes', or a name of a dispatchable function "
"(e.g. `.name` attribute of the function)."
)
if not (isinstance(value, list) and all(isinstance(x, str) for x in value)):
raise TypeError(
f"{key!r} config must be a list of backend names; got {value!r}"
)
if missing := {x for x in value if x not in backend_info}:
missing = ", ".join(map(repr, sorted(missing)))
raise ValueError(f"Unknown backend when setting {key!r}: {missing}")
return value
def _on_delattr(self, key):
if key in {"algos", "generators", "classes"}:
raise TypeError(f"{key!r} configuration item can't be deleted.")
| BackendPriorities |
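The `_on_setattr` hook above is the whole trick: validation happens at assignment time, so a bad config never lands in the object. A self-contained sketch of that pattern, with a hypothetical backend registry in place of networkx's `backend_info`:

```python
KNOWN_BACKENDS = {"networkx", "parallel", "cugraph"}  # hypothetical registry

class Priorities:
    """Sketch of _on_setattr-style validation: every assigned value must be
    a list of known backend names, checked before the attribute is stored."""
    def __setattr__(self, key, value):
        if not (isinstance(value, list) and all(isinstance(x, str) for x in value)):
            raise TypeError(f"{key!r} config must be a list of backend names; got {value!r}")
        if missing := {x for x in value if x not in KNOWN_BACKENDS}:
            raise ValueError(f"Unknown backend when setting {key!r}: {sorted(missing)}")
        object.__setattr__(self, key, value)

p = Priorities()
p.algos = ["parallel", "networkx"]   # accepted
try:
    p.algos = ["nope"]               # rejected; old value survives
    err = None
except ValueError as e:
    err = str(e)
```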
python | django__django | tests/queries/models.py | {
"start": 4744,
"end": 4901
} | class ____(models.Manager):
def get_queryset(self):
qs = super().get_queryset()
return qs.filter(public=True, tag__name="t1")
| CustomManager |
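Outside Django, the same "manager that pre-filters by default" idea looks like this (plain-Python sketch over dicts, not Django's ORM API):

```python
class PublicT1Manager:
    """Manager-like accessor whose default queryset is pre-filtered,
    mirroring CustomManager's filter(public=True, tag__name='t1')."""
    def __init__(self, rows):
        self._rows = rows

    def get_queryset(self):
        return [r for r in self._rows if r.get("public") and r.get("tag") == "t1"]

mgr = PublicT1Manager([
    {"id": 1, "public": True,  "tag": "t1"},
    {"id": 2, "public": False, "tag": "t1"},
    {"id": 3, "public": True,  "tag": "t2"},
])
visible = mgr.get_queryset()   # only row 1 passes both filters
```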
python | django__django | tests/resolve_url/models.py | {
"start": 87,
"end": 248
} | class ____(models.Model):
importance = models.IntegerField()
def get_absolute_url(self):
return "/importance/%d/" % self.importance
| UnimportantThing |
python | PyCQA__pylint | tests/functional/i/invalid/invalid_getnewargs/invalid_getnewargs_returned.py | {
"start": 483,
"end": 612
} | class ____(type):
def __getnewargs__(cls):
return (1, 2, 3)
@six.add_metaclass(GetNewArgsMetaclass)
| GetNewArgsMetaclass |
python | pytest-dev__pytest | src/_pytest/mark/structures.py | {
"start": 8239,
"end": 10641
} | class ____:
"""A pytest mark."""
#: Name of the mark.
name: str
#: Positional arguments of the mark decorator.
args: tuple[Any, ...]
#: Keyword arguments of the mark decorator.
kwargs: Mapping[str, Any]
#: Source Mark for ids with parametrize Marks.
_param_ids_from: Mark | None = dataclasses.field(default=None, repr=False)
#: Resolved/generated ids with parametrize Marks.
_param_ids_generated: Sequence[str] | None = dataclasses.field(
default=None, repr=False
)
def __init__(
self,
name: str,
args: tuple[Any, ...],
kwargs: Mapping[str, Any],
param_ids_from: Mark | None = None,
param_ids_generated: Sequence[str] | None = None,
*,
_ispytest: bool = False,
) -> None:
""":meta private:"""
check_ispytest(_ispytest)
# Weirdness to bypass frozen=True.
object.__setattr__(self, "name", name)
object.__setattr__(self, "args", args)
object.__setattr__(self, "kwargs", kwargs)
object.__setattr__(self, "_param_ids_from", param_ids_from)
object.__setattr__(self, "_param_ids_generated", param_ids_generated)
def _has_param_ids(self) -> bool:
return "ids" in self.kwargs or len(self.args) >= 4
def combined_with(self, other: Mark) -> Mark:
"""Return a new Mark which is a combination of this
Mark and another Mark.
Combines by appending args and merging kwargs.
:param Mark other: The mark to combine with.
:rtype: Mark
"""
assert self.name == other.name
# Remember source of ids with parametrize Marks.
param_ids_from: Mark | None = None
if self.name == "parametrize":
if other._has_param_ids():
param_ids_from = other
elif self._has_param_ids():
param_ids_from = self
return Mark(
self.name,
self.args + other.args,
dict(self.kwargs, **other.kwargs),
param_ids_from=param_ids_from,
_ispytest=True,
)
# A generic parameter designating an object to which a Mark may
# be applied -- a test function (callable) or class.
# Note: a lambda is not allowed, but this can't be represented.
Markable = TypeVar("Markable", bound=Callable[..., object] | type)
@dataclasses.dataclass
| Mark |
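`combined_with` above merges two marks by concatenating `args` and merging `kwargs`, with the right-hand mark's keywords winning. A stripped-down stand-in (not pytest's actual class, which also tracks parametrize id provenance):

```python
import dataclasses

@dataclasses.dataclass(frozen=True)
class MiniMark:
    """Illustrative stand-in for pytest's Mark: name, args, kwargs only."""
    name: str
    args: tuple
    kwargs: dict

    def combined_with(self, other: "MiniMark") -> "MiniMark":
        assert self.name == other.name
        # Append positional args; merge kwargs, letting `other` override.
        return MiniMark(self.name,
                        self.args + other.args,
                        {**self.kwargs, **other.kwargs})

a = MiniMark("parametrize", ("x", [1, 2]), {"ids": ["one", "two"]})
b = MiniMark("parametrize", ("y", [3]), {})
c = a.combined_with(b)
```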
python | facebook__pyre-check | client/configuration/scheduler_policies.py | {
"start": 1819,
"end": 5349
} | class ____:
value: Union[FixedChunkSize, FixedChunkCount]
@staticmethod
def from_json(value: object, identifier: str) -> "SchedulerPolicy":
if not isinstance(value, dict):
raise InvalidConfiguration(
f"Invalid scheduler policy for `{identifier}`: expected object, but got `{type(value).__name__}`"
)
if "kind" not in value:
raise InvalidConfiguration(
f"Invalid scheduler policy for `{identifier}`: missing `kind` key in `{value}`"
)
kind = value["kind"]
if kind == "fixed_chunk_size":
minimum_chunk_size = optional_positive_int_member(
value, "minimum_chunk_size", identifier
)
minimum_chunks_per_worker = positive_int_member(
value, "minimum_chunks_per_worker", identifier
)
preferred_chunk_size = positive_int_member(
value, "preferred_chunk_size", identifier
)
return SchedulerPolicy(
value=FixedChunkSize(
minimum_chunk_size=minimum_chunk_size,
minimum_chunks_per_worker=minimum_chunks_per_worker,
preferred_chunk_size=preferred_chunk_size,
)
)
elif kind == "fixed_chunk_count":
minimum_chunks_per_worker = optional_positive_int_member(
value, "minimum_chunks_per_worker", identifier
)
minimum_chunk_size = positive_int_member(
value, "minimum_chunk_size", identifier
)
preferred_chunks_per_worker = positive_int_member(
value, "preferred_chunks_per_worker", identifier
)
return SchedulerPolicy(
value=FixedChunkCount(
minimum_chunks_per_worker=minimum_chunks_per_worker,
minimum_chunk_size=minimum_chunk_size,
preferred_chunks_per_worker=preferred_chunks_per_worker,
)
)
else:
raise InvalidConfiguration(
f"Invalid scheduler policy kind: got `{kind}`, expected `fixed_chunk_size` or `fixed_chunk_count`"
)
def to_json(self) -> Dict[str, Union[int, str]]:
value = self.value
if isinstance(value, FixedChunkSize):
minimum_chunk_size = value.minimum_chunk_size
return {
"kind": "fixed_chunk_size",
**(
{"minimum_chunk_size": minimum_chunk_size}
if minimum_chunk_size is not None
else {}
),
"minimum_chunks_per_worker": value.minimum_chunks_per_worker,
"preferred_chunk_size": value.preferred_chunk_size,
}
elif isinstance(value, FixedChunkCount):
minimum_chunks_per_worker = value.minimum_chunks_per_worker
return {
"kind": "fixed_chunk_count",
**(
{"minimum_chunks_per_worker": minimum_chunks_per_worker}
if minimum_chunks_per_worker is not None
else {}
),
"minimum_chunk_size": value.minimum_chunk_size,
"preferred_chunks_per_worker": value.preferred_chunks_per_worker,
}
else:
raise AssertionError("unexpected policy")
@dataclasses.dataclass(frozen=True)
| SchedulerPolicy |
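The `fixed_chunk_size` branch of `from_json` above reduces to "check the kind, check the positive-int fields, repackage". A minimal stdlib sketch of just that branch (`parse_policy` is a hypothetical helper; the optional `minimum_chunk_size` field is omitted for brevity):

```python
def parse_policy(value, identifier="example"):
    """Parse only the 'fixed_chunk_size' kind, with simplified validation."""
    if not isinstance(value, dict) or "kind" not in value:
        raise ValueError(f"Invalid scheduler policy for `{identifier}`")
    if value["kind"] != "fixed_chunk_size":
        raise ValueError(f"unsupported kind: {value['kind']!r}")
    for key in ("minimum_chunks_per_worker", "preferred_chunk_size"):
        if not (isinstance(value.get(key), int) and value[key] > 0):
            raise ValueError(f"`{key}` must be a positive int for `{identifier}`")
    return {k: value[k] for k in ("kind", "minimum_chunks_per_worker",
                                  "preferred_chunk_size")}

policy = parse_policy({"kind": "fixed_chunk_size",
                       "minimum_chunks_per_worker": 4,
                       "preferred_chunk_size": 100})
```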
python | google__pytype | pytype/pytd/pytd.py | {
"start": 4092,
"end": 6947
} | class ____(Node):
"""Represents a class declaration.
Used as dict/set key, so all components must be hashable.
Attributes:
name: Class name (string)
bases: The super classes of this class (instances of pytd.Type).
methods: Tuple of methods, classmethods, staticmethods (instances of
pytd.Function).
constants: Tuple of constant class attributes (instances of pytd.Constant).
classes: Tuple of nested classes.
slots: A.k.a. __slots__, declaring which instance attributes are writable.
template: Tuple of pytd.TemplateItem instances.
"""
name: str
keywords: tuple[tuple[str, TypeU], ...]
bases: tuple[Class | TypeU, ...]
methods: tuple[Function, ...]
constants: tuple[Constant, ...]
classes: tuple[Class, ...]
decorators: tuple[Alias, ...]
slots: tuple[str, ...] | None
template: tuple[TemplateItem, ...]
# _name2item is the lookup cache. It should not be treated as a child or used
# in equality or hash operations.
_name2item: dict[str, Any] = {}
def _InitCache(self):
# TODO(b/159053187): Put constants, functions, classes and aliases into a
# combined dict.
for x in (self.methods, self.constants, self.classes):
for item in x:
self._name2item[item.name] = item
def Lookup(self, name):
"""Convenience function: Look up a given name in the class namespace.
Tries to find a method or constant by this name in the class.
Args:
name: Name to look up.
Returns:
A Constant or Function instance.
Raises:
KeyError: if this identifier doesn't exist in this class.
"""
# TODO(b/159053187): Remove this. Make methods and constants dictionaries.
if not self._name2item:
self._InitCache()
return self._name2item[name]
def Get(self, name):
"""Version of Lookup that returns None instead of raising."""
if not self._name2item:
self._InitCache()
return self._name2item.get(name)
def __contains__(self, name):
return bool(self.Get(name))
def __hash__(self):
# _name2item is a dict, so it can't be hashed. This worked in the previous
# version by pretending that _name2item didn't exist.
# We could also delete the cache on self, but making a new instance should
# be cheaper than recomputing the cache.
nohash = self.Replace(_name2item=None)
return super(Class, nohash).__hash__()
def IterChildren(self) -> Generator[tuple[str, Any | None], None, None]:
for name, child in super().IterChildren():
if name == '_name2item':
continue
yield name, child
def Replace(self, **kwargs):
if '_name2item' not in kwargs:
kwargs['_name2item'] = {}
return super().Replace(**kwargs)
@property
def metaclass(self):
for key, val in self.keywords:
if key == 'metaclass':
return val
return None
| Class |
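The lazy `_name2item` cache above (built on first `Lookup`/`Get`, excluded from hashing and `IterChildren`) is a small pattern worth isolating. A generic sketch with illustrative names:

```python
class Namespace:
    """Sketch of pytd.Class's lazy name cache: tuples stay the source of
    truth; a dict cache is populated on the first lookup."""
    def __init__(self, items):
        self._items = tuple(items)   # (name, value) pairs, hashable
        self._cache = {}             # lookup cache, built lazily

    def _init_cache(self):
        for name, value in self._items:
            self._cache[name] = value

    def Lookup(self, name):
        if not self._cache:
            self._init_cache()
        return self._cache[name]     # raises KeyError if absent, like pytd

    def Get(self, name):
        if not self._cache:
            self._init_cache()
        return self._cache.get(name)

    def __contains__(self, name):
        return self.Get(name) is not None

ns = Namespace([("f", "function"), ("c", "constant")])
```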
python | urllib3__urllib3 | src/urllib3/util/request.py | {
"start": 5873,
"end": 8363
} | class ____(typing.NamedTuple):
chunks: typing.Iterable[bytes] | None
content_length: int | None
def body_to_chunks(
body: typing.Any | None, method: str, blocksize: int
) -> ChunksAndContentLength:
"""Takes the HTTP request method, body, and blocksize and
transforms them into an iterable of chunks to pass to
socket.sendall() and an optional 'Content-Length' header.
A 'Content-Length' of 'None' indicates the length of the body
can't be determined so should use 'Transfer-Encoding: chunked'
for framing instead.
"""
chunks: typing.Iterable[bytes] | None
content_length: int | None
# No body, we need to make a recommendation on 'Content-Length'
# based on whether that request method is expected to have
# a body or not.
if body is None:
chunks = None
if method.upper() not in _METHODS_NOT_EXPECTING_BODY:
content_length = 0
else:
content_length = None
# Bytes or strings become bytes
elif isinstance(body, (str, bytes)):
chunks = (to_bytes(body),)
content_length = len(chunks[0])
# File-like object, TODO: use seek() and tell() for length?
elif hasattr(body, "read"):
def chunk_readable() -> typing.Iterable[bytes]:
encode = isinstance(body, io.TextIOBase)
while True:
datablock = body.read(blocksize)
if not datablock:
break
if encode:
datablock = datablock.encode("utf-8")
yield datablock
chunks = chunk_readable()
content_length = None
# Otherwise we need to start checking via duck-typing.
else:
try:
# Check if the body implements the buffer API.
mv = memoryview(body)
except TypeError:
try:
# Check if the body is an iterable
chunks = iter(body)
content_length = None
except TypeError:
raise TypeError(
f"'body' must be a bytes-like object, file-like "
f"object, or iterable. Instead was {body!r}"
) from None
else:
# Since it implements the buffer API can be passed directly to socket.sendall()
chunks = (body,)
content_length = mv.nbytes
return ChunksAndContentLength(chunks=chunks, content_length=content_length)
| ChunksAndContentLength |
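`body_to_chunks` above decides between a known `Content-Length` and chunked framing. A simplified restatement covering only the `None`, str/bytes, and file-like branches (not urllib3's full duck-typing over buffers and iterables):

```python
import io
from typing import Iterable, NamedTuple, Optional

class ChunksAndContentLength(NamedTuple):
    chunks: Optional[Iterable[bytes]]
    content_length: Optional[int]

def body_to_chunks(body, method: str, blocksize: int = 8) -> ChunksAndContentLength:
    """Simplified sketch of the urllib3 helper above."""
    if body is None:
        # No body: recommend Content-Length: 0 for methods expected to carry one.
        length = 0 if method.upper() not in ("GET", "HEAD", "DELETE") else None
        return ChunksAndContentLength(None, length)
    if isinstance(body, (str, bytes)):
        data = body.encode("utf-8") if isinstance(body, str) else body
        return ChunksAndContentLength((data,), len(data))
    if hasattr(body, "read"):
        # Length unknown: stream blocks (caller would use Transfer-Encoding: chunked).
        # Assumes a binary file-like object (read() returning bytes).
        return ChunksAndContentLength(iter(lambda: body.read(blocksize), b""), None)
    raise TypeError(f"unsupported body: {body!r}")

fixed = body_to_chunks("hello", "POST")
streamed = body_to_chunks(io.BytesIO(b"0123456789"), "POST", blocksize=4)
chunks = list(streamed.chunks)
```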
python | astropy__astropy | astropy/io/votable/tree.py | {
"start": 122552,
"end": 132958
} | class ____(
Element, _IDProperty, _NameProperty, _UtypeProperty, _DescriptionProperty
):
"""
RESOURCE_ element: Groups TABLE_ and RESOURCE_ elements.
The keyword arguments correspond to setting members of the same
name, documented below.
"""
def __init__(
self,
name=None,
ID=None,
utype=None,
type="results",
id=None,
config=None,
pos=None,
**kwargs,
):
if config is None:
config = {}
self._config = config
self._pos = pos
Element.__init__(self)
self.name = name
self.ID = resolve_id(ID, id, config, pos)
self.utype = utype
self.type = type
self._extra_attributes = kwargs
self.description = None
self._coordinate_systems = HomogeneousList(CooSys)
self._time_systems = HomogeneousList(TimeSys)
self._groups = HomogeneousList(Group)
self._params = HomogeneousList(Param)
self._infos = HomogeneousList(Info)
self._links = HomogeneousList(Link)
self._tables = HomogeneousList(TableElement)
self._resources = HomogeneousList(Resource)
self._mivot_block = MivotBlock()
warn_unknown_attrs("RESOURCE", kwargs.keys(), config, pos)
def __repr__(self):
buff = io.StringIO()
w = XMLWriter(buff)
w.element(self._element_name, attrib=w.object_attrs(self, self._attr_list))
return buff.getvalue().strip()
@property
def type(self):
"""The type of the resource [*required*].
Must be either:
- 'results': This resource contains actual result values
(default)
- 'meta': This resource contains only datatype descriptions
(FIELD_ elements), but no actual data.
"""
return self._type
@type.setter
def type(self, type):
if type not in ("results", "meta"):
vo_raise(E18, type, self._config, self._pos)
self._type = type
@property
def mivot_block(self):
"""
Returns the MIVOT block instance.
If the host resource is of type results, it is taken from the first
child resource with a MIVOT block, if any.
Otherwise, it is taken from the host resource.
"""
if self.type == "results":
for resource in self.resources:
if str(resource._mivot_block).strip() != "":
return resource._mivot_block
return self._mivot_block
@mivot_block.setter
def mivot_block(self, mivot_block):
if self.type == "results":
vo_raise(E26)
self._mivot_block = mivot_block
@property
def extra_attributes(self):
"""Dictionary of extra attributes of the RESOURCE_ element.
This is dictionary of string keys to string values containing any
extra attributes of the RESOURCE_ element that are not defined
in the specification. The specification explicitly allows
for extra attributes here, but nowhere else.
"""
return self._extra_attributes
@property
def coordinate_systems(self):
"""
A list of coordinate system definitions (COOSYS_ elements) for
the RESOURCE_. Must contain only `CooSys` objects.
"""
return self._coordinate_systems
@property
def time_systems(self):
"""
A list of time system definitions (TIMESYS_ elements) for
the RESOURCE_. Must contain only `TimeSys` objects.
"""
return self._time_systems
@property
def infos(self):
"""
A list of informational parameters (key-value pairs) for the
resource. Must only contain `Info` objects.
"""
return self._infos
@property
def groups(self):
"""
A list of groups.
"""
return self._groups
@property
def params(self):
"""
A list of parameters (constant-valued columns) for the
resource. Must contain only `Param` objects.
"""
return self._params
@property
def links(self):
"""
A list of links (pointers to other documents or servers
through a URI) for the resource. Must contain only `Link`
objects.
"""
return self._links
@property
def tables(self):
"""
A list of tables in the resource. Must contain only
`TableElement` objects.
"""
return self._tables
@property
def resources(self):
"""
A list of nested resources inside this resource. Must contain
only `Resource` objects.
"""
return self._resources
def _add_table(self, iterator, tag, data, config, pos):
table = TableElement(self._votable, config=config, pos=pos, **data)
self.tables.append(table)
table.parse(iterator, config)
def _add_info(self, iterator, tag, data, config, pos):
info = Info(config=config, pos=pos, **data)
self.infos.append(info)
info.parse(iterator, config)
def _add_group(self, iterator, tag, data, config, pos):
group = Group(self, config=config, pos=pos, **data)
self.groups.append(group)
group.parse(iterator, config)
def _add_param(self, iterator, tag, data, config, pos):
param = Param(self._votable, config=config, pos=pos, **data)
self.params.append(param)
param.parse(iterator, config)
def _add_coosys(self, iterator, tag, data, config, pos):
coosys = CooSys(config=config, pos=pos, **data)
self.coordinate_systems.append(coosys)
coosys.parse(iterator, config)
def _add_timesys(self, iterator, tag, data, config, pos):
timesys = TimeSys(config=config, pos=pos, **data)
self.time_systems.append(timesys)
timesys.parse(iterator, config)
def _add_resource(self, iterator, tag, data, config, pos):
resource = Resource(config=config, pos=pos, **data)
self.resources.append(resource)
resource.parse(self._votable, iterator, config)
def _add_link(self, iterator, tag, data, config, pos):
link = Link(config=config, pos=pos, **data)
self.links.append(link)
link.parse(iterator, config)
def parse(self, votable, iterator, config):
self._votable = votable
tag_mapping = {
"TABLE": self._add_table,
"INFO": self._add_info,
"PARAM": self._add_param,
"GROUP": self._add_group,
"COOSYS": self._add_coosys,
"TIMESYS": self._add_timesys,
"RESOURCE": self._add_resource,
"LINK": self._add_link,
"DESCRIPTION": self._ignore_add,
}
for start, tag, data, pos in iterator:
# If the resource content starts with VODML,
# the parsing is delegated to the MIVOT parser
if tag == "VODML":
self._mivot_block.parse(votable, iterator, config)
elif start:
tag_mapping.get(tag, self._add_unknown_tag)(
iterator, tag, data, config, pos
)
elif tag == "DESCRIPTION":
if self.description is not None:
warn_or_raise(W17, W17, "RESOURCE", config, pos)
self.description = data or None
elif tag == "RESOURCE":
break
del self._votable
return self
def to_xml(self, w, **kwargs):
attrs = w.object_attrs(self, ("ID", "type", "utype"))
attrs.update(self.extra_attributes)
with w.tag("RESOURCE", attrib=attrs):
if self.description is not None:
w.element("DESCRIPTION", self.description, wrap=True)
if self.mivot_block is not None and self.type == "meta":
self.mivot_block.to_xml(w)
element_sets = [
self.infos,
self.coordinate_systems,
self.time_systems,
self.params,
self.links,
]
if kwargs["version_1_2_or_later"]:
element_sets.append(self.groups)
for element_set in element_sets:
for element in element_set:
element.to_xml(w, **kwargs)
# The mivot_block should be before the table
for elm in self.resources:
if elm.type == "meta" and elm.mivot_block is not None:
elm.to_xml(w, **kwargs)
for elm in self.tables:
elm.to_xml(w, **kwargs)
for elm in self.resources:
if elm.type != "meta":
elm.to_xml(w, **kwargs)
def iter_tables(self):
"""
Recursively iterates over all tables in the resource and
nested resources.
"""
yield from self.tables
for resource in self.resources:
yield from resource.iter_tables()
def iter_fields_and_params(self):
"""
Recursively iterates over all FIELD_ and PARAM_ elements in
the resource, its tables and nested resources.
"""
yield from self.params
for table in self.tables:
yield from table.iter_fields_and_params()
for resource in self.resources:
yield from resource.iter_fields_and_params()
def iter_coosys(self):
"""
Recursively iterates over all the COOSYS_ elements in the
resource and nested resources.
"""
yield from self.coordinate_systems
for resource in self.resources:
yield from resource.iter_coosys()
def iter_timesys(self):
"""
Recursively iterates over all the TIMESYS_ elements in the
resource and nested resources.
"""
yield from self.time_systems
for resource in self.resources:
yield from resource.iter_timesys()
def iter_info(self):
"""
Recursively iterates over all the INFO_ elements in the
resource and nested resources.
"""
yield from self.infos
for table in self.tables:
yield from table.iter_info()
for resource in self.resources:
yield from resource.iter_info()
| Resource |
python | tornadoweb__tornado | maint/benchmark/chunk_benchmark.py | {
"start": 541,
"end": 1622
} | class ____(RequestHandler):
def get(self):
for i in xrange(options.num_chunks):
self.write('A' * options.chunk_size)
self.flush()
self.finish()
def main():
parse_command_line()
app = Application([('/', ChunkHandler)])
app.listen(options.port, address='127.0.0.1')
def callback(response):
response.rethrow()
assert len(response.body) == (options.num_chunks * options.chunk_size)
logging.warning("fetch completed in %s seconds", response.request_time)
IOLoop.current().stop()
logging.warning("Starting fetch with curl client")
curl_client = CurlAsyncHTTPClient()
curl_client.fetch('http://localhost:%d/' % options.port,
callback=callback)
IOLoop.current().start()
logging.warning("Starting fetch with simple client")
simple_client = SimpleAsyncHTTPClient()
simple_client.fetch('http://localhost:%d/' % options.port,
callback=callback)
IOLoop.current().start()
if __name__ == '__main__':
main()
| ChunkHandler |
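The benchmark above verifies only total-byte accounting: the client asserts `len(response.body) == num_chunks * chunk_size` after the handler has written and flushed each chunk. The chunking itself can be sketched with a plain generator, independent of Tornado (names here are illustrative, not part of the benchmark):

```python
def iter_chunks(payload: bytes, chunk_size: int):
    """Yield fixed-size slices of payload, mirroring the handler's
    write-then-flush loop: every slice but possibly the last is full-size."""
    for start in range(0, len(payload), chunk_size):
        yield payload[start:start + chunk_size]

chunks = list(iter_chunks(b"A" * 10, chunk_size=4))
# Reassembling the chunks recovers the original payload length,
# which is exactly the property the benchmark's callback asserts.
```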
python | sympy__sympy | sympy/stats/stochastic_process.py | {
"start": 158,
"end": 2312
} | class ____(ProductPSpace):
"""
Represents probability space of stochastic processes
and their random variables. Contains mechanics to do
computations for queries of stochastic processes.
Explanation
===========
Initialized by symbol, the specific process and
distribution(optional) if the random indexed symbols
of the process follows any specific distribution, like,
in Bernoulli Process, each random indexed symbol follows
Bernoulli distribution. For processes with memory, this
parameter should not be passed.
"""
def __new__(cls, sym, process, distribution=None):
sym = _symbol_converter(sym)
from sympy.stats.stochastic_process_types import StochasticProcess
if not isinstance(process, StochasticProcess):
raise TypeError("`process` must be an instance of StochasticProcess.")
if distribution is None:
distribution = Distribution()
return Basic.__new__(cls, sym, process, distribution)
@property
def process(self):
"""
The associated stochastic process.
"""
return self.args[1]
@property
def domain(self):
return ProductDomain(self.process.index_set,
self.process.state_space)
@property
def symbol(self):
return self.args[0]
@property
def distribution(self):
return self.args[2]
def probability(self, condition, given_condition=None, evaluate=True, **kwargs):
"""
Transfers the task of handling queries to the specific stochastic
process because every process has their own logic of handling such
queries.
"""
return self.process.probability(condition, given_condition, evaluate, **kwargs)
def compute_expectation(self, expr, condition=None, evaluate=True, **kwargs):
"""
Transfers the task of handling queries to the specific stochastic
process because every process has their own logic of handling such
queries.
"""
return self.process.expectation(expr, condition, evaluate, **kwargs)
| StochasticPSpace |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/github_schema.py | {
"start": 499885,
"end": 500603
} | class ____(sgqlc.types.relay.Connection):
"""The connection type for Commit."""
__schema__ = github_schema
__field_names__ = ("edges", "nodes", "page_info", "total_count")
edges = sgqlc.types.Field(sgqlc.types.list_of("CommitEdge"), graphql_name="edges")
"""A list of edges."""
nodes = sgqlc.types.Field(sgqlc.types.list_of("Commit"), graphql_name="nodes")
"""A list of nodes."""
page_info = sgqlc.types.Field(sgqlc.types.non_null("PageInfo"), graphql_name="pageInfo")
"""Information to aid in pagination."""
total_count = sgqlc.types.Field(sgqlc.types.non_null(Int), graphql_name="totalCount")
"""Identifies the total count of items in the connection."""
| CommitConnection |
python | pypa__packaging | src/packaging/tags.py | {
"start": 816,
"end": 22856
} | class ____:
"""
A representation of the tag triple for a wheel.
Instances are considered immutable and thus are hashable. Equality checking
is also supported.
"""
__slots__ = ["_abi", "_hash", "_interpreter", "_platform"]
def __init__(self, interpreter: str, abi: str, platform: str) -> None:
self._interpreter = interpreter.lower()
self._abi = abi.lower()
self._platform = platform.lower()
# The __hash__ of every single element in a Set[Tag] will be evaluated each time
# that a set calls its `.disjoint()` method, which may be called hundreds of
# times when scanning a page of links for packages with tags matching that
# Set[Tag]. Pre-computing the value here produces significant speedups for
# downstream consumers.
self._hash = hash((self._interpreter, self._abi, self._platform))
@property
def interpreter(self) -> str:
return self._interpreter
@property
def abi(self) -> str:
return self._abi
@property
def platform(self) -> str:
return self._platform
def __eq__(self, other: object) -> bool:
if not isinstance(other, Tag):
return NotImplemented
return (
(self._hash == other._hash) # Short-circuit ASAP for perf reasons.
and (self._platform == other._platform)
and (self._abi == other._abi)
and (self._interpreter == other._interpreter)
)
def __hash__(self) -> int:
return self._hash
def __str__(self) -> str:
return f"{self._interpreter}-{self._abi}-{self._platform}"
def __repr__(self) -> str:
return f"<{self} @ {id(self)}>"
def __setstate__(self, state: tuple[None, dict[str, Any]]) -> None:
# The cached _hash is wrong when unpickling.
_, slots = state
for k, v in slots.items():
setattr(self, k, v)
self._hash = hash((self._interpreter, self._abi, self._platform))
def parse_tag(tag: str) -> frozenset[Tag]:
"""
Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances.
Returning a set is required due to the possibility that the tag is a
compressed tag set.
"""
tags = set()
interpreters, abis, platforms = tag.split("-")
for interpreter in interpreters.split("."):
for abi in abis.split("."):
for platform_ in platforms.split("."):
tags.add(Tag(interpreter, abi, platform_))
return frozenset(tags)
def _get_config_var(name: str, warn: bool = False) -> int | str | None:
value: int | str | None = sysconfig.get_config_var(name)
if value is None and warn:
logger.debug(
"Config variable '%s' is unset, Python ABI tag may be incorrect", name
)
return value
def _normalize_string(string: str) -> str:
return string.replace(".", "_").replace("-", "_").replace(" ", "_")
def _is_threaded_cpython(abis: list[str]) -> bool:
"""
Determine if the ABI corresponds to a threaded (`--disable-gil`) build.
The threaded builds are indicated by a "t" in the abiflags.
"""
if len(abis) == 0:
return False
# expect e.g., cp313
m = re.match(r"cp\d+(.*)", abis[0])
if not m:
return False
abiflags = m.group(1)
return "t" in abiflags
def _abi3_applies(python_version: PythonVersion, threading: bool) -> bool:
"""
Determine if the Python version supports abi3.
PEP 384 was first implemented in Python 3.2. The threaded (`--disable-gil`)
builds do not support abi3.
"""
return len(python_version) > 1 and tuple(python_version) >= (3, 2) and not threading
def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> list[str]:
py_version = tuple(py_version) # To allow for version comparison.
abis = []
version = _version_nodot(py_version[:2])
threading = debug = pymalloc = ucs4 = ""
with_debug = _get_config_var("Py_DEBUG", warn)
has_refcount = hasattr(sys, "gettotalrefcount")
# Windows doesn't set Py_DEBUG, so checking for support of debug-compiled
# extension modules is the best option.
# https://github.com/pypa/pip/issues/3383#issuecomment-173267692
has_ext = "_d.pyd" in EXTENSION_SUFFIXES
if with_debug or (with_debug is None and (has_refcount or has_ext)):
debug = "d"
if py_version >= (3, 13) and _get_config_var("Py_GIL_DISABLED", warn):
threading = "t"
if py_version < (3, 8):
with_pymalloc = _get_config_var("WITH_PYMALLOC", warn)
if with_pymalloc or with_pymalloc is None:
pymalloc = "m"
if py_version < (3, 3):
unicode_size = _get_config_var("Py_UNICODE_SIZE", warn)
if unicode_size == 4 or (
unicode_size is None and sys.maxunicode == 0x10FFFF
):
ucs4 = "u"
elif debug:
# Debug builds can also load "normal" extension modules.
# We can also assume no UCS-4 or pymalloc requirement.
abis.append(f"cp{version}{threading}")
abis.insert(0, f"cp{version}{threading}{debug}{pymalloc}{ucs4}")
return abis
def cpython_tags(
python_version: PythonVersion | None = None,
abis: Iterable[str] | None = None,
platforms: Iterable[str] | None = None,
*,
warn: bool = False,
) -> Iterator[Tag]:
"""
Yields the tags for a CPython interpreter.
The tags consist of:
- cp<python_version>-<abi>-<platform>
- cp<python_version>-abi3-<platform>
- cp<python_version>-none-<platform>
- cp<less than python_version>-abi3-<platform> # Older Python versions down to 3.2.
If python_version only specifies a major version then user-provided ABIs and
    the 'none' ABI tag will be used.
If 'abi3' or 'none' are specified in 'abis' then they will be yielded at
their normal position and not at the beginning.
"""
if not python_version:
python_version = sys.version_info[:2]
interpreter = f"cp{_version_nodot(python_version[:2])}"
if abis is None:
abis = _cpython_abis(python_version, warn) if len(python_version) > 1 else []
abis = list(abis)
# 'abi3' and 'none' are explicitly handled later.
for explicit_abi in ("abi3", "none"):
try:
abis.remove(explicit_abi)
except ValueError: # noqa: PERF203
pass
platforms = list(platforms or platform_tags())
for abi in abis:
for platform_ in platforms:
yield Tag(interpreter, abi, platform_)
threading = _is_threaded_cpython(abis)
use_abi3 = _abi3_applies(python_version, threading)
if use_abi3:
yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms)
yield from (Tag(interpreter, "none", platform_) for platform_ in platforms)
if use_abi3:
for minor_version in range(python_version[1] - 1, 1, -1):
for platform_ in platforms:
version = _version_nodot((python_version[0], minor_version))
interpreter = f"cp{version}"
yield Tag(interpreter, "abi3", platform_)
def _generic_abi() -> list[str]:
"""
Return the ABI tag based on EXT_SUFFIX.
"""
# The following are examples of `EXT_SUFFIX`.
# We want to keep the parts which are related to the ABI and remove the
# parts which are related to the platform:
# - linux: '.cpython-310-x86_64-linux-gnu.so' => cp310
# - mac: '.cpython-310-darwin.so' => cp310
# - win: '.cp310-win_amd64.pyd' => cp310
# - win: '.pyd' => cp37 (uses _cpython_abis())
# - pypy: '.pypy38-pp73-x86_64-linux-gnu.so' => pypy38_pp73
# - graalpy: '.graalpy-38-native-x86_64-darwin.dylib'
# => graalpy_38_native
ext_suffix = _get_config_var("EXT_SUFFIX", warn=True)
if not isinstance(ext_suffix, str) or ext_suffix[0] != ".":
raise SystemError("invalid sysconfig.get_config_var('EXT_SUFFIX')")
parts = ext_suffix.split(".")
if len(parts) < 3:
# CPython3.7 and earlier uses ".pyd" on Windows.
return _cpython_abis(sys.version_info[:2])
soabi = parts[1]
if soabi.startswith("cpython"):
# non-windows
abi = "cp" + soabi.split("-")[1]
elif soabi.startswith("cp"):
# windows
abi = soabi.split("-")[0]
elif soabi.startswith("pypy"):
abi = "-".join(soabi.split("-")[:2])
elif soabi.startswith("graalpy"):
abi = "-".join(soabi.split("-")[:3])
elif soabi:
# pyston, ironpython, others?
abi = soabi
else:
return []
return [_normalize_string(abi)]
def generic_tags(
interpreter: str | None = None,
abis: Iterable[str] | None = None,
platforms: Iterable[str] | None = None,
*,
warn: bool = False,
) -> Iterator[Tag]:
"""
Yields the tags for a generic interpreter.
The tags consist of:
- <interpreter>-<abi>-<platform>
The "none" ABI will be added if it was not explicitly provided.
"""
if not interpreter:
interp_name = interpreter_name()
interp_version = interpreter_version(warn=warn)
interpreter = f"{interp_name}{interp_version}"
abis = _generic_abi() if abis is None else list(abis)
platforms = list(platforms or platform_tags())
if "none" not in abis:
abis.append("none")
for abi in abis:
for platform_ in platforms:
yield Tag(interpreter, abi, platform_)
def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]:
"""
Yields Python versions in descending order.
After the latest version, the major-only version will be yielded, and then
all previous versions of that major version.
"""
if len(py_version) > 1:
yield f"py{_version_nodot(py_version[:2])}"
yield f"py{py_version[0]}"
if len(py_version) > 1:
for minor in range(py_version[1] - 1, -1, -1):
yield f"py{_version_nodot((py_version[0], minor))}"
def compatible_tags(
python_version: PythonVersion | None = None,
interpreter: str | None = None,
platforms: Iterable[str] | None = None,
) -> Iterator[Tag]:
"""
Yields the sequence of tags that are compatible with a specific version of Python.
The tags consist of:
- py*-none-<platform>
- <interpreter>-none-any # ... if `interpreter` is provided.
- py*-none-any
"""
if not python_version:
python_version = sys.version_info[:2]
platforms = list(platforms or platform_tags())
for version in _py_interpreter_range(python_version):
for platform_ in platforms:
yield Tag(version, "none", platform_)
if interpreter:
yield Tag(interpreter, "none", "any")
for version in _py_interpreter_range(python_version):
yield Tag(version, "none", "any")
def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str:
if not is_32bit:
return arch
if arch.startswith("ppc"):
return "ppc"
return "i386"
def _mac_binary_formats(version: AppleVersion, cpu_arch: str) -> list[str]:
formats = [cpu_arch]
if cpu_arch == "x86_64":
if version < (10, 4):
return []
formats.extend(["intel", "fat64", "fat32"])
elif cpu_arch == "i386":
if version < (10, 4):
return []
formats.extend(["intel", "fat32", "fat"])
elif cpu_arch == "ppc64":
# TODO: Need to care about 32-bit PPC for ppc64 through 10.2?
if version > (10, 5) or version < (10, 4):
return []
formats.append("fat64")
elif cpu_arch == "ppc":
if version > (10, 6):
return []
formats.extend(["fat32", "fat"])
if cpu_arch in {"arm64", "x86_64"}:
formats.append("universal2")
if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}:
formats.append("universal")
return formats
def mac_platforms(
version: AppleVersion | None = None, arch: str | None = None
) -> Iterator[str]:
"""
Yields the platform tags for a macOS system.
The `version` parameter is a two-item tuple specifying the macOS version to
generate platform tags for. The `arch` parameter is the CPU architecture to
generate platform tags for. Both parameters default to the appropriate value
for the current system.
"""
version_str, _, cpu_arch = platform.mac_ver()
if version is None:
version = cast("AppleVersion", tuple(map(int, version_str.split(".")[:2])))
if version == (10, 16):
# When built against an older macOS SDK, Python will report macOS 10.16
# instead of the real version.
version_str = subprocess.run(
[
sys.executable,
"-sS",
"-c",
"import platform; print(platform.mac_ver()[0])",
],
check=True,
env={"SYSTEM_VERSION_COMPAT": "0"},
stdout=subprocess.PIPE,
text=True,
).stdout
version = cast("AppleVersion", tuple(map(int, version_str.split(".")[:2])))
if arch is None:
arch = _mac_arch(cpu_arch)
if (10, 0) <= version < (11, 0):
# Prior to Mac OS 11, each yearly release of Mac OS bumped the
# "minor" version number. The major version was always 10.
major_version = 10
for minor_version in range(version[1], -1, -1):
compat_version = major_version, minor_version
binary_formats = _mac_binary_formats(compat_version, arch)
for binary_format in binary_formats:
yield f"macosx_{major_version}_{minor_version}_{binary_format}"
if version >= (11, 0):
# Starting with Mac OS 11, each yearly release bumps the major version
# number. The minor versions are now the midyear updates.
minor_version = 0
for major_version in range(version[0], 10, -1):
compat_version = major_version, minor_version
binary_formats = _mac_binary_formats(compat_version, arch)
for binary_format in binary_formats:
yield f"macosx_{major_version}_{minor_version}_{binary_format}"
if version >= (11, 0):
# Mac OS 11 on x86_64 is compatible with binaries from previous releases.
# Arm64 support was introduced in 11.0, so no Arm binaries from previous
# releases exist.
#
# However, the "universal2" binary format can have a
# macOS version earlier than 11.0 when the x86_64 part of the binary supports
# that version of macOS.
major_version = 10
if arch == "x86_64":
for minor_version in range(16, 3, -1):
compat_version = major_version, minor_version
binary_formats = _mac_binary_formats(compat_version, arch)
for binary_format in binary_formats:
yield f"macosx_{major_version}_{minor_version}_{binary_format}"
else:
for minor_version in range(16, 3, -1):
compat_version = major_version, minor_version
binary_format = "universal2"
yield f"macosx_{major_version}_{minor_version}_{binary_format}"
def ios_platforms(
version: AppleVersion | None = None, multiarch: str | None = None
) -> Iterator[str]:
"""
Yields the platform tags for an iOS system.
:param version: A two-item tuple specifying the iOS version to generate
platform tags for. Defaults to the current iOS version.
:param multiarch: The CPU architecture+ABI to generate platform tags for -
(the value used by `sys.implementation._multiarch` e.g.,
`arm64_iphoneos` or `x84_64_iphonesimulator`). Defaults to the current
multiarch value.
"""
if version is None:
# if iOS is the current platform, ios_ver *must* be defined. However,
# it won't exist for CPython versions before 3.13, which causes a mypy
# error.
_, release, _, _ = platform.ios_ver() # type: ignore[attr-defined, unused-ignore]
version = cast("AppleVersion", tuple(map(int, release.split(".")[:2])))
if multiarch is None:
multiarch = sys.implementation._multiarch
multiarch = multiarch.replace("-", "_")
ios_platform_template = "ios_{major}_{minor}_{multiarch}"
# Consider any iOS major.minor version from the version requested, down to
# 12.0. 12.0 is the first iOS version that is known to have enough features
# to support CPython. Consider every possible minor release up to X.9. There
# highest the minor has ever gone is 8 (14.8 and 15.8) but having some extra
# candidates that won't ever match doesn't really hurt, and it saves us from
# having to keep an explicit list of known iOS versions in the code. Return
# the results descending order of version number.
# If the requested major version is less than 12, there won't be any matches.
if version[0] < 12:
return
# Consider the actual X.Y version that was requested.
yield ios_platform_template.format(
major=version[0], minor=version[1], multiarch=multiarch
)
# Consider every minor version from X.0 to the minor version prior to the
# version requested by the platform.
for minor in range(version[1] - 1, -1, -1):
yield ios_platform_template.format(
major=version[0], minor=minor, multiarch=multiarch
)
for major in range(version[0] - 1, 11, -1):
for minor in range(9, -1, -1):
yield ios_platform_template.format(
major=major, minor=minor, multiarch=multiarch
)
def android_platforms(
api_level: int | None = None, abi: str | None = None
) -> Iterator[str]:
"""
Yields the :attr:`~Tag.platform` tags for Android. If this function is invoked on
non-Android platforms, the ``api_level`` and ``abi`` arguments are required.
:param int api_level: The maximum `API level
<https://developer.android.com/tools/releases/platforms>`__ to return. Defaults
to the current system's version, as returned by ``platform.android_ver``.
:param str abi: The `Android ABI <https://developer.android.com/ndk/guides/abis>`__,
        e.g. ``arm64_v8a``. Defaults to the current system's ABI, as returned by
``sysconfig.get_platform``. Hyphens and periods will be replaced with
underscores.
"""
if platform.system() != "Android" and (api_level is None or abi is None):
raise TypeError(
"on non-Android platforms, the api_level and abi arguments are required"
)
if api_level is None:
# Python 3.13 was the first version to return platform.system() == "Android",
# and also the first version to define platform.android_ver().
api_level = platform.android_ver().api_level # type: ignore[attr-defined]
if abi is None:
abi = sysconfig.get_platform().split("-")[-1]
abi = _normalize_string(abi)
# 16 is the minimum API level known to have enough features to support CPython
# without major patching. Yield every API level from the maximum down to the
# minimum, inclusive.
min_api_level = 16
for ver in range(api_level, min_api_level - 1, -1):
yield f"android_{ver}_{abi}"
def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]:
linux = _normalize_string(sysconfig.get_platform())
if not linux.startswith("linux_"):
# we should never be here, just yield the sysconfig one and return
yield linux
return
if is_32bit:
if linux == "linux_x86_64":
linux = "linux_i686"
elif linux == "linux_aarch64":
linux = "linux_armv8l"
_, arch = linux.split("_", 1)
archs = {"armv8l": ["armv8l", "armv7l"]}.get(arch, [arch])
yield from _manylinux.platform_tags(archs)
yield from _musllinux.platform_tags(archs)
for arch in archs:
yield f"linux_{arch}"
def _generic_platforms() -> Iterator[str]:
yield _normalize_string(sysconfig.get_platform())
def platform_tags() -> Iterator[str]:
"""
Provides the platform tags for this installation.
"""
if platform.system() == "Darwin":
return mac_platforms()
elif platform.system() == "iOS":
return ios_platforms()
elif platform.system() == "Android":
return android_platforms()
elif platform.system() == "Linux":
return _linux_platforms()
else:
return _generic_platforms()
def interpreter_name() -> str:
"""
Returns the name of the running interpreter.
Some implementations have a reserved, two-letter abbreviation which will
be returned when appropriate.
"""
name = sys.implementation.name
return INTERPRETER_SHORT_NAMES.get(name) or name
def interpreter_version(*, warn: bool = False) -> str:
"""
Returns the version of the running interpreter.
"""
version = _get_config_var("py_version_nodot", warn=warn)
return str(version) if version else _version_nodot(sys.version_info[:2])
def _version_nodot(version: PythonVersion) -> str:
return "".join(map(str, version))
def sys_tags(*, warn: bool = False) -> Iterator[Tag]:
"""
Returns the sequence of tag triples for the running interpreter.
The order of the sequence corresponds to priority order for the
interpreter, from most to least important.
"""
interp_name = interpreter_name()
if interp_name == "cp":
yield from cpython_tags(warn=warn)
else:
yield from generic_tags()
if interp_name == "pp":
interp = "pp3"
elif interp_name == "cp":
interp = "cp" + interpreter_version(warn=warn)
else:
interp = None
yield from compatible_tags(interpreter=interp)
| Tag |
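The `parse_tag` function in this record expands a compressed wheel tag set (dot-separated alternatives in each of the three components) into individual tag triples. A minimal stdlib-only sketch of that expansion, using tuples in place of `Tag` objects:

```python
from itertools import product

def parse_compressed_tag(tag: str) -> frozenset:
    """Expand a compressed wheel tag like 'cp310.cp311-abi3-manylinux1_x86_64'
    into the set of (interpreter, abi, platform) triples it denotes,
    mirroring the three nested loops in parse_tag above."""
    interpreters, abis, platforms = tag.split("-")
    return frozenset(
        product(interpreters.split("."), abis.split("."), platforms.split("."))
    )

# Two interpreters x one ABI x one platform -> two triples.
tags = parse_compressed_tag("cp310.cp311-abi3-manylinux1_x86_64")
```

Returning a `frozenset` matches the original's contract: order is irrelevant and a single compressed tag may denote many concrete tags.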
python | cherrypy__cherrypy | cherrypy/lib/locking.py | {
"start": 1037,
"end": 1645
} | class ____(object):
"""Keep track of the time and detect if a timeout has expired."""
def __init__(self, session_id, timeout):
"""Initialize a lock acquisition tracker."""
self.session_id = session_id
if timeout:
self.timer = Timer.after(timeout)
else:
self.timer = NeverExpires()
def expired(self):
"""Check whether the lock checker has expired."""
if self.timer.expired():
raise LockTimeout(
'Timeout acquiring lock for %(session_id)s' % vars(self),
)
return False
| LockChecker |
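The `Timer` and `NeverExpires` helpers that `LockChecker` depends on are defined elsewhere in the module and not included in this chunk. A minimal sketch of what they plausibly look like, assuming a monotonic-clock deadline (the class and method names match the usage above; the internals are an assumption):

```python
import time

class NeverExpires:
    """Stand-in timer for 'no timeout': never reports expiry."""
    def expired(self) -> bool:
        return False

class Timer:
    """Tracks a deadline on the monotonic clock, as Timer.after() implies."""
    def __init__(self, deadline: float) -> None:
        self.deadline = deadline

    @classmethod
    def after(cls, seconds: float) -> "Timer":
        # Deadline is 'seconds' from now; monotonic() is immune to wall-clock jumps.
        return cls(time.monotonic() + seconds)

    def expired(self) -> bool:
        return time.monotonic() >= self.deadline
```

With these in place, `LockChecker(session_id, timeout=0.05).expired()` would start returning `LockTimeout` raises once 50 ms have elapsed, while `timeout=0` (falsy) selects `NeverExpires` and polls forever.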
python | dagster-io__dagster | python_modules/dagster/dagster/_config/field_utils.py | {
"start": 16812,
"end": 18055
} | class ____(int):
"""Class used to represent an environment variable in the Dagster config system.
The environment variable will be resolved to an int value when the config is
loaded.
"""
name: str
@classmethod
def create(cls, name: str) -> "IntEnvVar":
var = IntEnvVar(0)
var.name = name
return var
def __int__(self) -> int:
"""Raises an exception of the EnvVar value is directly accessed. Users should instead use
the `get_value` method, or use the EnvVar as an input to Dagster config or resources.
"""
raise _create_direct_access_exception(self.__class__, self.env_var_name)
def __str__(self) -> str:
return str(int(self))
def get_value(self, default: Optional[int] = None) -> Optional[int]:
"""Returns the value of the environment variable, or the default value if the
environment variable is not set. If no default is provided, None will be returned.
"""
value = os.getenv(self.name, default=default)
return int(value) if value else None
@property
def env_var_name(self) -> str:
"""Returns the name of the environment variable."""
return self.name
@public
| IntEnvVar |
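The core of `IntEnvVar.get_value` is a lazy read-and-coerce of an environment variable. A standalone sketch of that behavior (the variable names are hypothetical, chosen only for the demo):

```python
import os

def resolve_int_env_var(name, default=None):
    """Mirror IntEnvVar.get_value(): read the env var at call time and
    coerce to int; fall back to the default (or None) when unset."""
    value = os.getenv(name, default=default)
    return int(value) if value else None

os.environ["HYPOTHETICAL_PORT"] = "8080"
resolve_int_env_var("HYPOTHETICAL_PORT")                      # resolves to 8080
resolve_int_env_var("HYPOTHETICAL_UNSET_VAR", default=3000)   # falls back to the default
```

Note that, like the original, this resolves at call time rather than at construction, which is why direct `int()` access on the `IntEnvVar` placeholder raises instead of returning a stale value.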
python | readthedocs__readthedocs.org | readthedocs/projects/views/private.py | {
"start": 33329,
"end": 33391
} | class ____(IntegrationMixin, ListView):
pass
| IntegrationList |
python | jazzband__prettytable | tests/test_prettytable.py | {
"start": 47219,
"end": 62222
} | class ____:
colored = "\033[31mC\033[32mO\033[31mL\033[32mO\033[31mR\033[32mE\033[31mD\033[0m"
def test_color(self) -> None:
table = PrettyTable(["Field 1", "Field 2"])
table.add_row([self.colored, self.colored])
table.add_row(["nothing", "neither"])
result = table.get_string()
assert (
result.strip()
== f"""
+---------+---------+
| Field 1 | Field 2 |
+---------+---------+
| {self.colored} | {self.colored} |
| nothing | neither |
+---------+---------+
""".strip()
)
def test_reset(self) -> None:
table = PrettyTable(["Field 1", "Field 2"])
table.add_row(["abc def\033(B", "\033[31mabc def\033[m"])
table.add_row(["nothing", "neither"])
result = table.get_string()
assert (
result.strip()
== """
+---------+---------+
| Field 1 | Field 2 |
+---------+---------+
| abc def\033(B | \033[31mabc def\033[m |
| nothing | neither |
+---------+---------+
""".strip()
)
@pytest.mark.parametrize(
"loops, fields, desired_width, border, internal_border",
[
(15, ["Test table"], 20, True, False),
(16, ["Test table"], 21, True, False),
(18, ["Test table", "Test table 2"], 40, True, False),
(19, ["Test table", "Test table 2"], 41, True, False),
(21, ["Test table", "Test col 2", "Test col 3"], 50, True, False),
(22, ["Test table", "Test col 2", "Test col 3"], 51, True, False),
(19, ["Test table"], 20, False, False),
(20, ["Test table"], 21, False, False),
(25, ["Test table", "Test table 2"], 40, False, False),
(26, ["Test table", "Test table 2"], 41, False, False),
(25, ["Test table", "Test col 2", "Test col 3"], 50, False, False),
(26, ["Test table", "Test col 2", "Test col 3"], 51, False, False),
(18, ["Test table"], 20, False, True),
(19, ["Test table"], 21, False, True),
(23, ["Test table", "Test table 2"], 40, False, True),
(24, ["Test table", "Test table 2"], 41, False, True),
(22, ["Test table", "Test col 2", "Test col 3"], 50, False, True),
(23, ["Test table", "Test col 2", "Test col 3"], 51, False, True),
],
)
def test_min_table_width(
self,
loops: int,
fields: list[str],
desired_width: int,
border: bool,
internal_border: bool,
) -> None:
for col_width in range(loops):
x = prettytable.PrettyTable()
x.border = border
x.preserve_internal_border = internal_border
x.field_names = fields
x.add_row(["X" * col_width] + ["" for _ in range(len(fields) - 1)])
x.min_table_width = desired_width
t = x.get_string()
if border is False and internal_border is False:
assert [len(x) for x in t.split("\n")] == [desired_width, desired_width]
elif border is False and internal_border is True:
assert [len(x) for x in t.split("\n")] == [
desired_width,
desired_width - 1,
desired_width,
]
else:
assert [len(x) for x in t.split("\n")] == [
desired_width,
desired_width,
desired_width,
desired_width,
desired_width,
]
def test_max_table_width(self) -> None:
table = PrettyTable()
table.max_table_width = 5
table.add_row([0])
# FIXME: Table is wider than table.max_table_width
assert (
table.get_string().strip()
== """
+----+
| Fi |
+----+
| 0 |
+----+
""".strip()
)
def test_max_table_width_wide(self) -> None:
table = PrettyTable()
table.max_table_width = 52
table.add_row(
[
0,
0,
0,
0,
0,
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam "
"nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam "
"erat, sed diam voluptua",
]
)
assert (
table.get_string().strip()
== """
+---+---+---+---+---+------------------------------+
| F | F | F | F | F | Field 6 |
+---+---+---+---+---+------------------------------+
| 0 | 0 | 0 | 0 | 0 | Lorem ipsum dolor sit amet, |
| | | | | | consetetur sadipscing elitr, |
| | | | | | sed diam nonumy eirmod |
| | | | | | tempor invidunt ut labore et |
| | | | | | dolore magna aliquyam erat, |
| | | | | | sed diam voluptua |
+---+---+---+---+---+------------------------------+""".strip()
)
def test_max_table_width_wide2(self) -> None:
table = PrettyTable()
table.max_table_width = 70
table.add_row(
[
"Lorem",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
"ipsum",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
"dolor",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
]
)
assert (
table.get_string().strip()
== """
+---+-----------------+---+-----------------+---+-----------------+
| F | Field 2 | F | Field 4 | F | Field 6 |
+---+-----------------+---+-----------------+---+-----------------+
| L | Lorem ipsum | i | Lorem ipsum | d | Lorem ipsum |
| o | dolor sit amet, | p | dolor sit amet, | o | dolor sit amet, |
| r | consetetur | s | consetetur | l | consetetur |
| e | sadipscing | u | sadipscing | o | sadipscing |
| m | elitr, sed diam | m | elitr, sed diam | r | elitr, sed diam |
+---+-----------------+---+-----------------+---+-----------------+""".strip()
)
@pytest.mark.parametrize("set_width_parameter", [True, False])
def test_table_max_width_wo_header_width(self, set_width_parameter: bool) -> None:
headers = [
"A Field Name",
"B Field Name",
"D Field Name",
"E Field Name",
"F Field Name",
"G Field Name",
"H Field Name",
"I Field Name",
"J Field Name",
"K Field Name",
"L Field Name",
"M Field Name",
]
row = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
expected = """+---+---+---+---+---+---+---+---+---+---+----+----+
| A | B | D | E | F | G | H | I | J | K | L | M |
+---+---+---+---+---+---+---+---+---+---+----+----+
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
+---+---+---+---+---+---+---+---+---+---+----+----+"""
if set_width_parameter:
table = PrettyTable(headers, use_header_width=False)
else:
table = PrettyTable(headers)
table.use_header_width = False
table.add_row(row)
assert table.get_string() == expected
def test_table_width_on_init_wo_columns(self) -> None:
"""See also #272"""
table = PrettyTable(max_width=10)
table.add_row(
[
"Lorem",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
"ipsum",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
"dolor",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
]
)
assert (
table.get_string().strip()
== """
+---------+------------+---------+------------+---------+------------+
| Field 1 | Field 2 | Field 3 | Field 4 | Field 5 | Field 6 |
+---------+------------+---------+------------+---------+------------+
| Lorem | Lorem | ipsum | Lorem | dolor | Lorem |
| | ipsum | | ipsum | | ipsum |
| | dolor sit | | dolor sit | | dolor sit |
| | amet, | | amet, | | amet, |
| | consetetur | | consetetur | | consetetur |
| | sadipscing | | sadipscing | | sadipscing |
| | elitr, sed | | elitr, sed | | elitr, sed |
| | diam | | diam | | diam |
+---------+------------+---------+------------+---------+------------+""".strip()
)
def test_table_width_on_init_with_columns(self) -> None:
"""See also #272"""
table = PrettyTable(
["Field 1", "Field 2", "Field 3", "Field 4", "Field 5", "Field 6"],
max_width=10,
)
table.add_row(
[
"Lorem",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
"ipsum",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
"dolor",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
]
)
assert (
table.get_string().strip()
== """
+---------+------------+---------+------------+---------+------------+
| Field 1 | Field 2 | Field 3 | Field 4 | Field 5 | Field 6 |
+---------+------------+---------+------------+---------+------------+
| Lorem | Lorem | ipsum | Lorem | dolor | Lorem |
| | ipsum | | ipsum | | ipsum |
| | dolor sit | | dolor sit | | dolor sit |
| | amet, | | amet, | | amet, |
| | consetetur | | consetetur | | consetetur |
| | sadipscing | | sadipscing | | sadipscing |
| | elitr, sed | | elitr, sed | | elitr, sed |
| | diam | | diam | | diam |
+---------+------------+---------+------------+---------+------------+""".strip()
)
def test_table_minwidth_on_init_with_columns(self) -> None:
table = PrettyTable(["Field 1", "Field 2"], min_width=20)
table.add_row(
[
"Lorem",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
]
)
assert (
table.get_string().strip()
== """+----------------------+--------------------------------------------------------------------+
| Field 1 | Field 2 |
+----------------------+--------------------------------------------------------------------+
| Lorem | Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam |
+----------------------+--------------------------------------------------------------------+""" # noqa: E501
)
def test_table_min_max_width_on_init_with_columns(self) -> None:
table = PrettyTable(["Field 1", "Field 2"], min_width=20, max_width=40)
table.add_row(
[
"Lorem",
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam ",
]
)
assert (
table.get_string().strip()
== """+----------------------+------------------------------------------+
| Field 1 | Field 2 |
+----------------------+------------------------------------------+
| Lorem | Lorem ipsum dolor sit amet, consetetur |
| | sadipscing elitr, sed diam |
+----------------------+------------------------------------------+"""
)
def test_table_float_formatting_on_init_wo_columns(self) -> None:
"""See also #243"""
table = prettytable.PrettyTable(float_format="10.2")
table.field_names = ["Metric", "Initial sol.", "Best sol."]
table.add_rows([["foo", 1.0 / 3.0, 1.0 / 3.0]])
assert (
table.get_string().strip()
== """
+--------+--------------+------------+
| Metric | Initial sol. | Best sol. |
+--------+--------------+------------+
| foo | 0.33 | 0.33 |
+--------+--------------+------------+""".strip()
)
def test_max_table_width_wide_vrules_frame(self) -> None:
table = PrettyTable()
table.max_table_width = 52
table.vrules = VRuleStyle.FRAME
table.add_row(
[
0,
0,
0,
0,
0,
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam "
"nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam "
"erat, sed diam voluptua",
]
)
assert (
table.get_string().strip()
== """
+--------------------------------------------------+
| F F F F F Field 6 |
+--------------------------------------------------+
| 0 0 0 0 0 Lorem ipsum dolor sit amet, |
| consetetur sadipscing elitr, |
| sed diam nonumy eirmod |
| tempor invidunt ut labore et |
| dolore magna aliquyam erat, |
| sed diam voluptua |
+--------------------------------------------------+""".strip()
)
def test_max_table_width_wide_vrules_none(self) -> None:
table = PrettyTable()
table.max_table_width = 52
table.vrules = VRuleStyle.NONE
table.add_row(
[
0,
0,
0,
0,
0,
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam "
"nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam "
"erat, sed diam voluptua",
]
)
assert (
table.get_string().strip()
== """
----------------------------------------------------
F F F F F Field 6
----------------------------------------------------
0 0 0 0 0 Lorem ipsum dolor sit amet,
consetetur sadipscing elitr,
sed diam nonumy eirmod
tempor invidunt ut labore et
dolore magna aliquyam erat,
sed diam voluptua
----------------------------------------------------""".strip() # noqa: W291
)
| TestWidth |
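The width tests above all hinge on one behavior: a cell value longer than its column's computed width gets wrapped onto multiple lines. A stdlib-only sketch of that wrapping step, using `textwrap` — `wrap_cell` and the fixed width of 30 are illustrative assumptions; prettytable's real implementation first derives per-column widths from `max_table_width`, `min_width`, and `max_width` before wrapping:

```python
import textwrap

# Hypothetical stand-in for the per-column wrapping the tests assert:
# break one long cell value into lines no wider than the column width.
def wrap_cell(value: str, width: int) -> list[str]:
    return textwrap.wrap(value, width=width)

lines = wrap_cell(
    "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam",
    30,
)
# Every wrapped line fits the column; joining them recovers the text.
assert all(len(line) <= 30 for line in lines)
assert " ".join(lines) == (
    "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam"
)
```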
python | walkccc__LeetCode | solutions/1100. Find K-Length Substrings With No Repeated Characters/1100.py | {
"start": 0,
"end": 391
} | class ____:
def numKLenSubstrNoRepeats(self, s: str, k: int) -> int:
ans = 0
unique = 0
count = collections.Counter()
for i, c in enumerate(s):
count[c] += 1
if count[c] == 1:
unique += 1
if i >= k:
count[s[i - k]] -= 1
if count[s[i - k]] == 0:
unique -= 1
if unique == k:
ans += 1
return ans
| Solution |
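The masked class above is a sliding-window counter: `count` tracks character multiplicities in the current length-`k` window, `unique` tracks how many are distinct, and a window is counted exactly when `unique == k`. Exercising it directly (the class is reproduced so the sketch is self-contained; the inputs are the standard examples for this problem):

```python
import collections

class Solution:
    def numKLenSubstrNoRepeats(self, s: str, k: int) -> int:
        ans = 0
        unique = 0
        count = collections.Counter()
        for i, c in enumerate(s):
            count[c] += 1
            if count[c] == 1:
                unique += 1
            if i >= k:  # slide: drop the character leaving the window
                count[s[i - k]] -= 1
                if count[s[i - k]] == 0:
                    unique -= 1
            if unique == k:  # window holds k distinct characters
                ans += 1
        return ans

assert Solution().numKLenSubstrNoRepeats("havefunonleetcode", 5) == 6
assert Solution().numKLenSubstrNoRepeats("home", 5) == 0
```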
python | pytorch__pytorch | test/test_dynamic_shapes.py | {
"start": 54680,
"end": 71186
} | class ____(TestCase):
def _do_test(self, fn, inp1, inp2, shape_env, is_unary_fn):
with self.subTest(fn=fn, inp1=inp1, inp2=inp2, is_unary_fn=is_unary_fn):
return self._do_test2(fn, inp1, inp2, shape_env, is_unary_fn)
def _do_test2(self, fn, inp1, inp2, shape_env, is_unary_fn):
# Helper function
# NB: don't use one as that will get specialized
# TODO: We don't have to circuitously create the float, can just
# create a symfloat directly
seed_node = (create_symint(shape_env, 2) / 2.0).node
bool_seed_node = (create_symint(shape_env, 2) == 2).node
def get_sym_inp(inp):
# NB: this must come before int
if isinstance(inp, bool):
return torch.SymBool(to_node(bool_seed_node, inp))
elif isinstance(inp, int):
return torch.SymInt(to_node(seed_node, inp))
else:
return torch.SymFloat(to_node(seed_node, inp))
if fn == "float_pow":
if inp1 < 0:
return
if fn == "pow_by_natural":
if isinstance(inp1, float) or isinstance(inp2, float):
return
if inp2 < 0:
return
def maybe_xfail(inp1, inp2):
if fn == "sym_sqrt" and inp1 < 0:
# ValueError: math domain error
return self.assertRaises((ValueError,))
elif (
fn in ("float_truediv", "int_truediv", "int_floordiv", "mod")
and inp2 == 0
):
# ZeroDivisionError: division by zero
return self.assertRaises((ZeroDivisionError,))
elif fn in ["float_pow", "pow_by_natural"] and inp1 == 0 and inp2 < 0:
# ZeroDivisionError: 0.0 cannot be raised to a negative power
return self.assertRaises((ZeroDivisionError,))
elif (
                fn in ["float_pow", "pow_by_natural"]
                and inp1 < 0
                and (
                    type(inp1) in (SymInt, SymFloat) or type(inp2) in (SymInt, SymFloat)
                )
                and (type(inp1) in (SymFloat, float) or type(inp2) in (SymFloat, float))
):
# Complex result, which we do not support:
# TypeError: Cannot convert complex to float
return self.assertRaises((RuntimeError,))
elif fn in ("lshift", "rshift") and not (
isinstance(inp1, (SymInt, int)) and isinstance(inp2, (SymInt, int))
):
# TypeError: unsupported operand type(s)
return self.assertRaises((TypeError,))
elif fn in ("lshift", "rshift") and inp2 < 0:
# ValueError: math domain error
return self.assertRaises((ValueError,))
else:
return contextlib.nullcontext()
lambda_apply = method_to_operator(fn)
def guard_fn(v):
if type(v) in (SymBool, bool):
return guard_bool(v)
elif type(v) in (SymFloat, float):
return guard_float(v)
else: # SymInt, int
return guard_int(v)
# Get reference result
with maybe_xfail(inp1, inp2):
if is_unary_fn:
ref_out = lambda_apply(inp1)
else:
ref_out = lambda_apply(inp1, inp2)
# Symified first arg
sym_inp1 = get_sym_inp(inp1)
with maybe_xfail(sym_inp1, inp2):
if is_unary_fn:
out = lambda_apply(sym_inp1)
else:
out = lambda_apply(sym_inp1, inp2)
self.assertTrue(isinstance(out, (SymInt, SymFloat, SymBool)))
out = guard_fn(out)
self.assertEqual(out, ref_out)
if is_unary_fn:
return
# Symified second arg
sym_inp2 = get_sym_inp(inp2)
with maybe_xfail(inp1, sym_inp2):
out = lambda_apply(inp1, sym_inp2)
self.assertTrue(isinstance(out, (SymInt, SymFloat, SymBool)))
out = guard_fn(out)
self.assertEqual(out, ref_out)
# Symified both args
with maybe_xfail(sym_inp1, sym_inp2):
out = lambda_apply(sym_inp1, sym_inp2)
self.assertTrue(isinstance(out, (SymInt, SymFloat, SymBool)))
out = guard_fn(out)
self.assertEqual(out, ref_out)
@parametrize("fn", list(sym_node.magic_methods.keys()))
def test_bool_method(self, fn):
# sym_ite has its own tests
if fn not in sym_node.bool_magic_methods or fn == "sym_ite":
self.skipTest(f"{fn} is non-bool")
is_unary_fn = fn in sym_node.unary_methods
shape_env = ShapeEnv()
self._do_test(fn, True, False, shape_env, is_unary_fn)
@parametrize("fn", list(sym_node.magic_methods.keys()))
@parametrize("first_type", ["int", "float"])
@parametrize("second_type", ["int", "float"])
def test_method(self, fn, first_type, second_type):
if first_type == "float":
# TODO: Hmm, this looks like we skip all floats
self.skipTest(f"{fn} is not a float magic method")
if (
first_type == "int" or second_type == "int"
) and fn in sym_node.only_float_magic_methods:
self.skipTest(f"{fn} is not an int method")
if second_type == "float" and fn in ["mod"]:
self.skipTest(f"{fn} only handles int")
if fn in sym_node.bitwise_ops and (first_type != "int" or second_type != "int"):
self.skipTest(f"{fn} is a bitwise op, only handles int")
is_unary_fn = fn in sym_node.unary_methods or fn == "round"
# Second argument is ignored for unary function. So only run for one type
if is_unary_fn and second_type == "float":
self.skipTest(f"{fn} is unary and already tested")
if fn in sym_node.bool_magic_methods:
self.skipTest(f"{fn} is bool")
# Only floats here since these will be converted to int if necessary.
# We also ignore complex and bool.
values = (
0.0,
1.0,
0.5 if fn in ("sym_acos", "sym_asin") else 2.5, # avoid math domain error
)
neg_values = tuple(-x for x in values)
for inp1, inp2 in itertools.chain(
itertools.product(values, values),
itertools.product(values, neg_values),
itertools.product(neg_values, values),
itertools.product(neg_values, neg_values),
):
if first_type == "int":
inp1 = int(inp1)
if second_type == "int":
inp2 = int(inp2)
shape_env = ShapeEnv()
self._do_test(fn, inp1, inp2, shape_env, is_unary_fn)
def get_constant_bool(self, val):
return SymBool(torch._C._get_constant_bool_symnode(val))
@unittest.expectedFailure
def test_symint_hashing(self):
shape_env = ShapeEnv()
hash(create_symint(shape_env, 3))
def test_symnode_hashing(self):
shape_env = ShapeEnv()
# These all trigger specialization when hashed
hash(create_symbool(shape_env, True))
# We should be passing in float here, but create_symbol currently
# only supports int
hash(create_symfloat(shape_env, 3.0))
# NestedInt (SymInt), constant SymBool, SymNode are hashable
j1 = torch._C._get_nested_int(1, 1)
j1_copy = torch._C._get_nested_int(1, 1)
j2 = torch._C._get_nested_int(2, 1)
t = self.get_constant_bool(True)
t_copy = self.get_constant_bool(True)
f = self.get_constant_bool(False)
n = create_symint(shape_env, 3).node
m = self.get_constant_bool(True).node
self.assertIs(j1 == j1_copy, True)
self.assertEqual(hash(j1), hash(j1_copy))
self.assertIs(j1 == j2, False)
self.assertNotEqual(hash(j1), hash(j2))
self.assertIs(t == t_copy, True)
self.assertEqual(hash(t), hash(t_copy))
self.assertIs(t == f, False)
self.assertNotEqual(hash(t), hash(f))
hash(n)
hash(m)
def test_symint_deepcopy(self):
shape_env = ShapeEnv()
symnodes = (torch._C._get_nested_int(1, 1),)
deepcopied_symnodes = copy.deepcopy(symnodes)
self.assertEqual(symnodes, deepcopied_symnodes)
def test_non_symbolic_symnode(self):
j1 = torch._C._get_nested_int(1, 1)
j2 = torch._C._get_nested_int(1, 1)
j3 = torch._C._get_nested_int(3, 1)
self.assertIsInstance(j1, torch.SymInt)
self.assertNotIsInstance(j1, int)
with self.assertRaisesRegex(
RuntimeError, "add not supported by NestedIntSymNode"
):
j1 + 3
self.assertFalse(j1 == 3)
with self.assertRaisesRegex(RuntimeError, "indeterminate"):
self.assertFalse(3 >= j2)
self.assertIs(j1 == j1, True)
self.assertIs(j1 == j2, True)
self.assertIs(j1 == j3, False)
self.assertIs(j1 != j3, True)
self.assertIs(j1 != j2, False)
x = self.get_constant_bool(True)
#
# Unary
#
# op(constant SymBool)
self.assertIs(x.__sym_not__(), False)
#
# Binary
#
# op(constant SymBool, bool)
# op(constant SymBool, constant SymBool)
# op(bool, constant SymBool)
self.assertIs(operator.and_(x, True), True)
self.assertIs(operator.and_(x, x), True)
self.assertIs(operator.and_(True, x), True)
# op(symbolic SymBool, constant Symbool)
# op(constant SymBool, symbolic Symbool)
shape_env = ShapeEnv()
a = create_symint(shape_env, 2)
b = create_symint(shape_env, 2)
c = a == b # symbolic SymBool
d = self.get_constant_bool(True)
e = operator.and_(c, d)
f = operator.and_(d, c)
self.assertTrue(is_symbolic(e))
self.assertTrue(is_symbolic(f))
self.assertIs(e.node.guard_bool("", 0), True)
self.assertIs(f.node.guard_bool("", 0), True)
# Comparing sizes
sz1 = torch.Size([j1, j1, j1])
sz2 = torch.Size([j1, j1, j1])
self.assertIs(sz1 == sz2, True)
sz1 = torch.Size([3, j1, 4])
sz2 = torch.Size([3, j2, 4])
self.assertIs(sz1 == sz2, True)
self.assertIs(sz1 != sz2, False)
def test_stride_symnode(self):
shape_env = ShapeEnv()
# check everything static
t = create_fake_tensor_with_dynamic_size(
torch.ones(3, 6),
shape_env,
dynamic_sizes=[
DimDynamic.STATIC,
DimDynamic.STATIC,
],
dynamic_strides=[
DimDynamic.INFER_STRIDE,
DimDynamic.INFER_STRIDE,
],
)
self.assertTrue(all(isinstance(size, int) for size in t.size()))
self.assertTrue(all(isinstance(stride, int) for stride in t.stride()))
# check dynamic size but static dims
t = create_fake_tensor_with_dynamic_size(
torch.ones(3, 6),
shape_env,
dynamic_sizes=[
DimDynamic.DYNAMIC,
DimDynamic.DYNAMIC,
],
dynamic_strides=[
DimDynamic.INFER_STRIDE,
DimDynamic.INFER_STRIDE,
],
)
# Expect stride to be inferred
s0, s1 = t.size()
s2, s3 = t.stride()
self.assertTrue(isinstance(s0, torch.SymInt))
self.assertTrue(isinstance(s1, torch.SymInt))
self.assertTrue(isinstance(s2, torch.SymInt))
self.assertTrue(s1 == s2)
self.assertEqual(s3, 1)
# Check dynamic stride but static dims
t = create_fake_tensor_with_dynamic_size(
torch.ones(3, 6),
shape_env,
dynamic_sizes=[
DimDynamic.STATIC,
DimDynamic.STATIC,
],
dynamic_strides=[
DimDynamic.DYNAMIC,
DimDynamic.INFER_STRIDE,
],
)
s0, s1 = t.size()
s2, s3 = t.stride()
self.assertTrue(isinstance(s0, int))
self.assertTrue(isinstance(s1, int))
self.assertTrue(isinstance(s2, torch.SymInt))
self.assertTrue(isinstance(s3, int))
# Check dynamic sizes and dims, and ensure different symbol
t = create_fake_tensor_with_dynamic_size(
torch.ones(3, 6),
shape_env,
dynamic_sizes=[
DimDynamic.DYNAMIC,
DimDynamic.DYNAMIC,
],
dynamic_strides=[
DimDynamic.DYNAMIC,
DimDynamic.INFER_STRIDE,
],
)
s0, s1 = t.size()
s2, s3 = t.stride()
self.assertTrue(isinstance(s0, torch.SymInt))
self.assertTrue(isinstance(s1, torch.SymInt))
self.assertTrue(isinstance(s2, torch.SymInt))
self.assertTrue(isinstance(s3, int))
self.assertTrue(str(s1.node.expr) != str(s2.node.expr))
@fresh_cache()
@torch._dynamo.config.patch("capture_scalar_outputs", True)
@parametrize("backend", ["inductor", "eager"])
def test_dynamic_int_basic_compile(self, backend):
from torch.fx.experimental.sym_node import DynamicInt
cnt = CompileCounterWithBackend(backend)
# test scalar inputs to function
def f(x, y, z):
out = torch.tensor([x + y + z])
out = out + torch.zeros(abs(x) + 2).sum() # test out tensor construction
return out
fn = torch.compile(f, fullgraph=True, backend=cnt)
x = DynamicInt(1)
z = DynamicInt(3)
self.assertEqual(fn(x, x, z), f(1, 1, 3)) # guard: x == y
self.assertEqual(fn(2, 2, 0), f(2, 2, 0))
self.assertEqual(fn(-1, -1, 2), f(-1, -1, 2))
self.assertEqual(cnt.frame_count, 1) # no recompiles
self.assertEqual(fn(3, 4, 5), f(3, 4, 5)) # now we recompile
self.assertEqual(cnt.frame_count, 2)
# test nn module property
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
self.i = DynamicInt(1)
def forward(self, x):
return torch.tensor([x + self.i])
cnt.clear()
m = Foo()
mc = torch.compile(m, backend=cnt, fullgraph=True)
self.assertEqual(mc(DynamicInt(0)), m(0))
mc.i = -2 # override attribute
self.assertEqual(mc(-1), m(-1))
self.assertEqual(cnt.frame_count, 1)
def test_dynamic_int_eager_usage(self):
from torch.fx.experimental.sym_node import DynamicInt
w = DynamicInt(-1)
x = DynamicInt(0)
y = DynamicInt(1)
z = DynamicInt(2)
def check(l, r):
self.assertTrue(isinstance(l, DynamicInt))
self.assertEqual(l, r)
# test arithmetic
check(2 * y + z, 4)
check((10 - z) // 2, 4)
check(1 // z, 0)
check(-w + w**2, 2)
check(x % z, 0)
check(1 << z, 4)
check(z | y, 3)
check(min(y, z), 1)
self.assertTrue(z > -2)
with self.assertRaises(ZeroDivisionError):
y % x
# math, numpy
self.assertEqual(math.cos(x), y)
self.assertEqual(math.prod([z, z], start=z), 8)
self.assertEqual(np.arange(z)[y], 1)
self.assertTrue(np.allclose(np.ones([y, z]).sum(axis=x), np.ones(z)))
# test conversions
self.assertTrue(isinstance(x + 2, int))
self.assertTrue(isinstance(x + 2, DynamicInt))
self.assertEqual(y / 2.0, 0.5) # this could return DynamicFloat in future
self.assertEqual(float(z), 2.0)
self.assertFalse(bool(x))
self.assertEqual(DynamicInt(x).real, x.real)
# torch functions, scalar inputs
self.assertEqual(torch.arange(z)[:w][x], 0)
self.assertEqual(torch.add(torch.tensor(w), torch.tensor(w), alpha=z), -3)
self.assertEqual(
list(torch.nn.Linear(z, y)(torch.randn(z * 2, z)).shape), [4, 1]
)
self.assertEqual(z * torch.ones(z).sum(dim=x), 4)
instantiate_parametrized_tests(TestSymNumberMagicMethods)
| TestSymNumberMagicMethods |
python | kubernetes-client__python | kubernetes/client/models/v1_device_constraint.py | {
"start": 383,
"end": 8400
} | class ____(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'distinct_attribute': 'str',
'match_attribute': 'str',
'requests': 'list[str]'
}
attribute_map = {
'distinct_attribute': 'distinctAttribute',
'match_attribute': 'matchAttribute',
'requests': 'requests'
}
def __init__(self, distinct_attribute=None, match_attribute=None, requests=None, local_vars_configuration=None): # noqa: E501
"""V1DeviceConstraint - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._distinct_attribute = None
self._match_attribute = None
self._requests = None
self.discriminator = None
if distinct_attribute is not None:
self.distinct_attribute = distinct_attribute
if match_attribute is not None:
self.match_attribute = match_attribute
if requests is not None:
self.requests = requests
@property
def distinct_attribute(self):
"""Gets the distinct_attribute of this V1DeviceConstraint. # noqa: E501
DistinctAttribute requires that all devices in question have this attribute and that its type and value are unique across those devices. This acts as the inverse of MatchAttribute. This constraint is used to avoid allocating multiple requests to the same device by ensuring attribute-level differentiation. This is useful for scenarios where resource requests must be fulfilled by separate physical devices. For example, a container requests two network interfaces that must be allocated from two different physical NICs. # noqa: E501
:return: The distinct_attribute of this V1DeviceConstraint. # noqa: E501
:rtype: str
"""
return self._distinct_attribute
@distinct_attribute.setter
def distinct_attribute(self, distinct_attribute):
"""Sets the distinct_attribute of this V1DeviceConstraint.
DistinctAttribute requires that all devices in question have this attribute and that its type and value are unique across those devices. This acts as the inverse of MatchAttribute. This constraint is used to avoid allocating multiple requests to the same device by ensuring attribute-level differentiation. This is useful for scenarios where resource requests must be fulfilled by separate physical devices. For example, a container requests two network interfaces that must be allocated from two different physical NICs. # noqa: E501
:param distinct_attribute: The distinct_attribute of this V1DeviceConstraint. # noqa: E501
:type: str
"""
self._distinct_attribute = distinct_attribute
@property
def match_attribute(self):
"""Gets the match_attribute of this V1DeviceConstraint. # noqa: E501
MatchAttribute requires that all devices in question have this attribute and that its type and value are the same across those devices. For example, if you specified \"dra.example.com/numa\" (a hypothetical example!), then only devices in the same NUMA node will be chosen. A device which does not have that attribute will not be chosen. All devices should use a value of the same type for this attribute because that is part of its specification, but if one device doesn't, then it also will not be chosen. Must include the domain qualifier. # noqa: E501
:return: The match_attribute of this V1DeviceConstraint. # noqa: E501
:rtype: str
"""
return self._match_attribute
@match_attribute.setter
def match_attribute(self, match_attribute):
"""Sets the match_attribute of this V1DeviceConstraint.
MatchAttribute requires that all devices in question have this attribute and that its type and value are the same across those devices. For example, if you specified \"dra.example.com/numa\" (a hypothetical example!), then only devices in the same NUMA node will be chosen. A device which does not have that attribute will not be chosen. All devices should use a value of the same type for this attribute because that is part of its specification, but if one device doesn't, then it also will not be chosen. Must include the domain qualifier. # noqa: E501
:param match_attribute: The match_attribute of this V1DeviceConstraint. # noqa: E501
:type: str
"""
self._match_attribute = match_attribute
@property
def requests(self):
"""Gets the requests of this V1DeviceConstraint. # noqa: E501
Requests is a list of the one or more requests in this claim which must co-satisfy this constraint. If a request is fulfilled by multiple devices, then all of the devices must satisfy the constraint. If this is not specified, this constraint applies to all requests in this claim. References to subrequests must include the name of the main request and may include the subrequest using the format <main request>[/<subrequest>]. If just the main request is given, the constraint applies to all subrequests. # noqa: E501
:return: The requests of this V1DeviceConstraint. # noqa: E501
:rtype: list[str]
"""
return self._requests
@requests.setter
def requests(self, requests):
"""Sets the requests of this V1DeviceConstraint.
Requests is a list of the one or more requests in this claim which must co-satisfy this constraint. If a request is fulfilled by multiple devices, then all of the devices must satisfy the constraint. If this is not specified, this constraint applies to all requests in this claim. References to subrequests must include the name of the main request and may include the subrequest using the format <main request>[/<subrequest>]. If just the main request is given, the constraint applies to all subrequests. # noqa: E501
:param requests: The requests of this V1DeviceConstraint. # noqa: E501
:type: list[str]
"""
self._requests = requests
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, V1DeviceConstraint):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, V1DeviceConstraint):
return True
return self.to_dict() != other.to_dict()
| V1DeviceConstraint |
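Like all OpenAPI-generated models in this client, `V1DeviceConstraint` defines equality and `repr` in terms of a plain-dict view of its declared attributes. A minimal stdlib sketch of that pattern — `MiniModel` and its single field are hypothetical stand-ins for the generated boilerplate, not the real class:

```python
import pprint

class MiniModel:
    """Sketch of the generated-model pattern: equality and repr both
    go through a plain-dict view of the declared attributes."""

    openapi_types = {"requests": "list[str]"}

    def __init__(self, requests=None):
        self._requests = requests

    def to_dict(self):
        # collect every declared attribute into a plain dict
        return {attr: getattr(self, "_" + attr) for attr in self.openapi_types}

    def __repr__(self):
        return pprint.pformat(self.to_dict())

    def __eq__(self, other):
        if not isinstance(other, MiniModel):
            return False
        return self.to_dict() == other.to_dict()

a = MiniModel(requests=["gpu"])
b = MiniModel(requests=["gpu"])
assert a == b                                  # structural equality
assert a.to_dict() == {"requests": ["gpu"]}
```

Routing `__eq__` through `to_dict()` keeps equality structural and recursive, which is why the generated `to_dict` also unwraps nested models and lists.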
python | pytorch__pytorch | test/distributed/fsdp/test_fsdp_fine_tune.py | {
"start": 1187,
"end": 1324
} | class ____(nn.Linear):
def forward(self, frozen_input, learnable_input):
return super().forward(frozen_input)
| LinearUnusedInput |
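`LinearUnusedInput` widens `nn.Linear.forward` to accept a second argument and then forwards only the first, so the learnable input never reaches the parent's computation. The same subclassing pattern, sketched without torch — `Formatter` here is a made-up stand-in:

```python
class Formatter:
    def render(self, value):
        return f"<{value}>"

class FormatterUnusedInput(Formatter):
    # widen the signature, but forward only what the parent understands
    def render(self, value, unused_input):
        return super().render(value)

assert FormatterUnusedInput().render("x", object()) == "<x>"
```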
python | getsentry__sentry | tests/sentry/monitors/endpoints/test_organization_monitor_index_count.py | {
"start": 131,
"end": 2009
} | class ____(MonitorTestCase):
endpoint = "sentry-api-0-organization-monitor-index-count"
def setUp(self) -> None:
super().setUp()
self.login_as(self.user)
def test_simple(self) -> None:
self._create_monitor(name="Active Monitor 1")
self._create_monitor(name="Active Monitor 2")
self._create_monitor(name="Disabled Monitor", status=ObjectStatus.DISABLED)
# Monitors pending deletion should be excluded
self._create_monitor(name="Pending Deletion", status=ObjectStatus.PENDING_DELETION)
response = self.get_success_response(self.organization.slug)
assert response.data == {
"counts": {
"total": 3,
"active": 2,
"disabled": 1,
},
}
def test_filtered_by_environment(self) -> None:
# Create monitors with different environments
monitor1 = self._create_monitor(name="Monitor 1")
monitor2 = self._create_monitor(name="Monitor 2")
monitor3 = self._create_monitor(name="Monitor 3", status=ObjectStatus.DISABLED)
self._create_monitor_environment(monitor1, name="production")
self._create_monitor_environment(monitor2, name="staging")
self._create_monitor_environment(monitor3, name="production")
response = self.get_success_response(self.organization.slug, environment=["production"])
assert response.data == {
"counts": {
"total": 2,
"active": 1,
"disabled": 1,
},
}
response = self.get_success_response(self.organization.slug, environment=["staging"])
assert response.data == {
"counts": {
"total": 1,
"active": 1,
"disabled": 0,
},
}
| OrganizationMonitorsCountTest |
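`test_simple` pins down the aggregation contract: monitors pending deletion are excluded from every bucket, and the remaining ones split into active and disabled counts that sum to the total. A hypothetical stdlib sketch of that aggregation — the dict-based monitor records are illustrative only; the real endpoint queries the ORM:

```python
from collections import Counter

monitors = [
    {"name": "Active Monitor 1", "status": "active"},
    {"name": "Active Monitor 2", "status": "active"},
    {"name": "Disabled Monitor", "status": "disabled"},
    {"name": "Pending Deletion", "status": "pending_deletion"},
]

# Monitors pending deletion are excluded before any counting.
visible = [m for m in monitors if m["status"] != "pending_deletion"]
by_status = Counter(m["status"] for m in visible)
counts = {
    "total": len(visible),
    "active": by_status["active"],
    "disabled": by_status["disabled"],
}
assert counts == {"total": 3, "active": 2, "disabled": 1}
```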
python | getsentry__sentry | src/sentry/rules/base.py | {
"start": 3360,
"end": 3800
} | class ____:
def __init__(
self,
is_new: bool,
is_regression: bool,
is_new_group_environment: bool,
has_reappeared: bool,
has_escalated: bool,
) -> None:
self.is_new = is_new
self.is_regression = is_regression
self.is_new_group_environment = is_new_group_environment
self.has_reappeared = has_reappeared
self.has_escalated = has_escalated
| EventState |
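`EventState` is a plain container for the boolean flags that rule conditions read. Constructing one is direct; the class is reproduced verbatim so the sketch is self-contained:

```python
class EventState:
    def __init__(
        self,
        is_new: bool,
        is_regression: bool,
        is_new_group_environment: bool,
        has_reappeared: bool,
        has_escalated: bool,
    ) -> None:
        self.is_new = is_new
        self.is_regression = is_regression
        self.is_new_group_environment = is_new_group_environment
        self.has_reappeared = has_reappeared
        self.has_escalated = has_escalated

# e.g. a brand-new event in a new environment, with no regression,
# reappearance, or escalation
state = EventState(
    is_new=True,
    is_regression=False,
    is_new_group_environment=True,
    has_reappeared=False,
    has_escalated=False,
)
assert state.is_new and not state.is_regression
```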
python | simplejson__simplejson | simplejson/tests/test_dump.py | {
"start": 616,
"end": 10570
} | class ____(TestCase):
def test_dump(self):
sio = StringIO()
json.dump({}, sio)
self.assertEqual(sio.getvalue(), '{}')
def test_constants(self):
for c in [None, True, False]:
self.assertTrue(json.loads(json.dumps(c)) is c)
self.assertTrue(json.loads(json.dumps([c]))[0] is c)
self.assertTrue(json.loads(json.dumps({'a': c}))['a'] is c)
def test_stringify_key(self):
items = [(b('bytes'), 'bytes'),
(1.0, '1.0'),
(10, '10'),
(True, 'true'),
(False, 'false'),
(None, 'null'),
(long_type(100), '100')]
for k, expect in items:
self.assertEqual(
json.loads(json.dumps({k: expect})),
{expect: expect})
self.assertEqual(
json.loads(json.dumps({k: expect}, sort_keys=True)),
{expect: expect})
self.assertRaises(TypeError, json.dumps, {json: 1})
for v in [{}, {'other': 1}, {b('derp'): 1, 'herp': 2}]:
for sort_keys in [False, True]:
v0 = dict(v)
v0[json] = 1
v1 = dict((as_text_type(key), val) for (key, val) in v.items())
self.assertEqual(
json.loads(json.dumps(v0, skipkeys=True, sort_keys=sort_keys)),
v1)
self.assertEqual(
json.loads(json.dumps({'': v0}, skipkeys=True, sort_keys=sort_keys)),
{'': v1})
self.assertEqual(
json.loads(json.dumps([v0], skipkeys=True, sort_keys=sort_keys)),
[v1])
def test_dumps(self):
self.assertEqual(json.dumps({}), '{}')
def test_encode_truefalse(self):
self.assertEqual(json.dumps(
{True: False, False: True}, sort_keys=True),
'{"false": true, "true": false}')
self.assertEqual(
# load first because the keys are not sorted
json.loads(json.dumps({'k1': {False: 5}, 'k2': {0: 5}})),
{'k1': {'false': 5}, 'k2': {'0': 5}},
)
self.assertEqual(
json.dumps(
{2: 3.0,
4.0: long_type(5),
False: 1,
long_type(6): True,
"7": 0},
sort_keys=True),
'{"2": 3.0, "4.0": 5, "6": true, "7": 0, "false": 1}')
def test_ordered_dict(self):
# http://bugs.python.org/issue6105
items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)]
s = json.dumps(json.OrderedDict(items))
self.assertEqual(
s,
'{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}')
def test_indent_unknown_type_acceptance(self):
"""
A test against the regression mentioned at `github issue 29`_.
The indent parameter should accept any type which pretends to be
an instance of int or long when it comes to being multiplied by
strings, even if it is not actually an int or long, for
backwards compatibility.
.. _github issue 29:
http://github.com/simplejson/simplejson/issue/29
"""
class AwesomeInt(object):
"""An awesome reimplementation of integers"""
def __init__(self, *args, **kwargs):
if len(args) > 0:
# [construct from literals, objects, etc.]
# ...
# Finally, if args[0] is an integer, store it
if isinstance(args[0], int):
self._int = args[0]
# [various methods]
def __mul__(self, other):
# [various ways to multiply AwesomeInt objects]
# ... finally, if the right-hand operand is not awesome enough,
# try to do a normal integer multiplication
if hasattr(self, '_int'):
return self._int * other
else:
raise NotImplementedError("To do non-awesome things with"
" this object, please construct it from an integer!")
s = json.dumps([0, 1, 2], indent=AwesomeInt(3))
self.assertEqual(s, '[\n 0,\n 1,\n 2\n]')
def test_accumulator(self):
# the C API uses an accumulator that collects after 100,000 appends
lst = [0] * 100000
self.assertEqual(json.loads(json.dumps(lst)), lst)
def test_sort_keys(self):
# https://github.com/simplejson/simplejson/issues/106
for num_keys in range(2, 32):
p = dict((str(x), x) for x in range(num_keys))
sio = StringIO()
json.dump(p, sio, sort_keys=True)
self.assertEqual(sio.getvalue(), json.dumps(p, sort_keys=True))
self.assertEqual(json.loads(sio.getvalue()), p)
def test_misbehaving_text_subtype(self):
# https://github.com/simplejson/simplejson/issues/185
text = "this is some text"
self.assertEqual(
json.dumps(MisbehavingTextSubtype(text)),
json.dumps(text)
)
self.assertEqual(
json.dumps([MisbehavingTextSubtype(text)]),
json.dumps([text])
)
self.assertEqual(
json.dumps({MisbehavingTextSubtype(text): 42}),
json.dumps({text: 42})
)
def test_misbehaving_bytes_subtype(self):
data = b("this is some data \xe2\x82\xac")
self.assertEqual(
json.dumps(MisbehavingBytesSubtype(data)),
json.dumps(data)
)
self.assertEqual(
json.dumps([MisbehavingBytesSubtype(data)]),
json.dumps([data])
)
self.assertEqual(
json.dumps({MisbehavingBytesSubtype(data): 42}),
json.dumps({data: 42})
)
def test_bytes_toplevel(self):
self.assertEqual(json.dumps(b('\xe2\x82\xac')), r'"\u20ac"')
self.assertRaises(UnicodeDecodeError, json.dumps, b('\xa4'))
self.assertEqual(json.dumps(b('\xa4'), encoding='iso-8859-1'),
r'"\u00a4"')
self.assertEqual(json.dumps(b('\xa4'), encoding='iso-8859-15'),
r'"\u20ac"')
if PY3:
self.assertRaises(TypeError, json.dumps, b('\xe2\x82\xac'),
encoding=None)
self.assertRaises(TypeError, json.dumps, b('\xa4'),
encoding=None)
self.assertEqual(json.dumps(b('\xa4'), encoding=None,
default=decode_iso_8859_15),
r'"\u20ac"')
else:
self.assertEqual(json.dumps(b('\xe2\x82\xac'), encoding=None),
r'"\u20ac"')
self.assertRaises(UnicodeDecodeError, json.dumps, b('\xa4'),
encoding=None)
self.assertRaises(UnicodeDecodeError, json.dumps, b('\xa4'),
encoding=None, default=decode_iso_8859_15)
def test_bytes_nested(self):
self.assertEqual(json.dumps([b('\xe2\x82\xac')]), r'["\u20ac"]')
self.assertRaises(UnicodeDecodeError, json.dumps, [b('\xa4')])
self.assertEqual(json.dumps([b('\xa4')], encoding='iso-8859-1'),
r'["\u00a4"]')
self.assertEqual(json.dumps([b('\xa4')], encoding='iso-8859-15'),
r'["\u20ac"]')
if PY3:
self.assertRaises(TypeError, json.dumps, [b('\xe2\x82\xac')],
encoding=None)
self.assertRaises(TypeError, json.dumps, [b('\xa4')],
encoding=None)
self.assertEqual(json.dumps([b('\xa4')], encoding=None,
default=decode_iso_8859_15),
r'["\u20ac"]')
else:
self.assertEqual(json.dumps([b('\xe2\x82\xac')], encoding=None),
r'["\u20ac"]')
self.assertRaises(UnicodeDecodeError, json.dumps, [b('\xa4')],
encoding=None)
self.assertRaises(UnicodeDecodeError, json.dumps, [b('\xa4')],
encoding=None, default=decode_iso_8859_15)
def test_bytes_key(self):
self.assertEqual(json.dumps({b('\xe2\x82\xac'): 42}), r'{"\u20ac": 42}')
self.assertRaises(UnicodeDecodeError, json.dumps, {b('\xa4'): 42})
self.assertEqual(json.dumps({b('\xa4'): 42}, encoding='iso-8859-1'),
r'{"\u00a4": 42}')
self.assertEqual(json.dumps({b('\xa4'): 42}, encoding='iso-8859-15'),
r'{"\u20ac": 42}')
if PY3:
self.assertRaises(TypeError, json.dumps, {b('\xe2\x82\xac'): 42},
encoding=None)
self.assertRaises(TypeError, json.dumps, {b('\xa4'): 42},
encoding=None)
self.assertRaises(TypeError, json.dumps, {b('\xa4'): 42},
encoding=None, default=decode_iso_8859_15)
self.assertEqual(json.dumps({b('\xa4'): 42}, encoding=None,
skipkeys=True),
r'{}')
else:
self.assertEqual(json.dumps({b('\xe2\x82\xac'): 42}, encoding=None),
r'{"\u20ac": 42}')
self.assertRaises(UnicodeDecodeError, json.dumps, {b('\xa4'): 42},
encoding=None)
self.assertRaises(UnicodeDecodeError, json.dumps, {b('\xa4'): 42},
encoding=None, default=decode_iso_8859_15)
self.assertRaises(UnicodeDecodeError, json.dumps, {b('\xa4'): 42},
encoding=None, skipkeys=True)
| TestDump |
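The `TestDump` row above exercises `sort_keys` and streaming via `dump`. A minimal sketch of those two behaviors, using the stdlib `json` module as a stand-in for simplejson's mirror-image API:

```python
import io
import json

data = {"b": 2, "a": 1, "c": 3}

# sort_keys=True yields deterministic, alphabetically ordered output,
# which is what test_sort_keys compares against
s = json.dumps(data, sort_keys=True)

# dump() writes the same serialization to any file-like object
buf = io.StringIO()
json.dump(data, buf, sort_keys=True)

assert s == buf.getvalue() == '{"a": 1, "b": 2, "c": 3}'
```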
python | dagster-io__dagster | python_modules/libraries/dagster-airbyte/dagster_airbyte/managed/generated/sources.py | {
"start": 260566,
"end": 263405
} | class ____(GeneratedAirbyteSource):
class OAuth:
@public
def __init__(
self,
client_id: str,
client_secret: str,
refresh_token: str,
access_token: Optional[str] = None,
):
self.auth_type = "Client"
self.client_id = check.str_param(client_id, "client_id")
self.client_secret = check.str_param(client_secret, "client_secret")
self.access_token = check.opt_str_param(access_token, "access_token")
self.refresh_token = check.str_param(refresh_token, "refresh_token")
class ServiceAccountKeyAuthentication:
@public
def __init__(self, service_account_info: str, email: str):
self.auth_type = "Service"
self.service_account_info = check.str_param(
service_account_info, "service_account_info"
)
self.email = check.str_param(email, "email")
@public
def __init__(
self,
name: str,
site_urls: list[str],
start_date: str,
authorization: Union[
"GoogleSearchConsoleSource.OAuth",
"GoogleSearchConsoleSource.ServiceAccountKeyAuthentication",
],
end_date: Optional[str] = None,
custom_reports: Optional[str] = None,
):
"""Airbyte Source for Google Search Console.
Documentation can be found at https://docs.airbyte.com/integrations/sources/google-search-console
Args:
name (str): The name of the destination.
site_urls (List[str]): The URLs of the website property attached to your GSC account. Read more here.
start_date (str): UTC date in the format 2017-01-25. Any data before this date will not be replicated.
end_date (Optional[str]): UTC date in the format 2017-01-25. Any data after this date will not be replicated. Must be greater or equal to the start date field.
custom_reports (Optional[str]): A JSON array describing the custom reports you want to sync from Google Search Console. See the docs for more information about the exact format you can use to fill out this field.
"""
self.site_urls = check.list_param(site_urls, "site_urls", str)
self.start_date = check.str_param(start_date, "start_date")
self.end_date = check.opt_str_param(end_date, "end_date")
self.authorization = check.inst_param(
authorization,
"authorization",
(
GoogleSearchConsoleSource.OAuth,
GoogleSearchConsoleSource.ServiceAccountKeyAuthentication,
),
)
self.custom_reports = check.opt_str_param(custom_reports, "custom_reports")
super().__init__("Google Search Console", name)
| GoogleSearchConsoleSource |
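The generated source above leans on dagster's `check.str_param` / `check.opt_str_param` helpers for constructor validation. A simplified sketch of that pattern (the real helpers live in `dagster._check`; the semantics below are an assumption for illustration):

```python
def str_param(obj, name):
    """Return obj if it is a str, else raise with a descriptive message."""
    if not isinstance(obj, str):
        raise TypeError(f"Param {name!r} must be a str, got {type(obj).__name__}")
    return obj


def opt_str_param(obj, name):
    """Like str_param, but None passes through unchanged."""
    return None if obj is None else str_param(obj, name)


assert str_param("2017-01-25", "start_date") == "2017-01-25"
assert opt_str_param(None, "end_date") is None

try:
    str_param(42, "start_date")
except TypeError as e:
    msg = str(e)
assert "start_date" in msg
```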
python | pypa__packaging | tests/test_markers.py | {
"start": 4676,
"end": 16246
} | class ____:
@pytest.mark.parametrize(
"marker_string",
[
"{} {} {!r}".format(*i)
for i in itertools.product(VARIABLES, OPERATORS, VALUES)
]
+ [
"{2!r} {1} {0}".format(*i)
for i in itertools.product(VARIABLES, OPERATORS, VALUES)
],
)
def test_parses_valid(self, marker_string: str) -> None:
Marker(marker_string)
@pytest.mark.parametrize(
"marker_string",
[
"this_isnt_a_real_variable >= '1.0'",
"python_version",
"(python_version)",
"python_version >= 1.0 and (python_version)",
'(python_version == "2.7" and os_name == "linux"',
'(python_version == "2.7") with random text',
],
)
def test_parses_invalid(self, marker_string: str) -> None:
with pytest.raises(InvalidMarker):
Marker(marker_string)
@pytest.mark.parametrize(
("marker_string", "expected"),
[
# Test the different quoting rules
("python_version == '2.7'", 'python_version == "2.7"'),
('python_version == "2.7"', 'python_version == "2.7"'),
# Test and/or expressions
(
'python_version == "2.7" and os_name == "linux"',
'python_version == "2.7" and os_name == "linux"',
),
(
'python_version == "2.7" or os_name == "linux"',
'python_version == "2.7" or os_name == "linux"',
),
(
'python_version == "2.7" and os_name == "linux" or '
'sys_platform == "win32"',
'python_version == "2.7" and os_name == "linux" or '
'sys_platform == "win32"',
),
# Test nested expressions and grouping with ()
('(python_version == "2.7")', 'python_version == "2.7"'),
(
'(python_version == "2.7" and sys_platform == "win32")',
'python_version == "2.7" and sys_platform == "win32"',
),
(
'python_version == "2.7" and (sys_platform == "win32" or '
'sys_platform == "linux")',
'python_version == "2.7" and (sys_platform == "win32" or '
'sys_platform == "linux")',
),
],
)
def test_str_repr_eq_hash(self, marker_string: str, expected: str) -> None:
m = Marker(marker_string)
assert str(m) == expected
assert repr(m) == f"<Marker({str(m)!r})>"
# Objects created from the same string should be equal.
assert m == Marker(marker_string)
# Objects created from the equivalent strings should also be equal.
assert m == Marker(expected)
# Objects created from the same string should have the same hash.
assert hash(Marker(marker_string)) == hash(Marker(marker_string))
# Objects created from equivalent strings should also have the same hash.
assert hash(Marker(marker_string)) == hash(Marker(expected))
@pytest.mark.parametrize(
("example1", "example2"),
[
# Test trivial comparisons.
('python_version == "2.7"', 'python_version == "3.7"'),
(
'python_version == "2.7"',
'python_version == "2.7" and os_name == "linux"',
),
(
'python_version == "2.7"',
'(python_version == "2.7" and os_name == "linux")',
),
# Test different precedence.
(
'python_version == "2.7" and (os_name == "linux" or '
'sys_platform == "win32")',
'python_version == "2.7" and os_name == "linux" or '
'sys_platform == "win32"',
),
],
)
def test_different_markers_different_hashes(
self, example1: str, example2: str
) -> None:
marker1, marker2 = Marker(example1), Marker(example2)
# Markers created from strings that are not equivalent should differ.
assert marker1 != marker2
# Different Marker objects should have different hashes.
assert hash(marker1) != hash(marker2)
def test_compare_markers_to_other_objects(self) -> None:
# Markers should not be comparable to other kinds of objects.
assert Marker("os_name == 'nt'") != "os_name == 'nt'"
def test_environment_assumes_empty_extra(self) -> None:
assert Marker('extra == "im_valid"').evaluate() is False
def test_environment_with_extra_none(self) -> None:
# GIVEN
marker_str = 'extra == "im_valid"'
# Pretend that this is dict[str, str], even though it's not. This is a
# test for being bug-for-bug compatible with the older implementation.
environment = cast("dict[str, str]", {"extra": None})
# WHEN
marker = Marker(marker_str)
# THEN
assert marker.evaluate(environment) is False
@pytest.mark.parametrize(
("marker_string", "environment", "expected"),
[
(f"os_name == '{os.name}'", None, True),
("os_name == 'foo'", {"os_name": "foo"}, True),
("os_name == 'foo'", {"os_name": "bar"}, False),
("'2.7' in python_version", {"python_version": "2.7.5"}, True),
("'2.7' not in python_version", {"python_version": "2.7"}, False),
(
"os_name == 'foo' and python_version ~= '2.7.0'",
{"os_name": "foo", "python_version": "2.7.6"},
True,
),
(
"python_version ~= '2.7.0' and (os_name == 'foo' or os_name == 'bar')",
{"os_name": "foo", "python_version": "2.7.4"},
True,
),
(
"python_version ~= '2.7.0' and (os_name == 'foo' or os_name == 'bar')",
{"os_name": "bar", "python_version": "2.7.4"},
True,
),
(
"python_version ~= '2.7.0' and (os_name == 'foo' or os_name == 'bar')",
{"os_name": "other", "python_version": "2.7.4"},
False,
),
("extra == 'security'", {"extra": "quux"}, False),
("extra == 'security'", {"extra": "security"}, True),
("extra == 'SECURITY'", {"extra": "security"}, True),
("extra == 'security'", {"extra": "SECURITY"}, True),
("extra == 'pep-685-norm'", {"extra": "PEP_685...norm"}, True),
(
"extra == 'Different.punctuation..is...equal'",
{"extra": "different__punctuation_is_EQUAL"},
True,
),
],
)
def test_evaluates(
self, marker_string: str, environment: dict[str, str] | None, expected: bool
) -> None:
args = () if environment is None else (environment,)
assert Marker(marker_string).evaluate(*args) == expected
@pytest.mark.parametrize(
"marker_string",
[
"{} {} {!r}".format(*i)
for i in itertools.product(PEP_345_VARIABLES, OPERATORS, VALUES)
]
+ [
"{2!r} {1} {0}".format(*i)
for i in itertools.product(PEP_345_VARIABLES, OPERATORS, VALUES)
],
)
def test_parses_pep345_valid(self, marker_string: str) -> None:
Marker(marker_string)
@pytest.mark.parametrize(
("marker_string", "environment", "expected"),
[
(f"os.name == '{os.name}'", None, True),
("sys.platform == 'win32'", {"sys_platform": "linux2"}, False),
("platform.version in 'Ubuntu'", {"platform_version": "#39"}, False),
("platform.machine=='x86_64'", {"platform_machine": "x86_64"}, True),
(
"platform.python_implementation=='Jython'",
{"platform_python_implementation": "CPython"},
False,
),
(
"python_version == '2.5' and platform.python_implementation!= 'Jython'",
{"python_version": "2.7"},
False,
),
],
)
def test_evaluate_pep345_markers(
self, marker_string: str, environment: dict[str, str] | None, expected: bool
) -> None:
args = () if environment is None else (environment,)
assert Marker(marker_string).evaluate(*args) == expected
@pytest.mark.parametrize(
"marker_string",
[
"{} {} {!r}".format(*i)
for i in itertools.product(SETUPTOOLS_VARIABLES, OPERATORS, VALUES)
]
+ [
"{2!r} {1} {0}".format(*i)
for i in itertools.product(SETUPTOOLS_VARIABLES, OPERATORS, VALUES)
],
)
def test_parses_setuptools_legacy_valid(self, marker_string: str) -> None:
Marker(marker_string)
def test_evaluate_setuptools_legacy_markers(self) -> None:
marker_string = "python_implementation=='Jython'"
args = ({"platform_python_implementation": "CPython"},)
assert Marker(marker_string).evaluate(*args) is False
def test_extra_str_normalization(self) -> None:
raw_name = "S_P__A_M"
normalized_name = "s-p-a-m"
lhs = f"{raw_name!r} == extra"
rhs = f"extra == {raw_name!r}"
assert str(Marker(lhs)) == f'"{normalized_name}" == extra'
assert str(Marker(rhs)) == f'extra == "{normalized_name}"'
def test_python_full_version_untagged_user_provided(self) -> None:
"""A user-provided python_full_version ending with a + is also repaired."""
assert Marker("python_full_version < '3.12'").evaluate(
{"python_full_version": "3.11.1+"}
)
def test_python_full_version_untagged(self) -> None:
with mock.patch("platform.python_version", return_value="3.11.1+"):
assert Marker("python_full_version < '3.12'").evaluate()
@pytest.mark.parametrize("variable", ["extras", "dependency_groups"])
@pytest.mark.parametrize(
("expression", "result"),
[
pytest.param('"foo" in {0}', True, id="value-in-foo"),
pytest.param('"bar" in {0}', True, id="value-in-bar"),
pytest.param('"baz" in {0}', False, id="value-not-in"),
pytest.param('"baz" not in {0}', True, id="value-not-in-negated"),
pytest.param('"foo" in {0} and "bar" in {0}', True, id="and-in"),
pytest.param('"foo" in {0} or "bar" in {0}', True, id="or-in"),
pytest.param(
'"baz" in {0} and "foo" in {0}', False, id="short-circuit-and"
),
pytest.param('"foo" in {0} or "baz" in {0}', True, id="short-circuit-or"),
pytest.param('"Foo" in {0}', True, id="case-sensitive"),
],
)
def test_extras_and_dependency_groups(
self, variable: str, expression: str, result: bool
) -> None:
environment = {variable: {"foo", "bar"}}
assert Marker(expression.format(variable)).evaluate(environment) == result
@pytest.mark.parametrize("variable", ["extras", "dependency_groups"])
def test_extras_and_dependency_groups_disallowed(self, variable: str) -> None:
marker = Marker(f'"foo" in {variable}')
assert not marker.evaluate(context="lock_file")
with pytest.raises(KeyError):
marker.evaluate()
with pytest.raises(KeyError):
marker.evaluate(context="requirement")
| TestMarker |
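The evaluation semantics the `TestMarker` row checks — substituting an environment dict into variable/operator/value clauses — can be sketched for the simple equality and containment cases. This is a deliberately simplified string-only evaluator, not the real `packaging.markers` implementation (which also does version-aware comparison for operators like `~=` and `<`):

```python
import operator

OPS = {
    "==": operator.eq,
    "!=": operator.ne,
    "in": lambda a, b: a in b,
    "not in": lambda a, b: a not in b,
}


def evaluate_clause(lhs, op, rhs, environment):
    """Evaluate one marker clause; each side is either a variable name
    present in the environment or a literal string."""
    left = environment.get(lhs, lhs)
    right = environment.get(rhs, rhs)
    return OPS[op](left, right)


env = {"python_version": "2.7.5", "os_name": "linux"}
assert evaluate_clause("os_name", "==", "linux", env) is True
assert evaluate_clause("2.7", "in", "python_version", env) is True
assert evaluate_clause("2.7", "not in", "python_version", env) is False
```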
python | jd__tenacity | tests/test_tenacity.py | {
"start": 44965,
"end": 46534
} | class ____:
def test_redefine_wait(self):
start = current_time_ms()
result = _retryable_test_with_wait.retry_with(wait=tenacity.wait_fixed(0.1))(
NoneReturnUntilAfterCount(5)
)
t = current_time_ms() - start
assert t >= 500
assert result is True
def test_redefine_stop(self):
result = _retryable_test_with_stop.retry_with(
stop=tenacity.stop_after_attempt(5)
)(NoneReturnUntilAfterCount(4))
assert result is True
def test_retry_error_cls_should_be_preserved(self):
@retry(stop=tenacity.stop_after_attempt(10), retry_error_cls=ValueError)
def _retryable():
raise Exception("raised for test purposes")
with pytest.raises(Exception) as exc_ctx:
_retryable.retry_with(stop=tenacity.stop_after_attempt(2))()
assert exc_ctx.type is ValueError, "Should remap to specific exception type"
def test_retry_error_callback_should_be_preserved(self):
def return_text(retry_state):
return "Calling {} keeps raising errors after {} attempts".format(
retry_state.fn.__name__,
retry_state.attempt_number,
)
@retry(stop=tenacity.stop_after_attempt(10), retry_error_callback=return_text)
def _retryable():
raise Exception("raised for test purposes")
result = _retryable.retry_with(stop=tenacity.stop_after_attempt(5))()
assert result == "Calling _retryable keeps raising errors after 5 attempts"
| TestRetryWith |
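The `retry_with` pattern tested above — rebuilding a decorated callable with overridden stop/wait policies while leaving the original decoration untouched — can be sketched in plain Python. This is a toy model of tenacity's behavior, not its actual implementation:

```python
import functools


def retry(stop_after: int):
    """Minimal retry decorator; retries while the wrapped call raises."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, stop_after + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == stop_after:
                        raise
        # retry_with builds a fresh wrapper around the *undecorated*
        # function with new settings, mirroring tenacity's API
        wrapper.retry_with = lambda stop_after: retry(stop_after)(fn)
        return wrapper
    return decorator


calls = []


@retry(stop_after=2)
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("not yet")
    return "ok"


# The original decoration allows only 2 attempts; the override allows 5,
# which is enough for the third call to succeed.
result = flaky.retry_with(stop_after=5)()
assert result == "ok" and len(calls) == 3
```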
python | pytorch__pytorch | test/dynamo/cpython/3_13/test_itertools.py | {
"start": 88485,
"end": 96907
} | class ____(__TestCase):
def test_batched_recipe(self):
def batched_recipe(iterable, n):
"Batch data into tuples of length n. The last batch may be shorter."
# batched('ABCDEFG', 3) --> ABC DEF G
if n < 1:
raise ValueError('n must be at least one')
it = iter(iterable)
while batch := tuple(islice(it, n)):
yield batch
for iterable, n in product(
['', 'a', 'ab', 'abc', 'abcd', 'abcde', 'abcdef', 'abcdefg', None],
[-1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, None]):
with self.subTest(iterable=iterable, n=n):
try:
e1, r1 = None, list(batched(iterable, n))
except Exception as e:
e1, r1 = type(e), None
try:
e2, r2 = None, list(batched_recipe(iterable, n))
except Exception as e:
e2, r2 = type(e), None
self.assertEqual(r1, r2)
self.assertEqual(e1, e2)
@staticmethod
def islice(iterable, *args):
s = slice(*args)
start, stop, step = s.start or 0, s.stop or sys.maxsize, s.step or 1
it = iter(range(start, stop, step))
try:
nexti = next(it)
except StopIteration:
# Consume *iterable* up to the *start* position.
for i, element in zip(range(start), iterable):
pass
return
try:
for i, element in enumerate(iterable):
if i == nexti:
yield element
nexti = next(it)
except StopIteration:
# Consume to *stop*.
for i, element in zip(range(i + 1, stop), iterable):
pass
@skipIfTorchDynamo("infinite loop in torch dynamo")
def test_islice_recipe(self):
self.assertEqual(list(self.islice('ABCDEFG', 2)), list('AB'))
self.assertEqual(list(self.islice('ABCDEFG', 2, 4)), list('CD'))
self.assertEqual(list(self.islice('ABCDEFG', 2, None)), list('CDEFG'))
self.assertEqual(list(self.islice('ABCDEFG', 0, None, 2)), list('ACEG'))
# Test items consumed.
it = iter(range(10))
self.assertEqual(list(self.islice(it, 3)), list(range(3)))
self.assertEqual(list(it), list(range(3, 10)))
it = iter(range(10))
self.assertEqual(list(self.islice(it, 3, 3)), [])
self.assertEqual(list(it), list(range(3, 10)))
# Test that slice finishes in predictable state.
c = count()
self.assertEqual(list(self.islice(c, 1, 3, 50)), [1])
self.assertEqual(next(c), 3)
def test_tee_recipe(self):
# Begin tee() recipe ###########################################
def tee(iterable, n=2):
if n < 0:
raise ValueError
if n == 0:
return ()
iterator = _tee(iterable)
result = [iterator]
for _ in range(n - 1):
result.append(_tee(iterator))
return tuple(result)
class _tee:
def __init__(self, iterable):
it = iter(iterable)
if isinstance(it, _tee):
self.iterator = it.iterator
self.link = it.link
else:
self.iterator = it
self.link = [None, None]
def __iter__(self):
return self
def __next__(self):
link = self.link
if link[1] is None:
link[0] = next(self.iterator)
link[1] = [None, None]
value, self.link = link
return value
# End tee() recipe #############################################
n = 200
a, b = tee([]) # test empty iterator
self.assertEqual(list(a), [])
self.assertEqual(list(b), [])
a, b = tee(irange(n)) # test 100% interleaved
self.assertEqual(lzip(a,b), lzip(range(n), range(n)))
a, b = tee(irange(n)) # test 0% interleaved
self.assertEqual(list(a), list(range(n)))
self.assertEqual(list(b), list(range(n)))
a, b = tee(irange(n)) # test dealloc of leading iterator
for i in range(100):
self.assertEqual(next(a), i)
del a
self.assertEqual(list(b), list(range(n)))
a, b = tee(irange(n)) # test dealloc of trailing iterator
for i in range(100):
self.assertEqual(next(a), i)
del b
self.assertEqual(list(a), list(range(100, n)))
for j in range(5): # test randomly interleaved
order = [0]*n + [1]*n
random.shuffle(order)
lists = ([], [])
its = tee(irange(n))
for i in order:
value = next(its[i])
lists[i].append(value)
self.assertEqual(lists[0], list(range(n)))
self.assertEqual(lists[1], list(range(n)))
# test argument format checking
self.assertRaises(TypeError, tee)
self.assertRaises(TypeError, tee, 3)
self.assertRaises(TypeError, tee, [1,2], 'x')
self.assertRaises(TypeError, tee, [1,2], 3, 'x')
# tee object should be instantiable
a, b = tee('abc')
c = type(a)('def')
self.assertEqual(list(c), list('def'))
# test long-lagged and multi-way split
a, b, c = tee(range(2000), 3)
for i in range(100):
self.assertEqual(next(a), i)
self.assertEqual(list(b), list(range(2000)))
self.assertEqual([next(c), next(c)], list(range(2)))
self.assertEqual(list(a), list(range(100,2000)))
self.assertEqual(list(c), list(range(2,2000)))
# test invalid values of n
self.assertRaises(TypeError, tee, 'abc', 'invalid')
self.assertRaises(ValueError, tee, [], -1)
for n in range(5):
result = tee('abc', n)
self.assertEqual(type(result), tuple)
self.assertEqual(len(result), n)
self.assertEqual([list(x) for x in result], [list('abc')]*n)
# tee objects are independent (see bug gh-123884)
a, b = tee('abc')
c, d = tee(a)
e, f = tee(c)
self.assertTrue(len({a, b, c, d, e, f}) == 6)
# test tee_new
t1, t2 = tee('abc')
tnew = type(t1)
self.assertRaises(TypeError, tnew)
self.assertRaises(TypeError, tnew, 10)
t3 = tnew(t1)
self.assertTrue(list(t1) == list(t2) == list(t3) == list('abc'))
# test that tee objects are weak referenceable
a, b = tee(range(10))
p = weakref.proxy(a)
self.assertEqual(getattr(p, '__class__'), type(b))
del a
gc.collect() # For PyPy or other GCs.
self.assertRaises(ReferenceError, getattr, p, '__class__')
ans = list('abc')
long_ans = list(range(10000))
# Tests not applicable to the tee() recipe
if False:
# check copy
a, b = tee('abc')
self.assertEqual(list(copy.copy(a)), ans)
self.assertEqual(list(copy.copy(b)), ans)
a, b = tee(list(range(10000)))
self.assertEqual(list(copy.copy(a)), long_ans)
self.assertEqual(list(copy.copy(b)), long_ans)
# check partially consumed copy
a, b = tee('abc')
take(2, a)
take(1, b)
self.assertEqual(list(copy.copy(a)), ans[2:])
self.assertEqual(list(copy.copy(b)), ans[1:])
self.assertEqual(list(a), ans[2:])
self.assertEqual(list(b), ans[1:])
a, b = tee(range(10000))
take(100, a)
take(60, b)
self.assertEqual(list(copy.copy(a)), long_ans[100:])
self.assertEqual(list(copy.copy(b)), long_ans[60:])
self.assertEqual(list(a), long_ans[100:])
self.assertEqual(list(b), long_ans[60:])
# Issue 13454: Crash when deleting backward iterator from tee()
forward, backward = tee(repeat(None, 2000)) # 20000000
try:
any(forward) # exhaust the iterator
del backward
except:
del forward, backward
raise
| TestPurePythonRoughEquivalents |
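The `batched` recipe that `test_batched_recipe` compares against is useful standalone; here it is extracted as a runnable snippet:

```python
from itertools import islice


def batched(iterable, n):
    """Batch data into tuples of length n; the last batch may be shorter."""
    # batched('ABCDEFG', 3) --> ABC DEF G
    if n < 1:
        raise ValueError("n must be at least one")
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch


assert list(batched("ABCDEFG", 3)) == [("A", "B", "C"), ("D", "E", "F"), ("G",)]
assert list(batched("", 3)) == []
```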
python | pytorch__pytorch | torch/_inductor/fuzzer.py | {
"start": 5999,
"end": 14398
} | class ____(Enum):
"""
This class handles the process of assigning concrete values to type annotations. So a type annotation of
```python
foo: Optional[int] = None
```
    will be assigned an int if the dispatch function gets TOGGLE, or a 50/50 split between an int and None if it gets
RANDOM.
"""
TOGGLE = "TOGGLE" # toggle to the opposite value
RANDOM = "RANDOM" # randomly choose an option
@staticmethod
def _generate_value_for_type(
random_sample: bool, field_name: str, type_hint: type[Any], default: Any
) -> Any:
"""
Generates a value of a type based on the setting.
"""
# look for name in type overrides
if field_name in TYPE_OVERRIDES:
return random.choice(TYPE_OVERRIDES[field_name])
if type_hint is bool:
return random.choice([True, False]) if random_sample else not default
elif type_hint is int:
# NOTE initially tried to use negation of the value, but it doesn't work because most types are ints
# when they should be natural numbers + zero. Python types to cover these values aren't super convenient.
return random.randint(0, 1000)
elif type_hint is float:
return random.uniform(0, 1000)
elif type_hint is str:
characters = string.ascii_letters + string.digits + string.punctuation
return "".join(
random.choice(characters) for _ in range(random.randint(1, 20))
)
elif is_type(type_hint, list):
elem_type = getattr(
type_hint,
"__args__",
[type(default[0])] if default and len(default) else [type(None)],
)[0]
new_default = default[0] if default and len(default) > 0 else None
return [
SamplingMethod._generate_value_for_type(
random_sample, field_name, elem_type, new_default
)
for _ in range(random.randint(1, 3))
]
elif is_type(type_hint, set): # noqa: set_linter
indexable = list(default)
elem_type = getattr(
type_hint,
"__args__",
[type(indexable[0])] if default and len(default) else [type(None)],
)[0]
new_default = indexable[0] if default and len(default) > 0 else None
return { # noqa: set_linter
SamplingMethod._generate_value_for_type(
random_sample, field_name, elem_type, new_default
)
for _ in range(random.randint(1, 3))
}
elif is_type(type_hint, OrderedSet):
indexable = list(default)
elem_type = getattr(
type_hint,
"__args__",
[type(indexable[0])] if default and len(default) else [type(None)],
)[0]
new_default = indexable[0] if default and len(default) > 0 else None
return OrderedSet(
[
SamplingMethod._generate_value_for_type(
random_sample, field_name, elem_type, new_default
)
for _ in range(random.randint(1, 3))
]
)
elif is_type(type_hint, dict):
key_type, value_type = getattr(
type_hint,
"__args__",
map(type, next(iter(default.items())))
if (default is not None and len(default))
else (type(None), type(None)),
)
if default is not None and len(default.items()) > 0:
default_key, default_val = next(iter(default.items()))
else:
default_key, default_val = None, None
return {
SamplingMethod._generate_value_for_type(
random_sample, field_name, key_type, default_key
): SamplingMethod._generate_value_for_type(
random_sample, field_name, value_type, default_val
)
for _ in range(random.randint(0, 3))
}
elif is_type(type_hint, Union):
# do whatever is not the type of default
try:
assert len(type_hint.__args__) > 1
except AttributeError as err:
raise ValueError("Union type with no args") from err
if random_sample:
new_type = random.choice(type_hint.__args__)
else:
new_type = random.choice(
[t for t in type_hint.__args__ if t is not type(default)]
)
try:
new_default = new_type()
except Exception:
# if default constructor doesn't work, try None
new_default = None
return SamplingMethod._generate_value_for_type(
random_sample, field_name, new_type, new_default
)
elif is_type(type_hint, tuple):
args = getattr(
type_hint,
"__args__",
tuple(map(type, default)),
)
zipped = zip(args, default)
return tuple(
map( # noqa: C417
lambda x: SamplingMethod._generate_value_for_type(
random_sample, field_name, x[0], x[1]
),
zipped,
)
)
elif is_type(type_hint, Literal):
try:
if random_sample:
return random.choice(type_hint.__args__)
else:
choices = [t for t in type_hint.__args__ if t != default]
if choices:
return random.choice(choices)
else:
return default
except AttributeError as err:
raise ValueError("Literal type with no args") from err
elif is_optional_type(type_hint):
try:
elem_type = type_hint.__args__[0]
except AttributeError as err:
raise ValueError("Optional type with no args") from err
if random_sample:
return random.choice(
[
None,
SamplingMethod._generate_value_for_type(
random_sample, field_name, elem_type, default
),
]
)
else:
if default is None:
return SamplingMethod._generate_value_for_type(
random_sample, field_name, elem_type, None
)
else:
return None
elif type_hint is type(None):
return None
elif is_callable_type(type_hint):
try:
return_type = list(type_hint.__args__)[-1]
except AttributeError as err:
raise ValueError("Callable type with no args") from err
@wraps(lambda *args, **kwargs: None)
def dummy_function(*args, **kwargs): # type: ignore[no-untyped-def]
return SamplingMethod._generate_value_for_type(
random_sample, field_name, return_type, None
)
return dummy_function
elif type_hint == torch._ops.OpOverload:
return torch.ops.aten.add.default
elif TypeExemplars.contains(type_hint):
return TypeExemplars.example(type_hint)
elif type_hint == Any:
return 1 if default != 1 else 2
else:
raise ValueError(f"Unable to process type {type_hint}. PRs welcome :)")
@staticmethod
def dispatch(sm: "SamplingMethod") -> SamplingType:
"""
Returns a function that will generate values from a type, based on the SamplingMethod passed in.
"""
if sm == SamplingMethod.RANDOM:
return partial(SamplingMethod._generate_value_for_type, True)
elif sm == SamplingMethod.TOGGLE:
return partial(SamplingMethod._generate_value_for_type, False)
else:
raise ValueError(f"malformed sampling method: {sm}")
| SamplingMethod |
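The TOGGLE dispatch above can be illustrated with a much smaller sketch: given a type hint and the current default, produce a value of that type that differs from the default. This covers only `bool`, `int`, and `Optional[...]`, and is an assumption-laden simplification of the fuzzer's full `_generate_value_for_type`:

```python
import random
from typing import Optional, Union, get_args, get_origin


def toggle_value(type_hint, default):
    """Produce a value of the hinted type that differs from the default
    (a simplified sketch of the TOGGLE dispatch)."""
    if type_hint is bool:
        return not default
    if get_origin(type_hint) is Union:  # covers Optional[X]
        # pick an arm whose type differs from the default's type
        arms = [t for t in get_args(type_hint) if t is not type(default)]
        arm = arms[0]
        return None if arm is type(None) else toggle_value(arm, None)
    if type_hint is int:
        return random.randint(0, 1000)
    raise ValueError(f"unsupported hint: {type_hint}")


assert toggle_value(bool, True) is False
assert toggle_value(Optional[int], 5) is None      # default was int -> flip to None
assert isinstance(toggle_value(Optional[int], None), int)  # default None -> flip to int
```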
python | apache__airflow | providers/amazon/src/airflow/providers/amazon/aws/operators/appflow.py | {
"start": 1796,
"end": 5956
} | class ____(AwsBaseOperator[AppflowHook]):
"""
Amazon AppFlow Base Operator class (not supposed to be used directly in DAGs).
:param source: The source name (Supported: salesforce, zendesk)
:param flow_name: The flow name
:param flow_update: A boolean to enable/disable a flow update before the run
:param source_field: The field name to apply filters
:param filter_date: The date value (or template) to be used in filters.
:param poll_interval: how often in seconds to check the query status
:param max_attempts: how many times to check for status before timing out
:param wait_for_completion: whether to wait for the run to end to return
:param aws_conn_id: The Airflow connection used for AWS credentials.
If this is ``None`` or empty then the default boto3 behaviour is used. If
running Airflow in a distributed manner and aws_conn_id is None or
empty, then default boto3 configuration would be used (and must be
maintained on each worker node).
:param region_name: AWS region_name. If not specified then the default boto3 behaviour is used.
:param verify: Whether or not to verify SSL certificates. See:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html
:param botocore_config: Configuration dictionary (key-values) for botocore client. See:
https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html
"""
aws_hook_class = AppflowHook
ui_color = "#2bccbd"
template_fields = aws_template_fields("flow_name", "source", "source_field", "filter_date")
UPDATE_PROPAGATION_TIME: int = 15
def __init__(
self,
flow_name: str,
flow_update: bool,
source: str | None = None,
source_field: str | None = None,
filter_date: str | None = None,
poll_interval: int = 20,
max_attempts: int = 60,
wait_for_completion: bool = True,
**kwargs,
) -> None:
super().__init__(**kwargs)
if source is not None and source not in SUPPORTED_SOURCES:
raise ValueError(f"{source} is not a supported source (options: {SUPPORTED_SOURCES})!")
self.filter_date = filter_date
self.flow_name = flow_name
self.source = source
self.source_field = source_field
self.poll_interval = poll_interval
self.max_attempts = max_attempts
self.flow_update = flow_update
self.wait_for_completion = wait_for_completion
def execute(self, context: Context) -> None:
self.filter_date_parsed: datetime | None = (
datetime.fromisoformat(self.filter_date) if self.filter_date else None
)
if self.source is not None:
self.connector_type = self._get_connector_type()
if self.flow_update:
self._update_flow()
        # while scheduled flows will pick up the update right away, on-demand flows might use out of date
# info if triggered right after an update, so we need to wait a bit for the DB to be consistent.
time.sleep(AppflowBaseOperator.UPDATE_PROPAGATION_TIME)
self._run_flow(context)
def _get_connector_type(self) -> str:
response = self.hook.conn.describe_flow(flowName=self.flow_name)
connector_type = response["sourceFlowConfig"]["connectorType"]
if self.source != connector_type.lower():
            raise ValueError(f"Incompatible source ({self.source}) and connector type ({connector_type})!")
return connector_type
def _update_flow(self) -> None:
self.hook.update_flow_filter(flow_name=self.flow_name, filter_tasks=[], set_trigger_ondemand=True)
def _run_flow(self, context) -> str:
execution_id = self.hook.run_flow(
flow_name=self.flow_name,
poll_interval=self.poll_interval,
max_attempts=self.max_attempts,
wait_for_completion=self.wait_for_completion,
)
task_instance = context["task_instance"]
task_instance.xcom_push("execution_id", execution_id)
return execution_id
| AppflowBaseOperator |
python | apache__airflow | providers/amazon/src/airflow/providers/amazon/aws/sensors/glacier.py | {
"start": 1370,
"end": 4050
} | class ____(AwsBaseSensor[GlacierHook]):
"""
Glacier sensor for checking job state. This operator runs only in reschedule mode.
.. seealso::
For more information on how to use this sensor, take a look at the guide:
:ref:`howto/sensor:GlacierJobOperationSensor`
:param aws_conn_id: The reference to the AWS connection details
:param vault_name: name of Glacier vault on which job is executed
:param job_id: the job ID returned by retrieve_inventory()
:param poke_interval: Time in seconds that the job should wait in
between each try
:param mode: How the sensor operates.
Options are: ``{ poke | reschedule }``, default is ``reschedule``.
When set to ``poke`` the sensor is taking up a worker slot for its
whole execution time and sleeps between pokes. Use this mode if the
expected runtime of the sensor is short or if a short poke interval
is required. Note that the sensor will hold onto a worker slot and
a pool slot for the duration of the sensor's runtime in this mode.
When set to ``reschedule`` the sensor task frees the worker slot when
the criteria is not yet met and it's rescheduled at a later time. Use
this mode if the time before the criteria is met is expected to be
quite long. The poke interval should be more than one minute to
prevent too much load on the scheduler.
"""
aws_hook_class = GlacierHook
template_fields: Sequence[str] = aws_template_fields("vault_name", "job_id")
def __init__(
self,
*,
vault_name: str,
job_id: str,
poke_interval: int = 60 * 20,
mode: str = "reschedule",
**kwargs: Any,
) -> None:
super().__init__(**kwargs)
self.vault_name = vault_name
self.job_id = job_id
self.poke_interval = poke_interval
self.mode = mode
def poke(self, context: Context) -> bool:
response = self.hook.describe_job(vault_name=self.vault_name, job_id=self.job_id)
if response["StatusCode"] == JobStatus.SUCCEEDED.value:
self.log.info("Job status: %s, code status: %s", response["Action"], response["StatusCode"])
self.log.info("Job finished successfully")
return True
if response["StatusCode"] == JobStatus.IN_PROGRESS.value:
self.log.info("Processing...")
self.log.warning("Code status: %s", response["StatusCode"])
return False
raise AirflowException(
f"Sensor failed. Job status: {response['Action']}, code status: {response['StatusCode']}"
)
| GlacierJobOperationSensor |
python | keras-team__keras | keras/src/metrics/confusion_metrics_test.py | {
"start": 18739,
"end": 24652
} | class ____(testing.TestCase):
def test_config(self):
r_obj = metrics.Recall(
name="my_recall", thresholds=[0.4, 0.9], top_k=15, class_id=12
)
self.assertEqual(r_obj.name, "my_recall")
self.assertLen(r_obj.variables, 2)
self.assertEqual(
[v.name for v in r_obj.variables],
["true_positives", "false_negatives"],
)
self.assertEqual(r_obj.thresholds, [0.4, 0.9])
self.assertEqual(r_obj.top_k, 15)
self.assertEqual(r_obj.class_id, 12)
# Check save and restore config
r_obj2 = metrics.Recall.from_config(r_obj.get_config())
self.assertEqual(r_obj2.name, "my_recall")
self.assertLen(r_obj2.variables, 2)
self.assertEqual(r_obj2.thresholds, [0.4, 0.9])
self.assertEqual(r_obj2.top_k, 15)
self.assertEqual(r_obj2.class_id, 12)
def test_unweighted(self):
r_obj = metrics.Recall()
y_pred = np.array([1, 0, 1, 0])
y_true = np.array([0, 1, 1, 0])
self.assertAlmostEqual(0.5, r_obj(y_true, y_pred))
def test_unweighted_all_incorrect(self):
r_obj = metrics.Recall(thresholds=[0.5])
inputs = np.random.randint(0, 2, size=(100, 1))
y_pred = np.array(inputs)
y_true = np.array(1 - inputs)
self.assertAlmostEqual(0, r_obj(y_true, y_pred))
def test_weighted(self):
r_obj = metrics.Recall()
y_pred = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])
y_true = np.array([[0, 1, 1, 0], [1, 0, 0, 1]])
result = r_obj(
y_true,
y_pred,
sample_weight=np.array([[1, 2, 3, 4], [4, 3, 2, 1]]),
)
weighted_tp = 3.0 + 1.0
weighted_t = (2.0 + 3.0) + (4.0 + 1.0)
expected_recall = weighted_tp / weighted_t
self.assertAlmostEqual(expected_recall, result)
def test_div_by_zero(self):
r_obj = metrics.Recall()
y_pred = np.array([0, 0, 0, 0])
y_true = np.array([0, 0, 0, 0])
self.assertEqual(0, r_obj(y_true, y_pred))
def test_unweighted_with_threshold(self):
r_obj = metrics.Recall(thresholds=[0.5, 0.7])
y_pred = np.array([1, 0, 0.6, 0])
y_true = np.array([0, 1, 1, 0])
self.assertAllClose([0.5, 0.0], r_obj(y_true, y_pred), 0)
def test_weighted_with_threshold(self):
r_obj = metrics.Recall(thresholds=[0.5, 1.0])
y_true = np.array([[0, 1], [1, 0]])
y_pred = np.array([[1, 0], [0.6, 0]], dtype="float32")
weights = np.array([[1, 4], [3, 2]], dtype="float32")
result = r_obj(y_true, y_pred, sample_weight=weights)
weighted_tp = 0 + 3.0
weighted_positives = (0 + 3.0) + (4.0 + 0.0)
expected_recall = weighted_tp / weighted_positives
self.assertAllClose([expected_recall, 0], result, 1e-3)
def test_multiple_updates(self):
r_obj = metrics.Recall(thresholds=[0.5, 1.0])
y_true = np.array([[0, 1], [1, 0]])
y_pred = np.array([[1, 0], [0.6, 0]], dtype="float32")
weights = np.array([[1, 4], [3, 2]], dtype="float32")
for _ in range(2):
r_obj.update_state(y_true, y_pred, sample_weight=weights)
weighted_tp = (0 + 3.0) + (0 + 3.0)
weighted_positives = ((0 + 3.0) + (4.0 + 0.0)) + (
(0 + 3.0) + (4.0 + 0.0)
)
expected_recall = weighted_tp / weighted_positives
self.assertAllClose([expected_recall, 0], r_obj.result(), 1e-3)
def test_unweighted_top_k(self):
r_obj = metrics.Recall(top_k=3)
y_pred = np.array([0.2, 0.1, 0.5, 0, 0.2])
y_true = np.array([0, 1, 1, 0, 0])
self.assertAlmostEqual(0.5, r_obj(y_true, y_pred))
def test_weighted_top_k(self):
r_obj = metrics.Recall(top_k=3)
y_pred1 = np.array([[0.2, 0.1, 0.4, 0, 0.2]])
y_true1 = np.array([[0, 1, 1, 0, 1]])
r_obj(y_true1, y_pred1, sample_weight=np.array([[1, 4, 2, 3, 5]]))
y_pred2 = np.array([0.2, 0.6, 0.4, 0.2, 0.2])
y_true2 = np.array([1, 0, 1, 1, 1])
result = r_obj(y_true2, y_pred2, sample_weight=np.array(3))
tp = (2 + 5) + (3 + 3)
positives = (4 + 2 + 5) + (3 + 3 + 3 + 3)
expected_recall = tp / positives
self.assertAlmostEqual(expected_recall, result)
def test_unweighted_class_id_should_throw_error_1d(self):
r_obj = metrics.Recall(class_id=2)
y_pred = np.array([0.2, 0.1, 0.6, 0, 0.2])
y_true = np.array([0, 1, 1, 0, 0])
with self.assertRaisesRegex(
ValueError,
r"When class_id is provided, y_pred must be a 2D array "
r"with shape \(num_samples, num_classes\), found shape:.*",
):
r_obj(y_true, y_pred)
def test_unweighted_class_id_multiclass(self):
r_obj = metrics.Recall(class_id=1)
y_pred = np.array(
[
[0.1, 0.2, 0.7],
[0.5, 0.3, 0.2],
[0.2, 0.6, 0.2],
[0.7, 0.2, 0.1],
[0.1, 0.1, 0.8],
]
)
y_true = np.array(
[
[0.0, 0.0, 1.0],
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[1.0, 0.0, 0.0],
[0.0, 0.0, 1.0],
]
)
result = r_obj(y_true, y_pred)
self.assertAlmostEqual(1.0, result)
self.assertAlmostEqual(1.0, r_obj.true_positives)
self.assertAlmostEqual(0.0, r_obj.false_negatives)
def test_unweighted_top_k_and_threshold(self):
r_obj = metrics.Recall(thresholds=0.7, top_k=2)
y_pred = np.array([0.2, 0.8, 0.6, 0, 0.2])
y_true = np.array([1, 1, 1, 0, 1])
self.assertAlmostEqual(0.25, r_obj(y_true, y_pred))
self.assertAlmostEqual(1, r_obj.true_positives)
self.assertAlmostEqual(3, r_obj.false_negatives)
| RecallTest |
python | pypa__setuptools | setuptools/discovery.py | {
"start": 5651,
"end": 6391
} | class ____(_Finder):
"""Find isolated Python modules.
This finder will **not** recurse into subdirectories.
"""
@classmethod
def _find_iter(
cls, where: StrPath, exclude: _Filter, include: _Filter
) -> Iterator[str]:
for file in glob(os.path.join(where, "*.py")):
module, _ext = os.path.splitext(os.path.basename(file))
if not cls._looks_like_module(module):
continue
if include(module) and not exclude(module):
yield module
_looks_like_module = staticmethod(_valid_name)
# We have to be extra careful in the case of flat layout to not include files
# and directories not meant for distribution (e.g. tool-related)
| ModuleFinder |
python | pypa__pipenv | pipenv/vendor/tomlkit/source.py | {
"start": 1211,
"end": 1779
} | class ____:
"""
State preserver for the Parser.
"""
def __init__(self, source: Source) -> None:
self._source = source
self._states = []
def __call__(self, *args, **kwargs):
return _State(self._source, *args, **kwargs)
def __enter__(self) -> _State:
state = self()
self._states.append(state)
return state.__enter__()
def __exit__(self, exception_type, exception_val, trace):
state = self._states.pop()
return state.__exit__(exception_type, exception_val, trace)
| _StateHandler |
python | coleifer__peewee | tests/base_models.py | {
"start": 641,
"end": 697
} | class ____(TestModel):
value = IntegerField()
| Register |
python | PrefectHQ__prefect | src/prefect/server/orchestration/core_policy.py | {
"start": 45897,
"end": 48166
} | class ____(FlowRunOrchestrationRule):
"""
Governs runs attempting to enter a Paused/Suspended state
"""
FROM_STATES = ALL_ORCHESTRATION_STATES
TO_STATES = {StateType.PAUSED}
async def before_transition(
self,
initial_state: states.State[Any] | None,
proposed_state: states.State[Any] | None,
context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy],
) -> None:
if proposed_state is None:
return
verb = "suspend" if proposed_state.name == "Suspended" else "pause"
if initial_state is None:
await self.abort_transition(f"Cannot {verb} flows with no state.")
return
if not initial_state.is_running():
await self.reject_transition(
state=None,
reason=f"Cannot {verb} flows that are not currently running.",
)
return
self.key = proposed_state.state_details.pause_key
if self.key is None:
# if no pause key is provided, default to a UUID
self.key = str(uuid4())
pause_keys = context.run.empirical_policy.pause_keys or set()
if self.key in pause_keys:
await self.reject_transition(
state=None, reason=f"This {verb} has already fired."
)
return
if proposed_state.state_details.pause_reschedule:
if context.run.parent_task_run_id:
await self.abort_transition(
reason=f"Cannot {verb} subflows.",
)
return
if context.run.deployment_id is None:
await self.abort_transition(
reason=f"Cannot {verb} flows without a deployment.",
)
return
async def after_transition(
self,
initial_state: states.State[Any] | None,
validated_state: states.State[Any] | None,
context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy],
) -> None:
updated_policy = context.run.empirical_policy.model_dump()
updated_policy["pause_keys"].add(self.key)
context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)
| HandlePausingFlows |
python | PrefectHQ__prefect | src/prefect/client/schemas/filters.py | {
"start": 6896,
"end": 7432
} | class ____(PrefectBaseModel):
"""Filter by `FlowRun.next_scheduled_start_time`."""
before_: Optional[DateTime] = Field(
default=None,
description=(
"Only include flow runs with a next_scheduled_start_time or before this"
" time"
),
)
after_: Optional[DateTime] = Field(
default=None,
description=(
"Only include flow runs with a next_scheduled_start_time at or after this"
" time"
),
)
| FlowRunFilterNextScheduledStartTime |
python | encode__starlette | starlette/datastructures.py | {
"start": 15932,
"end": 18837
} | class ____(Mapping[str, str]):
"""
An immutable, case-insensitive multidict.
"""
def __init__(
self,
headers: Mapping[str, str] | None = None,
raw: list[tuple[bytes, bytes]] | None = None,
scope: MutableMapping[str, Any] | None = None,
) -> None:
self._list: list[tuple[bytes, bytes]] = []
if headers is not None:
assert raw is None, 'Cannot set both "headers" and "raw".'
assert scope is None, 'Cannot set both "headers" and "scope".'
self._list = [(key.lower().encode("latin-1"), value.encode("latin-1")) for key, value in headers.items()]
elif raw is not None:
assert scope is None, 'Cannot set both "raw" and "scope".'
self._list = raw
elif scope is not None:
# scope["headers"] isn't necessarily a list
# it might be a tuple or other iterable
self._list = scope["headers"] = list(scope["headers"])
@property
def raw(self) -> list[tuple[bytes, bytes]]:
return list(self._list)
def keys(self) -> list[str]: # type: ignore[override]
return [key.decode("latin-1") for key, value in self._list]
def values(self) -> list[str]: # type: ignore[override]
return [value.decode("latin-1") for key, value in self._list]
def items(self) -> list[tuple[str, str]]: # type: ignore[override]
return [(key.decode("latin-1"), value.decode("latin-1")) for key, value in self._list]
def getlist(self, key: str) -> list[str]:
get_header_key = key.lower().encode("latin-1")
return [item_value.decode("latin-1") for item_key, item_value in self._list if item_key == get_header_key]
def mutablecopy(self) -> MutableHeaders:
return MutableHeaders(raw=self._list[:])
def __getitem__(self, key: str) -> str:
get_header_key = key.lower().encode("latin-1")
for header_key, header_value in self._list:
if header_key == get_header_key:
return header_value.decode("latin-1")
raise KeyError(key)
def __contains__(self, key: Any) -> bool:
get_header_key = key.lower().encode("latin-1")
for header_key, header_value in self._list:
if header_key == get_header_key:
return True
return False
def __iter__(self) -> Iterator[Any]:
return iter(self.keys())
def __len__(self) -> int:
return len(self._list)
def __eq__(self, other: Any) -> bool:
if not isinstance(other, Headers):
return False
return sorted(self._list) == sorted(other._list)
def __repr__(self) -> str:
class_name = self.__class__.__name__
as_dict = dict(self.items())
if len(as_dict) == len(self):
return f"{class_name}({as_dict!r})"
return f"{class_name}(raw={self.raw!r})"
| Headers |
python | prompt-toolkit__python-prompt-toolkit | src/prompt_toolkit/cursor_shapes.py | {
"start": 1340,
"end": 1628
} | class ____(ABC):
@abstractmethod
def get_cursor_shape(self, application: Application[Any]) -> CursorShape:
"""
Return the cursor shape to be used in the current state.
"""
AnyCursorShapeConfig = Union[CursorShape, CursorShapeConfig, None]
| CursorShapeConfig |
python | run-llama__llama_index | llama-index-integrations/tools/llama-index-tools-artifact-editor/tests/test_artifact_editor.py | {
"start": 397,
"end": 603
} | class ____(BaseModel):
"""Person model for testing the artifact editor."""
name: str
age: int
email: Optional[str] = None
tags: List[str] = []
address: Optional[Address] = None
| Person |
python | kamyu104__LeetCode-Solutions | Python/number-of-subarrays-having-even-product.py | {
"start": 40,
"end": 381
} | class ____(object):
def evenProduct(self, nums):
"""
:type nums: List[int]
:rtype: int
"""
result = (len(nums)+1)*len(nums)//2
cnt = 0
for x in nums:
cnt = cnt+1 if x%2 else 0
result -= cnt
return result
# Time: O(n)
# Space: O(1)
# dp, math
| Solution |
python | pennersr__django-allauth | tests/apps/socialaccount/providers/sharefile/tests.py | {
"start": 246,
"end": 707
} | class ____(OAuth2TestsMixin, TestCase):
provider_id = ShareFileProvider.id
def get_mocked_response(self):
return MockedResponse(
HTTPStatus.OK,
"""
{
"Id": "123",
"Email":"user.one@domain.com",
"FirstName":"Name",
"LastName":"Last Name",
"Company":"Company",
"DefaultZone":
{
"Id":"zoneid"
}
} """,
)
def get_expected_to_str(self):
return "user.one@domain.com"
| ShareFileTests |
python | fastapi__sqlmodel | tests/test_enums_models.py | {
"start": 178,
"end": 275
} | class ____(SQLModel):
id: uuid.UUID = Field(primary_key=True)
enum_field: MyEnum2
| BaseModel |
python | bokeh__bokeh | src/bokeh/plotting/contour.py | {
"start": 10915,
"end": 15545
} | class ____:
''' Coordinates for contour lines at a single contour level.
The x and y coordinates are stored in a single NumPy array each, with a
np.nan separating each line.
'''
xs: np.ndarray
ys: np.ndarray
def _color(color: ContourColorOrPalette, n: int) -> ContourColor:
# Dict to sequence of colors such as palettes.cividis
if isinstance(color, dict):
return _palette_from_collection(color, n)
if isinstance(color, Sequence) and not isinstance(color, (bytes, str)) and len(color) != n:
return interp_palette(color, n)
return color
def _contour_coords(
x: ArrayLike | None,
y: ArrayLike | None,
z: ArrayLike | np.ma.MaskedArray | None,
levels: ArrayLike,
want_fill: bool,
want_line: bool,
) -> ContourCoords:
'''
Return the (xs, ys) coords of filled and/or line contours.
'''
if not want_fill and not want_line:
raise RuntimeError("Neither fill nor line requested in _contour_coords")
from contourpy import FillType, LineType, contour_generator
cont_gen = contour_generator(x, y, z, line_type=LineType.ChunkCombinedNan, fill_type=FillType.OuterOffset)
fill_coords = None
if want_fill:
all_xs = []
all_ys = []
for i in range(len(levels)-1):
filled = cont_gen.filled(levels[i], levels[i+1])
# This is guaranteed by use of fill_type=FillType.OuterOffset in contour_generator call.
filled = cast("FillReturn_OuterOffset", filled)
coords = _filled_to_coords(filled)
all_xs.append(coords.xs)
all_ys.append(coords.ys)
fill_coords = FillCoords(all_xs, all_ys)
line_coords = None
if want_line:
all_xs = []
all_ys = []
for level in levels:
lines = cont_gen.lines(level)
# This is guaranteed by use of line_type=LineType.ChunkCombinedNan in contour_generator call.
lines = cast("LineReturn_ChunkCombinedNan", lines)
coords = _lines_to_coords(lines)
all_xs.append(coords.xs)
all_ys.append(coords.ys)
line_coords = LineCoords(all_xs, all_ys)
return ContourCoords(fill_coords, line_coords)
def _filled_to_coords(filled: FillReturn_OuterOffset) -> SingleFillCoords:
# Processes polygon data returned from a single call to
# contourpy.ContourGenerator.filled(lower_level, upper_level)
# ContourPy filled data format is FillType.OuterOffset.
xs = []
ys = []
for points, offsets in zip(*filled):
# Polygon with outer boundary and zero or more holes.
n = len(offsets) - 1
xs.append([points[offsets[i]:offsets[i+1], 0] for i in range(n)])
ys.append([points[offsets[i]:offsets[i+1], 1] for i in range(n)])
return SingleFillCoords(xs, ys)
def _lines_to_coords(lines: LineReturn_ChunkCombinedNan) -> SingleLineCoords:
# Processes line data returned from a single call to
# contourpy.ContourGenerator.lines(level).
# ContourPy line data format is LineType.ChunkCombinedNan.
points = lines[0][0]
if points is None:
empty = np.empty(0)
return SingleLineCoords(empty, empty)
xs = points[:, 0]
ys = points[:, 1]
return SingleLineCoords(xs, ys)
def _palette_from_collection(collection: PaletteCollection, n: int) -> Palette:
# Return palette of length n from the specified palette collection, which
# is a dict[int, Palette]. If the required length palette is in the
# collection then return that. If the required length is bigger than the
# longest palette then interpolate that. If the required length is smaller
# than the shortest palette then interpolate that.
if len(collection) < 1:
raise ValueError("PaletteCollection is empty")
palette = collection.get(n, None)
if palette is not None:
return palette
max_key = max(collection.keys())
if isinstance(max_key, int) and n > max_key:
return interp_palette(collection[max_key], n)
min_key = min(collection.keys())
if isinstance(min_key, int) and n < min_key:
return interp_palette(collection[min_key], n)
raise ValueError(f"Unable to extract or interpolate palette of length {n} from PaletteCollection")
def _validate_levels(levels: ArrayLike | None) -> NDArray[float]:
levels = np.asarray(levels)
if levels.ndim == 0 or len(levels) == 0:
raise ValueError("No contour levels specified")
if len(levels) > 1 and np.diff(levels).min() <= 0.0:
raise ValueError("Contour levels must be increasing")
return levels
| SingleLineCoords |
python | ansible__ansible | lib/ansible/module_utils/_internal/_messages.py | {
"start": 2929,
"end": 3207
} | class ____(_datatag.AnsibleSerializableDataclass):
"""Base class for an error/warning/deprecation summary with details (possibly derived from an exception __cause__ chain) and an optional traceback."""
event: Event
@_dataclasses.dataclass(**_dataclass_kwargs)
| SummaryBase |
python | sympy__sympy | sympy/core/tests/test_expr.py | {
"start": 27314,
"end": 27347
} | class ____(Add):
pass
| CustomAdd |
python | apache__airflow | task-sdk/src/airflow/sdk/api/datamodels/_generated.py | {
"start": 13021,
"end": 13401
} | class ____(str, Enum):
REMOVED = "removed"
SCHEDULED = "scheduled"
QUEUED = "queued"
RUNNING = "running"
SUCCESS = "success"
RESTARTING = "restarting"
FAILED = "failed"
UP_FOR_RETRY = "up_for_retry"
UP_FOR_RESCHEDULE = "up_for_reschedule"
UPSTREAM_FAILED = "upstream_failed"
SKIPPED = "skipped"
DEFERRED = "deferred"
| TaskInstanceState |
python | dagster-io__dagster | examples/docs_snippets/docs_snippets/guides/external-systems/apis/use_configurable_resource_in_asset.py | {
"start": 83,
"end": 1432
} | class ____(dg.ConfigurableResource):
# highlight-start
# Define the configuration and
# remove previously hard-coded parameters
latitude: str
longitude: str
time_zone: str
# highlight-end
@property
# highlight-start
# Update the query string to use the configuration
def query_string(self) -> str:
return f"https://api.sunrise-sunset.org/json?lat={self.latitude}&lng={self.longitude}&date=today&tzid={self.time_zone}"
# highlight-end
def sunrise(self) -> str:
data = requests.get(self.query_string, timeout=5).json()
return data["results"]["sunrise"]
@dg.asset
def sfo_sunrise(context: dg.AssetExecutionContext, sun_resource: SunResource) -> None:
sunrise = sun_resource.sunrise()
context.log.info(f"Sunrise in San Francisco is at {sunrise}.")
# end_use_configurable_resource_in_asset
# start_use_configurable_resource_in_asset_defs
@dg.definitions
def resources():
return dg.Definitions(
# highlight-start
# Define configuration values
resources={
"sun_resource": SunResource(
latitude="37.615223",
longitude="-122.389977",
time_zone="America/Los_Angeles",
)
},
# highlight-end
)
# end_use_configurable_resource_in_asset_defs
| SunResource |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/github_schema.py | {
"start": 922858,
"end": 923578
} | class ____(sgqlc.types.relay.Connection):
"""The connection type for Release."""
__schema__ = github_schema
__field_names__ = ("edges", "nodes", "page_info", "total_count")
edges = sgqlc.types.Field(sgqlc.types.list_of("ReleaseEdge"), graphql_name="edges")
"""A list of edges."""
nodes = sgqlc.types.Field(sgqlc.types.list_of("Release"), graphql_name="nodes")
"""A list of nodes."""
page_info = sgqlc.types.Field(sgqlc.types.non_null(PageInfo), graphql_name="pageInfo")
"""Information to aid in pagination."""
total_count = sgqlc.types.Field(sgqlc.types.non_null(Int), graphql_name="totalCount")
"""Identifies the total count of items in the connection."""
| ReleaseConnection |
python | huggingface__transformers | tests/models/janus/test_modeling_janus.py | {
"start": 12105,
"end": 13840
} | class ____:
def __init__(
self,
parent,
batch_size=5,
is_training=False,
initializer_range=0.02,
image_size=30,
num_embeds=12,
base_channels=32, # we have a GroupNorm of 32 groups, so can't do less
embed_dim=12,
channel_multiplier=[1, 2],
patch_size=2,
scope=None,
):
self.parent = parent
self.batch_size = batch_size
self.is_training = is_training
self.initializer_range = initializer_range
self.image_size = image_size
self.base_channels = base_channels
self.num_embeds = num_embeds
self.embed_dim = embed_dim
self.channel_multiplier = channel_multiplier
self.num_patches = image_size // patch_size
def prepare_config_and_inputs(self):
pixel_values = floats_tensor([self.batch_size, 3, self.image_size, self.image_size])
config = self.get_config()
return config, pixel_values
def get_config(self):
return JanusVQVAEConfig(
embed_dim=self.embed_dim,
num_embeddings=self.num_embeds,
latent_channels=self.embed_dim,
in_channels=3,
base_channels=self.base_channels,
channel_multiplier=self.channel_multiplier,
initializer_range=self.initializer_range,
resolution=self.image_size,
num_patches=self.num_patches,
)
def prepare_config_and_inputs_for_common(self):
config_and_inputs = self.prepare_config_and_inputs()
config, pixel_values = config_and_inputs
inputs_dict = {"pixel_values": pixel_values}
return config, inputs_dict
@require_torch
| JanusVQModelTester |
python | jina-ai__jina | tests/integration/docarray_v2/csp/SampleColbertExecutor/executor.py | {
"start": 530,
"end": 595
} | class ____(TextDoc):
embeddings: NdArray
| EmbeddingResponseModel |
python | scikit-learn__scikit-learn | sklearn/neighbors/_nca.py | {
"start": 944,
"end": 19943
} | class ____(
ClassNamePrefixFeaturesOutMixin, TransformerMixin, BaseEstimator
):
"""Neighborhood Components Analysis.
Neighborhood Component Analysis (NCA) is a machine learning algorithm for
metric learning. It learns a linear transformation in a supervised fashion
to improve the classification accuracy of a stochastic nearest neighbors
rule in the transformed space.
Read more in the :ref:`User Guide <nca>`.
Parameters
----------
n_components : int, default=None
Preferred dimensionality of the projected space.
If None it will be set to `n_features`.
init : {'auto', 'pca', 'lda', 'identity', 'random'} or ndarray of shape \
(n_features_a, n_features_b), default='auto'
Initialization of the linear transformation. Possible options are
`'auto'`, `'pca'`, `'lda'`, `'identity'`, `'random'`, and a numpy
array of shape `(n_features_a, n_features_b)`.
- `'auto'`
Depending on `n_components`, the most reasonable initialization
is chosen. If `n_components <= min(n_features, n_classes - 1)`
we use `'lda'`, as it uses labels information. If not, but
`n_components < min(n_features, n_samples)`, we use `'pca'`, as
it projects data in meaningful directions (those of higher
variance). Otherwise, we just use `'identity'`.
- `'pca'`
`n_components` principal components of the inputs passed
to :meth:`fit` will be used to initialize the transformation.
(See :class:`~sklearn.decomposition.PCA`)
- `'lda'`
`min(n_components, n_classes)` most discriminative
components of the inputs passed to :meth:`fit` will be used to
initialize the transformation. (If `n_components > n_classes`,
the rest of the components will be zero.) (See
:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`)
- `'identity'`
If `n_components` is strictly smaller than the
dimensionality of the inputs passed to :meth:`fit`, the identity
matrix will be truncated to the first `n_components` rows.
- `'random'`
The initial transformation will be a random array of shape
`(n_components, n_features)`. Each value is sampled from the
standard normal distribution.
- numpy array
`n_features_b` must match the dimensionality of the inputs passed
to :meth:`fit` and n_features_a must be less than or equal to that.
If `n_components` is not `None`, `n_features_a` must match it.
warm_start : bool, default=False
If `True` and :meth:`fit` has been called before, the solution of the
previous call to :meth:`fit` is used as the initial linear
transformation (`n_components` and `init` will be ignored).
max_iter : int, default=50
Maximum number of iterations in the optimization.
tol : float, default=1e-5
Convergence tolerance for the optimization.
callback : callable, default=None
If not `None`, this function is called after every iteration of the
optimizer, taking as arguments the current solution (flattened
transformation matrix) and the number of iterations. This might be
useful in case one wants to examine or store the transformation
found after each iteration.
verbose : int, default=0
If 0, no progress messages will be printed.
If 1, progress messages will be printed to stdout.
If > 1, progress messages will be printed and the `disp`
parameter of :func:`scipy.optimize.minimize` will be set to
`verbose - 2`.
random_state : int or numpy.RandomState, default=None
A pseudo random number generator object or a seed for it if int. If
`init='random'`, `random_state` is used to initialize the random
transformation. If `init='pca'`, `random_state` is passed as an
argument to PCA when initializing the transformation. Pass an int
for reproducible results across multiple function calls.
See :term:`Glossary <random_state>`.
Attributes
----------
components_ : ndarray of shape (n_components, n_features)
The linear transformation learned during fitting.
n_features_in_ : int
Number of features seen during :term:`fit`.
.. versionadded:: 0.24
n_iter_ : int
Counts the number of iterations performed by the optimizer.
random_state_ : numpy.RandomState
Pseudo random number generator object used during initialization.
feature_names_in_ : ndarray of shape (`n_features_in_`,)
Names of features seen during :term:`fit`. Defined only when `X`
has feature names that are all strings.
.. versionadded:: 1.0
See Also
--------
sklearn.discriminant_analysis.LinearDiscriminantAnalysis : Linear
Discriminant Analysis.
sklearn.decomposition.PCA : Principal component analysis (PCA).
References
----------
.. [1] J. Goldberger, G. Hinton, S. Roweis, R. Salakhutdinov.
"Neighbourhood Components Analysis". Advances in Neural Information
Processing Systems. 17, 513-520, 2005.
https://www.cs.toronto.edu/~rsalakhu/papers/ncanips.pdf
.. [2] Wikipedia entry on Neighborhood Components Analysis
https://en.wikipedia.org/wiki/Neighbourhood_components_analysis
Examples
--------
>>> from sklearn.neighbors import NeighborhoodComponentsAnalysis
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
... stratify=y, test_size=0.7, random_state=42)
>>> nca = NeighborhoodComponentsAnalysis(random_state=42)
>>> nca.fit(X_train, y_train)
NeighborhoodComponentsAnalysis(...)
>>> knn = KNeighborsClassifier(n_neighbors=3)
>>> knn.fit(X_train, y_train)
KNeighborsClassifier(...)
>>> print(knn.score(X_test, y_test))
0.933333...
>>> knn.fit(nca.transform(X_train), y_train)
KNeighborsClassifier(...)
>>> print(knn.score(nca.transform(X_test), y_test))
0.961904...
"""
_parameter_constraints: dict = {
"n_components": [
Interval(Integral, 1, None, closed="left"),
None,
],
"init": [
StrOptions({"auto", "pca", "lda", "identity", "random"}),
np.ndarray,
],
"warm_start": ["boolean"],
"max_iter": [Interval(Integral, 1, None, closed="left")],
"tol": [Interval(Real, 0, None, closed="left")],
"callback": [callable, None],
"verbose": ["verbose"],
"random_state": ["random_state"],
}
def __init__(
self,
n_components=None,
*,
init="auto",
warm_start=False,
max_iter=50,
tol=1e-5,
callback=None,
verbose=0,
random_state=None,
):
self.n_components = n_components
self.init = init
self.warm_start = warm_start
self.max_iter = max_iter
self.tol = tol
self.callback = callback
self.verbose = verbose
self.random_state = random_state
@_fit_context(prefer_skip_nested_validation=True)
def fit(self, X, y):
"""Fit the model according to the given training data.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The training samples.
y : array-like of shape (n_samples,)
The corresponding training labels.
Returns
-------
self : object
Fitted estimator.
"""
# Validate the inputs X and y, and converts y to numerical classes.
X, y = validate_data(self, X, y, ensure_min_samples=2)
check_classification_targets(y)
y = LabelEncoder().fit_transform(y)
# Check the preferred dimensionality of the projected space
if self.n_components is not None and self.n_components > X.shape[1]:
raise ValueError(
"The preferred dimensionality of the "
f"projected space `n_components` ({self.n_components}) cannot "
"be greater than the given data "
f"dimensionality ({X.shape[1]})!"
)
# If warm_start is enabled, check that the inputs are consistent
if (
self.warm_start
and hasattr(self, "components_")
and self.components_.shape[1] != X.shape[1]
):
raise ValueError(
f"The new inputs dimensionality ({X.shape[1]}) does not "
"match the input dimensionality of the "
f"previously learned transformation ({self.components_.shape[1]})."
)
# Check how the linear transformation should be initialized
init = self.init
if isinstance(init, np.ndarray):
init = check_array(init)
# Assert that init.shape[1] = X.shape[1]
if init.shape[1] != X.shape[1]:
raise ValueError(
f"The input dimensionality ({init.shape[1]}) of the given "
"linear transformation `init` must match the "
f"dimensionality of the given inputs `X` ({X.shape[1]})."
)
# Assert that init.shape[0] <= init.shape[1]
if init.shape[0] > init.shape[1]:
raise ValueError(
f"The output dimensionality ({init.shape[0]}) of the given "
"linear transformation `init` cannot be "
f"greater than its input dimensionality ({init.shape[1]})."
)
# Assert that self.n_components = init.shape[0]
if self.n_components is not None and self.n_components != init.shape[0]:
raise ValueError(
"The preferred dimensionality of the "
f"projected space `n_components` ({self.n_components}) does"
" not match the output dimensionality of "
"the given linear transformation "
f"`init` ({init.shape[0]})!"
)
# Initialize the random generator
self.random_state_ = check_random_state(self.random_state)
# Measure the total training time
t_train = time.time()
# Compute a mask that stays fixed during optimization:
        same_class_mask = y[:, np.newaxis] == y[np.newaxis, :]  # (n_samples, n_samples)
# Initialize the transformation
transformation = np.ravel(self._initialize(X, y, init))
# Create a dictionary of parameters to be passed to the optimizer
disp = self.verbose - 2 if self.verbose > 1 else -1
optimizer_params = {
"method": "L-BFGS-B",
"fun": self._loss_grad_lbfgs,
"args": (X, same_class_mask, -1.0),
"jac": True,
"x0": transformation,
"tol": self.tol,
"options": dict(
maxiter=self.max_iter,
**_get_additional_lbfgs_options_dict("disp", disp),
),
"callback": self._callback,
}
# Call the optimizer
self.n_iter_ = 0
opt_result = minimize(**optimizer_params)
# Reshape the solution found by the optimizer
self.components_ = opt_result.x.reshape(-1, X.shape[1])
# Stop timer
t_train = time.time() - t_train
if self.verbose:
cls_name = self.__class__.__name__
# Warn the user if the algorithm did not converge
if not opt_result.success:
warn(
"[{}] NCA did not converge: {}".format(
cls_name, opt_result.message
),
ConvergenceWarning,
)
print("[{}] Training took {:8.2f}s.".format(cls_name, t_train))
return self
def transform(self, X):
"""Apply the learned transformation to the given data.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Data samples.
Returns
-------
        X_embedded : ndarray of shape (n_samples, n_components)
The data samples transformed.
Raises
------
NotFittedError
If :meth:`fit` has not been called before.
"""
check_is_fitted(self)
X = validate_data(self, X, reset=False)
return np.dot(X, self.components_.T)
def _initialize(self, X, y, init):
"""Initialize the transformation.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The training samples.
y : array-like of shape (n_samples,)
The training labels.
init : str or ndarray of shape (n_features_a, n_features_b)
The validated initialization of the linear transformation.
Returns
-------
transformation : ndarray of shape (n_components, n_features)
The initialized linear transformation.
"""
transformation = init
if self.warm_start and hasattr(self, "components_"):
transformation = self.components_
elif isinstance(init, np.ndarray):
pass
else:
n_samples, n_features = X.shape
n_components = self.n_components or n_features
if init == "auto":
n_classes = len(np.unique(y))
if n_components <= min(n_features, n_classes - 1):
init = "lda"
elif n_components < min(n_features, n_samples):
init = "pca"
else:
init = "identity"
if init == "identity":
transformation = np.eye(n_components, X.shape[1])
elif init == "random":
transformation = self.random_state_.standard_normal(
size=(n_components, X.shape[1])
)
elif init in {"pca", "lda"}:
init_time = time.time()
if init == "pca":
pca = PCA(
n_components=n_components, random_state=self.random_state_
)
if self.verbose:
print("Finding principal components... ", end="")
sys.stdout.flush()
pca.fit(X)
transformation = pca.components_
elif init == "lda":
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis(n_components=n_components)
if self.verbose:
print("Finding most discriminative components... ", end="")
sys.stdout.flush()
lda.fit(X, y)
transformation = lda.scalings_.T[:n_components]
if self.verbose:
print("done in {:5.2f}s".format(time.time() - init_time))
return transformation
def _callback(self, transformation):
"""Called after each iteration of the optimizer.
Parameters
----------
transformation : ndarray of shape (n_components * n_features,)
The solution computed by the optimizer in this iteration.
"""
if self.callback is not None:
self.callback(transformation, self.n_iter_)
self.n_iter_ += 1
def _loss_grad_lbfgs(self, transformation, X, same_class_mask, sign=1.0):
"""Compute the loss and the loss gradient w.r.t. `transformation`.
Parameters
----------
transformation : ndarray of shape (n_components * n_features,)
The raveled linear transformation on which to compute loss and
evaluate gradient.
X : ndarray of shape (n_samples, n_features)
The training samples.
same_class_mask : ndarray of shape (n_samples, n_samples)
A mask where `mask[i, j] == 1` if `X[i]` and `X[j]` belong
to the same class, and `0` otherwise.
Returns
-------
loss : float
The loss computed for the given transformation.
gradient : ndarray of shape (n_components * n_features,)
The new (flattened) gradient of the loss.
"""
if self.n_iter_ == 0:
self.n_iter_ += 1
if self.verbose:
header_fields = ["Iteration", "Objective Value", "Time(s)"]
header_fmt = "{:>10} {:>20} {:>10}"
header = header_fmt.format(*header_fields)
cls_name = self.__class__.__name__
print("[{}]".format(cls_name))
print(
"[{}] {}\n[{}] {}".format(
cls_name, header, cls_name, "-" * len(header)
)
)
t_funcall = time.time()
transformation = transformation.reshape(-1, X.shape[1])
X_embedded = np.dot(X, transformation.T) # (n_samples, n_components)
# Compute softmax distances
p_ij = pairwise_distances(X_embedded, squared=True)
np.fill_diagonal(p_ij, np.inf)
p_ij = softmax(-p_ij) # (n_samples, n_samples)
# Compute loss
masked_p_ij = p_ij * same_class_mask
p = np.sum(masked_p_ij, axis=1, keepdims=True) # (n_samples, 1)
loss = np.sum(p)
# Compute gradient of loss w.r.t. `transform`
weighted_p_ij = masked_p_ij - p_ij * p
weighted_p_ij_sym = weighted_p_ij + weighted_p_ij.T
np.fill_diagonal(weighted_p_ij_sym, -weighted_p_ij.sum(axis=0))
gradient = 2 * X_embedded.T.dot(weighted_p_ij_sym).dot(X)
        # time complexity of the gradient: O(n_components x n_samples x (n_samples + n_features))
if self.verbose:
t_funcall = time.time() - t_funcall
values_fmt = "[{}] {:>10} {:>20.6e} {:>10.2f}"
print(
values_fmt.format(
self.__class__.__name__, self.n_iter_, loss, t_funcall
)
)
sys.stdout.flush()
return sign * loss, sign * gradient.ravel()
def __sklearn_tags__(self):
tags = super().__sklearn_tags__()
tags.target_tags.required = True
return tags
@property
def _n_features_out(self):
"""Number of transformed output features."""
return self.components_.shape[0]
| NeighborhoodComponentsAnalysis |
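The core of `_loss_grad_lbfgs` above — row-softmax neighbour probabilities over squared embedded distances, masked by class, summed into the NCA objective — can be sketched standalone with plain NumPy (toy data and a hypothetical identity-like transformation, not scikit-learn's API):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))                 # 6 toy samples, 3 features
y = np.array([0, 0, 1, 1, 2, 2])            # 3 classes, 2 samples each
A = np.eye(2, 3)                            # hypothetical 2x3 linear transformation

X_emb = X @ A.T                             # embedded samples, shape (6, 2)
d2 = ((X_emb[:, None] - X_emb[None, :]) ** 2).sum(-1)   # pairwise squared distances
np.fill_diagonal(d2, np.inf)                # a point is never its own neighbour

# numerically stable row softmax of -d2 (what softmax(-p_ij) computes above)
p_ij = np.exp(-d2 - np.logaddexp.reduce(-d2, axis=1, keepdims=True))

same = y[:, None] == y[None, :]             # same_class_mask
p_i = (p_ij * same).sum(axis=1)             # prob. of picking a same-class neighbour
loss = p_i.sum()                            # NCA maximizes this; L-BFGS minimizes -loss
```

Each row of `p_ij` sums to one, and the objective is the expected number of correctly classified points under stochastic nearest-neighbour selection.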
python | cython__cython | Cython/Compiler/Builtin.py | {
"start": 3353,
"end": 3741
} | class ____(_BuiltinOverride):
def declare_in_scope(self, scope):
func_type, sig = self.func_type, self.sig
if func_type is None:
func_type = self.build_func_type(sig)
scope.declare_builtin_cfunction(
self.py_name, func_type, self.cname, self.py_equiv, self.utility_code,
specialiser=self.specialiser,
)
| BuiltinFunction |
python | django__django | tests/invalid_models_tests/test_backend_specific.py | {
"start": 437,
"end": 1029
} | class ____(SimpleTestCase):
@mock.patch("django.db.models.fields.router.allow_migrate", new=dummy_allow_migrate)
def test_check_field(self):
"""Test if backend specific checks are performed."""
error = Error("an error")
class Model(models.Model):
field = models.IntegerField()
field = Model._meta.get_field("field")
with mock.patch.object(
connections["default"].validation, "check_field", return_value=[error]
):
self.assertEqual(field.check(databases={"default"}), [error])
| BackendSpecificChecksTests |
python | pydantic__pydantic | pydantic-core/tests/test_tzinfo.py | {
"start": 476,
"end": 741
} | class ____:
"""
Object that is greater than anything (except itself).
"""
def __eq__(self, other):
return isinstance(other, _LARGEST)
def __lt__(self, other):
return False
LARGEST = _LARGEST()
@functools.total_ordering
| _LARGEST |
python | sqlalchemy__sqlalchemy | test/orm/inheritance/test_assorted_poly.py | {
"start": 27157,
"end": 32280
} | class ____(fixtures.MappedTest):
@classmethod
def define_tables(cls, metadata):
global people, engineers, managers, cars, offroad_cars
cars = Table(
"cars",
metadata,
Column(
"car_id",
Integer,
primary_key=True,
test_needs_autoincrement=True,
),
Column("name", String(30)),
)
offroad_cars = Table(
"offroad_cars",
metadata,
Column(
"car_id",
Integer,
ForeignKey("cars.car_id"),
nullable=False,
primary_key=True,
),
)
people = Table(
"people",
metadata,
Column(
"person_id",
Integer,
primary_key=True,
test_needs_autoincrement=True,
),
Column(
"car_id", Integer, ForeignKey("cars.car_id"), nullable=False
),
Column("name", String(50)),
)
engineers = Table(
"engineers",
metadata,
Column(
"person_id",
Integer,
ForeignKey("people.person_id"),
primary_key=True,
),
Column("field", String(30)),
)
managers = Table(
"managers",
metadata,
Column(
"person_id",
Integer,
ForeignKey("people.person_id"),
primary_key=True,
),
Column("category", String(70)),
)
def test_manytoone_lazyload(self):
"""test that lazy load clause to a polymorphic child mapper generates
correctly [ticket:493]"""
class PersistentObject:
def __init__(self, **kwargs):
for key, value in kwargs.items():
setattr(self, key, value)
class Status(PersistentObject):
def __repr__(self):
return "Status %s" % self.name
class Person(PersistentObject):
def __repr__(self):
return "Ordinary person %s" % self.name
class Engineer(Person):
def __repr__(self):
return "Engineer %s, field %s" % (self.name, self.field)
class Manager(Person):
def __repr__(self):
return "Manager %s, category %s" % (self.name, self.category)
class Car(PersistentObject):
def __repr__(self):
return "Car number %d, name %s" % (self.car_id, self.name)
class Offraod_Car(Car):
def __repr__(self):
return "Offroad Car number %d, name %s" % (
self.car_id,
self.name,
)
employee_join = polymorphic_union(
{
"engineer": people.join(engineers),
"manager": people.join(managers),
},
"type",
"employee_join",
)
car_join = polymorphic_union(
{
"car": cars.outerjoin(offroad_cars)
.select()
.where(offroad_cars.c.car_id == None)
.reduce_columns()
.subquery(),
"offroad": cars.join(offroad_cars),
},
"type",
"car_join",
)
car_mapper = self.mapper_registry.map_imperatively(
Car,
cars,
with_polymorphic=("*", car_join),
polymorphic_on=car_join.c.type,
polymorphic_identity="car",
)
self.mapper_registry.map_imperatively(
Offraod_Car,
offroad_cars,
inherits=car_mapper,
polymorphic_identity="offroad",
)
person_mapper = self.mapper_registry.map_imperatively(
Person,
people,
with_polymorphic=("*", employee_join),
polymorphic_on=employee_join.c.type,
polymorphic_identity="person",
properties={"car": relationship(car_mapper)},
)
self.mapper_registry.map_imperatively(
Engineer,
engineers,
inherits=person_mapper,
polymorphic_identity="engineer",
)
self.mapper_registry.map_imperatively(
Manager,
managers,
inherits=person_mapper,
polymorphic_identity="manager",
)
session = fixture_session()
for i in range(1, 4):
if i % 2:
car = Car()
else:
car = Offraod_Car()
session.add(Manager(name="M%d" % i, category="YYYYYYYYY", car=car))
session.add(Engineer(name="E%d" % i, field="X", car=car))
session.flush()
session.expunge_all()
r = session.query(Person).all()
for p in r:
assert p.car_id == p.car.car_id
| RelationshipTest7 |
python | django__django | tests/template_tests/utils.py | {
"start": 4021,
"end": 4101
} | class ____:
def __str__(self):
return mark_safe("you > me")
| SafeClass |
python | HypothesisWorks__hypothesis | hypothesis-python/tests/cover/test_pretty.py | {
"start": 17813,
"end": 17962
} | class ____:
def __init__(self, val=None) -> None:
self.val = val
def __repr__(self):
return "invalid syntax"
| InvalidSyntaxRepr |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-twilio/unit_tests/test_streams.py | {
"start": 3938,
"end": 7909
} | class ____:
@freeze_time("2022-11-16 12:03:11+00:00")
def test_calls_includes_date_window_params(self, requests_mock):
requests_mock.get(f"{BASE}/Accounts.json", json=ACCOUNTS_JSON, status_code=200)
qs = urlencode({"EndTime>": "2022-11-15", "EndTime<": "2022-11-16", "PageSize": 1000})
requests_mock.get(
f"{BASE}/Accounts/AC123/Calls.json?{qs}",
json={"calls": [{"sid": "CA1", "end_time": "2022-11-15T12:00:00Z"}]},
status_code=200,
)
records = read_from_stream({**TEST_CONFIG, "start_date": "2022-11-15T00:00:00Z"}, "calls", SyncMode.full_refresh).records
assert len(records) == 1
@freeze_time("2022-11-16 12:03:11+00:00")
@pytest.mark.parametrize(
"stream_name,path,lower_key,upper_key,state,windows",
[
(
"messages",
"/Accounts/AC123/Messages.json",
"DateSent>",
"DateSent<",
{
"states": [
{
"partition": {"subresource_uri": "/2010-04-01/Accounts/AC123/Messages.json"},
"cursor": {"date_sent": "2022-11-13T12:11:10Z"},
}
]
},
[
("2022-11-13 12:11:10Z", "2022-11-16 12:03:11Z"),
],
),
(
"usage_records",
"/Accounts/AC123/Usage/Records/Daily.json",
"StartDate",
"EndDate",
{"states": [{"partition": {"account_sid": "AC123"}, "cursor": {"start_date": "2022-11-13"}}]},
[
("2022-11-13", "2022-11-16"),
],
),
(
"recordings",
"/Accounts/AC123/Recordings.json",
"DateCreated>",
"DateCreated<",
{
"states": [
{
"partition": {"subresource_uri": "/2010-04-01/Accounts/AC123/Recordings.json"},
"cursor": {"date_created": "2021-11-13 00:00:00Z"},
}
]
},
[
("2021-11-13 00:00:00Z", "2022-11-12 23:59:59Z"),
("2022-11-13 00:00:00Z", "2022-11-16 12:03:11Z"),
],
),
],
)
def test_incremental_calls_with_date_ranges(self, stream_name, path, lower_key, upper_key, state, windows, requests_mock):
def _register_date_window(m, path, body_key, lower_key, upper_key, lower_val, upper_val):
def _match(req):
q = parse_qs(urlparse(req.url).query, keep_blank_values=True)
return q.get(lower_key) == [lower_val] and q.get(upper_key) == [upper_val]
# one matcher per window
return m.get(f"{BASE}{path}", json={body_key: [{}]}, status_code=200, additional_matcher=_match)
# Parent
accounts_matcher = requests_mock.get(f"{BASE}/Accounts.json", json=ACCOUNTS_JSON, status_code=200)
# One matcher per expected window (exact query values)
child_matchers = [_register_date_window(requests_mock, path, stream_name, lower_key, upper_key, lo, hi) for (lo, hi) in windows]
state = (
StateBuilder()
.with_stream_state(
stream_name,
state,
)
.build()
)
_ = read_from_stream({**TEST_CONFIG, "start_date": "2000-11-15T00:00:00Z"}, stream_name, SyncMode.incremental, state).records
assert accounts_matcher.called, "Accounts endpoint was not called"
assert all(m.called for m in child_matchers), "Not all date-window URLs were called"
assert sum(m.call_count for m in child_matchers) == len(windows)
| TestIncrementalTwilioStream |
python | tensorflow__tensorflow | tensorflow/python/distribute/distribute_config.py | {
"start": 792,
"end": 1700
} | class ____(
collections.namedtuple(
'DistributeConfig',
['train_distribute', 'eval_distribute', 'remote_cluster'])):
"""A config tuple for distribution strategies.
Attributes:
train_distribute: a `DistributionStrategy` object for training.
eval_distribute: an optional `DistributionStrategy` object for
evaluation.
remote_cluster: a dict, `ClusterDef` or `ClusterSpec` object specifying
the cluster configurations. If this is given, the `train_and_evaluate`
method will be running as a standalone client which connects to the
cluster for training.
"""
def __new__(cls,
train_distribute=None,
eval_distribute=None,
remote_cluster=None):
return super(DistributeConfig, cls).__new__(cls, train_distribute,
eval_distribute, remote_cluster)
| DistributeConfig |
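The namedtuple-with-defaults pattern used by `DistributeConfig` — subclassing a `namedtuple` and overriding `__new__` to supply defaults — can be sketched generically (hypothetical field names, not TensorFlow's):

```python
import collections

class Config(collections.namedtuple("Config", ["train", "eval_", "cluster"])):
    """Immutable config record; __new__ supplies the defaults."""

    __slots__ = ()  # keep instances tuple-sized

    def __new__(cls, train=None, eval_=None, cluster=None):
        return super().__new__(cls, train, eval_, cluster)

c = Config(train="mirrored")  # positional fields default to None
```

Instances stay hashable, comparable, and unpackable like ordinary tuples while gaining keyword construction with defaults.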
python | airbytehq__airbyte | airbyte-integrations/connectors/source-box-data-extract/source_box_data_extract/source.py | {
"start": 1836,
"end": 4201
} | class ____(AbstractSource):
def check_connection(self, logger, config) -> Tuple[bool, any]:
"""
:param config: the user-input config object conforming to the connector's spec.yaml
:param logger: logger object
:return Tuple[bool, any]: (True, None) if the input config can be used to connect to the API successfully, (False, error) otherwise.
"""
logger.info("Checking Box API connection...")
try:
box_client = get_box_ccg_client(config)
user = box_client.users.get_user_me()
logger.debug(f"box_subject_type: {config.get('box_subject_type')}, box_subject_id: {config.get('box_subject_id')}")
logger.info(f"Logged into Box as: {user.name} ({user.id} - {user.login})")
except BoxAPIError as e:
logger.error(f"Unable to connect to Box API with the provided credentials - {e}")
return False, f"Unable to connect to Box API with the provided credentials"
return True, None
def streams(self, config: Mapping[str, Any]) -> List[Stream]:
"""
:param config: A Mapping of the user input configuration as defined in the connector spec.
"""
box_client = get_box_ccg_client(config)
box_folder_text_representation_stream = StreamTextRepresentationFolder(
box_client, config["box_folder_id"], is_recursive=config.get("is_recursive", False)
)
box_folder_ask_ai_stream = StreamAIAskFolder(
box_client, config["box_folder_id"], config["ask_ai_prompt"], is_recursive=config.get("is_recursive", False)
)
box_folder_extract_ai_stream = StreamAIExtractFolder(
box_client, config["box_folder_id"], config["extract_ai_prompt"], is_recursive=config.get("is_recursive", False)
)
box_folder_extract_structured_ai_stream = StreamAIExtractStructuredFolder(
client=box_client,
folder_id=config["box_folder_id"],
fields_json_str=config["extract_structured_ai_fields"],
is_recursive=config.get("is_recursive", False),
)
return [
box_folder_text_representation_stream,
box_folder_ask_ai_stream,
box_folder_extract_ai_stream,
box_folder_extract_structured_ai_stream,
]
# Streams
| SourceBoxDataExtract |
python | tensorflow__tensorflow | tensorflow/python/eager/polymorphic_function/argument_naming_test.py | {
"start": 1228,
"end": 9659
} | class ____(test.TestCase, parameterized.TestCase):
"""Tests for recognizable export signatures from concrete functions."""
def testBasic(self):
@polymorphic_function.function
def fn(a, b):
return a + b, a * b
# Call the function to make def_function happy
fn(array_ops.ones([]), array_ops.ones([]))
fn_op = fn.get_concrete_function(
tensor_spec.TensorSpec(shape=(None,), dtype=dtypes.float32),
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32))
self.assertEqual(
['a', 'b'],
[inp.op.name for inp in fn_op.inputs])
self.assertEqual(
[b'a', b'b'],
[inp.op.get_attr('_user_specified_name') for inp in fn_op.inputs])
self.assertLen(fn_op.graph.structured_outputs, 2)
self.assertAllClose(
[3., 2.],
fn_op(constant_op.constant(1.), constant_op.constant(2.)))
self.assertAllClose(
[3., 2.],
fn_op(a=constant_op.constant(1.), b=constant_op.constant(2.)))
def testVariable(self):
@polymorphic_function.function
def fn(a, b):
return a + b, a * b
# Call the function to make def_function happy
fn(array_ops.ones([]), array_ops.ones([]))
fn_op = fn.get_concrete_function(
tensor_spec.TensorSpec(shape=(None,), dtype=dtypes.float32),
variables.Variable(1.))
self.assertEqual(
['a', 'b'],
[inp.op.name for inp in fn_op.inputs])
self.assertEqual(
[b'a', b'b'],
[inp.op.get_attr('_user_specified_name') for inp in fn_op.inputs])
self.assertLen(fn_op.graph.structured_outputs, 2)
def testDictReturned(self):
@polymorphic_function.function
def fn(x, z=(1., 2.), y=3.):
z1, z2 = z
return {'alpha': x + y + z1, 'beta': x * y + z2}
# Call the function to make def_function happy
fn(array_ops.ones([]))
fn_op = fn.get_concrete_function(
x=tensor_spec.TensorSpec(shape=(None,), dtype=dtypes.float32),
y=tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32))
self.assertEqual(
['x', 'y'],
[inp.op.name for inp in fn_op.inputs])
self.assertEqual(
[b'x', b'y'],
[inp.op.get_attr('_user_specified_name') for inp in fn_op.inputs])
self.assertEqual({'alpha', 'beta'},
set(fn_op.graph.structured_outputs.keys()))
fn_op2 = fn.get_concrete_function(
z=(tensor_spec.TensorSpec(shape=(None,), dtype=dtypes.float32,
name='z_first'),
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32,
name='z_second')),
y=tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32, name='custom'),
x=4.)
self.assertEqual(
['z_first', 'z_second', 'custom'],
[inp.op.name for inp in fn_op2.inputs])
self.assertEqual(
[b'z_first', b'z_second', b'custom'],
[inp.op.get_attr('_user_specified_name') for inp in fn_op2.inputs])
fn_op3 = fn.get_concrete_function(
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32, name='custom'),
z=(tensor_spec.TensorSpec(shape=(None,), dtype=dtypes.float32,
name='z1'),
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32, name='z2')),
y=tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32))
self.assertEqual(
['custom', 'z1', 'z2', 'y'],
[inp.op.name for inp in fn_op3.inputs])
self.assertEqual(
[b'custom', b'z1', b'z2', b'y'],
[inp.op.get_attr('_user_specified_name') for inp in fn_op3.inputs])
def testMethod(self):
class HasMethod(object):
@polymorphic_function.function
def method(self, x):
return x
has_method = HasMethod()
# Call the function to make def_function happy
HasMethod.method(has_method, array_ops.ones([]))
class_op = HasMethod.method.get_concrete_function(
has_method, tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32))
self.assertEqual(
['x'],
[inp.op.name for inp in class_op.inputs])
self.assertEqual(
[b'x'],
[inp.op.get_attr('_user_specified_name') for inp in class_op.inputs])
# Call the function to make def_function happy
has_method.method(array_ops.ones([]))
method_op = has_method.method.get_concrete_function(
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32))
self.assertEqual(
['x'],
[inp.op.name for inp in method_op.inputs])
self.assertEqual(
[b'x'],
[inp.op.get_attr('_user_specified_name') for inp in method_op.inputs])
# TODO(allenl): It should be possible to override names when exporting. Do
# TensorSpec names need to go in cache keys? Or maybe get_concrete_function
# should always retrace?
self.skipTest('Not working')
method_op = has_method.method.get_concrete_function(
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32, name='y'))
self.assertEqual(
['y'],
[inp.op.name for inp in method_op.inputs])
self.assertEqual(
[b'y'],
[inp.op.get_attr('_user_specified_name') for inp in method_op.inputs])
def testMethodSignature(self):
class HasMethod(object):
@polymorphic_function.function(
input_signature=(tensor_spec.TensorSpec(
shape=None, dtype=dtypes.float64, name='y'),))
def method(self, x):
hash(self) # No weak proxies passed as `self`
return x
has_method = HasMethod()
# Call the function to make def_function happy
has_method.method(array_ops.ones([], dtype=dtypes.float64))
method_op = has_method.method.get_concrete_function()
self.assertEqual(
['y'],
[inp.op.name for inp in method_op.inputs])
self.assertEqual(
[b'y'],
[inp.op.get_attr('_user_specified_name') for inp in method_op.inputs])
method_op2 = has_method.method.get_concrete_function()
self.assertEqual(
['y'],
[inp.op.name for inp in method_op2.inputs])
self.assertEqual(
[b'y'],
[inp.op.get_attr('_user_specified_name') for inp in method_op2.inputs])
def testVariadic(self):
@polymorphic_function.function
def variadic_fn(x, *args, **kwargs):
return x + math_ops.add_n(list(args) + list(kwargs.values()))
# Call the function to make def_function happy
variadic_fn(array_ops.ones([]), array_ops.ones([]))
variadic_op = variadic_fn.get_concrete_function(
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32),
tensor_spec.TensorSpec(shape=None, dtype=dtypes.float32, name='y'),
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32),
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32,
name='second_variadic'),
z=tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32),
zz=tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32, name='cust'))
self.assertEqual(
['x', 'y', 'args_1', 'second_variadic', 'z', 'cust'],
[inp.op.name for inp in variadic_op.inputs])
self.assertEqual(
[b'x', b'y', b'args_1', b'second_variadic', b'z', b'cust'],
[inp.op.get_attr('_user_specified_name') for inp in variadic_op.inputs])
def testVariadicInputSignature(self):
@polymorphic_function.function(
input_signature=(
tensor_spec.TensorSpec(shape=None, dtype=dtypes.float32),
tensor_spec.TensorSpec(shape=None, dtype=dtypes.float32, name='y'),
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32),
tensor_spec.TensorSpec(shape=(), dtype=dtypes.float32, name='z'),
))
def variadic_fn(x, *args):
return x + math_ops.add_n(list(args))
# Call the function to make def_function happy
variadic_fn(array_ops.ones([]), array_ops.ones([]),
array_ops.ones([]), array_ops.ones([]))
variadic_op = variadic_fn.get_concrete_function()
self.assertIn(b'variadic_fn', variadic_op.name)
self.assertEqual(
['x', 'y', 'args_1', 'z'],
[inp.op.name for inp in variadic_op.inputs])
self.assertEqual(
[b'x', b'y', b'args_1', b'z'],
[inp.op.get_attr('_user_specified_name')
for inp in variadic_op.inputs])
if __name__ == '__main__':
ops.enable_eager_execution(
config=config_pb2.ConfigProto(device_count={'CPU': 4}))
test.main()
| ArgumentNamingTests |
python | dagster-io__dagster | python_modules/dagster/dagster/_core/definitions/declarative_automation/operands/run_operands.py | {
"start": 7079,
"end": 7578
} | class ____(NewUpdatesWithRunTagsCondition):
tag_keys: Optional[Set[str]] = None
tag_values: Optional[Mapping[str, str]] = None
@property
def base_name(self) -> str:
return "all_new_updates_have_run_tags"
def match_candidate_runs(self, candidate_run_ids: Set[str], matching_run_ids: Set[str]) -> bool:
# every candidate run must have matched the filters
return all(run_id in matching_run_ids for run_id in candidate_run_ids)
| AllNewUpdatesHaveRunTagsCondition |
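The `all(...)` membership test in `match_candidate_runs` above is simply a subset check over run ids; a toy sketch with hypothetical ids:

```python
candidate_run_ids = {"run-1", "run-2"}            # hypothetical run ids
matching_run_ids = {"run-1", "run-2", "run-3"}    # runs that matched the tag filters

# every candidate run must have matched the filters
every_candidate_matched = all(r in matching_run_ids for r in candidate_run_ids)
```

Writing it as `candidate_run_ids.issubset(matching_run_ids)` is equivalent; the generator form matches the style of the class above.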
python | apache__airflow | airflow-core/src/airflow/models/asset.py | {
"start": 17689,
"end": 19438
} | class ____(Base):
"""References from a DAG to an asset alias of which it is a consumer."""
alias_id: Mapped[int] = mapped_column(Integer, primary_key=True, nullable=False)
dag_id: Mapped[str] = mapped_column(StringID(), primary_key=True, nullable=False)
created_at: Mapped[datetime] = mapped_column(UtcDateTime, default=timezone.utcnow, nullable=False)
updated_at: Mapped[datetime] = mapped_column(
UtcDateTime, default=timezone.utcnow, onupdate=timezone.utcnow, nullable=False
)
asset_alias = relationship("AssetAliasModel", back_populates="scheduled_dags")
dag = relationship("DagModel", back_populates="schedule_asset_alias_references")
__tablename__ = "dag_schedule_asset_alias_reference"
__table_args__ = (
PrimaryKeyConstraint(alias_id, dag_id, name="dsaar_pkey"),
ForeignKeyConstraint(
(alias_id,),
["asset_alias.id"],
name="dsaar_asset_alias_fkey",
ondelete="CASCADE",
),
ForeignKeyConstraint(
columns=(dag_id,),
refcolumns=["dag.dag_id"],
name="dsaar_dag_id_fkey",
ondelete="CASCADE",
),
Index("idx_dag_schedule_asset_alias_reference_dag_id", dag_id),
)
def __eq__(self, other: object) -> bool:
if isinstance(other, self.__class__):
return self.alias_id == other.alias_id and self.dag_id == other.dag_id
return NotImplemented
def __hash__(self):
return hash(self.__mapper__.primary_key)
def __repr__(self):
args = [f"{x.name}={getattr(self, x.name)!r}" for x in self.__mapper__.primary_key]
return f"{self.__class__.__name__}({', '.join(args)})"
| DagScheduleAssetAliasReference |
python | pytorch__pytorch | test/inductor/test_aot_inductor.py | {
"start": 288569,
"end": 289093
} | class ____(TestCase):
device = GPU_TYPE
device_type = GPU_TYPE
check_model = check_model
check_model_with_multiple_inputs = check_model_with_multiple_inputs
code_check_count = code_check_count
allow_stack_allocation = False
use_minimal_arrayref_interface = False
copy_tests(
AOTInductorTestsTemplate,
AOTInductorTestABICompatibleGpu,
GPU_TYPE,
GPU_TEST_FAILURES,
)
@unittest.skipIf(not torch.backends.mps.is_available(), "No MPS backend available")
| AOTInductorTestABICompatibleGpu |
python | doocs__leetcode | solution/3600-3699/3694.Distinct Points Reachable After Substring Removal/Solution.py | {
"start": 0,
"end": 611
} | class ____:
def distinctPoints(self, s: str, k: int) -> int:
n = len(s)
f = [0] * (n + 1)
g = [0] * (n + 1)
x = y = 0
for i, c in enumerate(s, 1):
if c == "U":
y += 1
elif c == "D":
y -= 1
elif c == "L":
x -= 1
else:
x += 1
f[i] = x
g[i] = y
st = set()
for i in range(k, n + 1):
a = f[n] - (f[i] - f[i - k])
b = g[n] - (g[i] - g[i - k])
st.add((a, b))
return len(st)
| Solution |
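A quick standalone sanity check of the prefix-sum solution above (the class is repeated verbatim, with comments added, so the snippet runs on its own):

```python
class Solution:
    def distinctPoints(self, s: str, k: int) -> int:
        n = len(s)
        f = [0] * (n + 1)  # prefix sums of x displacement
        g = [0] * (n + 1)  # prefix sums of y displacement
        x = y = 0
        for i, c in enumerate(s, 1):
            if c == "U":
                y += 1
            elif c == "D":
                y -= 1
            elif c == "L":
                x -= 1
            else:
                x += 1
            f[i] = x
            g[i] = y
        st = set()
        for i in range(k, n + 1):
            # endpoint after deleting the length-k window ending at index i
            a = f[n] - (f[i] - f[i - k])
            b = g[n] - (g[i] - g[i - k])
            st.add((a, b))
        return len(st)

sol = Solution()
```

For `s = "LUL", k = 1`, deleting either "L" lands at (-1, 1) and deleting "U" lands at (-2, 0), so two distinct endpoints; for `s = "UU", k = 1` both deletions land at (0, 1).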
python | spack__spack | lib/spack/spack/vendor/ruamel/yaml/resolver.py | {
"start": 11808,
"end": 11958
} | class ____(BaseResolver):
pass
for ir in implicit_resolvers:
if (1, 2) in ir[0]:
Resolver.add_implicit_resolver_base(*ir[1:])
| Resolver |
python | getsentry__sentry | src/sentry/workflow_engine/endpoints/serializers/detector_serializer.py | {
"start": 869,
"end": 1067
} | class ____(TypedDict, total=False):
owner: str | None
createdById: str | None
alertRuleId: int | None
ruleId: int | None
latestGroup: dict | None
| DetectorSerializerResponseOptional |
python | dagster-io__dagster | python_modules/dagster/dagster/_core/remote_representation/external_data.py | {
"start": 18754,
"end": 18905
} | class ____:
error: Optional[SerializableErrorInfo]
@whitelist_for_serdes(storage_name="ExternalSensorExecutionErrorData")
@record
| RepositoryErrorSnap |
python | pandas-dev__pandas | pandas/tests/io/parser/test_network.py | {
"start": 2136,
"end": 8148
} | class ____:
@pytest.mark.parametrize(
"suffix, compression",
[
("", None),
(".gz", "gzip"),
(".bz2", "bz2"),
],
)
@pytest.mark.parametrize("nrows", [None, 10])
@pytest.mark.parametrize("engine", ["c", "python"])
def test_parse_public_s3_bucket(
self,
s3_bucket_public_with_data,
s3so,
tips_df,
suffix,
compression,
nrows,
engine,
):
# more of an integration test due to the not-public contents portion
# can probably mock this though.
pytest.importorskip("s3fs")
df = read_csv(
f"s3://{s3_bucket_public_with_data.name}/tips.csv{suffix}",
nrows=nrows,
compression=compression,
storage_options=s3so,
engine=engine,
)
tm.assert_frame_equal(df, tips_df.iloc[:nrows])
def test_parse_private_s3_bucket(self, s3_bucket_private_with_data, s3so, tips_df):
# Read public file from bucket with not-public contents
pytest.importorskip("s3fs")
df = read_csv(
f"s3://{s3_bucket_private_with_data.name}/tips.csv", storage_options=s3so
)
tm.assert_frame_equal(df, tips_df)
@pytest.mark.parametrize("scheme", ["s3n", "s3a"])
def test_parse_public_bucket_s3n_s3a(
self, s3_bucket_public_with_data, s3so, tips_df, scheme
):
nrows = 10
df = read_csv(
f"{scheme}://{s3_bucket_public_with_data.name}/tips.csv",
nrows=nrows,
storage_options=s3so,
)
tm.assert_frame_equal(df, tips_df.iloc[:nrows])
@pytest.mark.parametrize(
"suffix, compression",
[
("", None),
(".gz", "gzip"),
(".bz2", "bz2"),
],
)
@pytest.mark.parametrize("engine", ["c", "python"])
def test_parse_public_s3_bucket_chunked(
self, s3_bucket_public_with_data, s3so, tips_df, suffix, compression, engine
):
# Read with a chunksize
chunksize = 5
with read_csv(
f"s3://{s3_bucket_public_with_data.name}/tips.csv{suffix}",
chunksize=chunksize,
compression=compression,
storage_options=s3so,
engine=engine,
) as df_reader:
assert df_reader.chunksize == chunksize
for i_chunk in [0, 1, 2]:
# Read a couple of chunks and make sure we see them
# properly.
df = df_reader.get_chunk()
assert isinstance(df, DataFrame)
assert not df.empty
true_df = tips_df.iloc[chunksize * i_chunk : chunksize * (i_chunk + 1)]
tm.assert_frame_equal(true_df, df)
@pytest.mark.parametrize("suffix", ["", ".gz", ".bz2"])
def test_infer_s3_compression(
self, s3_bucket_public_with_data, s3so, tips_df, suffix
):
df = read_csv(
f"s3://{s3_bucket_public_with_data.name}/tips.csv{suffix}",
engine="python",
compression="infer",
storage_options=s3so,
)
tm.assert_frame_equal(df, tips_df)
def test_read_s3_fails(self, s3_bucket_public_with_data, s3so):
msg = "The specified bucket does not exist"
with pytest.raises(OSError, match=msg):
read_csv("s3://nyqpug/asdf.csv", storage_options=s3so)
def test_read_s3_fails_private(self, s3_bucket_private_with_data, s3so):
s3_url = f"{s3_bucket_private_with_data.name}/file.csv"
msg = rf"{s3_url}"
# Receive a permission error when trying to read a private bucket.
# It's irrelevant here that this isn't actually a table.
with pytest.raises(FileNotFoundError, match=msg):
read_csv(
f"s3://{s3_url}",
storage_options=s3so,
)
@pytest.mark.single_cpu
def test_read_csv_handles_boto_s3_object(
self, s3_bucket_public_with_data, tips_file
):
# see gh-16135
s3_object = s3_bucket_public_with_data.Object("tips.csv")
with BytesIO(s3_object.get()["Body"].read()) as buffer:
result = read_csv(buffer, encoding="utf8")
assert isinstance(result, DataFrame)
assert not result.empty
expected = read_csv(tips_file)
tm.assert_frame_equal(result, expected)
@pytest.mark.single_cpu
def test_read_csv_chunked_download(self, s3_bucket_public, s3so, caplog):
# 8 MB, S3FS uses 5MB chunks
df = DataFrame(np.zeros((100000, 4)), columns=list("abcd"))
with BytesIO(df.to_csv().encode("utf-8")) as buf:
s3_bucket_public.put_object(Key="large-file.csv", Body=buf)
uri = f"{s3_bucket_public.name}/large-file.csv"
match_re = re.compile(rf"^Fetch: {uri}, 0-(?P<stop>\d+)$")
with caplog.at_level(logging.DEBUG, logger="s3fs"):
read_csv(
f"s3://{uri}",
nrows=5,
storage_options=s3so,
)
for log in caplog.messages:
if match := re.match(match_re, log):
# Less than 8 MB
assert int(match.group("stop")) < 8000000
def test_read_s3_with_hash_in_key(self, s3_bucket_public_with_data, s3so, tips_df):
# GH 25945
result = read_csv(
f"s3://{s3_bucket_public_with_data.name}/tips#1.csv", storage_options=s3so
)
tm.assert_frame_equal(tips_df, result)
def test_read_feather_s3_file_path(
self, s3_bucket_public_with_data, s3so, feather_file
):
# GH 29055
pytest.importorskip("pyarrow")
expected = read_feather(feather_file)
res = read_feather(
f"s3://{s3_bucket_public_with_data.name}/simple_dataset.feather",
storage_options=s3so,
)
tm.assert_frame_equal(expected, res)
| TestS3 |
python | tiangolo__fastapi | docs_src/openapi_callbacks/tutorial001.py | {
"start": 303,
"end": 1371
} | class ____(BaseModel):
ok: bool
invoices_callback_router = APIRouter()
@invoices_callback_router.post(
"{$callback_url}/invoices/{$request.body.id}", response_model=InvoiceEventReceived
)
def invoice_notification(body: InvoiceEvent):
pass
@app.post("/invoices/", callbacks=invoices_callback_router.routes)
def create_invoice(invoice: Invoice, callback_url: Union[HttpUrl, None] = None):
"""
Create an invoice.
This will (let's imagine) let the API user (some external developer) create an
invoice.
And this path operation will:
* Send the invoice to the client.
* Collect the money from the client.
* Send a notification back to the API user (the external developer), as a callback.
* At this point is that the API will somehow send a POST request to the
external API with the notification of the invoice event
(e.g. "payment successful").
"""
# Send the invoice, collect the money, send the notification (the callback)
return {"msg": "Invoice received"}
| InvoiceEventReceived |
python | huggingface__transformers | src/transformers/models/longcat_flash/modular_longcat_flash.py | {
"start": 6125,
"end": 6847
} | class ____(nn.Module):
"""
A mixed expert module containing zero compute (identity) experts.
"""
def __init__(self, config):
super().__init__()
self.intermediate_size = config.expert_ffn_hidden_size
self.config = config
self.experts = LongcatFlashExperts(config)
self.router = LongcatFlashTopkRouter(config)
def forward(self, hidden_states):
orig_shape = hidden_states.shape
topk_weights, topk_indices = self.router(hidden_states)
hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
hidden_states = self.experts(hidden_states, topk_indices, topk_weights).view(*orig_shape)
return hidden_states
| LongcatFlashMoE |
python | getsentry__sentry | tests/sentry/api/endpoints/test_broadcast_index.py | {
"start": 3390,
"end": 7272
} | class ____(APITestCase):
def test_basic_user(self) -> None:
self.add_user_permission(user=self.user, permission="broadcasts.admin")
self.login_as(user=self.user, superuser=False)
response = self.client.post(
"/api/0/broadcasts/",
{
"title": "bar",
"message": "foo",
"link": "http://example.com",
"cta": "Read More",
"mediaUrl": "http://example.com/image.png",
"category": "announcement",
},
)
assert response.status_code == 401
def test_superuser(self) -> None:
self.add_user_permission(user=self.user, permission="broadcasts.admin")
self.login_as(user=self.user, superuser=True)
response = self.client.post(
"/api/0/broadcasts/",
{
"title": "bar",
"message": "foo",
"link": "http://example.com",
"cta": "Read More",
"mediaUrl": "http://example.com/image.png",
"category": "announcement",
},
)
assert response.status_code == 200, response.data
broadcast = Broadcast.objects.get(id=response.data["id"])
assert broadcast.title == "bar"
assert broadcast.message == "foo"
assert broadcast.media_url == "http://example.com/image.png"
assert broadcast.category == "announcement"
assert broadcast.created_by_id == self.user
def test_validation(self) -> None:
self.add_user_permission(user=self.user, permission="broadcasts.admin")
self.login_as(user=self.user, superuser=True)
response = self.client.post(
"/api/0/broadcasts/",
{
"title": "bar",
"message": "foo",
"link": "http://example.com",
"cta": "Read More",
"mediaUrl": "this is not a url",
"category": "announcement",
},
)
assert response.status_code == 400, response.data
response = self.client.post(
"/api/0/broadcasts/",
{
"title": "bar",
"message": "foo",
"link": "http://example.com",
"cta": "Read More",
"mediaUrl": "http://example.com/image.png",
"category": "this is not a category",
},
)
assert response.status_code == 400, response.data
response = self.client.post(
"/api/0/broadcasts/",
{
"title": "bar",
"message": "foo",
"link": "http://example.com",
"cta": "Read More",
"mediaUrl": "http://example.com/image.png",
"category": "announcement",
},
)
assert response.status_code == 200, response.data
response = self.client.post(
"/api/0/broadcasts/",
{
"title": "bar",
"message": "foo",
"link": "http://example.com",
"cta": "Read More",
},
)
assert response.status_code == 200, response.data
def test_not_required_cta(self) -> None:
self.add_user_permission(user=self.user, permission="broadcasts.admin")
self.login_as(user=self.user, superuser=True)
response = self.client.post(
"/api/0/broadcasts/",
{
"title": "bar",
"message": "foo",
"link": "http://example.com",
"mediaUrl": "http://example.com/image.png",
"category": "announcement",
},
)
assert response.status_code == 200, response.data
@control_silo_test
| BroadcastCreateTest |
python | walkccc__LeetCode | solutions/3033. Modify the Matrix/3033.py | {
"start": 0,
"end": 323
} | class ____:
def modifiedMatrix(self, matrix: list[list[int]]) -> list[list[int]]:
m = len(matrix)
n = len(matrix[0])
ans = matrix.copy()
for j in range(n):
mx = max(matrix[i][j] for i in range(m))
for i in range(m):
if matrix[i][j] == -1:
ans[i][j] = mx
return ans
| Solution |
python | spack__spack | lib/spack/spack/vendor/ruamel/yaml/dumper.py | {
"start": 2084,
"end": 3565
} | class ____(Emitter, Serializer, SafeRepresenter, Resolver):
def __init__(
self,
stream,
default_style=None,
default_flow_style=None,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
block_seq_indent=None,
top_level_colon_align=None,
prefix_colon=None,
):
# type: (StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA
Emitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
allow_unicode=allow_unicode,
line_break=line_break,
block_seq_indent=block_seq_indent,
dumper=self,
)
Serializer.__init__(
self,
encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
dumper=self,
)
SafeRepresenter.__init__(
self,
default_style=default_style,
default_flow_style=default_flow_style,
dumper=self,
)
Resolver.__init__(self, loadumper=self)
| SafeDumper |
python | ansible__ansible | test/lib/ansible_test/_internal/commands/integration/filters.py | {
"start": 7004,
"end": 7114
} | class ____(PosixTargetFilter[PosixSshConfig]):
"""Target filter for POSIX SSH hosts."""
| PosixSshTargetFilter |
python | joke2k__faker | faker/providers/phone_number/en_AU/__init__.py | {
"start": 49,
"end": 1317
} | class ____(PhoneNumberProvider):
formats = (
# Local calls
"#### ####",
"####-####",
"####.####", # domain registrars apparently use this
"########",
# National dialing
"0{{area_code}} #### ####",
"0{{area_code}}-####-####",
"0{{area_code}}.####.####",
"0{{area_code}}########",
# Optional parenthesis
"(0{{area_code}}) #### ####",
"(0{{area_code}})-####-####",
"(0{{area_code}}).####.####",
"(0{{area_code}})########",
# International drops the 0
"+61 {{area_code}} #### ####",
"+61-{{area_code}}-####-####",
"+61.{{area_code}}.####.####",
"+61{{area_code}}########",
# 04 Mobile telephones (Australia-wide) mostly commonly written 4 - 3 -
# 3 instead of 2 - 4 - 4
"04## ### ###",
"04##-###-###",
"04##.###.###",
"+61 4## ### ###",
"+61-4##-###-###",
"+61.4##.###.###",
)
def area_code(self) -> str:
return self.numerify(self.random_element(["2", "3", "7", "8"]))
def phone_number(self) -> str:
pattern: str = self.random_element(self.formats)
return self.numerify(self.generator.parse(pattern))
| Provider |
python | getsentry__sentry | src/sentry/auth/providers/saml2/rippling/apps.py | {
"start": 36,
"end": 276
} | class ____(AppConfig):
name = "sentry.auth.providers.saml2.rippling"
def ready(self) -> None:
from sentry.auth import register
from .provider import RipplingSAML2Provider
register(RipplingSAML2Provider)
| Config |
python | kamyu104__LeetCode-Solutions | Python/maximum-profit-from-valid-topological-order-in-dag.py | {
"start": 52,
"end": 797
} | class ____(object):
def maxProfit(self, n, edges, score):
"""
:type n: int
:type edges: List[List[int]]
:type score: List[int]
:rtype: int
"""
def popcount(x):
return bin(x).count('1')
adj = [0]*n
for i, j in edges:
adj[j] |= 1<<i
dp = [-1]*(1<<n)
dp[0] = 0
for mask in xrange(1<<n):
if dp[mask] == -1:
continue
l = popcount(mask)+1
for i in xrange(n):
if mask&(1<<i):
continue
if (mask & adj[i]) == adj[i]:
dp[mask|(1<<i)] = max(dp[mask|(1<<i)], dp[mask]+l*score[i])
return dp[-1]
| Solution |
python | pydata__xarray | xarray/tests/test_datatree.py | {
"start": 80413,
"end": 85600
} | class ____:
def test_unary_op(self) -> None:
ds1 = xr.Dataset({"a": [5], "b": [3]})
ds2 = xr.Dataset({"x": [0.1, 0.2], "y": [10, 20]})
dt = DataTree.from_dict({"/": ds1, "/subnode": ds2})
expected = DataTree.from_dict({"/": (-ds1), "/subnode": (-ds2)})
result = -dt
assert_equal(result, expected)
def test_unary_op_inherited_coords(self) -> None:
tree = DataTree(xr.Dataset(coords={"x": [1, 2, 3]}))
tree["/foo"] = DataTree(xr.Dataset({"bar": ("x", [4, 5, 6])}))
actual = -tree
actual_dataset = actual.children["foo"].to_dataset(inherit=False)
assert "x" not in actual_dataset.coords
expected = tree.copy()
# unary ops are not applied to coordinate variables, only data variables
expected["/foo/bar"].data = np.array([-4, -5, -6])
assert_identical(actual, expected)
def test_binary_op_on_int(self) -> None:
ds1 = xr.Dataset({"a": [5], "b": [3]})
ds2 = xr.Dataset({"x": [0.1, 0.2], "y": [10, 20]})
dt = DataTree.from_dict({"/": ds1, "/subnode": ds2})
expected = DataTree.from_dict({"/": ds1 * 5, "/subnode": ds2 * 5})
result = dt * 5
assert_equal(result, expected)
def test_binary_op_on_dataarray(self) -> None:
ds1 = xr.Dataset({"a": [5], "b": [3]})
ds2 = xr.Dataset({"x": [0.1, 0.2], "y": [10, 20]})
dt = DataTree.from_dict(
{
"/": ds1,
"/subnode": ds2,
}
)
other_da = xr.DataArray(name="z", data=[0.1, 0.2], dims="z")
expected = DataTree.from_dict(
{
"/": ds1 * other_da,
"/subnode": ds2 * other_da,
}
)
result = dt * other_da
assert_equal(result, expected)
def test_binary_op_on_dataset(self) -> None:
ds1 = xr.Dataset({"a": [5], "b": [3]})
ds2 = xr.Dataset({"x": [0.1, 0.2], "y": [10, 20]})
dt = DataTree.from_dict(
{
"/": ds1,
"/subnode": ds2,
}
)
other_ds = xr.Dataset({"z": ("z", [0.1, 0.2])})
expected = DataTree.from_dict(
{
"/": ds1 * other_ds,
"/subnode": ds2 * other_ds,
}
)
result = dt * other_ds
assert_equal(result, expected)
def test_binary_op_on_datatree(self) -> None:
ds1 = xr.Dataset({"a": [5], "b": [3]})
ds2 = xr.Dataset({"x": [0.1, 0.2], "y": [10, 20]})
dt = DataTree.from_dict({"/": ds1, "/subnode": ds2})
expected = DataTree.from_dict({"/": ds1 * ds1, "/subnode": ds2 * ds2})
result = dt * dt
assert_equal(result, expected)
def test_binary_op_order_invariant(self) -> None:
tree_ab = DataTree.from_dict({"/a": Dataset({"a": 1}), "/b": Dataset({"b": 2})})
tree_ba = DataTree.from_dict({"/b": Dataset({"b": 2}), "/a": Dataset({"a": 1})})
expected = DataTree.from_dict(
{"/a": Dataset({"a": 2}), "/b": Dataset({"b": 4})}
)
actual = tree_ab + tree_ba
assert_identical(expected, actual)
def test_arithmetic_inherited_coords(self) -> None:
tree = DataTree(xr.Dataset(coords={"x": [1, 2, 3]}))
tree["/foo"] = DataTree(xr.Dataset({"bar": ("x", [4, 5, 6])}))
actual = 2 * tree
actual_dataset = actual.children["foo"].to_dataset(inherit=False)
assert "x" not in actual_dataset.coords
expected = tree.copy()
expected["/foo/bar"].data = np.array([8, 10, 12])
assert_identical(actual, expected)
def test_binary_op_commutativity_with_dataset(self) -> None:
# regression test for #9365
ds1 = xr.Dataset({"a": [5], "b": [3]})
ds2 = xr.Dataset({"x": [0.1, 0.2], "y": [10, 20]})
dt = DataTree.from_dict(
{
"/": ds1,
"/subnode": ds2,
}
)
other_ds = xr.Dataset({"z": ("z", [0.1, 0.2])})
expected = DataTree.from_dict(
{
"/": ds1 * other_ds,
"/subnode": ds2 * other_ds,
}
)
result = other_ds * dt
assert_equal(result, expected)
def test_inplace_binary_op(self) -> None:
ds1 = xr.Dataset({"a": [5], "b": [3]})
ds2 = xr.Dataset({"x": [0.1, 0.2], "y": [10, 20]})
dt = DataTree.from_dict({"/": ds1, "/subnode": ds2})
expected = DataTree.from_dict({"/": ds1 + 1, "/subnode": ds2 + 1})
dt += 1
assert_equal(dt, expected)
def test_dont_broadcast_single_node_tree(self) -> None:
# regression test for https://github.com/pydata/xarray/issues/9365#issuecomment-2291622577
ds1 = xr.Dataset({"a": [5], "b": [3]})
ds2 = xr.Dataset({"x": [0.1, 0.2], "y": [10, 20]})
dt = DataTree.from_dict({"/": ds1, "/subnode": ds2})
node = dt["/subnode"]
with pytest.raises(
xr.TreeIsomorphismError,
match=re.escape(r"children at root node do not match: ['subnode'] vs []"),
):
dt * node
| TestOps |
python | automl__auto-sklearn | autosklearn/metalearning/metafeatures/metafeatures.py | {
"start": 21451,
"end": 21735
} | class ____(MetaFeature):
def _calculate(self, X, y, logger, feat_type):
kurts = helper_functions.get_value("Kurtosisses")
std = np.nanstd(kurts) if len(kurts) > 0 else 0
return std if np.isfinite(std) else 0
@helper_functions.define("Skewnesses")
| KurtosisSTD |
python | scrapy__scrapy | tests/test_http_response.py | {
"start": 12361,
"end": 31642
} | class ____(TestResponseBase):
response_class = TextResponse
def test_replace(self):
super().test_replace()
r1 = self.response_class(
"http://www.example.com", body="hello", encoding="cp852"
)
r2 = r1.replace(url="http://www.example.com/other")
r3 = r1.replace(url="http://www.example.com/other", encoding="latin1")
assert isinstance(r2, self.response_class)
assert r2.url == "http://www.example.com/other"
self._assert_response_encoding(r2, "cp852")
assert r3.url == "http://www.example.com/other"
assert r3._declared_encoding() == "latin1"
def test_unicode_url(self):
# instantiate with unicode url without encoding (should set default encoding)
resp = self.response_class("http://www.example.com/")
self._assert_response_encoding(resp, self.response_class._DEFAULT_ENCODING)
# make sure urls are converted to str
resp = self.response_class(url="http://www.example.com/", encoding="utf-8")
assert isinstance(resp.url, str)
resp = self.response_class(
url="http://www.example.com/price/\xa3", encoding="utf-8"
)
assert resp.url == to_unicode(b"http://www.example.com/price/\xc2\xa3")
resp = self.response_class(
url="http://www.example.com/price/\xa3", encoding="latin-1"
)
assert resp.url == "http://www.example.com/price/\xa3"
resp = self.response_class(
"http://www.example.com/price/\xa3",
headers={"Content-type": ["text/html; charset=utf-8"]},
)
assert resp.url == to_unicode(b"http://www.example.com/price/\xc2\xa3")
resp = self.response_class(
"http://www.example.com/price/\xa3",
headers={"Content-type": ["text/html; charset=iso-8859-1"]},
)
assert resp.url == "http://www.example.com/price/\xa3"
def test_unicode_body(self):
unicode_string = (
"\u043a\u0438\u0440\u0438\u043b\u043b\u0438\u0447\u0435\u0441\u043a\u0438\u0439 "
"\u0442\u0435\u043a\u0441\u0442"
)
with pytest.raises(TypeError):
self.response_class("http://www.example.com", body="unicode body")
original_string = unicode_string.encode("cp1251")
r1 = self.response_class(
"http://www.example.com", body=original_string, encoding="cp1251"
)
# check response.text
assert isinstance(r1.text, str)
assert r1.text == unicode_string
def test_encoding(self):
r1 = self.response_class(
"http://www.example.com",
body=b"\xc2\xa3",
headers={"Content-type": ["text/html; charset=utf-8"]},
)
r2 = self.response_class(
"http://www.example.com", encoding="utf-8", body="\xa3"
)
r3 = self.response_class(
"http://www.example.com",
body=b"\xa3",
headers={"Content-type": ["text/html; charset=iso-8859-1"]},
)
r4 = self.response_class("http://www.example.com", body=b"\xa2\xa3")
r5 = self.response_class(
"http://www.example.com",
body=b"\xc2\xa3",
headers={"Content-type": ["text/html; charset=None"]},
)
r6 = self.response_class(
"http://www.example.com",
body=b"\xa8D",
headers={"Content-type": ["text/html; charset=gb2312"]},
)
r7 = self.response_class(
"http://www.example.com",
body=b"\xa8D",
headers={"Content-type": ["text/html; charset=gbk"]},
)
r8 = self.response_class(
"http://www.example.com",
body=codecs.BOM_UTF8 + b"\xc2\xa3",
headers={"Content-type": ["text/html; charset=cp1251"]},
)
r9 = self.response_class(
"http://www.example.com",
body=b"\x80",
headers={
"Content-type": [b"application/x-download; filename=\x80dummy.txt"]
},
)
assert r1._headers_encoding() == "utf-8"
assert r2._headers_encoding() is None
assert r2._declared_encoding() == "utf-8"
self._assert_response_encoding(r2, "utf-8")
assert r3._headers_encoding() == "cp1252"
assert r3._declared_encoding() == "cp1252"
assert r4._headers_encoding() is None
assert r5._headers_encoding() is None
assert r8._headers_encoding() == "cp1251"
assert r9._headers_encoding() is None
assert r8._declared_encoding() == "utf-8"
assert r9._declared_encoding() is None
self._assert_response_encoding(r5, "utf-8")
self._assert_response_encoding(r8, "utf-8")
self._assert_response_encoding(r9, "cp1252")
assert r4._body_inferred_encoding() is not None
assert r4._body_inferred_encoding() != "ascii"
self._assert_response_values(r1, "utf-8", "\xa3")
self._assert_response_values(r2, "utf-8", "\xa3")
self._assert_response_values(r3, "iso-8859-1", "\xa3")
self._assert_response_values(r6, "gb18030", "\u2015")
self._assert_response_values(r7, "gb18030", "\u2015")
self._assert_response_values(r9, "cp1252", "€")
# TextResponse (and subclasses) must be passed a encoding when instantiating with unicode bodies
with pytest.raises(TypeError):
self.response_class("http://www.example.com", body="\xa3")
def test_declared_encoding_invalid(self):
"""Check that unknown declared encodings are ignored"""
r = self.response_class(
"http://www.example.com",
headers={"Content-type": ["text/html; charset=UNKNOWN"]},
body=b"\xc2\xa3",
)
assert r._declared_encoding() is None
self._assert_response_values(r, "utf-8", "\xa3")
def test_utf16(self):
"""Test utf-16 because UnicodeDammit is known to have problems with"""
r = self.response_class(
"http://www.example.com",
body=b"\xff\xfeh\x00i\x00",
encoding="utf-16",
)
self._assert_response_values(r, "utf-16", "hi")
def test_invalid_utf8_encoded_body_with_valid_utf8_BOM(self):
r6 = self.response_class(
"http://www.example.com",
headers={"Content-type": ["text/html; charset=utf-8"]},
body=b"\xef\xbb\xbfWORD\xe3\xab",
)
assert r6.encoding == "utf-8"
assert r6.text in {
"WORD\ufffd\ufffd", # w3lib < 1.19.0
"WORD\ufffd", # w3lib >= 1.19.0
}
def test_bom_is_removed_from_body(self):
# Inferring encoding from body also cache decoded body as sideeffect,
# this test tries to ensure that calling response.encoding and
        # response.text in indistinct order doesn't affect final
        # values for encoding and decoded body.
url = "http://example.com"
body = b"\xef\xbb\xbfWORD"
headers = {"Content-type": ["text/html; charset=utf-8"]}
# Test response without content-type and BOM encoding
response = self.response_class(url, body=body)
assert response.encoding == "utf-8"
assert response.text == "WORD"
response = self.response_class(url, body=body)
assert response.text == "WORD"
assert response.encoding == "utf-8"
# Body caching sideeffect isn't triggered when encoding is declared in
# content-type header but BOM still need to be removed from decoded
# body
response = self.response_class(url, headers=headers, body=body)
assert response.encoding == "utf-8"
assert response.text == "WORD"
response = self.response_class(url, headers=headers, body=body)
assert response.text == "WORD"
assert response.encoding == "utf-8"
def test_replace_wrong_encoding(self):
"""Test invalid chars are replaced properly"""
r = self.response_class(
"http://www.example.com",
encoding="utf-8",
body=b"PREFIX\xe3\xabSUFFIX",
)
# XXX: Policy for replacing invalid chars may suffer minor variations
# but it should always contain the unicode replacement char ('\ufffd')
assert "\ufffd" in r.text, repr(r.text)
assert "PREFIX" in r.text, repr(r.text)
assert "SUFFIX" in r.text, repr(r.text)
# Do not destroy html tags due to encoding bugs
r = self.response_class(
"http://example.com",
encoding="utf-8",
body=b"\xf0<span>value</span>",
)
assert "<span>value</span>" in r.text, repr(r.text)
# FIXME: This test should pass once we stop using BeautifulSoup's UnicodeDammit in TextResponse
# r = self.response_class("http://www.example.com", body=b'PREFIX\xe3\xabSUFFIX')
# assert '\ufffd' in r.text, repr(r.text)
def test_selector(self):
body = b"<html><head><title>Some page</title><body></body></html>"
response = self.response_class("http://www.example.com", body=body)
assert isinstance(response.selector, Selector)
assert response.selector.type == "html"
assert response.selector is response.selector # property is cached
assert response.selector.response is response
assert response.selector.xpath("//title/text()").getall() == ["Some page"]
assert response.selector.css("title::text").getall() == ["Some page"]
assert response.selector.re("Some (.*)</title>") == ["page"]
def test_selector_shortcuts(self):
body = b"<html><head><title>Some page</title><body></body></html>"
response = self.response_class("http://www.example.com", body=body)
assert (
response.xpath("//title/text()").getall()
== response.selector.xpath("//title/text()").getall()
)
assert (
response.css("title::text").getall()
== response.selector.css("title::text").getall()
)
def test_selector_shortcuts_kwargs(self):
body = b'<html><head><title>Some page</title><body><p class="content">A nice paragraph.</p></body></html>'
response = self.response_class("http://www.example.com", body=body)
assert (
response.xpath(
"normalize-space(//p[@class=$pclass])", pclass="content"
).getall()
== response.xpath('normalize-space(//p[@class="content"])').getall()
)
assert (
response.xpath(
"//title[count(following::p[@class=$pclass])=$pcount]/text()",
pclass="content",
pcount=1,
).getall()
== response.xpath(
'//title[count(following::p[@class="content"])=1]/text()'
).getall()
)
def test_urljoin_with_base_url(self):
"""Test urljoin shortcut which also evaluates base-url through get_base_url()."""
body = b'<html><body><base href="https://example.net"></body></html>'
joined = self.response_class("http://www.example.com", body=body).urljoin(
"/test"
)
absolute = "https://example.net/test"
assert joined == absolute
body = b'<html><body><base href="/elsewhere"></body></html>'
joined = self.response_class("http://www.example.com", body=body).urljoin(
"test"
)
absolute = "http://www.example.com/test"
assert joined == absolute
body = b'<html><body><base href="/elsewhere/"></body></html>'
joined = self.response_class("http://www.example.com", body=body).urljoin(
"test"
)
absolute = "http://www.example.com/elsewhere/test"
assert joined == absolute
def test_follow_selector(self):
resp = self._links_response()
urls = [
"http://example.com/sample2.html",
"http://example.com/sample3.html",
"http://example.com/sample3.html",
"http://example.com/sample3.html",
"http://example.com/sample3.html#foo",
"http://www.google.com/something",
"http://example.com/innertag.html",
]
# select <a> elements
for sellist in [resp.css("a"), resp.xpath("//a")]:
for sel, url in zip(sellist, urls, strict=False):
self._assert_followed_url(sel, url, response=resp)
# select <link> elements
self._assert_followed_url(
Selector(text='<link href="foo"></link>').css("link")[0],
"http://example.com/foo",
response=resp,
)
# href attributes should work
for sellist in [resp.css("a::attr(href)"), resp.xpath("//a/@href")]:
for sel, url in zip(sellist, urls, strict=False):
self._assert_followed_url(sel, url, response=resp)
# non-a elements are not supported
with pytest.raises(
ValueError, match="Only <a> and <link> elements are supported"
):
resp.follow(resp.css("div")[0])
def test_follow_selector_list(self):
resp = self._links_response()
with pytest.raises(ValueError, match="SelectorList"):
resp.follow(resp.css("a"))
def test_follow_selector_invalid(self):
resp = self._links_response()
with pytest.raises(ValueError, match="Unsupported"):
resp.follow(resp.xpath("count(//div)")[0])
def test_follow_selector_attribute(self):
resp = self._links_response()
for src in resp.css("img::attr(src)"):
self._assert_followed_url(src, "http://example.com/sample2.jpg")
def test_follow_selector_no_href(self):
resp = self.response_class(
url="http://example.com",
body=b"<html><body><a name=123>click me</a></body></html>",
)
with pytest.raises(ValueError, match="no href"):
resp.follow(resp.css("a")[0])
def test_follow_whitespace_selector(self):
resp = self.response_class(
"http://example.com",
body=b"""<html><body><a href=" foo\n">click me</a></body></html>""",
)
self._assert_followed_url(
resp.css("a")[0], "http://example.com/foo", response=resp
)
self._assert_followed_url(
resp.css("a::attr(href)")[0],
"http://example.com/foo",
response=resp,
)
def test_follow_encoding(self):
resp1 = self.response_class(
"http://example.com",
encoding="utf8",
body='<html><body><a href="foo?привет">click me</a></body></html>'.encode(),
)
req = self._assert_followed_url(
resp1.css("a")[0],
"http://example.com/foo?%D0%BF%D1%80%D0%B8%D0%B2%D0%B5%D1%82",
response=resp1,
)
assert req.encoding == "utf8"
resp2 = self.response_class(
"http://example.com",
encoding="cp1251",
body='<html><body><a href="foo?привет">click me</a></body></html>'.encode(
"cp1251"
),
)
req = self._assert_followed_url(
resp2.css("a")[0],
"http://example.com/foo?%EF%F0%E8%E2%E5%F2",
response=resp2,
)
assert req.encoding == "cp1251"
def test_follow_flags(self):
res = self.response_class("http://example.com/")
fol = res.follow("http://example.com/", flags=["cached", "allowed"])
assert fol.flags == ["cached", "allowed"]
def test_follow_all_flags(self):
re = self.response_class("http://www.example.com/")
urls = [
"http://www.example.com/",
"http://www.example.com/2",
"http://www.example.com/foo",
]
fol = re.follow_all(urls, flags=["cached", "allowed"])
for req in fol:
assert req.flags == ["cached", "allowed"]
def test_follow_all_css(self):
expected = [
"http://example.com/sample3.html",
"http://example.com/innertag.html",
]
response = self._links_response()
extracted = [r.url for r in response.follow_all(css='a[href*="example.com"]')]
assert expected == extracted
def test_follow_all_css_skip_invalid(self):
expected = [
"http://example.com/page/1/",
"http://example.com/page/3/",
"http://example.com/page/4/",
]
response = self._links_response_no_href()
extracted1 = [r.url for r in response.follow_all(css=".pagination a")]
assert expected == extracted1
extracted2 = [r.url for r in response.follow_all(response.css(".pagination a"))]
assert expected == extracted2
def test_follow_all_xpath(self):
expected = [
"http://example.com/sample3.html",
"http://example.com/innertag.html",
]
response = self._links_response()
extracted = response.follow_all(xpath='//a[contains(@href, "example.com")]')
assert expected == [r.url for r in extracted]
def test_follow_all_xpath_skip_invalid(self):
expected = [
"http://example.com/page/1/",
"http://example.com/page/3/",
"http://example.com/page/4/",
]
response = self._links_response_no_href()
extracted1 = [
r.url for r in response.follow_all(xpath='//div[@id="pagination"]/a')
]
assert expected == extracted1
extracted2 = [
r.url
for r in response.follow_all(response.xpath('//div[@id="pagination"]/a'))
]
assert expected == extracted2
def test_follow_all_too_many_arguments(self):
response = self._links_response()
with pytest.raises(
ValueError, match="Please supply exactly one of the following arguments"
):
response.follow_all(
css='a[href*="example.com"]',
xpath='//a[contains(@href, "example.com")]',
)
def test_json_response(self):
json_body = b"""{"ip": "109.187.217.200"}"""
json_response = self.response_class("http://www.example.com", body=json_body)
assert json_response.json() == {"ip": "109.187.217.200"}
text_body = b"""<html><body>text</body></html>"""
text_response = self.response_class("http://www.example.com", body=text_body)
with pytest.raises(
ValueError, match=r"(Expecting value|Unexpected '<'): line 1"
):
text_response.json()
def test_cache_json_response(self):
json_valid_bodies = [b"""{"ip": "109.187.217.200"}""", b"""null"""]
for json_body in json_valid_bodies:
json_response = self.response_class(
"http://www.example.com", body=json_body
)
with mock.patch("json.loads") as mock_json:
for _ in range(2):
json_response.json()
mock_json.assert_called_once_with(json_body)
| TestTextResponse |
python | ansible__ansible | lib/ansible/errors/__init__.py | {
"start": 8565,
"end": 8720
} | class ____(AnsibleConnectionFailure):
"""Invalid username/password/key."""
_default_message = "Failed to authenticate."
| AnsibleAuthenticationFailure |
python | google__pytype | pytype/tests/test_builtins1.py | {
"start": 175,
"end": 12746
} | class ____(test_base.BaseTest):
"""Tests for builtin methods and classes."""
def test_repr1(self):
self.Check("""
def t_testRepr1(x):
return repr(x)
assert_type(t_testRepr1(4), str)
""")
@test_base.skip("b/238794928: Function inference will be removed.")
def test_repr2(self):
ty = self.Infer("""
def t_testRepr2(x):
return repr(x)
t_testRepr2(4)
t_testRepr2(1.234)
t_testRepr2('abc')
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Union
def t_testRepr2(x: Union[float, int, str]) -> str: ...
""",
)
def test_repr3(self):
ty = self.Infer("""
def t_testRepr3(x):
return repr(x)
t_testRepr3(__any_object__())
""")
self.assertTypesMatchPytd(
ty,
"""
def t_testRepr3(x) -> str: ...
""",
)
def test_eval_solve(self):
self.Check("""
from typing import Any
def t_testEval(x):
return eval(x)
assert_type(t_testEval(4), Any)
""")
def test_isinstance1(self):
ty = self.Infer("""
def t_testIsinstance1(x):
return isinstance(x, int)
""")
self.assertTypesMatchPytd(
ty,
"""
def t_testIsinstance1(x) -> bool: ...
""",
)
def test_isinstance2(self):
ty = self.Infer("""
class Bar:
def foo(self):
return isinstance(self, Baz)
class Baz(Bar):
pass
""")
self.assertTypesMatchPytd(
ty,
"""
class Bar:
def foo(self) -> bool: ...
class Baz(Bar):
pass
""",
)
def test_pow1(self):
ty = self.Infer("""
def t_testPow1():
# pow(int, int) returns int, or float if the exponent is negative.
# Hence, it's a handy function for testing UnionType returns.
return pow(1, -2)
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Union
def t_testPow1() -> Union[float, int]: ...
""",
)
def test_max1(self):
ty = self.Infer("""
def t_testMax1():
# max is a parameterized function
return max(1, 2)
""")
self.assertTypesMatchPytd(
ty,
"""
def t_testMax1() -> int: ...
""",
)
def test_max2(self):
ty = self.Infer("""
def t_testMax2(x, y):
# max is a parameterized function
return max(x, y)
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
def t_testMax2(x, y) -> Any: ...
""",
)
def test_zip_error(self):
errors = self.CheckWithErrors("zip([], [], [], 42) # wrong-arg-types[e]")
self.assertErrorRegexes(errors, {"e": r"Iterable.*int"})
def test_dict_defaults(self):
self.Check("""
from typing import Dict
def t_testDictDefaults(x):
d = {}
res = d.setdefault(x, str(x))
return res
assert_type(t_testDictDefaults(3), str)
""")
def test_dict_get(self):
ty = self.Infer("""
def f():
mydict = {"42": 42}
return mydict.get("42")
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Union
def f() -> Union[int, NoneType]: ...
""",
)
def test_dict_get_or_default(self):
ty = self.Infer("""
def f():
mydict = {"42": 42}
return mydict.get("42", False)
""")
self.assertTypesMatchPytd(
ty,
"""
def f() -> int: ...
""",
)
def test_list_init0(self):
ty = self.Infer("""
def t_testListInit0(x):
return list(x)
""")
self.assertTypesMatchPytd(
ty,
"""
def t_testListInit0(x) -> list: ...
""",
)
def test_list_init1(self):
ty = self.Infer("""
def t_testListInit1(x, y):
return x + [y]
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
def t_testListInit1(x, y) -> Any: ...
""",
)
def test_list_init2(self):
ty = self.Infer("""
def t_testListInit2(x, i):
return x[i]
z = __any_object__
t_testListInit2(__any_object__, z)
z + 1
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
z = ... # type: Any
def t_testListInit2(x, i) -> Any: ...
""",
)
def test_list_init3(self):
self.Check("""
def t_testListInit3(x, i):
return x[i]
assert_type(t_testListInit3([1,2,3,'abc'], 0), int)
""")
def test_list_init4(self):
self.Check("""
from typing import Any
def t_testListInit4(x):
l = _i_(list(x))
assert_type(l, list)
return l[0]
def _i_(x):
return x
assert_type(t_testListInit4(__any_object__), Any)
""")
def test_abs_int(self):
self.Check("""
def t_testAbsInt(x):
return abs(x)
assert_type(t_testAbsInt(1), int)
""")
def test_abs(self):
ty = self.Infer("""
def t_testAbs(x):
return abs(x)
t_testAbs(__any_object__)
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
# Since SupportsAbs.__abs__ returns a type parameter, the return type
# of abs(...) can be anything.
def t_testAbs(x) -> Any: ...
""",
)
def test_abs_union(self):
ty = self.Infer("""
class Foo:
def __abs__(self):
return "hello"
class Bar:
def __abs__(self):
return 42
x = Foo() if __random__ else Bar()
y = abs(x)
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Any, Union
x = ... # type: Union[Bar, Foo]
y = ... # type: Union[str, int]
class Bar:
def __abs__(self) -> int: ...
class Foo:
def __abs__(self) -> str: ...
""",
)
def test_cmp(self):
ty = self.Infer("""
def t_testCmp(x, y):
return cmp(x, y)
""")
self.assertTypesMatchPytd(
ty,
"""
def t_testCmp(x, y) -> int: ...
""",
)
@test_base.skip("b/238794928: Function inference will be removed.")
def test_cmp_multi(self):
ty = self.Infer("""
def t_testCmpMulti(x, y):
return cmp(x, y)
t_testCmpMulti(1, 2)
t_testCmpMulti(1, 2.0)
t_testCmpMulti(1.0, 2)
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Union
def t_testCmpMulti(x: Union[float, int], y: int) -> int: ...
def t_testCmpMulti(x: int, y: float) -> int: ...
""",
)
def test_cmp_str(self):
self.Check("""
def t_testCmpStr(x, y):
return cmp(x, y)
assert_type(t_testCmpStr("abc", "def"), int)
""")
def test_cmp_str2(self):
self.Check("""
def t_testCmpStr2(x, y):
return cmp(x, y)
assert_type(t_testCmpStr2("abc", __any_object__), int)
""")
def test_tuple(self):
self.Infer("""
def f(x):
return x
def g(args):
f(*tuple(args))
""")
def test_open(self):
ty = self.Infer("""
def f(x):
with open(x, "r") as fi:
return fi.read()
""")
self.assertTypesMatchPytd(
ty,
"""
def f(x) -> str: ...
""",
)
def test_open_error(self):
src = "open(0, 1, 2, 3, 4, 5, 6, 7, 8, 9) # wrong-arg-count"
self.CheckWithErrors(src)
def test_signal(self):
ty = self.Infer("""
import signal
def f():
signal.signal(signal.SIGTERM, 0)
""")
self.assertTypesMatchPytd(
ty,
"""
import signal
def f() -> NoneType: ...
""",
)
def test_sys_argv(self):
ty = self.Infer("""
import sys
def args():
return ' '.join(sys.argv)
args()
""")
self.assertTypesMatchPytd(
ty,
"""
import sys
def args() -> str: ...
""",
)
def test_setattr(self):
ty = self.Infer("""
class Foo:
def __init__(self, x):
for attr in x.__dict__:
setattr(self, attr, getattr(x, attr))
""")
self.assertTypesMatchPytd(
ty,
"""
class Foo:
def __init__(self, x) -> NoneType: ...
""",
)
def test_array_smoke(self):
ty = self.Infer("""
import array
class Foo:
def __init__(self):
array.array('i')
""")
ty.Lookup("Foo") # smoke test
def test_array(self):
ty = self.Infer("""
import array
class Foo:
def __init__(self):
self.bar = array.array('i', [1, 2, 3])
""")
self.assertTypesMatchPytd(
ty,
"""
import array
class Foo:
bar = ... # type: array.array[int]
def __init__(self) -> None: ...
""",
)
def test_inherit_from_builtin(self):
ty = self.Infer("""
class Foo(list):
pass
""")
self.assertTypesMatchPytd(
ty,
"""
class Foo(list):
pass
""",
)
def test_os_path(self):
ty = self.Infer("""
import os
class Foo:
bar = os.path.join('hello', 'world')
""")
ty.Lookup("Foo") # smoke test
def test_hasattr(self):
ty = self.Infer("""
class Bar:
pass
a = hasattr(Bar, 'foo')
""")
self.assertTypesMatchPytd(
ty,
"""
class Bar:
pass
a : bool
""",
)
def test_time(self):
ty = self.Infer("""
import time
def f(x):
if x:
return time.mktime(time.struct_time((1, 2, 3, 4, 5, 6, 7, 8, 9)))
else:
return 3j
""")
self.assertTypesMatchPytd(
ty,
"""
import time
from typing import Union
def f(x) -> Union[complex, float]: ...
""",
)
def test_div_mod(self):
ty = self.Infer("""
def seed(self, a=None):
a = int(0)
divmod(a, 30268)
""")
self.assertTypesMatchPytd(
ty,
"""
def seed(self, a=...) -> NoneType: ...
""",
)
def test_div_mod2(self):
ty = self.Infer("""
def seed(self, a=None):
if a is None:
a = int(16)
return divmod(a, 30268)
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Any, Tuple
def seed(self, a = ...) -> Tuple[Any, Any]: ...
""",
)
def test_join(self):
ty = self.Infer("""
def f(elements):
return ",".join(t for t in elements)
""")
self.assertTypesMatchPytd(
ty,
"""
def f(elements) -> str: ...
""",
)
def test_version_info(self):
ty = self.Infer("""
import sys
def f():
return 'py%d' % sys.version_info[0]
""")
self.assertTypesMatchPytd(
ty,
"""
import sys
def f() -> str: ...
""",
)
def test_inherit_from_namedtuple(self):
self.Check("""
import collections
class Foo(
collections.namedtuple('_Foo', 'x y z')):
pass
a = Foo(1, 2, 3)
""")
@test_base.skip("Does not work - x, y and z all get set to Any")
def test_store_and_load_from_namedtuple(self):
self.Check("""
import collections
t = collections.namedtuple('t', ['x', 'y', 'z'])
t.x = 3
t.y = "foo"
t.z = 1j
x = t.x
y = t.y
z = t.z
assert_type(x, int)
assert_type(y, str)
assert_type(z, complex)
""")
def test_type_equals(self):
ty = self.Infer("""
def f(n):
return type(n) == type(0)
""")
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
def f(n) -> Any: ...
""",
)
def test_type_equals2(self):
ty = self.Infer("""
import types
def f(mod):
return type(mod) == types.ModuleType
""")
self.assertTypesMatchPytd(
ty,
"""
import types
from typing import Any
def f(mod) -> Any: ...
""",
)
def test_date_time(self):
ty = self.Infer("""
import datetime
def f(date):
return date.ctime()
""")
self.assertTypesMatchPytd(
ty,
"""
import datetime
from typing import Any
def f(date) -> Any: ...
""",
)
def test_from_utc(self):
ty = self.Infer("""
import datetime
def f(tz):
tz.fromutc(datetime.datetime(1929, 10, 29))
""")
self.assertTypesMatchPytd(
ty,
"""
import datetime
def f(tz) -> NoneType: ...
""",
)
if __name__ == "__main__":
test_base.main()
| BuiltinTests |
python | OmkarPathak__pygorithm | tests/test_geometry.py | {
"start": 208,
"end": 1086
} | class ____(unittest.TestCase):
def setUp(self):
# first pair of objects
self.coord1 = rect_broad_phase.Coord(1, 1)
self.coord2 = rect_broad_phase.Coord(6, 8)
self.simpleRect1 = rect_broad_phase.SimpleRectangle(self.coord1, self.coord2)
self.coord3 = rect_broad_phase.Coord(4, 0)
self.coord4 = rect_broad_phase.Coord(7, 4)
self.simpleRect2 = rect_broad_phase.SimpleRectangle(self.coord3, self.coord4)
# second pair
self.coord1 = rect_broad_phase.Coord(1, 1)
self.coord2 = rect_broad_phase.Coord(2, 3)
self.simpleRect3 = rect_broad_phase.SimpleRectangle(self.coord1, self.coord2)
self.coord3 = rect_broad_phase.Coord(4, 3)
self.coord4 = rect_broad_phase.Coord(7, 8)
self.simpleRect4 = rect_broad_phase.SimpleRectangle(self.coord3, self.coord4)
| TestCollisionDetection |
python | pytorch__pytorch | torchgen/utils.py | {
"start": 3484,
"end": 13319
} | class ____:
def __init__(
self,
install_dir: str | Path,
template_dir: str | Path,
dry_run: bool,
) -> None:
self.install_dir = Path(install_dir)
self.template_dir = Path(template_dir)
self.files: set[Path] = set()
self.dry_run = dry_run
@property
def filenames(self) -> frozenset[str]:
return frozenset({file.as_posix() for file in self.files})
def _write_if_changed(self, filename: str | Path, contents: str) -> None:
file = Path(filename)
old_contents: str | None = None
try:
old_contents = file.read_text(encoding="utf-8")
except OSError:
pass
if contents != old_contents:
# Create output directory if it doesn't exist
file.parent.mkdir(parents=True, exist_ok=True)
file.write_text(contents, encoding="utf-8")
# Read from template file and replace pattern with callable (type could be dict or str).
def substitute_with_template(
self,
template_fn: str | Path,
env_callable: Callable[[], str | dict[str, Any]],
) -> str:
assert not Path(template_fn).is_absolute(), (
f"template_fn must be relative: {template_fn}"
)
template_path = self.template_dir / template_fn
env = env_callable()
if isinstance(env, dict):
if "generated_comment" not in env:
generator_default = TORCHGEN_ROOT / "gen.py"
try:
generator = Path(
sys.modules["__main__"].__file__ or generator_default
).absolute()
except (KeyError, AttributeError):
generator = generator_default.absolute()
try:
generator_path = generator.relative_to(REPO_ROOT).as_posix()
except ValueError:
generator_path = generator.name
env = {
**env, # copy the original dict instead of mutating it
"generated_comment": (
"@" + f"generated by {generator_path} from {template_fn}"
),
}
template = _read_template(template_path)
substitute_out = template.substitute(env)
# Ensure an extra blank line between the class/function definition
# and the docstring of the previous class/function definition.
# NB: It is generally not recommended to have docstrings in pyi stub
# files. But if there are any, we need to ensure that the file
# is properly formatted.
return re.sub(
r'''
(""")\n+ # match triple quotes
(
(\s*@.+\n)* # match decorators if any
\s*(class|def) # match class/function definition
)
''',
r"\g<1>\n\n\g<2>",
substitute_out,
flags=re.VERBOSE,
)
if isinstance(env, str):
return env
assert_never(env)
def write_with_template(
self,
filename: str | Path,
template_fn: str | Path,
env_callable: Callable[[], str | dict[str, Any]],
) -> None:
filename = Path(filename)
        assert not filename.is_absolute(), f"filename must be relative: {filename}"
file = self.install_dir / filename
assert file not in self.files, f"duplicate file write {file}"
self.files.add(file)
if not self.dry_run:
substitute_out = self.substitute_with_template(
template_fn=template_fn,
env_callable=env_callable,
)
self._write_if_changed(filename=file, contents=substitute_out)
def write(
self,
filename: str | Path,
env_callable: Callable[[], str | dict[str, Any]],
) -> None:
self.write_with_template(filename, filename, env_callable)
def write_sharded(
self,
filename: str | Path,
items: Iterable[T],
*,
key_fn: Callable[[T], str],
env_callable: Callable[[T], dict[str, list[str]]],
num_shards: int,
base_env: dict[str, Any] | None = None,
sharded_keys: set[str],
) -> None:
self.write_sharded_with_template(
filename,
filename,
items,
key_fn=key_fn,
env_callable=env_callable,
num_shards=num_shards,
base_env=base_env,
sharded_keys=sharded_keys,
)
def write_sharded_with_template(
self,
filename: str | Path,
template_fn: str | Path,
items: Iterable[T],
*,
key_fn: Callable[[T], str],
env_callable: Callable[[T], dict[str, list[str]]],
num_shards: int,
base_env: dict[str, Any] | None = None,
sharded_keys: set[str],
) -> None:
file = Path(filename)
        assert not file.is_absolute(), f"filename must be relative: {file}"
everything: dict[str, Any] = {"shard_id": "Everything"}
shards: list[dict[str, Any]] = [
{"shard_id": f"_{i}"} for i in range(num_shards)
]
all_shards = [everything] + shards
if base_env is not None:
for shard in all_shards:
shard.update(base_env)
for key in sharded_keys:
for shard in all_shards:
if key in shard:
assert isinstance(shard[key], list), (
"sharded keys in base_env must be a list"
)
shard[key] = shard[key].copy()
else:
shard[key] = []
def merge_env(into: dict[str, list[str]], from_: dict[str, list[str]]) -> None:
for k, v in from_.items():
assert k in sharded_keys, f"undeclared sharded key {k}"
into[k] += v
if self.dry_run:
# Dry runs don't write any templates, so incomplete environments are fine
items = ()
for item in items:
key = key_fn(item)
sid = string_stable_hash(key) % num_shards
env = env_callable(item)
merge_env(shards[sid], env)
merge_env(everything, env)
for shard in all_shards:
shard_id = shard["shard_id"]
self.write_with_template(
file.with_stem(f"{file.stem}{shard_id}"),
template_fn,
lambda: shard,
)
# filenames is used to track compiled files, but FooEverything.cpp isn't meant to be compiled
self.files.discard(self.install_dir / file.with_stem(f"{file.stem}Everything"))
def write_outputs(self, variable_name: str, filename: str | Path) -> None:
"""Write a file containing the list of all outputs which are generated by this script."""
content = "\n".join(
(
"set(",
variable_name,
# Use POSIX paths to avoid invalid escape sequences on Windows
*(f' "{file.as_posix()}"' for file in sorted(self.files)),
")",
)
)
self._write_if_changed(filename, content)
def template_dir_for_comments(self) -> str:
"""
This needs to be deterministic. The template dir is an absolute path
that varies across builds. So, just use the path relative to this file,
which will point to the codegen source but will be stable.
"""
return os.path.relpath(self.template_dir, os.path.dirname(__file__))
# Helper function to generate file manager
def make_file_manager(
options: Namespace,
install_dir: str | Path | None = None,
) -> FileManager:
template_dir = os.path.join(options.source_path, "templates")
install_dir = install_dir if install_dir else options.install_dir
return FileManager(
install_dir=install_dir,
template_dir=template_dir,
dry_run=options.dry_run,
)
# Helper function to create a pretty representation for dataclasses
def dataclass_repr(
obj: Any,
indent: int = 0,
width: int = 80,
) -> str:
return pformat(obj, indent, width)
def _format_dict(
attr: dict[Any, Any],
indent: int,
width: int,
curr_indent: int,
) -> str:
curr_indent += indent + 3
dict_repr = []
for k, v in attr.items():
k_repr = repr(k)
v_str = (
pformat(v, indent, width, curr_indent + len(k_repr))
if is_dataclass(v)
else repr(v)
)
dict_repr.append(f"{k_repr}: {v_str}")
return _format(dict_repr, indent, width, curr_indent, "{", "}")
def _format_list(
attr: list[Any] | set[Any] | tuple[Any, ...],
indent: int,
width: int,
curr_indent: int,
) -> str:
curr_indent += indent + 1
list_repr = [
pformat(l, indent, width, curr_indent) if is_dataclass(l) else repr(l)
for l in attr
]
start, end = ("[", "]") if isinstance(attr, list) else ("(", ")")
return _format(list_repr, indent, width, curr_indent, start, end)
def _format(
fields_str: list[str],
indent: int,
width: int,
curr_indent: int,
start: str,
end: str,
) -> str:
delimiter, curr_indent_str = "", ""
# if it exceed the max width then we place one element per line
if len(repr(fields_str)) >= width:
delimiter = "\n"
curr_indent_str = " " * curr_indent
indent_str = " " * indent
body = f", {delimiter}{curr_indent_str}".join(fields_str)
return f"{start}{indent_str}{body}{end}"
| FileManager |
python | PyCQA__pylint | tests/functional/ext/no_self_use/no_self_use.py | {
"start": 2086,
"end": 2227
} | class ____:
def __init__(self):
self.store = {}
def get(self, key, default=None):
return self.store.get(key, default)
| A |
python | huggingface__transformers | src/transformers/models/qwen2/modeling_qwen2.py | {
"start": 21922,
"end": 22027
} | class ____(GenericForSequenceClassification, Qwen2PreTrainedModel):
pass
| Qwen2ForSequenceClassification |
python | getsentry__sentry | src/sentry/snuba/metrics/naming_layer/mri.py | {
"start": 5086,
"end": 8575
} | class ____(Enum):
# Ingested
USER = "s:transactions/user@none"
DURATION = "d:transactions/duration@millisecond"
COUNT_PER_ROOT_PROJECT = "c:transactions/count_per_root_project@none"
MEASUREMENTS_FCP = "d:transactions/measurements.fcp@millisecond"
MEASUREMENTS_LCP = "d:transactions/measurements.lcp@millisecond"
MEASUREMENTS_APP_START_COLD = "d:transactions/measurements.app_start_cold@millisecond"
MEASUREMENTS_APP_START_WARM = "d:transactions/measurements.app_start_warm@millisecond"
MEASUREMENTS_CLS = "d:transactions/measurements.cls@none"
MEASUREMENTS_FID = "d:transactions/measurements.fid@millisecond"
MEASUREMENTS_FP = "d:transactions/measurements.fp@millisecond"
MEASUREMENTS_FRAMES_FROZEN = "d:transactions/measurements.frames_frozen@none"
MEASUREMENTS_FRAMES_FROZEN_RATE = "d:transactions/measurements.frames_frozen_rate@ratio"
MEASUREMENTS_FRAMES_SLOW = "d:transactions/measurements.frames_slow@none"
MEASUREMENTS_FRAMES_SLOW_RATE = "d:transactions/measurements.frames_slow_rate@ratio"
MEASUREMENTS_FRAMES_TOTAL = "d:transactions/measurements.frames_total@none"
MEASUREMENTS_TIME_TO_INITIAL_DISPLAY = (
"d:transactions/measurements.time_to_initial_display@millisecond"
)
MEASUREMENTS_TIME_TO_FULL_DISPLAY = (
"d:transactions/measurements.time_to_full_display@millisecond"
)
MEASUREMENTS_STALL_COUNT = "d:transactions/measurements.stall_count@none"
MEASUREMENTS_STALL_LONGEST_TIME = "d:transactions/measurements.stall_longest_time@millisecond"
MEASUREMENTS_STALL_PERCENTAGE = "d:transactions/measurements.stall_percentage@ratio"
MEASUREMENTS_STALL_TOTAL_TIME = "d:transactions/measurements.stall_total_time@millisecond"
MEASUREMENTS_TTFB = "d:transactions/measurements.ttfb@millisecond"
MEASUREMENTS_TTFB_REQUEST_TIME = "d:transactions/measurements.ttfb.requesttime@millisecond"
BREAKDOWNS_HTTP = "d:transactions/breakdowns.span_ops.ops.http@millisecond"
BREAKDOWNS_DB = "d:transactions/breakdowns.span_ops.ops.db@millisecond"
BREAKDOWNS_BROWSER = "d:transactions/breakdowns.span_ops.ops.browser@millisecond"
BREAKDOWNS_RESOURCE = "d:transactions/breakdowns.span_ops.ops.resource@millisecond"
# Derived
ALL = "e:transactions/all@none"
ALL_DURATION = "e:transactions/all_duration@none"
FAILURE_COUNT = "e:transactions/failure_count@none"
FAILURE_RATE = "e:transactions/failure_rate@ratio"
SATISFIED = "e:transactions/satisfied@none"
TOLERATED = "e:transactions/tolerated@none"
APDEX = "e:transactions/apdex@ratio"
MISERABLE_USER = "e:transactions/user.miserable@none"
ALL_USER = "e:transactions/user.all@none"
USER_MISERY = "e:transactions/user_misery@ratio"
TEAM_KEY_TRANSACTION = "e:transactions/team_key_transaction@none"
HTTP_ERROR_COUNT = "e:transactions/http_error_count@none"
HTTP_ERROR_RATE = "e:transactions/http_error_rate@ratio"
# Spans (might be moved to their own namespace soon)
SPAN_USER = "s:spans/user@none"
SPAN_DURATION = "d:spans/duration@millisecond"
SPAN_SELF_TIME = "d:spans/exclusive_time@millisecond"
SPAN_SELF_TIME_LIGHT = "d:spans/exclusive_time_light@millisecond"
COUNT_ON_DEMAND = "c:transactions/on_demand@none"
DIST_ON_DEMAND = "d:transactions/on_demand@none"
SET_ON_DEMAND = "s:transactions/on_demand@none"
# Less granular coarse metrics
DURATION_LIGHT = "d:transactions/duration_light@millisecond"
| TransactionMRI |
python | google__jax | jax/_src/hashable_array.py | {
"start": 526,
"end": 1036
} | class ____:
__slots__ = ["val"]
val: np.ndarray
def __init__(self, val):
self.val = np.array(val, copy=True)
self.val.setflags(write=False)
def __repr__(self):
return f"HashableArray({self.val!r})"
def __str__(self):
return f"HashableArray({self.val})"
def __hash__(self):
return hash((self.val.shape, self.val.dtype, self.val.tobytes()))
def __eq__(self, other):
return isinstance(other, HashableArray) and np.array_equal(
self.val, other.val
)
| HashableArray |
python | microsoft__pyright | packages/pyright-internal/src/tests/samples/protocol10.py | {
"start": 303,
"end": 355
} | class ____:
def a(self) -> None:
pass
| Base |
python | spack__spack | var/spack/test_repos/spack_repo/builtin_mock/packages/perl_extension/package.py | {
"start": 228,
"end": 741
} | class ____(PerlPackage):
"""A package which extends perl"""
homepage = "http://www.example.com"
url = "http://www.example.com/extension1-1.0.tar.gz"
version("1.0", md5="00000000000000000000000000000010")
version("2.0", md5="00000000000000000000000000000020")
extends("perl")
def install(self, spec, prefix):
mkdirp(prefix.bin)
with open(os.path.join(prefix.bin, "perl-extension"), "w+", encoding="utf-8") as fout:
fout.write(str(spec.version))
| PerlExtension |
python | dagster-io__dagster | python_modules/dagster-graphql/dagster_graphql/client/utils.py | {
"start": 69,
"end": 214
} | class ____(Exception):
def __init__(self, *args, body=None):
super().__init__(*args)
self.body = body
| DagsterGraphQLClientError |
python | readthedocs__readthedocs.org | readthedocs/core/forms.py | {
"start": 7440,
"end": 7922
} | class ____(forms.MultipleChoiceField):
"""
For filtering searches on a facet.
Has validation for the format of facet values.
"""
def valid_value(self, value):
"""
Although this is a choice field, no choices need to be supplied.
Instead, we just validate that the value is in the correct format for
facet filtering (facet_name:value)
"""
if ":" not in value:
return False
return True
| FacetField |
python | scrapy__scrapy | tests/test_spidermiddleware_output_chain.py | {
"start": 9041,
"end": 9137
} | class ____(_NotGeneratorDoNothingMiddleware):
pass
| NotGeneratorDoNothingAfterFailureMiddleware |
python | urllib3__urllib3 | src/urllib3/_collections.py | {
"start": 5264,
"end": 6873
} | class ____(set[tuple[str, str]]):
"""
HTTPHeaderDict is unusual for a Mapping[str, str] in that it has two modes of
address.
If we directly try to get an item with a particular name, we will get a string
back that is the concatenated version of all the values:
>>> d['X-Header-Name']
'Value1, Value2, Value3'
However, if we iterate over an HTTPHeaderDict's items, we will optionally combine
these values based on whether combine=True was called when building up the dictionary
>>> d = HTTPHeaderDict({"A": "1", "B": "foo"})
>>> d.add("A", "2", combine=True)
>>> d.add("B", "bar")
>>> list(d.items())
[
('A', '1, 2'),
('B', 'foo'),
('B', 'bar'),
]
This class conforms to the interface required by the MutableMapping ABC while
also giving us the nonstandard iteration behavior we want; items with duplicate
keys, ordered by time of first insertion.
"""
_headers: HTTPHeaderDict
def __init__(self, headers: HTTPHeaderDict) -> None:
self._headers = headers
def __len__(self) -> int:
return len(list(self._headers.iteritems()))
def __iter__(self) -> typing.Iterator[tuple[str, str]]:
return self._headers.iteritems()
def __contains__(self, item: object) -> bool:
if isinstance(item, tuple) and len(item) == 2:
passed_key, passed_val = item
if isinstance(passed_key, str) and isinstance(passed_val, str):
return self._headers._has_value_for_header(passed_key, passed_val)
return False
| HTTPHeaderDictItemView |