| language | repo | path | class_span | source | target |
|---|---|---|---|---|---|
python | huggingface__transformers | src/transformers/models/granitemoe/configuration_granitemoe.py | {
"start": 1171,
"end": 9299
} | class ____(PreTrainedConfig):
r"""
This is the configuration class to store the configuration of a [`GraniteMoeModel`]. It is used to instantiate a GraniteMoe
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the GraniteMoe-3B.
Configuration objects inherit from [`PreTrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PreTrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the GraniteMoe model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`GraniteMoeModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key/value heads used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 1):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie the input and output word embeddings.
rope_parameters (`RopeParameters`, *optional*):
Dictionary containing the configuration parameters for the RoPE embeddings. The dictionary should contain
a value for `rope_theta` and optionally parameters used for scaling in case you want to use RoPE
with longer `max_position_embeddings`.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
embedding_multiplier (`float`, *optional*, defaults to 1.0): embedding multiplier
logits_scaling (`float`, *optional*, defaults to 1.0): divisor for output logits
residual_multiplier (`float`, *optional*, defaults to 1.0): residual multiplier
attention_multiplier (`float`, *optional*, defaults to 1.0): attention multiplier
num_local_experts (`int`, *optional*, defaults to 8): total number of experts
num_experts_per_tok (`int`, *optional*, defaults to 2): number of experts per token
output_router_logits (`bool`, *optional*, defaults to `False`):
Whether or not the router logits should be returned by the model. Enabling this will also
allow the model to output the auxiliary loss.
router_aux_loss_coef (`float`, *optional*, defaults to 0.001): router auxiliary loss coefficient
```python
>>> from transformers import GraniteMoeModel, GraniteMoeConfig
>>> # Initializing a GraniteMoe granitemoe-3b style configuration
>>> configuration = GraniteMoeConfig()
>>> # Initializing a model from the granitemoe-3b style configuration
>>> model = GraniteMoeModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "granitemoe"
keys_to_ignore_at_inference = ["past_key_values"]
def __init__(
self,
vocab_size: Optional[int] = 32000,
hidden_size: Optional[int] = 4096,
intermediate_size: Optional[int] = 11008,
num_hidden_layers: Optional[int] = 32,
num_attention_heads: Optional[int] = 32,
num_key_value_heads: Optional[int] = None,
hidden_act: Optional[str] = "silu",
max_position_embeddings: Optional[int] = 2048,
initializer_range: Optional[float] = 0.02,
rms_norm_eps: Optional[float] = 1e-6,
use_cache: Optional[bool] = True,
pad_token_id: Optional[int] = None,
bos_token_id: Optional[int] = 1,
eos_token_id: Optional[int] = 2,
tie_word_embeddings: Optional[bool] = False,
rope_parameters: Optional[RopeParameters | dict[str, RopeParameters]] = None,
attention_bias: Optional[bool] = False,
attention_dropout: Optional[float] = 0.0,
embedding_multiplier: Optional[float] = 1.0,
logits_scaling: Optional[float] = 1.0,
residual_multiplier: Optional[float] = 1.0,
attention_multiplier: Optional[float] = 1.0,
num_local_experts: Optional[int] = 8,
num_experts_per_tok: Optional[int] = 2,
output_router_logits: Optional[bool] = False,
router_aux_loss_coef: Optional[float] = 0.001,
**kwargs,
):
self.vocab_size = vocab_size
self.max_position_embeddings = max_position_embeddings
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
# for backward compatibility
if num_key_value_heads is None:
num_key_value_heads = num_attention_heads
self.num_key_value_heads = num_key_value_heads
self.hidden_act = hidden_act
self.initializer_range = initializer_range
self.rms_norm_eps = rms_norm_eps
self.use_cache = use_cache
self.attention_bias = attention_bias
self.attention_dropout = attention_dropout
self.embedding_multiplier = embedding_multiplier
self.logits_scaling = logits_scaling
self.residual_multiplier = residual_multiplier
self.attention_multiplier = attention_multiplier
self.num_local_experts = num_local_experts
self.num_experts_per_tok = num_experts_per_tok
self.output_router_logits = output_router_logits
self.router_aux_loss_coef = router_aux_loss_coef
self.rope_parameters = rope_parameters
super().__init__(
pad_token_id=pad_token_id,
bos_token_id=bos_token_id,
eos_token_id=eos_token_id,
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)
__all__ = ["GraniteMoeConfig"]
| GraniteMoeConfig |
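The `num_key_value_heads` docstring above describes converting a multi-head checkpoint to GQA by mean-pooling the original key/value heads within each group. A minimal pure-Python sketch of that pooling (list-based shapes and the helper name are illustrative assumptions, not the actual conversion script):

```python
def meanpool_kv_heads(heads, num_key_value_heads):
    """Mean-pool per-head K/V vectors into GQA groups.

    heads: list of per-head vectors (grouped contiguously), one entry per
    original attention head. Returns one pooled vector per KV head.
    """
    assert len(heads) % num_key_value_heads == 0
    group = len(heads) // num_key_value_heads
    pooled = []
    for g in range(num_key_value_heads):
        chunk = heads[g * group:(g + 1) * group]
        # Average each coordinate across the heads in this group.
        pooled.append([sum(col) / group for col in zip(*chunk)])
    return pooled

# Four MHA heads pooled into two KV heads (groups of two).
heads = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
pooled = meanpool_kv_heads(heads, num_key_value_heads=2)
```

With `num_key_value_heads == len(heads)` the pooling is the identity (MHA); with `num_key_value_heads == 1` every head collapses into one (MQA), matching the three cases the docstring lists.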
python | ray-project__ray | rllib/examples/envs/classes/multi_agent/double_row_corridor_env.py | {
"start": 194,
"end": 4930
} | class ____(MultiAgentEnv):
"""A MultiAgentEnv with a single, global observation space for all agents.
There are two agents in this grid-world-style environment, `agent_0` and `agent_1`.
The grid has two rows and multiple columns, and each agent must separately
reach its individual goal position to receive a final reward of +10:
+---------------+
|0 |
| 1|
+---------------+
Legend:
0 = agent_0 + goal state for agent_1
1 = agent_1 + goal state for agent_0
You can change the length of the grid by providing the "length" key in the
`config` dict passed to the env's constructor.
The action space for both agents is Discrete(4), which encodes to moving up, down,
left, or right in the grid.
If the two agents collide, meaning they end up in the exact same cell after both
taking their actions at any timestep, an additional reward of +5 is given to both
agents. Thus, optimal policies first seek out the respective other agent and only
then proceed to their own goal position.
Each agent in the env has an observation space of a 2-tuple containing its own
x/y-position, where x is the row index, being either 0 (1st row) or 1 (2nd row),
and y is the column index (starting from 0).
"""
def __init__(self, config=None):
super().__init__()
config = config or {}
self.length = config.get("length", 15)
self.terminateds = {}
self.collided = False
# Provide information about agents and possible agents.
self.agents = self.possible_agents = ["agent_0", "agent_1"]
self.terminateds = {}
# Observations: x/y, where the first number is the row index, the second number
# is the column index, for both agents.
# For example: [0.0, 2.0] means the agent is in row 0 and column 2.
self._obs_spaces = gym.spaces.Box(
0.0, self.length - 1, shape=(2,), dtype=np.int32
)
self._act_spaces = gym.spaces.Discrete(4)
@override(MultiAgentEnv)
def reset(self, *, seed=None, options=None):
self.agent_pos = {
"agent_0": [0, 0],
"agent_1": [1, self.length - 1],
}
self.goals = {
"agent_0": [0, self.length - 1],
"agent_1": [1, 0],
}
self.terminateds = {agent_id: False for agent_id in self.agent_pos}
self.collided = False
return self._get_obs(), {}
@override(MultiAgentEnv)
def step(self, action_dict):
rewards = {
agent_id: -0.1
for agent_id in self.agent_pos
if not self.terminateds[agent_id]
}
for agent_id, action in action_dict.items():
row, col = self.agent_pos[agent_id]
# up
if action == 0 and row > 0:
row -= 1
# down
elif action == 1 and row < 1:
row += 1
# left
elif action == 2 and col > 0:
col -= 1
# right
elif action == 3 and col < self.length - 1:
col += 1
# Update positions.
self.agent_pos[agent_id] = [row, col]
obs = self._get_obs()
# Check for collision (only if both agents are still active).
if (
not any(self.terminateds.values())
and self.agent_pos["agent_0"] == self.agent_pos["agent_1"]
):
if not self.collided:
rewards["agent_0"] += 5
rewards["agent_1"] += 5
self.collided = True
# Check goals.
for agent_id in self.agent_pos:
if (
self.agent_pos[agent_id] == self.goals[agent_id]
and not self.terminateds[agent_id]
):
rewards[agent_id] += 10
self.terminateds[agent_id] = True
terminateds = {
agent_id: self.terminateds[agent_id] for agent_id in self.agent_pos
}
terminateds["__all__"] = all(self.terminateds.values())
return obs, rewards, terminateds, {}, {}
@override(MultiAgentEnv)
def get_observation_space(self, agent_id: AgentID) -> gym.Space:
return self._obs_spaces
@override(MultiAgentEnv)
def get_action_space(self, agent_id: AgentID) -> gym.Space:
return self._act_spaces
def _get_obs(self):
obs = {}
pos = self.agent_pos
for agent_id in self.agent_pos:
if self.terminateds[agent_id]:
continue
obs[agent_id] = np.array(pos[agent_id], dtype=np.int32)
return obs
| DoubleRowCorridorEnv |
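The `step` method above clamps moves at the grid borders. A stripped-down sketch of just that movement rule on the 2-row-by-`length` geometry, using the same action encoding (0=up, 1=down, 2=left, 3=right); the `move` helper is an illustration, not part of the env:

```python
def move(row, col, action, length):
    """Apply one action on a 2-row grid, clamping at the borders."""
    if action == 0 and row > 0:             # up
        row -= 1
    elif action == 1 and row < 1:           # down
        row += 1
    elif action == 2 and col > 0:           # left
        col -= 1
    elif action == 3 and col < length - 1:  # right
        col += 1
    return row, col

# agent_0 starts at (0, 0); moving up at the border is a no-op.
assert move(0, 0, 0, length=15) == (0, 0)
assert move(0, 0, 3, length=15) == (0, 1)
```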
python | readthedocs__readthedocs.org | readthedocs/proxito/constants.py | {
"start": 46,
"end": 274
} | class ____(Enum):
http_to_https = auto()
to_canonical_domain = auto()
subproject_to_main_domain = auto()
# Application defined redirect.
system = auto()
# User defined redirect.
user = auto()
| RedirectType |
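`auto()` assigns incrementing integer values in declaration order, so only the member names matter here. A minimal sketch of that behavior (a toy enum, not the readthedocs code):

```python
from enum import Enum, auto

class RedirectKind(Enum):
    http_to_https = auto()
    system = auto()
    user = auto()

# Values are assigned 1, 2, 3 in declaration order.
assert RedirectKind.http_to_https.value == 1
assert RedirectKind.user.value == 3
# Members can be looked up by name; each is a singleton.
assert RedirectKind["system"] is RedirectKind.system
```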
python | PrefectHQ__prefect | tests/deployment/test_steps.py | {
"start": 8892,
"end": 24268
} | class ____:
@pytest.mark.usefixtures("clean_asserting_events_client")
async def test_run_steps_emits_pull_step_events(
self, monkeypatch: pytest.MonkeyPatch
):
from prefect.events.clients import AssertingEventsClient
flow_run_id = str(uuid.uuid4())
monkeypatch.setenv("PREFECT__FLOW_RUN_ID", flow_run_id)
# Monkeypatch the client class so all instances are AssertingEventsClient
monkeypatch.setattr(
"prefect.events.clients.PrefectEventsClient",
AssertingEventsClient,
)
def fake_step(*, script: str, **kwargs: Any) -> dict[str, Any]:
return {"result": script, **kwargs}
monkeypatch.setattr(
"prefect.deployments.steps.run_shell_script",
fake_step,
)
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "first",
"id": "step-one",
"requires": "prefect>=3.0.0",
"extra": "value",
}
},
{
"prefect.deployments.steps.run_shell_script": {
"script": "second",
}
},
]
output = await run_steps(steps, {})
assert output["result"] == "second"
# Should emit one event per step
assert AssertingEventsClient.last
events = [
e
for client in AssertingEventsClient.all
if hasattr(client, "events")
for e in client.events
if f"prefect.flow-run.{flow_run_id}" in str(e.resource)
]
assert len(events) == 2
# Check first step event
first_event = events[0]
assert first_event.event == "prefect.flow-run.pull-step.executed"
assert dict(first_event.resource) == {
"prefect.resource.id": f"prefect.flow-run.{flow_run_id}",
}
first_payload = first_event.payload
assert first_payload["index"] == 0
assert first_payload["step_name"] == "run_shell_script"
assert first_payload["id"] == "step-one"
# inputs includes reserved keywords like 'requires' and 'id'
assert first_payload["inputs"] == {
"script": "first",
"id": "step-one",
"requires": "prefect>=3.0.0",
"extra": "value",
}
# Check second step event
second_event = events[1]
assert second_event.event == "prefect.flow-run.pull-step.executed"
second_payload = second_event.payload
assert second_payload["index"] == 1
assert second_payload["step_name"] == "run_shell_script"
async def test_run_steps_skips_event_without_flow_run_id(
self, monkeypatch: pytest.MonkeyPatch
):
from prefect.events.clients import AssertingEventsClient
monkeypatch.delenv("PREFECT__FLOW_RUN_ID", raising=False)
mock_events_client = AssertingEventsClient()
get_events_client_called = False
def mock_get_events_client(**kwargs):
nonlocal get_events_client_called
get_events_client_called = True
return mock_events_client
monkeypatch.setattr(
"prefect.deployments.steps.core.get_events_client",
mock_get_events_client,
raising=False,
)
def fake_step(*, script: str, **kwargs: Any) -> dict[str, Any]:
return {"result": script}
monkeypatch.setattr(
"prefect.deployments.steps.run_shell_script",
fake_step,
)
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "first",
}
}
]
await run_steps(steps, {})
# get_events_client should not be called since there's no flow_run_id
assert not get_events_client_called
@pytest.mark.usefixtures("clean_asserting_events_client")
async def test_run_steps_emits_event_on_failure(
self, monkeypatch: pytest.MonkeyPatch
):
from prefect.events.clients import AssertingEventsClient
flow_run_id = str(uuid.uuid4())
monkeypatch.setenv("PREFECT__FLOW_RUN_ID", flow_run_id)
monkeypatch.setattr(
"prefect.events.clients.PrefectEventsClient",
AssertingEventsClient,
)
def fake_step(*, script: str, **kwargs: Any) -> dict[str, Any]:
if script == "boom":
raise RuntimeError("explode")
return {"result": script}
monkeypatch.setattr(
"prefect.deployments.steps.run_shell_script",
fake_step,
)
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "ok",
"id": "step-one",
}
},
{
"prefect.deployments.steps.run_shell_script": {
"script": "boom",
"id": "step-two",
}
},
]
with pytest.raises(StepExecutionError):
await run_steps(steps, {})
# Should emit 2 events: 1 success for first step, 1 failure for second step
assert AssertingEventsClient.last
events = [
e
for client in AssertingEventsClient.all
if hasattr(client, "events")
for e in client.events
if f"prefect.flow-run.{flow_run_id}" in str(e.resource)
]
assert len(events) == 2
# First event should be success
first_event = events[0]
assert first_event.event == "prefect.flow-run.pull-step.executed"
first_payload = first_event.payload
assert first_payload["id"] == "step-one"
assert first_payload["index"] == 0
# Second event should be failure
second_event = events[1]
assert second_event.event == "prefect.flow-run.pull-step.failed"
second_payload = second_event.payload
assert second_payload["id"] == "step-two"
assert second_payload["index"] == 1
assert second_payload["step_name"] == "run_shell_script"
assert second_payload["inputs"]["script"] == "boom"
@pytest.mark.usefixtures("clean_asserting_events_client")
async def test_run_steps_does_not_expose_secrets_in_event(
self, monkeypatch: pytest.MonkeyPatch
):
from prefect.events.clients import AssertingEventsClient
flow_run_id = str(uuid.uuid4())
monkeypatch.setenv("PREFECT__FLOW_RUN_ID", flow_run_id)
monkeypatch.setenv("SECRET_ENV_VAR", "super-secret-value")
monkeypatch.setattr(
"prefect.events.clients.PrefectEventsClient",
AssertingEventsClient,
)
await Secret(value="my-secret-api-key").save(name="api-key")
async with get_client() as client:
await client._client.post(
"/variables/", json={"name": "db_password", "value": "secret-password"}
)
def fake_step(
*, script: str, api_key: str, password: str, env_secret: str
) -> dict[str, str]:
assert api_key == "my-secret-api-key"
assert password == "secret-password"
assert env_secret == "super-secret-value"
return {"result": "success"}
monkeypatch.setattr(
"prefect.deployments.steps.run_shell_script",
fake_step,
)
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "echo 'test'",
"api_key": "{{ prefect.blocks.secret.api-key }}",
"password": "{{ prefect.variables.db_password }}",
"env_secret": "{{ $SECRET_ENV_VAR }}",
"id": "step-with-secrets",
}
}
]
output = await run_steps(steps, {})
assert output["result"] == "success"
assert AssertingEventsClient.last
events = [
e
for client in AssertingEventsClient.all
if hasattr(client, "events")
for e in client.events
if f"prefect.flow-run.{flow_run_id}" in str(e.resource)
]
assert len(events) == 1
event = events[0]
assert event.event == "prefect.flow-run.pull-step.executed"
# Payload is the step itself, not a list of steps
payload = event.payload
assert payload["index"] == 0
assert payload["id"] == "step-with-secrets"
step_inputs = payload["inputs"]
assert step_inputs["api_key"] == "{{ prefect.blocks.secret.api-key }}"
assert step_inputs["password"] == "{{ prefect.variables.db_password }}"
assert step_inputs["env_secret"] == "{{ $SECRET_ENV_VAR }}"
assert "my-secret-api-key" not in str(payload)
assert "secret-password" not in str(payload)
assert "super-secret-value" not in str(payload)
@pytest.mark.usefixtures("clean_asserting_events_client")
async def test_run_steps_includes_deployment_as_related_resource(
self, monkeypatch: pytest.MonkeyPatch
):
from prefect.events.clients import AssertingEventsClient
flow_run_id = str(uuid.uuid4())
deployment_id = str(uuid.uuid4())
monkeypatch.setenv("PREFECT__FLOW_RUN_ID", flow_run_id)
monkeypatch.setattr(
"prefect.events.clients.PrefectEventsClient",
AssertingEventsClient,
)
def fake_step(*, script: str) -> dict[str, str]:
return {"result": "success"}
monkeypatch.setattr(
"prefect.deployments.steps.run_shell_script",
fake_step,
)
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "echo 'test'",
"id": "test-step",
}
}
]
mock_deployment = Mock()
mock_deployment.id = deployment_id
output = await run_steps(steps, {}, deployment=mock_deployment)
assert output["result"] == "success"
assert AssertingEventsClient.last
events = [
e
for client in AssertingEventsClient.all
if hasattr(client, "events")
for e in client.events
if f"prefect.flow-run.{flow_run_id}" in str(e.resource)
]
assert len(events) == 1
event = events[0]
assert event.event == "prefect.flow-run.pull-step.executed"
related = event.related
assert related is not None
assert len(related) == 1
assert dict(related[0]) == {
"prefect.resource.id": f"prefect.deployment.{deployment_id}",
"prefect.resource.role": "deployment",
}
async def test_run_steps_runs_multiple_steps(self):
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "echo 'this is a test'",
"id": "why_not_to_panic",
}
},
{
"prefect.deployments.steps.run_shell_script": {
"script": r"echo Don\'t Panic: {{ why_not_to_panic.stdout }}"
}
},
]
step_outputs = await run_steps(steps, {})
assert step_outputs == {
"why_not_to_panic": {
"stdout": "this is a test",
"stderr": "",
},
"stdout": "Don't Panic: this is a test",
"stderr": "",
}
async def test_run_steps_handles_error_gracefully(self, variables):
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "echo 'this is a test'",
"id": "why_not_to_panic",
}
},
{
"nonexistent.module": {
"__fn": lambda x: x * 2,
"value": "{{ step_output_1 }}",
}
},
]
with pytest.raises(StepExecutionError, match="nonexistent.module"):
await run_steps(steps, {})
async def test_run_steps_prints_step_names(
self,
):
mock_print = MagicMock()
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "echo 'this is a test'",
"id": "why_not_to_panic",
}
},
{
"prefect.deployments.steps.run_shell_script": {
"script": (
'bash -c echo "Don\'t Panic: {{ why_not_to_panic.stdout }}"'
)
}
},
]
await run_steps(steps, {}, print_function=mock_print)
mock_print.assert_any_call(" > Running run_shell_script step...")
async def test_run_steps_prints_deprecation_warnings(self, monkeypatch):
def func(*args, **kwargs):
warnings.warn("this is a warning", DeprecationWarning)
return {}
monkeypatch.setattr("prefect.deployments.steps.run_shell_script", func)
mock_print = MagicMock()
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "echo 'this is a test'",
"id": "why_not_to_panic",
}
},
]
await run_steps(steps, {}, print_function=mock_print)
mock_print.assert_any_call("this is a warning", style="yellow")
async def test_run_steps_prints_prefect_deprecation_warnings(self, monkeypatch):
def func(*args, **kwargs):
warnings.warn("this is a warning", PrefectDeprecationWarning)
return {}
monkeypatch.setattr("prefect.deployments.steps.run_shell_script", func)
mock_print = MagicMock()
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "echo 'this is a test'",
"id": "why_not_to_panic",
}
},
]
await run_steps(steps, {}, print_function=mock_print)
mock_print.assert_any_call("this is a warning", style="yellow")
async def test_run_steps_can_print_warnings_without_style(self, monkeypatch):
def func(*args, **kwargs):
warnings.warn("this is a warning", PrefectDeprecationWarning)
return {}
monkeypatch.setattr("prefect.deployments.steps.run_shell_script", func)
# raise an exception when style is passed. exception type is irrelevant
mock_print = MagicMock(
side_effect=lambda *args, **kwargs: 1 / 0 if kwargs.get("style") else None
)
steps = [
{
"prefect.deployments.steps.run_shell_script": {
"script": "echo 'this is a test'",
"id": "why_not_to_panic",
}
},
]
await run_steps(steps, {}, print_function=mock_print)
mock_print.assert_any_call("this is a warning")
| TestRunSteps |
python | huggingface__transformers | src/transformers/models/mgp_str/modeling_mgp_str.py | {
"start": 10519,
"end": 11759
} | class ____(nn.Module):
def __init__(self, config: MgpstrConfig):
super().__init__()
self.token_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.tokenLearner = nn.Sequential(
nn.Conv2d(config.hidden_size, config.hidden_size, kernel_size=(1, 1), stride=1, groups=8, bias=False),
nn.Conv2d(config.hidden_size, config.max_token_length, kernel_size=(1, 1), stride=1, bias=False),
)
self.feat = nn.Conv2d(
config.hidden_size, config.hidden_size, kernel_size=(1, 1), stride=1, groups=8, bias=False
)
self.norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
def forward(self, hidden_states):
hidden_states = self.token_norm(hidden_states)
hidden_states = hidden_states.transpose(1, 2).unsqueeze(-1)
selected = self.tokenLearner(hidden_states)
selected = selected.flatten(2)
attentions = F.softmax(selected, dim=-1)
feat = self.feat(hidden_states)
feat = feat.flatten(2).transpose(1, 2)
feat = torch.einsum("...si,...id->...sd", attentions, feat)
a3_out = self.norm(feat)
return (a3_out, attentions)
@auto_docstring
| MgpstrA3Module |
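The `forward` above contracts the attention maps with the flattened features via `torch.einsum("...si,...id->...sd", attentions, feat)`, which is just a batched matmul mixing the S spatial positions into `max_token_length` learned tokens. A NumPy sketch of the shape bookkeeping (the sizes are illustrative, not the model's):

```python
import numpy as np

# Illustrative sizes: batch=2, T=5 learned tokens, S=7 spatial positions, D=4.
rng = np.random.default_rng(0)
attentions = rng.random((2, 5, 7))
attentions /= attentions.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax
feat = rng.random((2, 7, 4))

# Same contraction as the module's torch.einsum call:
mixed = np.einsum("...si,...id->...sd", attentions, feat)
assert mixed.shape == (2, 5, 4)               # (batch, max_token_length, hidden)
assert np.allclose(mixed, attentions @ feat)  # it is a batched matrix multiply
```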
python | getsentry__sentry | src/sentry/apidocs/parameters.py | {
"start": 16814,
"end": 17080
} | class ____:
SENTRY_APP_ID_OR_SLUG = OpenApiParameter(
name="sentry_app_id_or_slug",
location="path",
required=True,
many=False,
type=str,
description="The ID or slug of the custom integration.",
)
| SentryAppParams |
python | run-llama__llama_index | llama-index-packs/llama-index-packs-nebulagraph-query-engine/llama_index/packs/nebulagraph_query_engine/base.py | {
"start": 613,
"end": 910
} | class ____(str, Enum):
"""NebulaGraph query engine type."""
KG_KEYWORD = "keyword"
KG_HYBRID = "hybrid"
RAW_VECTOR = "vector"
RAW_VECTOR_KG_COMBO = "vector_kg"
KG_QE = "KnowledgeGraphQueryEngine"
KG_RAG_RETRIEVER = "KnowledgeGraphRAGRetriever"
| NebulaGraphQueryEngineType |
python | huggingface__transformers | src/transformers/models/bert_japanese/tokenization_bert_japanese.py | {
"start": 22920,
"end": 24278
} | class ____:
"""Runs Character tokenization."""
def __init__(self, vocab, unk_token, normalize_text=True):
"""
Constructs a CharacterTokenizer.
Args:
**vocab**:
Vocabulary object.
**unk_token**: str
A special symbol for out-of-vocabulary token.
**normalize_text**: (`optional`) boolean (default True)
Whether to apply unicode normalization to text before tokenization.
"""
self.vocab = vocab
self.unk_token = unk_token
self.normalize_text = normalize_text
def tokenize(self, text):
"""
Tokenizes a piece of text into characters.
For example, `input = "apple"` will return as output `["a", "p", "p", "l", "e"]`.
Args:
text: A single token or whitespace separated tokens.
This should have already been passed through *BasicTokenizer*.
Returns:
A list of characters.
"""
if self.normalize_text:
text = unicodedata.normalize("NFKC", text)
output_tokens = []
for char in text:
if char not in self.vocab:
output_tokens.append(self.unk_token)
continue
output_tokens.append(char)
return output_tokens
| CharacterTokenizer |
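The class above is small enough to sketch end-to-end. A self-contained toy version with a hypothetical vocabulary (the real tokenizer receives its vocab from the BERT-japanese vocabulary files):

```python
import unicodedata

def char_tokenize(text, vocab, unk_token="[UNK]", normalize_text=True):
    """Split text into characters, mapping out-of-vocab chars to unk_token."""
    if normalize_text:
        text = unicodedata.normalize("NFKC", text)
    return [char if char in vocab else unk_token for char in text]

vocab = {"a", "p", "l", "e"}
assert char_tokenize("apple", vocab) == ["a", "p", "p", "l", "e"]
# NFKC folds the full-width "a" (U+FF41) to ASCII "a" before the vocab lookup,
# which is why the class normalizes before tokenizing.
assert char_tokenize("\uff41pple", vocab) == ["a", "p", "p", "l", "e"]
```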
python | faif__python-patterns | patterns/other/graph_search.py | {
"start": 54,
"end": 4905
} | class ____:
"""Graph search emulation in python, from source
http://www.python.org/doc/essays/graphs/
dfs stands for Depth First Search
bfs stands for Breadth First Search"""
def __init__(self, graph: Dict[str, List[str]]) -> None:
self.graph = graph
def find_path_dfs(
self, start: str, end: str, path: Optional[List[str]] = None
) -> Optional[List[str]]:
path = path or []
path.append(start)
if start == end:
return path
for node in self.graph.get(start, []):
if node not in path:
newpath = self.find_path_dfs(node, end, path[:])
if newpath:
return newpath
def find_all_paths_dfs(
self, start: str, end: str, path: Optional[List[str]] = None
) -> List[Union[List[str], Any]]:
path = path or []
path.append(start)
if start == end:
return [path]
paths = []
for node in self.graph.get(start, []):
if node not in path:
newpaths = self.find_all_paths_dfs(node, end, path[:])
paths.extend(newpaths)
return paths
def find_shortest_path_dfs(
self, start: str, end: str, path: Optional[List[str]] = None
) -> Optional[List[str]]:
path = path or []
path.append(start)
if start == end:
return path
shortest = None
for node in self.graph.get(start, []):
if node not in path:
newpath = self.find_shortest_path_dfs(node, end, path[:])
if newpath:
if not shortest or len(newpath) < len(shortest):
shortest = newpath
return shortest
def find_shortest_path_bfs(self, start: str, end: str) -> Optional[List[str]]:
"""
Finds the shortest path between two nodes in a graph using breadth-first search.
:param start: The node to start from.
:type start: str or int
:param end: The node to find the shortest path to.
:type end: str or int
:returns: A list of nodes representing the shortest path from `start`
to `end` (fewest hops), or None if no such path exists.
"""
queue = [start]
dist_to = {start: 0}
edge_to = {}
if start == end:
return queue
while len(queue):
value = queue.pop(0)
for node in self.graph[value]:
if node not in dist_to.keys():
edge_to[node] = value
dist_to[node] = dist_to[value] + 1
queue.append(node)
if end in edge_to.keys():
path = []
node = end
while dist_to[node] != 0:
path.insert(0, node)
node = edge_to[node]
path.insert(0, start)
return path
def main():
"""
# example of graph usage
>>> graph = {
... 'A': ['B', 'C'],
... 'B': ['C', 'D'],
... 'C': ['D', 'G'],
... 'D': ['C'],
... 'E': ['F'],
... 'F': ['C'],
... 'G': ['E'],
... 'H': ['C']
... }
# initialization of new graph search object
>>> graph_search = GraphSearch(graph)
>>> print(graph_search.find_path_dfs('A', 'D'))
['A', 'B', 'C', 'D']
# start the search somewhere in the middle
>>> print(graph_search.find_path_dfs('G', 'F'))
['G', 'E', 'F']
# unreachable node
>>> print(graph_search.find_path_dfs('C', 'H'))
None
# non existing node
>>> print(graph_search.find_path_dfs('C', 'X'))
None
>>> print(graph_search.find_all_paths_dfs('A', 'D'))
[['A', 'B', 'C', 'D'], ['A', 'B', 'D'], ['A', 'C', 'D']]
>>> print(graph_search.find_shortest_path_dfs('A', 'D'))
['A', 'B', 'D']
>>> print(graph_search.find_shortest_path_dfs('A', 'F'))
['A', 'C', 'G', 'E', 'F']
>>> print(graph_search.find_shortest_path_bfs('A', 'D'))
['A', 'B', 'D']
>>> print(graph_search.find_shortest_path_bfs('A', 'F'))
['A', 'C', 'G', 'E', 'F']
# start the search somewhere in the middle
>>> print(graph_search.find_shortest_path_bfs('G', 'F'))
['G', 'E', 'F']
# unreachable node
>>> print(graph_search.find_shortest_path_bfs('A', 'H'))
None
# non existing node
>>> print(graph_search.find_shortest_path_bfs('A', 'X'))
None
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
| GraphSearch |
python | huggingface__transformers | tests/models/siglip2/test_image_processing_siglip2.py | {
"start": 1137,
"end": 3564
} | class ____:
def __init__(
self,
parent,
batch_size=7,
num_channels=3,
image_size=18,
min_resolution=30,
max_resolution=400,
do_resize=True,
size=None,
do_rescale=True,
rescale_factor=1 / 255,
do_normalize=True,
image_mean=[0.5, 0.5, 0.5],
image_std=[0.5, 0.5, 0.5],
resample=None,
patch_size=16,
max_num_patches=256,
):
size = size if size is not None else {"height": 18, "width": 18}
resample = resample if resample is not None else Image.Resampling.BILINEAR
self.parent = parent
self.batch_size = batch_size
self.num_channels = num_channels
self.image_size = image_size
self.min_resolution = min_resolution
self.max_resolution = max_resolution
self.do_resize = do_resize
self.size = size
self.do_rescale = do_rescale
self.rescale_factor = rescale_factor
self.do_normalize = do_normalize
self.image_mean = image_mean
self.image_std = image_std
self.resample = resample
self.patch_size = patch_size
self.max_num_patches = max_num_patches
def prepare_image_processor_dict(self):
return {
"do_resize": self.do_resize,
"do_rescale": self.do_rescale,
"rescale_factor": self.rescale_factor,
"do_normalize": self.do_normalize,
"image_mean": self.image_mean,
"image_std": self.image_std,
"resample": self.resample,
"patch_size": self.patch_size,
"max_num_patches": self.max_num_patches,
}
def expected_output_image_shape(self, images):
return self.max_num_patches, self.patch_size * self.patch_size * self.num_channels
def prepare_image_inputs(self, equal_resolution=False, numpify=False, torchify=False):
return prepare_image_inputs(
batch_size=self.batch_size,
num_channels=self.num_channels,
min_resolution=self.min_resolution,
max_resolution=self.max_resolution,
equal_resolution=equal_resolution,
numpify=numpify,
torchify=torchify,
)
@require_torch
@require_vision
# Copied from tests.models.clip.test_image_processing_clip.CLIPImageProcessingTest with CLIP->Siglip2
| Siglip2ImageProcessingTester |
python | getsentry__sentry | tests/acceptance/test_organization_uptime.py | {
"start": 225,
"end": 5089
} | class ____(AcceptanceTestCase):
def setUp(self) -> None:
super().setUp()
self.uptime_path = f"/organizations/{self.organization.slug}/insights/uptime/"
self.team = self.create_team(organization=self.organization, name="Uptime Team")
self.project = self.create_project(
organization=self.organization, teams=[self.team], name="Uptime Test Project"
)
self.create_team_membership(self.team, user=self.user)
self.login_as(self.user)
@with_feature("organizations:uptime")
def test_create_uptime_monitor_flow(self) -> None:
"""
Test complete flow:
-> empty overview
-> create monitor
-> fill form
-> see on details page
-> return to overview
"""
# Step 1: Start from empty uptime overview page
self.browser.get(self.uptime_path)
self.browser.wait_until_not('[data-test-id="loading-indicator"]')
# Verify we're on the empty state
self.browser.wait_until(xpath="//*[text()='The selected projects have no uptime monitors']")
# Step 2: Click "Add Uptime Monitor" button in empty state
self.browser.click_when_visible("a[aria-label='Add Uptime Monitor']")
# Should navigate to uptime alert creation form
self.browser.wait_until('[name="name"]')
# Step 3: Fill out the uptime monitor form
name_input = self.browser.find_element_by_name("name")
name_input.send_keys("My Test Uptime Monitor")
url_input = self.browser.find_element_by_name("url")
url_input.send_keys("https://example.com")
self.browser.click_when_visible(xpath='//label[@aria-label="Environment"]')
self.browser.element(
xpath='//label[@aria-label="Environment"]/following-sibling::div//input'
).send_keys("production", Keys.ENTER)
# Step 4: Submit the form using the manual approach from debug test
# Find the submit button in the form
self.browser.element("button[aria-label='Create Rule']").click()
# Step 5: Should navigate to uptime monitor details page
# Wait for page to load and check URL change
self.browser.wait_until_not('[data-test-id="loading-indicator"]', timeout=10)
self.browser.wait_until(xpath="//h1[contains(text(), 'My Test Uptime Monitor')]")
self.browser.element_exists(xpath="//*[contains(text(), 'https://example.com')]")
# Step 6: Navigate back to uptime overview
self.browser.get(self.uptime_path)
# Step 7: Verify the monitor is now shown in the overview list
self.browser.wait_until_not('[data-test-id="loading-indicator"]')
self.browser.wait_until(xpath="//*[contains(text(), 'My Test Uptime Monitor')]")
@with_feature("organizations:uptime")
def test_edit_uptime_monitor(self) -> None:
"""Test editing an existing uptime monitor"""
uptime_subscription = self.create_uptime_subscription(
url="https://sentry.io",
timeout_ms=5000,
)
self.create_uptime_detector(
name="My Awesome Monitor",
project=self.project,
uptime_subscription=uptime_subscription,
)
# Navigate to uptime overview
self.browser.get(self.uptime_path)
self.browser.wait_until_not('[data-test-id="loading-indicator"]')
# Verify the monitor is visible in the list
self.browser.wait_until(xpath="//h3[contains(text(), 'My Awesome Monitor')]")
# Click on the monitor to edit it
self.browser.click_when_visible(xpath="//a//h3[contains(text(), 'My Awesome Monitor')]")
# Should navigate to monitor details page
self.browser.wait_until_not('[data-test-id="loading-indicator"]')
self.browser.wait_until(xpath="//h1[contains(text(), 'My Awesome Monitor')]")
# Click edit button
self.browser.click_when_visible("a[aria-label='Edit Rule']")
# Should show edit form
self.browser.wait_until('[name="name"]')
# Verify the form fields are populated with existing values
name_input = self.browser.find_element_by_name("name")
assert name_input.get_attribute("value") == "My Awesome Monitor"
url_input = self.browser.find_element_by_name("url")
assert url_input.get_attribute("value") == "https://sentry.io"
# Update the name
name_input.clear()
name_input.send_keys("Updated Monitor Name")
self.browser.element("button[aria-label='Save Rule']").click()
# After form submission, wait for success and verify the updated name
self.browser.wait_until_not('[data-test-id="loading-indicator"]')
self.browser.wait_until(xpath="//h1[contains(text(), 'Updated Monitor Name')]")
| OrganizationUptimeTest |
python | ray-project__ray | python/ray/dashboard/modules/tests/test_agent.py | {
"start": 520,
"end": 1423
} | class ____(dashboard_utils.DashboardAgentModule):
def __init__(self, dashboard_agent):
super().__init__(dashboard_agent)
@staticmethod
def is_minimal_module():
return False
@routes.get("/test/http_get_from_agent")
async def get_url(self, req) -> aiohttp.web.Response:
url = req.query.get("url")
result = await test_utils.http_get(self._dashboard_agent.http_session, url)
return aiohttp.web.json_response(result)
@routes.head("/test/route_head")
async def route_head(self, req) -> aiohttp.web.Response:
pass
@routes.post("/test/route_post")
async def route_post(self, req) -> aiohttp.web.Response:
pass
@routes.patch("/test/route_patch")
async def route_patch(self, req) -> aiohttp.web.Response:
pass
async def run(self, server):
pass
if __name__ == "__main__":
pass
| TestAgent |
python | crytic__slither | slither/vyper_parsing/variables/event_variable.py | {
"start": 184,
"end": 879
} | class ____:
def __init__(self, variable: EventVariable, variable_data: AnnAssign):
self._variable = variable
self._variable.name = variable_data.target.id
if (
isinstance(variable_data.annotation, Call)
and variable_data.annotation.func.id == "indexed"
):
self._variable.indexed = True
else:
self._variable.indexed = False
self._elem_to_parse = variable_data.annotation
@property
def underlying_variable(self) -> EventVariable:
return self._variable
def analyze(self, contract) -> None:
self._variable.type = parse_type(self._elem_to_parse, contract)
| EventVariableVyper |
python | pytorch__pytorch | torch/ao/quantization/observer.py | {
"start": 64757,
"end": 65286
} | class ____(Granularity):
"""
Represents per-axis granularity in quantization.
This granularity type calculates different quantization parameters
along a specified axis of the tensor.
For example if the input tensor is shape [8, 16] and axis=0, then
the quantization parameters are calculated for each row of the tensor.
Giving a total of 8 quantization parameters.
Attributes:
axis (int): The axis along which reduction is performed.
"""
axis: int
@dataclass(frozen=True)
| PerAxis |
python | langchain-ai__langchain | libs/langchain_v1/tests/unit_tests/agents/middleware/core/test_wrap_model_call.py | {
"start": 46420,
"end": 48490
} | class ____:
"""Test edge cases and error conditions."""
def test_middleware_modifies_request(self) -> None:
"""Test middleware that modifies the request before execution."""
modified_messages = []
class RequestModifyingMiddleware(AgentMiddleware):
def wrap_model_call(self, request, handler):
# Add a system message to the request
modified_request = request
modified_messages.append(len(modified_request.messages))
return handler(modified_request)
model = GenericFakeChatModel(messages=iter([AIMessage(content="Response")]))
agent = create_agent(model=model, middleware=[RequestModifyingMiddleware()])
agent.invoke({"messages": [HumanMessage("Test")]})
assert len(modified_messages) == 1
def test_multiple_yields_retry_different_models(self) -> None:
"""Test middleware that tries multiple different models."""
attempts = []
class MultiModelRetryMiddleware(AgentMiddleware):
def wrap_model_call(self, request, handler):
attempts.append("first-attempt")
try:
return handler(request)
except Exception:
attempts.append("retry-attempt")
return handler(request)
call_count = {"value": 0}
class FailFirstSucceedSecond(GenericFakeChatModel):
def _generate(self, messages, **kwargs):
call_count["value"] += 1
if call_count["value"] == 1:
raise ValueError("First fails")
return super()._generate(messages, **kwargs)
model = FailFirstSucceedSecond(messages=iter([AIMessage(content="Success")]))
agent = create_agent(model=model, middleware=[MultiModelRetryMiddleware()])
result = agent.invoke({"messages": [HumanMessage("Test")]})
assert attempts == ["first-attempt", "retry-attempt"]
assert result["messages"][1].content == "Success"
| TestEdgeCases |
python | kamyu104__LeetCode-Solutions | Python/shortest-distance-after-road-addition-queries-i.py | {
"start": 39,
"end": 888
} | class ____(object):
def shortestDistanceAfterQueries(self, n, queries):
"""
:type n: int
:type queries: List[List[int]]
:rtype: List[int]
"""
def bfs(u, v):
adj[u].append(v)
q = [u]
while q:
new_q = []
for u in q:
for v in adj[u]:
if dist[u]+1 >= dist[v]:
continue
dist[v] = dist[u]+1
new_q.append(v)
q = new_q
return dist[-1]
adj = [[] for _ in xrange(n)]
for u in xrange(n-1):
adj[u].append(u+1)
dist = range(n)
return [bfs(u, v) for u, v in queries]
# Time: O(n^2 * logn)
# Space: O(n^2)
import heapq
# dijkstra's algorithm
| Solution |
python | pennersr__django-allauth | allauth/socialaccount/providers/box/provider.py | {
"start": 265,
"end": 640
} | class ____(OAuth2Provider):
id = "box"
name = "Box"
account_class = BoxOAuth2Account
oauth2_adapter_class = BoxOAuth2Adapter
def extract_uid(self, data):
return data["id"]
def extract_common_fields(self, data):
return dict(name=data.get("display_name"), email=data.get("email"))
provider_classes = [BoxOAuth2Provider]
| BoxOAuth2Provider |
python | pypa__setuptools | setuptools/tests/test_editable_install.py | {
"start": 8873,
"end": 14238
} | class ____:
def test_namespace_package_importable(self, venv, tmp_path, editable_opts):
"""
Installing two packages sharing the same namespace, one installed
normally using pip and the other installed in editable mode
should allow importing both packages.
"""
pkg_A = namespaces.build_pep420_namespace_package(tmp_path, 'myns.n.pkgA')
pkg_B = namespaces.build_pep420_namespace_package(tmp_path, 'myns.n.pkgB')
# use pip to install to the target directory
opts = editable_opts[:]
opts.append("--no-build-isolation") # force current version of setuptools
venv.run(["python", "-m", "pip", "install", str(pkg_A), *opts])
venv.run(["python", "-m", "pip", "install", "-e", str(pkg_B), *opts])
venv.run(["python", "-c", "import myns.n.pkgA; import myns.n.pkgB"])
def test_namespace_created_via_package_dir(self, venv, tmp_path, editable_opts):
"""Currently users can create a namespace by tweaking `package_dir`"""
files = {
"pkgA": {
"pyproject.toml": dedent(
"""\
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "pkgA"
version = "3.14159"
[tool.setuptools]
package-dir = {"myns.n.pkgA" = "src"}
"""
),
"src": {"__init__.py": "a = 1"},
},
}
jaraco.path.build(files, prefix=tmp_path)
pkg_A = tmp_path / "pkgA"
pkg_B = namespaces.build_pep420_namespace_package(tmp_path, 'myns.n.pkgB')
pkg_C = namespaces.build_pep420_namespace_package(tmp_path, 'myns.n.pkgC')
# use pip to install to the target directory
opts = editable_opts[:]
opts.append("--no-build-isolation") # force current version of setuptools
venv.run(["python", "-m", "pip", "install", str(pkg_A), *opts])
venv.run(["python", "-m", "pip", "install", "-e", str(pkg_B), *opts])
venv.run(["python", "-m", "pip", "install", "-e", str(pkg_C), *opts])
venv.run(["python", "-c", "from myns.n import pkgA, pkgB, pkgC"])
def test_namespace_accidental_config_in_lenient_mode(self, venv, tmp_path):
"""Sometimes users might specify an ``include`` pattern that ignores parent
packages. In a normal installation this would ignore all modules inside the
parent packages, and make them namespaces (reported in issue #3504),
so the editable mode should preserve this behaviour.
"""
files = {
"pkgA": {
"pyproject.toml": dedent(
"""\
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "pkgA"
version = "3.14159"
[tool.setuptools]
packages.find.include = ["mypkg.*"]
"""
),
"mypkg": {
"__init__.py": "",
"other.py": "b = 1",
"n": {
"__init__.py": "",
"pkgA.py": "a = 1",
},
},
"MANIFEST.in": EXAMPLE["MANIFEST.in"],
},
}
jaraco.path.build(files, prefix=tmp_path)
pkg_A = tmp_path / "pkgA"
# use pip to install to the target directory
opts = ["--no-build-isolation"] # force current version of setuptools
venv.run(["python", "-m", "pip", "-v", "install", "-e", str(pkg_A), *opts])
out = venv.run(["python", "-c", "from mypkg.n import pkgA; print(pkgA.a)"])
assert out.strip() == "1"
cmd = """\
try:
import mypkg.other
except ImportError:
print("mypkg.other not defined")
"""
out = venv.run(["python", "-c", dedent(cmd)])
assert "mypkg.other not defined" in out
def test_editable_with_prefix(tmp_path, sample_project, editable_opts):
"""
Editable install to a prefix should be discoverable.
"""
prefix = tmp_path / 'prefix'
# figure out where pip will likely install the package
site_packages_all = [
prefix / Path(path).relative_to(sys.prefix)
for path in sys.path
if 'site-packages' in path and path.startswith(sys.prefix)
]
for sp in site_packages_all:
sp.mkdir(parents=True)
# install workaround
_addsitedirs(site_packages_all)
env = dict(os.environ, PYTHONPATH=os.pathsep.join(map(str, site_packages_all)))
cmd = [
sys.executable,
'-m',
'pip',
'install',
'--editable',
str(sample_project),
'--prefix',
str(prefix),
'--no-build-isolation',
*editable_opts,
]
subprocess.check_call(cmd, env=env)
# now run 'sample' with the prefix on the PYTHONPATH
bin = 'Scripts' if platform.system() == 'Windows' else 'bin'
exe = prefix / bin / 'sample'
subprocess.check_call([exe], env=env)
| TestPep420Namespaces |
python | jazzband__django-simple-history | runtests.py | {
"start": 739,
"end": 5363
} | class ____:
def __contains__(self, item):
return True
def __getitem__(self, item):
return None
DATABASE_NAME_TO_DATABASE_SETTINGS = {
"sqlite3": {
"default": {
"ENGINE": "django.db.backends.sqlite3",
},
"other": {"ENGINE": "django.db.backends.sqlite3"},
},
"postgres": {
"default": {
"ENGINE": "django.db.backends.postgresql",
"NAME": "test",
"USER": "postgres",
"PASSWORD": "postgres",
"HOST": "127.0.0.1",
"PORT": 5432,
},
"other": {
"ENGINE": "django.db.backends.postgresql",
"NAME": "other",
"USER": "postgres",
"PASSWORD": "postgres",
"HOST": "127.0.0.1",
"PORT": 5432,
},
},
"mysql": {
"default": {
"ENGINE": "django.db.backends.mysql",
"NAME": "test",
"USER": "root",
"PASSWORD": "mysql",
"HOST": "127.0.0.1",
"PORT": 3306,
},
"other": {
"ENGINE": "django.db.backends.mysql",
"NAME": "other",
"USER": "root",
"PASSWORD": "mysql",
"HOST": "127.0.0.1",
"PORT": 3306,
},
},
"mariadb": {
"default": {
"ENGINE": "django.db.backends.mysql",
"NAME": "test",
"USER": "root",
"PASSWORD": "mariadb",
"HOST": "127.0.0.1",
"PORT": 3307,
},
"other": {
"ENGINE": "django.db.backends.mysql",
"NAME": "other",
"USER": "root",
"PASSWORD": "mariadb",
"HOST": "127.0.0.1",
"PORT": 3307,
},
},
}
DEFAULT_DATABASE_NAME = "sqlite3"
DEFAULT_SETTINGS = dict( # nosec
SECRET_KEY="not a secret",
ALLOWED_HOSTS=["localhost"],
AUTH_USER_MODEL="custom_user.CustomUser",
ROOT_URLCONF="simple_history.tests.urls",
MEDIA_ROOT=media_root,
STATIC_URL="/static/",
INSTALLED_APPS=installed_apps,
LOGGING={
"version": 1,
"disable_existing_loggers": True,
"handlers": {
"console": {
"class": "logging.StreamHandler",
},
},
"root": {
"handlers": ["console"],
"level": "INFO",
},
},
MIGRATION_MODULES=DisableMigrations(),
TEMPLATES=[
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
]
},
}
],
STORAGES={
"default": {
# Speeds up tests and prevents locally storing files created through them
"BACKEND": "django.core.files.storage.InMemoryStorage",
},
},
DEFAULT_AUTO_FIELD="django.db.models.AutoField",
USE_TZ=False,
)
MIDDLEWARE = [
"django.contrib.sessions.middleware.SessionMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
]
DEFAULT_SETTINGS["MIDDLEWARE"] = MIDDLEWARE
def get_default_settings(*, database_name=DEFAULT_DATABASE_NAME):
return {
**DEFAULT_SETTINGS,
"DATABASES": DATABASE_NAME_TO_DATABASE_SETTINGS[database_name],
}
def main():
parser = ArgumentParser(description="Run package tests.")
parser.add_argument(
"--database", action="store", nargs="?", default=DEFAULT_DATABASE_NAME
)
parser.add_argument("--failfast", action="store_true")
parser.add_argument("--pdb", action="store_true")
parser.add_argument("--tag", action="append", nargs="?")
namespace = parser.parse_args()
if not settings.configured:
default_settings = get_default_settings(database_name=namespace.database)
settings.configure(**default_settings)
django.setup()
tags = namespace.tag
failures = DiscoverRunner(
failfast=bool(namespace.failfast), pdb=bool(namespace.pdb), tags=tags
).run_tests(["simple_history.tests"])
failures |= DiscoverRunner(
failfast=bool(namespace.failfast), pdb=bool(namespace.pdb), tags=tags
).run_tests(["simple_history.registry_tests"])
sys.exit(failures)
if __name__ == "__main__":
main()
| DisableMigrations |
python | readthedocs__readthedocs.org | readthedocs/organizations/views/base.py | {
"start": 3048,
"end": 4269
} | class ____(OrganizationMixin):
"""
Add team query and instance methods for team related views.
This extends the :py:cls:`OrganizationMixin` to provide both teams and
organizations to the team views. Team forms are passed in the organization
determined from the organization url kwarg.
"""
def get_team_queryset(self):
"""
Return teams visible to request user.
This will either be team the user is a member of, or teams where the
user is an owner of the organization.
"""
return (
Team.objects.member(self.request.user)
.filter(
organization=self.get_organization(),
)
.order_by("name")
)
@lru_cache(maxsize=1)
def get_team(self):
"""Return team determined by url kwarg."""
return get_object_or_404(
self.get_team_queryset(),
slug=self.kwargs["team"],
)
def get_form(self, data=None, files=None, **kwargs):
"""Pass in organization to form class instance."""
kwargs["organization"] = self.get_organization()
return self.form_class(data, files, **kwargs)
# Base views
| OrganizationTeamMixin |
python | dask__dask | dask/bag/tests/test_bag.py | {
"start": 43216,
"end": 54625
} | class ____(int):
def __eq__(self, other):
assert isinstance(other, StrictReal)
return self.real == other.real
def __ne__(self, other):
assert isinstance(other, StrictReal)
return self.real != other.real
def test_reduction_with_non_comparable_objects():
b = db.from_sequence([StrictReal(x) for x in range(10)], partition_size=2)
assert_eq(b.fold(max, max), StrictReal(9))
@pytest.mark.parametrize("container", ["array", "matrix"])
def test_reduction_with_sparse_matrices(container):
sp = pytest.importorskip("scipy.sparse")
if container == "array" and not hasattr(sp, "csr_array"):
pytest.skip("scipy<1.11 has no sparray")
cls = sp.csr_matrix if container == "matrix" else sp.csr_array
b = db.from_sequence([cls([0]) for x in range(4)], partition_size=2)
def sp_reduce(a, b):
return sp.vstack([a, b])
assert b.fold(sp_reduce, sp_reduce).compute(scheduler="sync").shape == (4, 1)
def test_empty():
assert list(db.from_sequence([])) == []
def test_bag_picklable():
from pickle import dumps, loads
b = db.from_sequence(range(100))
b2 = loads(dumps(b))
assert b.compute() == b2.compute()
s = b.sum()
s2 = loads(dumps(s))
assert s.compute() == s2.compute()
def test_msgpack_unicode():
b = db.from_sequence([{"a": 1}]).groupby("a")
result = b.compute(scheduler="sync")
assert dict(result) == {1: [{"a": 1}]}
def test_bag_with_single_callable():
f = lambda: None
b = db.from_sequence([f])
assert_eq(b, [f])
def test_optimize_fuse_keys():
x = db.range(10, npartitions=2)
y = x.map(inc)
z = y.map(inc)
dsk = z.__dask_optimize__(z.dask, z.__dask_keys__())
assert not y.dask.keys() & dsk.keys()
dsk = z.__dask_optimize__(z.dask, z.__dask_keys__(), fuse_keys=y.__dask_keys__())
assert all(k in dsk for k in y.__dask_keys__())
def test_reductions_are_lazy():
current = [None]
def part():
for i in range(10):
current[0] = i
yield i
def func(part):
assert current[0] == 0
return sum(part)
b = Bag({("foo", 0): part()}, "foo", 1)
res = b.reduction(func, sum)
assert_eq(res, sum(range(10)))
def test_repeated_groupby():
b = db.range(10, npartitions=4)
c = b.groupby(lambda x: x % 3)
assert valmap(len, dict(c)) == valmap(len, dict(c))
def test_temporary_directory(tmpdir):
b = db.range(10, npartitions=4)
# We use a pool to avoid a race condition between the pool close
# cleaning up files, and the assert below.
with ProcessPoolExecutor(4) as pool:
with dask.config.set(temporary_directory=str(tmpdir), pool=pool):
b2 = b.groupby(lambda x: x % 2)
b2.compute()
assert any(fn.endswith(".partd") for fn in os.listdir(str(tmpdir)))
def test_empty_bag():
b = db.from_sequence([])
assert_eq(b.map(inc).all(), True)
assert_eq(b.map(inc).any(), False)
assert_eq(b.map(inc).sum(), False)
assert_eq(b.map(inc).count(), False)
def test_bag_paths():
b = db.from_sequence(["abc", "123", "xyz"], npartitions=2)
paths = b.to_textfiles("foo*")
assert paths[0].endswith("foo0")
assert paths[1].endswith("foo1")
os.remove("foo0")
os.remove("foo1")
def test_map_partitions_arg():
def append_str(partition, s):
return [x + s for x in partition]
mybag = db.from_sequence(["a", "b", "c"])
assert_eq(mybag.map_partitions(append_str, "foo"), ["afoo", "bfoo", "cfoo"])
assert_eq(
mybag.map_partitions(append_str, dask.delayed("foo")), ["afoo", "bfoo", "cfoo"]
)
def test_map_keynames():
b = db.from_sequence([1, 2, 3])
d = dict(b.map(inc).__dask_graph__())
assert "inc" in map(dask.utils.key_split, d)
assert set(b.map(inc).__dask_graph__()) != set(
b.map_partitions(inc).__dask_graph__()
)
def test_map_releases_element_references_as_soon_as_possible():
# Ensure that Bag.map doesn't keep *element* references longer than
# necessary. Previous map implementations used ``yield``, which would keep
# a reference to the yielded element until the yielded method resumed (this
# is just how generator functions work in CPython).
#
# See https://github.com/dask/dask/issues/5189
#
# We test 2 variant of potential extra references here:
# 1. Within an element of a partition:
# At the time of the second `f_create` for each element, the `C` from
# the first `f_create` should be dropped.
# 2. Within a partition:
# When the second item within a partition is processed, `C` from the
# first item should already be dropped.
class C:
def __init__(self, i):
self.i = i
# keep a weakref to all existing instances of `C`
in_memory = weakref.WeakSet()
def f_create(i):
# check that there are no instances of `C` left
assert len(in_memory) == 0
# create new instance
o = C(i)
in_memory.add(o)
return o
def f_drop(o):
# o reference dropped on return, should collect
return o.i + 100
b = (
db.from_sequence(range(2), npartitions=1)
.map(f_create)
.map(f_drop)
.map(f_create)
.map(f_drop)
.sum()
)
try:
# Disable gc to ensure refcycles don't matter here
gc.disable()
b.compute(scheduler="sync")
finally:
gc.enable()
def test_bagged_array_delayed():
pytest.importorskip("numpy")
da = pytest.importorskip("dask.array")
obj = da.ones(10, chunks=5).to_delayed()[0]
bag = db.from_delayed(obj)
b = bag.compute()
assert_eq(b, [1.0, 1.0, 1.0, 1.0, 1.0])
def test_dask_layers():
a = db.from_sequence([1, 2], npartitions=2)
assert a.__dask_layers__() == (a.name,)
assert a.dask.layers.keys() == {a.name}
assert a.dask.dependencies == {a.name: set()}
i = a.min()
assert i.__dask_layers__() == (i.key,)
assert i.dask.layers.keys() == {a.name, i.key}
assert i.dask.dependencies == {a.name: set(), i.key: {a.name}}
@pytest.mark.parametrize("optimize", [False, True])
def test_dask_layers_to_delayed(optimize):
# `da.Array.to_delayed` causes the layer name to not match the key.
# Ensure the layer name is propagated between `Delayed` and `Item`.
pytest.importorskip("numpy")
da = pytest.importorskip("dask.array")
i = db.Item.from_delayed(da.ones(1).to_delayed()[0])
name = i.key[0]
assert i.key[1:] == (0,)
assert i.dask.layers.keys() == {"delayed-" + name}
assert i.dask.dependencies == {"delayed-" + name: set()}
assert i.__dask_layers__() == ("delayed-" + name,)
arr = da.ones(1) + 1
delayed = arr.to_delayed(optimize_graph=optimize)[0]
i = db.Item.from_delayed(delayed)
assert i.key == delayed.key
assert i.dask is delayed.dask
assert i.__dask_layers__() == delayed.__dask_layers__()
back = i.to_delayed(optimize_graph=optimize)
assert back.__dask_layers__() == i.__dask_layers__()
if not optimize:
assert back.dask is arr.dask
# When not optimized, the key is not a layer in the graph, so using it should fail
with pytest.raises(ValueError, match="not in"):
db.Item(back.dask, back.key)
with pytest.raises(ValueError, match="not in"):
db.Item(arr.dask, (arr.name,), layer="foo")
def test_to_dataframe_optimize_graph():
pytest.importorskip("pandas")
pytest.importorskip("dask.dataframe")
from dask.dataframe.utils import assert_eq as assert_eq_df
x = db.from_sequence(
[{"name": "test1", "v1": 1}, {"name": "test2", "v1": 2}], npartitions=2
)
# linear `map` tasks will be fused by graph optimization
with dask.annotate(foo=True):
y = x.map(lambda a: dict(**a, v2=a["v1"] + 1))
y = y.map(lambda a: dict(**a, v3=a["v2"] + 1))
y = y.map(lambda a: dict(**a, v4=a["v3"] + 1))
# verifying the maps are not fused yet
assert len(y.dask) == y.npartitions * 4
# with optimizations
d = y.to_dataframe()
# no optimizations
d2 = y.to_dataframe(optimize_graph=False)
assert_eq_df(d, d2)
@pytest.mark.parametrize("nworkers", [100, 250, 500, 1000])
def test_default_partitioning_worker_saturation(nworkers):
# Ensure that Dask Bag can saturate any number of workers with concurrent tasks.
# The default partitioning scheme partitions items to keep the task to item ratio sensible
# but it should always be possible to saturate any number of workers given enough items in the bag.
ntasks = 0
nitems = 1
while ntasks < nworkers:
ntasks = len(db.from_sequence(range(nitems)).dask)
nitems += math.floor(max(1, nworkers / 10))
assert nitems < 20_000
@pytest.mark.parametrize("nworkers", [100, 250, 500, 1000])
def test_npartitions_saturation(nworkers):
# If npartitions is set the bag should always contain at least that number of tasks
for nitems in range(nworkers, 10 * nworkers, max(1, math.floor(nworkers / 10))):
assert (
len(db.from_sequence(range(nitems), npartitions=nworkers).dask) >= nworkers
)
def test_map_total_mem_usage():
"""https://github.com/dask/dask/issues/10338"""
b = db.from_sequence(range(1, 100), npartitions=3)
total_mem_b = sum(b.map_partitions(total_mem_usage).compute())
c = b.map(lambda x: x)
total_mem_c = sum(c.map_partitions(total_mem_usage).compute())
assert total_mem_b == total_mem_c
def test_reify_empty_iterator():
seq = iter([])
result = reify(seq)
# It should return the same empty iterator (or equivalent)
assert list(result) == []
def test_reify_iterator_of_iterators():
seq = iter([iter([1, 2]), iter([3, 4])])
result = reify(seq)
# Each nested iterator should be materialized into a list
assert result == [[1, 2], [3, 4]]
def test_empty_safe_apply_with_fake_sparse():
class FakeSparse:
def __init__(self, nnz):
self.nnz = nnz
def f(x):
return "called"
assert empty_safe_apply(f, FakeSparse(0), is_last=False) is no_result
assert empty_safe_apply(f, FakeSparse(5), is_last=False) == "called"
def test_empty_safe_apply_numpy():
np = pytest.importorskip("numpy")
def f(x):
return "ok"
assert empty_safe_apply(f, np.array([]), is_last=False) is no_result
assert empty_safe_apply(f, np.array([1, 2]), is_last=False) == "ok"
@pytest.mark.parametrize("sparse_type", ["csr", "csc", "coo"])
def test_empty_safe_apply_with_scipy_sparse(sparse_type):
"""Check behavior with real SciPy sparse matrices."""
np = pytest.importorskip("numpy")
sp = pytest.importorskip("scipy.sparse")
def f(x):
return "ok"
sparse_constructor = {
"csr": sp.csr_matrix,
"csc": sp.csc_matrix,
"coo": sp.coo_matrix,
}[sparse_type]
# Empty sparse matrix (no stored elements)
empty_mat = sparse_constructor((0, 0))
assert empty_safe_apply(f, empty_mat, is_last=False) is no_result
# Non-empty sparse matrix
data = np.array([1, 2, 3])
row = np.array([0, 1, 2])
col = np.array([0, 1, 2])
non_empty = sp.coo_matrix((data, (row, col)), shape=(3, 3)).asformat(sparse_type)
assert empty_safe_apply(f, non_empty, is_last=False) == "ok"
| StrictReal |
python | numba__numba | numba/cuda/cudadecl.py | {
"start": 16175,
"end": 16418
} | class ____(AttributeTemplate):
key = dim3
def resolve_x(self, mod):
return types.int32
def resolve_y(self, mod):
return types.int32
def resolve_z(self, mod):
return types.int32
@register_attr
| Dim3_attrs |
python | skorch-dev__skorch | skorch/tests/test_utils.py | {
"start": 275,
"end": 4525
} | class ____:
@pytest.fixture
def to_tensor(self):
from skorch.utils import to_tensor
return to_tensor
@pytest.mark.skipif(not torch.cuda.is_available(), reason="no cuda device")
def test_device_setting_cuda(self, to_tensor):
x = np.ones((2, 3, 4))
t = to_tensor(x, device='cpu')
assert t.device.type == 'cpu'
t = to_tensor(x, device='cuda')
assert t.device.type.startswith('cuda')
t = to_tensor(t, device='cuda')
assert t.device.type.startswith('cuda')
t = to_tensor(t, device='cpu')
assert t.device.type == 'cpu'
def tensors_equal(self, x, y):
""""Test that tensors in diverse containers are equal."""
if isinstance(x, PackedSequence):
return self.tensors_equal(x[0], y[0]) and self.tensors_equal(x[1], y[1])
if isinstance(x, dict):
return (
(x.keys() == y.keys()) and
all(self.tensors_equal(x[k], y[k]) for k in x)
)
if isinstance(x, (list, tuple)):
return all(self.tensors_equal(xi, yi) for xi, yi in zip(x, y))
if x.is_sparse is not y.is_sparse:
return False
if x.is_sparse:
x, y = x.to_dense(), y.to_dense()
return (x == y).all()
# pylint: disable=no-method-argument
def parameters():
"""Yields data, expected value, and device for tensor conversion
test.
Stops earlier when no cuda device is available.
"""
device = 'cpu'
x = torch.zeros((5, 3)).float()
y = torch.as_tensor([2, 2, 1])
z = np.arange(15).reshape(5, 3)
for X, expected in [
(x, x),
(y, y),
([x, y], [x, y]),
((x, y), (x, y)),
(z, torch.as_tensor(z)),
(
{'a': x, 'b': y, 'c': z},
{'a': x, 'b': y, 'c': torch.as_tensor(z)}
),
(torch.as_tensor(55), torch.as_tensor(55)),
(pack_padded_sequence(x, y), pack_padded_sequence(x, y)),
]:
yield X, expected, device
if not torch.cuda.is_available():
return
device = 'cuda'
x = x.to('cuda')
y = y.to('cuda')
for X, expected in [
(x, x),
(y, y),
([x, y], [x, y]),
((x, y), (x, y)),
(z, torch.as_tensor(z).to('cuda')),
(
{'a': x, 'b': y, 'c': z},
{'a': x, 'b': y, 'c': torch.as_tensor(z).to('cuda')}
),
(torch.as_tensor(55), torch.as_tensor(55).to('cuda')),
(
pack_padded_sequence(x, y.to('cpu')),
pack_padded_sequence(x, y.to('cpu')).to('cuda')
),
]:
yield X, expected, device
@pytest.mark.parametrize('X, expected, device', parameters())
def test_tensor_conversion_cuda(self, to_tensor, X, expected, device):
result = to_tensor(X, device)
assert self.tensors_equal(result, expected)
assert self.tensors_equal(expected, result)
@pytest.mark.parametrize('device', ['cpu', 'cuda'])
def test_sparse_tensor(self, to_tensor, device):
if device == 'cuda' and not torch.cuda.is_available():
pytest.skip()
inp = sparse.csr_matrix(np.zeros((5, 3)).astype(np.float32))
expected = torch.sparse_coo_tensor(size=(5, 3)).to(device)
result = to_tensor(inp, device=device, accept_sparse=True)
assert self.tensors_equal(result, expected)
@pytest.mark.parametrize('device', ['cpu', 'cuda'])
def test_sparse_tensor_not_accepted_raises(self, to_tensor, device):
if device == 'cuda' and not torch.cuda.is_available():
pytest.skip()
inp = sparse.csr_matrix(np.zeros((5, 3)).astype(np.float32))
with pytest.raises(TypeError) as exc:
to_tensor(inp, device=device)
msg = ("Sparse matrices are not supported. Set "
"accept_sparse=True to allow sparse matrices.")
assert exc.value.args[0] == msg
| TestToTensor |
python | sqlalchemy__sqlalchemy | test/orm/test_eager_relations.py | {
"start": 191036,
"end": 193351
} | class ____(
fixtures.DeclarativeMappedTest, testing.AssertsCompiledSQL
):
__dialect__ = "default"
@classmethod
def setup_classes(cls):
Base = cls.DeclarativeBasic
class PersistentObject(Base):
__tablename__ = "persistent"
id = Column(
Integer, primary_key=True, test_needs_autoincrement=True
)
class Movie(PersistentObject):
__tablename__ = "movie"
id = Column(Integer, ForeignKey("persistent.id"), primary_key=True)
director_id = Column(Integer, ForeignKey("director.id"))
title = Column(String(50))
class Director(PersistentObject):
__tablename__ = "director"
id = Column(Integer, ForeignKey("persistent.id"), primary_key=True)
movies = relationship("Movie", foreign_keys=Movie.director_id)
name = Column(String(50))
def test_from_subclass(self):
Director = self.classes.Director
s = fixture_session()
self.assert_compile(
s.query(Director).options(joinedload("*")),
"SELECT director.id AS director_id, "
"persistent.id AS persistent_id, "
"director.name AS director_name, movie_1.id AS movie_1_id, "
"persistent_1.id AS persistent_1_id, "
"movie_1.director_id AS movie_1_director_id, "
"movie_1.title AS movie_1_title "
"FROM persistent JOIN director ON persistent.id = director.id "
"LEFT OUTER JOIN "
"(persistent AS persistent_1 JOIN movie AS movie_1 "
"ON persistent_1.id = movie_1.id) "
"ON director.id = movie_1.director_id",
)
def test_integrate(self):
Director = self.classes.Director
Movie = self.classes.Movie
session = Session(testing.db)
rscott = Director(name="Ridley Scott")
alien = Movie(title="Alien")
brunner = Movie(title="Blade Runner")
rscott.movies.append(brunner)
rscott.movies.append(alien)
session.add_all([rscott, alien, brunner])
session.commit()
close_all_sessions()
self.d = session.query(Director).options(joinedload("*")).first()
assert len(list(session)) == 3
| CyclicalInheritingEagerTestTwo |
python | astropy__astropy | astropy/cosmology/_src/tests/io/test_ecsv.py | {
"start": 7844,
"end": 8260
} | class ____(ReadWriteDirectTestBase, ReadWriteECSVTestMixin):
"""
Directly test ``read/write_ecsv``.
These are not public API and are discouraged from use, in favor of
``Cosmology.read/write(..., format="ascii.ecsv")``, but should be
tested regardless b/c they are used internally.
"""
def setup_class(self):
self.functions = {"read": read_ecsv, "write": write_ecsv}
| TestReadWriteECSV |
python | django__django | tests/select_related_regress/models.py | {
"start": 2151,
"end": 2208
} | class ____(Parent):
value = models.IntegerField()
| Child |
python | huggingface__transformers | src/transformers/models/sam3_tracker_video/modular_sam3_tracker_video.py | {
"start": 3819,
"end": 17844
} | class ____(PreTrainedConfig):
r"""
[`Sam3TrackerVideoConfig`] is the configuration class to store the configuration of a [`Sam3TrackerVideoModel`]. It is used to instantiate a
SAM3 tracker video model according to the specified arguments, defining the memory attention, memory encoder, and image encoder
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the SAM 3
[facebook/sam3](https://huggingface.co/facebook/sam3) architecture.
Configuration objects inherit from [`PreTrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PreTrainedConfig`] for more information.
Args:
vision_config (Union[`dict`, `Sam3TrackerVideoVisionConfig`], *optional*):
Dictionary of configuration options used to initialize [`Sam3TrackerVideoVisionConfig`].
prompt_encoder_config (Union[`dict`, `Sam3TrackerVideoPromptEncoderConfig`], *optional*):
Dictionary of configuration options used to initialize [`Sam3TrackerVideoPromptEncoderConfig`].
mask_decoder_config (Union[`dict`, `Sam3TrackerVideoMaskDecoderConfig`], *optional*):
Dictionary of configuration options used to initialize [`Sam3TrackerVideoMaskDecoderConfig`].
initializer_range (`float`, *optional*, defaults to 0.02):
Standard deviation for parameter initialization.
num_maskmem (`int`, *optional*, defaults to 7):
The number of memory slots for the mask memory.
image_size (`int`, *optional*, defaults to 1008):
The size of the input images.
sigmoid_scale_for_mem_enc (`float`, *optional*, defaults to 20.0):
Scale factor for the sigmoid function in the memory encoder.
sigmoid_bias_for_mem_enc (`float`, *optional*, defaults to -10.0):
Bias for the sigmoid function in the memory encoder.
enable_occlusion_spatial_embedding (`bool`, *optional*, defaults to `True`):
Whether to enable spatial embedding for occlusions.
multimask_output_in_sam (`bool`, *optional*, defaults to `True`):
Whether to output multiple masks from the SAM head.
multimask_min_pt_num (`int`, *optional*, defaults to 0):
The minimum number of points to trigger multimask output.
multimask_max_pt_num (`int`, *optional*, defaults to 1):
The maximum number of points to trigger multimask output.
multimask_output_for_tracking (`bool`, *optional*, defaults to `True`):
Whether to use multimask output for tracking.
max_object_pointers_in_encoder (`int`, *optional*, defaults to 16):
The maximum number of object pointers in the encoder.
max_cond_frame_num (`int`, *optional*, defaults to 4):
Maximum number of conditioning frames to use in memory attention.
enable_temporal_pos_encoding_for_object_pointers (`bool`, *optional*, defaults to `True`):
Whether to enable temporal positional encoding for object pointers.
memory_attention_hidden_size (`int`, *optional*, defaults to 256):
Dimensionality of the memory attention hidden states.
memory_attention_num_layers (`int`, *optional*, defaults to 4):
The number of layers in the memory attention module.
memory_attention_num_attention_heads (`int`, *optional*, defaults to 1):
Number of attention heads for each attention layer in the memory attention.
memory_attention_downsample_rate (`int`, *optional*, defaults to 1):
The downsample rate for the attention layers.
memory_attention_feed_forward_hidden_size (`int`, *optional*, defaults to 2048):
The dimension of the feedforward network in the memory attention module.
memory_attention_feed_forward_hidden_act (`str`, *optional*, defaults to `"relu"`):
The non-linear activation function in the feedforward network in the memory attention module.
memory_attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout rate for the memory attention module.
memory_attention_rope_theta (`float`, *optional*, defaults to 10000):
The Rope theta parameter.
memory_attention_rope_feat_sizes (`list[int]`, *optional*, defaults to `[72, 72]`):
The feature sizes for the Rope positional encoding.
memory_attention_rope_dropout (`float`, *optional*, defaults to 0.1):
The dropout rate for the Rope positional encoding.
memory_encoder_hidden_size (`int`, *optional*, defaults to 256):
Dimensionality of the memory encoder hidden states.
memory_encoder_output_channels (`int`, *optional*, defaults to 64):
The number of output channels for the memory encoder.
mask_downsampler_embed_dim (`int`, *optional*, defaults to 256):
The dimension of the mask downsampler embedding.
mask_downsampler_kernel_size (`int`, *optional*, defaults to 3):
The kernel size for the mask downsampler.
mask_downsampler_stride (`int`, *optional*, defaults to 2):
The stride for the mask downsampler.
mask_downsampler_padding (`int`, *optional*, defaults to 1):
The padding for the mask downsampler.
mask_downsampler_total_stride (`int`, *optional*, defaults to 16):
The total stride for the mask downsampler.
mask_downsampler_hidden_act (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function in the mask downsampler.
memory_fuser_num_layers (`int`, *optional*, defaults to 2):
The number of layers in the memory fuser.
memory_fuser_embed_dim (`int`, *optional*, defaults to 256):
The dimension of the embedding layer in the memory fuser.
memory_fuser_intermediate_dim (`int`, *optional*, defaults to 1024):
The dimension of the intermediate layer in the memory fuser.
memory_fuser_kernel_size (`int`, *optional*, defaults to 7):
The kernel size for the memory fuser.
memory_fuser_padding (`int`, *optional*, defaults to 3):
The padding for the memory fuser.
memory_fuser_layer_scale_init_value (`float`, *optional*, defaults to 1e-06):
The initial value for the layer scale in the memory fuser.
memory_fuser_hidden_act (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function in the memory fuser.
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import (
... Sam3VisionConfig,
... Sam3TrackerVideoPromptEncoderConfig,
... Sam3TrackerVideoMaskDecoderConfig,
... Sam3TrackerVideoModel,
... )
>>> # Initializing a Sam3TrackerVideoConfig with `"facebook/sam3"` style configuration
>>> configuration = Sam3TrackerVideoConfig()
>>> # Initializing a Sam3TrackerVideoModel (with random weights) from the `"facebook/sam3"` style configuration
>>> model = Sam3TrackerVideoModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a Sam3TrackerVideoConfig from a Sam3TrackerVideoVisionConfig, Sam3TrackerVideoPromptEncoderConfig, and Sam3TrackerVideoMaskDecoderConfig
>>> # Initializing SAM3 tracker video vision encoder, memory attention, and memory encoder configurations
>>> vision_config = Sam3TrackerVideoVisionConfig()
>>> prompt_encoder_config = Sam3TrackerVideoPromptEncoderConfig()
>>> mask_decoder_config = Sam3TrackerVideoMaskDecoderConfig()
>>> config = Sam3TrackerVideoConfig(vision_config, prompt_encoder_config, mask_decoder_config)
```"""
model_type = "sam3_tracker_video"
sub_configs = {
"vision_config": AutoConfig,
"prompt_encoder_config": Sam3TrackerVideoPromptEncoderConfig,
"mask_decoder_config": Sam3TrackerVideoMaskDecoderConfig,
}
def __init__(
self,
vision_config=None,
prompt_encoder_config=None,
mask_decoder_config=None,
initializer_range=0.02,
num_maskmem=7,
image_size=1008,
sigmoid_scale_for_mem_enc=20.0,
sigmoid_bias_for_mem_enc=-10.0,
enable_occlusion_spatial_embedding=True,
multimask_output_in_sam=True,
multimask_min_pt_num=0,
multimask_max_pt_num=1,
multimask_output_for_tracking=True,
max_object_pointers_in_encoder=16,
max_cond_frame_num=4,
enable_temporal_pos_encoding_for_object_pointers=True,
# memory attention
memory_attention_hidden_size=256,
memory_attention_num_layers=4,
memory_attention_num_attention_heads=1,
memory_attention_downsample_rate=1,
memory_attention_feed_forward_hidden_size=2048,
memory_attention_feed_forward_hidden_act="relu",
memory_attention_dropout=0.1,
memory_attention_rope_theta=10000,
memory_attention_rope_feat_sizes=None,
memory_attention_rope_dropout=0.1,
# memory encoder
memory_encoder_hidden_size=256,
memory_encoder_output_channels=64,
mask_downsampler_embed_dim=256,
mask_downsampler_kernel_size=3,
mask_downsampler_stride=2,
mask_downsampler_padding=1,
mask_downsampler_total_stride=16,
mask_downsampler_hidden_act="gelu",
memory_fuser_num_layers=2,
memory_fuser_embed_dim=256,
memory_fuser_intermediate_dim=1024,
memory_fuser_kernel_size=7,
memory_fuser_padding=3,
memory_fuser_layer_scale_init_value=1e-6,
memory_fuser_hidden_act="gelu",
**kwargs,
):
vision_config = (
vision_config
if vision_config is not None
else {"backbone_feature_sizes": [[288, 288], [144, 144], [72, 72]]}
)
prompt_encoder_config = prompt_encoder_config if prompt_encoder_config is not None else {}
mask_decoder_config = mask_decoder_config if mask_decoder_config is not None else {}
memory_attention_rope_feat_sizes = (
[72, 72] if memory_attention_rope_feat_sizes is None else memory_attention_rope_feat_sizes
)
if isinstance(vision_config, dict):
vision_config["model_type"] = vision_config.get("model_type", "sam3_vision_model")
vision_config = CONFIG_MAPPING[vision_config["model_type"]](**vision_config)
if isinstance(prompt_encoder_config, Sam3TrackerVideoPromptEncoderConfig):
prompt_encoder_config = prompt_encoder_config.to_dict()
if isinstance(mask_decoder_config, Sam3TrackerVideoMaskDecoderConfig):
mask_decoder_config = mask_decoder_config.to_dict()
self.vision_config = vision_config
self.prompt_encoder_config = Sam3TrackerVideoPromptEncoderConfig(**prompt_encoder_config)
self.mask_decoder_config = Sam3TrackerVideoMaskDecoderConfig(**mask_decoder_config)
self.initializer_range = initializer_range
self.num_maskmem = num_maskmem # default 1 input frame + 6 previous frames
self.image_size = image_size
self.sigmoid_scale_for_mem_enc = sigmoid_scale_for_mem_enc
self.sigmoid_bias_for_mem_enc = sigmoid_bias_for_mem_enc
self.multimask_output_in_sam = multimask_output_in_sam
self.multimask_min_pt_num = multimask_min_pt_num
self.multimask_max_pt_num = multimask_max_pt_num
self.multimask_output_for_tracking = multimask_output_for_tracking
self.max_object_pointers_in_encoder = max_object_pointers_in_encoder
self.max_cond_frame_num = max_cond_frame_num
# The next 4 are True for sam2.1 and False for sam2
self.enable_occlusion_spatial_embedding = enable_occlusion_spatial_embedding
self.enable_temporal_pos_encoding_for_object_pointers = enable_temporal_pos_encoding_for_object_pointers
# memory attention
self.memory_attention_hidden_size = memory_attention_hidden_size
self.memory_attention_num_layers = memory_attention_num_layers
self.memory_attention_num_attention_heads = memory_attention_num_attention_heads
self.memory_attention_downsample_rate = memory_attention_downsample_rate
self.memory_attention_feed_forward_hidden_size = memory_attention_feed_forward_hidden_size
self.memory_attention_feed_forward_hidden_act = memory_attention_feed_forward_hidden_act
self.memory_attention_dropout = memory_attention_dropout
self.memory_attention_rope_theta = memory_attention_rope_theta
self.memory_attention_rope_feat_sizes = memory_attention_rope_feat_sizes
self.memory_attention_rope_dropout = memory_attention_rope_dropout
# memory encoder
self.memory_encoder_hidden_size = memory_encoder_hidden_size
self.memory_encoder_output_channels = memory_encoder_output_channels
self.mask_downsampler_embed_dim = mask_downsampler_embed_dim
self.mask_downsampler_kernel_size = mask_downsampler_kernel_size
self.mask_downsampler_stride = mask_downsampler_stride
self.mask_downsampler_padding = mask_downsampler_padding
self.mask_downsampler_total_stride = mask_downsampler_total_stride
self.mask_downsampler_hidden_act = mask_downsampler_hidden_act
self.memory_fuser_num_layers = memory_fuser_num_layers
self.memory_fuser_embed_dim = memory_fuser_embed_dim
self.memory_fuser_intermediate_dim = memory_fuser_intermediate_dim
self.memory_fuser_kernel_size = memory_fuser_kernel_size
self.memory_fuser_padding = memory_fuser_padding
self.memory_fuser_layer_scale_init_value = memory_fuser_layer_scale_init_value
self.memory_fuser_hidden_act = memory_fuser_hidden_act
super().__init__(**kwargs)
| Sam3TrackerVideoConfig |
python | microsoft__pyright | packages/pyright-internal/src/tests/samples/protocol1.py | {
"start": 1439,
"end": 1684
} | class ____:
def do(self, x: int | None):
pass
def use_protocol1(a: Abstract1[int]):
a.do(1)
use_protocol1(Concrete1())
# This should generate an error because TypeVars cannot
# be defined in both Protocol and Generic.
| Concrete1 |
python | readthedocs__readthedocs.org | readthedocs/builds/managers.py | {
"start": 4115,
"end": 4583
} | class ____(models.Manager):
def register_match(self, rule, version, max_registers=15):
created = self.create(
rule=rule,
match_arg=rule.get_match_arg(),
action=rule.action,
version_name=version.verbose_name,
version_type=version.type,
)
for match in self.filter(rule__project=rule.project)[max_registers:]:
match.delete()
return created
| AutomationRuleMatchManager |
python | getsentry__sentry | src/sentry/monitors/validators.py | {
"start": 22104,
"end": 22210
} | class ____(serializers.Serializer):
trace_id = serializers.UUIDField(format="hex")
| TraceContextValidator |
python | wandb__wandb | wandb/vendor/pygments/lexers/fortran.py | {
"start": 492,
"end": 8544
} | class ____(RegexLexer):
"""
Lexer for FORTRAN 90 code.
.. versionadded:: 0.10
"""
name = 'Fortran'
aliases = ['fortran']
filenames = ['*.f03', '*.f90', '*.F03', '*.F90']
mimetypes = ['text/x-fortran']
flags = re.IGNORECASE | re.MULTILINE
# Data Types: INTEGER, REAL, COMPLEX, LOGICAL, CHARACTER and DOUBLE PRECISION
# Operators: **, *, +, -, /, <, >, <=, >=, ==, /=
# Logical (?): NOT, AND, OR, EQV, NEQV
# Builtins:
# http://gcc.gnu.org/onlinedocs/gcc-3.4.6/g77/Table-of-Intrinsic-Functions.html
tokens = {
'root': [
(r'^#.*\n', Comment.Preproc),
(r'!.*\n', Comment),
include('strings'),
include('core'),
(r'[a-z][\w$]*', Name),
include('nums'),
(r'[\s]+', Text),
],
'core': [
# Statements
(words((
'ABSTRACT', 'ACCEPT', 'ALL', 'ALLSTOP', 'ALLOCATABLE', 'ALLOCATE',
'ARRAY', 'ASSIGN', 'ASSOCIATE', 'ASYNCHRONOUS', 'BACKSPACE', 'BIND',
'BLOCK', 'BLOCKDATA', 'BYTE', 'CALL', 'CASE', 'CLASS', 'CLOSE',
'CODIMENSION', 'COMMON', 'CONCURRRENT', 'CONTIGUOUS', 'CONTAINS',
'CONTINUE', 'CRITICAL', 'CYCLE', 'DATA', 'DEALLOCATE', 'DECODE',
'DEFERRED', 'DIMENSION', 'DO', 'ELEMENTAL', 'ELSE', 'ENCODE', 'END',
'ENTRY', 'ENUM', 'ENUMERATOR', 'EQUIVALENCE', 'EXIT', 'EXTENDS',
'EXTERNAL', 'EXTRINSIC', 'FILE', 'FINAL', 'FORALL', 'FORMAT',
'FUNCTION', 'GENERIC', 'GOTO', 'IF', 'IMAGES', 'IMPLICIT',
'IMPORT', 'IMPURE', 'INCLUDE', 'INQUIRE', 'INTENT', 'INTERFACE',
'INTRINSIC', 'IS', 'LOCK', 'MEMORY', 'MODULE', 'NAMELIST', 'NULLIFY',
'NONE', 'NON_INTRINSIC', 'NON_OVERRIDABLE', 'NOPASS', 'OPEN', 'OPTIONAL',
'OPTIONS', 'PARAMETER', 'PASS', 'PAUSE', 'POINTER', 'PRINT', 'PRIVATE',
'PROGRAM', 'PROCEDURE', 'PROTECTED', 'PUBLIC', 'PURE', 'READ',
'RECURSIVE', 'RESULT', 'RETURN', 'REWIND', 'SAVE', 'SELECT', 'SEQUENCE',
'STOP', 'SUBMODULE', 'SUBROUTINE', 'SYNC', 'SYNCALL', 'SYNCIMAGES',
'SYNCMEMORY', 'TARGET', 'THEN', 'TYPE', 'UNLOCK', 'USE', 'VALUE',
'VOLATILE', 'WHERE', 'WRITE', 'WHILE'), prefix=r'\b', suffix=r'\s*\b'),
Keyword),
# Data Types
(words((
'CHARACTER', 'COMPLEX', 'DOUBLE PRECISION', 'DOUBLE COMPLEX', 'INTEGER',
'LOGICAL', 'REAL', 'C_INT', 'C_SHORT', 'C_LONG', 'C_LONG_LONG',
'C_SIGNED_CHAR', 'C_SIZE_T', 'C_INT8_T', 'C_INT16_T', 'C_INT32_T',
'C_INT64_T', 'C_INT_LEAST8_T', 'C_INT_LEAST16_T', 'C_INT_LEAST32_T',
'C_INT_LEAST64_T', 'C_INT_FAST8_T', 'C_INT_FAST16_T', 'C_INT_FAST32_T',
'C_INT_FAST64_T', 'C_INTMAX_T', 'C_INTPTR_T', 'C_FLOAT', 'C_DOUBLE',
'C_LONG_DOUBLE', 'C_FLOAT_COMPLEX', 'C_DOUBLE_COMPLEX',
'C_LONG_DOUBLE_COMPLEX', 'C_BOOL', 'C_CHAR', 'C_PTR', 'C_FUNPTR'),
prefix=r'\b', suffix=r'\s*\b'),
Keyword.Type),
# Operators
(r'(\*\*|\*|\+|-|\/|<|>|<=|>=|==|\/=|=)', Operator),
(r'(::)', Keyword.Declaration),
(r'[()\[\],:&%;.]', Punctuation),
# Intrinsics
(words((
'Abort', 'Abs', 'Access', 'AChar', 'ACos', 'ACosH', 'AdjustL',
'AdjustR', 'AImag', 'AInt', 'Alarm', 'All', 'Allocated', 'ALog',
'AMax', 'AMin', 'AMod', 'And', 'ANInt', 'Any', 'ASin', 'ASinH',
'Associated', 'ATan', 'ATanH', 'Atomic_Define', 'Atomic_Ref',
'BesJ', 'BesJN', 'Bessel_J0', 'Bessel_J1', 'Bessel_JN', 'Bessel_Y0',
'Bessel_Y1', 'Bessel_YN', 'BesY', 'BesYN', 'BGE', 'BGT', 'BLE',
'BLT', 'Bit_Size', 'BTest', 'CAbs', 'CCos', 'Ceiling', 'CExp',
'Char', 'ChDir', 'ChMod', 'CLog', 'Cmplx', 'Command_Argument_Count',
'Complex', 'Conjg', 'Cos', 'CosH', 'Count', 'CPU_Time', 'CShift',
'CSin', 'CSqRt', 'CTime', 'C_Loc', 'C_Associated',
'C_Null_Ptr', 'C_Null_Funptr', 'C_F_Pointer', 'C_F_ProcPointer',
'C_Null_Char', 'C_Alert', 'C_Backspace', 'C_Form_Feed', 'C_FunLoc',
'C_Sizeof', 'C_New_Line', 'C_Carriage_Return',
'C_Horizontal_Tab', 'C_Vertical_Tab', 'DAbs', 'DACos', 'DASin',
'DATan', 'Date_and_Time', 'DbesJ', 'DbesJN', 'DbesY',
'DbesYN', 'Dble', 'DCos', 'DCosH', 'DDiM', 'DErF',
'DErFC', 'DExp', 'Digits', 'DiM', 'DInt', 'DLog', 'DMax',
'DMin', 'DMod', 'DNInt', 'Dot_Product', 'DProd', 'DSign', 'DSinH',
'DShiftL', 'DShiftR', 'DSin', 'DSqRt', 'DTanH', 'DTan', 'DTime',
'EOShift', 'Epsilon', 'ErF', 'ErFC', 'ErFC_Scaled', 'ETime',
'Execute_Command_Line', 'Exit', 'Exp', 'Exponent', 'Extends_Type_Of',
'FDate', 'FGet', 'FGetC', 'FindLoc', 'Float', 'Floor', 'Flush',
'FNum', 'FPutC', 'FPut', 'Fraction', 'FSeek', 'FStat', 'FTell',
'Gamma', 'GError', 'GetArg', 'Get_Command', 'Get_Command_Argument',
'Get_Environment_Variable', 'GetCWD', 'GetEnv', 'GetGId', 'GetLog',
'GetPId', 'GetUId', 'GMTime', 'HostNm', 'Huge', 'Hypot', 'IAbs',
'IAChar', 'IAll', 'IAnd', 'IAny', 'IArgC', 'IBClr', 'IBits',
'IBSet', 'IChar', 'IDate', 'IDiM', 'IDInt', 'IDNInt', 'IEOr',
'IErrNo', 'IFix', 'Imag', 'ImagPart', 'Image_Index', 'Index',
'Int', 'IOr', 'IParity', 'IRand', 'IsaTty', 'IShft', 'IShftC',
'ISign', 'Iso_C_Binding', 'Is_Contiguous', 'Is_Iostat_End',
'Is_Iostat_Eor', 'ITime', 'Kill', 'Kind', 'LBound', 'LCoBound',
'Len', 'Len_Trim', 'LGe', 'LGt', 'Link', 'LLe', 'LLt', 'LnBlnk',
'Loc', 'Log', 'Log_Gamma', 'Logical', 'Long', 'LShift', 'LStat',
'LTime', 'MaskL', 'MaskR', 'MatMul', 'Max', 'MaxExponent',
'MaxLoc', 'MaxVal', 'MClock', 'Merge', 'Merge_Bits', 'Move_Alloc',
'Min', 'MinExponent', 'MinLoc', 'MinVal', 'Mod', 'Modulo', 'MvBits',
'Nearest', 'New_Line', 'NInt', 'Norm2', 'Not', 'Null', 'Num_Images',
'Or', 'Pack', 'Parity', 'PError', 'Precision', 'Present', 'Product',
'Radix', 'Rand', 'Random_Number', 'Random_Seed', 'Range', 'Real',
'RealPart', 'Rename', 'Repeat', 'Reshape', 'RRSpacing', 'RShift',
'Same_Type_As', 'Scale', 'Scan', 'Second', 'Selected_Char_Kind',
'Selected_Int_Kind', 'Selected_Real_Kind', 'Set_Exponent', 'Shape',
'ShiftA', 'ShiftL', 'ShiftR', 'Short', 'Sign', 'Signal', 'SinH',
'Sin', 'Sleep', 'Sngl', 'Spacing', 'Spread', 'SqRt', 'SRand',
'Stat', 'Storage_Size', 'Sum', 'SymLnk', 'System', 'System_Clock',
'Tan', 'TanH', 'Time', 'This_Image', 'Tiny', 'TrailZ', 'Transfer',
'Transpose', 'Trim', 'TtyNam', 'UBound', 'UCoBound', 'UMask',
'Unlink', 'Unpack', 'Verify', 'XOr', 'ZAbs', 'ZCos', 'ZExp',
'ZLog', 'ZSin', 'ZSqRt'), prefix=r'\b', suffix=r'\s*\b'),
Name.Builtin),
# Booleans
(r'\.(true|false)\.', Name.Builtin),
# Comparing Operators
(r'\.(eq|ne|lt|le|gt|ge|not|and|or|eqv|neqv)\.', Operator.Word),
],
'strings': [
(r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double),
(r"(?s)'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
],
'nums': [
(r'\d+(?![.e])(_[a-z]\w+)?', Number.Integer),
(r'[+-]?\d*\.\d+([ed][-+]?\d+)?(_[a-z]\w+)?', Number.Float),
(r'[+-]?\d+\.\d*([ed][-+]?\d+)?(_[a-z]\w+)?', Number.Float),
],
}
| FortranLexer |
python | protocolbuffers__protobuf | python/google/protobuf/internal/proto_text_test.py | {
"start": 608,
"end": 1137
} | class ____(unittest.TestCase):
def test_simple_serialize(self, message_module):
msg = message_module.TestAllTypes()
msg.optional_int32 = 101
expected = 'optional_int32: 101\n'
self.assertEqual(expected, proto_text.serialize(msg))
  def test_simple_parse(self, message_module):
text = 'optional_int32: 123'
msg = proto_text.parse(message_module.TestAllTypes, text)
self.assertEqual(123, msg.optional_int32) # pytype: disable=attribute-error
if __name__ == '__main__':
unittest.main()
| ProtoTextTest |
python | dagster-io__dagster | python_modules/libraries/dagster-airbyte/dagster_airbyte/managed/generated/sources.py | {
"start": 101754,
"end": 102758
} | class ____(GeneratedAirbyteSource):
@public
def __init__(
self, name: str, api_key: str, client_secret: str, country_code: str, start_date: str
):
"""Airbyte Source for Search Metrics.
Documentation can be found at https://docs.airbyte.com/integrations/sources/search-metrics
Args:
name (str): The name of the destination.
country_code (str): The region of the S3 staging bucket to use if utilising a copy strategy.
start_date (str): Data generated in SearchMetrics after this date will be replicated. This date must be specified in the format YYYY-MM-DDT00:00:00Z.
"""
self.api_key = check.str_param(api_key, "api_key")
self.client_secret = check.str_param(client_secret, "client_secret")
self.country_code = check.str_param(country_code, "country_code")
self.start_date = check.str_param(start_date, "start_date")
super().__init__("Search Metrics", name)
| SearchMetricsSource |
python | huggingface__transformers | tests/models/mimi/test_modeling_mimi.py | {
"start": 13418,
"end": 20359
} | class ____(unittest.TestCase):
def test_integration_using_cache_decode(self):
expected_rmse = {
"8": 0.0018785292,
"32": 0.0012330565,
}
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
model_id = "kyutai/mimi"
model = MimiModel.from_pretrained(model_id, use_cache=True).to(torch_device)
processor = AutoFeatureExtractor.from_pretrained(model_id)
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]
inputs = processor(
raw_audio=audio_sample,
sampling_rate=processor.sampling_rate,
return_tensors="pt",
).to(torch_device)
for num_codebooks, expected_rmse in expected_rmse.items():
with torch.no_grad():
# use max bandwidth for best possible reconstruction
encoder_outputs = model.encode(inputs["input_values"], num_quantizers=int(num_codebooks))
audio_codes = encoder_outputs[0]
decoder_outputs_first_part = model.decode(audio_codes[:, :, : audio_codes.shape[2] // 2])
decoder_outputs_second_part = model.decode(
audio_codes[:, :, audio_codes.shape[2] // 2 :],
decoder_past_key_values=decoder_outputs_first_part.decoder_past_key_values,
)
audio_output_entire_context = model.decode(audio_codes)[0]
audio_output_concat_context = torch.cat(
[decoder_outputs_first_part[0], decoder_outputs_second_part[0]], dim=2
)
# make sure audios are more or less equal
# the RMSE of two random gaussian noise vectors with ~N(0, 1) is around 1.0
rmse = compute_rmse(
audio_output_concat_context.squeeze().cpu().numpy(),
audio_output_entire_context.squeeze().cpu().numpy(),
)
self.assertTrue(rmse < 1e-3)
def test_integration_encode_with_padding_cache(self):
"""
We test here the possibility to run Mimi in a streaming manner, i.e. chunk by chunk.
1. we encode a first time the entire audio
2. we encode the audio chunk by chunk, each chunk being the smallest size possible for the model (i.e. the frame size)
This test must be run on CPU since GPU floating point operations accumulate rounding errors that cause test failures.
"""
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
model_id = "kyutai/mimi"
model = MimiModel.from_pretrained(model_id, use_cache=True).to("cpu")
processor = AutoFeatureExtractor.from_pretrained(model_id)
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]
inputs = processor(
raw_audio=audio_sample,
sampling_rate=processor.sampling_rate,
return_tensors="pt",
).to("cpu")
frame_size = model.config.frame_size
audio_codes = model.encode(inputs["input_values"]).audio_codes
# streaming chunk by chunk
encoder_past_key_values = None
padding_cache = None
encoded_frames_list = []
for start in range(0, inputs["input_values"].shape[-1], frame_size):
input_values_chunk = inputs["input_values"][:, :, start : start + frame_size]
encoder_outputs = model.encode(
input_values_chunk,
padding_cache=padding_cache,
encoder_past_key_values=encoder_past_key_values,
use_streaming=True,
)
encoder_past_key_values = encoder_outputs.encoder_past_key_values
padding_cache = encoder_outputs.padding_cache
encoded_frames_list.append(encoder_outputs.audio_codes)
streamed_audio_codes = torch.cat(encoded_frames_list, dim=-1)
torch.testing.assert_close(streamed_audio_codes, audio_codes)
def test_integration(self):
expected_rmses = {
"8": 0.0018785292,
"32": 0.0012330565,
}
expected_codesums = {
"8": 426176,
"32": 1795819,
}
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
model_id = "kyutai/mimi"
processor = AutoFeatureExtractor.from_pretrained(model_id)
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]
inputs = processor(
raw_audio=audio_sample,
sampling_rate=processor.sampling_rate,
return_tensors="pt",
).to(torch_device)
for use_cache in [False, True]:
model = MimiModel.from_pretrained(model_id, use_cache=use_cache).to(torch_device)
for num_codebooks, expected_rmse in expected_rmses.items():
with torch.no_grad():
# use max bandwidth for best possible reconstruction
encoder_outputs = model.encode(inputs["input_values"], num_quantizers=int(num_codebooks))
audio_code_sums = encoder_outputs[0].sum().item()
# make sure audio encoded codes are correct
# assert relative difference less than a threshold, because `audio_code_sums` varies a bit
# depending on torch version
self.assertTrue(
np.abs(audio_code_sums - expected_codesums[num_codebooks]) <= (3e-3 * audio_code_sums)
)
input_values_dec = model.decode(encoder_outputs[0], padding_mask=inputs["padding_mask"])[0]
input_values_enc_dec = model(
inputs["input_values"], inputs["padding_mask"], num_quantizers=int(num_codebooks)
)[1]
# make sure forward and decode gives same result
torch.testing.assert_close(input_values_dec, input_values_enc_dec)
# make sure shape matches
self.assertTrue(inputs["input_values"].shape == input_values_enc_dec.shape)
arr = inputs["input_values"][0].cpu().numpy()
arr_enc_dec = input_values_enc_dec[0].cpu().numpy()
# make sure audios are more or less equal
# the RMSE of two random gaussian noise vectors with ~N(0, 1) is around 1.0
rmse = compute_rmse(arr, arr_enc_dec)
self.assertTrue(np.abs(rmse - expected_rmse) < 1e-5)
| MimiIntegrationTest |
python | run-llama__llama_index | llama-index-integrations/storage/index_store/llama-index-storage-index-store-firestore/llama_index/storage/index_store/firestore/base.py | {
"start": 179,
"end": 1494
} | class ____(KVIndexStore):
"""
Firestore Index store.
Args:
firestore_kvstore (FirestoreKVStore): Firestore key-value store
namespace (str): namespace for the index store
"""
def __init__(
self,
firestore_kvstore: FirestoreKVStore,
namespace: Optional[str] = None,
collection_suffix: Optional[str] = None,
) -> None:
"""Init a FirestoreIndexStore."""
super().__init__(
firestore_kvstore, namespace=namespace, collection_suffix=collection_suffix
)
@classmethod
def from_database(
cls,
project: str,
database: str,
namespace: Optional[str] = None,
collection_suffix: Optional[str] = None,
) -> "FirestoreIndexStore":
"""
Load a FirestoreIndexStore from a Firestore database.
Args:
project (str): The project which the client acts on behalf of.
database (str): The database name that the client targets.
namespace (str): namespace for the docstore.
collection_suffix (str): suffix for the collection name
"""
firestore_kvstore = FirestoreKVStore(project=project, database=database)
return cls(firestore_kvstore, namespace, collection_suffix)
| FirestoreIndexStore |
python | kubernetes-client__python | kubernetes/client/models/v1_host_ip.py | {
"start": 383,
"end": 3476
} | class ____(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'ip': 'str'
}
attribute_map = {
'ip': 'ip'
}
def __init__(self, ip=None, local_vars_configuration=None): # noqa: E501
"""V1HostIP - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._ip = None
self.discriminator = None
self.ip = ip
@property
def ip(self):
"""Gets the ip of this V1HostIP. # noqa: E501
IP is the IP address assigned to the host # noqa: E501
:return: The ip of this V1HostIP. # noqa: E501
:rtype: str
"""
return self._ip
@ip.setter
def ip(self, ip):
"""Sets the ip of this V1HostIP.
IP is the IP address assigned to the host # noqa: E501
:param ip: The ip of this V1HostIP. # noqa: E501
:type: str
"""
if self.local_vars_configuration.client_side_validation and ip is None: # noqa: E501
raise ValueError("Invalid value for `ip`, must not be `None`") # noqa: E501
self._ip = ip
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, V1HostIP):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, V1HostIP):
return True
return self.to_dict() != other.to_dict()
| V1HostIP |
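The generated model above validates on assignment via a property setter and compares instances through `to_dict()`. A minimal standalone sketch of that setter-validation idiom, with the class name and reduced field set chosen for illustration:

```python
class HostIP:
    """Sketch of the OpenAPI-generated-model idiom: validate in the setter."""

    def __init__(self, ip=None):
        self._ip = None
        self.ip = ip  # goes through the setter, so validation applies

    @property
    def ip(self):
        return self._ip

    @ip.setter
    def ip(self, ip):
        # Mirrors the client_side_validation branch in the generated code.
        if ip is None:
            raise ValueError("Invalid value for `ip`, must not be `None`")
        self._ip = ip

    def to_dict(self):
        return {"ip": self._ip}

    def __eq__(self, other):
        return isinstance(other, HostIP) and self.to_dict() == other.to_dict()
```

With validation enabled, constructing the model without an `ip` fails immediately rather than at serialization time.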
python | more-itertools__more-itertools | tests/test_recipes.py | {
"start": 13326,
"end": 13626
} | class ____(TestCase):
def test_justseen(self):
u = mi.unique_justseen('AAAABBBCCDABB')
self.assertEqual(list('ABCDAB'), list(u))
def test_custom_key(self):
u = mi.unique_justseen('AABCcAD', str.lower)
self.assertEqual(list('ABCAD'), list(u))
| UniqueJustseenTests |
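`unique_justseen`, exercised by the tests above, is the classic itertools recipe: collapse each run of equal (or equal-keyed) adjacent elements down to its first member, leaving non-adjacent duplicates intact. A self-contained sketch of the recipe:

```python
from itertools import groupby
from operator import itemgetter


def unique_justseen(iterable, key=None):
    """Yield elements in order, dropping only duplicates that were just seen
    (i.e. adjacent under `key`); non-adjacent repeats survive."""
    # groupby yields (key, group) pairs; take each group's first element.
    return map(next, map(itemgetter(1), groupby(iterable, key)))
```

Unlike `unique_everseen`, this needs no auxiliary set, so it runs in constant memory over arbitrarily long streams.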
python | encode__django-rest-framework | rest_framework/fields.py | {
"start": 68701,
"end": 70221
} | class ____(Field):
"""
A generic field that can be used against an arbitrary model field.
This is used by `ModelSerializer` when dealing with custom model fields,
that do not have a serializer field to be mapped to.
"""
default_error_messages = {
'max_length': _('Ensure this field has no more than {max_length} characters.'),
}
def __init__(self, model_field, **kwargs):
self.model_field = model_field
# The `max_length` option is supported by Django's base `Field` class,
# so we'd better support it here.
self.max_length = kwargs.pop('max_length', None)
super().__init__(**kwargs)
if self.max_length is not None:
message = lazy_format(self.error_messages['max_length'], max_length=self.max_length)
self.validators.append(
MaxLengthValidator(self.max_length, message=message))
def to_internal_value(self, data):
rel = self.model_field.remote_field
if rel is not None:
return rel.model._meta.get_field(rel.field_name).to_python(data)
return self.model_field.to_python(data)
def get_attribute(self, obj):
# We pass the object instance onto `to_representation`,
# not just the field attribute.
return obj
def to_representation(self, obj):
value = self.model_field.value_from_object(obj)
if is_protected_type(value):
return value
return self.model_field.value_to_string(obj)
| ModelField |
python | mlflow__mlflow | mlflow/langchain/output_parsers.py | {
"start": 809,
"end": 1599
} | class ____(BaseTransformOutputParser[dict[str, Any]]):
"""
OutputParser that wraps the string output into a dictionary representation of a
:py:class:`ChatCompletionResponse`
"""
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return whether this class is serializable."""
return True
@property
def _type(self) -> str:
"""Return the output parser type for serialization."""
return "mlflow_simplified_chat_completions"
def parse(self, text: str) -> dict[str, Any]:
return asdict(
RagChatCompletionResponse(
choices=[ChainCompletionChoice(message=Message(role="assistant", content=text))],
object="chat.completion",
)
)
| ChatCompletionsOutputParser |
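The parser above simply wraps raw model text in an OpenAI-style chat-completion payload. A dependency-free sketch of that target shape, with the field set reduced for illustration:

```python
def wrap_as_chat_completion(text: str) -> dict:
    """Wrap a plain completion string in a ChatCompletionResponse-like dict."""
    return {
        "object": "chat.completion",
        "choices": [
            {"message": {"role": "assistant", "content": text}},
        ],
    }
```

Downstream consumers that expect the chat-completions schema can then treat the parsed string like any other assistant turn.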
python | pola-rs__polars | py-polars/src/polars/io/iceberg/_utils.py | {
"start": 11178,
"end": 14387
} | class ____:
def __init__(
self,
table: Table,
projected_filter_schema: pyiceberg.schema.Schema,
) -> None:
import pyiceberg.schema
from pyiceberg.io.pyarrow import schema_to_pyarrow
import polars as pl
import polars._utils.logging
verbose = polars._utils.logging.verbose()
self.file_column_statistics: dict[int, IcebergColumnStatisticsLoader] = {}
self.load_as_empty_statistics: list[str] = []
self.file_lengths: list[int] = []
self.projected_filter_schema = projected_filter_schema
for field in projected_filter_schema.fields:
field_all_types = set()
for schema in table.schemas().values():
with contextlib.suppress(ValueError):
field_all_types.add(schema.find_field(field.field_id).field_type)
_, field_polars_dtype = pl.Schema(
schema_to_pyarrow(pyiceberg.schema.Schema(field))
).popitem()
load_from_bytes_impl = LoadFromBytesImpl.init_for_field_type(
field.field_type,
field_all_types,
field_polars_dtype,
)
if verbose:
_load_from_bytes_impl = (
type(load_from_bytes_impl).__name__
if load_from_bytes_impl is not None
else "None"
)
eprint(
"IcebergStatisticsLoader: "
f"{field.name = }, "
f"{field.field_id = }, "
f"{field.field_type = }, "
f"{field_all_types = }, "
f"{field_polars_dtype = }, "
f"{_load_from_bytes_impl = }"
)
self.file_column_statistics[field.field_id] = IcebergColumnStatisticsLoader(
field_id=field.field_id,
column_name=field.name,
column_dtype=field_polars_dtype,
load_from_bytes_impl=load_from_bytes_impl,
min_values=[],
max_values=[],
null_count=[],
)
def push_file_statistics(self, file: DataFile) -> None:
self.file_lengths.append(file.record_count)
for stats in self.file_column_statistics.values():
stats.push_file_statistics(file)
def finish(
self,
expected_height: int,
identity_transformed_values: dict[int, pl.Series | str],
) -> pl.DataFrame:
import polars as pl
out: list[pl.DataFrame] = [
pl.Series("len", self.file_lengths, dtype=pl.UInt32).to_frame()
]
for field_id, stat_builder in self.file_column_statistics.items():
if (p := identity_transformed_values.get(field_id)) is not None:
if isinstance(p, str):
msg = f"statistics load failure for filter column: {p}"
raise ComputeError(msg)
column_stats_df = stat_builder.finish(expected_height, p)
out.append(column_stats_df)
return pl.concat(out, how="horizontal")
@dataclass
| IcebergStatisticsLoader |
python | sanic-org__sanic | sanic/exceptions.py | {
"start": 13297,
"end": 14769
} | class ____(HTTPException):
"""408 Request Timeout
The Web server (running the Web site) thinks that there has been too
long an interval of time between 1) the establishment of an IP
connection (socket) between the client and the server and
2) the receipt of any data on that socket, so the server has dropped
the connection. The socket connection has actually been lost - the Web
server has 'timed out' on that particular socket connection.
This is an internal exception thrown by Sanic and should not be used
directly.
Args:
message (Optional[Union[str, bytes]], optional): The message to be sent to the client. If `None`
then the HTTP status 'Bad Request' will be sent. Defaults to `None`.
quiet (Optional[bool], optional): When `True`, the error traceback will be suppressed
from the logs. Defaults to `None`.
context (Optional[Dict[str, Any]], optional): Additional mapping of key/value data that will be
sent to the client upon exception. Defaults to `None`.
extra (Optional[Dict[str, Any]], optional): Additional mapping of key/value data that will NOT be
sent to the client when in PRODUCTION mode. Defaults to `None`.
headers (Optional[Dict[str, Any]], optional): Additional headers that should be sent with the HTTP
response. Defaults to `None`.
""" # noqa: E501
status_code = 408
quiet = True
| RequestTimeout |
python | tensorflow__tensorflow | tensorflow/python/distribute/vars_test.py | {
"start": 28183,
"end": 49579
} | class ____(test.TestCase, parameterized.TestCase):
@combinations.generate(strategy_and_run_tf_function_combinations())
def testAssign(self, distribution, experimental_run_tf_function):
def assign(fn, v, update_value, cross_replica):
update_fn = lambda: getattr(v, fn)(update_value)
if cross_replica:
return update_fn()
else:
if experimental_run_tf_function:
update_fn = def_function.function(update_fn)
return test_util.gather(distribution, distribution.run(update_fn))
updates = [("assign", 1.), ("assign_add", 1.), ("assign_sub", -1.)]
aggregations = [
variables_lib.VariableAggregation.NONE,
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.MEAN,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA,
]
options = list(
x for x in itertools.product(updates, aggregations, [True, False]))
for update, aggregation, cross_replica in options:
# VariableAggregation.SUM in cross-replica mode is tested below,
# VariableAggregation.NONE in cross-replica mode is not supported.
if cross_replica and aggregation in [
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.NONE,
]:
continue
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=aggregation)
self.evaluate(variables_lib.global_variables_initializer())
fn, update_value = update
self.evaluate(assign(fn, v, update_value, cross_replica))
for component in v._values:
self.assertAllEqual(self.evaluate(component.read_value()),
self.evaluate(array_ops.ones_like(component)))
@combinations.generate(strategy_and_run_tf_function_combinations())
def testAssignOnReadVar(self, distribution, experimental_run_tf_function):
with distribution.scope():
v_to_assign = variable_v1.VariableV1(
2., aggregation=variables_lib.VariableAggregation.MEAN)
v_to_assign_sub = variable_v1.VariableV1(
-2., aggregation=variables_lib.VariableAggregation.MEAN)
def assign(fn, v, update_value, cross_replica):
update_fn = lambda: getattr(v, fn)(update_value)
if cross_replica:
return update_fn()
else:
if experimental_run_tf_function:
update_fn = def_function.function(update_fn)
return test_util.gather(distribution, distribution.run(update_fn))
updates = [("assign", v_to_assign), ("assign_add", v_to_assign),
("assign_sub", v_to_assign_sub)]
expected_cross_replica = {
variables_lib.VariableAggregation.SUM: 1.0,
variables_lib.VariableAggregation.MEAN: 2.0,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA: 2.0
}
expected_replica = {
variables_lib.VariableAggregation.SUM: 2.0,
variables_lib.VariableAggregation.MEAN: 2.0,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA: 2.0
}
# aggregation=NONE is not supported for OnReadVariables.
aggregations = [
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.MEAN,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA,
]
options = list(
x for x in itertools.product(updates, aggregations, [True, False]))
for update, aggregation, cross_replica in options:
      # assign in replica context with SUM does not make sense because you
      # could just assign value * num_replicas instead; the resulting error is
      # that 1. is not a distributed value and is unsupported for aggregation
      # SUM
if aggregation == variables_lib.VariableAggregation.SUM:
continue
with distribution.scope():
v = variable_v1.VariableV1(
0.,
aggregation=aggregation)
self.evaluate(variables_lib.global_variables_initializer())
fn, update_value = update
self.evaluate(assign(fn, v, update_value, cross_replica))
if cross_replica:
for component in v._values:
self.assertAllEqual(expected_cross_replica.get(aggregation),
self.evaluate(component.read_value()))
else:
for component in v._values:
self.assertAllEqual(expected_replica.get(aggregation),
self.evaluate(component.read_value()))
@combinations.generate(strategy_and_run_tf_function_combinations())
def testAssignPerReplicaVal(self, distribution, experimental_run_tf_function):
if strategy_test_lib.is_tpu_strategy(distribution):
self.skipTest("Assigning PerReplica values is not supported. See"
" sponge/80ba41f8-4220-4516-98ce-bbad48f9f11a.")
self.skipTest(
"We don't support assigning PerReplica values in cross "
"replica context or replica context. see error in "
"sponge/2b2e54c1-eda6-4534-82e1-c73b1dcd517f."
)
with distribution.scope():
per_replica_value = values.PerReplica(
[constant_op.constant(2.0),
constant_op.constant(2.0)])
def assign(fn, v, update_value, cross_replica):
update_fn = lambda: getattr(v, fn)(update_value)
if cross_replica:
return update_fn()
else:
if experimental_run_tf_function:
update_fn = def_function.function(update_fn)
return test_util.gather(distribution, distribution.run(update_fn))
updates = [("assign", per_replica_value)]
  # We don't support assigning PerReplica values to vars in replica context
  # with aggregation=NONE.
aggregations = [
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.MEAN,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA,
]
options = list(
x for x in itertools.product(updates, aggregations, [True, False]))
for update, aggregation, cross_replica in options:
      # assign in replica context with SUM does not make sense because you
      # could just assign value * num_replicas instead; the resulting error is
      # that 1. is not a distributed value and is unsupported for aggregation
      # SUM
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=aggregation)
self.evaluate(variables_lib.global_variables_initializer())
fn, update_value = update
# with self.assertRaisesRegex(ValueError, "Attempt to convert a value "):
self.evaluate(assign(fn, v, update_value, cross_replica))
if aggregation == variables_lib.VariableAggregation.SUM:
expected = 4.0
else:
expected = 2.0
for component in v._values:
self.assertAllEqual(expected, self.evaluate(component.read_value()))
@combinations.generate(strategy_and_run_tf_function_combinations())
def testAssignDtypeConversion(self, distribution,
experimental_run_tf_function):
def assign(fn, v, update_value, cross_replica):
update_fn = lambda: getattr(v, fn)(update_value)
if cross_replica:
return update_fn()
else:
if experimental_run_tf_function:
update_fn = def_function.function(update_fn)
return test_util.gather(distribution, distribution.run(update_fn))
updates = [("assign", 1), ("assign_add", 1), ("assign_sub", -1)]
aggregations = [
variables_lib.VariableAggregation.NONE,
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.MEAN,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA,
]
options = list(
x for x in itertools.product(updates, aggregations, [True, False]))
for update, aggregation, cross_replica in options:
# VariableAggregation.SUM in cross-replica mode is tested below,
# VariableAggregation.NONE in cross-replica mode is not supported.
if cross_replica and aggregation in [
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.NONE,
]:
continue
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=aggregation)
self.evaluate(variables_lib.global_variables_initializer())
fn, update_value = update
self.evaluate(assign(fn, v, update_value, cross_replica))
for component in v._values:
self.assertAllEqual(self.evaluate(component.read_value()),
self.evaluate(array_ops.ones_like(component)))
@combinations.generate(strategy_with_var_policy())
def testAssignWithAggregationSum(self, distribution):
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=variables_lib.VariableAggregation.SUM)
self.evaluate(variables_lib.global_variables_initializer())
self.evaluate(v.assign(1. * distribution.num_replicas_in_sync))
for component in v._values:
self.assertAllEqual(self.evaluate(component.read_value()),
self.evaluate(array_ops.ones_like(component)))
@combinations.generate(strategy_with_var_policy())
def testAssignAddSubWithAggregationSum(self, distribution):
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=variables_lib.VariableAggregation.SUM)
self.evaluate(variables_lib.global_variables_initializer())
with self.assertRaisesRegex(
ValueError, "SyncOnReadVariable does not support "):
self.evaluate(v.assign_add(1.))
with self.assertRaisesRegex(
ValueError, "SyncOnReadVariable does not support "):
self.evaluate(v.assign_sub(1.))
@combinations.generate(strategy_and_run_tf_function_combinations())
def testReadValueInReplicaContext(self, distribution,
experimental_run_tf_function):
aggregations = [
variables_lib.VariableAggregation.NONE,
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.MEAN,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA,
]
for aggregation in aggregations:
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=aggregation)
self.evaluate(variables_lib.global_variables_initializer())
if experimental_run_tf_function:
read_var_fn = def_function.function(v.read_value)
else:
read_var_fn = v.read_value
results = self.evaluate(
test_util.gather(distribution, distribution.run(read_var_fn)))
for component, value in zip(v._values, results):
self.assertAllEqual(self.evaluate(component.read_value()), value)
@combinations.generate(strategy_and_run_tf_function_combinations())
def testReadValueInCrossReplicaContext(self, distribution,
experimental_run_tf_function):
aggregations = [
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.MEAN,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA,
]
for aggregation in aggregations:
if strategy_test_lib.is_tpu_strategy(distribution):
resolver = tpu_cluster_resolver.TPUClusterResolver("")
tpu_cluster_resolver.initialize_tpu_system(resolver)
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=aggregation)
self.evaluate(variables_lib.global_variables_initializer())
def assign(v=v):
ctx = distribute_lib.get_replica_context()
replica_id = ctx.replica_id_in_sync_group
return v.assign(math_ops.cast(replica_id, dtypes.float32))
if experimental_run_tf_function:
assign = def_function.function(assign)
self.evaluate(test_util.gather(distribution, distribution.run(assign)))
num_replicas = distribution.num_replicas_in_sync
sum_of_replica_values = num_replicas * (num_replicas - 1) / 2.
if aggregation == variables_lib.VariableAggregation.SUM:
expected = sum_of_replica_values
elif aggregation == variables_lib.VariableAggregation.MEAN:
expected = sum_of_replica_values / num_replicas
else:
expected = 0
self.assertEqual(expected, self.evaluate(v.read_value()), aggregation)
self.assertEqual(expected, self.evaluate(v.value()), aggregation)
self.assertEqual(expected, self.evaluate(v), aggregation)
self.assertEqual(expected, self.evaluate(array_ops.identity(v)),
aggregation)
@combinations.generate(strategy_and_run_tf_function_combinations())
def testAllReduce(self, distribution, experimental_run_tf_function):
with distribution.scope():
v = variable_v1.VariableV1(
2.,
synchronization=variables_lib.VariableSynchronization.ON_WRITE,
aggregation=variables_lib.VariableAggregation.MEAN)
self.evaluate(variables_lib.global_variables_initializer())
def all_reduce():
ctx = distribute_lib.get_replica_context()
replica_id = ctx.replica_id_in_sync_group
return ctx.all_reduce("SUM", v) + math_ops.cast(replica_id,
dtypes.float32)
if experimental_run_tf_function:
all_reduce = def_function.function(all_reduce)
per_replica_results = self.evaluate(
test_util.gather(distribution, distribution.run(all_reduce)))
expected_result = []
for i in range(distribution.num_replicas_in_sync):
expected_result.append(2.0 * distribution.num_replicas_in_sync +
1.0 * i)
self.assertAllEqual(per_replica_results, tuple(expected_result))
@combinations.generate(strategy_and_run_tf_function_combinations())
def testAssignPerReplicaBeforeRead(self, distribution,
experimental_run_tf_function):
aggregations = [
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.MEAN,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA,
]
for aggregation in aggregations:
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=aggregation)
self.evaluate(variables_lib.global_variables_initializer())
def assign(var=v):
ctx = distribute_lib.get_replica_context()
replica_id = ctx.replica_id_in_sync_group
return var.assign(math_ops.cast(replica_id, dtypes.float32))
if experimental_run_tf_function:
assign = def_function.function(assign)
per_replica_results = self.evaluate(
test_util.gather(distribution, distribution.run(assign)))
expected_result = []
for i in range(distribution.num_replicas_in_sync):
expected_result.append(1.0 * i)
self.assertAllEqual(per_replica_results, tuple(expected_result))
@combinations.generate(strategy_with_var_policy())
def testReadValueWithAggregationNoneInCrossReplicaContext(self, distribution):
with distribution.scope():
v = variable_v1.VariableV1(
0.,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=variables_lib.VariableAggregation.NONE)
self.evaluate(variables_lib.global_variables_initializer())
with self.assertRaisesRegex(
ValueError, "Could not convert from .* VariableAggregation\\.NONE"):
self.evaluate(v.read_value())
@combinations.generate(strategy_with_var_policy())
def testInitializedToSameValueInsideEagerRun(self, distribution):
if not context.executing_eagerly(): self.skipTest("eager only")
if isinstance(distribution.extended,
collective_all_reduce_strategy.CollectiveAllReduceExtended):
self.skipTest("Test for more than 1 device per worker only.")
v = [None]
@def_function.function
def step():
def f():
if v[0] is None:
v[0] = variables_lib.Variable(
random_ops.random_normal([]),
synchronization=variables_lib.VariableSynchronization.ON_READ)
distribution.run(f)
context.set_global_seed(None)
step()
vals = self.evaluate(v[0].values)
self.assertAllEqual(vals[0], vals[1])
@combinations.generate(strategy_with_var_policy())
def testOperatorOverride(self, distribution):
with distribution.scope():
v = variable_v1.VariableV1(
0.0,
synchronization=variables_lib.VariableSynchronization.ON_READ,
aggregation=variables_lib.VariableAggregation.MEAN)
self.evaluate(variables_lib.global_variables_initializer())
@def_function.function
def assign():
ctx = distribute_lib.get_replica_context()
replica_id = ctx.replica_id_in_sync_group
return v.assign(math_ops.cast(replica_id, dtypes.float32))
# Assign different replicas with different values.
self.evaluate(test_util.gather(distribution, distribution.run(assign)))
self.assertEqual(1.5, self.evaluate(v + 1))
@def_function.function
def add():
return v + 1
per_replica_results = self.evaluate(
test_util.gather(distribution, distribution.run(add)))
self.assertAllEqual([1, 2], per_replica_results)
@combinations.generate(
combinations.combine(
strategy=[
strategy_combinations.mirrored_strategy_with_gpu_and_cpu,
strategy_combinations.tpu_strategy,
strategy_combinations.tpu_strategy_packed_var,
strategy_combinations.multi_worker_mirrored_2x1_cpu,
strategy_combinations.multi_worker_mirrored_2x1_gpu,
],
mode=["eager"],
use_var_policy=[True, False]))
def testSaveAndRestoreOnRead(self, strategy):
aggregation = [variable_scope.VariableAggregation.SUM,
variable_scope.VariableAggregation.MEAN]
for agg in aggregation:
v_normal_restore = variables_lib.Variable(1.0)
v_normal_save = variables_lib.Variable(2.0)
with strategy.scope():
v_on_read = variables_lib.Variable(
1.0, synchronization=variable_scope.VariableSynchronization.ON_READ,
aggregation=agg)
@def_function.function
def assign_fn():
cluster_resolver = strategy.cluster_resolver
replica_ctx = distribute_lib.get_replica_context()
if ((cluster_resolver and cluster_resolver.task_type == "worker") or
math_ops.equal(replica_ctx.replica_id_in_sync_group,
constant_op.constant(1))):
v_on_read.assign(3.) # pylint:disable=cell-var-from-loop
else:
v_on_read.assign(4.) # pylint:disable=cell-var-from-loop
strategy.run(assign_fn)
# Save ONREAD, restore ONREAD
# Saves v[0] + v[1] = 7 for SUM and 3.5 for MEAN.
ckpt = trackable_utils.Checkpoint(var=v_on_read)
manager = ckpt_manager.CheckpointManager(
ckpt, "/tmp/ckpt_" + str(uuid.uuid4()), max_to_keep=None)
manager.save()
# Restores a value of 7/2 = 3.5 for SUM and 3.5 for MEAN.
ckpt.restore(manager.latest_checkpoint)
self.assertEqual(3.5, self.evaluate(v_on_read._values[0]))
# Save ONREAD, restore normal
ckpt_normal = trackable_utils.Checkpoint(var=v_normal_restore)
ckpt_normal.restore(manager.latest_checkpoint)
if agg == variable_scope.VariableAggregation.SUM:
self.assertEqual(7.0, self.evaluate(v_normal_restore.read_value()))
else:
self.assertEqual(3.5, self.evaluate(v_normal_restore.read_value()))
# Save normal, restore ONREAD
ckpt = trackable_utils.Checkpoint(var=v_normal_save)
manager = ckpt_manager.CheckpointManager(
ckpt, "/tmp/ckpt_" + str(uuid.uuid4()), max_to_keep=None)
manager.save()
# Restores a value of 2/2 = 1.0 for SUM and 2.0 for MEAN.
ckpt_on_read = trackable_utils.Checkpoint(var=v_on_read)
ckpt_on_read.restore(manager.latest_checkpoint)
if agg == variable_scope.VariableAggregation.SUM:
self.assertEqual(1.0, self.evaluate(v_on_read._values[0]))
else:
self.assertEqual(2.0, self.evaluate(v_on_read._values[0]))
@combinations.generate(
combinations.combine(
distribution=[
strategy_combinations.mirrored_strategy_with_gpu_and_cpu,
strategy_combinations.multi_worker_mirrored_2x1_cpu,
strategy_combinations.multi_worker_mirrored_2x1_gpu,
],
aggregation=[
variables_lib.VariableAggregation.MEAN,
variables_lib.VariableAggregation.SUM,
variables_lib.VariableAggregation.ONLY_FIRST_REPLICA,
],
mode=["graph", "eager"],
use_var_policy=[True, False]))
| OnReadVariableSyncTest |
python | dagster-io__dagster | python_modules/dagster-pipes/dagster_pipes/__init__.py | {
"start": 47572,
"end": 48749
} | class ____(PipesBlobStoreMessageWriterChannel):
"""Message writer channel for writing messages by periodically writing message chunks to an
AzureBlobStorage container.
Args:
client (Any): An azure.storage.blob.BlobServiceClient object.
bucket (str): The name of the AzureBlobStorage container to write to.
key_prefix (Optional[str]): An optional prefix to use for the keys of written blobs.
interval (float): interval in seconds between upload chunk uploads
"""
def __init__(
self, client: Any, bucket: str, key_prefix: Optional[str], *, interval: float = 10
):
super().__init__(interval=interval)
self._client = client
self._bucket = bucket
self._key_prefix = key_prefix
def upload_messages_chunk(self, payload: IO, index: int) -> None:
key = f"{self._key_prefix}/{index}.json" if self._key_prefix else f"{index}.json"
with self._client.get_blob_client(self._bucket, key) as blob_client:
blob_client.upload_blob(payload.read())
# ########################
# ##### IO - DBFS
# ########################
| PipesAzureBlobStorageMessageWriterChannel |
python | spack__spack | lib/spack/spack/vendor/macholib/mach_o.py | {
"start": 31938,
"end": 32033
} | class ____(Structure):
_fields_ = ()
def describe(self):
return {}
| ident_command |
python | ray-project__ray | release/nightly_tests/dataset/sort_benchmark.py | {
"start": 406,
"end": 6004
} | class ____(Datasource):
"""An example datasource that generates rows with random int64 keys and a
row of the given byte size.
Examples:
>>> source = RandomIntRowDatasource()
>>> ray.data.read_datasource(source, n=10, row_size_bytes=2).take()
... {'c_0': 1717767200176864416, 'c_1': b"..."}
... {'c_0': 4983608804013926748, 'c_1': b"..."}
"""
def prepare_read(
self, parallelism: int, n: int, row_size_bytes: int
) -> List[ReadTask]:
_check_pyarrow_version()
import pyarrow
read_tasks: List[ReadTask] = []
block_size = max(1, n // parallelism)
row = np.random.bytes(row_size_bytes)
schema = pyarrow.schema(
[
pyarrow.field("c_0", pyarrow.int64()),
# NOTE: We use fixed-size binary type to avoid Arrow (list) offsets
# overflows when using non-fixed-size data-types (like string,
# binary, list, etc) whose size exceeds int32 limit (of 2^31-1)
pyarrow.field("c_1", pyarrow.binary(row_size_bytes)),
]
)
def make_block(count: int) -> Block:
return pyarrow.Table.from_arrays(
[
np.random.randint(
np.iinfo(np.int64).max, size=(count,), dtype=np.int64
),
[row for _ in range(count)],
],
schema=schema,
)
i = 0
while i < n:
count = min(block_size, n - i)
meta = BlockMetadata(
num_rows=count,
size_bytes=count * (8 + row_size_bytes),
input_files=None,
exec_stats=None,
)
read_tasks.append(
ReadTask(
lambda count=count: [make_block(count)],
meta,
schema=schema,
)
)
i += block_size
return read_tasks
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"--num-partitions", help="number of partitions", default="50", type=str
)
parser.add_argument(
"--partition-size",
help="partition size (bytes)",
default="200e6",
type=str,
)
parser.add_argument(
"--shuffle", help="shuffle instead of sort", action="store_true"
)
# Use 100-byte records to approximately match Cloudsort benchmark.
parser.add_argument(
"--row-size-bytes",
help="Size of each row in bytes.",
default=100,
type=int,
)
parser.add_argument("--use-polars-sort", action="store_true")
parser.add_argument("--limit-num-blocks", type=int, default=None)
args = parser.parse_args()
if args.use_polars_sort and not args.shuffle:
print("Using polars for sort")
ctx = DataContext.get_current()
ctx.use_polars_sort = True
ctx = DataContext.get_current()
if args.limit_num_blocks is not None:
DataContext.get_current().set_config(
"debug_limit_shuffle_execution_to_num_blocks", args.limit_num_blocks
)
num_partitions = int(args.num_partitions)
partition_size = int(float(args.partition_size))
print(
f"Dataset size: {num_partitions} partitions, "
f"{partition_size / GiB}GB partition size, "
f"{num_partitions * partition_size / GiB}GB total"
)
def run_benchmark(args):
# Override target max-block size to avoid creating too many blocks
DataContext.get_current().target_max_block_size = 1 * GiB
source = RandomIntRowDatasource()
# Each row has an int64 key.
num_rows_per_partition = partition_size // (8 + args.row_size_bytes)
ds = ray.data.read_datasource(
source,
override_num_blocks=num_partitions,
n=num_rows_per_partition * num_partitions,
row_size_bytes=args.row_size_bytes,
)
if args.shuffle:
ds = ds.random_shuffle()
else:
ds = ds.sort(key="c_0")
exc = None
try:
ds = ds.materialize()
except Exception as e:
exc = e
ds_stats = ds.stats()
# TODO(swang): Add stats for OOM worker kills. This is not very
# convenient to do programmatically right now because it requires
# querying Prometheus.
print("==== Driver memory summary ====")
maxrss = int(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1e3)
print(f"max: {maxrss / 1e9}/GB")
process = psutil.Process(os.getpid())
rss = int(process.memory_info().rss)
print(f"rss: {rss / 1e9}/GB")
try:
print(memory_summary(stats_only=True))
except Exception:
print("Failed to retrieve memory summary")
print(traceback.format_exc())
print("")
if ds_stats is not None:
print(ds_stats)
results = {
"num_partitions": num_partitions,
"partition_size": partition_size,
"peak_driver_memory": maxrss,
}
# Wait until after the stats have been printed to raise any exceptions.
if exc is not None:
print(results)
raise exc
return results
benchmark = Benchmark()
benchmark.run_fn("main", run_benchmark, args)
benchmark.write_result()
ray.timeline("dump.json")
| RandomIntRowDatasource |
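`prepare_read` above splits `n` rows into roughly `parallelism` read tasks using a block size of `max(1, n // parallelism)`, with the final block absorbing the remainder. The partitioning loop in isolation, as a sketch returning only the per-block row counts:

```python
def split_into_blocks(n, parallelism):
    """Return per-block row counts, mirroring the read-task loop above."""
    block_size = max(1, n // parallelism)
    counts = []
    i = 0
    while i < n:
        # The last block may be smaller than block_size.
        counts.append(min(block_size, n - i))
        i += block_size
    return counts
```

Note that integer division can make the task count differ from `parallelism`: the remainder becomes an extra, smaller trailing block, and when `n < parallelism` each row gets its own block.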
python | getsentry__sentry | src/sentry/search/eap/types.py | {
"start": 2746,
"end": 2912
} | class ____:
span: list[str] | None
log: list[str] | None
metric: list[str] | None
MetricType = Literal["counter", "gauge", "distribution"]
| AdditionalQueries |
python | getsentry__sentry | src/sentry/plugins/bases/tag.py | {
"start": 160,
"end": 595
} | class ____(Plugin2):
tag: ClassVar[str]
project_default_enabled = True
def get_tag_values(self, event) -> list[str]:
"""
Must return a list of values.
>>> get_tag_pairs(event)
[tag1, tag2, tag3]
"""
raise NotImplementedError
def get_tags(self, event, **kwargs):
return [(self.tag, v) for v in self.get_tag_values(event) if len(v) <= MAX_TAG_VALUE_LENGTH]
| TagPlugin |
python | matplotlib__matplotlib | lib/mpl_toolkits/axisartist/floating_axes.py | {
"start": 557,
"end": 659
} | class ____(
grid_helper_curvelinear.FloatingAxisArtistHelper):
pass
| FloatingAxisArtistHelper |
python | pydantic__pydantic | tests/mypy/modules/covariant_typevar.py | {
"start": 104,
"end": 153
} | class ____(BaseModel, Generic[T]):
value: T
| Foo |
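The test model above relies on pydantic's support for `Generic[T]` parametrized models. The same parametrized-container idea without pydantic, as a plain generic dataclass (names here are illustrative, not from the source):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")


@dataclass
class Box(Generic[T]):
    """A minimal generic container: `Box[int]`, `Box[str]`, etc."""
    value: T


def double(box: "Box[int]") -> int:
    # A type checker infers box.value as int here.
    return box.value * 2
```

Parametrizing the model (`Box[int]` vs `Box[str]`) changes only what the type checker accepts; at runtime the container behaves identically.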
python | openai__openai-python | src/openai/types/batch.py | {
"start": 388,
"end": 544
} | class ____(BaseModel):
data: Optional[List[BatchError]] = None
object: Optional[str] = None
"""The object type, which is always `list`."""
| Errors |
python | PrefectHQ__prefect | src/integrations/prefect-github/prefect_github/schemas/graphql_schema.py | {
"start": 354610,
"end": 354955
} | class ____(sgqlc.types.Type):
"""
See source code for more info.
"""
__schema__ = graphql_schema
__field_names__ = ("contexts",)
contexts = sgqlc.types.Field(
sgqlc.types.non_null(
sgqlc.types.list_of(sgqlc.types.non_null("HovercardContext"))
),
graphql_name="contexts",
)
| Hovercard |
python | getsentry__sentry | src/sentry/ratelimits/leaky_bucket.py | {
"start": 840,
"end": 7861
} | class ____:
NAMESPACE = "leaky_bucket_limiter"
class LimitExceeded(Exception):
def __init__(self, info: LeakyBucketLimitInfo) -> None:
self.info = info
def __init__(self, burst_limit: int, drip_rate: int, key: str | None = None) -> None:
cluster_key = settings.SENTRY_RATE_LIMIT_REDIS_CLUSTER
self.client = redis.redis_clusters.get(cluster_key)
self.burst_limit = burst_limit
self.drip_rate = drip_rate
self.default_key = key
def _redis_key(self, key: str | None = None) -> str:
key = key or self.default_key
if not key:
raise ValueError("Either key or default_key must be set")
return f"{self.NAMESPACE}:{key}"
def validate(self) -> None:
try:
self.client.ping()
self.client.connection_pool.disconnect()
except Exception as e:
raise InvalidConfiguration(str(e))
def use_and_get_info(
self,
key: str | None = None,
timestamp: float | None = None,
incr_by: int = 1,
) -> LeakyBucketLimitInfo:
"""
Consumes a request from the bucket and returns the current state of the bucket.
The state of the bucket changes if and only if the request is not limited.
"""
try:
incr_by = int(incr_by)
# TODO: do we want to support float incr_by? Right now it would just work if we'd cast it to float instead.
if incr_by <= 0:
raise ValueError
except ValueError:
raise ValueError("incr_by must be an integer greater than 0")
if timestamp is None:
timestamp = time()
redis_key = self._redis_key(key)
try:
bucket_size, drip_rate, last_drip, current_level, wait_time = leaky_bucket_info(
[redis_key],
[self.burst_limit, self.drip_rate, timestamp, incr_by],
client=self.client,
)
last_drip, current_level, wait_time = (
float(last_drip),
float(current_level),
float(wait_time),
)
return LeakyBucketLimitInfo(bucket_size, drip_rate, last_drip, current_level, wait_time)
except Exception:
logger.exception(
"Could not determine leaky bucket limiter state", dict(redis_key=redis_key)
)
# fail open
return LeakyBucketLimitInfo(self.burst_limit, self.drip_rate)
def is_limited(
self, key: str | None = None, timestamp: float | None = None, incr_by: int = 1
) -> bool:
return bool(self.use_and_get_info(key, timestamp, incr_by).wait_time)
def get_bucket_state(self, key: str | None = None) -> LeakyBucketLimitInfo:
"""
Get the current state of the bucket without consuming any requests.
"""
try:
last_drip, current_level = map(
lambda x: float(x or 0),
self.client.hmget(
self._redis_key(key),
["last_drip", "current_level"],
),
)
except Exception:
logger.exception("Could not get bucket state", extra={"key": self._redis_key(key)})
return LeakyBucketLimitInfo(self.burst_limit, self.drip_rate)
return LeakyBucketLimitInfo(
self.burst_limit,
self.drip_rate,
last_drip,
current_level,
max(0, (current_level - self.burst_limit) / self.drip_rate),
)
def decorator(
self,
key_override: str | None = None,
limited_handler: Callable[[LeakyBucketLimitInfo, dict[str, Any]], R] | None = None,
raise_exception: bool = False,
) -> Callable[[Callable[P, R]], Callable[P, R]]:
"""
Decorator to limit the rate of requests
key_override: a string that will be used as the key to identify the rate limit,
if not provided fully qualified function name will be used
limited_handler: a callback function that will be called when the rate limit is exceeded,
it will receive the LeakyBucketLimitInfo instance as an argument,
as well as original args and kwargs in a dictionary
return value will be returned as the result of the decorated function
raise_exception: if True, LeakyBucketRateLimiter.LimitExceeded will be raised when the rate limit is exceeded
        if both limited_handler and raise_exception are provided, limited_handler will take precedence
        if neither limited_handler nor raise_exception is provided, the decorated function will silently
        be ignored when the rate limit is exceeded
        Important limitation: the decorator does not allow passing incr_by, thus falls back to the default value of 1
usage:
- basic usage:
```
limiter = LeakyBucketRateLimiter(burst_limit=10, drip_rate=1)
@limiter()
def my_function():
do_something()
```
        - using a custom key:
        ```
limiter = LeakyBucketRateLimiter(burst_limit=10, drip_rate=1)
@limiter(key)
def my_function():
do_something()
```
- raising an exception when limited:
```
            @limiter(key, raise_exception=True)
            def my_function():
                do_something()

            try:
                my_function()
            except LeakyBucketRateLimiter.LimitExceeded as e:
                print(f"Rate limited, needs to wait for {e.wait_time} seconds")
```
- providing a handler function:
```
def my_limited_handler(info, context):
print(f"Rate limited, needs to wait for {info.wait_time} seconds")
print(f"Original args: {context['args']}")
print(f"Original kwargs: {context['kwargs']}")
return None
@limiter(key, limited_handler=my_limited_handler)
def my_function():
rv = do_something()
```
"""
def decorator(func: Callable[P, R]) -> Callable[P, R]:
@functools.wraps(func)
def wrapper(*args: Any, **kwargs: Any) -> Any:
try:
key = key_override or func.__qualname__
info = self.use_and_get_info(key)
if info.wait_time:
raise self.LimitExceeded(info)
return func(*args, **kwargs)
except self.LimitExceeded as e:
if limited_handler:
return limited_handler(e.info, {"args": args, "kwargs": kwargs})
if raise_exception:
raise
return wrapper
return decorator
__call__ = decorator
| LeakyBucketRateLimiter |
python | EpistasisLab__tpot | tpot/builtin_modules/arithmetictransformer.py | {
"start": 10801,
"end": 11483
} | class ____(TransformerMixin, BaseEstimator):
def __init__(self):
"""
        A transformer that returns, element-wise, whether each value is less than or equal to 0 (as 0.0/1.0 floats).
"""
pass
def fit(self, X, y=None):
return self
def transform(self, X):
transformed_X = np.array(self.transform_helper(np.array(X)))
if transformed_X.dtype != float:
transformed_X = transformed_X.astype(float)
return transformed_X
def transform_helper(self, X):
X = np.array(X)
if len(X.shape) == 1:
X = np.expand_dims(X,0)
result = X <= 0
return result.astype(float)
| LETransformer |
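The transform above reduces to an element-wise `X <= 0` comparison cast to floats, with 1-D inputs promoted to a single row. A standalone sketch of that behavior (hypothetical function name):

```python
import numpy as np


def le_zero_transform(X):
    """Element-wise `x <= 0` as 0.0/1.0 floats, mirroring transform_helper.

    A 1-D input is treated as a single row before comparison.
    """
    X = np.array(X)
    if X.ndim == 1:
        X = np.expand_dims(X, 0)
    return (X <= 0).astype(float)
```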
python | microsoft__pyright | packages/pyright-internal/src/tests/samples/constructor15.py | {
"start": 777,
"end": 892
} | class ____(Generic[_M, _N]):
def __new__(cls, m: _M, n: _N) -> Self: ...
d: D[Literal[3], Literal[4]] = D(3, 4)
| D |
python | getsentry__sentry | src/sentry/api/serializers/rest_framework/release.py | {
"start": 1907,
"end": 3183
} | class ____(serializers.Serializer):
ref = serializers.CharField(
max_length=MAX_VERSION_LENGTH,
required=False,
allow_null=True,
allow_blank=True,
help_text="An optional commit reference. This is useful if a tagged version has been provided.",
)
url = serializers.URLField(
required=False,
allow_null=True,
allow_blank=True,
help_text="A URL that points to the release. For instance, this can be the path to an online interface to the source code, such as a GitHub URL.",
)
dateReleased = serializers.DateTimeField(
required=False,
allow_null=True,
help_text="An optional date that indicates when the release went live. If not provided the current time is used.",
)
commits = serializers.ListField(
child=CommitSerializer(),
required=False,
allow_null=False,
help_text="An optional list of commit data to be associated.",
)
status = serializers.CharField(required=False, allow_null=False)
def validate_status(self, value):
try:
return ReleaseStatus.from_string(value)
except ValueError:
raise serializers.ValidationError("Invalid status %s" % value)
| ReleaseSerializer |
python | pypa__warehouse | tests/unit/email/test_init.py | {
"start": 111606,
"end": 120197
} | class ____:
def test_collaborator_added_email(
self, pyramid_request, pyramid_config, monkeypatch
):
stub_user = pretend.stub(
id="id_1",
username="username",
name="",
email="email@example.com",
primary_email=pretend.stub(email="email@example.com", verified=True),
)
stub_submitter_user = pretend.stub(
id="id_2",
username="submitterusername",
name="",
email="submiteremail@example.com",
primary_email=pretend.stub(
email="submiteremail@example.com", verified=True
),
)
subject_renderer = pyramid_config.testing_add_renderer(
"email/collaborator-added/subject.txt"
)
subject_renderer.string_response = "Email Subject"
body_renderer = pyramid_config.testing_add_renderer(
"email/collaborator-added/body.txt"
)
body_renderer.string_response = "Email Body"
html_renderer = pyramid_config.testing_add_renderer(
"email/collaborator-added/body.html"
)
html_renderer.string_response = "Email HTML Body"
send_email = pretend.stub(
delay=pretend.call_recorder(lambda *args, **kwargs: None)
)
pyramid_request.task = pretend.call_recorder(lambda *args, **kwargs: send_email)
monkeypatch.setattr(email, "send_email", send_email)
ids = [stub_submitter_user.id, stub_user.id]
pyramid_request.db = pretend.stub(
query=lambda a: pretend.stub(
filter=lambda *a: pretend.stub(
one=lambda: pretend.stub(user_id=ids.pop())
)
),
)
pyramid_request.user = stub_submitter_user
pyramid_request.registry.settings = {"mail.sender": "noreply@example.com"}
result = email.send_collaborator_added_email(
pyramid_request,
[stub_user, stub_submitter_user],
user=stub_user,
submitter=stub_submitter_user,
project_name="test_project",
role="Owner",
)
assert result == {
"username": stub_user.username,
"project": "test_project",
"role": "Owner",
"submitter": stub_submitter_user.username,
}
subject_renderer.assert_()
body_renderer.assert_(username=stub_user.username)
body_renderer.assert_(project="test_project")
body_renderer.assert_(role="Owner")
body_renderer.assert_(submitter=stub_submitter_user.username)
html_renderer.assert_(username=stub_user.username)
html_renderer.assert_(project="test_project")
html_renderer.assert_(role="Owner")
html_renderer.assert_(submitter=stub_submitter_user.username)
assert pyramid_request.task.calls == [
pretend.call(send_email),
pretend.call(send_email),
]
assert send_email.delay.calls == [
pretend.call(
"username <email@example.com>",
{
"sender": None,
"subject": "Email Subject",
"body_text": "Email Body",
"body_html": (
"<html>\n<head></head>\n"
"<body><p>Email HTML Body</p></body>\n</html>\n"
),
},
{
"tag": "account:email:sent",
"user_id": stub_user.id,
"additional": {
"from_": "noreply@example.com",
"to": "email@example.com",
"subject": "Email Subject",
"redact_ip": True,
},
},
),
pretend.call(
"submitterusername <submiteremail@example.com>",
{
"sender": None,
"subject": "Email Subject",
"body_text": "Email Body",
"body_html": (
"<html>\n<head></head>\n"
"<body><p>Email HTML Body</p></body>\n</html>\n"
),
},
{
"tag": "account:email:sent",
"user_id": stub_submitter_user.id,
"additional": {
"from_": "noreply@example.com",
"to": "submiteremail@example.com",
"subject": "Email Subject",
"redact_ip": False,
},
},
),
]
def test_collaborator_added_email_unverified(
self, pyramid_request, pyramid_config, monkeypatch
):
stub_user = pretend.stub(
id="id_1",
username="username",
name="",
email="email@example.com",
primary_email=pretend.stub(email="email@example.com", verified=False),
)
stub_submitter_user = pretend.stub(
id="id_2",
username="submitterusername",
name="",
email="submiteremail@example.com",
primary_email=pretend.stub(
email="submiteremail@example.com", verified=True
),
)
subject_renderer = pyramid_config.testing_add_renderer(
"email/collaborator-added/subject.txt"
)
subject_renderer.string_response = "Email Subject"
body_renderer = pyramid_config.testing_add_renderer(
"email/collaborator-added/body.txt"
)
body_renderer.string_response = "Email Body"
html_renderer = pyramid_config.testing_add_renderer(
"email/collaborator-added/body.html"
)
html_renderer.string_response = "Email HTML Body"
send_email = pretend.stub(
delay=pretend.call_recorder(lambda *args, **kwargs: None)
)
pyramid_request.task = pretend.call_recorder(lambda *args, **kwargs: send_email)
monkeypatch.setattr(email, "send_email", send_email)
pyramid_request.db = pretend.stub(
query=lambda a: pretend.stub(
filter=lambda *a: pretend.stub(
one=lambda: pretend.stub(user_id=stub_submitter_user.id)
)
),
)
pyramid_request.user = stub_submitter_user
pyramid_request.registry.settings = {"mail.sender": "noreply@example.com"}
result = email.send_collaborator_added_email(
pyramid_request,
[stub_user, stub_submitter_user],
user=stub_user,
submitter=stub_submitter_user,
project_name="test_project",
role="Owner",
)
assert result == {
"username": stub_user.username,
"project": "test_project",
"role": "Owner",
"submitter": stub_submitter_user.username,
}
subject_renderer.assert_()
body_renderer.assert_(username=stub_user.username)
body_renderer.assert_(project="test_project")
body_renderer.assert_(role="Owner")
body_renderer.assert_(submitter=stub_submitter_user.username)
html_renderer.assert_(username=stub_user.username)
html_renderer.assert_(project="test_project")
html_renderer.assert_(role="Owner")
html_renderer.assert_(submitter=stub_submitter_user.username)
assert pyramid_request.task.calls == [pretend.call(send_email)]
assert send_email.delay.calls == [
pretend.call(
"submitterusername <submiteremail@example.com>",
{
"sender": None,
"subject": "Email Subject",
"body_text": "Email Body",
"body_html": (
"<html>\n<head></head>\n"
"<body><p>Email HTML Body</p></body>\n</html>\n"
),
},
{
"tag": "account:email:sent",
"user_id": stub_submitter_user.id,
"additional": {
"from_": "noreply@example.com",
"to": "submiteremail@example.com",
"subject": "Email Subject",
"redact_ip": False,
},
},
)
]
| TestCollaboratorAddedEmail |
python | encode__django-rest-framework | tests/test_filters.py | {
"start": 1583,
"end": 2415
} | class ____(TestCase):
def setUp(self):
self.original_coreapi = filters.coreapi
filters.coreapi = True # mock it, because not None value needed
self.filter_backend = filters.BaseFilterBackend()
def tearDown(self):
filters.coreapi = self.original_coreapi
def test_filter_queryset_raises_error(self):
with pytest.raises(NotImplementedError):
self.filter_backend.filter_queryset(None, None, None)
@pytest.mark.skipif(not coreschema, reason='coreschema is not installed')
def test_get_schema_fields_checks_for_coreapi(self):
filters.coreapi = None
with pytest.raises(AssertionError):
self.filter_backend.get_schema_fields({})
filters.coreapi = True
assert self.filter_backend.get_schema_fields({}) == []
| BaseFilterTests |
python | numba__numba | numba/core/environment.py | {
"start": 62,
"end": 1639
} | class ____(_dynfunc.Environment):
"""Stores globals and constant pyobjects for runtime.
    It is often needed to convert between nopython objects and pyobjects.
"""
__slots__ = ('env_name', '__weakref__')
# A weak-value dictionary to store live environment with env_name as the
# key.
_memo = weakref.WeakValueDictionary()
@classmethod
def from_fndesc(cls, fndesc):
try:
# Avoid creating new Env
return cls._memo[fndesc.env_name]
except KeyError:
inst = cls(fndesc.lookup_globals())
inst.env_name = fndesc.env_name
cls._memo[fndesc.env_name] = inst
return inst
def can_cache(self):
is_dyn = '__name__' not in self.globals
return not is_dyn
def __reduce__(self):
return _rebuild_env, (
self.globals.get('__name__'),
self.consts,
self.env_name,
)
def __del__(self):
return
def __repr__(self):
return f"<Environment {self.env_name!r} >"
def _rebuild_env(modname, consts, env_name):
env = lookup_environment(env_name)
if env is not None:
return env
mod = importlib.import_module(modname)
env = Environment(mod.__dict__)
env.consts[:] = consts
env.env_name = env_name
# Cache loaded object
Environment._memo[env_name] = env
return env
def lookup_environment(env_name):
"""Returns the Environment object for the given name;
or None if not found
"""
return Environment._memo.get(env_name)
| Environment |
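The `Environment._memo` pattern above — a `weakref.WeakValueDictionary` keyed by name so live instances are reused but cache entries vanish once nothing else references them — can be sketched on its own (hypothetical class name):

```python
import weakref


class NamedEnv:
    """Sketch of the weak-value memo pattern used by Environment above."""

    # Entries disappear automatically once no strong references remain.
    _memo = weakref.WeakValueDictionary()

    def __init__(self, name: str) -> None:
        self.name = name

    @classmethod
    def from_name(cls, name: str) -> "NamedEnv":
        try:
            # Avoid creating a new instance if one is still alive.
            return cls._memo[name]
        except KeyError:
            inst = cls(name)
            cls._memo[name] = inst
            return inst
```

This is why `from_fndesc` can safely return a cached `Environment` without keeping every environment alive for the life of the process.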
python | spack__spack | var/spack/test_repos/spack_repo/builtin_mock/packages/low_priority_provider/package.py | {
"start": 217,
"end": 577
} | class ____(Package):
"""Provides multiple virtuals but is low in the priority of clingo"""
homepage = "http://www.example.com"
url = "http://www.example.com/a-1.0.tar.gz"
version("1.0", md5="0123456789abcdef0123456789abcdef")
# A low priority provider that provides both these specs together
provides("mpi", "lapack")
| LowPriorityProvider |
python | run-llama__llama_index | llama-index-integrations/readers/llama-index-readers-sec-filings/llama_index/readers/sec_filings/utils.py | {
"start": 891,
"end": 6613
} | class ____(Exception):
pass
def form_request_payload(
ticker_or_cik: str,
filing_types: List[str],
start_date: str,
end_date: str,
start_index: int,
query: str,
) -> dict:
return {
"dateRange": "custom",
"startdt": start_date,
"enddt": end_date,
"entityName": ticker_or_cik,
"forms": filing_types,
"from": start_index,
"q": query,
}
def build_filing_metadata_from_hit(hit: dict) -> FilingMetadata:
accession_number, filing_details_filename = hit["_id"].split(":", 1)
# Company CIK should be last in the CIK list. This list may also include
# the CIKs of executives carrying out insider transactions like in form 4.
cik = hit["_source"]["ciks"][-1]
accession_number_no_dashes = accession_number.replace("-", "", 2)
submission_base_url = (
f"{SEC_EDGAR_ARCHIVES_BASE_URL}/{cik}/{accession_number_no_dashes}"
)
full_submission_url = f"{submission_base_url}/{accession_number}.txt"
# Get XSL if human readable is wanted
# XSL is required to download the human-readable
# and styled version of XML documents like form 4
# SEC_EDGAR_ARCHIVES_BASE_URL + /320193/000032019320000066/wf-form4_159839550969947.xml
# SEC_EDGAR_ARCHIVES_BASE_URL +
# /320193/000032019320000066/xslF345X03/wf-form4_159839550969947.xml
# xsl = hit["_source"]["xsl"]
# if xsl is not None:
# filing_details_url = f"{submission_base_url}/{xsl}/{filing_details_filename}"
# else:
# filing_details_url = f"{submission_base_url}/{filing_details_filename}"
filing_details_url = f"{submission_base_url}/{filing_details_filename}"
filing_details_filename_extension = Path(filing_details_filename).suffix.replace(
"htm", "html"
)
filing_details_filename = (
f"{FILING_DETAILS_FILENAME_STEM}{filing_details_filename_extension}"
)
return FilingMetadata(
accession_number=accession_number,
full_submission_url=full_submission_url,
filing_details_url=filing_details_url,
filing_details_filename=filing_details_filename,
)
def generate_random_user_agent() -> str:
return f"{fake.first_name()} {fake.last_name()} {fake.email()}"
def get_filing_urls_to_download(
filing_type: str,
ticker_or_cik: str,
num_filings_to_download: int,
after_date: str,
before_date: str,
include_amends: bool,
query: str = "",
) -> List[FilingMetadata]:
"""
    Get the filing URLs to download the data.
Returns:
List[FilingMetadata]: Filing metadata from SEC
"""
filings_to_fetch: List[FilingMetadata] = []
start_index = 0
client = requests.Session()
client.mount("http://", HTTPAdapter(max_retries=retries))
client.mount("https://", HTTPAdapter(max_retries=retries))
try:
while len(filings_to_fetch) < num_filings_to_download:
payload = form_request_payload(
ticker_or_cik,
[filing_type],
after_date,
before_date,
start_index,
query,
)
headers = {
"User-Agent": generate_random_user_agent(),
"Accept-Encoding": "gzip, deflate",
"Host": "efts.sec.gov",
}
resp = client.post(
SEC_EDGAR_SEARCH_API_ENDPOINT, json=payload, headers=headers
)
resp.raise_for_status()
search_query_results = resp.json()
if "error" in search_query_results:
try:
root_cause = search_query_results["error"]["root_cause"]
if not root_cause: # pragma: no cover
raise ValueError
error_reason = root_cause[0]["reason"]
raise EdgarSearchApiError(
f"Edgar Search API encountered an error: {error_reason}. "
f"Request payload:\n{payload}"
)
except (ValueError, KeyError): # pragma: no cover
raise EdgarSearchApiError(
"Edgar Search API encountered an unknown error. "
f"Request payload:\n{payload}"
) from None
query_hits = search_query_results["hits"]["hits"]
# No more results to process
if not query_hits:
break
for hit in query_hits:
hit_filing_type = hit["_source"]["file_type"]
is_amend = hit_filing_type[-2:] == "/A"
if not include_amends and is_amend:
continue
if is_amend:
num_filings_to_download += 1
# Work around bug where incorrect filings are sometimes included.
# For example, AAPL 8-K searches include N-Q entries.
if not is_amend and hit_filing_type != filing_type:
continue
metadata = build_filing_metadata_from_hit(hit)
filings_to_fetch.append(metadata)
if len(filings_to_fetch) == num_filings_to_download:
return filings_to_fetch
# Edgar queries 100 entries at a time, but it is best to set this
# from the response payload in case it changes in the future
query_size = search_query_results["query"]["size"]
start_index += query_size
# Prevent rate limiting
time.sleep(SEC_EDGAR_RATE_LIMIT_SLEEP_INTERVAL)
finally:
client.close()
return filings_to_fetch
| EdgarSearchApiError |
python | getsentry__sentry | src/sentry/api/serializers/models/project.py | {
"start": 34662,
"end": 35235
} | class ____(TypedDict):
id: str
name: str
slug: str
shortName: str
type: str
canDisable: bool
isTestable: bool
hasConfiguration: bool
metadata: dict
contexts: list[str]
status: str
assets: list
doc: str
firstPartyAlternative: Any
deprecationDate: Any
altIsSentryApp: Any
enabled: bool
version: str
author: dict[str, str]
isDeprecated: bool
isHidden: bool
description: str
features: list[str]
featureDescriptions: list[dict[str, str]]
resourceLinks: list[dict[str, str]]
| Plugin |
python | readthedocs__readthedocs.org | readthedocs/domains/apps.py | {
"start": 71,
"end": 264
} | class ____(AppConfig):
default_auto_field = "django.db.models.BigAutoField"
name = "readthedocs.domains"
def ready(self):
import readthedocs.domains.tasks # noqa
| DomainsConfig |
python | crytic__slither | slither/vyper_parsing/ast/types.py | {
"start": 123,
"end": 181
} | class ____:
src: str
node_id: int
@dataclass
| ASTNode |
python | scipy__scipy | scipy/integrate/_ode.py | {
"start": 37676,
"end": 40275
} | class ____(vode):
runner = getattr(_vode, 'zvode', None)
supports_run_relax = 1
supports_step = 1
scalar = complex
__class_getitem__ = None
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Override state array sizes for ZVODE (53 doubles vs 51 for VODE)
self.state_doubles = zeros(53, dtype=np.float64) # ZVODE_STATE_DOUBLE_SIZE
self.state_ints = zeros(41, dtype=np.int32) # ZVODE_STATE_INT_SIZE
def reset(self, n, has_jac):
mf = self._determine_mf_and_set_bands(has_jac)
if mf in (10,):
lzw = 15 * n
elif mf in (11, 12):
lzw = 15 * n + 2 * n ** 2
elif mf in (-11, -12):
lzw = 15 * n + n ** 2
elif mf in (13,):
lzw = 16 * n
elif mf in (14, 15):
lzw = 17 * n + (3 * self.ml + 2 * self.mu) * n
elif mf in (-14, -15):
lzw = 16 * n + (2 * self.ml + self.mu) * n
elif mf in (20,):
lzw = 8 * n
elif mf in (21, 22):
lzw = 8 * n + 2 * n ** 2
elif mf in (-21, -22):
lzw = 8 * n + n ** 2
elif mf in (23,):
lzw = 9 * n
elif mf in (24, 25):
lzw = 10 * n + (3 * self.ml + 2 * self.mu) * n
elif mf in (-24, -25):
lzw = 9 * n + (2 * self.ml + self.mu) * n
lrw = 20 + n
if mf % 10 in (0, 3):
liw = 30
else:
liw = 30 + n
zwork = zeros((lzw,), complex)
self.zwork = zwork
rwork = zeros((lrw,), float)
rwork[4] = self.first_step
rwork[5] = self.max_step
rwork[6] = self.min_step
self.rwork = rwork
iwork = zeros((liw,), np.int32)
if self.ml is not None:
iwork[0] = self.ml
if self.mu is not None:
iwork[1] = self.mu
iwork[4] = self.order
iwork[5] = self.nsteps
iwork[6] = 2 # mxhnil
self.iwork = iwork
self.call_args = [self.rtol, self.atol, 1, 1,
self.zwork, self.rwork, self.iwork, mf]
self.success = 1
self.initialized = False
# Zero state arrays on reset to avoid contamination from previous problems.
# State persistence works within a single integration (istate=2), but between
# different problems (different n etc.), state needs to be cleared.
self.state_doubles.fill(0.0)
self.state_ints.fill(0)
if zvode.runner is not None:
IntegratorBase.integrator_classes.append(zvode)
| zvode |
python | cython__cython | Cython/Compiler/ExprNodes.py | {
"start": 480504,
"end": 489019
} | class ____(ExprNode):
"""
Used when a pointer of base_type is cast to a memoryviewslice with that
base type. i.e.
<int[:M:1, :N]> p
creates a fortran-contiguous cython.array.
We leave the type set to object so coercions to object are more efficient
and less work. Acquiring a memoryviewslice from this will be just as
efficient. ExprNode.coerce_to() will do the additional typecheck on
self.compile_time_type
This also handles <int[:, :]> my_c_array
operand ExprNode the thing we're casting
base_type_node MemoryViewSliceTypeNode the cast expression node
"""
subexprs = ['operand', 'shapes']
shapes = None
is_temp = True
mode = "c"
array_dtype = None
shape_type = PyrexTypes.c_py_ssize_t_type
def analyse_types(self, env):
from . import MemoryView
self.operand = self.operand.analyse_types(env)
if self.array_dtype:
array_dtype = self.array_dtype
else:
array_dtype = self.base_type_node.base_type_node.analyse(env)
axes = self.base_type_node.axes
self.type = error_type
self.shapes = []
ndim = len(axes)
# Base type of the pointer or C array we are converting
base_type = self.operand.type
if not self.operand.type.is_ptr and not self.operand.type.is_array:
error(self.operand.pos, ERR_NOT_POINTER)
return self
# Dimension sizes of C array
array_dimension_sizes = []
if base_type.is_array:
while base_type.is_array:
array_dimension_sizes.append(base_type.size)
base_type = base_type.base_type
elif base_type.is_ptr:
base_type = base_type.base_type
else:
error(self.pos, "unexpected base type %s found" % base_type)
return self
if not (base_type.same_as(array_dtype) or base_type.is_void):
error(self.operand.pos, ERR_BASE_TYPE)
return self
elif self.operand.type.is_array and len(array_dimension_sizes) != ndim:
error(self.operand.pos,
"Expected %d dimensions, array has %d dimensions" %
(ndim, len(array_dimension_sizes)))
return self
# Verify the start, stop and step values
# In case of a C array, use the size of C array in each dimension to
# get an automatic cast
for axis_no, axis in enumerate(axes):
if not axis.start.is_none:
error(axis.start.pos, ERR_START)
return self
if axis.stop.is_none:
if not array_dimension_sizes:
error(axis.pos, ERR_NOT_STOP)
return self
axis.stop = IntNode.for_int(self.pos, array_dimension_sizes[axis_no])
axis.stop = axis.stop.analyse_types(env)
shape = axis.stop.coerce_to(self.shape_type, env)
if not shape.is_literal:
shape.coerce_to_temp(env)
self.shapes.append(shape)
first_or_last = axis_no in (0, ndim - 1)
if not axis.step.is_none and first_or_last:
# '1' in the first or last dimension denotes F or C contiguity
axis.step = axis.step.analyse_types(env)
if (not axis.step.type.is_int and axis.step.is_literal and not
axis.step.type.is_error):
error(axis.step.pos, "Expected an integer literal")
return self
if axis.step.compile_time_value(env) != 1:
error(axis.step.pos, ERR_STEPS)
return self
if axis_no == 0:
self.mode = "fortran"
elif not axis.step.is_none and not first_or_last:
# step provided in some other dimension
error(axis.step.pos, ERR_STEPS)
return self
if not self.operand.is_name:
self.operand = self.operand.coerce_to_temp(env)
axes = [('direct', 'follow')] * len(axes)
if self.mode == "fortran":
axes[0] = ('direct', 'contig')
else:
axes[-1] = ('direct', 'contig')
self.coercion_type = PyrexTypes.MemoryViewSliceType(array_dtype, axes)
self.coercion_type.validate_memslice_dtype(self.pos)
self.type = self.get_cython_array_type(env)
MemoryView.use_cython_array_utility_code(env)
env.use_utility_code(
MemoryView.get_typeinfo_to_format_code(env.context.shared_utility_qualified_name)
)
return self
def allocate_temp_result(self, code):
if self.temp_code:
raise RuntimeError("temp allocated multiple times")
self.temp_code = code.funcstate.allocate_temp(self.type, True)
def infer_type(self, env):
return self.get_cython_array_type(env)
def get_cython_array_type(self, env):
cython_scope = env.context.cython_scope
cython_scope.load_cythonscope()
return cython_scope.viewscope.lookup("array").type
def generate_result_code(self, code):
from . import Buffer
shapes = [self.shape_type.cast_code(shape.result())
for shape in self.shapes]
dtype = self.coercion_type.dtype
shapes_temp = code.funcstate.allocate_temp(py_object_type, True)
format_temp = code.funcstate.allocate_temp(py_object_type, True)
format_ptr_temp = code.funcstate.allocate_temp(c_char_ptr_type, True)
itemsize = "sizeof(%s)" % dtype.empty_declaration_code()
type_info = Buffer.get_type_information_cname(code, dtype)
if self.operand.type.is_ptr:
code.putln("if (!%s) {" % self.operand.result())
code.putln( 'PyErr_SetString(PyExc_ValueError,'
'"Cannot create cython.array from NULL pointer");')
code.putln(code.error_goto(self.operand.pos))
code.putln("}")
code.putln("%s = __pyx_format_from_typeinfo(&%s); %s" % (
format_temp,
type_info,
code.error_goto_if_null(format_temp, self.pos),
))
code.put_gotref(format_temp, py_object_type)
buildvalue_fmt = " __PYX_BUILD_PY_SSIZE_T " * len(shapes)
code.putln('%s = Py_BuildValue("(" %s ")", %s); %s' % (
shapes_temp,
buildvalue_fmt,
", ".join(shapes),
code.error_goto_if_null(shapes_temp, self.pos),
))
code.put_gotref(shapes_temp, py_object_type)
code.putln("#if CYTHON_COMPILING_IN_LIMITED_API")
code.putln('%s = PyBytes_AsString(%s); %s' % (
format_ptr_temp, format_temp,
code.error_goto_if_null(format_ptr_temp, self.pos),
))
code.putln("#else")
code.putln('%s = PyBytes_AS_STRING(%s);' % (
format_ptr_temp, format_temp,
))
code.putln("#endif")
code.putln('%s = __pyx_array_new(%s, %s, %s, "%s", (char *) %s); %s' % (
self.result(),
shapes_temp, itemsize, format_ptr_temp, self.mode, self.operand.result(),
code.error_goto_if_null(self.result(), self.pos),
))
self.generate_gotref(code)
def dispose(temp):
code.put_decref_clear(temp, py_object_type)
code.funcstate.release_temp(temp)
dispose(shapes_temp)
dispose(format_temp)
code.funcstate.release_temp(format_ptr_temp)
@classmethod
def from_carray(cls, src_node, env):
"""
Given a C array type, return a CythonArrayNode
"""
pos = src_node.pos
base_type = src_node.type
none_node = NoneNode(pos)
axes = []
while base_type.is_array:
axes.append(SliceNode(pos, start=none_node, stop=none_node,
step=none_node))
base_type = base_type.base_type
axes[-1].step = IntNode.for_int(pos, 1)
memslicenode = Nodes.MemoryViewSliceTypeNode(pos, axes=axes,
base_type_node=base_type)
result = CythonArrayNode(pos, base_type_node=memslicenode,
operand=src_node, array_dtype=base_type)
result = result.analyse_types(env)
return result
| CythonArrayNode |
python | huggingface__transformers | src/transformers/models/ernie/modeling_ernie.py | {
"start": 36944,
"end": 37387
} | class ____(nn.Module):
def __init__(self, config):
super().__init__()
self.predictions = ErnieLMPredictionHead(config)
def forward(self, sequence_output: torch.Tensor) -> torch.Tensor:
prediction_scores = self.predictions(sequence_output)
return prediction_scores
@auto_docstring(
custom_intro="""
Ernie Model with a `language modeling` head on top for CLM fine-tuning.
"""
)
| ErnieOnlyMLMHead |
python | pypa__warehouse | warehouse/admin/views/users.py | {
"start": 2556,
"end": 3054
} | class ____(wtforms.Form):
email = wtforms.fields.EmailField(validators=[wtforms.validators.InputRequired()])
primary = wtforms.fields.BooleanField()
verified = wtforms.fields.BooleanField()
public = wtforms.fields.BooleanField()
unverify_reason = wtforms.fields.StringField(render_kw={"readonly": True})
domain_last_checked = wtforms.fields.DateTimeField(render_kw={"readonly": True})
domain_last_status = wtforms.fields.StringField(render_kw={"readonly": True})
| EmailForm |
python | encode__django-rest-framework | tests/test_fields.py | {
"start": 67215,
"end": 67782
} | class ____(FieldValues):
"""
Valid and invalid values for a `Choice` field that uses an integer type,
instead of a char type.
"""
valid_inputs = {
'1': 1,
3: 3,
}
invalid_inputs = {
5: ['"5" is not a valid choice.'],
'abc': ['"abc" is not a valid choice.']
}
outputs = {
'1': 1,
1: 1
}
field = serializers.ChoiceField(
choices=[
(1, 'Poor quality'),
(2, 'Medium quality'),
(3, 'Good quality'),
]
)
| TestChoiceFieldWithType |
python | spack__spack | var/spack/test_repos/spack_repo/builtin_mock/packages/dev_build_test_install_phases/package.py | {
"start": 217,
"end": 709
} | class ____(Package):
homepage = "example.com"
url = "fake.com"
version("0.0.0", sha256="0123456789abcdef0123456789abcdef")
phases = ["one", "two", "three", "install"]
def one(self, spec, prefix):
print("One locomoco")
def two(self, spec, prefix):
print("Two locomoco")
def three(self, spec, prefix):
print("Three locomoco")
def install(self, spec, prefix):
mkdirp(prefix.bin)
print("install")
| DevBuildTestInstallPhases |
python | donnemartin__interactive-coding-challenges | graphs_trees/min_heap/min_heap.py | {
"start": 46,
"end": 2192
} | class ____(object):
def __init__(self):
self.array = []
def __len__(self):
return len(self.array)
def extract_min(self):
if not self.array:
return None
if len(self.array) == 1:
return self.array.pop(0)
minimum = self.array[0]
# Move the last element to the root
self.array[0] = self.array.pop(-1)
self._bubble_down(index=0)
return minimum
def peek_min(self):
return self.array[0] if self.array else None
def insert(self, key):
if key is None:
raise TypeError('key cannot be None')
self.array.append(key)
self._bubble_up(index=len(self.array) - 1)
def _bubble_up(self, index):
if index == 0:
return
index_parent = (index - 1) // 2
if self.array[index] < self.array[index_parent]:
# Swap the indices and recurse
self.array[index], self.array[index_parent] = \
self.array[index_parent], self.array[index]
self._bubble_up(index_parent)
def _bubble_down(self, index):
min_child_index = self._find_smaller_child(index)
if min_child_index == -1:
return
if self.array[index] > self.array[min_child_index]:
# Swap the indices and recurse
self.array[index], self.array[min_child_index] = \
self.array[min_child_index], self.array[index]
self._bubble_down(min_child_index)
def _find_smaller_child(self, index):
left_child_index = 2 * index + 1
right_child_index = 2 * index + 2
# No right child
if right_child_index >= len(self.array):
# No left child
if left_child_index >= len(self.array):
return -1
# Left child only
else:
return left_child_index
else:
# Compare left and right children
if self.array[left_child_index] < self.array[right_child_index]:
return left_child_index
else:
return right_child_index
| MinHeap |
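The `MinHeap` record above implements the classic array-backed binary heap (parent of index `i` at `(i - 1) // 2`, children at `2i + 1` and `2i + 2`), with `_bubble_up` on insert and `_bubble_down` on extract. A minimal sketch of the same behavior using the standard library's `heapq` (the function name `heap_sorted` is my own, not from the record):

```python
import heapq

def heap_sorted(items):
    # heappush performs the bubble-up, heappop the bubble-down,
    # over the same implicit array-backed tree layout as MinHeap.
    heap = []
    for x in items:
        heapq.heappush(heap, x)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

Repeated extract-min yields the elements in ascending order, which is a convenient way to sanity-check any heap implementation against `sorted`.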
python | numba__numba | numba/cuda/simulator/kernelapi.py | {
"start": 1075,
"end": 1191
} | class ____:
'''
CUDA Cooperative Groups
'''
def this_grid(self):
return GridGroup()
| FakeCUDACg |
python | wandb__wandb | wandb/apis/public/registries/_members.py | {
"start": 1020,
"end": 1931
} | class ____(ArtifactsBase, arbitrary_types_allowed=True):
kind: Literal[MemberKind.ENTITY] = MemberKind.ENTITY
team: Team
role: Union[MemberRole, str] # noqa: UP007
MemberOrId = Union[User, Team, UserMember, TeamMember, str]
"""Type hint for a registry member argument that accepts a User, Team, or their ID."""
def parse_member_ids(members: Iterable[MemberOrId]) -> tuple[list[str], list[str]]:
"""Returns a tuple of (user_ids, team_ids) from parsing the given objects."""
ids_by_kind: dict[MemberKind, set[str]] = defaultdict(set)
for parsed in map(MemberId.from_obj, members):
ids_by_kind[parsed.kind].add(parsed.encode())
user_ids = ids_by_kind[MemberKind.USER]
team_ids = ids_by_kind[MemberKind.ENTITY]
# Ordering shouldn't matter, but sort anyway for reproducibility and testing
return sorted(user_ids), sorted(team_ids)
@pydantic_dataclass
| TeamMember |
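`parse_member_ids` in the record above buckets mixed member objects into per-kind id sets with a `defaultdict(set)`, then sorts for reproducibility. A stripped-down sketch of that grouping pattern, assuming plain `(kind, id)` pairs instead of the record's pydantic types (names here are illustrative):

```python
from collections import defaultdict

def group_ids(members):
    # members: iterable of (kind, member_id) pairs.
    # The set deduplicates; sorting makes output order deterministic.
    ids_by_kind = defaultdict(set)
    for kind, member_id in members:
        ids_by_kind[kind].add(member_id)
    return {kind: sorted(ids) for kind, ids in ids_by_kind.items()}
```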
python | pytorch__pytorch | test/torch_np/numpy_tests/core/test_shape_base.py | {
"start": 4736,
"end": 6332
} | class ____(TestCase):
def test_non_iterable(self):
assert_raises(TypeError, hstack, 1)
def test_empty_input(self):
assert_raises(ValueError, hstack, ())
def test_0D_array(self):
a = array(1)
b = array(2)
res = hstack([a, b])
desired = array([1, 2])
assert_array_equal(res, desired)
def test_1D_array(self):
a = array([1])
b = array([2])
res = hstack([a, b])
desired = array([1, 2])
assert_array_equal(res, desired)
def test_2D_array(self):
a = array([[1], [2]])
b = array([[1], [2]])
res = hstack([a, b])
desired = array([[1, 1], [2, 2]])
assert_array_equal(res, desired)
def test_generator(self):
# numpy 1.24 emits warnings but we don't
# with assert_warns(FutureWarning):
hstack([np.arange(3) for _ in range(2)])
# with assert_warns(FutureWarning):
hstack([x for x in np.ones((3, 2))]) # noqa: C416
@skipif(numpy.__version__ < "1.24", reason="NP_VER: fails on NumPy 1.23.x")
def test_casting_and_dtype(self):
a = np.array([1, 2, 3])
b = np.array([2.5, 3.5, 4.5])
res = np.hstack(np.append(a, b), casting="unsafe", dtype=np.int64)
expected_res = np.array([1, 2, 3, 2, 3, 4])
assert_array_equal(res, expected_res)
def test_casting_and_dtype_type_error(self):
a = np.array([1, 2, 3])
b = np.array([2.5, 3.5, 4.5])
with pytest.raises(TypeError):
hstack((a, b), casting="safe", dtype=np.int64)
| TestHstack |
python | huggingface__transformers | tests/models/bartpho/test_tokenization_bartpho.py | {
"start": 915,
"end": 3235
} | class ____(TokenizerTesterMixin, unittest.TestCase):
from_pretrained_id = "vinai/bartpho-syllable"
tokenizer_class = BartphoTokenizer
test_rust_tokenizer = False
test_sentencepiece = True
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.special_tokens_map = {"unk_token": "<unk>"}
@classmethod
def get_tokenizer(cls, pretrained_name=None, **kwargs):
"""Create a fresh tokenizer for each test instead of loading from saved."""
kwargs.update(cls.special_tokens_map)
# Create a temporary directory for this tokenizer
tmpdir = tempfile.mkdtemp()
vocab = ["▁This", "▁is", "▁a", "▁t", "est"]
vocab_tokens = dict(zip(vocab, range(len(vocab))))
monolingual_vocab_file = os.path.join(tmpdir, VOCAB_FILES_NAMES["monolingual_vocab_file"])
with open(monolingual_vocab_file, "w", encoding="utf-8") as fp:
fp.writelines(f"{token} {vocab_tokens[token]}\n" for token in vocab_tokens)
return BartphoTokenizer(SAMPLE_VOCAB, monolingual_vocab_file, **kwargs)
def get_input_output_texts(self, tokenizer):
input_text = "This is a là test"
output_text = "This is a<unk><unk> test"
return input_text, output_text
def test_full_tokenizer(self):
vocab = ["▁This", "▁is", "▁a", "▁t", "est"]
vocab_tokens = dict(zip(vocab, range(len(vocab))))
special_tokens_map = {"unk_token": "<unk>"}
with tempfile.TemporaryDirectory() as tmpdir:
monolingual_vocab_file = os.path.join(tmpdir, VOCAB_FILES_NAMES["monolingual_vocab_file"])
with open(monolingual_vocab_file, "w", encoding="utf-8") as fp:
fp.writelines(f"{token} {vocab_tokens[token]}\n" for token in vocab_tokens)
tokenizer = BartphoTokenizer(SAMPLE_VOCAB, monolingual_vocab_file, **special_tokens_map)
text = "This is a là test"
bpe_tokens = "▁This ▁is ▁a ▁l à ▁t est".split()
tokens = tokenizer.tokenize(text)
self.assertListEqual(tokens, bpe_tokens)
input_tokens = tokens + [tokenizer.unk_token]
input_bpe_tokens = [4, 5, 6, 3, 3, 7, 8, 3]
self.assertListEqual(tokenizer.convert_tokens_to_ids(input_tokens), input_bpe_tokens)
| BartphoTokenizerTest |
python | apache__airflow | providers/google/src/airflow/providers/google/cloud/operators/managed_kafka.py | {
"start": 45046,
"end": 48929
} | class ____(ManagedKafkaBaseOperator):
"""
Update the properties of a single consumer group.
:param project_id: Required. The ID of the Google Cloud project that the service belongs to.
:param location: Required. The ID of the Google Cloud region that the service belongs to.
:param cluster_id: Required. The ID of the cluster whose topic is to be updated.
:param consumer_group_id: Required. The ID of the consumer group whose configuration to update.
:param consumer_group: Required. The consumer_group to update. Its ``name`` field must be populated.
:param update_mask: Required. Field mask is used to specify the fields to be overwritten in the
ConsumerGroup resource by the update. The fields specified in the update_mask are relative to the
resource, not the full request. A field will be overwritten if it is in the mask.
:param retry: Designation of what errors, if any, should be retried.
:param timeout: The timeout for this request.
:param metadata: Strings which should be sent along with the request as metadata.
:param gcp_conn_id: The connection ID to use connecting to Google Cloud.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with first
account from the list granting this role to the originating account (templated).
"""
template_fields: Sequence[str] = tuple(
{"cluster_id", "consumer_group_id", "consumer_group", "update_mask"}
| set(ManagedKafkaBaseOperator.template_fields)
)
operator_extra_links = (ApacheKafkaConsumerGroupLink(),)
def __init__(
self,
cluster_id: str,
consumer_group_id: str,
consumer_group: types.Topic | dict,
update_mask: FieldMask | dict,
*args,
**kwargs,
) -> None:
super().__init__(*args, **kwargs)
self.cluster_id = cluster_id
self.consumer_group_id = consumer_group_id
self.consumer_group = consumer_group
self.update_mask = update_mask
@property
def extra_links_params(self) -> dict[str, Any]:
return {
"location": self.location,
"cluster_id": self.cluster_id,
"consumer_group_id": self.consumer_group_id,
"project_id": self.project_id,
}
def execute(self, context: Context):
ApacheKafkaConsumerGroupLink.persist(context=context)
self.log.info("Updating an Apache Kafka consumer group.")
try:
consumer_group_obj = self.hook.update_consumer_group(
project_id=self.project_id,
location=self.location,
cluster_id=self.cluster_id,
consumer_group_id=self.consumer_group_id,
consumer_group=self.consumer_group,
update_mask=self.update_mask,
retry=self.retry,
timeout=self.timeout,
metadata=self.metadata,
)
self.log.info("Apache Kafka consumer group %s was updated.", self.consumer_group_id)
return types.ConsumerGroup.to_dict(consumer_group_obj)
except NotFound as not_found_err:
self.log.info("The Consumer Group %s does not exist.", self.consumer_group_id)
raise AirflowException(not_found_err)
except Exception as error:
raise AirflowException(error)
| ManagedKafkaUpdateConsumerGroupOperator |
python | allegroai__clearml | clearml/backend_api/services/v2_20/tasks.py | {
"start": 258069,
"end": 259925
} | class ____(Response):
"""
Response of tasks.failed endpoint.
:param updated: Number of tasks updated (0 or 1)
:type updated: int
:param fields: Updated fields names and values
:type fields: dict
"""
_service = "tasks"
_action = "failed"
_version = "2.20"
_schema = {
"definitions": {},
"properties": {
"fields": {
"additionalProperties": True,
"description": "Updated fields names and values",
"type": ["object", "null"],
},
"updated": {
"description": "Number of tasks updated (0 or 1)",
"enum": [0, 1],
"type": ["integer", "null"],
},
},
"type": "object",
}
def __init__(self, updated: Optional[int] = None, fields: Optional[dict] = None, **kwargs: Any) -> None:
super(FailedResponse, self).__init__(**kwargs)
self.updated = updated
self.fields = fields
@schema_property("updated")
def updated(self) -> Optional[int]:
return self._property_updated
@updated.setter
def updated(self, value: Optional[int]) -> None:
if value is None:
self._property_updated = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "updated", six.integer_types)
self._property_updated = value
@schema_property("fields")
def fields(self) -> Optional[dict]:
return self._property_fields
@fields.setter
def fields(self, value: Optional[dict]) -> None:
if value is None:
self._property_fields = None
return
self.assert_isinstance(value, "fields", (dict,))
self._property_fields = value
| FailedResponse |
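The `FailedResponse` record above pairs each schema field with a validating setter: `None` passes through, integral floats are coerced to `int`, and anything else must be an integer. A minimal sketch of that property pattern without the `schema_property` machinery (the `Counter` class is my own illustration, not part of the record):

```python
class Counter:
    def __init__(self):
        self._updated = None

    @property
    def updated(self):
        return self._updated

    @updated.setter
    def updated(self, value):
        # Mirror FailedResponse: allow None, coerce 1.0 -> 1, reject non-ints.
        if value is None:
            self._updated = None
            return
        if isinstance(value, float) and value.is_integer():
            value = int(value)
        if not isinstance(value, int):
            raise TypeError("updated must be an integer")
        self._updated = value
```

Coercing integral floats is a pragmatic choice for JSON payloads, where numbers often arrive as floats even when the schema says integer.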
python | kamyu104__LeetCode-Solutions | Python/sliding-window-median.py | {
"start": 2589,
"end": 3908
} | class ____(object):
def medianSlidingWindow(self, nums, k):
"""
:type nums: List[int]
:type k: int
:rtype: List[float]
"""
def lazy_delete(heap, to_remove, sign):
while heap and sign*heap[0] in to_remove:
to_remove[sign*heap[0]] -= 1
if not to_remove[sign*heap[0]]:
del to_remove[sign*heap[0]]
heapq.heappop(heap)
min_heap, max_heap = [], []
for i in xrange(k):
if i%2 == 0:
heapq.heappush(min_heap, -heapq.heappushpop(max_heap, -nums[i]))
else:
heapq.heappush(max_heap, -heapq.heappushpop(min_heap, nums[i]))
result = [float(min_heap[0])] if k%2 else [(min_heap[0]-max_heap[0])/2.0]
to_remove = collections.defaultdict(int)
for i in xrange(k, len(nums)):
heapq.heappush(max_heap, -heapq.heappushpop(min_heap, nums[i]))
if nums[i-k] > -max_heap[0]:
heapq.heappush(min_heap, -heapq.heappop(max_heap))
to_remove[nums[i-k]] += 1
lazy_delete(max_heap, to_remove, -1)
lazy_delete(min_heap, to_remove, 1)
result.append(float(min_heap[0]) if k%2 else (min_heap[0]-max_heap[0])/2.0)
return result
| Solution3 |
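The two-heap `Solution3` above keeps the window median in O(n log k) by lazy-deleting elements that slide out. As a reference point for checking such an implementation, here is the straightforward O(n·k log k) brute force that re-sorts each window (my own sketch, not from the record):

```python
def median_sliding_window(nums, k):
    # Sort every window; pick the middle element (odd k) or the
    # mean of the two middle elements (even k), as floats.
    result = []
    for i in range(len(nums) - k + 1):
        window = sorted(nums[i:i + k])
        if k % 2:
            result.append(float(window[k // 2]))
        else:
            result.append((window[k // 2 - 1] + window[k // 2]) / 2.0)
    return result
```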
python | airbytehq__airbyte | airbyte-integrations/connectors/source-facebook-marketing/source_facebook_marketing/streams/streams.py | {
"start": 13873,
"end": 13993
} | class ____(AdsInsights):
breakdowns = ["dma"]
action_breakdowns = ["action_type"]
| AdsInsightsDemographicsDMARegion |
python | getsentry__sentry | src/sentry/search/events/fields.py | {
"start": 86645,
"end": 88159
} | class ____(NamedTuple):
field: str
instance: DiscoverFunction
arguments: Mapping[str, NormalizedArg]
def resolve_datetime64(
raw_value: datetime | str | float | None, precision: int = 6
) -> Function | None:
"""
This is normally handled by the snuba-sdk but it assumes that the underlying
table uses DateTime. Because we use DateTime64(6) as the underlying column,
we need to cast to the same type or we risk truncating the timestamp which
can lead to subtle errors.
raw_value - Can be one of several types
- None: Resolves to `None`
- float: Assumed to be a epoch timestamp in seconds with fractional parts
- str: Assumed to be isoformat timestamp in UTC time (without timezone info)
- datetime: Will be formatted as a isoformat timestamp in UTC time
"""
value: str | float | None = None
if isinstance(raw_value, datetime):
if raw_value.tzinfo is not None:
# This is adapted from snuba-sdk
# See https://github.com/getsentry/snuba-sdk/blob/2f7f014920b4f527a87f18c05b6aa818212bec6e/snuba_sdk/visitors.py#L168-L172
delta = raw_value.utcoffset()
assert delta is not None
raw_value -= delta
raw_value = raw_value.replace(tzinfo=None)
value = raw_value.isoformat()
elif isinstance(raw_value, float):
value = raw_value
if value is None:
return None
return Function("toDateTime64", [value, precision])
| FunctionDetails |
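`resolve_datetime64` above normalizes an aware datetime by subtracting its `utcoffset()` and dropping `tzinfo`, so the value it formats is naive UTC. That shift-then-strip step can be sketched in isolation (the helper name is my own):

```python
from datetime import datetime, timezone, timedelta

def to_naive_utc_isoformat(dt):
    # Shift an aware datetime to UTC, then drop tzinfo,
    # matching the normalization inside resolve_datetime64.
    if dt.tzinfo is not None:
        offset = dt.utcoffset()
        dt = (dt - offset).replace(tzinfo=None)
    return dt.isoformat()
```

For example, noon at UTC+02:00 becomes 10:00 naive UTC.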
python | modin-project__modin | modin/core/io/column_stores/parquet_dispatcher.py | {
"start": 6124,
"end": 7461
} | class ____(ColumnStoreDataset):
def _init_dataset(self): # noqa: GL08
from pyarrow.parquet import ParquetDataset
return ParquetDataset(self.fs_path, filesystem=self.fs)
@property
def pandas_metadata(self):
return self.dataset.schema.pandas_metadata
@property
def columns(self):
return self.dataset.schema.names
@property
def engine(self):
return "pyarrow"
@functools.cached_property
def row_groups_per_file(self):
from pyarrow.parquet import ParquetFile
row_groups_per_file = []
# Count up the total number of row groups across all files and
# keep track of row groups per file to use later.
for file in self.files:
with self.fs.open(file) as f:
row_groups = ParquetFile(f).num_row_groups
row_groups_per_file.append(row_groups)
return row_groups_per_file
@functools.cached_property
def files(self):
files = self.dataset.files
return self._get_files(files)
def to_pandas_dataframe(
self,
columns,
):
from pyarrow.parquet import read_table
return read_table(
self._fs_path, columns=columns, filesystem=self.fs
).to_pandas()
@_inherit_docstrings(ColumnStoreDataset)
| PyArrowDataset |
python | numba__numba | numba/core/typing/builtins.py | {
"start": 7600,
"end": 7784
} | class ____(ConcreteTemplate):
_tys = machine_ints + sorted(types.real_domain)
cases = [signature(types.UniTuple(ty, 2), ty, ty) for ty in _tys]
@infer_global(operator.pow)
| DivMod |
python | huggingface__transformers | src/transformers/models/gemma3/modeling_gemma3.py | {
"start": 5810,
"end": 6486
} | class ____(nn.Module):
def __init__(self, dim: int, eps: float = 1e-6):
super().__init__()
self.eps = eps
self.weight = nn.Parameter(torch.zeros(dim))
def _norm(self, x):
return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
def forward(self, x):
output = self._norm(x.float())
# Llama does x.to(float16) * w whilst Gemma3 is (x * w).to(float16)
# See https://github.com/huggingface/transformers/pull/29402
output = output * (1.0 + self.weight.float())
return output.type_as(x)
def extra_repr(self):
return f"{tuple(self.weight.shape)}, eps={self.eps}"
| Gemma3RMSNorm |
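The `Gemma3RMSNorm` record above divides by the root of the mean square (plus `eps`) and scales by `1 + weight`, so a zero-initialized weight leaves the normalized activations unscaled. A dependency-free sketch of the same arithmetic on a plain list (assuming, as in the module, per-feature weights and the `(x * w)` ordering noted in the comment):

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # x / sqrt(mean(x^2) + eps), scaled elementwise by (1 + w).
    ms = sum(v * v for v in x) / len(x)
    inv = 1.0 / math.sqrt(ms + eps)
    return [v * inv * (1.0 + w) for v, w in zip(x, weight)]
```

With zero weights the output has mean square approximately 1, which is the invariant RMSNorm is designed to enforce.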
python | pytorch__pytorch | test/jit/test_await.py | {
"start": 305,
"end": 12260
} | class ____(JitTestCase):
def test_await_python(self):
def foo(x: int) -> int:
return x + 13
aw: Await[int] = torch.jit._awaitable(foo, 13)
self.assertTrue(aw.fn()(*aw.args()) == torch.jit._awaitable_wait(aw))
nw = torch.jit._awaitable_nowait(33)
self.assertTrue(nw.is_nowait())
self.assertTrue(nw.args() == (33,))
def test_await_type_python(self):
def foo() -> Tensor:
return torch.randn()
awaits = torch.jit.annotate(List[Await[Tensor]], [])
awaits.append(torch.jit._awaitable(foo))
def test_script(self):
def delayed(z: int) -> int:
return z + 3
def fn(x: Tensor):
aw: Await[int] = torch.jit._awaitable(delayed, 99)
a = torch.eye(2)
b = torch.jit._awaitable_wait(aw)
return a + b + x
inp = torch.zeros(2)
sm = torch.jit.script(fn)
out = fn(inp)
script_out = sm(inp)
self.assertTrue(torch.allclose(torch.eye(2) + 102, script_out))
self.assertTrue(torch.allclose(script_out, out))
def test_nowait(self):
def fn(x: Tensor):
aw = torch.jit._awaitable_nowait(13)
a = torch.eye(2)
b = torch.jit._awaitable_wait(aw)
return a + b + x
inp = torch.zeros(2)
sm = torch.jit.script(fn)
out = fn(inp)
script_out = sm(inp)
self.assertTrue(torch.allclose(torch.eye(2) + 13, script_out))
self.assertTrue(torch.allclose(script_out, out))
def test_nowait_class(self):
class C:
def __init__(self, a: Tensor, b: Tensor):
self._a = a
self._b = b
def a(self) -> Tensor:
return self._a
def fn(x: Tensor):
aw = torch.jit._awaitable_nowait(C(torch.zeros(2), torch.ones(2)))
_a = torch.eye(2)
c = torch.jit._awaitable_wait(aw)
return _a + c.a() + x
make_global(C)
inp = torch.zeros(2)
sm = torch.jit.script(fn)
out = fn(inp)
script_out = sm(inp)
self.assertTrue(torch.allclose(torch.eye(2), script_out))
self.assertTrue(torch.allclose(script_out, out))
def test_await_class_arg(self):
class C:
def __init__(self, a: Tensor, b: Tensor):
self.__a = a
self.__b = b
def a(self) -> Tensor:
return self.__a
make_global(C)
def delayed(c: C) -> Tensor:
return c.a()
def fn(x: Tensor):
c = C(torch.zeros(2), torch.ones(2))
aw = torch.jit._awaitable(delayed, c)
_a = torch.eye(2)
c2_t = torch.jit._awaitable_wait(aw)
return _a + c2_t + x
inp = torch.zeros(2)
sm = torch.jit.script(fn)
out = fn(inp)
script_out = sm(inp)
self.assertTrue(torch.allclose(torch.eye(2), script_out))
self.assertTrue(torch.allclose(script_out, out))
def test_awaitable_to_await(self):
class C:
__slots__ = ["_a", "_b"]
def __init__(self, a: Tensor, b: Tensor):
self._a = a
self._b = b
make_global(C)
# Can not stay in the class as Jit does not support Recursive annotations
# (self in wait_impl can not be annotated as C as C is not defined by this time)
def C_wait_impl(self: C):
return self._a + self._b
def fn(x: Tensor):
aw = torch.jit._awaitable(C_wait_impl, C(torch.zeros(2), torch.ones(2)))
_a = torch.eye(2)
c_wait_impl_res = torch.jit._awaitable_wait(aw)
return _a + c_wait_impl_res + x
inp = torch.ones(2)
sm = torch.jit.script(fn)
out = fn(inp)
script_out = sm(inp)
self.assertTrue(torch.allclose(torch.eye(2) + 2 * torch.ones(2), script_out))
self.assertTrue(torch.allclose(script_out, out))
def test_await_class_return(self):
class C:
__slots__ = ["a", "b"]
def __init__(self, a: Tensor, b: Tensor):
self.a = a
self.b = b
make_global(C)
# Can not stay in the class as Jit does not support Recursive annotations
# (self in wait_impl can not be annotated as C as C is not defined by this time)
def C_wait_impl(self: C) -> C:
return C(self.a * 2, self.b * 3)
def fn_arg_C(x: C) -> Tensor:
return x.a + x.b
def fn(x: Tensor):
aw: Await[C] = torch.jit._awaitable(C_wait_impl, C(x, x))
_a = torch.eye(2)
y = fn_arg_C(torch.jit._awaitable_wait(aw))
return _a + y + x
inp = torch.ones(2)
sm = torch.jit.script(fn)
out = fn(inp)
script_out = sm(inp)
self.assertTrue(torch.allclose(torch.eye(2) + 6 * torch.ones(2), script_out))
self.assertTrue(torch.allclose(script_out, out))
self.assertGraphContainsExactly(
sm.graph, kind="prim::awaitable_wait", num_kind_nodes=1
)
def test_await_getattr_implicit_convertion(self):
class C:
def __init__(self, a: Tensor, b: Tensor):
self._a = a
self._b = b
def b(self):
return self._b
make_global(C)
# Can not stay in the class as Jit does not support Recursive annotations
# (self in wait_impl can not be annotated as C as C is not defined by this time)
def C_wait_impl(self: C) -> C:
return C(self._a * 2, self._b * 3)
def fn_arg_C(x: C) -> Tensor: # noqa: F841
return x._a + x._b
def fn(x: Tensor):
aw: Await[C] = torch.jit._awaitable(C_wait_impl, C(x, x))
_a = torch.eye(2)
ai = aw._a
awb = aw.b() # noqa: F841
c = C(2 * x, 2 * x)
return _a + ai + x + c._a + c.b()
inp = torch.ones(2)
sm = torch.jit.script(fn)
out = fn(inp)
script_out = sm(inp)
self.assertTrue(torch.allclose(torch.eye(2) + 7 * torch.ones(2), script_out))
self.assertTrue(torch.allclose(script_out, out))
self.assertGraphContainsExactly(
sm.graph, kind="prim::awaitable_wait", num_kind_nodes=2
)
def test_await_nested(self):
class C:
def __init__(self, a: Tensor, b: Tensor):
self.__a = a
self.__b = b
def a(self) -> Tensor:
return self.__a
make_global(C)
def delayed(c: C) -> Await[Tensor]:
return torch.jit._awaitable_nowait(3 * c.a())
def fn(x: Tensor) -> Await[Await[Tensor]]:
return torch.jit._awaitable(delayed, C(2 * x, x))
def main(x: Tensor) -> Tensor:
awaw = fn(x)
return torch.jit._awaitable_wait(torch.jit._awaitable_wait(awaw))
inp = torch.eye(2)
sm = torch.jit.script(main)
out = main(inp)
script_out = sm(inp)
self.assertTrue(torch.allclose(6 * torch.eye(2), script_out))
self.assertTrue(torch.allclose(script_out, out))
def test_eager_await_non_scriptable(self):
# Tree type can not be compiled (Recursive type)
class Tree:
def __init__(self, v):
self.parent = torch.jit.annotate(Optional[Tree], None)
self.v = v
make_global(Tree)
def delayed(t: Tree):
t.v = t.v + 1
return t
aw = torch.jit._awaitable(delayed, Tree(2))
t = torch.jit._awaitable_wait(aw)
self.assertTrue(t.v == 3)
def test_await_isinstance(self):
def delayed(x: Tensor) -> Tensor:
return 2 * (x + 1)
def main(x: Tensor) -> Tensor:
aw = torch.jit._awaitable(delayed, x)
if torch.jit.is_scripting():
assert isinstance(aw, torch.jit._Await)
return torch.jit._awaitable_wait(aw)
inp = torch.eye(2)
sm = torch.jit.script(main)
out = main(inp)
script_out = sm(inp)
self.assertTrue(
torch.allclose(2 * torch.eye(2) + 2 * torch.ones(2), script_out)
)
self.assertTrue(torch.allclose(script_out, out))
def test_await_eager_lazy(self):
def delayed(x: Tensor) -> Tensor:
return 2 * (x + 1)
t = torch.ones(2, dtype=torch.int64)
aw = torch.jit._awaitable(delayed, t)
self.assertTrue(isinstance(aw, torch._C._Await))
self.assertTrue(t.dtype == aw.dtype)
def test_await_out_of_interpreter(self):
def delayed(x: Tensor) -> Tensor:
return 2 * (x + 1)
def main(x: Tensor) -> Await[Tensor]:
aw = torch.jit._awaitable(delayed, x)
return aw
inp = torch.eye(2)
sm = torch.jit.script(main)
out_aw = main(inp)
out = torch.jit._awaitable_wait(out_aw)
script_out_aw = sm(inp)
script_out = torch.jit._awaitable_wait(script_out_aw)
self.assertTrue(
torch.allclose(2 * torch.eye(2) + 2 * torch.ones(2), script_out)
)
self.assertTrue(torch.allclose(script_out, out))
def test_jit_trace(self):
def gap(x: Tensor):
return torch.relu(x) + torch.sin(x)
def delayed(x: Tensor) -> Tensor:
return 2 * (torch.cos(x) + 1)
def main(x: Tensor, y: Tensor) -> Tensor:
aw = torch.jit._awaitable(delayed, x)
z = gap(y) # noqa: F841
k = torch.jit._awaitable_wait(aw)
return y + k
inp = torch.randn(2)
tm = torch.jit.trace(main, (inp, inp))
inp_check = torch.ones(2)
self.assertEqual(main(inp_check, inp_check), tm(inp_check, inp_check))
def test_await_multiout_save(self):
def gap(x: Tensor):
return torch.relu(x) + torch.sin(x)
def delayed(x: Tensor) -> Tuple[Tensor, List[Tensor]]:
l = [x * i for i in range(5)]
return (100 * x, l)
def main(x: Tensor) -> Tensor:
aw = torch.jit._awaitable(delayed, x)
z = gap(x)
(_, l) = torch.jit._awaitable_wait(aw)
return l[3] + z
inp = torch.eye(2)
sm = torch.jit.script(main)
out = main(inp)
script_out = sm(inp)
expected = 4.8415 * torch.eye(2)
self.assertTrue(torch.allclose(expected, script_out))
self.assertTrue(torch.allclose(script_out, out))
iofile = io.BytesIO()
torch.jit.save(sm, iofile)
iofile.seek(0)
sm = torch.jit.load(iofile)
script_out_load = sm(inp)
self.assertTrue(torch.allclose(expected, script_out_load))
def test_await_func_arg(self):
def gap(x: Tensor):
return torch.relu(x) + torch.sin(x)
def delayed(x: Tensor) -> Tensor:
return -1 * x
def fn(aw: Await[Tensor]) -> Tensor:
return 3 * torch.jit._awaitable_wait(aw)
def main(x: Tensor) -> Tensor:
aw = torch.jit._awaitable(delayed, x)
z = gap(x) # noqa: F841
y = fn(aw)
return y + x
inp = torch.eye(2)
sm = torch.jit.script(main)
out = main(inp)
script_out = sm(inp)
expected = -2 * torch.eye(2)
self.assertTrue(torch.allclose(expected, script_out))
self.assertTrue(torch.allclose(script_out, out))
iofile = io.BytesIO()
torch.jit.save(sm, iofile)
iofile.seek(0)
sm = torch.jit.load(iofile)
script_out_load = sm(inp)
self.assertTrue(torch.allclose(expected, script_out_load))
if __name__ == "__main__":
raise_on_run_directly("test/test_jit.py")
| TestAwait |
python | weaviate__weaviate-python-client | weaviate/collections/classes/config_vector_index.py | {
"start": 7198,
"end": 7377
} | class ____(_QuantizerConfigCreate):
cache: Optional[bool]
rescoreLimit: Optional[int]
@staticmethod
def quantizer_name() -> str:
return "bq"
| _BQConfigCreate |
python | davidhalter__jedi | jedi/inference/compiled/value.py | {
"start": 14194,
"end": 14615
} | class ____(AbstractNameDefinition):
"""
Accessing some names will raise an exception. To avoid not having any
completions, just give Jedi the option to return this object. It infers to
nothing.
"""
def __init__(self, inference_state, name):
self.parent_context = inference_state.builtins_module
self.string_name = name
def infer(self):
return NO_VALUES
| EmptyCompiledName |
python | kamyu104__LeetCode-Solutions | Python/design-twitter.py | {
"start": 281,
"end": 3777
} | class ____(object):
def __init__(self):
"""
Initialize your data structure here.
"""
self.__number_of_most_recent_tweets = 10
self.__followings = collections.defaultdict(set)
self.__messages = collections.defaultdict(list)
self.__time = 0
def postTweet(self, userId, tweetId):
"""
Compose a new tweet.
:type userId: int
:type tweetId: int
:rtype: void
"""
self.__time += 1
self.__messages[userId].append((self.__time, tweetId))
def getNewsFeed(self, userId):
"""
Retrieve the 10 most recent tweet ids in the user's news feed. Each item in the news feed must be posted by users who the user followed or by the user herself. Tweets must be ordered from most recent to least recent.
:type userId: int
:rtype: List[int]
"""
def nth_element(nums, n, compare=lambda a, b: a < b):
def tri_partition(nums, left, right, target, compare):
mid = left
while mid <= right:
if nums[mid] == target:
mid += 1
elif compare(nums[mid], target):
nums[left], nums[mid] = nums[mid], nums[left]
left += 1
mid += 1
else:
nums[mid], nums[right] = nums[right], nums[mid]
right -= 1
return left, right
left, right = 0, len(nums)-1
while left <= right:
pivot_idx = random.randint(left, right)
pivot_left, pivot_right = tri_partition(nums, left, right, nums[pivot_idx], compare)
if pivot_left <= n <= pivot_right:
return
elif pivot_left > n:
right = pivot_left-1
else: # pivot_right < n.
left = pivot_right+1
candidates = []
if self.__messages[userId]:
candidates.append((-self.__messages[userId][-1][0], userId, 0))
for uid in self.__followings[userId]:
if self.__messages[uid]:
candidates.append((-self.__messages[uid][-1][0], uid, 0))
nth_element(candidates, self.__number_of_most_recent_tweets-1)
max_heap = candidates[:self.__number_of_most_recent_tweets]
heapq.heapify(max_heap)
result = []
while max_heap and len(result) < self.__number_of_most_recent_tweets:
t, uid, curr = heapq.heappop(max_heap)
nxt = curr + 1
if nxt != len(self.__messages[uid]):
heapq.heappush(max_heap, (-self.__messages[uid][-(nxt+1)][0], uid, nxt))
result.append(self.__messages[uid][-(curr+1)][1])
return result
def follow(self, followerId, followeeId):
"""
Follower follows a followee. If the operation is invalid, it should be a no-op.
:type followerId: int
:type followeeId: int
:rtype: void
"""
if followerId != followeeId:
self.__followings[followerId].add(followeeId)
def unfollow(self, followerId, followeeId):
"""
Follower unfollows a followee. If the operation is invalid, it should be a no-op.
:type followerId: int
:type followeeId: int
:rtype: void
"""
self.__followings[followerId].discard(followeeId)
| Twitter |
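`getNewsFeed` in the `Twitter` record above uses an in-place quickselect (`nth_element` with three-way partitioning) to isolate the k most recent candidates before heapifying them. That selection routine, extracted as a standalone sketch (simplified to the default `<` comparison):

```python
import random

def nth_element(nums, n):
    # In-place quickselect. Three-way (Dutch national flag) partition:
    # after partitioning, [left, lo) < pivot, [lo, hi] == pivot, (hi, right] > pivot.
    left, right = 0, len(nums) - 1
    while left <= right:
        pivot = nums[random.randint(left, right)]
        lo, mid, hi = left, left, right
        while mid <= hi:
            if nums[mid] == pivot:
                mid += 1
            elif nums[mid] < pivot:
                nums[lo], nums[mid] = nums[mid], nums[lo]
                lo += 1
                mid += 1
            else:
                nums[mid], nums[hi] = nums[hi], nums[mid]
                hi -= 1
        if lo <= n <= hi:
            return nums[n]
        elif lo > n:
            right = lo - 1
        else:
            left = hi + 1
```

Average cost is O(n) versus O(n log n) for a full sort, which is why the feed code uses it before building the bounded heap.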
python | keon__algorithms | tests/test_maths.py | {
"start": 3242,
"end": 4866
} | class ____(unittest.TestCase):
    """Tests for the file gcd.py.
    Arguments:
        unittest {unittest.TestCase} -- base test case class
    """
def test_gcd(self):
self.assertEqual(4, gcd(8, 12))
self.assertEqual(1, gcd(13, 17))
def test_gcd_non_integer_input(self):
with pytest.raises(ValueError,
match=r"Input arguments are not integers"):
gcd(1.0, 5)
gcd(5, 6.7)
gcd(33.8649, 6.12312312)
def test_gcd_zero_input(self):
with pytest.raises(ValueError,
match=r"One or more input arguments equals zero"):
gcd(0, 12)
gcd(12, 0)
gcd(0, 0)
def test_gcd_negative_input(self):
self.assertEqual(1, gcd(-13, -17))
self.assertEqual(4, gcd(-8, 12))
self.assertEqual(8, gcd(24, -16))
def test_lcm(self):
self.assertEqual(24, lcm(8, 12))
self.assertEqual(5767, lcm(73, 79))
def test_lcm_negative_numbers(self):
self.assertEqual(24, lcm(-8, -12))
self.assertEqual(5767, lcm(73, -79))
self.assertEqual(1, lcm(-1, 1))
def test_lcm_zero_input(self):
with pytest.raises(ValueError,
match=r"One or more input arguments equals zero"):
lcm(0, 12)
lcm(12, 0)
lcm(0, 0)
def test_trailing_zero(self):
self.assertEqual(1, trailing_zero(34))
self.assertEqual(3, trailing_zero(40))
def test_gcd_bit(self):
self.assertEqual(4, gcd_bit(8, 12))
self.assertEqual(1, gcd(13, 17))
| TestGcd |
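The `TestGcd` record above exercises a `gcd`/`lcm` pair that normalizes signs (so `gcd(-8, 12) == 4`) and rejects zero inputs. A compact sketch of an `lcm` satisfying those same expectations, built on the standard library's `math.gcd` (my own sketch, not the tested implementation):

```python
from math import gcd

def lcm(a, b):
    # Reject zeros as the tested implementation does; abs() makes
    # the result sign-independent, e.g. lcm(73, -79) == 5767.
    if a == 0 or b == 0:
        raise ValueError("One or more input arguments equals zero")
    return abs(a * b) // gcd(a, b)
```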
python | joke2k__faker | tests/providers/test_company.py | {
"start": 8989,
"end": 9549
} | class ____:
"""Test hu_HU company provider methods"""
def test_company_suffix(self, faker, num_samples):
for _ in range(num_samples):
suffix = faker.company_suffix()
assert isinstance(suffix, str)
assert suffix in HuHuCompanyProvider.company_suffixes
def test_company(self, faker, num_samples):
for _ in range(num_samples):
company = faker.company()
assert isinstance(company, str)
assert company.split(" ")[-1] in HuHuCompanyProvider.company_suffixes
| TestHuHu |
python | tensorflow__tensorflow | tensorflow/python/keras/layers/legacy_rnn/rnn_cell_impl.py | {
"start": 25488,
"end": 26034
} | class ____(_LSTMStateTuple):
"""Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
Stores two elements: `(c, h)`, in that order. Where `c` is the hidden state
and `h` is the output.
Only used when `state_is_tuple=True`.
"""
__slots__ = ()
@property
def dtype(self):
(c, h) = self
if c.dtype != h.dtype:
raise TypeError("Inconsistent internal state: %s vs %s" %
(str(c.dtype), str(h.dtype)))
return c.dtype
@tf_export(v1=["nn.rnn_cell.BasicLSTMCell"])
| LSTMStateTuple |
python | getsentry__sentry | src/sentry/models/artifactbundle.py | {
"start": 856,
"end": 1396
} | class ____(Enum):
SOURCE = 1
MINIFIED_SOURCE = 2
SOURCE_MAP = 3
INDEXED_RAM_BUNDLE = 4
@classmethod
def choices(cls) -> list[tuple[int, str]]:
return [(key.value, key.name) for key in cls]
@classmethod
def from_lowercase_key(cls, lowercase_key: str | None) -> SourceFileType | None:
if lowercase_key is None:
return None
for key in cls:
if key.name.lower() == lowercase_key:
return SourceFileType(key.value)
return None
| SourceFileType |
python | ansible__ansible | lib/ansible/utils/collection_loader/_collection_finder.py | {
"start": 36214,
"end": 38057
} | class ____:
def __init__(self, fullname, path_list):
self._redirect = None
split_name = fullname.split('.')
toplevel_pkg = split_name[0]
module_to_load = split_name[-1]
if toplevel_pkg != 'ansible':
raise ImportError('not interested')
builtin_meta = _get_collection_metadata('ansible.builtin')
routing_entry = _nested_dict_get(builtin_meta, ['import_redirection', fullname])
if routing_entry:
self._redirect = routing_entry.get('redirect')
if not self._redirect:
raise ImportError('not redirected, go ask path_hook')
def get_resource_reader(self, fullname):
return _AnsibleTraversableResources(fullname, self)
def exec_module(self, module):
# should never see this
if not self._redirect:
raise ValueError('no redirect found for {0}'.format(module.__spec__.name))
# Replace the module with the redirect
sys.modules[module.__spec__.name] = import_module(self._redirect)
def create_module(self, spec):
return None
def load_module(self, fullname):
# since we're delegating to other loaders, this should only be called for internal redirects where we answered
# find_module with this loader, in which case we'll just directly import the redirection target, insert it into
# sys.modules under the name it was requested by, and return the original module.
# should never see this
if not self._redirect:
raise ValueError('no redirect found for {0}'.format(fullname))
# FIXME: smuggle redirection context, provide warning/error that we tried and failed to redirect
mod = import_module(self._redirect)
sys.modules[fullname] = mod
return mod
| _AnsibleInternalRedirectLoader |
python | huggingface__transformers | tests/models/prophetnet/test_modeling_prophetnet.py | {
"start": 40770,
"end": 41873
} | class ____(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase):
all_model_classes = (ProphetNetDecoder, ProphetNetForCausalLM) if is_torch_available() else ()
test_resize_embeddings = False
is_encoder_decoder = False
def setUp(self):
self.model_tester = ProphetNetStandaloneDecoderModelTester(self, is_training=False)
self.config_tester = ConfigTester(self, config_class=ProphetNetConfig)
def test_config(self):
self.config_tester.run_common_tests()
def test_decoder_model_past(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_decoder_model_past(*config_and_inputs)
def test_decoder_model_attn_mask_past(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_decoder_model_attention_mask_past(*config_and_inputs)
@unittest.skip(reason="Decoder cannot keep gradients")
def test_retain_grad_hidden_states_attentions(self):
return
@require_torch
| ProphetNetStandaloneDecoderModelTest |
python | pytorch__pytorch | test/functorch/test_aotdispatch.py | {
"start": 221778,
"end": 246566
} | class ____(AOTTestCase):
@unittest.skipIf(not USE_NETWORKX, "networkx not available")
def test_recompute_partitioning(self):
def fn(a, b):
return torch.sin(torch.sin(a)) + b
# Reference calculation
ref_a = torch.rand(10, 10, requires_grad=True)
ref_b = torch.rand(10, 10, requires_grad=True)
ref = fn(ref_a, ref_b)
ref.sum().backward()
# Compiled function calculation
res_a = ref_a.detach().clone().requires_grad_(True)
res_b = ref_b.detach().clone().requires_grad_(True)
def compile_fn(x, _):
return x
compiled_fn = compiled_function(
fn, compile_fn, compile_fn, min_cut_rematerialization_partition
)
res = compiled_fn(res_a, res_b)
res.sum().backward()
assert torch.allclose(ref, res, atol=1e-3, rtol=1e-3)
assert torch.allclose(ref_a.grad, res_a.grad, atol=1e-3, rtol=1e-3)
assert torch.allclose(ref_b.grad, res_b.grad, atol=1e-3, rtol=1e-3)
def test_meta_tensor_inplace_op(self):
# Following module results in inplace ops while tracing. The test checks
# that the meta tensor information is stored for inplace ops.
class MockModule(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.weight = torch.nn.Parameter(
torch.randn(3072, 768, requires_grad=True)
)
self.bias = torch.nn.Parameter(torch.randn(3072, requires_grad=True))
def forward(self, add_4):
linear_4 = torch.nn.functional.linear(
add_4, self.weight, bias=self.bias
)
gelu = torch.nn.functional.gelu(linear_4)
return gelu
def check_meta_tensor(fx_g, _):
for node in fx_g.graph.nodes:
if node.op != "output":
assert "tensor_meta" in node.meta
return fx_g
inp0 = torch.randn(16, 128, 768, requires_grad=True)
inputs = [
inp0,
]
mod = MockModule().to(device="cpu")
aot_mod = aot_module(mod, fw_compiler=check_meta_tensor)
aot_mod(*inputs)
def test_default_partitioner_getitem(self):
mod = nn.LayerNorm([10])
def f(x, mod_weight, mod_bias):
return torch.nn.functional.layer_norm(
x, [10], mod_weight, mod_bias, eps=1e-6
)
fw_graph, bw_graph = get_fw_bw_graph(
f,
[torch.randn(3, 10, requires_grad=True), mod.weight, mod.bias],
partitioner=default_partition,
)
self.assertEqual(get_num_ins_outs(fw_graph), (3, 6))
self.assertEqual(get_num_ins_outs(bw_graph), (6, 3))
@unittest.skipIf(not USE_NETWORKX, "networkx not available")
def test_min_cut_partitioner_raise_getitems(self):
def f(x):
y = torch.split(x, x.size(0) // 2, dim=0)
a = y[0].sin()
b = y[1].cos()
return a + b
_, bw_graph = get_fw_bw_graph(f, [torch.randn(4, 4, requires_grad=True)])
self.assertExpectedInline(
bw_graph.code.strip(),
"""\
def forward(self, primals_1, tangents_1):
split = torch.ops.aten.split.Tensor(primals_1, 2); primals_1 = None
getitem_1 = split[1]
getitem = split[0]; split = None
sin_1 = torch.ops.aten.sin.default(getitem_1); getitem_1 = None
neg = torch.ops.aten.neg.default(sin_1); sin_1 = None
mul = torch.ops.aten.mul.Tensor(tangents_1, neg); neg = None
cos_1 = torch.ops.aten.cos.default(getitem); getitem = None
mul_1 = torch.ops.aten.mul.Tensor(tangents_1, cos_1); tangents_1 = cos_1 = None
cat = torch.ops.aten.cat.default([mul_1, mul]); mul_1 = mul = None
return (cat,)""",
)
@unittest.skipIf(not USE_NETWORKX, "networkx not available")
def test_custom_partitioner_fn(self):
class MyCustomPartitionerFn(CustomPartitionerFn):
def __init__(self):
super().__init__()
self.called = False
def __call__(self, gm, joint_inputs, **kwargs):
self.called = True
return min_cut_rematerialization_partition(gm, joint_inputs, **kwargs)
def uuid(self):
return None
def f(x):
return x.cos().cos()
inp = [torch.randn((4, 4), requires_grad=True)]
custom_partitioner_fn = MyCustomPartitionerFn()
fw_graph, bw_graph = get_fw_bw_graph(f, inp, partitioner=custom_partitioner_fn)
self.assertTrue(custom_partitioner_fn.called)
self.assertExpectedInline(
fw_graph.code.strip(),
"""\
def forward(self, primals_1):
cos = torch.ops.aten.cos.default(primals_1)
cos_1 = torch.ops.aten.cos.default(cos); cos = None
return (cos_1, primals_1)""",
)
self.assertExpectedInline(
bw_graph.code.strip(),
"""\
def forward(self, primals_1, tangents_1):
cos = torch.ops.aten.cos.default(primals_1)
sin = torch.ops.aten.sin.default(cos); cos = None
neg = torch.ops.aten.neg.default(sin); sin = None
mul = torch.ops.aten.mul.Tensor(tangents_1, neg); tangents_1 = neg = None
sin_1 = torch.ops.aten.sin.default(primals_1); primals_1 = None
neg_1 = torch.ops.aten.neg.default(sin_1); sin_1 = None
mul_1 = torch.ops.aten.mul.Tensor(mul, neg_1); mul = neg_1 = None
return (mul_1,)""",
)
@unittest.skipIf(not USE_NETWORKX, "networkx not available")
def test_min_cut_partitioner_save_shape(self):
def f(x):
s = x.sum(dim=1)
return s
inp = [torch.ones([10, 10], requires_grad=True)]
fw_graph, bw_graph = get_fw_bw_graph(f, inp, dynamic=True)
_, fw_output = get_ins_outs(fw_graph)
self.assertEqual(get_num_ins_outs(fw_graph), (1, 3))
self.assertEqual(get_num_ins_outs(bw_graph), (3, 1))
self.assertEqual(str(fw_output[0]), "sum_1")
# make sure we don't do the suboptimal thing of saving the bigger primals input to sum,
# rather than saving the sizes of the primals input for use in backward expand
self.assertEqual(str(fw_output[1]), "sym_size_int")
self.assertEqual(str(fw_output[2]), "sym_size_int_1")
inp = [
torch.randn(10, requires_grad=True),
torch.randn((3, 10), requires_grad=True),
torch.randn((2, 10), requires_grad=True),
]
def f(a, b, c):
# tried to test what happens if we save a size tuple in the graph;
# turns out we never will due to how we trace, but this is probably
# still a good test case for various size manipulations
sb = torch.ops.aten.sym_size(b)
sc = c.size()
x = sb[0] + sc[0]
a_sz = (x, a.size(0))
return torch.cat([a.expand(a_sz), b, c])
fw_graph, bw_graph = get_fw_bw_graph(f, inp, dynamic=True)
self.assertEqual(get_num_ins_outs(fw_graph), (3, 4))
self.assertEqual(get_num_ins_outs(bw_graph), (4, 3))
_, outs = get_ins_outs(fw_graph)
self.assertTrue(all(is_sym_node(n) for n in outs[1:]))
def test_default_partitioner_output_tensor_shape_tensor(self):
inp = [
torch.randn(10, requires_grad=True),
torch.randn((3, 10), requires_grad=True),
torch.randn((2, 10), requires_grad=True),
torch.randn((10, 1), requires_grad=True),
]
def f(a, b, c, d):
# Try to force symints intermixed with outputs in the function's returns
sb = b.size()
sc = c.size()
x = sb[0] + sc[0]
a_sz = (x, a.size(0))
cat = torch.cat([a.expand(a_sz), b, c])
mm = torch.mm(cat, d)
mm2 = torch.mm(
mm, a.view(mm.size(1), a.size(0))
) # this saves 4 new ints for backward. why?
# and what do i have to do to make it save a tensor for backward?
return cat, sb, c, mm2
fw_graph_cell = [None]
bw_graph_cell = [None]
compiled_outs = aot_function(
f,
fw_compiler=partial(extract_graph, graph_cell=fw_graph_cell),
bw_compiler=partial(extract_graph, graph_cell=bw_graph_cell),
partition_fn=default_partition,
decompositions=default_decompositions,
dynamic=True,
)(*inp)
fw_graph = fw_graph_cell[0]
(compiled_outs[0].sum() + compiled_outs[2].sum()).backward()
bw_graph = bw_graph_cell[0]
# in the fwd graph, 13 outs because:
# - 5 original outputs (sb is a tuple, gets expanded to 2 symints)
# - 8 saved outputs for backward: 5 tensors, 3 symints
self.assertEqual(get_num_ins_outs(fw_graph), (4, 13))
# in the bwd graph, 10 inputs (grad outs) because:
# - The fwd graph had 13 outputs
# - 1 was a view of an input, which gets regenerated outside of the graph
# and doesn't participate in the backward
# - 2 user outs were symints (b.size()), which don't get tangents in the backward
self.assertEqual(get_num_ins_outs(bw_graph), (10, 4))
_, fw_graph_out_nodes = get_ins_outs(fw_graph)
self.assertEqual(
# fw outputs include b.size() which expands to 2 symints,
#
# TODO(whc)- are the saved-tensors/saved-symints correct here?
# i just made the test pass based on what default partition did
# Of the 5 original forward outputs, the 4th (c) is an input,
# which won't show up in the compiled forward graph
[False, True, True, False, False] + [False] * 4 + [True] * 4,
[is_sym_node(n) for n in fw_graph_out_nodes],
)
real_outs = f(*inp)
self.assertEqual(compiled_outs, real_outs)
self.assertTrue(isinstance(real_outs[1], torch.Size))
# TODO(whc) we should learn to return torch.Sizes
self.assertFalse(isinstance(compiled_outs[1], torch.Size))
@unittest.skipIf(not USE_NETWORKX, "networkx not available")
def test_min_cut_partitioner_output_tensor_shape_tensor(self):
inp = [
torch.randn(10, requires_grad=True),
torch.randn((3, 10), requires_grad=True),
torch.randn((2, 10), requires_grad=True),
torch.randn((10, 1), requires_grad=True),
]
def f(a, b, c, d):
# Try to force symints intermixed with outputs in the function's returns
sb = b.size()
sc = c.size()
x = sb[0] + sc[0]
a_sz = (x, a.size(0))
cat = torch.cat([a.expand(a_sz), b, c])
mm = torch.mm(cat, d)
mm2 = torch.mm(
mm, a.view(mm.size(1), a.size(0))
) # this saves 4 new ints for backward. why?
# and what do i have to do to make it save a tensor for backward?
return cat, sb, c, mm2
fw_graph_cell = [None]
bw_graph_cell = [None]
compiled_outs = aot_function(
f,
fw_compiler=partial(extract_graph, graph_cell=fw_graph_cell),
bw_compiler=partial(extract_graph, graph_cell=bw_graph_cell),
partition_fn=min_cut_rematerialization_partition,
decompositions=default_decompositions,
dynamic=True,
)(*inp)
fw_graph = fw_graph_cell[0]
(compiled_outs[0].sum() + compiled_outs[2].sum()).backward()
bw_graph = bw_graph_cell[0]
self.assertEqual(get_num_ins_outs(fw_graph), (4, 12))
self.assertEqual(get_num_ins_outs(bw_graph), (9, 4))
_, fw_graph_out_nodes = get_ins_outs(fw_graph)
self.assertEqual(
# fw outputs include b.size() which expands to 2 symints,
# then 4 tensors (transposes of matrices used for mm) are saved
# finally 3 symints are saved
[False, True, True, False, False] + [False] * 4 + [True] * 3,
[is_sym_node(n) for n in fw_graph_out_nodes],
)
real_outs = f(*inp)
self.assertEqual(compiled_outs, real_outs)
self.assertTrue(isinstance(real_outs[1], torch.Size))
# TODO(whc) we should learn to return torch.Sizes
self.assertFalse(isinstance(compiled_outs[1], torch.Size))
@unittest.skipIf(not USE_NETWORKX, "networkx not available")
def test_min_cut_partitioner(self):
def f(x):
return x.cos().cos().cos()
fw_graph, bw_graph = get_fw_bw_graph(f, [torch.randn(3, requires_grad=True)])
self.assertEqual(get_num_ins_outs(fw_graph), (1, 2))
self.assertEqual(get_num_ins_outs(bw_graph), (2, 1))
def f(a, b, c, d):
x = a + b + c + d
return x.cos().cos()
fw_graph, bw_graph = get_fw_bw_graph(
f, [torch.randn(3, requires_grad=True) for _ in range(4)]
)
self.assertEqual(get_num_ins_outs(fw_graph), (4, 2))
self.assertEqual(get_num_ins_outs(bw_graph), (2, 4))
def test_contiguous(self):
# The test simulates the condition where transpose followed by view
# happens in the backward pass.
# https://discuss.pytorch.org/t/error-on-transpose-and-view/434
def f(x):
return x.view(2, 3).t()
inp = torch.randn(6, requires_grad=True)
out = aot_function(f, nop)(inp)
torch.autograd.grad(out, inp, torch.randn(3, 2))
def test_preserve_random(self):
def fn(x):
return torch.nn.functional.dropout(x, 0.5) + x
x = torch.randn(4)
torch.manual_seed(0)
ref = fn(x)
torch.manual_seed(0)
aot_fn = aot_function(fn, nop)
res = aot_fn(x)
assert torch.allclose(ref, res)
# https://github.com/pytorch/pytorch/issues/110666
def test_generate_gives_inference_graph(self):
# We expect this to give an inference graph
def generate(x):
with torch.no_grad():
return torch.mul(x, x)
inference_graph_cell = [None]
inference_compiler = make_boxed_compiler(
partial(extract_graph, graph_cell=inference_graph_cell)
)
aot_fn = aot_function(generate, nop, inference_compiler=inference_compiler)
# Even though x requires grad, we should still get an inference graph
x = torch.randn(4, requires_grad=True)
aot_fn(x)
self.assertTrue(inference_graph_cell[0] is not None)
@unittest.skipIf(not torch.cuda.is_available(), "CUDA is unavailable")
@unittest.skipIf(not USE_TORCHVISION, "test requires torchvision")
def test_autocast(self):
mod = torchvision.models.resnet18().cuda()
mod.train()
x = torch.randn(16, 3, 32, 32, device="cuda")
aot_mod = memory_efficient_fusion(mod)
# Ensure that AOT Autograd works with AMP
with torch.cuda.amp.autocast(True):
res = aot_mod(x)
res.sum().backward()
def test_quantize_activation_duplicate_nodes(self):
"""Test both quantize_activation_fw and quantize_activation_bw handle duplicate nodes correctly"""
import torch.fx as fx
from torch._functorch.partitioners import (
quantize_activation_bw,
quantize_activation_fw,
)
from torch._subclasses.fake_tensor import extract_tensor_metadata
# Mock the inductor config
with patch.dict(
"torch._inductor.config.post_grad_fusion_options",
{
"activation_quantization_aten_pass": {
"allowed_dtypes": "torch.bfloat16",
"size_in_mb": 1,
"use_scaling": True,
"exclude_primals": False,
"skip_dynamo_guards": True,
"quantize_dynamic_shape": False,
"quant_type": "torch.float16", # float8_e5m2 must be GPU
}
},
):
# Test Forward Graph with duplicate nodes
fwd_graph = fx.Graph()
# Create input nodes
x = fwd_graph.placeholder("x")
x.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
x.meta["tensor_meta"] = extract_tensor_metadata(x.meta["val"])
y = fwd_graph.placeholder("y")
y.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
y.meta["tensor_meta"] = extract_tensor_metadata(y.meta["val"])
# Create a computation node that will be duplicated in outputs
mul_node = fwd_graph.call_function(torch.ops.aten.mul.Tensor, (x, y))
mul_node.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
mul_node.meta["tensor_meta"] = extract_tensor_metadata(mul_node.meta["val"])
mul_node.meta["saved_for_quantization"] = True
# Create another node
add_node = fwd_graph.call_function(torch.ops.aten.add.Tensor, (x, y))
add_node.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
add_node.meta["tensor_meta"] = extract_tensor_metadata(add_node.meta["val"])
# Create output with DUPLICATE nodes - mul_node appears at positions 0 and 2
fwd_graph.output((mul_node, add_node, mul_node))
# Test the forward quantization function
quantize_activation_fw(fwd_graph)
# Get the forward output node
fwd_output_node = fwd_graph.find_nodes(op="output")[0]
fwd_output_args = fwd_output_node.args[0]
# Verify forward graph has the correct structure
self.assertGreaterEqual(
len(fwd_output_args), 3, "Should have at least the original 3 outputs"
)
# Check that positions 0 and 2 reuse the same quantized node
pos_0_node = fwd_output_args[0]
pos_2_node = fwd_output_args[2]
# Both should be quantized nodes
self.assertTrue(
pos_0_node.name.startswith("fp8_quant_"),
f"Position 0 should be quantized node, got: {pos_0_node.name}",
)
self.assertTrue(
pos_2_node.name.startswith("fp8_quant_"),
f"Position 2 should be quantized node, got: {pos_2_node.name}",
)
# The shared quantized node should have the first occurrence position in its name
self.assertIn(
"_pos_0",
pos_0_node.name,
f"Shared quantized node should have '_pos_0' in name: {pos_0_node.name}",
)
self.assertIn(
"_pos_2",
pos_2_node.name,
f"Shared quantized node should have '_pos_2' in name: {pos_2_node.name}",
)
# Find scale nodes in the forward output
fwd_scale_nodes = [
node for node in fwd_output_args if "fp8_scale_" in node.name
]
self.assertEqual(
len(fwd_scale_nodes),
2,
"Should have exactly 2 scale node (shared for both quantized instances)",
)
# Test Backward Graph with duplicate nodes
bwd_graph = fx.Graph()
# Create backward placeholders corresponding to forward outputs
quant_input1 = bwd_graph.placeholder("fp8_quant_pos_0_mul_tensor")
quant_input1.meta["val"] = torch.randn(100, 100, dtype=torch.float16)
quant_input1.meta["tensor_meta"] = extract_tensor_metadata(
quant_input1.meta["val"]
)
quant_input1.meta["saved_for_quantization"] = True
quant_input1.meta["dequant_type"] = torch.bfloat16
add_input = bwd_graph.placeholder("add")
add_input.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
add_input.meta["tensor_meta"] = extract_tensor_metadata(
add_input.meta["val"]
)
quant_input2 = bwd_graph.placeholder("fp8_quant_pos_2_mul_tensor")
quant_input2.meta["val"] = torch.randn(100, 100, dtype=torch.float16)
quant_input2.meta["tensor_meta"] = extract_tensor_metadata(
quant_input2.meta["val"]
)
quant_input2.meta["saved_for_quantization"] = True
quant_input2.meta["dequant_type"] = torch.bfloat16
# Add scale node (would come from forward)
scale_input = bwd_graph.placeholder("fp8_scale_pos_0_mul_tensor")
scale_input.meta["val"] = torch.randn(100, 100, dtype=torch.float32)
scale_input.meta["tensor_meta"] = extract_tensor_metadata(
scale_input.meta["val"]
)
scale_input2 = bwd_graph.placeholder("fp8_scale_pos_2_mul_tensor")
scale_input2.meta["val"] = torch.randn(100, 100, dtype=torch.float32)
scale_input2.meta["tensor_meta"] = extract_tensor_metadata(
scale_input.meta["val"]
)
# Create some backward computation using both quantized inputs
grad_output1 = bwd_graph.placeholder("tangents_1")
grad_output1.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
grad_output1.meta["tensor_meta"] = extract_tensor_metadata(
grad_output1.meta["val"]
)
grad_output2 = bwd_graph.placeholder("tangents_2")
grad_output2.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
grad_output2.meta["tensor_meta"] = extract_tensor_metadata(
grad_output2.meta["val"]
)
# Create backward operations using the quantized inputs
mul_bwd1 = bwd_graph.call_function(
torch.ops.aten.mul.Tensor, (quant_input1, grad_output1)
)
mul_bwd1.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
mul_bwd1.meta["tensor_meta"] = extract_tensor_metadata(mul_bwd1.meta["val"])
mul_bwd2 = bwd_graph.call_function(
torch.ops.aten.mul.Tensor, (quant_input2, grad_output2)
)
mul_bwd2.meta["val"] = torch.randn(100, 100, dtype=torch.bfloat16)
mul_bwd2.meta["tensor_meta"] = extract_tensor_metadata(mul_bwd2.meta["val"])
# Create output
bwd_graph.output((mul_bwd1, mul_bwd2))
# Test the backward quantization function
quantize_activation_bw(bwd_graph)
# Verify backward graph processing
bwd_placeholders = list(bwd_graph.find_nodes(op="placeholder"))
quantized_placeholders = [
p for p in bwd_placeholders if "fp8_quant_" in p.name
]
scale_placeholders = [p for p in bwd_placeholders if "fp8_scale_" in p.name]
# Should have processed the quantized placeholders
self.assertGreater(
len(quantized_placeholders), 0, "Should have quantized placeholders"
)
self.assertGreater(
len(scale_placeholders), 0, "Should have scale placeholders"
)
# Check that dequantization operations were added
dequant_operations = [
node
for node in bwd_graph.nodes
if node.op == "call_function"
and "convert_element_type" in str(node.target)
]
# Should have dequantization operations for each quantized input that was processed
self.assertGreater(
len(dequant_operations),
0,
"Should have dequantization operations in backward graph",
)
# Verify the backward graph users were properly updated
for quant_placeholder in quantized_placeholders:
# The quantized placeholder should not be directly used in final operations
# (it should be replaced by dequantized versions)
direct_users = [
user
for user in quant_placeholder.users
if user.op == "call_function" and "mul" in str(user.target)
]
# Direct usage should be minimal (only for dequantization chain)
self.assertLessEqual(
len(direct_users),
1,
f"Quantized placeholder {quant_placeholder.name} should have minimal direct users",
)
| TestPartitioning |
python | bottlepy__bottle | bottle.py | {
"start": 122828,
"end": 127198
} | class ____:
def __init__(
self,
stream,
boundary,
content_length=-1,
disk_limit=2 ** 30,
mem_limit=2 ** 20,
memfile_limit=2 ** 18,
buffer_size=2 ** 16,
charset="latin1",
):
self.stream = stream
self.boundary = boundary
self.content_length = content_length
self.disk_limit = disk_limit
self.memfile_limit = memfile_limit
self.mem_limit = min(mem_limit, self.disk_limit)
self.buffer_size = min(buffer_size, self.mem_limit)
self.charset = charset
if not boundary:
raise MultipartError("No boundary.")
if self.buffer_size - 6 < len(boundary): # "--boundary--\r\n"
raise MultipartError("Boundary does not fit into buffer_size.")
def _lineiter(self):
""" Iterate over a binary file-like object (crlf terminated) line by
line. Each line is returned as a (line, crlf) tuple. Lines larger
than buffer_size are split into chunks where all but the last chunk
has an empty string instead of crlf. Maximum chunk size is twice the
buffer size.
"""
read = self.stream.read
maxread, maxbuf = self.content_length, self.buffer_size
partial = b"" # Contains the last (partial) line
while True:
chunk = read(maxbuf if maxread < 0 else min(maxbuf, maxread))
maxread -= len(chunk)
if not chunk:
if partial:
yield partial, b''
break
if partial:
chunk = partial + chunk
scanpos = 0
while True:
i = chunk.find(b'\r\n', scanpos)
if i >= 0:
yield chunk[scanpos:i], b'\r\n'
scanpos = i + 2
else: # CRLF not found
partial = chunk[scanpos:] if scanpos else chunk
break
if len(partial) > maxbuf:
yield partial[:-1], b""
partial = partial[-1:]
def parse(self):
""" Return a MultiPart iterator. Can only be called once. """
lines, line = self._lineiter(), ""
separator = b"--" + tob(self.boundary)
terminator = separator + b"--"
mem_used, disk_used = 0, 0 # Track used resources to prevent DoS
        is_tail = False  # True if the last line was incomplete (cut short)
# Consume first boundary. Ignore any preamble, as required by RFC
# 2046, section 5.1.1.
for line, nl in lines:
if line in (separator, terminator):
break
else:
raise MultipartError("Stream does not contain boundary")
        # First line is terminating boundary -> empty multipart stream
if line == terminator:
for _ in lines:
raise MultipartError("Found data after empty multipart stream")
return
part_options = {
"buffer_size": self.buffer_size,
"memfile_limit": self.memfile_limit,
"charset": self.charset,
}
part = _MultipartPart(**part_options)
for line, nl in lines:
if not is_tail and (line == separator or line == terminator):
part.finish()
if part.is_buffered():
mem_used += part.size
else:
disk_used += part.size
yield part
if line == terminator:
break
part = _MultipartPart(**part_options)
else:
is_tail = not nl # The next line continues this one
try:
part.feed(line, nl)
if part.is_buffered():
if part.size + mem_used > self.mem_limit:
raise MultipartError("Memory limit reached.")
elif part.size + disk_used > self.disk_limit:
raise MultipartError("Disk limit reached.")
except MultipartError:
part.close()
raise
else:
part.close()
if line != terminator:
raise MultipartError("Unexpected end of multipart stream.")
| _MultipartParser |
python | PrefectHQ__prefect | src/prefect/server/schemas/responses.py | {
"start": 3779,
"end": 4848
} | class ____(PrefectBaseModel):
"""Represents a history of aggregation states over an interval"""
interval_start: DateTime = Field(
default=..., description="The start date of the interval."
)
interval_end: DateTime = Field(
default=..., description="The end date of the interval."
)
states: List[HistoryResponseState] = Field(
default=..., description="A list of state histories during the interval."
)
@model_validator(mode="before")
@classmethod
def validate_timestamps(
cls, values: dict
) -> dict: # TODO: remove this, handle with ORM
d = {"interval_start": None, "interval_end": None}
for field in d.keys():
val = values.get(field)
if isinstance(val, datetime.datetime):
d[field] = create_datetime_instance(values[field])
else:
d[field] = val
return {**values, **d}
StateResponseDetails = Union[
StateAcceptDetails, StateWaitDetails, StateRejectDetails, StateAbortDetails
]
| HistoryResponse |
python | crytic__slither | slither/core/expressions/unary_operation.py | {
"start": 468,
"end": 3209
} | class ____(Enum):
BANG = 0 # !
TILD = 1 # ~
DELETE = 2 # delete
PLUSPLUS_PRE = 3 # ++
MINUSMINUS_PRE = 4 # --
PLUSPLUS_POST = 5 # ++
MINUSMINUS_POST = 6 # --
PLUS_PRE = 7 # for stuff like uint(+1)
MINUS_PRE = 8 # for stuff like uint(-1)
@staticmethod
def get_type(operation_type: str, isprefix: bool) -> "UnaryOperationType":
if isprefix:
if operation_type == "!":
return UnaryOperationType.BANG
if operation_type == "~":
return UnaryOperationType.TILD
if operation_type == "delete":
return UnaryOperationType.DELETE
if operation_type == "++":
return UnaryOperationType.PLUSPLUS_PRE
if operation_type == "--":
return UnaryOperationType.MINUSMINUS_PRE
if operation_type == "+":
return UnaryOperationType.PLUS_PRE
if operation_type == "-":
return UnaryOperationType.MINUS_PRE
else:
if operation_type == "++":
return UnaryOperationType.PLUSPLUS_POST
if operation_type == "--":
return UnaryOperationType.MINUSMINUS_POST
raise SlitherCoreError(f"get_type: Unknown operation type {operation_type}")
def __str__(self) -> str:
if self == UnaryOperationType.BANG:
return "!"
if self == UnaryOperationType.TILD:
return "~"
if self == UnaryOperationType.DELETE:
return "delete"
if self == UnaryOperationType.PLUS_PRE:
return "+"
if self == UnaryOperationType.MINUS_PRE:
return "-"
if self in [UnaryOperationType.PLUSPLUS_PRE, UnaryOperationType.PLUSPLUS_POST]:
return "++"
if self in [
UnaryOperationType.MINUSMINUS_PRE,
UnaryOperationType.MINUSMINUS_POST,
]:
return "--"
raise SlitherCoreError(f"str: Unknown operation type {self}")
@staticmethod
def is_prefix(operation_type: "UnaryOperationType") -> bool:
if operation_type in [
UnaryOperationType.BANG,
UnaryOperationType.TILD,
UnaryOperationType.DELETE,
UnaryOperationType.PLUSPLUS_PRE,
UnaryOperationType.MINUSMINUS_PRE,
UnaryOperationType.PLUS_PRE,
UnaryOperationType.MINUS_PRE,
]:
return True
if operation_type in [
UnaryOperationType.PLUSPLUS_POST,
UnaryOperationType.MINUSMINUS_POST,
]:
return False
raise SlitherCoreError(f"is_prefix: Unknown operation type {operation_type}")
| UnaryOperationType |
python | cherrypy__cherrypy | cherrypy/_cptools.py | {
"start": 1703,
"end": 5002
} | class ____(object):
"""A registered function for use with CherryPy request-processing hooks.
help(tool.callable) should give you more information about this
Tool.
"""
namespace = 'tools'
def __init__(self, point, callable, name=None, priority=50):
"""Initialize a CherryPy tool instance."""
self._point = point
self.callable = callable
self._name = name
self._priority = priority
self.__doc__ = self.callable.__doc__
self._setargs()
@property
def on(self):
"""Flag whether the tool is enabled."""
raise AttributeError(_attr_error)
@on.setter
def on(self, value):
"""Set a flag for whether the tool is enabled."""
raise AttributeError(_attr_error)
def _setargs(self):
"""Copy func parameter names to obj attributes."""
try:
for arg in _getargs(self.callable):
setattr(self, arg, None)
except (TypeError, AttributeError):
if hasattr(self.callable, '__call__'):
for arg in _getargs(self.callable.__call__):
setattr(self, arg, None)
# IronPython 1.0 raises NotImplementedError because
# inspect.getargspec tries to access Python bytecode
# in co_code attribute.
except NotImplementedError:
pass
# IronPython 1B1 may raise IndexError in some cases,
# but if we trap it here it doesn't prevent CP from working.
except IndexError:
pass
def _merged_args(self, d=None):
"""Return a dict of configuration entries for this Tool."""
if d:
conf = d.copy()
else:
conf = {}
tm = cherrypy.serving.request.toolmaps[self.namespace]
if self._name in tm:
conf.update(tm[self._name])
if 'on' in conf:
del conf['on']
return conf
def __call__(self, *args, **kwargs):
"""Compile-time decorator (turn on the tool in config).
For example::
@expose
@tools.proxy()
def whats_my_base(self):
return cherrypy.request.base
"""
if args:
raise TypeError(
'The %r Tool does not accept positional '
'arguments; you must use keyword arguments.' % self._name,
)
def tool_decorator(f):
if not hasattr(f, '_cp_config'):
f._cp_config = {}
subspace = self.namespace + '.' + self._name + '.'
f._cp_config[subspace + 'on'] = True
for k, v in kwargs.items():
f._cp_config[subspace + k] = v
return f
return tool_decorator
def _setup(self):
"""Wire this tool into ``cherrypy.request``.
The standard CherryPy request object will automatically call
this method when the tool is "turned on" in config.
"""
conf = self._merged_args()
p = conf.pop('priority', None)
if p is None:
p = getattr(self.callable, 'priority', self._priority)
cherrypy.serving.request.hooks.attach(
self._point,
self.callable,
priority=p,
**conf,
)
| Tool |
python | huggingface__transformers | src/transformers/models/big_bird/modeling_big_bird.py | {
"start": 5337,
"end": 9379
} | class ____(nn.Module):
def __init__(self, config, layer_idx=None):
super().__init__()
if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
raise ValueError(
f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
f"heads ({config.num_attention_heads})"
)
self.num_attention_heads = config.num_attention_heads
self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
self.all_head_size = self.num_attention_heads * self.attention_head_size
self.query = nn.Linear(config.hidden_size, self.all_head_size, bias=config.use_bias)
self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.use_bias)
self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.use_bias)
self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
self.is_decoder = config.is_decoder
self.layer_idx = layer_idx
def forward(
self,
hidden_states,
attention_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
past_key_values=None,
output_attentions=False,
cache_position=None,
):
batch_size, seq_length, _ = hidden_states.shape
query_layer = (
self.query(hidden_states)
.view(batch_size, -1, self.num_attention_heads, self.attention_head_size)
.transpose(1, 2)
)
is_cross_attention = encoder_hidden_states is not None
current_states = encoder_hidden_states if is_cross_attention else hidden_states
attention_mask = encoder_attention_mask if is_cross_attention else attention_mask
if is_cross_attention and past_key_values is not None and past_key_values.get_seq_length(self.layer_idx) > 0:
# reuse k,v, cross_attentions
key_layer = past_key_values.layers[self.layer_idx].keys
value_layer = past_key_values.layers[self.layer_idx].values
else:
key_layer = (
self.key(current_states)
.view(batch_size, -1, self.num_attention_heads, self.attention_head_size)
.transpose(1, 2)
)
value_layer = (
self.value(current_states)
.view(batch_size, -1, self.num_attention_heads, self.attention_head_size)
.transpose(1, 2)
)
if past_key_values is not None:
# save all key/value_layer to cache to be re-used for fast auto-regressive generation
key_layer, value_layer = past_key_values.update(
key_layer,
value_layer,
self.layer_idx,
)
# Take the dot product between "query" and "key" to get the raw attention scores.
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
attention_scores = attention_scores / math.sqrt(self.attention_head_size)
if attention_mask is not None:
            # Apply the attention mask (precomputed for all layers in BigBirdModel forward() function)
attention_scores = attention_scores + attention_mask
# Normalize the attention scores to probabilities.
attention_probs = nn.functional.softmax(attention_scores, dim=-1)
# This is actually dropping out entire tokens to attend to, which might
# seem a bit unusual, but is taken from the original Transformer paper.
attention_probs = self.dropout(attention_probs)
context_layer = torch.matmul(attention_probs, value_layer)
context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
context_layer = context_layer.view(*new_context_layer_shape)
return context_layer, attention_probs
| BigBirdSelfAttention |
python | FactoryBoy__factory_boy | tests/test_base.py | {
"start": 991,
"end": 2728
} | class ____(unittest.TestCase):
def test_factory_for_optional(self):
"""Ensure that model= is optional for abstract=True."""
class TestObjectFactory(base.Factory):
class Meta:
abstract = True
self.assertTrue(TestObjectFactory._meta.abstract)
self.assertIsNone(TestObjectFactory._meta.model)
def test_factory_for_and_abstract_factory_optional(self):
"""Ensure that Meta.abstract is optional."""
class TestObjectFactory(base.Factory):
pass
self.assertTrue(TestObjectFactory._meta.abstract)
self.assertIsNone(TestObjectFactory._meta.model)
def test_abstract_factory_cannot_be_called(self):
class TestObjectFactory(base.Factory):
pass
with self.assertRaises(errors.FactoryError):
TestObjectFactory.build()
with self.assertRaises(errors.FactoryError):
TestObjectFactory.create()
def test_abstract_factory_not_inherited(self):
"""abstract=True isn't propagated to child classes."""
class TestObjectFactory(base.Factory):
class Meta:
abstract = True
model = TestObject
class TestObjectChildFactory(TestObjectFactory):
pass
self.assertFalse(TestObjectChildFactory._meta.abstract)
def test_abstract_or_model_is_required(self):
class TestObjectFactory(base.Factory):
class Meta:
abstract = False
model = None
with self.assertRaises(errors.FactoryError):
TestObjectFactory.build()
with self.assertRaises(errors.FactoryError):
TestObjectFactory.create()
| AbstractFactoryTestCase |
python | google__pytype | pytype/errors/error_printer.py | {
"start": 493,
"end": 567
} | class ____:
expected: str
actual: str
error_details: list[str]
| BadCall |