| language | repo | path | class_span | source | target |
|---|---|---|---|---|---|
python | huggingface__transformers | src/transformers/models/diffllama/configuration_diffllama.py | {
"start": 935,
"end": 7931
} | class ____(PreTrainedConfig):
r"""
This is the configuration class to store the configuration of a [`DiffLlamaModel`]. It is used to instantiate a DiffLlama
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults
will yield a similar configuration to that of the [kajuma/DiffLlama-0.3B-handcut](https://huggingface.co/kajuma/DiffLlama-0.3B-handcut).
Configuration objects inherit from [`PreTrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PreTrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the DiffLlama model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`DiffLlamaModel`]
hidden_size (`int`, *optional*, defaults to 2048):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 8192):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 16):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 1):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie the input and output word embeddings.
rope_parameters (`RopeParameters`, *optional*):
Dictionary containing the configuration parameters for the RoPE embeddings. The dictionary should contain
a value for `rope_theta` and optionally parameters used for scaling in case you want to use RoPE
with longer `max_position_embeddings`.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
lambda_std_dev (`float`, *optional*, defaults to 0.1):
The standard deviation for initialization of parameter lambda in attention layer.
head_dim (`int`, *optional*):
The attention head dimension. If None, it will default to `hidden_size // num_attention_heads`.
```python
>>> from transformers import DiffLlamaModel, DiffLlamaConfig
>>> # Initializing a DiffLlama diffllama-7b style configuration
>>> configuration = DiffLlamaConfig()
>>> # Initializing a model from the diffllama-7b style configuration
>>> model = DiffLlamaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "diffllama"
keys_to_ignore_at_inference = ["past_key_values"]
def __init__(
self,
vocab_size: Optional[int] = 32000,
hidden_size: Optional[int] = 2048,
intermediate_size: Optional[int] = 8192,
num_hidden_layers: Optional[int] = 16,
num_attention_heads: Optional[int] = 32,
num_key_value_heads: Optional[int] = None,
hidden_act: Optional[str] = "silu",
max_position_embeddings: Optional[int] = 2048,
initializer_range: Optional[float] = 0.02,
rms_norm_eps: Optional[float] = 1e-5,
use_cache: Optional[bool] = True,
pad_token_id: Optional[int] = None,
bos_token_id: Optional[int] = 1,
eos_token_id: Optional[int] = 2,
tie_word_embeddings: Optional[bool] = False,
rope_parameters: Optional[RopeParameters | dict[str, RopeParameters]] = None,
attention_bias: Optional[bool] = False,
attention_dropout: Optional[float] = 0.0,
lambda_std_dev: Optional[float] = 0.1,
head_dim: Optional[int] = None,
**kwargs,
):
self.vocab_size = vocab_size
self.max_position_embeddings = max_position_embeddings
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
# for backward compatibility
if num_key_value_heads is None:
num_key_value_heads = num_attention_heads
self.num_key_value_heads = num_key_value_heads
self.hidden_act = hidden_act
self.initializer_range = initializer_range
self.rms_norm_eps = rms_norm_eps
self.use_cache = use_cache
self.attention_bias = attention_bias
self.attention_dropout = attention_dropout
self.lambda_std_dev = lambda_std_dev
self.head_dim = head_dim if head_dim is not None else self.hidden_size // self.num_attention_heads
self.rope_parameters = rope_parameters
super().__init__(
pad_token_id=pad_token_id,
bos_token_id=bos_token_id,
eos_token_id=eos_token_id,
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)
__all__ = ["DiffLlamaConfig"]
| DiffLlamaConfig |
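The docstring above notes that when converting a multi-head checkpoint to a GQA checkpoint, each group's key/value head should be constructed by mean-pooling the original heads within that group. A minimal stdlib-only sketch of that grouping, with head vectors as plain lists (`meanpool_heads` is an illustrative name, not part of the transformers API):

```python
def meanpool_heads(heads, num_key_value_heads):
    """Mean-pool `heads` (a list of equal-length vectors) into
    `num_key_value_heads` groups, as suggested for MHA -> GQA conversion."""
    assert len(heads) % num_key_value_heads == 0
    group_size = len(heads) // num_key_value_heads
    pooled = []
    for g in range(num_key_value_heads):
        group = heads[g * group_size:(g + 1) * group_size]
        # element-wise mean across the heads in this group
        pooled.append([sum(vals) / group_size for vals in zip(*group)])
    return pooled

# 4 attention heads pooled into 2 key/value groups
heads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(meanpool_heads(heads, 2))  # [[2.0, 3.0], [6.0, 7.0]]
```

With `num_key_value_heads=4` this degenerates to MHA (each group is one head), and with `num_key_value_heads=1` to MQA (a single pooled head), matching the cases in the docstring.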
python | pytorch__pytorch | test/inductor/test_compiled_autograd.py | {
"start": 204027,
"end": 206513
} | class ____(TestCase):
def setUp(self) -> None:
super(TestCase, self).setUp()
reset()
def tearDown(self) -> None:
super(TestCase, self).tearDown()
reset()
@skipIfXpu(msg="NotImplementedError: The operator 'testlib::mutating_custom_op'")
@ops(
list(filter(lambda op: op.name not in xfail_hops, hop_db)),
allowed_dtypes=(torch.float,),
)
def test_hops_in_bwd(self, device, dtype, op):
def create_bwd_fn_closure(op_args, op_kwargs):
op_out_ref = []
class Foo(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
return x
@staticmethod
def backward(ctx, grad):
out = op.op(*op_args, **op_kwargs)
op_out_ref.append(out)
return grad
def fn(x):
return Foo.apply(x).sum()
return fn, op_out_ref
# Note: requires_grad=False because aot dispatch is already covered elsewhere
for inp in op.sample_inputs(device, dtype, requires_grad=False):
input = inp.input if isinstance(inp.input, tuple) else (inp.input,)
eager_args = (*input, *inp.args)
eager_kwargs = inp.kwargs
compiled_args = deepcopy(eager_args)
compiled_kwargs = deepcopy(eager_kwargs)
# 1. Run eager
torch.manual_seed(123)
dummy = torch.randn(2, 2, dtype=dtype, device=device, requires_grad=True)
fn, op_out_ref = create_bwd_fn_closure(eager_args, eager_kwargs)
fn(dummy).backward()
self.assertEqual(len(op_out_ref), 1)
expected = op_out_ref[0]
# 2. Run under CA
torch.manual_seed(123)
dummy = torch.randn(2, 2, dtype=dtype, device=device, requires_grad=True)
fn, op_out_ref = create_bwd_fn_closure(compiled_args, compiled_kwargs)
with compiled_autograd._enable(make_compiler_fn(backend="aot_eager")):
fn(dummy).backward()
self.assertEqual(len(op_out_ref), 1)
actual = op_out_ref[0]
self.assertEqual(expected, actual)
instantiate_device_type_tests(TestCompiledAutogradOpInfo, globals(), allow_xpu=True)
instantiate_parametrized_tests(TestCompiledAutograd)
if __name__ == "__main__":
if HAS_CPU:
run_tests(needs="filelock")
| TestCompiledAutogradOpInfo |
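The test above smuggles the op's output out of `backward` through a closure-captured list (`op_out_ref`), since `backward` can only return gradients. The same capture pattern, stripped of torch (a sketch with illustrative names, not the test's actual helpers):

```python
def make_fn_with_capture(op, *op_args):
    """Return (fn, captured), where calling fn records op(*op_args) into
    `captured` as a side effect -- mirroring how the test exposes a value
    computed inside a backward pass via the op_out_ref closure."""
    captured = []

    def fn(x):
        captured.append(op(*op_args))  # side channel, like op_out_ref
        return x

    return fn, captured

fn, captured = make_fn_with_capture(sum, [1, 2, 3])
fn(10)
print(captured)  # [6]
```

The test builds one such closure per run (eager and compiled-autograd), then compares the single captured value from each.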
python | RaRe-Technologies__gensim | gensim/test/test_utils.py | {
"start": 8493,
"end": 9680
} | class ____(unittest.TestCase):
def test_save_as_line_sentence_en(self):
corpus_file = get_tmpfile('gensim_utils.tst')
ref_sentences = [
line.split()
for line in utils.any2unicode('hello world\nhow are you').split('\n')
]
utils.save_as_line_sentence(ref_sentences, corpus_file)
with utils.open(corpus_file, 'rb', encoding='utf8') as fin:
sentences = [line.strip().split() for line in fin.read().strip().split('\n')]
self.assertEqual(sentences, ref_sentences)
def test_save_as_line_sentence_ru(self):
corpus_file = get_tmpfile('gensim_utils.tst')
ref_sentences = [
line.split()
for line in utils.any2unicode('привет мир\nкак ты поживаешь').split('\n')
]
utils.save_as_line_sentence(ref_sentences, corpus_file)
with utils.open(corpus_file, 'rb', encoding='utf8') as fin:
sentences = [line.strip().split() for line in fin.read().strip().split('\n')]
self.assertEqual(sentences, ref_sentences)
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
| TestSaveAsLineSentence |
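Both tests above check the same invariant: writing token lists in line-sentence format and re-reading the file yields the original tokens, for ASCII and non-ASCII text alike. A stdlib-only sketch of that round trip (hypothetical helper names, not the gensim API):

```python
import io

def save_lines(sentences, buf):
    # one space-joined sentence per line, as in line-sentence format
    for tokens in sentences:
        buf.write(" ".join(tokens) + "\n")

def load_lines(buf):
    return [line.strip().split() for line in buf.read().strip().split("\n")]

ref = [["привет", "мир"], ["как", "ты", "поживаешь"]]
buf = io.StringIO()
save_lines(ref, buf)
buf.seek(0)
print(load_lines(buf) == ref)  # True
```

The real tests do the same through `utils.save_as_line_sentence` and `utils.open(..., encoding='utf8')` against a temp file instead of an in-memory buffer.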
python | ipython__ipython | IPython/core/completer.py | {
"start": 63607,
"end": 136627
} | class ____(Completer):
"""Extension of the completer class with IPython-specific features"""
@observe('greedy')
def _greedy_changed(self, change):
"""update the splitter and readline delims when greedy is changed"""
if change["new"]:
self.evaluation = "unsafe"
self.auto_close_dict_keys = True
self.splitter.delims = GREEDY_DELIMS
else:
self.evaluation = "limited"
self.auto_close_dict_keys = False
self.splitter.delims = DELIMS
dict_keys_only = Bool(
False,
help="""
Whether to show dict key matches only.
(disables all matchers except for `IPCompleter.dict_key_matcher`).
""",
)
suppress_competing_matchers = UnionTrait(
[Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
default_value=None,
help="""
Whether to suppress completions from other *Matchers*.
When set to ``None`` (default) the matchers will attempt to auto-detect
whether suppression of other matchers is desirable. For example, at
the beginning of a line followed by `%` we expect a magic completion
to be the only applicable option, and after ``my_dict['`` we usually
expect a completion with an existing dictionary key.
If you want to disable this heuristic and see completions from all matchers,
set ``IPCompleter.suppress_competing_matchers = False``.
To disable the heuristic for specific matchers provide a dictionary mapping:
``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
Set ``IPCompleter.suppress_competing_matchers = True`` to limit
completions to the set of matchers with the highest priority;
this is equivalent to ``IPCompleter.merge_completions`` and
can be beneficial for performance, but will sometimes omit relevant
candidates from matchers further down the priority list.
""",
).tag(config=True)
merge_completions = Bool(
True,
help="""Whether to merge completion results into a single list
If False, only the completion results from the first non-empty
completer will be returned.
As of version 8.6.0, setting the value to ``False`` is an alias for:
``IPCompleter.suppress_competing_matchers = True``.
""",
).tag(config=True)
disable_matchers = ListTrait(
Unicode(),
help="""List of matchers to disable.
The list should contain matcher identifiers (see :any:`completion_matcher`).
""",
).tag(config=True)
omit__names = Enum(
(0, 1, 2),
default_value=2,
help="""Instruct the completer to omit private method names
Specifically, when completing on ``object.<tab>``.
When 2 [default]: all names that start with '_' will be excluded.
When 1: all 'magic' names (``__foo__``) will be excluded.
When 0: nothing will be excluded.
"""
).tag(config=True)
limit_to__all__ = Bool(False,
help="""
DEPRECATED as of version 5.0.
Instruct the completer to use __all__ for the completion
Specifically, when completing on ``object.<tab>``.
When True: only those names in obj.__all__ will be included.
When False [default]: the __all__ attribute is ignored
""",
).tag(config=True)
profile_completions = Bool(
default_value=False,
help="If True, emit profiling data for completion subsystem using cProfile."
).tag(config=True)
profiler_output_dir = Unicode(
default_value=".completion_profiles",
help="Template for path at which to output profile data for completions."
).tag(config=True)
@observe('limit_to__all__')
def _limit_to_all_changed(self, change):
warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
'value has been deprecated since IPython 5.0, will be made to have '
'no effects and then removed in future version of IPython.',
UserWarning)
def __init__(
self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
):
"""IPCompleter() -> completer
Return a completer object.
Parameters
----------
shell
a pointer to the ipython shell itself. This is needed
because this completer knows about magic functions, and those can
only be accessed via the ipython instance.
namespace : dict, optional
an optional dict where completions are performed.
global_namespace : dict, optional
secondary optional dict for completions, to
handle cases (such as IPython embedded inside functions) where
both Python scopes are visible.
config : Config
traitlet's config object
**kwargs
passed to super class unmodified.
"""
self.magic_escape = ESC_MAGIC
self.splitter = CompletionSplitter()
# _greedy_changed() depends on splitter and readline being defined:
super().__init__(
namespace=namespace,
global_namespace=global_namespace,
config=config,
**kwargs,
)
# List where completion matches will be stored
self.matches = []
self.shell = shell
# Regexp to split filenames with spaces in them
self.space_name_re = re.compile(r'([^\\] )')
# Hold a local ref. to glob.glob for speed
self.glob = glob.glob
# Determine if we are running on 'dumb' terminals, like (X)Emacs
# buffers, to avoid completion problems.
term = os.environ.get('TERM','xterm')
self.dumb_terminal = term in ['dumb','emacs']
# Special handling of backslashes needed in win32 platforms
if sys.platform == "win32":
self.clean_glob = self._clean_glob_win32
else:
self.clean_glob = self._clean_glob
#regexp to parse docstring for function signature
self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
#use this if positional argument name is also needed
#= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
self.magic_arg_matchers = [
self.magic_config_matcher,
self.magic_color_matcher,
]
# This is set externally by InteractiveShell
self.custom_completers = None
# This is a list of names of unicode characters that can be completed
# into their corresponding unicode value. The list is large, so we
# lazily initialize it on first use. Consuming code should access this
# attribute through the `@unicode_names` property.
self._unicode_names = None
self._backslash_combining_matchers = [
self.latex_name_matcher,
self.unicode_name_matcher,
back_latex_name_matcher,
back_unicode_name_matcher,
self.fwd_unicode_matcher,
]
if not self.backslash_combining_completions:
for matcher in self._backslash_combining_matchers:
self.disable_matchers.append(_get_matcher_id(matcher))
if not self.merge_completions:
self.suppress_competing_matchers = True
@property
def matchers(self) -> list[Matcher]:
"""All active matcher routines for completion"""
if self.dict_keys_only:
return [self.dict_key_matcher]
if self.use_jedi:
return [
*self.custom_matchers,
*self._backslash_combining_matchers,
*self.magic_arg_matchers,
self.custom_completer_matcher,
self.magic_matcher,
self._jedi_matcher,
self.dict_key_matcher,
self.file_matcher,
]
else:
return [
*self.custom_matchers,
*self._backslash_combining_matchers,
*self.magic_arg_matchers,
self.custom_completer_matcher,
self.dict_key_matcher,
self.magic_matcher,
self.python_matcher,
self.file_matcher,
self.python_func_kw_matcher,
]
def all_completions(self, text: str) -> list[str]:
"""
Wrapper around the completion methods for the benefit of emacs.
"""
prefix = text.rpartition('.')[0]
with provisionalcompleter():
return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
for c in self.completions(text, len(text))]
def _clean_glob(self, text:str):
return self.glob("%s*" % text)
def _clean_glob_win32(self, text:str):
return [f.replace("\\","/")
for f in self.glob("%s*" % text)]
@context_matcher()
def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
"""Match filenames, expanding ~USER type strings.
Most of the seemingly convoluted logic in this completer is an
attempt to handle filenames with spaces in them. And yet it's not
quite perfect, because Python's readline doesn't expose all of the
GNU readline details needed for this to be done correctly.
For a filename with a space in it, the printed completions will be
only the parts after what's already been typed (instead of the
full completions, as is normally done). I don't think with the
current (as of Python 2.3) Python readline it's possible to do
better.
"""
# TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
# starts with `/home/`, `C:\`, etc)
text = context.token
code_until_cursor = self._extract_code(context.text_until_cursor)
completion_type = self._determine_completion_context(code_until_cursor)
in_cli_context = self._is_completing_in_cli_context(code_until_cursor)
if (
completion_type == self._CompletionContextType.ATTRIBUTE
and not in_cli_context
):
return {
"completions": [],
"suppress": False,
}
# chars that require escaping with backslash - i.e. chars
# that readline treats incorrectly as delimiters, but we
# don't want to treat as delimiters in filename matching
# when escaped with backslash
if text.startswith('!'):
text = text[1:]
text_prefix = u'!'
else:
text_prefix = u''
text_until_cursor = self.text_until_cursor
# track strings with open quotes
open_quotes = has_open_quotes(text_until_cursor)
if '(' in text_until_cursor or '[' in text_until_cursor:
lsplit = text
else:
try:
# arg_split ~ shlex.split, but with unicode bugs fixed by us
lsplit = arg_split(text_until_cursor)[-1]
except ValueError:
# typically an unmatched ", or backslash without escaped char.
if open_quotes:
lsplit = text_until_cursor.split(open_quotes)[-1]
else:
return {
"completions": [],
"suppress": False,
}
except IndexError:
# tab pressed on empty line
lsplit = ""
if not open_quotes and lsplit != protect_filename(lsplit):
# if protectables are found, do matching on the whole escaped name
has_protectables = True
text0,text = text,lsplit
else:
has_protectables = False
text = os.path.expanduser(text)
if text == "":
return {
"completions": [
SimpleCompletion(
text=text_prefix + protect_filename(f), type="path"
)
for f in self.glob("*")
],
"suppress": False,
}
# Compute the matches from the filesystem
if sys.platform == 'win32':
m0 = self.clean_glob(text)
else:
m0 = self.clean_glob(text.replace('\\', ''))
if has_protectables:
# If we had protectables, we need to revert our changes to the
# beginning of filename so that we don't double-write the part
# of the filename we have so far
len_lsplit = len(lsplit)
matches = [text_prefix + text0 +
protect_filename(f[len_lsplit:]) for f in m0]
else:
if open_quotes:
# if we have a string with an open quote, we don't need to
# protect the names beyond the quote (and we _shouldn't_, as
# it would cause bugs when the filesystem call is made).
matches = m0 if sys.platform == "win32" else\
[protect_filename(f, open_quotes) for f in m0]
else:
matches = [text_prefix +
protect_filename(f) for f in m0]
# Mark directories in input list by appending '/' to their names.
return {
"completions": [
SimpleCompletion(text=x + "/" if os.path.isdir(x) else x, type="path")
for x in matches
],
"suppress": False,
}
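The escaping this matcher wrestles with (characters readline treats as delimiters but that legitimately occur in filenames) amounts to backslash-protecting them and stripping the backslashes back out before hitting the filesystem. A simplified stand-in for IPython's `protect_filename` (the character set here is illustrative, not IPython's exact list):

```python
PROTECTABLES = " ()[]{}?=\\|;:'\"#*"

def protect(filename):
    """Backslash-escape shell-delimiter characters in a filename."""
    return "".join("\\" + ch if ch in PROTECTABLES else ch for ch in filename)

def unprotect(filename):
    """Inverse for display/glob purposes: drop the escaping backslashes."""
    return filename.replace("\\", "")

print(protect("My Documents/notes.txt"))  # My\ Documents/notes.txt
```

This is why the matcher globs on the unescaped text but re-protects each result before offering it as a completion.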
def _extract_code(self, line: str) -> str:
"""Extract code from magics if any."""
if not line:
return line
maybe_magic, *rest = line.split(maxsplit=1)
if not rest:
return line
args = rest[0]
known_magics = self.shell.magics_manager.lsmagic()
line_magics = known_magics["line"]
magic_name = maybe_magic.lstrip(self.magic_escape)
if magic_name not in line_magics:
return line
if not maybe_magic.startswith(self.magic_escape):
all_variables = [*self.namespace.keys(), *self.global_namespace.keys()]
if magic_name in all_variables:
# short circuit if we see a line starting with say `time`
# but time is defined as a variable (in addition to being
# a magic). In these cases users need to use explicit `%time`.
return line
magic_method = line_magics[magic_name]
try:
if magic_name == "timeit":
opts, stmt = magic_method.__self__.parse_options(
args,
"n:r:tcp:qov:",
posix=False,
strict=False,
preserve_non_opts=True,
)
return stmt
elif magic_name == "prun":
opts, stmt = magic_method.__self__.parse_options(
args, "D:l:rs:T:q", list_all=True, posix=False
)
return stmt
elif hasattr(magic_method, "parser") and getattr(
magic_method, "has_arguments", False
):
# e.g. %debug, %time
args, extra = magic_method.parser.parse_argstring(args, partial=True)
return " ".join(extra)
except UsageError:
return line
return line
@context_matcher()
def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
"""Match magics."""
# Get all shell magics now rather than statically, so magics loaded at
# runtime show up too.
text = context.token
lsm = self.shell.magics_manager.lsmagic()
line_magics = lsm['line']
cell_magics = lsm['cell']
pre = self.magic_escape
pre2 = pre + pre
explicit_magic = text.startswith(pre)
# Completion logic:
# - user gives %%: only do cell magics
# - user gives %: do both line and cell magics
# - no prefix: do both
# In other words, line magics are skipped if the user gives %% explicitly
#
# We also exclude magics that match any currently visible names:
# https://github.com/ipython/ipython/issues/4877, unless the user has
# typed a %:
# https://github.com/ipython/ipython/issues/10754
bare_text = text.lstrip(pre)
global_matches = self.global_matches(bare_text)
if not explicit_magic:
def matches(magic):
"""
Filter magics, in particular remove magics that match
a name present in global namespace.
"""
return ( magic.startswith(bare_text) and
magic not in global_matches )
else:
def matches(magic):
return magic.startswith(bare_text)
completions = [pre2 + m for m in cell_magics if matches(m)]
if not text.startswith(pre2):
completions += [pre + m for m in line_magics if matches(m)]
is_magic_prefix = len(text) > 0 and text[0] == "%"
return {
"completions": [
SimpleCompletion(text=comp, type="magic") for comp in completions
],
"suppress": is_magic_prefix and len(completions) > 0,
}
@context_matcher()
def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
"""Match class names and attributes for %config magic."""
# NOTE: uses `line_buffer` equivalent for compatibility
matches = self.magic_config_matches(context.line_with_cursor)
return _convert_matcher_v1_result_to_v2_no_no(matches, type="param")
def magic_config_matches(self, text: str) -> list[str]:
"""Match class names and attributes for %config magic.
.. deprecated:: 8.6
You can use :meth:`magic_config_matcher` instead.
"""
texts = text.strip().split()
if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
# get all configuration classes
classes = sorted(set([ c for c in self.shell.configurables
if c.__class__.class_traits(config=True)
]), key=lambda x: x.__class__.__name__)
classnames = [ c.__class__.__name__ for c in classes ]
# return all classnames if config or %config is given
if len(texts) == 1:
return classnames
# match classname
classname_texts = texts[1].split('.')
classname = classname_texts[0]
classname_matches = [ c for c in classnames
if c.startswith(classname) ]
# return matched classes or the matched class with attributes
if texts[1].find('.') < 0:
return classname_matches
elif len(classname_matches) == 1 and \
classname_matches[0] == classname:
cls = classes[classnames.index(classname)].__class__
help = cls.class_get_help()
# strip leading '--' from cl-args:
help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
return [ attr.split('=')[0]
for attr in help.strip().splitlines()
if attr.startswith(texts[1]) ]
return []
@context_matcher()
def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
"""Match color schemes for %colors magic."""
text = context.line_with_cursor
texts = text.split()
if text.endswith(' '):
# .split() strips off the trailing whitespace. Add '' back
# so that: '%colors ' -> ['%colors', '']
texts.append('')
if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
prefix = texts[1]
return SimpleMatcherResult(
completions=[
SimpleCompletion(color, type="param")
for color in theme_table.keys()
if color.startswith(prefix)
],
suppress=False,
)
return SimpleMatcherResult(
completions=[],
suppress=False,
)
@context_matcher(identifier="IPCompleter.jedi_matcher")
def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
matches = self._jedi_matches(
cursor_column=context.cursor_position,
cursor_line=context.cursor_line,
text=context.full_text,
)
return {
"completions": matches,
# static analysis should not suppress other matcher
# NOTE: file_matcher is automatically suppressed on attribute completions
"suppress": False,
}
def _jedi_matches(
self, cursor_column: int, cursor_line: int, text: str
) -> Iterator[_JediCompletionLike]:
"""
Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
cursor position.
Parameters
----------
cursor_column : int
column position of the cursor in ``text``, 0-indexed.
cursor_line : int
line position of the cursor in ``text``, 0-indexed
text : str
text to complete
Notes
-----
If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
object containing a string with the Jedi debug information attached.
.. deprecated:: 8.6
You can use :meth:`_jedi_matcher` instead.
"""
namespaces = [self.namespace]
if self.global_namespace is not None:
namespaces.append(self.global_namespace)
completion_filter = lambda x:x
offset = cursor_to_position(text, cursor_line, cursor_column)
# filter output if we are completing for object members
if offset:
pre = text[offset-1]
if pre == '.':
if self.omit__names == 2:
completion_filter = lambda c:not c.name.startswith('_')
elif self.omit__names == 1:
completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
elif self.omit__names == 0:
completion_filter = lambda x:x
else:
raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
interpreter = jedi.Interpreter(text[:offset], namespaces)
try_jedi = True
try:
# find the first token in the current tree -- if it is a ' or " then we are in a string
completing_string = False
try:
first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
except StopIteration:
pass
else:
# note the value may be ', ", or it may also be ''' or """, or
# in some cases, """what/you/typed..., but all of these are
# strings.
completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
# if we are in a string jedi is likely not the right candidate for
# now. Skip it.
try_jedi = not completing_string
except Exception as e:
# many of things can go wrong, we are using private API just don't crash.
if self.debug:
print("Error detecting if completing a non-finished string:", e, "|")
if not try_jedi:
return iter([])
try:
return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
except Exception as e:
if self.debug:
return iter(
[
_FakeJediCompletion(
'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
% (e)
)
]
)
else:
return iter([])
class _CompletionContextType(enum.Enum):
ATTRIBUTE = "attribute" # For attribute completion
GLOBAL = "global" # For global completion
def _determine_completion_context(self, line):
"""
Determine whether the cursor is in an attribute or global completion context.
"""
# Cursor in string/comment → GLOBAL.
is_string, is_in_expression = self._is_in_string_or_comment(line)
if is_string and not is_in_expression:
return self._CompletionContextType.GLOBAL
# If we're in a template string expression, handle specially
if is_string and is_in_expression:
# Extract the expression part - look for the last { that isn't closed
expr_start = line.rfind("{")
if expr_start >= 0:
# We're looking at the expression inside a template string
expr = line[expr_start + 1 :]
# Recursively determine the context of the expression
return self._determine_completion_context(expr)
# Handle plain number literals - should be global context
# e.g. `3.` or `-42.14`, but not `3.1.`
if re.search(r"(?<!\w)(?<!\d\.)([-+]?\d+\.(\d+)?)(?!\w)$", line):
return self._CompletionContextType.GLOBAL
# Handle all other attribute matches np.ran, d[0].k, (a,b).count
chain_match = re.search(r".*(.+(?<!\s)\.(?:[a-zA-Z]\w*)?)$", line)
if chain_match:
return self._CompletionContextType.ATTRIBUTE
return self._CompletionContextType.GLOBAL
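Outside of string handling, the two regexes above carry most of the decision: a trailing float literal forces the global context, and otherwise a trailing dotted chain means attribute completion. A condensed, standalone sketch of just that regex logic (string/comment and f-string handling omitted):

```python
import re

def completion_context(line):
    """Return 'attribute' if the cursor follows a dotted chain, else 'global'.
    Condensed from the regexes in _determine_completion_context."""
    # a trailing plain float literal like `3.` or `-42.14` completes globals
    if re.search(r"(?<!\w)(?<!\d\.)([-+]?\d+\.(\d+)?)(?!\w)$", line):
        return "global"
    # anything ending in `expr.` or `expr.par` is attribute completion
    if re.search(r".*(.+(?<!\s)\.(?:[a-zA-Z]\w*)?)$", line):
        return "attribute"
    return "global"

print(completion_context("np.ran"))  # attribute
print(completion_context("3."))      # global
print(completion_context("foo"))     # global
```

The negative lookbehinds on the first pattern are what keep `3.` in the global context while still letting `d[0].k` or `(a,b).count` fall through to the attribute branch.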
def _is_completing_in_cli_context(self, text: str) -> bool:
"""
Determine if we are completing in a CLI alias, line magic, or bang expression context.
"""
stripped = text.lstrip()
if stripped.startswith("!") or stripped.startswith("%"):
return True
# Check for CLI aliases
try:
tokens = stripped.split(None, 1)
if not tokens:
return False
first_token = tokens[0]
# Must have arguments after the command for this to apply
if len(tokens) < 2:
return False
# Check if first token is a known alias
if not any(
alias[0] == first_token for alias in self.shell.alias_manager.aliases
):
return False
try:
if first_token in self.shell.user_ns:
# There's a variable defined, so the alias is overshadowed
return False
except (AttributeError, KeyError):
pass
return True
except Exception:
return False
def _is_in_string_or_comment(self, text):
"""
Determine if the cursor is inside a string or comment.
Returns (is_string, is_in_expression) tuple:
- is_string: True if in any kind of string
- is_in_expression: True if inside an f-string/t-string expression
"""
in_single_quote = False
in_double_quote = False
in_triple_single = False
in_triple_double = False
in_template_string = False # Covers both f-strings and t-strings
in_expression = False # For expressions in f/t-strings
expression_depth = 0 # Track nested braces in expressions
i = 0
while i < len(text):
# Check for f-string or t-string start
if (
i + 1 < len(text)
and text[i] in ("f", "t")
and (text[i + 1] == '"' or text[i + 1] == "'")
and not (
in_single_quote
or in_double_quote
or in_triple_single
or in_triple_double
)
):
in_template_string = True
i += 1 # Skip the 'f' or 't'
# Handle triple quotes
if i + 2 < len(text):
if (
text[i : i + 3] == '"""'
and not in_single_quote
and not in_triple_single
):
in_triple_double = not in_triple_double
if not in_triple_double:
in_template_string = False
i += 3
continue
if (
text[i : i + 3] == "'''"
and not in_double_quote
and not in_triple_double
):
in_triple_single = not in_triple_single
if not in_triple_single:
in_template_string = False
i += 3
continue
# Handle escapes
if text[i] == "\\" and i + 1 < len(text):
i += 2
continue
# Handle nested braces within f-strings
if in_template_string:
# Special handling for consecutive opening braces
if i + 1 < len(text) and text[i : i + 2] == "{{":
i += 2
continue
# Detect start of an expression
if text[i] == "{":
# Only increment depth and mark as expression if not already in an expression
# or if we're at a top-level nested brace
if not in_expression or (in_expression and expression_depth == 0):
in_expression = True
expression_depth += 1
i += 1
continue
# Detect end of an expression
if text[i] == "}":
expression_depth -= 1
if expression_depth <= 0:
in_expression = False
expression_depth = 0
i += 1
continue
in_triple_quote = in_triple_single or in_triple_double
# Handle quotes - also reset template string when closing quotes are encountered
if text[i] == '"' and not in_single_quote and not in_triple_quote:
in_double_quote = not in_double_quote
if not in_double_quote and not in_triple_quote:
in_template_string = False
elif text[i] == "'" and not in_double_quote and not in_triple_quote:
in_single_quote = not in_single_quote
if not in_single_quote and not in_triple_quote:
in_template_string = False
# Check for comment
if text[i] == "#" and not (
in_single_quote or in_double_quote or in_triple_quote
):
return True, False
i += 1
is_string = (
in_single_quote or in_double_quote or in_triple_single or in_triple_double
)
# Return tuple (is_string, is_in_expression)
return (
is_string or (in_template_string and not in_expression),
in_expression and expression_depth > 0,
)
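The brace and quote bookkeeping above is easier to see in a reduced form. The sketch below is a hypothetical simplification, not part of the completer: it tracks only plain single/double-quoted strings and `#` comments, with the same escape-first ordering.

```python
def in_string_or_comment(text):
    """Simplified scanner: plain quotes and # comments only."""
    in_single = in_double = False
    i = 0
    while i < len(text):
        ch = text[i]
        if ch == "\\" and i + 1 < len(text):
            i += 2  # skip the escaped character, as above
            continue
        if ch == '"' and not in_single:
            in_double = not in_double
        elif ch == "'" and not in_double:
            in_single = not in_single
        elif ch == "#" and not (in_single or in_double):
            return True  # the rest of the line is a comment
        i += 1
    return in_single or in_double  # True only inside an open string

print(in_string_or_comment('x = "abc'))    # True: unterminated string
print(in_string_or_comment("x = 1  # y"))  # True: past a comment marker
print(in_string_or_comment("x = 1"))       # False
```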
@context_matcher()
def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
"""Match attributes or global python names"""
text = context.text_until_cursor
text = self._extract_code(text)
in_cli_context = self._is_completing_in_cli_context(text)
if in_cli_context:
completion_type = self._CompletionContextType.GLOBAL
else:
completion_type = self._determine_completion_context(text)
if completion_type == self._CompletionContextType.ATTRIBUTE:
try:
matches, fragment = self._attr_matches(
text, include_prefix=False, context=context
)
if text.endswith(".") and self.omit__names:
if self.omit__names == 1:
# true if txt is _not_ a __ name, false otherwise:
no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
else:
# true if txt is _not_ a _ name, false otherwise:
no__name = (
lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
is None
)
matches = filter(no__name, matches)
matches = _convert_matcher_v1_result_to_v2(
matches, type="attribute", fragment=fragment
)
return matches
except NameError:
# catches <undefined attributes>.<tab>
return SimpleMatcherResult(completions=[], suppress=False)
else:
try:
matches = self.global_matches(context.token, context=context)
except TypeError:
matches = self.global_matches(context.token)
# TODO: maybe distinguish between functions, modules and just "variables"
return SimpleMatcherResult(
completions=[
SimpleCompletion(text=match, type="variable") for match in matches
],
suppress=False,
)
@completion_matcher(api_version=1)
def python_matches(self, text: str) -> Iterable[str]:
"""Match attributes or global python names.
.. deprecated:: 8.27
You can use :meth:`python_matcher` instead."""
if "." in text:
try:
matches = self.attr_matches(text)
if text.endswith('.') and self.omit__names:
if self.omit__names == 1:
# true if txt is _not_ a __ name, false otherwise:
no__name = (lambda txt:
re.match(r'.*\.__.*?__',txt) is None)
else:
# true if txt is _not_ a _ name, false otherwise:
no__name = (lambda txt:
re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
matches = filter(no__name, matches)
except NameError:
# catches <undefined attributes>.<tab>
matches = []
else:
matches = self.global_matches(text)
return matches
def _default_arguments_from_docstring(self, doc):
"""Parse the first line of docstring for call signature.
Docstring should be of the form 'min(iterable[, key=func])\n'.
It can also parse cython docstring of the form
'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
"""
if doc is None:
return []
# care only about the first line
line = doc.lstrip().splitlines()[0]
#p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
#'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
sig = self.docstring_sig_re.search(line)
if sig is None:
return []
# iterable[, key=func]' -> ['iterable[' ,' key=func]']
sig = sig.groups()[0].split(',')
ret = []
for s in sig:
#re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
ret += self.docstring_kwd_re.findall(s)
return ret
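The two regexes are only shown as comments above (`docstring_sig_re` and `docstring_kwd_re` are defined elsewhere on the class). A sketch using those commented patterns, assuming they match the real attributes, extracts the keyword arguments like this:

```python
import re

# Assumed equivalents of self.docstring_sig_re / self.docstring_kwd_re,
# taken from the patterns quoted in the comments above.
sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

line = 'min(iterable[, key=func])'
sig = sig_re.search(line).groups()[0]   # 'iterable[, key=func]'
parts = sig.split(',')                  # ['iterable[', ' key=func]']
# Only names followed by '=' survive: these are the default arguments.
defaults = [name for part in parts for name in kwd_re.findall(part)]
print(defaults)                         # ['key']
```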
def _default_arguments(self, obj):
"""Return the list of default arguments of obj if it is callable,
or empty list otherwise."""
call_obj = obj
ret = []
if inspect.isbuiltin(obj):
pass
elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
if inspect.isclass(obj):
#for cython embedsignature=True the constructor docstring
#belongs to the object itself not __init__
ret += self._default_arguments_from_docstring(
getattr(obj, '__doc__', ''))
# for classes, check for __init__,__new__
call_obj = (getattr(obj, '__init__', None) or
getattr(obj, '__new__', None))
# for all others, check if they are __call__able
elif hasattr(obj, '__call__'):
call_obj = obj.__call__
ret += self._default_arguments_from_docstring(
getattr(call_obj, '__doc__', ''))
_keeps = (inspect.Parameter.KEYWORD_ONLY,
inspect.Parameter.POSITIONAL_OR_KEYWORD)
try:
sig = inspect.signature(obj)
ret.extend(k for k, v in sig.parameters.items() if
v.kind in _keeps)
except ValueError:
pass
return list(set(ret))
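The `inspect.signature` filtering at the end of `_default_arguments` keeps only parameters a caller could pass by keyword; a standalone illustration:

```python
import inspect

def example(a, b=1, *args, c=2, **kwargs):
    pass

# Mirror the _keeps filter above: positional-or-keyword and keyword-only
# parameters are completable; *args and **kwargs are not.
keeps = (inspect.Parameter.KEYWORD_ONLY,
         inspect.Parameter.POSITIONAL_OR_KEYWORD)
names = [k for k, v in inspect.signature(example).parameters.items()
         if v.kind in keeps]
print(names)  # ['a', 'b', 'c']
```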
@context_matcher()
def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
"""Match named parameters (kwargs) of the last open function."""
matches = self.python_func_kw_matches(context.token)
return _convert_matcher_v1_result_to_v2_no_no(matches, type="param")
def python_func_kw_matches(self, text):
"""Match named parameters (kwargs) of the last open function.
.. deprecated:: 8.6
You can use :meth:`python_func_kw_matcher` instead.
"""
if "." in text: # a parameter cannot be dotted
return []
try: regexp = self.__funcParamsRegex
except AttributeError:
regexp = self.__funcParamsRegex = re.compile(r'''
'.*?(?<!\\)' | # single quoted strings or
".*?(?<!\\)" | # double quoted strings or
\w+ | # identifier
\S # other characters
''', re.VERBOSE | re.DOTALL)
# 1. find the nearest identifier that comes before an unclosed
# parenthesis before the cursor
# e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
tokens = regexp.findall(self.text_until_cursor)
iterTokens = reversed(tokens)
openPar = 0
for token in iterTokens:
if token == ')':
openPar -= 1
elif token == '(':
openPar += 1
if openPar > 0:
# found the last unclosed parenthesis
break
else:
return []
# 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
ids = []
isId = re.compile(r'\w+$').match
while True:
try:
ids.append(next(iterTokens))
if not isId(ids[-1]):
ids.pop()
break
if not next(iterTokens) == '.':
break
except StopIteration:
break
# Find all named arguments already assigned to, as to avoid suggesting
# them again
usedNamedArgs = set()
par_level = -1
for token, next_token in zip(tokens, tokens[1:]):
if token == '(':
par_level += 1
elif token == ')':
par_level -= 1
if par_level != 0:
continue
if next_token != '=':
continue
usedNamedArgs.add(token)
argMatches = []
try:
callableObj = '.'.join(ids[::-1])
namedArgs = self._default_arguments(eval(callableObj,
self.namespace))
# Remove used named arguments from the list, no need to show twice
for namedArg in set(namedArgs) - usedNamedArgs:
if namedArg.startswith(text):
argMatches.append("%s=" %namedArg)
except:
pass
return argMatches
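The verbose tokenizer regex above can be exercised on its own. Given a line with an unclosed call, the token stream it yields is what the reversed parenthesis scan walks over to find the callable name:

```python
import re

# Same verbose tokenizer as __funcParamsRegex above.
regexp = re.compile(r'''
    '.*?(?<!\\)' |   # single quoted strings or
    ".*?(?<!\\)" |   # double quoted strings or
    \w+          |   # identifier
    \S               # other characters
    ''', re.VERBOSE | re.DOTALL)

tokens = regexp.findall("foo(1+bar(x), pa")
print(tokens)  # ['foo', '(', '1', '+', 'bar', '(', 'x', ')', ',', 'pa']
```

Walking `tokens` in reverse, the first parenthesis with a positive open count is `foo`'s, so `foo` is the candidate whose keyword arguments get completed.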
@staticmethod
def _get_keys(obj: Any) -> list[Any]:
# Objects can define their own completions by defining an
# _ipython_key_completions_() method.
method = get_real_method(obj, '_ipython_key_completions_')
if method is not None:
return method()
# Special case some common in-memory dict-like types
if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
try:
return list(obj.keys())
except Exception:
return []
elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
try:
return list(obj.obj.keys())
except Exception:
return []
elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
_safe_isinstance(obj, 'numpy', 'void'):
return obj.dtype.names or []
return []
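Any object can opt into key completion by implementing the `_ipython_key_completions_` protocol checked first in `_get_keys`; a minimal, hypothetical container:

```python
class ColumnStore:
    """Toy container advertising its completable keys to IPython."""
    def __init__(self, **columns):
        self._columns = columns

    def __getitem__(self, key):
        return self._columns[key]

    def _ipython_key_completions_(self):
        # IPython calls this to populate completions after `obj[`
        return list(self._columns)

store = ColumnStore(alpha=[1, 2], beta=[3, 4])
print(store._ipython_key_completions_())  # ['alpha', 'beta']
```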
@context_matcher()
def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
"""Match string keys in a dictionary, after e.g. ``foo[``."""
matches = self.dict_key_matches(context.token)
return _convert_matcher_v1_result_to_v2(
matches, type="dict key", suppress_if_matches=True
)
def dict_key_matches(self, text: str) -> list[str]:
"""Match string keys in a dictionary, after e.g. ``foo[``.
.. deprecated:: 8.6
You can use :meth:`dict_key_matcher` instead.
"""
# Short-circuit on closed dictionary (regular expression would
# not match anyway, but would take quite a while).
if self.text_until_cursor.strip().endswith("]"):
return []
match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
if match is None:
return []
expr, prior_tuple_keys, key_prefix = match.groups()
obj = self._evaluate_expr(expr)
if obj is not_found:
return []
keys = self._get_keys(obj)
if not keys:
return keys
tuple_prefix = guarded_eval(
prior_tuple_keys,
EvaluationContext(
globals=self.global_namespace,
locals=self.namespace,
evaluation=self.evaluation, # type: ignore
in_subscript=True,
auto_import=self._auto_import,
policy_overrides=self.policy_overrides,
),
)
closing_quote, token_offset, matches = match_dict_keys(
keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
)
if not matches:
return []
# get the cursor position of
# - the text being completed
# - the start of the key text
# - the start of the completion
text_start = len(self.text_until_cursor) - len(text)
if key_prefix:
key_start = match.start(3)
completion_start = key_start + token_offset
else:
key_start = completion_start = match.end()
# grab the leading prefix, to make sure all completions start with `text`
if text_start > key_start:
leading = ''
else:
leading = text[text_start:completion_start]
# append closing quote and bracket as appropriate
# this is *not* appropriate if the opening quote or bracket is outside
# the text given to this method, e.g. `d["""a\nt
can_close_quote = False
can_close_bracket = False
continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
if continuation.startswith(closing_quote):
# do not close if already closed, e.g. `d['a<tab>'`
continuation = continuation[len(closing_quote) :]
else:
can_close_quote = True
continuation = continuation.strip()
# e.g. `pandas.DataFrame` has different tuple indexer behaviour,
# handling it is out of scope, so let's avoid appending suffixes.
has_known_tuple_handling = isinstance(obj, dict)
can_close_bracket = (
not continuation.startswith("]") and self.auto_close_dict_keys
)
can_close_tuple_item = (
not continuation.startswith(",")
and has_known_tuple_handling
and self.auto_close_dict_keys
)
can_close_quote = can_close_quote and self.auto_close_dict_keys
# fast path if a closing quote should be appended but no suffix is allowed
if not can_close_quote and not can_close_bracket and closing_quote:
return [leading + k for k in matches]
results = []
end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
for k, state_flag in matches.items():
result = leading + k
if can_close_quote and closing_quote:
result += closing_quote
if state_flag == end_of_tuple_or_item:
# We do not know which suffix to add,
# e.g. both tuple item and string
# match this item.
pass
if state_flag in end_of_tuple_or_item and can_close_bracket:
result += "]"
if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
result += ", "
results.append(result)
return results
@context_matcher()
def unicode_name_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
"""Match Latex-like syntax for unicode characters base
on the name of the character.
This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
Works only on valid python 3 identifier, or on combining characters that
will combine to form a valid identifier.
"""
text = context.text_until_cursor
slashpos = text.rfind('\\')
if slashpos > -1:
s = text[slashpos+1:]
try:
unic = unicodedata.lookup(s)
# allow combining chars
if ('a'+unic).isidentifier():
return {
"completions": [SimpleCompletion(text=unic, type="unicode")],
"suppress": True,
"matched_fragment": "\\" + s,
}
except KeyError:
pass
return {
"completions": [],
"suppress": False,
}
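The lookup performed by `unicode_name_matcher` is plain `unicodedata.lookup`, plus the `('a' + unic).isidentifier()` guard that also admits combining characters:

```python
import unicodedata

name = "GREEK SMALL LETTER ETA"
char = unicodedata.lookup(name)       # raises KeyError for unknown names
print(char)                           # 'η'
# The guard used above: prefixing 'a' lets combining characters pass too.
print(("a" + char).isidentifier())    # True
```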
@context_matcher()
def latex_name_matcher(self, context: CompletionContext):
"""Match Latex syntax for unicode characters.
This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
"""
fragment, matches = self.latex_matches(context.text_until_cursor)
return _convert_matcher_v1_result_to_v2(
matches, type="latex", fragment=fragment, suppress_if_matches=True
)
def latex_matches(self, text: str) -> tuple[str, Sequence[str]]:
"""Match Latex syntax for unicode characters.
This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
.. deprecated:: 8.6
You can use :meth:`latex_name_matcher` instead.
"""
slashpos = text.rfind('\\')
if slashpos > -1:
s = text[slashpos:]
if s in latex_symbols:
# Try to complete a full latex symbol to unicode
# \\alpha -> α
return s, [latex_symbols[s]]
else:
# If a user has partially typed a latex symbol, give them
# a full list of options \al -> [\aleph, \alpha]
matches = [k for k in latex_symbols if k.startswith(s)]
if matches:
return s, matches
return '', ()
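A self-contained sketch of `latex_matches` with a tiny stand-in for the real `latex_symbols` table (the three entries here are assumptions, not the full mapping):

```python
# Hypothetical subset of the latex_symbols mapping used above.
latex_symbols = {"\\alpha": "\u03b1", "\\aleph": "\u2135", "\\beta": "\u03b2"}

def latex_matches(text):
    slashpos = text.rfind("\\")
    if slashpos > -1:
        s = text[slashpos:]
        if s in latex_symbols:
            return s, [latex_symbols[s]]   # full symbol -> character
        matches = [k for k in latex_symbols if k.startswith(s)]
        if matches:
            return s, matches              # prefix -> candidate symbols
    return "", ()

print(latex_matches("x = \\alpha"))  # full symbol completes to 'α'
print(latex_matches("x = \\al"))     # prefix lists '\\alpha', '\\aleph'
```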
@context_matcher()
def custom_completer_matcher(self, context):
"""Dispatch custom completer.
If a match is found, suppresses all other matchers except for Jedi.
"""
matches = self.dispatch_custom_completer(context.token) or []
result = _convert_matcher_v1_result_to_v2(
matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
)
result["ordered"] = True
result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
return result
def dispatch_custom_completer(self, text):
"""
.. deprecated:: 8.6
You can use :meth:`custom_completer_matcher` instead.
"""
if not self.custom_completers:
return
line = self.line_buffer
if not line.strip():
return None
# Create a little structure to pass all the relevant information about
# the current completion to any custom completer.
event = SimpleNamespace()
event.line = line
event.symbol = text
cmd = line.split(None,1)[0]
event.command = cmd
event.text_until_cursor = self.text_until_cursor
# for foo etc, try also to find completer for %foo
if not cmd.startswith(self.magic_escape):
try_magic = self.custom_completers.s_matches(
self.magic_escape + cmd)
else:
try_magic = []
for c in itertools.chain(self.custom_completers.s_matches(cmd),
try_magic,
self.custom_completers.flat_matches(self.text_until_cursor)):
try:
res = c(event)
if res:
# first, try case sensitive match
withcase = [r for r in res if r.startswith(text)]
if withcase:
return withcase
# if none, then case insensitive ones are ok too
text_low = text.lower()
return [r for r in res if r.lower().startswith(text_low)]
except TryNext:
pass
except KeyboardInterrupt:
"""
If a custom completer takes too long,
let keyboard interrupt abort and return nothing.
"""
break
return None
def completions(self, text: str, offset: int)->Iterator[Completion]:
"""
Returns an iterator over the possible completions
.. warning::
Unstable
This function is unstable, API may change without warning.
It will also raise unless used in the proper context manager.
Parameters
----------
text : str
Full text of the current input, multi line string.
offset : int
Integer representing the position of the cursor in ``text``. Offset
is 0-based indexed.
Yields
------
Completion
Notes
-----
The cursor on a text can either be seen as being "in between"
characters or "On" a character depending on the interface visible to
the user. For consistency the cursor being on "in between" characters X
and Y is equivalent to the cursor being "on" character Y, that is to say
the character the cursor is on is considered as being after the cursor.
Combining characters may span more than one position in the
text.
.. note::
If ``IPCompleter.debug`` is :any:`True` will yield a ``--jedi/ipython--``
fake Completion token to distinguish completion returned by Jedi
and usual IPython completion.
.. note::
Completions are not completely deduplicated yet. If identical
completions are coming from different sources this function does not
ensure that each completion object will only be present once.
"""
warnings.warn("_complete is a provisional API (as of IPython 6.0). "
"It may change without warnings. "
"Use in corresponding context manager.",
category=ProvisionalCompleterWarning, stacklevel=2)
seen = set()
profiler:Optional[cProfile.Profile]
try:
if self.profile_completions:
import cProfile
profiler = cProfile.Profile()
profiler.enable()
else:
profiler = None
for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
if c and (c in seen):
continue
yield c
seen.add(c)
except KeyboardInterrupt:
"""if completions take too long and users send keyboard interrupt,
do not crash and return ASAP. """
pass
finally:
if profiler is not None:
profiler.disable()
ensure_dir_exists(self.profiler_output_dir)
output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
print("Writing profiler output to", output_path)
profiler.dump_stats(output_path)
def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
"""
Core completion module. Same signature as :any:`completions`, with the
extra `timeout` parameter (in seconds).
Computing jedi's completion ``.type`` can be quite expensive (it is a
lazy property) and can require some warm-up, more warm up than just
computing the ``name`` of a completion. The warm-up can be:
- Long warm-up the first time a module is encountered after
install/update: actually build parse/inference tree.
- first time the module is encountered in a session: load tree from
disk.
We don't want to block completions for tens of seconds so we give the
completer a "budget" of ``_timeout`` seconds per invocation to compute
completion types; the completions that have not yet been computed will
be marked as "unknown" and will have a chance to be computed next round
as things get cached.
Keep in mind that Jedi is not the only thing treating the completion so
keep the timeout short-ish: if we take more than 0.3 seconds we still
have lots of processing to do.
"""
deadline = time.monotonic() + _timeout
before = full_text[:offset]
cursor_line, cursor_column = position_to_cursor(full_text, offset)
jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
def is_non_jedi_result(
result: MatcherResult, identifier: str
) -> TypeGuard[SimpleMatcherResult]:
return identifier != jedi_matcher_id
results = self._complete(
full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
)
non_jedi_results: dict[str, SimpleMatcherResult] = {
identifier: result
for identifier, result in results.items()
if is_non_jedi_result(result, identifier)
}
jedi_matches = (
cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
if jedi_matcher_id in results
else ()
)
iter_jm = iter(jedi_matches)
if _timeout:
for jm in iter_jm:
try:
type_ = jm.type
except Exception:
if self.debug:
print("Error in Jedi getting type of ", jm)
type_ = None
delta = len(jm.name_with_symbols) - len(jm.complete)
if type_ == 'function':
signature = _make_signature(jm)
else:
signature = ''
yield Completion(start=offset - delta,
end=offset,
text=jm.name_with_symbols,
type=type_,
signature=signature,
_origin='jedi')
if time.monotonic() > deadline:
break
for jm in iter_jm:
delta = len(jm.name_with_symbols) - len(jm.complete)
yield Completion(
start=offset - delta,
end=offset,
text=jm.name_with_symbols,
type=_UNKNOWN_TYPE, # don't compute type for speed
_origin="jedi",
signature="",
)
# TODO:
# Suppress this, right now just for debug.
if jedi_matches and non_jedi_results and self.debug:
some_start_offset = before.rfind(
next(iter(non_jedi_results.values()))["matched_fragment"]
)
yield Completion(
start=some_start_offset,
end=offset,
text="--jedi/ipython--",
_origin="debug",
type="none",
signature="",
)
ordered: list[Completion] = []
sortable: list[Completion] = []
for origin, result in non_jedi_results.items():
matched_text = result["matched_fragment"]
start_offset = before.rfind(matched_text)
is_ordered = result.get("ordered", False)
container = ordered if is_ordered else sortable
# I'm unsure if this is always true, so let's assert and see if it
# crashes
assert before.endswith(matched_text)
for simple_completion in result["completions"]:
completion = Completion(
start=start_offset,
end=offset,
text=simple_completion.text,
_origin=origin,
signature="",
type=simple_completion.type or _UNKNOWN_TYPE,
)
container.append(completion)
yield from list(self._deduplicate(ordered + self._sort(sortable)))[
:MATCHES_LIMIT
]
def complete(
self, text=None, line_buffer=None, cursor_pos=None
) -> tuple[str, Sequence[str]]:
"""Find completions for the given text and line context.
Note that both the text and the line_buffer are optional, but at least
one of them must be given.
Parameters
----------
text : string, optional
Text to perform the completion on. If not given, the line buffer
is split using the instance's CompletionSplitter object.
line_buffer : string, optional
If not given, the completer attempts to obtain the current line
buffer via readline. This keyword allows clients which are
requesting for text completions in non-readline contexts to inform
the completer of the entire text.
cursor_pos : int, optional
Index of the cursor in the full line buffer. Should be provided by
remote frontends where kernel has no access to frontend state.
Returns
-------
Tuple of two items:
text : str
Text that was actually used in the completion.
matches : list
A list of completion matches.
Notes
-----
This API is likely to be deprecated and replaced by
:any:`IPCompleter.completions` in the future.
"""
warnings.warn('`Completer.complete` is pending deprecation since '
'IPython 6.0 and will be replaced by `Completer.completions`.',
PendingDeprecationWarning)
# potential todo, FOLD the 3rd throw away argument of _complete
# into the first 2 one.
# TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
# TODO: should we deprecate now, or does it stay?
results = self._complete(
line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
)
jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
return self._arrange_and_extract(
results,
# TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
skip_matchers={jedi_matcher_id},
# this API does not support different start/end positions (fragments of token).
abort_if_offset_changes=True,
)
def _arrange_and_extract(
self,
results: dict[str, MatcherResult],
skip_matchers: set[str],
abort_if_offset_changes: bool,
):
sortable: list[AnyMatcherCompletion] = []
ordered: list[AnyMatcherCompletion] = []
most_recent_fragment = None
for identifier, result in results.items():
if identifier in skip_matchers:
continue
if not result["completions"]:
continue
if not most_recent_fragment:
most_recent_fragment = result["matched_fragment"]
if (
abort_if_offset_changes
and result["matched_fragment"] != most_recent_fragment
):
break
if result.get("ordered", False):
ordered.extend(result["completions"])
else:
sortable.extend(result["completions"])
if not most_recent_fragment:
most_recent_fragment = "" # to satisfy typechecker (and just in case)
return most_recent_fragment, [
m.text for m in self._deduplicate(ordered + self._sort(sortable))
]
def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
full_text=None) -> _CompleteResult:
"""
Like complete but can also return raw jedi completions as well as the
origin of the completion text. This could (and should) be made much
cleaner but that will be simpler once we drop the old (and stateful)
:any:`complete` API.
With the current provisional API, cursor_pos acts (depending on the
caller) both as the offset in the ``text`` or ``line_buffer``, and as
the ``column`` when passing multiline strings; this could/should be
renamed but would add extra noise.
Parameters
----------
cursor_line
Index of the line the cursor is on. 0 indexed.
cursor_pos
Position of the cursor in the current line/line_buffer/text. 0
indexed.
line_buffer : optional, str
The current line the cursor is in; this is mostly for legacy
reasons, as readline could only give us the single current line.
Prefer `full_text`.
text : str
The current "token" the cursor is in, mostly also for historical
reasons, as the completer would trigger only after the current line
was parsed.
full_text : str
Full text of the current cell.
Returns
-------
An ordered dictionary where keys are identifiers of completion
matchers and values are ``MatcherResult``s.
"""
# if the cursor position isn't given, the only sane assumption we can
# make is that it's at the end of the line (the common case)
if cursor_pos is None:
cursor_pos = len(line_buffer) if text is None else len(text)
if self.use_main_ns:
self.namespace = __main__.__dict__
# if text is either None or an empty string, rely on the line buffer
if (not line_buffer) and full_text:
line_buffer = full_text.split('\n')[cursor_line]
if not text: # issue #11508: check line_buffer before calling split_line
text = (
self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
)
# If no line buffer is given, assume the input text is all there was
if line_buffer is None:
line_buffer = text
# deprecated - do not use `line_buffer` in new code.
self.line_buffer = line_buffer
self.text_until_cursor = self.line_buffer[:cursor_pos]
if not full_text:
full_text = line_buffer
context = CompletionContext(
full_text=full_text,
cursor_position=cursor_pos,
cursor_line=cursor_line,
token=self._extract_code(text),
limit=MATCHES_LIMIT,
)
# Start with a clean slate of completions
results: dict[str, MatcherResult] = {}
jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
suppressed_matchers: set[str] = set()
matchers = {
_get_matcher_id(matcher): matcher
for matcher in sorted(
self.matchers, key=_get_matcher_priority, reverse=True
)
}
for matcher_id, matcher in matchers.items():
matcher_id = _get_matcher_id(matcher)
if matcher_id in self.disable_matchers:
continue
if matcher_id in results:
warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
if matcher_id in suppressed_matchers:
continue
result: MatcherResult
try:
if _is_matcher_v1(matcher):
result = _convert_matcher_v1_result_to_v2_no_no(
matcher(text), type=_UNKNOWN_TYPE
)
elif _is_matcher_v2(matcher):
result = matcher(context)
else:
api_version = _get_matcher_api_version(matcher)
raise ValueError(f"Unsupported API version {api_version}")
except BaseException:
# Show the ugly traceback if the matcher causes an
# exception, but do NOT crash the kernel!
sys.excepthook(*sys.exc_info())
continue
# set default value for matched fragment if suffix was not selected.
result["matched_fragment"] = result.get("matched_fragment", context.token)
if not suppressed_matchers:
suppression_recommended: Union[bool, set[str]] = result.get(
"suppress", False
)
suppression_config = (
self.suppress_competing_matchers.get(matcher_id, None)
if isinstance(self.suppress_competing_matchers, dict)
else self.suppress_competing_matchers
)
should_suppress = (
(suppression_config is True)
or (suppression_recommended and (suppression_config is not False))
) and has_any_completions(result)
if should_suppress:
suppression_exceptions: set[str] = result.get(
"do_not_suppress", set()
)
if isinstance(suppression_recommended, Iterable):
to_suppress = set(suppression_recommended)
else:
to_suppress = set(matchers)
suppressed_matchers = to_suppress - suppression_exceptions
new_results = {}
for previous_matcher_id, previous_result in results.items():
if previous_matcher_id not in suppressed_matchers:
new_results[previous_matcher_id] = previous_result
results = new_results
results[matcher_id] = result
_, matches = self._arrange_and_extract(
results,
# TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
# if it was omission, we can remove the filtering step, otherwise remove this comment.
skip_matchers={jedi_matcher_id},
abort_if_offset_changes=False,
)
# populate legacy stateful API
self.matches = matches
return results
@staticmethod
def _deduplicate(
matches: Sequence[AnyCompletion],
) -> Iterable[AnyCompletion]:
filtered_matches: dict[str, AnyCompletion] = {}
for match in matches:
text = match.text
if (
text not in filtered_matches
or filtered_matches[text].type == _UNKNOWN_TYPE
):
filtered_matches[text] = match
return filtered_matches.values()
@staticmethod
def _sort(matches: Sequence[AnyCompletion]):
return sorted(matches, key=lambda x: completions_sorting_key(x.text))
@context_matcher()
def fwd_unicode_matcher(self, context: CompletionContext):
"""Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
# TODO: use `context.limit` to terminate early once we matched the maximum
# number that will be used downstream; can be added as an optional to
# `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
return _convert_matcher_v1_result_to_v2(
matches, type="unicode", fragment=fragment, suppress_if_matches=True
)
def fwd_unicode_match(self, text: str) -> tuple[str, Sequence[str]]:
"""
Forward match a string starting with a backslash with a list of
potential Unicode completions.
Will compute list of Unicode character names on first call and cache it.
.. deprecated:: 8.6
You can use :meth:`fwd_unicode_matcher` instead.
Returns
-------
A tuple with:
- matched text (empty if no matches)
- list of potential completions (empty tuple if no matches)
"""
# TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
# We could do a faster match using a Trie.
# Using pygtrie the following seem to work:
# s = PrefixSet()
# for c in range(0,0x10FFFF + 1):
# try:
# s.add(unicodedata.name(chr(c)))
# except ValueError:
# pass
# [''.join(k) for k in s.iter(prefix)]
# But need to be timed and adds an extra dependency.
slashpos = text.rfind('\\')
# if the text contains a backslash
if slashpos > -1:
# PERF: It's important that we don't access self._unicode_names
# until we're inside this if-block. _unicode_names is lazily
# initialized, and it takes a user-noticeable amount of time to
# initialize it, so we don't want to initialize it unless we're
# actually going to use it.
s = text[slashpos + 1 :]
sup = s.upper()
candidates = [x for x in self.unicode_names if x.startswith(sup)]
if candidates:
return s, candidates
candidates = [x for x in self.unicode_names if sup in x]
if candidates:
return s, candidates
splitsup = sup.split(" ")
candidates = [
x for x in self.unicode_names if all(u in x for u in splitsup)
]
if candidates:
return s, candidates
return "", ()
# if the text does not contain a backslash
else:
return '', ()
@property
def unicode_names(self) -> list[str]:
"""List of names of unicode code points that can be completed.
The list is lazily initialized on first access.
"""
if self._unicode_names is None:
# computing the names is slow, so cache the result of the first call
self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
return self._unicode_names
def _unicode_name_compute(ranges: list[tuple[int, int]]) -> list[str]:
names = []
for start, stop in ranges:
for c in range(start, stop):
try:
names.append(unicodedata.name(chr(c)))
except ValueError:
pass
return names
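`_unicode_name_compute` simply collects `unicodedata.name` for assigned code points in each range; the same loop over a small range shows the behavior:

```python
import unicodedata

def names_in_range(start, stop):
    names = []
    for c in range(start, stop):
        try:
            names.append(unicodedata.name(chr(c)))
        except ValueError:  # unassigned/control code points have no name
            pass
    return names

print(names_in_range(0x41, 0x44))
# ['LATIN CAPITAL LETTER A', 'LATIN CAPITAL LETTER B', 'LATIN CAPITAL LETTER C']
```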
| IPCompleter |
python | getsentry__sentry | src/sentry/relay/config/metric_extraction.py | {
"start": 34652,
"end": 35554
} | class ____(TypedDict):
condition: RuleCondition
targetMetrics: Sequence[str]
targetTag: str
tagValue: str
_TRANSACTION_METRICS_TO_RULE_FIELD = {
TransactionMetric.LCP.value: "event.measurements.lcp.value",
TransactionMetric.DURATION.value: "event.duration",
}
_SATISFACTION_TARGET_METRICS = (
"s:transactions/user@none",
"d:transactions/duration@millisecond",
"d:transactions/measurements.lcp@millisecond",
)
_SATISFACTION_TARGET_TAG = "satisfaction"
_HISTOGRAM_OUTLIERS_TARGET_METRICS = {
"duration": "d:transactions/duration@millisecond",
"lcp": "d:transactions/measurements.lcp@millisecond",
"fcp": "d:transactions/measurements.fcp@millisecond",
}
_HISTOGRAM_OUTLIERS_SOURCE_FIELDS = {
"duration": "event.duration",
"lcp": "event.measurements.lcp.value",
"fcp": "event.measurements.fcp.value",
}
@dataclass
| MetricConditionalTaggingRule |
python | walkccc__LeetCode | solutions/106. Construct Binary Tree from Inorder and Postorder Traversal/106.py | {
"start": 0,
"end": 805
} | class ____:
def buildTree(
self,
inorder: list[int],
postorder: list[int],
) -> TreeNode | None:
inToIndex = {num: i for i, num in enumerate(inorder)}
def build(
inStart: int,
inEnd: int,
postStart: int,
postEnd: int,
) -> TreeNode | None:
if inStart > inEnd:
return None
rootVal = postorder[postEnd]
rootInIndex = inToIndex[rootVal]
leftSize = rootInIndex - inStart
root = TreeNode(rootVal)
root.left = build(inStart, rootInIndex - 1, postStart,
postStart + leftSize - 1)
root.right = build(rootInIndex + 1, inEnd, postStart + leftSize,
postEnd - 1)
return root
return build(0, len(inorder) - 1, 0, len(postorder) - 1)
| Solution |
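The same divide-and-conquer idea can be sketched as a standalone function: the last element of the postorder slice is the root, and its inorder index splits the left and right subtrees. The `TreeNode` definition here is an assumption (the standard LeetCode node, not shown in the record):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right


def build_tree(inorder, postorder):
    """Rebuild a binary tree from its inorder and postorder traversals."""
    in_to_index = {num: i for i, num in enumerate(inorder)}

    def build(in_start, in_end, post_start, post_end):
        if in_start > in_end:
            return None
        root_val = postorder[post_end]        # last postorder entry is the root
        root_in = in_to_index[root_val]
        left_size = root_in - in_start        # nodes in the left subtree
        root = TreeNode(root_val)
        root.left = build(in_start, root_in - 1, post_start,
                          post_start + left_size - 1)
        root.right = build(root_in + 1, in_end, post_start + left_size,
                           post_end - 1)
        return root

    return build(0, len(inorder) - 1, 0, len(postorder) - 1)


root = build_tree([9, 3, 15, 20, 7], [9, 15, 7, 20, 3])
```

The hash map makes each root lookup O(1), so the whole rebuild is linear in the number of nodes.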
python | huggingface__transformers | templates/adding_a_new_example_script/{{cookiecutter.directory_name}}/run_{{cookiecutter.example_shortcut}}.py | {
"start": 3077,
"end": 4965
} | class ____:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"}
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
model_revision: str = field(
default="main",
metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
)
token: str = field(
default=None,
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
"generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
trust_remote_code: bool = field(
default=False,
metadata={
"help": (
"Whether to trust the execution of code from datasets/models defined on the Hub."
" This option should only be set to `True` for repositories you trust and in which you have read the"
" code, as it will execute code present on the Hub on your local machine."
)
},
)
{% endif %}
@dataclass
| ModelArguments |
python | networkx__networkx | networkx/algorithms/community/tests/test_kclique.py | {
"start": 786,
"end": 2413
} | class ____:
def setup_method(self):
self.G = nx.karate_club_graph()
def _check_communities(self, k, expected):
communities = set(nx.community.k_clique_communities(self.G, k))
assert communities == expected
def test_k2(self):
# clique percolation with k=2 is just connected components
expected = {frozenset(self.G)}
self._check_communities(2, expected)
def test_k3(self):
comm1 = [
0,
1,
2,
3,
7,
8,
12,
13,
14,
15,
17,
18,
19,
20,
21,
22,
23,
26,
27,
28,
29,
30,
31,
32,
33,
]
comm2 = [0, 4, 5, 6, 10, 16]
comm3 = [24, 25, 31]
expected = {frozenset(comm1), frozenset(comm2), frozenset(comm3)}
self._check_communities(3, expected)
def test_k4(self):
expected = {
frozenset([0, 1, 2, 3, 7, 13]),
frozenset([8, 32, 30, 33]),
frozenset([32, 33, 29, 23]),
}
self._check_communities(4, expected)
def test_k5(self):
expected = {frozenset([0, 1, 2, 3, 7, 13])}
self._check_communities(5, expected)
def test_k6(self):
expected = set()
self._check_communities(6, expected)
def test_bad_k():
with pytest.raises(nx.NetworkXError):
list(nx.community.k_clique_communities(nx.Graph(), 1))
| TestZacharyKarateClub |
python | sympy__sympy | sympy/codegen/cfunctions.py | {
"start": 11874,
"end": 12102
} | class ____(BooleanFunction):
nargs = 1
@classmethod
def eval(cls, arg):
if arg is S.NaN:
return true
elif arg.is_number:
return false
else:
return None
| isnan |
python | getsentry__sentry | src/sentry/analytics/events/sentry_app_installed.py | {
"start": 77,
"end": 233
} | class ____(analytics.Event):
user_id: int
organization_id: int
sentry_app: str
analytics.register(SentryAppInstalledEvent)
| SentryAppInstalledEvent |
python | weaviate__weaviate-python-client | weaviate/collections/classes/tenants.py | {
"start": 3691,
"end": 3906
} | class ____(Tenant): # noqa: D101
"""Wrapper around Tenant for output purposes."""
def model_post_init(self, __context: Any) -> None: # noqa: D102
self._model_post_init(user_input=False)
| TenantOutput |
python | matplotlib__matplotlib | lib/matplotlib/_mathtext.py | {
"start": 15391,
"end": 20154
} | class ____(TruetypeFonts):
"""
Use the Bakoma TrueType fonts for rendering.
Symbols are strewn about a number of font files, each of which has
its own proprietary 8-bit encoding.
"""
_fontmap = {
'cal': 'cmsy10',
'rm': 'cmr10',
'tt': 'cmtt10',
'it': 'cmmi10',
'bf': 'cmb10',
'sf': 'cmss10',
'ex': 'cmex10',
}
def __init__(self, default_font_prop: FontProperties, load_glyph_flags: LoadFlags):
self._stix_fallback = StixFonts(default_font_prop, load_glyph_flags)
super().__init__(default_font_prop, load_glyph_flags)
for key, val in self._fontmap.items():
fullpath = findfont(val)
self.fontmap[key] = fullpath
self.fontmap[val] = fullpath
_slanted_symbols = set(r"\int \oint".split())
def _get_glyph(self, fontname: str, font_class: str,
sym: str) -> tuple[FT2Font, int, bool]:
font = None
if fontname in self.fontmap and sym in latex_to_bakoma:
basename, num = latex_to_bakoma[sym]
slanted = (basename == "cmmi10") or sym in self._slanted_symbols
font = self._get_font(basename)
elif len(sym) == 1:
slanted = (fontname == "it")
font = self._get_font(fontname)
if font is not None:
num = ord(sym)
if font is not None and font.get_char_index(num) != 0:
return font, num, slanted
else:
return self._stix_fallback._get_glyph(fontname, font_class, sym)
# The Bakoma fonts contain many pre-sized alternatives for the
# delimiters. The AutoSizedChar class will use these alternatives
# and select the best (closest sized) glyph.
_size_alternatives = {
'(': [('rm', '('), ('ex', '\xa1'), ('ex', '\xb3'),
('ex', '\xb5'), ('ex', '\xc3')],
')': [('rm', ')'), ('ex', '\xa2'), ('ex', '\xb4'),
('ex', '\xb6'), ('ex', '\x21')],
'{': [('cal', '{'), ('ex', '\xa9'), ('ex', '\x6e'),
('ex', '\xbd'), ('ex', '\x28')],
'}': [('cal', '}'), ('ex', '\xaa'), ('ex', '\x6f'),
('ex', '\xbe'), ('ex', '\x29')],
# The fourth size of '[' is mysteriously missing from the BaKoMa
# font, so I've omitted it for both '[' and ']'
'[': [('rm', '['), ('ex', '\xa3'), ('ex', '\x68'),
('ex', '\x22')],
']': [('rm', ']'), ('ex', '\xa4'), ('ex', '\x69'),
('ex', '\x23')],
r'\lfloor': [('ex', '\xa5'), ('ex', '\x6a'),
('ex', '\xb9'), ('ex', '\x24')],
r'\rfloor': [('ex', '\xa6'), ('ex', '\x6b'),
('ex', '\xba'), ('ex', '\x25')],
r'\lceil': [('ex', '\xa7'), ('ex', '\x6c'),
('ex', '\xbb'), ('ex', '\x26')],
r'\rceil': [('ex', '\xa8'), ('ex', '\x6d'),
('ex', '\xbc'), ('ex', '\x27')],
r'\langle': [('ex', '\xad'), ('ex', '\x44'),
('ex', '\xbf'), ('ex', '\x2a')],
r'\rangle': [('ex', '\xae'), ('ex', '\x45'),
('ex', '\xc0'), ('ex', '\x2b')],
r'\__sqrt__': [('ex', '\x70'), ('ex', '\x71'),
('ex', '\x72'), ('ex', '\x73')],
r'\backslash': [('ex', '\xb2'), ('ex', '\x2f'),
('ex', '\xc2'), ('ex', '\x2d')],
r'/': [('rm', '/'), ('ex', '\xb1'), ('ex', '\x2e'),
('ex', '\xcb'), ('ex', '\x2c')],
r'\widehat': [('rm', '\x5e'), ('ex', '\x62'), ('ex', '\x63'),
('ex', '\x64')],
r'\widetilde': [('rm', '\x7e'), ('ex', '\x65'), ('ex', '\x66'),
('ex', '\x67')],
r'<': [('cal', 'h'), ('ex', 'D')],
r'>': [('cal', 'i'), ('ex', 'E')]
}
for alias, target in [(r'\leftparen', '('),
(r'\rightparen', ')'),
(r'\leftbrace', '{'),
(r'\rightbrace', '}'),
(r'\leftbracket', '['),
(r'\rightbracket', ']'),
(r'\{', '{'),
(r'\}', '}'),
(r'\[', '['),
(r'\]', ']')]:
_size_alternatives[alias] = _size_alternatives[target]
def get_sized_alternatives_for_symbol(self, fontname: str,
sym: str) -> list[tuple[str, str]]:
return self._size_alternatives.get(sym, [(fontname, sym)])
| BakomaFonts |
python | tensorflow__tensorflow | tensorflow/compiler/tests/image_ops_test.py | {
"start": 39121,
"end": 48981
} | class ____(xla_test.XLATestCase):
def testNMSV3(self):
boxes_data = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9],
[0, 10, 1, 11], [0, 10.1, 1, 11.1], [0, 100, 1, 101]]
boxes_np = np.array(boxes_data, dtype=np.float32)
scores_data = [0.9, 0.75, 0.6, 0.95, 0.5, 0.3]
scores_np = np.array(scores_data, dtype=np.float32)
max_output_size = 6
iou_threshold_np = np.array(0.5, dtype=np.float32)
with self.session() as sess:
boxes = array_ops.placeholder(boxes_np.dtype, shape=boxes_np.shape)
scores = array_ops.placeholder(scores_np.dtype, shape=scores_np.shape)
iou_threshold = array_ops.placeholder(iou_threshold_np.dtype,
iou_threshold_np.shape)
with self.test_scope():
selected_indices = image_ops.non_max_suppression_v3(
boxes=boxes,
scores=scores,
max_output_size=max_output_size,
iou_threshold=iou_threshold,
score_threshold=float("-inf"))
inputs_feed = {
boxes: boxes_np,
scores: scores_np,
iou_threshold: iou_threshold_np
}
(indices_tf) = sess.run(selected_indices, feed_dict=inputs_feed)
self.assertEqual(indices_tf.size, 3)
self.assertAllClose(indices_tf[:3], [3, 0, 5])
def testNMS128From1024(self):
num_boxes = 1024
boxes_np = np.random.normal(50, 10, (num_boxes, 4)).astype("f4")
scores_np = np.random.normal(0.5, 0.1, (num_boxes,)).astype("f4")
max_output_size = 128
iou_threshold_np = np.array(0.5, dtype=np.float32)
score_threshold_np = np.array(0.0, dtype=np.float32)
with self.session() as sess:
boxes = array_ops.placeholder(boxes_np.dtype, shape=boxes_np.shape)
scores = array_ops.placeholder(scores_np.dtype, shape=scores_np.shape)
iou_threshold = array_ops.placeholder(iou_threshold_np.dtype,
iou_threshold_np.shape)
score_threshold = array_ops.placeholder(score_threshold_np.dtype,
score_threshold_np.shape)
with self.test_scope():
selected_indices = image_ops.non_max_suppression_padded(
boxes=boxes,
scores=scores,
max_output_size=max_output_size,
iou_threshold=iou_threshold,
score_threshold=score_threshold,
pad_to_max_output_size=True)
inputs_feed = {
boxes: boxes_np,
scores: scores_np,
score_threshold: score_threshold_np,
iou_threshold: iou_threshold_np
}
(indices_tf, _) = sess.run(selected_indices, feed_dict=inputs_feed)
self.assertEqual(indices_tf.size, max_output_size)
def testNMS3From6Boxes(self):
# Three boxes are selected based on IOU.
boxes_data = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9],
[0, 10, 1, 11], [0, 10.1, 1, 11.1], [0, 100, 1, 101]]
boxes_np = np.array(boxes_data, dtype=np.float32)
scores_data = [0.9, 0.75, 0.6, 0.95, 0.5, 0.3]
scores_np = np.array(scores_data, dtype=np.float32)
max_output_size = 3
iou_threshold_np = np.array(0.5, dtype=np.float32)
score_threshold_np = np.array(0.0, dtype=np.float32)
with self.session() as sess:
boxes = array_ops.placeholder(boxes_np.dtype, shape=boxes_np.shape)
scores = array_ops.placeholder(scores_np.dtype, shape=scores_np.shape)
iou_threshold = array_ops.placeholder(iou_threshold_np.dtype,
iou_threshold_np.shape)
score_threshold = array_ops.placeholder(score_threshold_np.dtype,
score_threshold_np.shape)
with self.test_scope():
selected_indices = image_ops.non_max_suppression_padded(
boxes=boxes,
scores=scores,
max_output_size=max_output_size,
iou_threshold=iou_threshold,
score_threshold=score_threshold,
pad_to_max_output_size=True)
inputs_feed = {
boxes: boxes_np,
scores: scores_np,
score_threshold: score_threshold_np,
iou_threshold: iou_threshold_np
}
(indices_tf, num_valid) = sess.run(
selected_indices, feed_dict=inputs_feed)
self.assertEqual(indices_tf.size, max_output_size)
self.assertEqual(num_valid, 3)
self.assertAllClose(indices_tf[:num_valid], [3, 0, 5])
def testNMS3Then2WithScoreThresh(self):
# Three boxes are selected based on IOU.
# One is filtered out by score threshold.
boxes_data = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9],
[0, 10, 1, 11], [0, 10.1, 1, 11.1], [0, 100, 1, 101]]
boxes_np = np.array(boxes_data, dtype=np.float32)
scores_data = [0.9, 0.75, 0.6, 0.95, 0.5, 0.3]
scores_np = np.array(scores_data, dtype=np.float32)
max_output_size = 3
iou_threshold_np = np.array(0.5, dtype=np.float32)
score_threshold_np = np.array(0.4, dtype=np.float32)
with self.session() as sess:
boxes = array_ops.placeholder(boxes_np.dtype, shape=boxes_np.shape)
scores = array_ops.placeholder(scores_np.dtype, shape=scores_np.shape)
iou_threshold = array_ops.placeholder(iou_threshold_np.dtype,
iou_threshold_np.shape)
score_threshold = array_ops.placeholder(score_threshold_np.dtype,
score_threshold_np.shape)
with self.test_scope():
selected_indices = image_ops.non_max_suppression_padded(
boxes=boxes,
scores=scores,
max_output_size=max_output_size,
iou_threshold=iou_threshold,
score_threshold=score_threshold,
pad_to_max_output_size=True)
inputs_feed = {
boxes: boxes_np,
scores: scores_np,
iou_threshold: iou_threshold_np,
score_threshold: score_threshold_np
}
(indices_tf, num_valid) = sess.run(
selected_indices, feed_dict=inputs_feed)
self.assertEqual(indices_tf.size, max_output_size)
self.assertEqual(num_valid, 2)
self.assertAllClose(indices_tf[:num_valid], [3, 0])
def testNMS3Then1WithScoreMaxThresh(self):
# Three boxes are selected based on IOU.
# One is filtered out by score threshold.
# One is filtered out by max_output_size.
boxes_data = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9],
[0, 10, 1, 11], [0, 10.1, 1, 11.1], [0, 100, 1, 101]]
boxes_np = np.array(boxes_data, dtype=np.float32)
scores_data = [0.9, 0.75, 0.6, 0.95, 0.5, 0.3]
scores_np = np.array(scores_data, dtype=np.float32)
max_output_size = 1
iou_threshold_np = np.array(0.5, dtype=np.float32)
score_threshold_np = np.array(0.4, dtype=np.float32)
with self.session() as sess:
boxes = array_ops.placeholder(boxes_np.dtype, shape=boxes_np.shape)
scores = array_ops.placeholder(scores_np.dtype, shape=scores_np.shape)
iou_threshold = array_ops.placeholder(iou_threshold_np.dtype,
iou_threshold_np.shape)
score_threshold = array_ops.placeholder(score_threshold_np.dtype,
score_threshold_np.shape)
with self.test_scope():
selected_indices = image_ops.non_max_suppression_padded(
boxes=boxes,
scores=scores,
max_output_size=max_output_size,
iou_threshold=iou_threshold,
score_threshold=score_threshold,
pad_to_max_output_size=True)
inputs_feed = {
boxes: boxes_np,
scores: scores_np,
iou_threshold: iou_threshold_np,
score_threshold: score_threshold_np
}
(indices_tf, num_valid) = sess.run(
selected_indices, feed_dict=inputs_feed)
self.assertEqual(indices_tf.size, max_output_size)
self.assertEqual(num_valid, 1)
self.assertAllClose(indices_tf[:num_valid], [3])
def testSelectFromContinuousOverLap(self):
# Tests that a suppressed box does not itself suppress other boxes.
boxes_data = [[0, 0, 1, 1], [0, 0.2, 1, 1.2], [0, 0.4, 1, 1.4],
[0, 0.6, 1, 1.6], [0, 0.8, 1, 1.8], [0, 2, 1, 3]]
boxes_np = np.array(boxes_data, dtype=np.float32)
scores_data = [0.9, 0.75, 0.6, 0.5, 0.4, 0.3]
scores_np = np.array(scores_data, dtype=np.float32)
max_output_size = 3
iou_threshold_np = np.array(0.5, dtype=np.float32)
score_threshold_np = np.array(0.1, dtype=np.float32)
with self.session() as sess:
boxes = array_ops.placeholder(boxes_np.dtype, shape=boxes_np.shape)
scores = array_ops.placeholder(scores_np.dtype, shape=scores_np.shape)
iou_threshold = array_ops.placeholder(iou_threshold_np.dtype,
iou_threshold_np.shape)
score_threshold = array_ops.placeholder(score_threshold_np.dtype,
score_threshold_np.shape)
with self.test_scope():
selected_indices = image_ops.non_max_suppression_padded(
boxes=boxes,
scores=scores,
max_output_size=max_output_size,
iou_threshold=iou_threshold,
score_threshold=score_threshold,
pad_to_max_output_size=True)
inputs_feed = {
boxes: boxes_np,
scores: scores_np,
iou_threshold: iou_threshold_np,
score_threshold: score_threshold_np
}
(indices_tf, num_valid) = sess.run(
selected_indices, feed_dict=inputs_feed)
self.assertEqual(indices_tf.size, max_output_size)
self.assertEqual(num_valid, 3)
self.assertAllClose(indices_tf[:num_valid], [0, 2, 4])
| NonMaxSuppressionTest |
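The padded XLA ops exercised above wrap the same greedy rule: keep the highest-scoring box, drop every remaining box whose IoU with it exceeds the threshold, repeat. A plain-Python sketch (boxes in the `[y1, x1, y2, x2]` order the TF tests use; this is an illustration, not the XLA implementation):

```python
def iou(a, b):
    """Intersection-over-union of two [y1, x1, y2, x2] boxes."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0


def nms(boxes, scores, max_output_size, iou_threshold,
        score_threshold=float("-inf")):
    """Greedy non-max suppression over parallel boxes/scores lists."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    order = [i for i in order if scores[i] > score_threshold]
    keep = []
    while order and len(keep) < max_output_size:
        best = order.pop(0)
        keep.append(best)
        # A dropped box never suppresses anything itself: it simply
        # disappears from `order` and can never become `best`.
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep


boxes = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9],
         [0, 10, 1, 11], [0, 10.1, 1, 11.1], [0, 100, 1, 101]]
scores = [0.9, 0.75, 0.6, 0.95, 0.5, 0.3]
selected = nms(boxes, scores, 3, 0.5)
```

Running it on the six boxes from `testNMS3From6Boxes` selects indices 3, 0, and 5, matching the test's expectation.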
python | huggingface__transformers | src/transformers/models/visual_bert/modeling_visual_bert.py | {
"start": 13262,
"end": 14524
} | class ____(GradientCheckpointingLayer):
def __init__(self, config):
super().__init__()
self.chunk_size_feed_forward = config.chunk_size_feed_forward
self.seq_len_dim = 1
self.attention = VisualBertAttention(config)
self.intermediate = VisualBertIntermediate(config)
self.output = VisualBertOutput(config)
def forward(
self,
hidden_states,
attention_mask=None,
output_attentions=False,
):
self_attention_outputs = self.attention(
hidden_states,
attention_mask,
output_attentions=output_attentions,
)
attention_output = self_attention_outputs[0]
outputs = self_attention_outputs[1:] # add self attentions if we output attention weights
layer_output = apply_chunking_to_forward(
self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
)
outputs = (layer_output,) + outputs
return outputs
def feed_forward_chunk(self, attention_output):
intermediate_output = self.intermediate(attention_output)
layer_output = self.output(intermediate_output, attention_output)
return layer_output
| VisualBertLayer |
python | walkccc__LeetCode | solutions/718. Maximum Length of Repeated Subarray/718.py | {
"start": 0,
"end": 481
} | class ____:
def findLength(self, nums1: list[int], nums2: list[int]) -> int:
m = len(nums1)
n = len(nums2)
ans = 0
    # dp[i][j] := the length of the longest common subarray starting at
    # nums1[i] and nums2[j]
dp = [[0] * (n + 1) for _ in range(m + 1)]
for i in reversed(range(m)):
for j in reversed(range(n)):
if nums1[i] == nums2[j]:
dp[i][j] = dp[i + 1][j + 1] + 1
ans = max(ans, dp[i][j])
return ans
| Solution |
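The suffix DP above can be exercised as a standalone function; `dp[i][j]` extends the match at `(i + 1, j + 1)` whenever `nums1[i] == nums2[j]`, and the answer is the largest cell seen:

```python
def find_length(nums1, nums2):
    """Length of the longest subarray appearing in both lists (O(m*n))."""
    m, n = len(nums1), len(nums2)
    ans = 0
    # dp[i][j] := length of the longest common subarray starting at
    # nums1[i] and nums2[j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in reversed(range(m)):
        for j in reversed(range(n)):
            if nums1[i] == nums2[j]:
                dp[i][j] = dp[i + 1][j + 1] + 1
                ans = max(ans, dp[i][j])
    return ans


result = find_length([1, 2, 3, 2, 1], [3, 2, 1, 4, 7])  # shared subarray [3, 2, 1]
```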
python | docker__docker-py | tests/unit/api_volume_test.py | {
"start": 139,
"end": 3886
} | class ____(BaseAPIClientTest):
def test_list_volumes(self):
volumes = self.client.volumes()
assert 'Volumes' in volumes
assert len(volumes['Volumes']) == 2
args = fake_request.call_args
assert args[0][0] == 'GET'
assert args[0][1] == f"{url_prefix}volumes"
def test_list_volumes_and_filters(self):
volumes = self.client.volumes(filters={'dangling': True})
assert 'Volumes' in volumes
assert len(volumes['Volumes']) == 2
args = fake_request.call_args
assert args[0][0] == 'GET'
assert args[0][1] == f"{url_prefix}volumes"
assert args[1] == {'params': {'filters': '{"dangling": ["true"]}'},
'timeout': 60}
def test_create_volume(self):
name = 'perfectcherryblossom'
result = self.client.create_volume(name)
assert 'Name' in result
assert result['Name'] == name
assert 'Driver' in result
assert result['Driver'] == 'local'
args = fake_request.call_args
assert args[0][0] == 'POST'
assert args[0][1] == f"{url_prefix}volumes/create"
assert json.loads(args[1]['data']) == {'Name': name}
@requires_api_version('1.23')
def test_create_volume_with_labels(self):
name = 'perfectcherryblossom'
result = self.client.create_volume(name, labels={
'com.example.some-label': 'some-value'
})
assert result["Labels"] == {
'com.example.some-label': 'some-value'
}
@requires_api_version('1.23')
def test_create_volume_with_invalid_labels(self):
name = 'perfectcherryblossom'
with pytest.raises(TypeError):
self.client.create_volume(name, labels=1)
def test_create_volume_with_driver(self):
name = 'perfectcherryblossom'
driver_name = 'sshfs'
self.client.create_volume(name, driver=driver_name)
args = fake_request.call_args
assert args[0][0] == 'POST'
assert args[0][1] == f"{url_prefix}volumes/create"
data = json.loads(args[1]['data'])
assert 'Driver' in data
assert data['Driver'] == driver_name
def test_create_volume_invalid_opts_type(self):
with pytest.raises(TypeError):
self.client.create_volume(
'perfectcherryblossom', driver_opts='hello=world'
)
with pytest.raises(TypeError):
self.client.create_volume(
'perfectcherryblossom', driver_opts=['hello=world']
)
with pytest.raises(TypeError):
self.client.create_volume(
'perfectcherryblossom', driver_opts=''
)
@requires_api_version('1.24')
def test_create_volume_with_no_specified_name(self):
result = self.client.create_volume(name=None)
assert 'Name' in result
assert result['Name'] is not None
assert 'Driver' in result
assert result['Driver'] == 'local'
assert 'Scope' in result
assert result['Scope'] == 'local'
def test_inspect_volume(self):
name = 'perfectcherryblossom'
result = self.client.inspect_volume(name)
assert 'Name' in result
assert result['Name'] == name
assert 'Driver' in result
assert result['Driver'] == 'local'
args = fake_request.call_args
assert args[0][0] == 'GET'
assert args[0][1] == f'{url_prefix}volumes/{name}'
def test_remove_volume(self):
name = 'perfectcherryblossom'
self.client.remove_volume(name)
args = fake_request.call_args
assert args[0][0] == 'DELETE'
assert args[0][1] == f'{url_prefix}volumes/{name}'
| VolumeTest |
python | django__django | tests/admin_inlines/models.py | {
"start": 2468,
"end": 2616
} | class ____(models.Model):
dummy = models.IntegerField()
holder = models.ForeignKey(Holder3, models.CASCADE)
# Models for ticket #8190
| Inner3 |
python | tensorflow__tensorflow | tensorflow/python/kernel_tests/numerics_test.py | {
"start": 1255,
"end": 2535
} | class ____(test.TestCase):
def testVerifyTensorAllFiniteSucceeds(self):
x_shape = [5, 4]
x = np.random.random_sample(x_shape).astype(np.float32)
with test_util.use_gpu():
t = constant_op.constant(x, shape=x_shape, dtype=dtypes.float32)
t_verified = numerics.verify_tensor_all_finite(t,
"Input is not a number.")
self.assertAllClose(x, self.evaluate(t_verified))
def testVerifyTensorAllFiniteFails(self):
x_shape = [5, 4]
x = np.random.random_sample(x_shape).astype(np.float32)
my_msg = "Input is not a number."
# Test NaN.
x[0] = np.nan
with test_util.use_gpu():
with self.assertRaisesOpError(my_msg):
t = constant_op.constant(x, shape=x_shape, dtype=dtypes.float32)
t_verified = numerics.verify_tensor_all_finite(t, my_msg)
self.evaluate(t_verified)
# Test Inf.
x[0] = np.inf
with test_util.use_gpu():
with self.assertRaisesOpError(my_msg):
t = constant_op.constant(x, shape=x_shape, dtype=dtypes.float32)
t_verified = numerics.verify_tensor_all_finite(t, my_msg)
self.evaluate(t_verified)
@test_util.run_v1_only("add_check_numerics_op() is meant to be a v1-only API")
| VerifyTensorAllFiniteTest |
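The contract these tests pin down is simple: pass the values through unchanged when every entry is finite, and raise with the supplied message when any entry is NaN or infinite. A minimal pure-Python analogue (an illustration of the contract, not the TF op):

```python
import math


def verify_all_finite(values, msg):
    """Return `values` unchanged, or raise ValueError(msg) if any
    entry is NaN or +/-inf."""
    for v in values:
        if not math.isfinite(v):
            raise ValueError(msg)
    return values


ok = verify_all_finite([0.1, 2.0, -3.5], "Input is not a number.")
```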
python | pytest-dev__pytest | testing/test_monkeypatch.py | {
"start": 6017,
"end": 9660
} | class ____:
"""
    os.environ keys and values should be native strings; otherwise they will cause problems with other modules (notably
subprocess). On Python 2 os.environ accepts anything without complaining, while Python 3 does the right thing
and raises an error.
"""
VAR_NAME = "PYTEST_INTERNAL_MY_VAR"
def test_setenv_non_str_warning(self, monkeypatch: MonkeyPatch) -> None:
value = 2
msg = (
"Value of environment variable PYTEST_INTERNAL_MY_VAR type should be str, "
"but got 2 (type: int); converted to str implicitly"
)
with pytest.warns(pytest.PytestWarning, match=re.escape(msg)):
monkeypatch.setenv(str(self.VAR_NAME), value) # type: ignore[arg-type]
def test_setenv_prepend() -> None:
import os
monkeypatch = MonkeyPatch()
monkeypatch.setenv("XYZ123", "2", prepend="-")
monkeypatch.setenv("XYZ123", "3", prepend="-")
assert os.environ["XYZ123"] == "3-2"
monkeypatch.undo()
assert "XYZ123" not in os.environ
def test_monkeypatch_plugin(pytester: Pytester) -> None:
reprec = pytester.inline_runsource(
"""
def test_method(monkeypatch):
assert monkeypatch.__class__.__name__ == "MonkeyPatch"
"""
)
res = reprec.countoutcomes()
assert tuple(res) == (1, 0, 0), res
def test_syspath_prepend(mp: MonkeyPatch) -> None:
old = list(sys.path)
mp.syspath_prepend("world")
mp.syspath_prepend("hello")
assert sys.path[0] == "hello"
assert sys.path[1] == "world"
mp.undo()
assert sys.path == old
mp.undo()
assert sys.path == old
def test_syspath_prepend_double_undo(mp: MonkeyPatch) -> None:
old_syspath = sys.path[:]
try:
mp.syspath_prepend("hello world")
mp.undo()
sys.path.append("more hello world")
mp.undo()
assert sys.path[-1] == "more hello world"
finally:
sys.path[:] = old_syspath
def test_chdir_with_path_local(mp: MonkeyPatch, tmp_path: Path) -> None:
mp.chdir(tmp_path)
assert os.getcwd() == str(tmp_path)
def test_chdir_with_str(mp: MonkeyPatch, tmp_path: Path) -> None:
mp.chdir(str(tmp_path))
assert os.getcwd() == str(tmp_path)
def test_chdir_undo(mp: MonkeyPatch, tmp_path: Path) -> None:
cwd = os.getcwd()
mp.chdir(tmp_path)
mp.undo()
assert os.getcwd() == cwd
def test_chdir_double_undo(mp: MonkeyPatch, tmp_path: Path) -> None:
mp.chdir(str(tmp_path))
mp.undo()
os.chdir(tmp_path)
mp.undo()
assert os.getcwd() == str(tmp_path)
def test_issue185_time_breaks(pytester: Pytester) -> None:
pytester.makepyfile(
"""
import time
def test_m(monkeypatch):
def f():
raise Exception
monkeypatch.setattr(time, "time", f)
"""
)
result = pytester.runpytest()
result.stdout.fnmatch_lines(
"""
*1 passed*
"""
)
def test_importerror(pytester: Pytester) -> None:
p = pytester.mkpydir("package")
p.joinpath("a.py").write_text(
textwrap.dedent(
"""\
import doesnotexist
x = 1
"""
),
encoding="utf-8",
)
pytester.path.joinpath("test_importerror.py").write_text(
textwrap.dedent(
"""\
def test_importerror(monkeypatch):
monkeypatch.setattr('package.a.x', 2)
"""
),
encoding="utf-8",
)
result = pytester.runpytest()
result.stdout.fnmatch_lines(
"""
*import error in package.a: No module named 'doesnotexist'*
"""
)
| TestEnvironWarnings |
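The `setenv`/`undo` behavior exercised above (string coercion, `prepend=`, restore-on-undo) can be sketched with a tiny stand-in patcher — this is an illustration of the semantics, not pytest's `MonkeyPatch`:

```python
import os


class EnvPatch:
    """Minimal setenv/undo sketch: remember each variable's prior value
    the first time it is touched, restore everything on undo()."""

    def __init__(self):
        self._saved = {}

    def setenv(self, name, value, prepend=None):
        if not isinstance(value, str):
            value = str(value)  # pytest warns, then converts; we just convert
        if prepend and name in os.environ:
            value = value + prepend + os.environ[name]
        # setdefault: only the *original* value is remembered, so stacked
        # setenv calls still undo back to the pre-patch state
        self._saved.setdefault(name, os.environ.get(name))
        os.environ[name] = value

    def undo(self):
        for name, old in self._saved.items():
            if old is None:
                os.environ.pop(name, None)
            else:
                os.environ[name] = old
        self._saved.clear()


patch = EnvPatch()
patch.setenv("XYZ123", "2", prepend="-")
patch.setenv("XYZ123", "3", prepend="-")
value_while_patched = os.environ["XYZ123"]
patch.undo()
restored = "XYZ123" not in os.environ
```

As in `test_setenv_prepend`, the second call sees the first value already present and prepends to it, and `undo()` removes the variable entirely because it did not exist beforehand.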
python | allegroai__clearml | clearml/backend_api/services/v2_23/queues.py | {
"start": 51237,
"end": 53024
} | class ____(Request):
"""
Gets queue information
:param queue: Queue ID
:type queue: str
:param max_task_entries: Max number of queue task entries to return
:type max_task_entries: int
"""
_service = "queues"
_action = "get_by_id"
_version = "2.23"
_schema = {
"definitions": {},
"properties": {
"max_task_entries": {
"description": "Max number of queue task entries to return",
"type": "integer",
},
"queue": {"description": "Queue ID", "type": "string"},
},
"required": ["queue"],
"type": "object",
}
def __init__(self, queue: str, max_task_entries: Optional[int] = None, **kwargs: Any) -> None:
super(GetByIdRequest, self).__init__(**kwargs)
self.queue = queue
self.max_task_entries = max_task_entries
@schema_property("queue")
def queue(self) -> str:
return self._property_queue
@queue.setter
def queue(self, value: str) -> None:
if value is None:
self._property_queue = None
return
self.assert_isinstance(value, "queue", six.string_types)
self._property_queue = value
@schema_property("max_task_entries")
def max_task_entries(self) -> Optional[int]:
return self._property_max_task_entries
@max_task_entries.setter
def max_task_entries(self, value: Optional[int]) -> None:
if value is None:
self._property_max_task_entries = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "max_task_entries", six.integer_types)
self._property_max_task_entries = value
| GetByIdRequest |
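The request class above pairs each field with a validating setter: required fields are type-checked, and whole-number floats are coerced to `int`. A minimal pure-Python version of that pattern using plain `@property` (the class and field names are reused for illustration; this is not the ClearML API):

```python
class GetByIdRequestSketch:
    """Validating-property sketch: a required string field plus an
    optional integer field that accepts whole floats."""

    def __init__(self, queue, max_task_entries=None):
        self.queue = queue
        self.max_task_entries = max_task_entries

    @property
    def queue(self):
        return self._queue

    @queue.setter
    def queue(self, value):
        if not isinstance(value, str):
            raise TypeError("queue must be a str")
        self._queue = value

    @property
    def max_task_entries(self):
        return self._max_task_entries

    @max_task_entries.setter
    def max_task_entries(self, value):
        if value is None:
            self._max_task_entries = None
            return
        if isinstance(value, float) and value.is_integer():
            value = int(value)  # accept 10.0 as 10, mirroring the original
        if not isinstance(value, int):
            raise TypeError("max_task_entries must be an integer")
        self._max_task_entries = value


req = GetByIdRequestSketch("queue-id", max_task_entries=10.0)
```

Because validation lives in the setters, it applies both at construction time and on any later assignment.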
python | keras-team__keras | keras/src/losses/losses_test.py | {
"start": 30955,
"end": 34032
} | class ____(testing.TestCase):
def setup(self):
self.y_pred = np.asarray(
[0.4, 0.9, 0.12, 0.36, 0.3, 0.4], dtype=np.float32
).reshape((2, 3))
self.y_true = np.asarray(
[0.5, 0.8, 0.12, 0.7, 0.43, 0.8], dtype=np.float32
).reshape((2, 3))
self.batch_size = 2
self.expected_losses = np.multiply(
self.y_true, np.log(self.y_true / self.y_pred)
)
def test_config(self):
k_obj = losses.KLDivergence(reduction="sum", name="kld")
self.assertEqual(k_obj.name, "kld")
self.assertEqual(k_obj.reduction, "sum")
def test_unweighted(self):
self.setup()
k_obj = losses.KLDivergence()
loss = k_obj(self.y_true, self.y_pred)
expected_loss = np.sum(self.expected_losses) / self.batch_size
self.assertAlmostEqual(loss, expected_loss, 3)
def test_scalar_weighted(self):
self.setup()
k_obj = losses.KLDivergence()
sample_weight = 2.3
loss = k_obj(self.y_true, self.y_pred, sample_weight=sample_weight)
expected_loss = (
sample_weight * np.sum(self.expected_losses) / self.batch_size
)
self.assertAlmostEqual(loss, expected_loss, 3)
# Verify we get the same output when the same input is given
loss_2 = k_obj(self.y_true, self.y_pred, sample_weight=sample_weight)
self.assertAlmostEqual(loss, loss_2, 3)
def test_sample_weighted(self):
self.setup()
k_obj = losses.KLDivergence()
sample_weight = np.asarray([1.2, 3.4], dtype=np.float32).reshape((2, 1))
loss = k_obj(self.y_true, self.y_pred, sample_weight=sample_weight)
expected_loss = np.multiply(
self.expected_losses,
np.asarray(
[1.2, 1.2, 1.2, 3.4, 3.4, 3.4], dtype=np.float32
).reshape(2, 3),
)
expected_loss = np.sum(expected_loss) / self.batch_size
self.assertAlmostEqual(loss, expected_loss, 3)
def test_timestep_weighted(self):
self.setup()
k_obj = losses.KLDivergence()
y_true = self.y_true.reshape(2, 3, 1)
y_pred = self.y_pred.reshape(2, 3, 1)
sample_weight = np.asarray([3, 6, 5, 0, 4, 2]).reshape(2, 3)
expected_losses = np.sum(
np.multiply(y_true, np.log(y_true / y_pred)), axis=-1
)
loss = k_obj(y_true, y_pred, sample_weight=sample_weight)
num_timesteps = 3
expected_loss = np.sum(expected_losses * sample_weight) / (
self.batch_size * num_timesteps
)
self.assertAlmostEqual(loss, expected_loss, 3)
def test_zero_weighted(self):
self.setup()
k_obj = losses.KLDivergence()
loss = k_obj(self.y_true, self.y_pred, sample_weight=0)
self.assertAlmostEqual(loss, 0.0, 3)
def test_dtype_arg(self):
self.setup()
k_obj = losses.KLDivergence(dtype="bfloat16")
loss = k_obj(self.y_true, self.y_pred)
self.assertDType(loss, "bfloat16")
| KLDivergenceTest |
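The expected losses in these tests are built from the pointwise terms `t * log(t / p)`, summed over each sample and averaged over the batch. A plain-Python sketch of that unweighted reduction (an illustration, not the Keras implementation):

```python
import math


def kl_divergence(y_true, y_pred):
    """Sum of t * log(t / p) over all entries, divided by batch size --
    the same reduction the unweighted test above expects."""
    batch = len(y_true)
    total = 0.0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p in zip(t_row, p_row):
            total += t * math.log(t / p)
    return total / batch


y_true = [[0.5, 0.8, 0.12], [0.7, 0.43, 0.8]]
y_pred = [[0.4, 0.9, 0.12], [0.36, 0.3, 0.4]]
loss = kl_divergence(y_true, y_pred)
```

Note the loss is exactly zero when predictions match targets, since every `log(t / t)` term vanishes.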
python | viewflow__viewflow | viewflow/templatetags/viewflow.py | {
"start": 3445,
"end": 3855
} | class ____(BaseViewsetURLNode):
"""
Reverse a url to a view within viewset
Example::
{% reverse viewset viewname args kwargs %}
"""
def _reverse_url(self, viewset, view_name, args, kwargs, current_app, context):
return viewset.reverse(
view_name, args=args, kwargs=kwargs, current_app=current_app
)
@register.tag("current_viewset_reverse")
| ViewsetURLNode |
python | walkccc__LeetCode | solutions/421. Maximum XOR of Two Numbers in an Array/421-2.py | {
"start": 854,
"end": 1167
} | class ____:
def findMaximumXOR(self, nums: list[int]) -> int:
maxNum = max(nums)
if maxNum == 0:
return 0
maxBit = int(math.log2(maxNum))
ans = 0
bitTrie = BitTrie(maxBit)
for num in nums:
ans = max(ans, bitTrie.getMaxXor(num))
bitTrie.insert(num)
return ans
| Solution |
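The bit-trie solution above runs in O(n * maxBit); a useful cross-check is the O(n^2) brute force, which simply tries every pair (a reference sketch, not the trie itself, which the record's `BitTrie` class implements):

```python
def find_maximum_xor_bruteforce(nums):
    """Maximum XOR of any two elements, by exhaustive pairing."""
    best = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            best = max(best, nums[i] ^ nums[j])
    return best


answer = find_maximum_xor_bruteforce([3, 10, 5, 25, 2, 8])  # 5 ^ 25 = 28
```

The trie version reaches the same answer by walking each number bit by bit from `maxBit` down, greedily taking the opposite bit whenever the trie contains it.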
python | kubernetes-client__python | kubernetes/base/leaderelection/resourcelock/configmaplock.py | {
"start": 850,
"end": 5857
} | class ____:
def __init__(self, name, namespace, identity):
"""
:param name: name of the lock
:param namespace: namespace
:param identity: A unique identifier that the candidate is using
"""
self.api_instance = client.CoreV1Api()
self.leader_electionrecord_annotationkey = 'control-plane.alpha.kubernetes.io/leader'
self.name = name
self.namespace = namespace
self.identity = str(identity)
self.configmap_reference = None
self.lock_record = {
'holderIdentity': None,
'leaseDurationSeconds': None,
'acquireTime': None,
'renewTime': None
}
# get returns the election record from a ConfigMap Annotation
def get(self, name, namespace):
"""
:param name: Name of the configmap object information to get
:param namespace: Namespace in which the configmap object is to be searched
:return: 'True, election record' if object found else 'False, exception response'
"""
try:
api_response = self.api_instance.read_namespaced_config_map(name, namespace)
# If an annotation does not exist - add the leader_electionrecord_annotationkey
annotations = api_response.metadata.annotations
if annotations is None or annotations == '':
api_response.metadata.annotations = {self.leader_electionrecord_annotationkey: ''}
self.configmap_reference = api_response
return True, None
# If an annotation exists but, the leader_electionrecord_annotationkey does not then add it as a key
if not annotations.get(self.leader_electionrecord_annotationkey):
api_response.metadata.annotations = {self.leader_electionrecord_annotationkey: ''}
self.configmap_reference = api_response
return True, None
lock_record = self.get_lock_object(json.loads(annotations[self.leader_electionrecord_annotationkey]))
self.configmap_reference = api_response
return True, lock_record
except ApiException as e:
return False, e
def create(self, name, namespace, election_record):
"""
        :param election_record: Annotation string
:param name: Name of the configmap object to be created
:param namespace: Namespace in which the configmap object is to be created
:return: 'True' if object is created else 'False' if failed
"""
body = client.V1ConfigMap(
metadata={"name": name,
"annotations": {self.leader_electionrecord_annotationkey: json.dumps(self.get_lock_dict(election_record))}})
try:
api_response = self.api_instance.create_namespaced_config_map(namespace, body, pretty=True)
return True
except ApiException as e:
logging.info("Failed to create lock as {}".format(e))
return False
def update(self, name, namespace, updated_record):
"""
:param name: name of the lock to be updated
:param namespace: namespace the lock is in
:param updated_record: the updated election record
:return: True if update is successful False if it fails
"""
try:
# Set the updated record
self.configmap_reference.metadata.annotations[self.leader_electionrecord_annotationkey] = json.dumps(self.get_lock_dict(updated_record))
api_response = self.api_instance.replace_namespaced_config_map(name=name, namespace=namespace,
body=self.configmap_reference)
return True
except ApiException as e:
logging.info("Failed to update lock as {}".format(e))
return False
def get_lock_object(self, lock_record):
leader_election_record = LeaderElectionRecord(None, None, None, None)
if lock_record.get('holderIdentity'):
leader_election_record.holder_identity = lock_record['holderIdentity']
if lock_record.get('leaseDurationSeconds'):
leader_election_record.lease_duration = lock_record['leaseDurationSeconds']
if lock_record.get('acquireTime'):
leader_election_record.acquire_time = lock_record['acquireTime']
if lock_record.get('renewTime'):
leader_election_record.renew_time = lock_record['renewTime']
return leader_election_record
def get_lock_dict(self, leader_election_record):
self.lock_record['holderIdentity'] = leader_election_record.holder_identity
self.lock_record['leaseDurationSeconds'] = leader_election_record.lease_duration
self.lock_record['acquireTime'] = leader_election_record.acquire_time
self.lock_record['renewTime'] = leader_election_record.renew_time
return self.lock_record | ConfigMapLock |
python | mlflow__mlflow | dev/clint/tests/rules/test_mlflow_class_name.py | {
"start": 377,
"end": 450
} | class ____:
pass
# Bad - nested occurrence of MLFlow
| CustomMLflowHandler |
python | doocs__leetcode | solution/1100-1199/1180.Count Substrings with Only One Distinct Letter/Solution2.py | {
"start": 0,
"end": 313
} | class ____:
def countLetters(self, s: str) -> int:
ans = 0
i, n = 0, len(s)
while i < n:
j = i
cnt = 0
while j < n and s[j] == s[i]:
j += 1
cnt += 1
ans += cnt
i = j
return ans
| Solution |
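In the `countLetters` record above, the inner loop adds `1 + 2 + ... + k` for each run of `k` equal characters, i.e. `k * (k + 1) // 2` substrings per run. A standalone sketch of the same logic using that closed form directly:

```python
def count_letters(s: str) -> int:
    ans = 0
    i, n = 0, len(s)
    while i < n:
        j = i
        # Advance j to the end of the run of s[i].
        while j < n and s[j] == s[i]:
            j += 1
        k = j - i
        ans += k * (k + 1) // 2  # substrings contained in a run of length k
        i = j
    return ans
```

For example, "aaaba" has runs of lengths 3, 1, 1, giving 6 + 1 + 1 = 8.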
python | scipy__scipy | scipy/stats/_continuous_distns.py | {
"start": 3732,
"end": 6317
} | class ____(rv_continuous):
r"""Kolmogorov-Smirnov one-sided test statistic distribution.
This is the distribution of the one-sided Kolmogorov-Smirnov (KS)
statistics :math:`D_n^+` and :math:`D_n^-`
for a finite sample size ``n >= 1`` (the shape parameter).
%(before_notes)s
See Also
--------
kstwobign, kstwo, kstest
Notes
-----
:math:`D_n^+` and :math:`D_n^-` are given by
.. math::
D_n^+ &= \text{sup}_x (F_n(x) - F(x)),\\
D_n^- &= \text{sup}_x (F(x) - F_n(x)),\\
where :math:`F` is a continuous CDF and :math:`F_n` is an empirical CDF.
`ksone` describes the distribution under the null hypothesis of the KS test
that the empirical CDF corresponds to :math:`n` i.i.d. random variates
with CDF :math:`F`.
%(after_notes)s
References
----------
.. [1] Birnbaum, Z. W. and Tingey, F.H. "One-sided confidence contours
for probability distribution functions", The Annals of Mathematical
Statistics, 22(4), pp 592-596 (1951).
Examples
--------
>>> import numpy as np
>>> from scipy.stats import ksone
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots(1, 1)
Display the probability density function (``pdf``):
>>> n = 1e+03
>>> x = np.linspace(ksone.ppf(0.01, n),
... ksone.ppf(0.99, n), 100)
>>> ax.plot(x, ksone.pdf(x, n),
... 'r-', lw=5, alpha=0.6, label='ksone pdf')
Alternatively, the distribution object can be called (as a function)
to fix the shape, location and scale parameters. This returns a "frozen"
RV object holding the given parameters fixed.
Freeze the distribution and display the frozen ``pdf``:
>>> rv = ksone(n)
>>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf')
>>> ax.legend(loc='best', frameon=False)
>>> plt.show()
Check accuracy of ``cdf`` and ``ppf``:
>>> vals = ksone.ppf([0.001, 0.5, 0.999], n)
>>> np.allclose([0.001, 0.5, 0.999], ksone.cdf(vals, n))
True
"""
def _argcheck(self, n):
return (n >= 1) & (n == np.round(n))
def _shape_info(self):
return [_ShapeInfo("n", True, (1, np.inf), (True, False))]
def _pdf(self, x, n):
return -scu._smirnovp(n, x)
def _cdf(self, x, n):
return scu._smirnovc(n, x)
def _sf(self, x, n):
return sc.smirnov(n, x)
def _ppf(self, q, n):
return scu._smirnovci(n, q)
def _isf(self, q, n):
return sc.smirnovi(n, q)
ksone = ksone_gen(a=0.0, b=1.0, name='ksone')
| ksone_gen |
python | plotly__plotly.py | tests/test_optional/test_tools/test_figure_factory.py | {
"start": 4118,
"end": 29438
} | class ____(TestCaseNoTemplate, NumpyTestUtilsMixin):
def test_unequal_ohlc_length(self):
# check: PlotlyError if open, high, low, close are not the same length
# for TraceFactory.create_ohlc and TraceFactory.create_candlestick
kwargs = {
"open": [1],
"high": [1, 3],
"low": [1, 2],
"close": [1, 2],
"direction": ["increasing"],
}
self.assertRaises(PlotlyError, ff.create_ohlc, **kwargs)
self.assertRaises(PlotlyError, ff.create_candlestick, **kwargs)
kwargs = {
"open": [1, 2],
"high": [1, 2, 3],
"low": [1, 2],
"close": [1, 2],
"direction": ["decreasing"],
}
self.assertRaises(PlotlyError, ff.create_ohlc, **kwargs)
self.assertRaises(PlotlyError, ff.create_candlestick, **kwargs)
kwargs = {"open": [1, 2], "high": [2, 3], "low": [0], "close": [1, 3]}
self.assertRaises(PlotlyError, ff.create_ohlc, **kwargs)
self.assertRaises(PlotlyError, ff.create_candlestick, **kwargs)
kwargs = {"open": [1, 2], "high": [2, 3], "low": [1, 2], "close": [1]}
self.assertRaises(PlotlyError, ff.create_ohlc, **kwargs)
self.assertRaises(PlotlyError, ff.create_candlestick, **kwargs)
def test_direction_arg(self):
# check: PlotlyError if direction is not defined as "increasing" or
# "decreasing" for TraceFactory.create_ohlc and
# TraceFactory.create_candlestick
kwargs = {
"open": [1, 4],
"high": [1, 5],
"low": [1, 2],
"close": [1, 2],
"direction": ["inc"],
}
self.assertRaisesRegex(
PlotlyError,
"direction must be defined as 'increasing', 'decreasing', or 'both'",
ff.create_ohlc,
**kwargs,
)
self.assertRaisesRegex(
PlotlyError,
"direction must be defined as 'increasing', 'decreasing', or 'both'",
ff.create_candlestick,
**kwargs,
)
kwargs = {
"open": [1, 2],
"high": [1, 3],
"low": [1, 2],
"close": [1, 2],
"direction": ["d"],
}
self.assertRaisesRegex(
PlotlyError,
"direction must be defined as 'increasing', 'decreasing', or 'both'",
ff.create_ohlc,
**kwargs,
)
self.assertRaisesRegex(
PlotlyError,
"direction must be defined as 'increasing', 'decreasing', or 'both'",
ff.create_candlestick,
**kwargs,
)
def test_high_highest_value(self):
# check: PlotlyError if the "high" value is less than the corresponding
# open, low, or close value because if the "high" value is not the
# highest (or equal) then the data may have been entered incorrectly.
kwargs = {"open": [2, 3], "high": [4, 2], "low": [1, 1], "close": [1, 2]}
self.assertRaisesRegex(
PlotlyError,
"Oops! Looks like some of "
"your high values are less "
"the corresponding open, "
"low, or close values. "
"Double check that your data "
"is entered in O-H-L-C order",
ff.create_ohlc,
**kwargs,
)
self.assertRaisesRegex(
PlotlyError,
"Oops! Looks like some of "
"your high values are less "
"the corresponding open, "
"low, or close values. "
"Double check that your data "
"is entered in O-H-L-C order",
ff.create_candlestick,
**kwargs,
)
def test_low_lowest_value(self):
# check: PlotlyError if the "low" value is greater than the
# corresponding open, high, or close value because if the "low" value
# is not the lowest (or equal) then the data may have been entered
# incorrectly.
# create_ohlc_increase
kwargs = {"open": [2, 3], "high": [4, 6], "low": [3, 1], "close": [1, 2]}
self.assertRaisesRegex(
PlotlyError,
"Oops! Looks like some of "
"your low values are greater "
"than the corresponding high"
", open, or close values. "
"Double check that your data "
"is entered in O-H-L-C order",
ff.create_ohlc,
**kwargs,
)
self.assertRaisesRegex(
PlotlyError,
"Oops! Looks like some of "
"your low values are greater "
"than the corresponding high"
", open, or close values. "
"Double check that your data "
"is entered in O-H-L-C order",
ff.create_candlestick,
**kwargs,
)
def test_one_ohlc(self):
# This should create one "increase" (i.e. close > open) ohlc stick
ohlc = ff.create_ohlc(open=[33.0], high=[33.2], low=[32.7], close=[33.1])
expected_ohlc = {
"layout": {"hovermode": "closest", "xaxis": {"zeroline": False}},
"data": [
{
"y": [33.0, 33.0, 33.2, 32.7, 33.1, 33.1, None],
"line": {"width": 1, "color": "#3D9970"},
"showlegend": False,
"name": "Increasing",
"text": ["Open", "Open", "High", "Low", "Close", "Close", ""],
"mode": "lines",
"type": "scatter",
"x": [-0.2, 0, 0, 0, 0, 0.2, None],
},
{
"y": [],
"line": {"width": 1, "color": "#FF4136"},
"showlegend": False,
"name": "Decreasing",
"text": (),
"mode": "lines",
"type": "scatter",
"x": [],
},
],
}
self.assert_fig_equal(
ohlc["data"][0], expected_ohlc["data"][0], ignore=["uid", "text"]
)
self.assert_fig_equal(
ohlc["data"][1], expected_ohlc["data"][1], ignore=["uid", "text"]
)
self.assert_fig_equal(ohlc["layout"], expected_ohlc["layout"])
def test_one_ohlc_increase(self):
# This should create one "increase" (i.e. close > open) ohlc stick
ohlc_incr = ff.create_ohlc(
open=[33.0], high=[33.2], low=[32.7], close=[33.1], direction="increasing"
)
expected_ohlc_incr = {
"data": [
{
"line": {"color": "#3D9970", "width": 1},
"mode": "lines",
"name": "Increasing",
"showlegend": False,
"text": ["Open", "Open", "High", "Low", "Close", "Close", ""],
"type": "scatter",
"x": [-0.2, 0, 0, 0, 0, 0.2, None],
"y": [33.0, 33.0, 33.2, 32.7, 33.1, 33.1, None],
}
],
"layout": {"hovermode": "closest", "xaxis": {"zeroline": False}},
}
self.assert_fig_equal(ohlc_incr["data"][0], expected_ohlc_incr["data"][0])
self.assert_fig_equal(ohlc_incr["layout"], expected_ohlc_incr["layout"])
def test_one_ohlc_decrease(self):
        # This should create one "decrease" (i.e. close < open) ohlc stick
ohlc_decr = ff.create_ohlc(
open=[33.0], high=[33.2], low=[30.7], close=[31.1], direction="decreasing"
)
expected_ohlc_decr = {
"data": [
{
"line": {"color": "#FF4136", "width": 1},
"mode": "lines",
"name": "Decreasing",
"showlegend": False,
"text": ["Open", "Open", "High", "Low", "Close", "Close", ""],
"type": "scatter",
"x": [-0.2, 0, 0, 0, 0, 0.2, None],
"y": [33.0, 33.0, 33.2, 30.7, 31.1, 31.1, None],
}
],
"layout": {"hovermode": "closest", "xaxis": {"zeroline": False}},
}
self.assert_fig_equal(ohlc_decr["data"][0], expected_ohlc_decr["data"][0])
self.assert_fig_equal(ohlc_decr["layout"], expected_ohlc_decr["layout"])
# TO-DO: put expected fig in a different file and then call to compare
def test_one_candlestick(self):
# This should create one "increase" (i.e. close > open) candlestick
can_inc = ff.create_candlestick(
open=[33.0], high=[33.2], low=[32.7], close=[33.1]
)
exp_can_inc = {
"data": [
{
"boxpoints": False,
"fillcolor": "#3D9970",
"line": {"color": "#3D9970"},
"name": "Increasing",
"showlegend": False,
"type": "box",
"whiskerwidth": 0,
"x": [0, 0, 0, 0, 0, 0],
"y": [32.7, 33.0, 33.1, 33.1, 33.1, 33.2],
},
{
"boxpoints": False,
"fillcolor": "#ff4136",
"line": {"color": "#ff4136"},
"name": "Decreasing",
"showlegend": False,
"type": "box",
"whiskerwidth": 0,
"x": [],
"y": [],
},
],
"layout": {},
}
self.assert_fig_equal(can_inc["data"][0], exp_can_inc["data"][0])
self.assert_fig_equal(can_inc["layout"], exp_can_inc["layout"])
def test_datetime_ohlc(self):
# Check expected outcome for ohlc chart with datetime xaxis
high_data = [34.20, 34.37, 33.62, 34.25, 35.18, 33.25, 35.37, 34.62]
low_data = [31.70, 30.75, 32.87, 31.62, 30.81, 32.75, 32.75, 32.87]
close_data = [34.10, 31.93, 33.37, 33.18, 31.18, 33.10, 32.93, 33.70]
open_data = [33.01, 33.31, 33.50, 32.06, 34.12, 33.05, 33.31, 33.50]
x = [
datetime.datetime(year=2013, month=3, day=4),
datetime.datetime(year=2013, month=6, day=5),
datetime.datetime(year=2013, month=9, day=6),
datetime.datetime(year=2013, month=12, day=4),
datetime.datetime(year=2014, month=3, day=5),
datetime.datetime(year=2014, month=6, day=6),
datetime.datetime(year=2014, month=9, day=4),
datetime.datetime(year=2014, month=12, day=5),
]
ohlc_d = ff.create_ohlc(open_data, high_data, low_data, close_data, dates=x)
ex_ohlc_d = {
"data": [
{
"line": {"color": "#3D9970", "width": 1},
"mode": "lines",
"name": "Increasing",
"showlegend": False,
"text": [
"Open",
"Open",
"High",
"Low",
"Close",
"Close",
"",
"Open",
"Open",
"High",
"Low",
"Close",
"Close",
"",
"Open",
"Open",
"High",
"Low",
"Close",
"Close",
"",
"Open",
"Open",
"High",
"Low",
"Close",
"Close",
"",
],
"type": "scatter",
"x": [
datetime.datetime(2013, 2, 14, 4, 48),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 21, 19, 12),
None,
datetime.datetime(2013, 11, 16, 4, 48),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 21, 19, 12),
None,
datetime.datetime(2014, 5, 19, 4, 48),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 23, 19, 12),
None,
datetime.datetime(2014, 11, 17, 4, 48),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 22, 19, 12),
None,
],
"y": [
33.01,
33.01,
34.2,
31.7,
34.1,
34.1,
None,
32.06,
32.06,
34.25,
31.62,
33.18,
33.18,
None,
33.05,
33.05,
33.25,
32.75,
33.1,
33.1,
None,
33.5,
33.5,
34.62,
32.87,
33.7,
33.7,
None,
],
},
{
"line": {"color": "#FF4136", "width": 1},
"mode": "lines",
"name": "Decreasing",
"showlegend": False,
"text": [
"Open",
"Open",
"High",
"Low",
"Close",
"Close",
"",
"Open",
"Open",
"High",
"Low",
"Close",
"Close",
"",
"Open",
"Open",
"High",
"Low",
"Close",
"Close",
"",
"Open",
"Open",
"High",
"Low",
"Close",
"Close",
"",
],
"type": "scatter",
"x": [
datetime.datetime(2013, 5, 18, 4, 48),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 22, 19, 12),
None,
datetime.datetime(2013, 8, 19, 4, 48),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 23, 19, 12),
None,
datetime.datetime(2014, 2, 15, 4, 48),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 22, 19, 12),
None,
datetime.datetime(2014, 8, 17, 4, 48),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 21, 19, 12),
None,
],
"y": [
33.31,
33.31,
34.37,
30.75,
31.93,
31.93,
None,
33.5,
33.5,
33.62,
32.87,
33.37,
33.37,
None,
34.12,
34.12,
35.18,
30.81,
31.18,
31.18,
None,
33.31,
33.31,
35.37,
32.75,
32.93,
32.93,
None,
],
},
],
"layout": {"hovermode": "closest", "xaxis": {"zeroline": False}},
}
self.assert_fig_equal(ohlc_d["data"][0], ex_ohlc_d["data"][0])
self.assert_fig_equal(ohlc_d["data"][1], ex_ohlc_d["data"][1])
self.assert_fig_equal(ohlc_d["layout"], ex_ohlc_d["layout"])
def test_datetime_candlestick(self):
# Check expected outcome for candlestick chart with datetime xaxis
high_data = [34.20, 34.37, 33.62, 34.25, 35.18, 33.25, 35.37, 34.62]
low_data = [31.70, 30.75, 32.87, 31.62, 30.81, 32.75, 32.75, 32.87]
close_data = [34.10, 31.93, 33.37, 33.18, 31.18, 33.10, 32.93, 33.70]
open_data = [33.01, 33.31, 33.50, 32.06, 34.12, 33.05, 33.31, 33.50]
x = [
datetime.datetime(year=2013, month=3, day=4),
datetime.datetime(year=2013, month=6, day=5),
datetime.datetime(year=2013, month=9, day=6),
datetime.datetime(year=2013, month=12, day=4),
datetime.datetime(year=2014, month=3, day=5),
datetime.datetime(year=2014, month=6, day=6),
datetime.datetime(year=2014, month=9, day=4),
datetime.datetime(year=2014, month=12, day=5),
]
candle = ff.create_candlestick(
open_data, high_data, low_data, close_data, dates=x
)
exp_candle = {
"data": [
{
"boxpoints": False,
"fillcolor": "#3D9970",
"line": {"color": "#3D9970"},
"name": "Increasing",
"showlegend": False,
"type": "box",
"whiskerwidth": 0,
"x": [
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 3, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2013, 12, 4, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 6, 6, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
datetime.datetime(2014, 12, 5, 0, 0),
],
"y": [
31.7,
33.01,
34.1,
34.1,
34.1,
34.2,
31.62,
32.06,
33.18,
33.18,
33.18,
34.25,
32.75,
33.05,
33.1,
33.1,
33.1,
33.25,
32.87,
33.5,
33.7,
33.7,
33.7,
34.62,
],
},
{
"boxpoints": False,
"fillcolor": "#FF4136",
"line": {"color": "#FF4136"},
"name": "Decreasing",
"showlegend": False,
"type": "box",
"whiskerwidth": 0,
"x": [
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 6, 5, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2013, 9, 6, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 3, 5, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
datetime.datetime(2014, 9, 4, 0, 0),
],
"y": [
30.75,
33.31,
31.93,
31.93,
31.93,
34.37,
32.87,
33.5,
33.37,
33.37,
33.37,
33.62,
30.81,
34.12,
31.18,
31.18,
31.18,
35.18,
32.75,
33.31,
32.93,
32.93,
32.93,
35.37,
],
},
],
"layout": {},
}
self.assert_fig_equal(candle["data"][0], exp_candle["data"][0])
self.assert_fig_equal(candle["data"][1], exp_candle["data"][1])
self.assert_fig_equal(candle["layout"], exp_candle["layout"])
| TestFinanceCharts |
python | jmcnamara__XlsxWriter | xlsxwriter/test/comparison/test_chart_points02.py | {
"start": 315,
"end": 1299
} | class ____(ExcelComparisonTest):
"""
Test file created by XlsxWriter against a file created by Excel.
"""
def setUp(self):
self.set_filename("chart_points02.xlsx")
def test_create_file(self):
"""Test the creation of an XlsxWriter file with point formatting."""
workbook = Workbook(self.got_filename)
worksheet = workbook.add_worksheet()
chart = workbook.add_chart({"type": "pie"})
data = [2, 5, 4, 1, 7, 4]
worksheet.write_column("A1", data)
chart.add_series(
{
"values": "=Sheet1!$A$1:$A$6",
"points": [
None,
{"border": {"color": "red", "dash_type": "square_dot"}},
None,
{"fill": {"color": "yellow"}},
],
}
)
worksheet.insert_chart("E9", chart)
workbook.close()
self.assertExcelEqual()
| TestCompareXLSXFiles |
python | neetcode-gh__leetcode | python/1514-path-with-maximum-probability.py | {
"start": 0,
"end": 699
} | class ____:
def maxProbability(self, n: int, edges: List[List[int]], succProb: List[float], start: int, end: int) -> float:
adj = collections.defaultdict(list)
for i in range(len(edges)):
src, dst = edges[i]
adj[src].append([dst, succProb[i]])
adj[dst].append([src, succProb[i]])
pq = [(-1, start)]
visit = set()
while pq:
prob, cur = heapq.heappop(pq)
visit.add(cur)
if cur == end:
return prob * -1
for nei, edgeProb in adj[cur]:
if nei not in visit:
heapq.heappush(pq, (prob * edgeProb, nei))
return 0
| Solution |
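The `maxProbability` record above pushes negated probabilities so Python's min-heap pops the largest probability first. A self-contained sketch of the same Dijkstra-style search, with the imports the snippet assumes and a skip for nodes popped after already being visited:

```python
import collections
import heapq

def max_probability(n, edges, succ_prob, start, end):
    adj = collections.defaultdict(list)
    for (src, dst), p in zip(edges, succ_prob):
        adj[src].append((dst, p))
        adj[dst].append((src, p))

    # Min-heap on negated probability behaves as a max-heap on probability.
    pq = [(-1.0, start)]
    visited = set()
    while pq:
        neg_prob, cur = heapq.heappop(pq)
        if cur in visited:
            continue
        visited.add(cur)
        if cur == end:
            return -neg_prob
        for nei, edge_prob in adj[cur]:
            if nei not in visited:
                # Multiplying negatives by probabilities in (0, 1] keeps
                # the best (most probable) path at the top of the heap.
                heapq.heappush(pq, (neg_prob * edge_prob, nei))
    return 0.0
```

For the graph `edges=[[0,1],[1,2],[0,2]]` with `succProb=[0.5, 0.5, 0.2]`, the best path 0→1→2 yields 0.25, beating the direct edge's 0.2.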
python | python__mypy | mypy/test/teststubgen.py | {
"start": 4483,
"end": 5119
} | class ____(unittest.TestCase):
def test_walk_packages(self) -> None:
with ModuleInspect() as m:
assert_equal(set(walk_packages(m, ["mypy.errors"])), {"mypy.errors"})
assert_equal(
set(walk_packages(m, ["mypy.errors", "mypy.stubgen"])),
{"mypy.errors", "mypy.stubgen"},
)
all_mypy_packages = set(walk_packages(m, ["mypy"]))
self.assertTrue(
all_mypy_packages.issuperset(
{"mypy", "mypy.errors", "mypy.stubgen", "mypy.test", "mypy.test.helpers"}
)
)
| StubgenCliParseSuite |
python | getsentry__sentry | src/sentry/workflow_engine/migrations/0087_relink_crons_to_compatible_issue_workflows.py | {
"start": 1338,
"end": 1616
} | class ____:
"""Represents an action with its type and config."""
type: str
config: dict[str, Any]
def to_dict(self) -> dict[str, Any]:
return {
"type": self.type,
"config": self.config,
}
@dataclass(frozen=True)
| ActionData |
python | sphinx-doc__sphinx | sphinx/builders/latex/util.py | {
"start": 117,
"end": 1703
} | class ____(Babel):
cyrillic_languages = ('bulgarian', 'kazakh', 'mongolian', 'russian', 'ukrainian')
def __init__(self, language_code: str, use_polyglossia: bool = False) -> None:
self.language_code = language_code
self.use_polyglossia = use_polyglossia
self.supported = True
super().__init__(language_code)
def uses_cyrillic(self) -> bool:
return self.language in self.cyrillic_languages
def is_supported_language(self) -> bool:
return self.supported
def language_name(self, language_code: str) -> str:
language = super().language_name(language_code)
if language == 'ngerman' and self.use_polyglossia:
            # polyglossia refers to the new orthography (Neue Rechtschreibung)
            # as 'german' (with the new-spelling option).
return 'german'
elif language:
return language
elif language_code.startswith('zh'):
return 'english' # fallback to english (behaves like supported)
else:
self.supported = False
return 'english' # fallback to english
def get_mainlanguage_options(self) -> str | None:
r"""Return options for polyglossia's ``\setmainlanguage``."""
if self.use_polyglossia is False:
return None
elif self.language == 'german':
language = super().language_name(self.language_code)
if language == 'ngerman':
return 'spelling=new'
else:
return 'spelling=old'
else:
return None
| ExtBabel |
python | arrow-py__arrow | tests/test_locales.py | {
"start": 94037,
"end": 96057
} | class ____:
def test_format_timeframe(self):
# Now
assert self.locale._format_timeframe("now", 0) == "sada"
# Second(s)
assert self.locale._format_timeframe("second", 1) == "sekundu"
assert self.locale._format_timeframe("seconds", 3) == "3 sekunde"
assert self.locale._format_timeframe("seconds", 30) == "30 sekundi"
# Minute(s)
assert self.locale._format_timeframe("minute", 1) == "minutu"
assert self.locale._format_timeframe("minutes", 4) == "4 minute"
assert self.locale._format_timeframe("minutes", 40) == "40 minuta"
# Hour(s)
assert self.locale._format_timeframe("hour", 1) == "sat"
assert self.locale._format_timeframe("hours", 3) == "3 sata"
assert self.locale._format_timeframe("hours", 23) == "23 sati"
# Day(s)
assert self.locale._format_timeframe("day", 1) == "dan"
assert self.locale._format_timeframe("days", 4) == "4 dana"
assert self.locale._format_timeframe("days", 12) == "12 dana"
# Week(s)
assert self.locale._format_timeframe("week", 1) == "nedelju"
assert self.locale._format_timeframe("weeks", 2) == "2 nedelje"
assert self.locale._format_timeframe("weeks", 11) == "11 nedelja"
# Month(s)
assert self.locale._format_timeframe("month", 1) == "mesec"
assert self.locale._format_timeframe("months", 2) == "2 meseca"
assert self.locale._format_timeframe("months", 11) == "11 meseci"
# Year(s)
assert self.locale._format_timeframe("year", 1) == "godinu"
assert self.locale._format_timeframe("years", 2) == "2 godine"
assert self.locale._format_timeframe("years", 12) == "12 godina"
def test_weekday(self):
dt = arrow.Arrow(2015, 4, 11, 17, 30, 00)
assert self.locale.day_name(dt.isoweekday()) == "subota"
assert self.locale.day_abbreviation(dt.isoweekday()) == "su"
@pytest.mark.usefixtures("lang_locale")
| TestSerbianLocale |
python | nedbat__coveragepy | tests/plugin1.py | {
"start": 1578,
"end": 1932
} | class ____(FileReporter):
"""Dead-simple FileReporter."""
def lines(self) -> set[TLineNo]:
return {105, 106, 107, 205, 206, 207}
def coverage_init(
reg: Plugins,
options: Any, # pylint: disable=unused-argument
) -> None:
"""Called by coverage to initialize the plugins here."""
reg.add_file_tracer(Plugin())
| MyFileReporter |
python | scipy__scipy | benchmarks/benchmarks/spatial.py | {
"start": 17945,
"end": 18568
} | class ____(Benchmark):
params = [10, 1000, 10000]
param_names = ['num_points']
def setup(self, num_points):
points = generate_spherical_points(50)
# any two points from the random spherical points
# will suffice for the interpolation bounds:
self.start = points[0]
self.end = points[-1]
self.t = np.linspace(0, 1, num_points)
def time_geometric_slerp_3d(self, num_points):
# time geometric_slerp() for 3D interpolation
geometric_slerp(start=self.start,
end=self.end,
t=self.t)
| GeometricSlerpBench |
python | ray-project__ray | python/ray/serve/_private/router.py | {
"start": 20846,
"end": 39735
} | class ____:
def __init__(
self,
controller_handle: ActorHandle,
deployment_id: DeploymentID,
handle_id: str,
self_actor_id: str,
handle_source: DeploymentHandleSource,
event_loop: asyncio.BaseEventLoop,
enable_strict_max_ongoing_requests: bool,
node_id: str,
availability_zone: Optional[str],
prefer_local_node_routing: bool,
resolve_request_arg_func: Coroutine = resolve_deployment_response,
request_router_class: Optional[Callable] = None,
request_router_kwargs: Optional[Dict[str, Any]] = None,
request_router: Optional[RequestRouter] = None,
_request_router_initialized_event: Optional[asyncio.Event] = None,
):
"""Used to assign requests to downstream replicas for a deployment.
The routing behavior is delegated to a RequestRouter; this is a thin
wrapper that adds metrics and logging.
"""
self._controller_handle = controller_handle
self.deployment_id = deployment_id
self._self_actor_id = self_actor_id
self._handle_source = handle_source
self._event_loop = event_loop
self._request_router_class = request_router_class
self._request_router_kwargs = (
request_router_kwargs if request_router_kwargs else {}
)
self._enable_strict_max_ongoing_requests = enable_strict_max_ongoing_requests
self._node_id = node_id
self._availability_zone = availability_zone
self._prefer_local_node_routing = prefer_local_node_routing
# By default, deployment is available unless we receive news
# otherwise through a long poll broadcast from the controller.
self._deployment_available = True
        # The request router is lazily loaded to decouple it from initialization.
self._request_router: Optional[RequestRouter] = request_router
if _request_router_initialized_event:
self._request_router_initialized = _request_router_initialized_event
else:
future = asyncio.run_coroutine_threadsafe(create_event(), self._event_loop)
self._request_router_initialized = future.result()
if self._request_router:
self._request_router_initialized.set()
self._resolve_request_arg_func = resolve_request_arg_func
self._running_replicas: Optional[List[RunningReplicaInfo]] = None
# Flipped to `True` once the router has received a non-empty
# replica set at least once.
self._running_replicas_populated: bool = False
# Initializing `self._metrics_manager` before `self.long_poll_client` is
# necessary to avoid race condition where `self.update_deployment_config()`
# might be called before `self._metrics_manager` instance is created.
self._metrics_manager = RouterMetricsManager(
deployment_id,
handle_id,
self_actor_id,
handle_source,
controller_handle,
metrics.Counter(
"serve_num_router_requests",
description="The number of requests processed by the router.",
tag_keys=("deployment", "route", "application", "handle", "actor_id"),
),
metrics.Gauge(
"serve_deployment_queued_queries",
description=(
"The current number of queries to this deployment waiting"
" to be assigned to a replica."
),
tag_keys=("deployment", "application", "handle", "actor_id"),
),
metrics.Gauge(
"serve_num_ongoing_requests_at_replicas",
description=(
"The current number of requests to this deployment that "
"have been submitted to a replica."
),
tag_keys=("deployment", "application", "handle", "actor_id"),
),
event_loop,
)
# The Router needs to stay informed about changes to the target deployment's
# running replicas and deployment config. We do this via the long poll system.
# However, for efficiency, we don't want to create a LongPollClient for every
# DeploymentHandle, so we use a shared LongPollClient that all Routers
# register themselves with. But first, the router needs to get a fast initial
# update so that it can start serving requests, which we do with a dedicated
# LongPollClient that stops running once the shared client takes over.
self.long_poll_client = LongPollClient(
controller_handle,
{
(
LongPollNamespace.DEPLOYMENT_TARGETS,
deployment_id,
): self.update_deployment_targets,
(
LongPollNamespace.DEPLOYMENT_CONFIG,
deployment_id,
): self.update_deployment_config,
},
call_in_event_loop=self._event_loop,
)
shared = SharedRouterLongPollClient.get_or_create(
controller_handle, self._event_loop
)
shared.register(self)
@property
def request_router(self) -> Optional[RequestRouter]:
"""Get and lazy loading request router.
If the request_router_class not provided, and the request router is not
yet initialized, then it will return None. Otherwise, if request router
is not yet initialized, it will be initialized and returned. Also,
setting `self._request_router_initialized` to signal that the request
router is initialized.
"""
if not self._request_router and self._request_router_class:
request_router = self._request_router_class(
deployment_id=self.deployment_id,
handle_source=self._handle_source,
self_node_id=self._node_id,
self_actor_id=self._self_actor_id,
self_actor_handle=ray.get_runtime_context().current_actor
if ray.get_runtime_context().get_actor_id()
else None,
# Streaming ObjectRefGenerators are not supported in Ray Client
use_replica_queue_len_cache=self._enable_strict_max_ongoing_requests,
create_replica_wrapper_func=lambda r: RunningReplica(r),
prefer_local_node_routing=self._prefer_local_node_routing,
prefer_local_az_routing=RAY_SERVE_PROXY_PREFER_LOCAL_AZ_ROUTING,
self_availability_zone=self._availability_zone,
)
request_router.initialize_state(**(self._request_router_kwargs))
# Populate the running replicas if they are already available.
if self._running_replicas is not None:
request_router._update_running_replicas(self._running_replicas)
self._request_router = request_router
self._request_router_initialized.set()
# Log usage telemetry to indicate that custom request router
# feature is being used in this cluster.
if (
self._request_router_class.__name__
!= PowerOfTwoChoicesRequestRouter.__name__
):
ServeUsageTag.CUSTOM_REQUEST_ROUTER_USED.record("1")
return self._request_router
def running_replicas_populated(self) -> bool:
return self._running_replicas_populated
def update_deployment_targets(self, deployment_target_info: DeploymentTargetInfo):
self._deployment_available = deployment_target_info.is_available
running_replicas = deployment_target_info.running_replicas
if self.request_router:
self.request_router._update_running_replicas(running_replicas)
else:
# In this case, the request router hasn't been initialized yet.
# Store the running replicas so that we can update the request
# router once it is initialized.
self._running_replicas = running_replicas
self._metrics_manager._update_running_replicas(running_replicas)
if running_replicas:
self._running_replicas_populated = True
def update_deployment_config(self, deployment_config: DeploymentConfig):
self._request_router_class = (
deployment_config.request_router_config.get_request_router_class()
)
self._request_router_kwargs = (
deployment_config.request_router_config.request_router_kwargs
)
self._metrics_manager.update_deployment_config(
deployment_config,
curr_num_replicas=len(self.request_router.curr_replicas),
)
async def _resolve_request_arguments(
self,
pr: PendingRequest,
) -> None:
"""Asynchronously resolve and replace top-level request args and kwargs."""
if pr.resolved:
return
new_args = list(pr.args)
new_kwargs = pr.kwargs.copy()
# Map from index -> task for resolving positional arg
resolve_arg_tasks = {}
for i, obj in enumerate(pr.args):
task = await self._resolve_request_arg_func(obj, pr.metadata)
if task is not None:
resolve_arg_tasks[i] = task
# Map from key -> task for resolving key-word arg
resolve_kwarg_tasks = {}
for k, obj in pr.kwargs.items():
task = await self._resolve_request_arg_func(obj, pr.metadata)
if task is not None:
resolve_kwarg_tasks[k] = task
# Gather all argument resolution tasks concurrently.
if resolve_arg_tasks or resolve_kwarg_tasks:
all_tasks = list(resolve_arg_tasks.values()) + list(
resolve_kwarg_tasks.values()
)
await asyncio.wait(all_tasks)
# Update new args and new kwargs with resolved arguments
for index, task in resolve_arg_tasks.items():
new_args[index] = task.result()
for key, task in resolve_kwarg_tasks.items():
new_kwargs[key] = task.result()
pr.args = new_args
pr.kwargs = new_kwargs
pr.resolved = True
def _process_finished_request(
self,
replica_id: ReplicaID,
parent_request_id: str,
response_id: str,
result: Union[Any, RayError],
):
self._metrics_manager.dec_num_running_requests_for_replica(replica_id)
if isinstance(result, ActorDiedError):
# Replica has died but controller hasn't notified the router yet.
# Don't consider this replica for requests in the future, and retry
# routing request.
if self.request_router:
self.request_router.on_replica_actor_died(replica_id)
logger.warning(
f"{replica_id} will not be considered for future "
"requests because it has died."
)
elif isinstance(result, ActorUnavailableError):
# There are network issues, or replica has died but GCS is down so
# ActorUnavailableError will be raised until GCS recovers. For the
# time being, invalidate the cache entry so that we don't try to
# send requests to this replica without actively probing, and retry
# routing request.
if self.request_router:
self.request_router.on_replica_actor_unavailable(replica_id)
logger.warning(
f"Request failed because {replica_id} is temporarily unavailable."
)
async def _route_and_send_request_once(
self,
pr: PendingRequest,
response_id: str,
is_retry: bool,
) -> Optional[ReplicaResult]:
result: Optional[ReplicaResult] = None
replica: Optional[RunningReplica] = None
try:
num_curr_replicas = len(self.request_router.curr_replicas)
with self._metrics_manager.wrap_queued_request(is_retry, num_curr_replicas):
                # If the pending request's arguments haven't been resolved yet,
                # resolve them now. This should only be done once per request, and
                # should happen after incrementing `num_queued_requests`, so that Serve
                # can upscale the downstream deployment while arguments are resolving.
if not pr.resolved:
await self._resolve_request_arguments(pr)
replica = await self.request_router._choose_replica_for_request(
pr, is_retry=is_retry
)
# If the queue len cache is disabled or we're sending a request to Java,
# then directly send the query and hand the response back. The replica will
# never reject requests in this code path.
with_rejection = (
self._enable_strict_max_ongoing_requests
and not replica.is_cross_language
)
result = replica.try_send_request(pr, with_rejection=with_rejection)
# Proactively update the queue length cache.
self.request_router.on_send_request(replica.replica_id)
# Keep track of requests that have been sent out to replicas
if RAY_SERVE_COLLECT_AUTOSCALING_METRICS_ON_HANDLE:
_request_context = ray.serve.context._get_serve_request_context()
request_id: str = _request_context.request_id
self._metrics_manager.inc_num_running_requests_for_replica(
replica.replica_id
)
callback = partial(
self._process_finished_request,
replica.replica_id,
request_id,
response_id,
)
result.add_done_callback(callback)
if not with_rejection:
return result
queue_info = await result.get_rejection_response()
self.request_router.on_new_queue_len_info(replica.replica_id, queue_info)
if queue_info.accepted:
self.request_router.on_request_routed(pr, replica.replica_id, result)
return result
except asyncio.CancelledError:
# NOTE(edoakes): this is not strictly necessary because there are
# currently no `await` statements between getting the ref and returning,
# but I'm adding it defensively.
if result is not None:
result.cancel()
raise
except ActorDiedError:
# Replica has died but controller hasn't notified the router yet.
# Don't consider this replica for requests in the future, and retry
# routing request.
if replica is not None:
self.request_router.on_replica_actor_died(replica.replica_id)
logger.warning(
f"{replica.replica_id} will not be considered for future "
"requests because it has died."
)
except ActorUnavailableError:
# There are network issues, or replica has died but GCS is down so
# ActorUnavailableError will be raised until GCS recovers. For the
# time being, invalidate the cache entry so that we don't try to
# send requests to this replica without actively probing, and retry
# routing request.
if replica is not None:
self.request_router.on_replica_actor_unavailable(replica.replica_id)
logger.warning(f"{replica.replica_id} is temporarily unavailable.")
return None
async def route_and_send_request(
self,
pr: PendingRequest,
response_id: str,
) -> ReplicaResult:
"""Choose a replica for the request and send it.
This will block indefinitely if no replicas are available to handle the
request, so it's up to the caller to time out or cancel the request.
"""
# Wait for the router to be initialized before sending the request.
await self._request_router_initialized.wait()
is_retry = False
while True:
result = await self._route_and_send_request_once(
pr,
response_id,
is_retry,
)
if result is not None:
return result
# If the replica rejects the request, retry the routing process. The
# request will be placed on the front of the queue to avoid tail latencies.
# TODO(edoakes): this retry procedure is not perfect because it'll reset the
            # process of choosing candidate replicas (i.e., for locality-awareness).
is_retry = True
async def assign_request(
self,
request_meta: RequestMetadata,
*request_args,
**request_kwargs,
) -> ReplicaResult:
"""Assign a request to a replica and return the resulting object_ref."""
if not self._deployment_available:
raise DeploymentUnavailableError(self.deployment_id)
response_id = generate_request_id()
assign_request_task = asyncio.current_task()
ray.serve.context._add_request_pending_assignment(
request_meta.internal_request_id, response_id, assign_request_task
)
assign_request_task.add_done_callback(
lambda _: ray.serve.context._remove_request_pending_assignment(
request_meta.internal_request_id, response_id
)
)
# Wait for the router to be initialized before sending the request.
await self._request_router_initialized.wait()
with self._metrics_manager.wrap_request_assignment(request_meta):
replica_result = None
try:
replica_result = await self.route_and_send_request(
PendingRequest(
args=list(request_args),
kwargs=request_kwargs,
metadata=request_meta,
),
response_id,
)
return replica_result
except asyncio.CancelledError:
# NOTE(edoakes): this is not strictly necessary because
# there are currently no `await` statements between
# getting the ref and returning, but I'm adding it defensively.
if replica_result is not None:
replica_result.cancel()
raise
async def shutdown(self):
await self._metrics_manager.shutdown()
| AsyncioRouter |
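The argument-resolution pattern in `_resolve_request_arguments` above — collect per-argument tasks into index- and key-keyed maps, await them all together, then write results back — can be sketched in isolation. `maybe_resolve` below is a hypothetical stand-in for `_resolve_request_arg_func` (it is not part of Ray Serve); like the original, it returns a task for values needing async resolution and `None` otherwise:

```python
import asyncio

async def _slow_upper(s):
    await asyncio.sleep(0)
    return s.upper()

async def maybe_resolve(obj):
    # Hypothetical resolver: strings "need" async resolution, everything
    # else passes through untouched (signaled by returning None).
    if isinstance(obj, str):
        return asyncio.create_task(_slow_upper(obj))
    return None

async def resolve_arguments(args, kwargs):
    new_args, new_kwargs = list(args), dict(kwargs)
    # Map index -> task / key -> task, mirroring the router's approach.
    arg_tasks, kwarg_tasks = {}, {}
    for i, obj in enumerate(args):
        task = await maybe_resolve(obj)
        if task is not None:
            arg_tasks[i] = task
    for k, obj in kwargs.items():
        task = await maybe_resolve(obj)
        if task is not None:
            kwarg_tasks[k] = task
    if arg_tasks or kwarg_tasks:
        # Run all resolutions concurrently, then write results back in place.
        await asyncio.wait([*arg_tasks.values(), *kwarg_tasks.values()])
        for i, task in arg_tasks.items():
            new_args[i] = task.result()
        for k, task in kwarg_tasks.items():
            new_kwargs[k] = task.result()
    return new_args, new_kwargs

print(asyncio.run(resolve_arguments(["a", 1], {"x": "b"})))
```

Keeping the maps keyed by position/keyword is what lets results be written back to the right slot after an unordered `asyncio.wait`.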
python | weaviate__weaviate-python-client | weaviate/rbac/models.py | {
"start": 783,
"end": 860
} | class ____(TypedDict):
userId: str
userType: str
| WeaviateUserAssignment |
python | getsentry__sentry | src/sentry/plugins/base/v1.py | {
"start": 691,
"end": 1248
} | class ____(type):
def __new__(cls, name, bases, attrs):
new_cls: type[IPlugin] = type.__new__(cls, name, bases, attrs) # type: ignore[assignment]
if IPlugin in bases:
return new_cls
if not hasattr(new_cls, "title"):
new_cls.title = new_cls.__name__
if not hasattr(new_cls, "slug"):
new_cls.slug = new_cls.title.replace(" ", "-").lower()
if "logger" not in attrs:
new_cls.logger = logging.getLogger(f"sentry.plugins.{new_cls.slug}")
return new_cls
| PluginMount |
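The metaclass above fills in `title`, `slug`, and a per-plugin logger at class-creation time, skipping the abstract base itself. The same pattern in miniature (class and logger names here are illustrative, not Sentry's):

```python
import logging

class Mount(type):
    def __new__(mcs, name, bases, attrs):
        new_cls = super().__new__(mcs, name, bases, attrs)
        if not bases:
            # The abstract base class gets no derived defaults.
            return new_cls
        # Derive missing attributes from the class name, as PluginMount does.
        if not hasattr(new_cls, "title"):
            new_cls.title = name
        if not hasattr(new_cls, "slug"):
            new_cls.slug = new_cls.title.replace(" ", "-").lower()
        if "logger" not in attrs:
            new_cls.logger = logging.getLogger(f"plugins.{new_cls.slug}")
        return new_cls

class Plugin(metaclass=Mount):
    pass

class MyCoolPlugin(Plugin):
    title = "My Cool Plugin"

print(MyCoolPlugin.slug)  # my-cool-plugin
```

Checking `"logger" not in attrs` (rather than `hasattr`) means each concrete subclass gets its own logger instead of inheriting a sibling's.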
python | sqlalchemy__sqlalchemy | test/dialect/postgresql/test_types.py | {
"start": 114527,
"end": 117301
} | class ____(fixtures.TestBase):
__backend__ = True
__only_on__ = "postgresql"
__unsupported_on__ = ("postgresql+pg8000",)
@testing.combinations(
sqltypes.ARRAY, postgresql.ARRAY, argnames="array_cls"
)
@testing.combinations(
sqltypes.JSON, postgresql.JSON, postgresql.JSONB, argnames="json_cls"
)
def test_array_of_json(self, array_cls, json_cls, metadata, connection):
tbl = Table(
"json_table",
self.metadata,
Column("id", Integer, primary_key=True),
Column(
"json_col",
array_cls(json_cls),
),
)
self.metadata.create_all(connection)
connection.execute(
tbl.insert(),
[
{"id": 1, "json_col": ["foo"]},
{"id": 2, "json_col": [{"foo": "bar"}, [1]]},
{"id": 3, "json_col": [None]},
{"id": 4, "json_col": [42]},
{"id": 5, "json_col": [True]},
{"id": 6, "json_col": None},
],
)
sel = select(tbl.c.json_col).order_by(tbl.c.id)
eq_(
connection.execute(sel).fetchall(),
[
(["foo"],),
([{"foo": "bar"}, [1]],),
([None],),
([42],),
([True],),
(None,),
],
)
eq_(
connection.exec_driver_sql(
"""select json_col::text = array['"foo"']::json[]::text"""
" from json_table where id = 1"
).scalar(),
True,
)
eq_(
connection.exec_driver_sql(
"select json_col::text = "
"""array['{"foo": "bar"}', '[1]']::json[]::text"""
" from json_table where id = 2"
).scalar(),
True,
)
eq_(
connection.exec_driver_sql(
"""select json_col::text = array['null']::json[]::text"""
" from json_table where id = 3"
).scalar(),
True,
)
eq_(
connection.exec_driver_sql(
"""select json_col::text = array['42']::json[]::text"""
" from json_table where id = 4"
).scalar(),
True,
)
eq_(
connection.exec_driver_sql(
"""select json_col::text = array['true']::json[]::text"""
" from json_table where id = 5"
).scalar(),
True,
)
eq_(
connection.exec_driver_sql(
"select json_col is null from json_table where id = 6"
).scalar(),
True,
)
| ArrayJSON |
python | nryoung__algorithms | tests/test_sorting.py | {
"start": 3673,
"end": 3932
} | class ____(SortingAlgorithmTestCase):
"""
Test Selection sort on a small range from 0-9
"""
def test_selectionsort(self):
self.output = selection_sort.sort(self.input)
self.assertEqual(self.correct, self.output)
| TestSelectionort |
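The `selection_sort` module under test is not shown in this record; a textbook implementation consistent with what the test asserts would look like:

```python
def selection_sort(seq):
    # Repeatedly select the minimum of the unsorted suffix and swap it
    # into place at the front of that suffix.
    a = list(seq)
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a

print(selection_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```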
python | viewflow__viewflow | viewflow/jsonstore.py | {
"start": 7065,
"end": 7127
} | class ____(JSONFieldMixin, fields.TextField):
pass
| TextField |
python | scipy__scipy | scipy/stats/_multivariate.py | {
"start": 31146,
"end": 37100
} | class ____(multi_rv_frozen):
__class_getitem__ = None
def __init__(self, mean=None, cov=1, allow_singular=False, seed=None,
maxpts=None, abseps=1e-5, releps=1e-5):
"""Create a frozen multivariate normal distribution.
Parameters
----------
mean : array_like, default: ``[0]``
Mean of the distribution.
cov : array_like, default: ``[1]``
Symmetric positive (semi)definite covariance matrix of the
distribution.
allow_singular : bool, default: ``False``
Whether to allow a singular covariance matrix.
seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional
If `seed` is None (or `np.random`), the `numpy.random.RandomState`
singleton is used.
If `seed` is an int, a new ``RandomState`` instance is used,
seeded with `seed`.
If `seed` is already a ``Generator`` or ``RandomState`` instance
then that instance is used.
maxpts : integer, optional
The maximum number of points to use for integration of the
cumulative distribution function (default ``1000000*dim``)
abseps : float, optional
Absolute error tolerance for the cumulative distribution function
(default 1e-5)
releps : float, optional
Relative error tolerance for the cumulative distribution function
(default 1e-5)
Examples
--------
When called with the default parameters, this will create a 1D random
variable with mean 0 and covariance 1:
>>> from scipy.stats import multivariate_normal
>>> r = multivariate_normal()
>>> r.mean
array([ 0.])
>>> r.cov
array([[1.]])
""" # numpy/numpydoc#87 # noqa: E501
self._dist = multivariate_normal_gen(seed)
self.dim, self.mean, self.cov_object = (
self._dist._process_parameters(mean, cov, allow_singular))
self.allow_singular = allow_singular or self.cov_object._allow_singular
if not maxpts:
maxpts = 1000000 * self.dim
self.maxpts = maxpts
self.abseps = abseps
self.releps = releps
@property
def cov(self):
return self.cov_object.covariance
def logpdf(self, x):
x = self._dist._process_quantiles(x, self.dim)
out = self._dist._logpdf(x, self.mean, self.cov_object)
if np.any(self.cov_object.rank < self.dim):
out_of_bounds = ~self.cov_object._support_mask(x-self.mean)
out[out_of_bounds] = -np.inf
return _squeeze_output(out)
def pdf(self, x):
return np.exp(self.logpdf(x))
def logcdf(self, x, *, lower_limit=None, rng=None):
cdf = self.cdf(x, lower_limit=lower_limit, rng=rng)
# the log of a negative real is complex, and cdf can be negative
# if lower limit is greater than upper limit
cdf = cdf + 0j if np.any(cdf < 0) else cdf
out = np.log(cdf)
return out
def cdf(self, x, *, lower_limit=None, rng=None):
x = self._dist._process_quantiles(x, self.dim)
rng = self._dist._get_random_state(rng)
out = self._dist._cdf(x, self.mean, self.cov_object.covariance,
self.maxpts, self.abseps, self.releps,
lower_limit, rng)
return _squeeze_output(out)
def rvs(self, size=1, random_state=None):
return self._dist.rvs(self.mean, self.cov_object, size, random_state)
def entropy(self):
"""Computes the differential entropy of the multivariate normal.
Returns
-------
h : scalar
Entropy of the multivariate normal distribution
"""
log_pdet = self.cov_object.log_pdet
rank = self.cov_object.rank
return 0.5 * (rank * (_LOG_2PI + 1) + log_pdet)
# Set frozen generator docstrings from corresponding docstrings in
# multivariate_normal_gen and fill in default strings in class docstrings
for name in ['logpdf', 'pdf', 'logcdf', 'cdf', 'rvs']:
method = multivariate_normal_gen.__dict__[name]
method_frozen = multivariate_normal_frozen.__dict__[name]
method_frozen.__doc__ = doccer.docformat(method.__doc__,
mvn_docdict_noparams)
method.__doc__ = doccer.docformat(method.__doc__, mvn_docdict_params)
_matnorm_doc_default_callparams = """\
mean : array_like, optional
Mean of the distribution (default: `None`)
rowcov : array_like, optional
Among-row covariance matrix of the distribution (default: ``1``)
colcov : array_like, optional
Among-column covariance matrix of the distribution (default: ``1``)
"""
_matnorm_doc_callparams_note = """\
If `mean` is set to `None` then a matrix of zeros is used for the mean.
The dimensions of this matrix are inferred from the shape of `rowcov` and
`colcov`, if these are provided, or set to ``1`` if ambiguous.
`rowcov` and `colcov` can be two-dimensional array_likes specifying the
covariance matrices directly. Alternatively, a one-dimensional array will
be be interpreted as the entries of a diagonal matrix, and a scalar or
zero-dimensional array will be interpreted as this value times the
identity matrix.
"""
_matnorm_doc_frozen_callparams = ""
_matnorm_doc_frozen_callparams_note = """\
See class definition for a detailed description of parameters."""
matnorm_docdict_params = {
'_matnorm_doc_default_callparams': _matnorm_doc_default_callparams,
'_matnorm_doc_callparams_note': _matnorm_doc_callparams_note,
'_doc_random_state': _doc_random_state
}
matnorm_docdict_noparams = {
'_matnorm_doc_default_callparams': _matnorm_doc_frozen_callparams,
'_matnorm_doc_callparams_note': _matnorm_doc_frozen_callparams_note,
'_doc_random_state': _doc_random_state
}
| multivariate_normal_frozen |
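The `entropy` method above computes `0.5 * (rank * (log(2π) + 1) + log_pdet)`. For a full-rank diagonal covariance this reduces to a few lines of stdlib math — the helper name below is ours, not SciPy's:

```python
import math

def mvn_entropy_diag(variances):
    # Differential entropy of N(0, diag(variances)):
    #   0.5 * (d * (log(2*pi) + 1) + log det Sigma)
    # where log det Sigma is just the sum of the log-variances.
    d = len(variances)
    log_det = sum(math.log(v) for v in variances)
    return 0.5 * (d * (math.log(2 * math.pi) + 1) + log_det)

print(round(mvn_entropy_diag([1.0]), 5))  # 1.41894
```

This matches the frozen distribution's `entropy()` for the unit-variance case shown in its docstring example.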
python | pypa__hatch | tests/backend/builders/test_config.py | {
"start": 73583,
"end": 83663
} | class ____:
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_default(self, isolation, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
builder = MockBuilder(str(isolation))
assert isinstance(builder.config.exclude_spec, pathspec.GitIgnoreSpec)
assert builder.config.exclude_spec.match_file(f"dist{separator}file.py")
def test_global_invalid_type(self, isolation):
config = {"tool": {"hatch": {"build": {"exclude": ""}}}}
builder = MockBuilder(str(isolation), config=config)
with pytest.raises(TypeError, match="Field `tool.hatch.build.exclude` must be an array of strings"):
_ = builder.config.exclude_spec
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_global(self, isolation, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
config = {"tool": {"hatch": {"build": {"exclude": ["foo", "bar/baz"]}}}}
builder = MockBuilder(str(isolation), config=config)
assert builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert builder.config.exclude_spec.match_file(f"bar{separator}baz{separator}file.py")
assert not builder.config.exclude_spec.match_file(f"bar{separator}file.py")
def test_global_pattern_not_string(self, isolation):
config = {"tool": {"hatch": {"build": {"exclude": [0]}}}}
builder = MockBuilder(str(isolation), config=config)
with pytest.raises(TypeError, match="Pattern #1 in field `tool.hatch.build.exclude` must be a string"):
_ = builder.config.exclude_spec
def test_global_pattern_empty_string(self, isolation):
config = {"tool": {"hatch": {"build": {"exclude": [""]}}}}
builder = MockBuilder(str(isolation), config=config)
with pytest.raises(
ValueError, match="Pattern #1 in field `tool.hatch.build.exclude` cannot be an empty string"
):
_ = builder.config.exclude_spec
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_target(self, isolation, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
config = {"tool": {"hatch": {"build": {"targets": {"foo": {"exclude": ["foo", "bar/baz"]}}}}}}
builder = MockBuilder(str(isolation), config=config)
builder.PLUGIN_NAME = "foo"
assert builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert builder.config.exclude_spec.match_file(f"bar{separator}baz{separator}file.py")
assert not builder.config.exclude_spec.match_file(f"bar{separator}file.py")
def test_target_pattern_not_string(self, isolation):
config = {"tool": {"hatch": {"build": {"targets": {"foo": {"exclude": [0]}}}}}}
builder = MockBuilder(str(isolation), config=config)
builder.PLUGIN_NAME = "foo"
with pytest.raises(
TypeError, match="Pattern #1 in field `tool.hatch.build.targets.foo.exclude` must be a string"
):
_ = builder.config.exclude_spec
def test_target_pattern_empty_string(self, isolation):
config = {"tool": {"hatch": {"build": {"targets": {"foo": {"exclude": [""]}}}}}}
builder = MockBuilder(str(isolation), config=config)
builder.PLUGIN_NAME = "foo"
with pytest.raises(
ValueError, match="Pattern #1 in field `tool.hatch.build.targets.foo.exclude` cannot be an empty string"
):
_ = builder.config.exclude_spec
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_target_overrides_global(self, isolation, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
config = {"tool": {"hatch": {"build": {"exclude": ["foo"], "targets": {"foo": {"exclude": ["bar"]}}}}}}
builder = MockBuilder(str(isolation), config=config)
builder.PLUGIN_NAME = "foo"
assert not builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert builder.config.exclude_spec.match_file(f"bar{separator}file.py")
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_vcs_git(self, temp_dir, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
with temp_dir.as_cwd():
config = {"tool": {"hatch": {"build": {"exclude": ["foo"]}}}}
builder = MockBuilder(str(temp_dir), config=config)
vcs_ignore_file = temp_dir / ".gitignore"
vcs_ignore_file.write_text("/bar\n*.pyc")
assert builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert builder.config.exclude_spec.match_file(f"bar{separator}file.py")
assert builder.config.exclude_spec.match_file(f"baz{separator}bar{separator}file.pyc")
assert not builder.config.exclude_spec.match_file(f"baz{separator}bar{separator}file.py")
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_ignore_vcs_git(self, temp_dir, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
with temp_dir.as_cwd():
config = {"tool": {"hatch": {"build": {"ignore-vcs": True, "exclude": ["foo"]}}}}
builder = MockBuilder(str(temp_dir), config=config)
vcs_ignore_file = temp_dir / ".gitignore"
vcs_ignore_file.write_text("/bar\n*.pyc")
assert builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert not builder.config.exclude_spec.match_file(f"bar{separator}file.py")
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_vcs_git_boundary(self, temp_dir, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
project_dir = temp_dir / "project"
project_dir.mkdir()
(project_dir / ".git").mkdir()
with project_dir.as_cwd():
config = {"tool": {"hatch": {"build": {"exclude": ["foo"]}}}}
builder = MockBuilder(str(project_dir), config=config)
vcs_ignore_file = temp_dir / ".gitignore"
vcs_ignore_file.write_text("/bar\n*.pyc")
assert builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert not builder.config.exclude_spec.match_file(f"bar{separator}file.py")
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_vcs_git_exclude_whitelisted_file(self, temp_dir, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
with temp_dir.as_cwd():
config = {"tool": {"hatch": {"build": {"exclude": ["foo/bar"]}}}}
builder = MockBuilder(str(temp_dir), config=config)
vcs_ignore_file = temp_dir / ".gitignore"
vcs_ignore_file.write_text("foo/*\n!foo/bar")
assert builder.config.path_is_excluded(f"foo{separator}deb") is True
assert builder.config.path_is_excluded(f"foo{separator}bar") is True
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_vcs_mercurial(self, temp_dir, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
with temp_dir.as_cwd():
config = {"tool": {"hatch": {"build": {"exclude": ["foo"]}}}}
builder = MockBuilder(str(temp_dir), config=config)
vcs_ignore_file = temp_dir / ".hgignore"
vcs_ignore_file.write_text("syntax: glob\n/bar\n*.pyc")
assert builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert builder.config.exclude_spec.match_file(f"bar{separator}file.py")
assert builder.config.exclude_spec.match_file(f"baz{separator}bar{separator}file.pyc")
assert not builder.config.exclude_spec.match_file(f"baz{separator}bar{separator}file.py")
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_ignore_vcs_mercurial(self, temp_dir, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
with temp_dir.as_cwd():
config = {"tool": {"hatch": {"build": {"ignore-vcs": True, "exclude": ["foo"]}}}}
builder = MockBuilder(str(temp_dir), config=config)
vcs_ignore_file = temp_dir / ".hgignore"
vcs_ignore_file.write_text("syntax: glob\n/bar\n*.pyc")
assert builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert not builder.config.exclude_spec.match_file(f"bar{separator}file.py")
@pytest.mark.parametrize("separator", ["/", "\\"])
def test_vcs_mercurial_boundary(self, temp_dir, separator, platform):
if separator == "\\" and not platform.windows:
pytest.skip("Not running on Windows")
project_dir = temp_dir / "project"
project_dir.mkdir()
(project_dir / ".hg").mkdir()
with project_dir.as_cwd():
config = {"tool": {"hatch": {"build": {"exclude": ["foo"]}}}}
builder = MockBuilder(str(project_dir), config=config)
vcs_ignore_file = temp_dir / ".hgignore"
vcs_ignore_file.write_text("syntax: glob\n/bar\n*.pyc")
assert builder.config.exclude_spec.match_file(f"foo{separator}file.py")
assert not builder.config.exclude_spec.match_file(f"bar{separator}file.py")
def test_override_default_global_exclude_patterns(self, isolation):
builder = MockBuilder(str(isolation))
builder.config.default_global_exclude = list
assert builder.config.exclude_spec is None
assert not builder.config.path_is_excluded(".git/file")
| TestPatternExclude |
python | eventlet__eventlet | eventlet/backdoor.py | {
"start": 891,
"end": 4043
} | class ____(greenlets.greenlet):
def __init__(self, desc, hostport, locals):
self.hostport = hostport
self.locals = locals
# mangle the socket
self.desc = FileProxy(desc)
greenlets.greenlet.__init__(self)
def run(self):
try:
console = InteractiveConsole(self.locals)
console.interact()
finally:
self.switch_out()
self.finalize()
def switch(self, *args, **kw):
self.saved = sys.stdin, sys.stderr, sys.stdout
sys.stdin = sys.stdout = sys.stderr = self.desc
greenlets.greenlet.switch(self, *args, **kw)
def switch_out(self):
sys.stdin, sys.stderr, sys.stdout = self.saved
def finalize(self):
# restore the state of the socket
self.desc = None
if len(self.hostport) >= 2:
host = self.hostport[0]
port = self.hostport[1]
print("backdoor closed to %s:%s" % (host, port,))
else:
print('backdoor closed')
def backdoor_server(sock, locals=None):
""" Blocking function that runs a backdoor server on the socket *sock*,
accepting connections and running backdoor consoles for each client that
connects.
The *locals* argument is a dictionary that will be included in the locals()
of the interpreters. It can be convenient to stick important application
variables in here.
"""
listening_on = sock.getsockname()
if sock.family == socket.AF_INET:
# Expand result to IP + port
listening_on = '%s:%s' % listening_on
elif sock.family == socket.AF_INET6:
ip, port, _, _ = listening_on
listening_on = '%s:%s' % (ip, port,)
# No action needed if sock.family == socket.AF_UNIX
print("backdoor server listening on %s" % (listening_on,))
try:
while True:
socketpair = None
try:
socketpair = sock.accept()
backdoor(socketpair, locals)
except OSError as e:
# Broken pipe means it was shutdown
if get_errno(e) != errno.EPIPE:
raise
finally:
if socketpair:
socketpair[0].close()
finally:
sock.close()
def backdoor(conn_info, locals=None):
"""Sets up an interactive console on a socket with a single connected
client. This does not block the caller, as it spawns a new greenlet to
handle the console. This is meant to be called from within an accept loop
(such as backdoor_server).
"""
conn, addr = conn_info
if conn.family == socket.AF_INET:
host, port = addr
print("backdoor to %s:%s" % (host, port))
elif conn.family == socket.AF_INET6:
host, port, _, _ = addr
print("backdoor to %s:%s" % (host, port))
else:
print('backdoor opened')
fl = conn.makefile("rw")
console = SocketConsole(fl, addr, locals)
hub = hubs.get_hub()
hub.schedule_call_global(0, console.switch)
if __name__ == '__main__':
backdoor_server(eventlet.listen(('127.0.0.1', 9000)), {})
| SocketConsole |
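`SocketConsole` above works by pointing an `InteractiveConsole` at a file-like proxy over the socket, with application variables injected via `locals`. The core of that — an interpreter evaluating source against an injected namespace — can be exercised without any socket using the stdlib `code` module:

```python
import code
import contextlib
import io

# Seed the interpreter's namespace the way backdoor_server seeds `locals`
# for each connection.
interp = code.InteractiveInterpreter(locals={"answer": 41})

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    # runsource compiles and executes one chunk of input, just as the
    # console does for each line a backdoor client types.
    interp.runsource("print(answer + 1)")
print(buf.getvalue().strip())  # 42
```

What `SocketConsole.switch`/`switch_out` add on top is swapping `sys.stdin`/`stdout`/`stderr` to the socket proxy for the duration of the greenlet, so the console's I/O flows over the connection.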
python | jmcnamara__XlsxWriter | xlsxwriter/test/workbook/test_write_defined_names.py | {
"start": 299,
"end": 2618
} | class ____(unittest.TestCase):
"""
Test the Workbook _write_defined_names() method.
"""
def setUp(self):
self.fh = StringIO()
self.workbook = Workbook()
self.workbook._set_filehandle(self.fh)
def test_write_defined_names_1(self):
"""Test the _write_defined_names() method"""
self.workbook.defined_names = [["_xlnm.Print_Titles", 0, "Sheet1!$1:$1", 0]]
self.workbook._write_defined_names()
exp = """<definedNames><definedName name="_xlnm.Print_Titles" localSheetId="0">Sheet1!$1:$1</definedName></definedNames>"""
got = self.fh.getvalue()
self.assertEqual(exp, got)
def test_write_defined_names_2(self):
"""Test the _write_defined_names() method"""
self.workbook.add_worksheet()
self.workbook.add_worksheet()
self.workbook.add_worksheet("Sheet 3")
self.workbook.define_name("""'Sheet 3'!Bar""", """='Sheet 3'!$A$1""")
self.workbook.define_name("""Abc""", """=Sheet1!$A$1""")
self.workbook.define_name("""Baz""", """=0.98""")
self.workbook.define_name("""Sheet1!Bar""", """=Sheet1!$A$1""")
self.workbook.define_name("""Sheet2!Bar""", """=Sheet2!$A$1""")
self.workbook.define_name("""Sheet2!aaa""", """=Sheet2!$A$1""")
self.workbook.define_name("""'Sheet 3'!car""", '="Saab 900"')
self.workbook.define_name("""_Egg""", """=Sheet1!$A$1""")
self.workbook.define_name("""_Fog""", """=Sheet1!$A$1""")
self.workbook._prepare_defined_names()
self.workbook._write_defined_names()
exp = """<definedNames><definedName name="_Egg">Sheet1!$A$1</definedName><definedName name="_Fog">Sheet1!$A$1</definedName><definedName name="aaa" localSheetId="1">Sheet2!$A$1</definedName><definedName name="Abc">Sheet1!$A$1</definedName><definedName name="Bar" localSheetId="2">'Sheet 3'!$A$1</definedName><definedName name="Bar" localSheetId="0">Sheet1!$A$1</definedName><definedName name="Bar" localSheetId="1">Sheet2!$A$1</definedName><definedName name="Baz">0.98</definedName><definedName name="car" localSheetId="2">"Saab 900"</definedName></definedNames>"""
got = self.fh.getvalue()
self.assertEqual(exp, got)
def tearDown(self):
self.workbook.fileclosed = 1
| TestWriteDefinedNames |
python | openai__openai-python | src/openai/types/beta/realtime/realtime_server_event.py | {
"start": 3688,
"end": 5408
} | class ____(BaseModel):
event_id: str
"""The unique ID of the server event."""
response_id: str
"""The unique ID of the response that produced the audio."""
type: Literal["output_audio_buffer.cleared"]
"""The event type, must be `output_audio_buffer.cleared`."""
RealtimeServerEvent: TypeAlias = Annotated[
Union[
ConversationCreatedEvent,
ConversationItemCreatedEvent,
ConversationItemDeletedEvent,
ConversationItemInputAudioTranscriptionCompletedEvent,
ConversationItemInputAudioTranscriptionDeltaEvent,
ConversationItemInputAudioTranscriptionFailedEvent,
ConversationItemRetrieved,
ConversationItemTruncatedEvent,
ErrorEvent,
InputAudioBufferClearedEvent,
InputAudioBufferCommittedEvent,
InputAudioBufferSpeechStartedEvent,
InputAudioBufferSpeechStoppedEvent,
RateLimitsUpdatedEvent,
ResponseAudioDeltaEvent,
ResponseAudioDoneEvent,
ResponseAudioTranscriptDeltaEvent,
ResponseAudioTranscriptDoneEvent,
ResponseContentPartAddedEvent,
ResponseContentPartDoneEvent,
ResponseCreatedEvent,
ResponseDoneEvent,
ResponseFunctionCallArgumentsDeltaEvent,
ResponseFunctionCallArgumentsDoneEvent,
ResponseOutputItemAddedEvent,
ResponseOutputItemDoneEvent,
ResponseTextDeltaEvent,
ResponseTextDoneEvent,
SessionCreatedEvent,
SessionUpdatedEvent,
TranscriptionSessionUpdatedEvent,
OutputAudioBufferStarted,
OutputAudioBufferStopped,
OutputAudioBufferCleared,
],
PropertyInfo(discriminator="type"),
]
| OutputAudioBufferCleared |
python | bokeh__bokeh | src/bokeh/core/property/data_frame.py | {
"start": 2851,
"end": 3539
} | class ____(Property["DataFrame"]):
""" Accept Pandas DataFrame values.
This class is pandas-specific - a more generic one is
``EagerDataFrame()``.
This property only exists to support type validation, e.g. for "accepts"
clauses. It is not serializable itself, and is not useful to add to
Bokeh models directly.
"""
def __init__(self) -> None:
super().__init__()
def validate(self, value: Any, detail: bool = True) -> None:
import pandas as pd
if isinstance(value, pd.DataFrame):
return
msg = "" if not detail else f"expected Pandas DataFrame, got {value!r}"
raise ValueError(msg)
| PandasDataFrame |
python | ray-project__ray | python/ray/tune/utils/file_transfer.py | {
"start": 10935,
"end": 17431
} | class ____:
"""Actor wrapping around a packing job.
This actor is used for chunking the packed data into smaller chunks that
can be transferred via the object store more efficiently.
The actor will start packing the directory when initialized, and separate
chunks can be received by calling the remote ``next()`` task.
Args:
source_dir: Path to local directory to pack into tarfile.
exclude: Pattern of files to exclude, e.g.
``["*/checkpoint_*]`` to exclude trial checkpoints.
files_stats: Dict of relative filenames mapping to a tuple of
(mtime, filesize). Only files that differ from these stats
will be packed.
chunk_size_bytes: Cut bytes stream into chunks of this size in bytes.
max_size_bytes: If packed data exceeds this value, raise an error before
transfer. If ``None``, no limit is enforced.
"""
def __init__(
self,
source_dir: str,
exclude: Optional[List] = None,
files_stats: Optional[Dict[str, Tuple[float, int]]] = None,
chunk_size_bytes: int = _DEFAULT_CHUNK_SIZE_BYTES,
max_size_bytes: Optional[int] = _DEFAULT_MAX_SIZE_BYTES,
):
self.stream = _pack_dir(
source_dir=source_dir, exclude=exclude, files_stats=files_stats
)
# Get buffer size
self.stream.seek(0, 2)
file_size = self.stream.tell()
if max_size_bytes and file_size > max_size_bytes:
raise RuntimeError(
f"Packed directory {source_dir} content has a size of "
f"{_gib_string(file_size)}, which exceeds the limit "
f"of {_gib_string(max_size_bytes)}. Please check the directory "
f"contents. If you want to transfer everything, you can increase "
f"or disable the limit by passing the `max_size` argument."
)
self.chunk_size = chunk_size_bytes
self.max_size = max_size_bytes
self.iter = None
def get_full_data(self) -> bytes:
return self.stream.getvalue()
def _chunk_generator(self) -> Generator[bytes, None, None]:
self.stream.seek(0)
data = self.stream.read(self.chunk_size)
while data:
yield data
data = self.stream.read(self.chunk_size)
def next(self) -> Optional[bytes]:
if not self.iter:
self.iter = iter(self._chunk_generator())
try:
return next(self.iter)
except StopIteration:
return None
def _iter_remote(actor: ray.ActorID) -> Generator[bytes, None, None]:
"""Iterate over actor task and return as generator."""
while True:
buffer = ray.get(actor.next.remote())
if buffer is None:
return
yield buffer
def _unpack_dir(stream: io.BytesIO, target_dir: str, *, _retry: bool = True) -> None:
"""Unpack tarfile stream into target directory."""
stream.seek(0)
target_dir = os.path.normpath(target_dir)
try:
# Timeout 0 means there will be only one attempt to acquire
# the file lock. If it cannot be acquired, a TimeoutError
# will be thrown.
with TempFileLock(f"{target_dir}.lock", timeout=0):
with tarfile.open(fileobj=stream) as tar:
tar.extractall(target_dir)
except TimeoutError:
# wait, but do not do anything
with TempFileLock(f"{target_dir}.lock"):
pass
# if the dir was locked due to being deleted,
# recreate
if not os.path.exists(target_dir):
if _retry:
_unpack_dir(stream, target_dir, _retry=False)
else:
raise RuntimeError(
f"Target directory {target_dir} does not exist "
"and couldn't be recreated. "
"Please raise an issue on GitHub: "
"https://github.com/ray-project/ray/issues"
)
@ray.remote
def _unpack_from_actor(pack_actor: ray.ActorID, target_dir: str) -> None:
"""Iterate over chunks received from pack actor and unpack."""
stream = io.BytesIO()
for buffer in _iter_remote(pack_actor):
stream.write(buffer)
_unpack_dir(stream, target_dir=target_dir)
def _copy_dir(
source_dir: str,
target_dir: str,
*,
exclude: Optional[List] = None,
_retry: bool = True,
) -> None:
"""Copy dir with shutil on the actor."""
target_dir = os.path.normpath(target_dir)
try:
# Timeout 0 means there will be only one attempt to acquire
# the file lock. If it cannot be acquired, a TimeoutError
# will be thrown.
with TempFileLock(f"{target_dir}.lock", timeout=0):
_delete_path_unsafe(target_dir)
_ignore_func = None
if exclude:
def _ignore(path, names):
ignored_names = set()
rel_path = os.path.relpath(path, source_dir)
for name in names:
candidate = os.path.join(rel_path, name)
for excl in exclude:
if fnmatch.fnmatch(candidate, excl):
ignored_names.add(name)
break
return ignored_names
_ignore_func = _ignore
shutil.copytree(source_dir, target_dir, ignore=_ignore_func)
except TimeoutError:
# wait, but do not do anything
with TempFileLock(f"{target_dir}.lock"):
pass
# if the dir was locked due to being deleted,
# recreate
if not os.path.exists(target_dir):
if _retry:
_copy_dir(source_dir, target_dir, _retry=False)
else:
raise RuntimeError(
f"Target directory {target_dir} does not exist "
"and couldn't be recreated. "
"Please raise an issue on GitHub: "
"https://github.com/ray-project/ray/issues"
)
# Only export once
_remote_copy_dir = ray.remote(_copy_dir)
def _delete_path_unsafe(target_path: str):
"""Delete path (files and directories). No filelock."""
if os.path.exists(target_path):
if os.path.isdir(target_path):
shutil.rmtree(target_path)
else:
os.remove(target_path)
return True
return False
| _PackActor |
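The `_PackActor` record above documents cutting a packed byte stream into fixed-size chunks so it can move through the object store efficiently. As a minimal, self-contained sketch of that chunking pattern (a hypothetical standalone helper, not Ray's API):

```python
import io


def iter_chunks(stream: io.BytesIO, chunk_size: int):
    # Rewind, then yield fixed-size chunks until the stream is exhausted;
    # the final chunk may be shorter than chunk_size.
    stream.seek(0)
    while True:
        data = stream.read(chunk_size)
        if not data:
            return
        yield data
```

A receiver can then concatenate the chunks back into one buffer, which is what `_unpack_from_actor` does with the real actor.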
python | kamyu104__LeetCode-Solutions | Python/balance-a-binary-search-tree.py | {
"start": 66,
"end": 217
} | class ____(object):
def __init__(self, x):
self.val = x
self.left = None
self.right = None
# dfs solution with stack
| TreeNode |
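The record above (from `balance-a-binary-search-tree.py`) defines only the node type and notes a "dfs solution with stack". A self-contained sketch of the balancing approach that file name suggests (iterative inorder collection, then midpoint rebuild; the `balance_bst` helper is hypothetical) might look like:

```python
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None


def balance_bst(root):
    # Collect values in sorted order via an iterative (stack-based) inorder DFS.
    vals, stack, node = [], [], root
    while stack or node:
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        vals.append(node.val)
        node = node.right

    # Rebuild a height-balanced BST by recursively picking the middle value.
    def build(lo, hi):
        if lo > hi:
            return None
        mid = (lo + hi) // 2
        n = TreeNode(vals[mid])
        n.left = build(lo, mid - 1)
        n.right = build(mid + 1, hi)
        return n

    return build(0, len(vals) - 1)
```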
python | run-llama__llama_index | llama-index-core/llama_index/core/agent/react/types.py | {
"start": 148,
"end": 410
} | class ____(BaseModel):
"""Reasoning step."""
@abstractmethod
def get_content(self) -> str:
"""Get content."""
@property
@abstractmethod
def is_done(self) -> bool:
"""Is the reasoning step the last one."""
| BaseReasoningStep |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/streams.py | {
"start": 33717,
"end": 34101
} | class ____(IncrementalMixin, GithubStream):
"""
API docs: https://docs.github.com/en/rest/pulls/comments?apiVersion=2022-11-28#list-review-comments-in-a-repository
"""
use_cache = True
large_stream = True
def path(self, stream_slice: Mapping[str, Any] = None, **kwargs) -> str:
return f"repos/{stream_slice['repository']}/pulls/comments"
| ReviewComments |
python | realpython__materials | python-isinstance/balls.py | {
"start": 0,
"end": 106
} | class ____:
def __init__(self, color, shape):
self.color = color
self.shape = shape
| Ball |
python | kamyu104__LeetCode-Solutions | Python/cat-and-mouse.py | {
"start": 1767,
"end": 3945
} | class ____(object):
def catMouseGame(self, graph):
"""
:type graph: List[List[int]]
:rtype: int
"""
HOLE, MOUSE_START, CAT_START = range(3)
DRAW, MOUSE, CAT = range(3)
def parents(m, c, t):
if t == CAT:
for nm in graph[m]:
yield nm, c, MOUSE^CAT^t
else:
for nc in graph[c]:
if nc != HOLE:
yield m, nc, MOUSE^CAT^t
color = collections.defaultdict(int)
degree = {}
ignore = set(graph[HOLE])
for m in xrange(len(graph)):
for c in xrange(len(graph)):
degree[m, c, MOUSE] = len(graph[m])
degree[m, c, CAT] = len(graph[c])-(c in ignore)
q1 = collections.deque()
q2 = collections.deque()
for i in xrange(len(graph)):
if i == HOLE:
continue
color[HOLE, i, CAT] = MOUSE
q1.append((HOLE, i, CAT))
for t in [MOUSE, CAT]:
color[i, i, t] = CAT
q2.append((i, i, t))
while q1:
i, j, t = q1.popleft()
for ni, nj, nt in parents(i, j, t):
if color[ni, nj, nt] != DRAW:
continue
if t == CAT:
color[ni, nj, nt] = MOUSE
q1.append((ni, nj, nt))
continue
degree[ni, nj, nt] -= 1
if not degree[ni, nj, nt]:
color[ni, nj, nt] = MOUSE
q1.append((ni, nj, nt))
while q2:
i, j, t = q2.popleft()
for ni, nj, nt in parents(i, j, t):
if color[ni, nj, nt] != DRAW:
continue
if t == MOUSE:
color[ni, nj, nt] = CAT
q2.append((ni, nj, nt))
continue
degree[ni, nj, nt] -= 1
if not degree[ni, nj, nt]:
color[ni, nj, nt] = CAT
q2.append((ni, nj, nt))
return color[MOUSE_START, CAT_START, MOUSE]
| Solution2 |
python | walkccc__LeetCode | solutions/1558. Minimum Numbers of Function Calls to Make Target Array/1558.py | {
"start": 0,
"end": 190
} | class ____:
def minOperations(self, nums: list[int]) -> int:
mx = max(nums)
return (sum(num.bit_count() for num in nums) +
(0 if mx == 0 else mx.bit_length() - 1))
| Solution |
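The solution above counts set bits plus shared doublings. A sketch of the same idea without the Python 3.10+ `int.bit_count` dependency (hypothetical standalone function): every set bit costs one "+1" on that element, while doublings are shared across the array, so only the highest bit position of the maximum is paid for.

```python
def min_operations(nums):
    # One "+1" per set bit, summed over all numbers.
    ones = sum(bin(n).count("1") for n in nums)
    # Doublings are shared: pay once per bit position below the top bit of max.
    mx = max(nums)
    doublings = 0 if mx == 0 else mx.bit_length() - 1
    return ones + doublings
```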
python | facebook__pyre-check | source/interprocedural_analyses/taint/test/integration/class_interval.py | {
"start": 3092,
"end": 3325
} | class ____:
def m3(self):
return _test_source() # Interval: (-∞,+∞) /\ [9,10] = [9,10]
def propagate_source_empty(c: C6):
return _test_sink(c.m1()) # Interval: [6,7] /\ [2,5] = Empty
"""
A7: [1,2]
B7: [3,4]
"""
| E6 |
python | scikit-learn__scikit-learn | sklearn/utils/tests/test_tags.py | {
"start": 456,
"end": 524
} | class ____(TransformerMixin, BaseEstimator):
pass
| EmptyTransformer |
python | pytorch__pytorch | test/distributed/tensor/test_common_rules.py | {
"start": 556,
"end": 16758
} | class ____(DTensorContinuousTestBase):
# hard code world size to 4 as we need to test
# at least with 2d mesh
world_size = 4
def _gen_tensor_meta(self, shape):
empty_tensor = torch.empty(shape)
return TensorMeta(
empty_tensor.shape,
empty_tensor.stride(),
empty_tensor.dtype,
)
def test_einop_basic_propagation(self):
# plain einsum, mm
mesh = DeviceMesh(self.device_type(), torch.arange(self.world_size))
mm_call = aten.mm.default
# propagate col-wise sharding
mat1, mat2 = [-1, -1], [-1, 0]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 4]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([4, 8]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = einop_rule(
"mk,kn->mn", OpSchema(mm_call, (mat1_spec, mat2_spec), {})
)
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertEqual(output_spec.dim_map, [-1, 0])
# propagate row-wise sharding
mat1, mat2 = [0, -1], [-1, -1]
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = einop_rule(
"mk,kn->mn", OpSchema(mm_call, (mat1_spec, mat2_spec), {})
)
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertEqual(output_spec.dim_map, [0, -1])
# generate partial
mat1, mat2 = [-1, 0], [0, -1]
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = einop_rule(
"mk,kn->mn", OpSchema(mm_call, (mat1_spec, mat2_spec), {})
)
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertTrue(output_spec.placements[0].is_partial())
def test_einop_pointwise_propagation(self):
mesh = DeviceMesh(self.device_type(), torch.arange(self.world_size))
add_call = aten.add.Tensor
# addition
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 8]))
mat1 = [0, -1]
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
output_sharding = einop_rule(
"ij,ij->ij", OpSchema(add_call, (mat1_spec, mat1_spec), {})
)
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertEqual(output_spec.dim_map, [0, -1])
# broadcast addition
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 8]))
mat1 = [-1, 0, -1]
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([2]))
mat2_spec = DTensorSpec.from_dim_map(
mesh, [-1], [], tensor_meta=mat2_tensor_meta
)
output_sharding = einop_rule(
"ijk,k->ijk", OpSchema(add_call, (mat1_spec, mat2_spec), {})
)
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertEqual(output_spec.dim_map, [-1, 0, -1])
# broadcast to a common shape
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 8, 8]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([1, 8]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, [0, -1, -1], [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, [-1, -1], [], tensor_meta=mat2_tensor_meta
)
output_sharding = einop_rule(
"ijk,1k->ijk", OpSchema(add_call, (mat1_spec, mat2_spec), {})
)
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertEqual(output_spec.dim_map, [0, -1, -1])
def test_einop_merge_sharding(self):
# 2d mesh einop merge sharding
mesh_shape = torch.arange(self.world_size).reshape(
self.world_size // 2, self.world_size // 2
)
mesh = DeviceMesh(self.device_type(), mesh_shape)
mm_call = aten.mm.default
mat1, mat2 = [0, -1], [-1, 1]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 4]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([4, 8]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = einop_rule(
"mk,kn->mn", OpSchema(mm_call, (mat1_spec, mat2_spec), {})
)
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertEqual(output_spec.dim_map, [0, 1])
def test_einop_linearity(self):
mesh_shape = torch.arange(self.world_size).reshape(
self.world_size // 2, self.world_size // 2
)
mesh = DeviceMesh(self.device_type(), mesh_shape)
mm_call = aten.mm.default
mat1, mat2 = [0, -1], [-1, -1]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 4]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([4, 8]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [1], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
# if not turn on linearity, partial sum is not eligible to propagate, we return
# suggestion to reshard inputs with no partial sum (i.e. all_reduce one input)
output_sharding = einop_rule(
"mk,kn->mn", OpSchema(mm_call, (mat1_spec, mat2_spec), {})
)
self.assertIsNone(output_sharding.output_spec)
suggestions = output_sharding.redistribute_schema
self.assertIsNotNone(suggestions)
suggested_spec = suggestions.args_schema[0]
self.assertFalse(suggested_spec.placements[1].is_partial())
# einop prop with linearity on mm, should give back suggestion
# on converting placements to partial
output_sharding = einop_rule(
"mk,kn->mn",
OpSchema(mm_call, (mat1_spec, mat2_spec), {}),
linearity=True,
)
self.assertIsNone(output_sharding.output_spec)
suggestions = output_sharding.redistribute_schema
self.assertIsNotNone(suggestions)
mat2_spec = suggestions.args_schema[1]
# mat2 mesh dim 1 should become partial now!
self.assertTrue(mat2_spec.placements[1].is_partial())
# einop prop with linearity on point-wise, should give back suggestion
# on converting placements to partial
add_call = aten.add.Tensor
mat1, mat2 = [0, -1], [0, -1]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 6]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([8, 6]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [1], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = einop_rule(
"ij,ij->ij",
OpSchema(add_call, (mat1_spec, mat2_spec), {}),
linearity=True,
)
self.assertIsNone(output_sharding.output_spec)
suggestions = output_sharding.redistribute_schema
self.assertIsNotNone(suggestions)
mat2_spec = suggestions.args_schema[1]
# mat2 mesh dim 1 should become partial now!
self.assertTrue(mat2_spec.placements[1].is_partial())
def test_einop_multi_sharding_on_mesh_dim(self):
# einop prop with multi sharding on same mesh dim
mesh_shape = torch.arange(self.world_size)
mesh = DeviceMesh(self.device_type(), mesh_shape)
mm_call = aten.mm.default
mat1, mat2 = [0, -1], [0, -1]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 12]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([12, 4]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = einop_rule(
"mk,kn->mn", OpSchema(mm_call, (mat1_spec, mat2_spec), {})
)
output_spec = output_sharding.output_spec
self.assertIsNone(output_spec)
self.assertIsNotNone(output_sharding.redistribute_schema)
# ensure that the suggestion is to reshard the second
# arg by all_gather its tensor dim sharding
schema_suggestion = output_sharding.redistribute_schema
self.assertEqual(schema_suggestion.args_schema[0].dim_map, [0, -1])
self.assertEqual(schema_suggestion.args_schema[1].dim_map, [-1, -1])
def test_einop_errors(self):
mesh_shape = torch.arange(self.world_size).reshape(
self.world_size // 2, self.world_size // 2
)
mesh = DeviceMesh(self.device_type(), mesh_shape)
add_call = aten.add.Tensor
mat1, mat2 = [0, -1], [1, -1]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 4]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([8, 4]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
with self.assertRaisesRegex(RuntimeError, "sharded two different ways:"):
einop_rule("ij,ij->ij", OpSchema(add_call, (mat1_spec, mat2_spec), {}))
def test_pointwise_rules_broadcasting(self):
mesh = DeviceMesh(self.device_type(), torch.arange(self.world_size))
where_call = aten.where.self
inp1, inp2, inp3 = [0], [], [-1, -1]
inp1_tensor_meta = self._gen_tensor_meta(torch.Size([8]))
inp2_tensor_meta = self._gen_tensor_meta(torch.Size([]))
inp3_tensor_meta = self._gen_tensor_meta(torch.Size([1, 1]))
condition = DTensorSpec.from_dim_map(
mesh, inp1, [], tensor_meta=inp1_tensor_meta
)
self_tensor = DTensorSpec.from_dim_map(
mesh, inp2, [], tensor_meta=inp2_tensor_meta
)
other_tensor = DTensorSpec.from_dim_map(
mesh, inp3, [], tensor_meta=inp3_tensor_meta
)
# propagate point-wise sharding with broadcasting
output_sharding = pointwise_rule(
OpSchema(where_call, (condition, self_tensor, other_tensor), {})
)
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertEqual(output_spec.dim_map, [-1, 0])
def test_pointwise_rules_suggestion(self):
mesh = DeviceMesh(self.device_type(), torch.arange(self.world_size))
lerp_call = aten.lerp.Scalar
# propagate point-wise sharding
inp1, inp2 = [-1, -1], [-1, 0]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([8, 4]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([8, 4]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, inp1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, inp2, [], tensor_meta=mat2_tensor_meta
)
# adding a positional argument -1 to arg schema
output_sharding = pointwise_rule(
OpSchema(lerp_call, (mat1_spec, mat2_spec, -1), {})
)
self.assertIsNone(output_sharding.output_spec)
self.assertIsNotNone(output_sharding.redistribute_schema)
# ensure that the suggestion from pointwise rules still have
# the positional args that are not DTensorSpec
schema_suggestion = output_sharding.redistribute_schema
self.assertEqual(len(schema_suggestion.args_schema), 3)
self.assertEqual(schema_suggestion.args_schema[2], -1)
def test_pointwise_multi_sharding_on_mesh_dim(self):
# 2d mesh pointwise sharding
mesh_shape = torch.arange(self.world_size).reshape(
self.world_size // 2, self.world_size // 2
)
mesh = DeviceMesh(self.device_type(), mesh_shape)
add_call = aten.add.Tensor
# basic case to test implicit broadcasting shape alignment
mat1, mat2 = [-1, 0], [0]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([20, 6]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([6]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = pointwise_rule(OpSchema(add_call, (mat1_spec, mat2_spec), {}))
output_spec = output_sharding.output_spec
self.assertIsNotNone(output_spec)
self.assertEqual(output_spec.dim_map, [-1, 0])
# more advanced case that needs reshard one input to align sharding
mat1, mat2 = [0, -1, -1, 1], [0, -1, 1]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([12, 1, 1, 8]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([12, 4, 8]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = pointwise_rule(OpSchema(add_call, (mat1_spec, mat2_spec), {}))
output_spec = output_sharding.output_spec
self.assertIsNone(output_spec)
self.assertIsNotNone(output_sharding.redistribute_schema)
# ensure that the suggestion is to reshard the first
# arg by all_gather first tensor dim sharding
schema_suggestion = output_sharding.redistribute_schema
self.assertEqual(schema_suggestion.args_schema[0].dim_map, [-1, -1, -1, 1])
self.assertEqual(schema_suggestion.args_schema[1].dim_map, mat2)
def test_pointwise_enforce_sharding_multi_sharding_on_mesh_dim(self):
# 2d mesh pointwise sharding
mesh_shape = torch.arange(self.world_size).reshape(
self.world_size // 2, self.world_size // 2
)
mesh = DeviceMesh(self.device_type(), mesh_shape)
add_call = aten.add_.Tensor
# more advanced case that needs reshard one input to align sharding
mat1, mat2 = [0, -1, 1], [-1, -1, 0]
mat1_tensor_meta = self._gen_tensor_meta(torch.Size([12, 4, 8]))
mat2_tensor_meta = self._gen_tensor_meta(torch.Size([12, 1, 8]))
mat1_spec = DTensorSpec.from_dim_map(
mesh, mat1, [], tensor_meta=mat1_tensor_meta
)
mat2_spec = DTensorSpec.from_dim_map(
mesh, mat2, [], tensor_meta=mat2_tensor_meta
)
output_sharding = pointwise_rule(OpSchema(add_call, (mat1_spec, mat2_spec), {}))
output_spec = output_sharding.output_spec
self.assertIsNone(output_spec)
self.assertIsNotNone(output_sharding.redistribute_schema)
# ensure that the suggestion is to reshard the second
# arg as we should enforce the sharding of the first arg
schema_suggestion = output_sharding.redistribute_schema
self.assertEqual(schema_suggestion.args_schema[0].dim_map, mat1)
self.assertEqual(schema_suggestion.args_schema[1].dim_map, mat1)
if __name__ == "__main__":
run_tests()
| CommonRulesTest |
python | anthropics__anthropic-sdk-python | src/anthropic/types/beta/beta_bash_code_execution_tool_result_error_param.py | {
"start": 241,
"end": 564
} | class ____(TypedDict, total=False):
error_code: Required[
Literal[
"invalid_tool_input", "unavailable", "too_many_requests", "execution_time_exceeded", "output_file_too_large"
]
]
type: Required[Literal["bash_code_execution_tool_result_error"]]
| BetaBashCodeExecutionToolResultErrorParam |
python | dagster-io__dagster | python_modules/dagster/dagster/_core/definitions/reconstruct.py | {
"start": 5802,
"end": 6918
} | class ____(NamedTupleSerializer):
def before_unpack(self, _, unpacked_dict: dict[str, Any]) -> dict[str, Any]: # pyright: ignore[reportIncompatibleMethodOverride]
solid_selection_str = unpacked_dict.get("solid_selection_str")
solids_to_execute = unpacked_dict.get("solids_to_execute")
if solid_selection_str:
unpacked_dict["op_selection"] = json.loads(solid_selection_str)
elif solids_to_execute:
unpacked_dict["op_selection"] = solids_to_execute
return unpacked_dict
def pack_items(self, *args, **kwargs):
for k, v in super().pack_items(*args, **kwargs):
if k == "op_selection":
new_v = json.dumps(v["__set__"]) if v else None # pyright: ignore[reportCallIssue,reportArgumentType,reportIndexIssue]
yield "solid_selection_str", new_v
else:
yield k, v
@whitelist_for_serdes(
serializer=ReconstructableJobSerializer,
storage_name="ReconstructablePipeline",
storage_field_names={
"job_name": "pipeline_name",
},
)
| ReconstructableJobSerializer |
python | joke2k__faker | faker/providers/person/ga_IE/__init__.py | {
"start": 265,
"end": 70554
} | class ____(PersonProvider):
formats = (
"{{first_name_male}} {{last_name}}",
"{{first_name_male}} {{last_name}}",
"{{first_name_male}} {{last_name}}",
"{{first_name_male}} {{last_name}}",
"{{first_name_male}} {{last_name}}-{{last_name}}",
"{{first_name_female}} {{last_name}}",
"{{first_name_female}} {{last_name}}",
"{{first_name_female}} {{last_name}}",
"{{first_name_female}} {{last_name}}",
"{{first_name_female}} {{last_name}}-{{last_name}}",
"{{prefix_male}} {{first_name_male}} {{last_name}}",
"{{prefix_female}} {{first_name_female}} {{last_name}}",
"{{prefix_male}} {{first_name_male}} {{last_name}}",
"{{prefix_female}} {{first_name_female}} {{last_name}}",
)
first_names_male = (
"Aaron",
"Adam",
"Adrian",
"Aengus",
"Aidan",
"Aiden",
"Alan",
"Albert",
"Alexander",
"Alfred",
"Alistair",
"Allan",
"Allen",
"Alphonsus",
"Ambrose",
"Andre",
"Andreas",
"Andrew",
"Angus",
"Anthony",
"Antonio",
"Aongus",
"Arnold",
"Arthur",
"Ashley",
"Augustine",
"Austin",
"Barry",
"Bartholomew",
"Bartley",
"Basil",
"Benedict",
"Benjamin",
"Bernard",
"Billy",
"Brendan",
"Brian",
"Brien",
"Bruce",
"Bryan",
"Camillus",
"Canice",
"Carl",
"Carlos",
"Cathal",
"Cecil",
"Charles",
"Christian",
"Christopher",
"Cian",
"Ciaran",
"Cillian",
"Clement",
"Clifford",
"Clinton",
"Clive",
"Coleman",
"Colin",
"Colm",
"Colman",
"Colum",
"Columba",
"Conal",
"Conall",
"Conan",
"Conleth",
"Conn",
"Connell",
"Connor",
"Conor",
"Cormac",
"Cormack",
"Cornelius",
"Craig",
"Cyril",
"Daire",
"Damian",
"Damien",
"Daniel",
"Danny",
"Dara",
"Daragh",
"Daren",
"Darin",
"Darragh",
"Darran",
"Darrell",
"Darren",
"Darrin",
"Darryl",
"David",
"Davin",
"Dean",
"Declan",
"Denis",
"Dennis",
"Dereck",
"Derek",
"Derick",
"Dermot",
"Dermott",
"Derrick",
"Desmond",
"Diarmaid",
"Diarmuid",
"Domhnall",
"Dominic",
"Dominick",
"Don",
"Donagh",
"Donal",
"Donald",
"Donnacha",
"Donncha",
"Donough",
"Douglas",
"Duncan",
"Eamon",
"Eamonn",
"Eanna",
"Edmond",
"Edmund",
"Edward",
"Edwin",
"Emmet",
"Emmett",
"Enda",
"Eoghan",
"Eoin",
"Eric",
"Ernest",
"Eugene",
"Evan",
"Fabian",
"Feargal",
"Fearghal",
"Fergal",
"Fergus",
"Fiachra",
"Finbar",
"Finbarr",
"Finian",
"Fintan",
"Fionan",
"Flannan",
"Florence",
"Francis",
"Frank",
"Frederick",
"Gabriel",
"Garech",
"Gareth",
"Garret",
"Garreth",
"Garrett",
"Garry",
"Garvan",
"Gary",
"Gavan",
"Gavin",
"Gearoid",
"Geoffrey",
"George",
"Gerald",
"Gerard",
"Gerrard",
"Gilbert",
"Glen",
"Glenn",
"Gordan",
"Gordon",
"Graham",
"Gregory",
"Guy",
"Harold",
"Henry",
"Herbert",
"Howard",
"Hubert",
"Hugh",
"Ian",
"Ivan",
"Ivor",
"Jack",
"James",
"Jarlath",
"Jarleth",
"Jason",
"Jean",
"Jeffrey",
"Jeremiah",
"Jeremy",
"Jermiah",
"Jerome",
"Jesse",
"Jim",
"John",
"Jonathan",
"Joseph",
"Jude",
"Julian",
"Justin",
"Karl",
"Keith",
"Kenneth",
"Kevin",
"Kiaran",
"Kieran",
"Kiernan",
"Kieron",
"Kilian",
"Killian",
"Kirk",
"Laurence",
"Lawrence",
"Lee",
"Leigh",
"Leo",
"Leonard",
"Leslie",
"Liam",
"Lorcan",
"Louis",
"Luke",
"Mac",
"Malachy",
"Malcolm",
"Manus",
"Marc",
"Marcus",
"Mark",
"Martin",
"Mathew",
"Matthew",
"Maurice",
"Mel",
"Melvin",
"Mervin",
"Mervyn",
"Miceal",
"Michael",
"Micheal",
"Michel",
"Morgan",
"Mortimer",
"Myles",
"Naoise",
"Neal",
"Neil",
"Neill",
"Neville",
"Nial",
"Niall",
"Nicholas",
"Nigel",
"Noel",
"Norman",
"Oisin",
"Oliver",
"Owen",
"Padraic",
"Padraig",
"Padraigh",
"Pascal",
"Paschal",
"Patrick",
"Paul",
"Pauric",
"Peadar",
"Peader",
"Pearse",
"Peter",
"Phelim",
"Philip",
"Phillip",
"Pierce",
"Ralph",
"Raphael",
"Ray",
"Raymond",
"Redmond",
"Reginald",
"Richard",
"Robert",
"Robin",
"Roderick",
"Rodger",
"Rodney",
"Roger",
"Rolf",
"Ronald",
"Ronan",
"Rory",
"Ross",
"Rossa",
"Rowan",
"Roy",
"Ruairi",
"Russell",
"Samuel",
"Scott",
"Seamus",
"Sean",
"Sebastian",
"Senan",
"Seosamh",
"Shane",
"Shaun",
"Sheamus",
"Simon",
"Spencer",
"Stanley",
"Stephen",
"Steve",
"Steven",
"Stewart",
"Stuart",
"Sylvester",
"Tadhg",
"Terence",
"Thaddeus",
"Thomas",
"Timothy",
"Tomas",
"Tony",
"Trevor",
"Troy",
"Turlough",
"Ultan",
"Valentine",
"Victor",
"Vincent",
"Vivian",
"Walter",
"Warren",
"Wayne",
"Wesley",
"William",
"Willie",
)
first_names_female = (
"Abina",
"Adele",
"Adeline",
"Adrianne",
"Adrienne",
"Aedin",
"Agnes",
"Aideen",
"Ailbhe",
"Aileen",
"Ailis",
"Ailish",
"Aine",
"Aishling",
"Aisling",
"Alexandra",
"Alexis",
"Alice",
"Alicia",
"Alison",
"Allison",
"Alma",
"Alva",
"Amanda",
"Amber",
"Amelia",
"Amy",
"Anastasia",
"Anastatia",
"Andrea",
"Andrena",
"Angela",
"Angelina",
"Angeline",
"Anita",
"Ann",
"Anna",
"Anne",
"Annette",
"Annie",
"Antoinette",
"Antonia",
"Aoife",
"April",
"Arlene",
"Ashley",
"Ashling",
"Assumpta",
"Attracta",
"Audrey",
"Averil",
"Avril",
"Bairbre",
"Barbara",
"Beatrice",
"Belinda",
"Bernadette",
"Bernadine",
"Bernice",
"Beverley",
"Blathnaid",
"Breda",
"Breeda",
"Breege",
"Breffni",
"Brenda",
"Brid",
"Bridget",
"Bridie",
"Briget",
"Brighid",
"Brigid",
"Brona",
"Bronagh",
"Bronwen",
"Bronwyn",
"Cait",
"Caitriona",
"Camilla",
"Caoimhe",
"Cara",
"Carina",
"Carla",
"Carmel",
"Carmen",
"Carol",
"Carole",
"Caroline",
"Carolyn",
"Catherina",
"Catherine",
"Catheriona",
"Cathleen",
"Cathrina",
"Cathrine",
"Cathriona",
"Cathy",
"Catriona",
"Cecelia",
"Cecilia",
"Celene",
"Celia",
"Celina",
"Celine",
"Charlotte",
"Charmaine",
"Cheryl",
"Christina",
"Christine",
"Ciara",
"Clair",
"Claire",
"Clara",
"Clare",
"Claudia",
"Claudine",
"Cliodhna",
"Cliona",
"Clodagh",
"Colette",
"Colleen",
"Collette",
"Concepta",
"Cora",
"Corinna",
"Corona",
"Cynthia",
"Dana",
"Danielle",
"Daphne",
"Dara",
"Daragh",
"Darina",
"Darragh",
"Davida",
"Davnet",
"Dawn",
"Dearbhail",
"Dearbhla",
"Debbie",
"Deborah",
"Deborrah",
"Debra",
"Deidre",
"Deirdre",
"Delia",
"Denise",
"Derval",
"Dervilla",
"Dervla",
"Diana",
"Diane",
"Diann",
"Dianne",
"Dolores",
"Dona",
"Donna",
"Dora",
"Doreen",
"Dorothy",
"Dymphna",
"Dympna",
"Eavan",
"Edel",
"Edith",
"Edwina",
"Eileen",
"Eilis",
"Eilish",
"Eimear",
"Eimer",
"Eithne",
"Elaine",
"Eleanor",
"Elena",
"Elizabeth",
"Ella",
"Ellen",
"Elva",
"Emer",
"Emily",
"Emma",
"Erica",
"Erika",
"Estelle",
"Esther",
"Ethel",
"Ethna",
"Ethne",
"Eunice",
"Eva",
"Eve",
"Eveline",
"Evelyn",
"Felicity",
"Fidelma",
"Finola",
"Fiona",
"Fionna",
"Fionnuala",
"Fionnula",
"Florence",
"Frances",
"Freda",
"Gabrielle",
"Gail",
"Gemma",
"Genevieve",
"Georgina",
"Geraldine",
"Gerardine",
"Gertrude",
"Gillian",
"Gina",
"Glenda",
"Gloria",
"Grace",
"Grainne",
"Grania",
"Gretta",
"Gwen",
"Gwendolen",
"Gwendoline",
"Hannah",
"Hanora",
"Harriet",
"Hazel",
"Heather",
"Heidi",
"Helan",
"Helen",
"Helena",
"Helga",
"Henrietta",
"Hilary",
"Hilda",
"Hillary",
"Honora",
"Ida",
"Ide",
"Imelda",
"Inez",
"Ingrid",
"Irene",
"Iris",
"Isabel",
"Isobel",
"Ita",
"Jacinta",
"Jacintha",
"Jacqueline",
"Jane",
"Janet",
"Janette",
"Janice",
"Janine",
"Jayne",
"Jean",
"Jeanette",
"Jeanne",
"Jeannette",
"Jenifer",
"Jennifer",
"Jessica",
"Jill",
"Jillian",
"Joan",
"Joanna",
"Joanne",
"Jocelyn",
"Johanna",
"Johanne",
"Josephine",
"Joy",
"Joyce",
"Juanita",
"Judith",
"Judy",
"Julia",
"Julianna",
"Julie",
"Juliet",
"Juliette",
"June",
"Justine",
"Kara",
"Karan",
"Karen",
"Karin",
"Karina",
"Kate",
"Katharina",
"Katharine",
"Katherina",
"Katherine",
"Kathleen",
"Kathryn",
"Katrina",
"Katriona",
"Kerry",
"Kim",
"Lara",
"Laura",
"Lavinia",
"Leah",
"Lena",
"Leona",
"Leone",
"Leonie",
"Leonora",
"Lesley",
"Leslie",
"Lilian",
"Lillian",
"Linda",
"Lisa",
"Liza",
"Loraine",
"Loretta",
"Loretto",
"Lorna",
"Lorraine",
"Louise",
"Loyola",
"Lucia",
"Lucinda",
"Lucy",
"Lynda",
"Lynn",
"Lynne",
"Madeline",
"Maeve",
"Maighread",
"Maire",
"Mairead",
"Mairin",
"Majella",
"Mandy",
"Marcella",
"Marese",
"Margaret",
"Marguerite",
"Maria",
"Marian",
"Marianne",
"Marie",
"Marilyn",
"Marina",
"Marion",
"Marjorie",
"Marlene",
"Martha",
"Martina",
"Mary",
"Matilda",
"Maura",
"Maureen",
"Maxine",
"Melanie",
"Melinda",
"Melissa",
"Michaela",
"Michele",
"Michell",
"Michelle",
"Miranda",
"Miriam",
"Moira",
"Mona",
"Monica",
"Monique",
"Moya",
"Muireann",
"Muriel",
"Myra",
"Nadine",
"Naimh",
"Nancy",
"Naomh",
"Naomi",
"Natalie",
"Natasha",
"Neasa",
"Nessa",
"Niamh",
"Nichola",
"Nicola",
"Nicole",
"Nina",
"Noeleen",
"Noeline",
"Noelle",
"Noirin",
"Noleen",
"Nollaig",
"Nora",
"Norah",
"Noreen",
"Norma",
"Nuala",
"Olga",
"Olive",
"Olivia",
"Olwen",
"Oonagh",
"Orla",
"Orlaith",
"Orna",
"Pamela",
"Patricia",
"Paula",
"Paulette",
"Pauline",
"Pearl",
"Penelope",
"Petrina",
"Philomena",
"Phyllis",
"Priscilla",
"Rachael",
"Rachel",
"Rebecca",
"Regina",
"Rena",
"Rhona",
"Rhonda",
"Rita",
"Roberta",
"Roisin",
"Rona",
"Rosa",
"Rosaleen",
"Rosanna",
"Rosanne",
"Rosarie",
"Rosario",
"Rose",
"Rosemarie",
"Rosemary",
"Roslyn",
"Rowena",
"Ruth",
"Sally",
"Samanta",
"Samantha",
"Sandra",
"Sara",
"Sarah",
"Saundra",
"Serena",
"Sharon",
"Shauna",
"Sheela",
"Sheelagh",
"Sheena",
"Sheila",
"Shiela",
"Shinead",
"Shirley",
"Shona",
"Sile",
"Simone",
"Sinead",
"Siobain",
"Sioban",
"Siobhain",
"Siobhan",
"Sonia",
"Sonya",
"Sophia",
"Sophie",
"Sorcha",
"Stella",
"Stephanie",
"Susan",
"Susanna",
"Susanne",
"Suzanne",
"Sylvia",
"Tania",
"Tanya",
"Tara",
"Teresa",
"Thelma",
"Theresa",
"Therese",
"Tina",
"Toni",
"Tonya",
"Tracey",
"Tracy",
"Treacy",
"Treasa",
"Trina",
"Triona",
"Una",
"Ursula",
"Valerie",
"Vanessa",
"Vera",
"Veronica",
"Victoria",
"Violet",
"Virginia",
"Vivian",
"Vivien",
"Vivienne",
"Wendy",
"Winifred",
"Yolanda",
"Yvette",
"Yvonne",
"Zita",
"Zoe",
)
first_names = first_names_male + first_names_female
last_names = (
"A tSithigh",
"Achaorainn",
"Ailín",
"Ainmneach",
"Airmeas",
"Bailís",
"Bairéad",
"Baisceir",
"Baróid",
"Barún",
"Bhailís",
"Blowick",
"Bodaicín",
"Bodhlaeir",
"Bodhlaer",
"Breasail",
"Breathnach",
"Briain",
"Briútean",
"Bruadar",
"Bruiséal",
"Brún",
"Budhlaeir",
"Burnach",
"Bácaeir",
"Bácaer",
"Béataigh",
"Béireach",
"Cadhain",
"Cafua",
"Caimbeul",
"Caimbéal",
"Callahan",
"Caomhánach",
"Capua",
"Capuaigh",
"Carmaig",
"Cartúr",
"Carville",
"Carún",
"Ceafarcaigh",
"Ceanainn",
"Ceara",
"Ceirisc",
"Ceorais",
"Ceothach",
"Ceothánach",
"Cheara",
"Ciaragáin",
"Cill-Dia",
"Cillín",
"Cinnéir",
"Ciosóg",
"Ciothaigh",
"Ciothóg",
"Ciúinín",
"Clárach",
"Coincheanainn",
"Coinnér",
"Coinnín",
"Coinín",
"Colum",
"Comartún",
"Conaola",
"Conbhae",
"Condún",
"Confhaola",
"Conrach",
"Conraoi",
"Consaidín",
"Cormican",
"Coscair",
"Criomhthain",
"Criostóir",
"Criostúir",
"Cróil",
"Cuidithe",
"Cuillín",
"Cuineáin",
"Cuirtéis",
"Curraoin",
"Céide",
"Céitinn",
"Cíosóg",
"Cúndún",
"Cúnún",
"Daltún",
"Diolún",
"Dionún",
"Doghair",
"Doingeard",
"Dorcha",
"Droma",
"Duffy",
"Dáibhís",
"Déiseach",
"Díscín",
"Dúinsméarach",
"Each",
"Eilfirt",
"Fearraigh",
"Feirtéar",
"Firtéar",
"Freis",
"Gabháin",
"Gineá",
"Ginneá",
"Ginneádha",
"Giobún",
"Gionnachtaigh",
"Glionnáin",
"Glostéir",
"Grialais",
"Gubain",
"Gugán",
"Gáineard",
"Géaran",
"Habha",
"Haicéad",
"Hynman",
"Innseadún",
"Iústás",
"Kirwan",
"Laidhléis",
"Laighnigh",
"Landy",
"Lochlann",
"Loibhéad",
"Lonndún",
"Luibhéad",
"Lás",
"Lása",
"Lúiséad",
"Lúnam",
"Mac Aidicín",
"Mac Ailpín",
"Mac Ailín",
"Mac Aindriais",
"Mac Aindriú",
"Mac Airligh",
"Mac Airt",
"Mac Aitigín",
"Mac Alastair",
"Mac Alastroim",
"Mac Allmhúráin",
"Mac Amhalghaidh",
"Mac Amhlaigh",
"Mac Amhlaoigh",
"Mac Amhlaoimh",
"Mac Anabadha",
"Mac Anna",
"Mac Annraoi",
"Mac Anraoi",
"Mac Aodha",
"Mac Aodhchain",
"Mac Aodhchaoin",
"Mac Aodhgáin",
"Mac Aodháin",
"Mac Aogáin",
"Mac Aoidh",
"Mac Aonghais",
"Mac Aonghuis",
"Mac Aonghusa",
"Mac Arta",
"Mac Artáin",
"Mac Artúir",
"Mac Bhaitéir",
"Mac Bhloscaigh",
"Mac Bhriain",
"Mac Braoin",
"Mac Braonáin",
"Mac Briartaigh",
"Mac Brádaigh",
"Mac Cafraigh",
"Mac Cailpín",
"Mac Cailín",
"Mac Cairbre",
"Mac Caiside",
"Mac Caisleáin",
"Mac Caislin",
"Mac Caisín",
"Mac Caithir",
"Mac Caitigín",
"Mac Calaigh",
"Mac Calbhaigh",
"Mac Callanáin",
"Mac Canainn",
"Mac Canna",
"Mac Caochlaigh",
"Mac Caochlaí",
"Mac Caocháin",
"Mac Caoidheáin",
"Mac Carluis",
"Mac Carmaig",
"Mac Carra",
"Mac Carrghamhna",
"Mac Carrghamhne",
"Mac Cartáin",
"Mac Casaide",
"Mac Casarlaigh",
"Mac Catailín",
"Mac Cathail",
"Mac Cathaoir",
"Mac Cathasaigh",
"Mac Cathbhaid",
"Mac Cathmhaoil",
"Mac Catháin",
"Mac Ceallabhuí",
"Mac Ceallaigh",
"Mac Ceallbhuí",
"Mac Ceamharcaigh",
"Mac Ceannabháin",
"Mac Ceanndubháin",
"Mac Cearbhaill",
"Mac Cearnaigh",
"Mac Cearáin",
"Mac Ceoinín",
"Mac Ciaráin",
"Mac Cillín",
"Mac Cinnéide",
"Mac Cionnaith",
"Mac Ciúrtáin",
"Mac Claochlaí",
"Mac Clochartaigh",
"Mac Cluanaigh",
"Mac Clúin",
"Mac Cnáimhsighe",
"Mac Cnáimhsí",
"Mac Cnáimhín",
"Mac Cobhthaigh",
"Mac Cochláin",
"Mac Coileáin",
"Mac Coiligh",
"Mac Coillín",
"Mac Coilín",
"Mac Coimín",
"Mac Coineoil",
"Mac Coingheallá",
"Mac Coinneirtinne",
"Mac Coinnich",
"Mac Coinnigh",
"Mac Coinín",
"Mac Coisdeala",
"Mac Coisdealbha",
"Mac Coisteala",
"Mac Coitir",
"Mac Colla",
"Mac Coluim",
"Mac Comhghaill",
"Mac Comní",
"Mac Con Rí",
"Mac Con Ultaigh",
"Mac Con na Buaile",
"Mac Conacha",
"Mac Conagail",
"Mac Conaill",
"Mac Conallta",
"Mac Conaola",
"Mac Conaonaigh",
"Mac Conbhuí",
"Mac Concharraige",
"Mac Conchoille",
"Mac Conchradha",
"Mac Conduibh",
"Mac Confhaola",
"Mac Confraoich",
"Mac Congail",
"Mac Conghaile",
"Mac Conghamhna",
"Mac Conleágha",
"Mac Conluain",
"Mac Conmara",
"Mac Conmhaoil",
"Mac Conmí",
"Mac Connacháin",
"Mac Connallta",
"Mac Connghamhna",
"Mac Connmhaigh",
"Mac Connáin",
"Mac Connóil",
"Mac Connól",
"Mac Conraoi",
"Mac Consaidín",
"Mac Conámha",
"Mac Conóil",
"Mac Corcoráin",
"Mac Cormaic",
"Mac Corra",
"Mac Corrghamhna",
"Mac Coscair",
"Mac Cosgair",
"Mac Costagáin",
"Mac Craith",
"Mac Craobháin",
"Mac Criomhthain",
"Mac Crosáin",
"Mac Cruitín",
"Mac Crábháin",
"Mac Créadaigh",
"Mac Críodáin",
"Mac Críonáin",
"Mac Cuag",
"Mac Cuaig",
"Mac Cualáin",
"Mac Cuarta",
"Mac Cuidithe",
"Mac Cuileannáin",
"Mac Cuileanáin",
"Mac Cuilleáin",
"Mac Cuinn",
"Mac Cuinneagáin",
"Mac Cuirc",
"Mac Cumascaigh",
"Mac Cumhail",
"Mac Cunnaidh",
"Mac Curdaigh",
"Mac Curraidh",
"Mac Curraoin",
"Mac Curtáin",
"Mac Cába",
"Mac Cárthaigh",
"Mac Céide",
"Mac Cúilriabhaigh",
"Mac Daeid",
"Mac Daibheid",
"Mac Daibhíd",
"Mac Dhiarmada",
"Mac Dhonncha",
"Mac Dhonnchadha",
"Mac Dhonnchaidh",
"Mac Dhorchaidh",
"Mac Dhuarcáin",
"Mac Dhubhghail",
"Mac Dhubhghaill",
"Mac Dhuibh",
"Mac Dhuibhir",
"Mac Dhuinneabháin",
"Mac Dhuinnshlé",
"Mac Dhuinnshléibhe",
"Mac Dháibhidh",
"Mac Dháibhis",
"Mac Dhúirnín",
"Mac Diarmada",
"Mac Domhnaill",
"Mac Donncha",
"Mac Donnchadha",
"Mac Duarcáin",
"Mac Dubhghaill",
"Mac Dubhradáin",
"Mac Duibhir",
"Mac Dáibhid",
"Mac Dáibhidh",
"Mac Dáid",
"Mac Déid",
"Mac Eachaidh",
"Mac Eachain",
"Mac Eachmharcaigh",
"Mac Eacháin",
"Mac Ealanaidh",
"Mac Eibhir",
"Mac Eiteagáin",
"Mac Eitheagáin",
"Mac Eochadha",
"Mac Eochagáin",
"Mac Eochaidh",
"Mac Eocháin",
"Mac Eoghain",
"Mac Eoin",
"Mac Eoinín",
"Mac Eóinín",
"Mac Eóthach",
"Mac Fearadhaigh",
"Mac Fhaoláin",
"Mac Fhearadhaigh",
"Mac Fhearchair",
"Mac Fheargail",
"Mac Fhearghail",
"Mac Fhearghaile",
"Mac Fhearghusa",
"Mac Fhearraigh",
"Mac Fheorais",
"Mac Fhiachra",
"Mac Fhinn",
"Mac Fhinneachtaigh",
"Mac Fhionghuin",
"Mac Fhionnachta",
"Mac Fhionnachtaigh",
"Mac Fhionnghaile",
"Mac Fhionnlaich",
"Mac Fhionnlaoich",
"Mac Fhionntaigh",
"Mac Fhionáin",
"Mac Fhlaithbheartaigh",
"Mac Fhlaithimh",
"Mac Fhlannagáin",
"Mac Fhlannchadha",
"Mac Fhlannáin",
"Mac Fhloinn",
"Mac Fhuallaigh",
"Mac Fhualáin",
"Mac Fhíontaigh",
"Mac Fhógartaigh",
"Mac Firbhisigh",
"Mac Gabhann",
"Mac Gafraigh",
"Mac Gairbhe",
"Mac Gairbhia",
"Mac Gairbhín",
"Mac Gamhna",
"Mac Gaoith",
"Mac Gaoithín",
"Mac Gaora",
"Mac Garaidh",
"Mac Gearachaigh",
"Mac Gearailt",
"Mac Gearchaigh",
"Mac Geimhridh",
"Mac Ghille Fhaoláin",
"Mac Ghille Mhaoil",
"Mac Ghille Íosa",
"Mac Ghilleathain",
"Mac Ghoill",
"Mac Gilleathain",
"Mac Ginneadha",
"Mac Ginneá",
"Mac Giobúin",
"Mac Giolla",
"Mac Giolla Bhaird",
"Mac Giolla Bhríde",
"Mac Giolla Bhuí",
"Mac Giolla Bháin",
"Mac Giolla Chaoin",
"Mac Giolla Chatáin",
"Mac Giolla Cheara",
"Mac Giolla Choda",
"Mac Giolla Choille",
"Mac Giolla Choinnigh",
"Mac Giolla Chomhghaill",
"Mac Giolla Deacair",
"Mac Giolla Dhiarmada",
"Mac Giolla Dhuibh",
"Mac Giolla Dhuinn",
"Mac Giolla Dhé",
"Mac Giolla Domhnaigh",
"Mac Giolla Easboig",
"Mac Giolla Eoghain",
"Mac Giolla Eoin",
"Mac Giolla Eáin",
"Mac Giolla Fhaoláin",
"Mac Giolla Fhinnéin",
"Mac Giolla Geimhridh",
"Mac Giolla Ghailing",
"Mac Giolla Gheimhridh",
"Mac Giolla Ghuala",
"Mac Giolla Ghunna",
"Mac Giolla Iasachta",
"Mac Giolla Luaithrinn",
"Mac Giolla Léith",
"Mac Giolla Mhuire",
"Mac Giolla Mhuiris",
"Mac Giolla Mháirtín",
"Mac Giolla Mhártain",
"Mac Giolla Mhóir",
"Mac Giolla Phádraig",
"Mac Giolla Phóil",
"Mac Giolla Riabhaigh",
"Mac Giolla Rua",
"Mac Giolla Seanáin",
"Mac Giolla Tuile",
"Mac Giolla Uidhir",
"Mac Giolla an Chloig",
"Mac Giolla an Átha",
"Mac Giolla na Naomh",
"Mac Giolla Íosa",
"Mac Giollagáin",
"Mac Giollarnáth",
"Mac Giollarua",
"Mac Giollaruaidhe",
"Mac Glionnáin",
"Mac Glionáin",
"Mac Gloin",
"Mac Gloinn",
"Mac Goill",
"Mac Gormáin",
"Mac Gothraidh",
"Mac Grallaigh",
"Mac Grealaigh",
"Mac Grialais",
"Mac Grianna",
"Mac Grianra",
"Mac Grádha",
"Mac Gráinne",
"Mac Gréil",
"Mac Gréill",
"Mac Gréine",
"Mac Guibhir",
"Mac Guidhir",
"Mac Gáineard",
"Mac Géibheannaigh",
"Mac Géidigh",
"Mac Gíontaigh",
"Mac Hugo",
"Mac Héil",
"Mac Igo",
"Mac Inneirghe",
"Mac Iomaire",
"Mac Ionrachtaigh",
"Mac Laghmainn",
"Mac Laithbheartaigh",
"Mac Laithimh",
"Mac Lathaigh",
"Mac Leannáin",
"Mac Leóid",
"Mac Liam",
"Mac Lochlainn",
"Mac Loingsigh",
"Mac Luain",
"Mac Lughadha",
"Mac Lughbhadha",
"Mac Léanacháin",
"Mac Maicín",
"Mac Maitiú",
"Mac Maoláin",
"Mac Maonagail",
"Mac Maongail",
"Mac Mathghamhna",
"Mac Mathúna",
"Mac Meanman",
"Mac Mhuircheartaigh",
"Mac Muireadhaigh",
"Mac Muiris",
"Mac Murchadha",
"Mac Mághnuis",
"Mac Máirtín",
"Mac Nailín",
"Mac Neacail",
"Mac Neachtain",
"Mac Nia",
"Mac Niadh",
"Mac Niallghais",
"Mac Niallghuis",
"Mac Niocail",
"Mac Niocláis",
"Mac Néill",
"Mac Oibicín",
"Mac Oilifir",
"Mac Oireachtaigh",
"Mac Oistigín",
"Mac Oisín",
"Mac Oitir",
"Mac Oralaigh",
"Mac Oscair",
"Mac Osgair",
"Mac Phartholáin",
"Mac Philbín",
"Mac Philib",
"Mac Pháidín",
"Mac Phártholáin",
"Mac Phártoláin",
"Mac Páidín",
"Mac Rabhartaigh",
"Mac Raghallaigh",
"Mac Raghnaill",
"Mac Raith",
"Mac Rath",
"Mac Reachtain",
"Mac Reanacháin",
"Mac Riada",
"Mac Riagáin",
"Mac Riocaird",
"Mac Risteard",
"Mac Robhartaigh",
"Mac Rodáin",
"Mac Roibín",
"Mac Ruaidhrí",
"Mac Ruairc",
"Mac Ráighne",
"Mac Réamoinn",
"Mac Réill",
"Mac Seafraidh",
"Mac Seafraigh",
"Mac Seanlaoich",
"Mac Searraigh",
"Mac Seinín",
"Mac Seoin",
"Mac Seághain",
"Mac Seáin",
"Mac Shamhráin",
"Mac Sheitric",
"Mac Sheoinín",
"Mac Shitric",
"Mac Shiúrdáin",
"Mac Shiúrtáin",
"Mac Shómais",
"Mac Siacais",
"Mac Sléibhín",
"Mac Spealáin",
"Mac Stibhin",
"Mac Stiofáin",
"Mac Stín",
"Mac Suibhne",
"Mac Séamuis",
"Mac Séartha",
"Mac Síomóin",
"Mac Síthigh",
"Mac Taidhg",
"Mac Tamhais",
"Mac Thaidhg",
"Mac Thiarnáin",
"Mac Thighearnaigh",
"Mac Thighearnáin",
"Mac Thoirbhealaigh",
"Mac Thoirdhealbhaigh",
"Mac Thomáis",
"Mac Thorcail",
"Mac Thréinfhear",
"Mac Thréinfhir",
"Mac Thuathail",
"Mac Thuathaláin",
"Mac Thámhais",
"Mac Thómais",
"Mac Tiarnáin",
"Mac Tomáis",
"Mac Tuathail",
"Mac Tuathaláin",
"Mac Tuile",
"Mac Támhais",
"Mac Uaid",
"Mac Uaitéir",
"Mac Ualghairg",
"Mac Uallacháin",
"Mac Ualtair",
"Mac Ugo",
"Mac Uibhrín",
"Mac Uidhir",
"Mac Uidhlinn",
"Mac Uiginn",
"Mac Uilcín",
"Mac Uí Bheannuille",
"Mac Uí Smál",
"Mac a Déise",
"Mac a' Bhuí",
"Mac an Aba",
"Mac an Abbadh",
"Mac an Adhastair",
"Mac an Airchinnigh",
"Mac an Bhaird",
"Mac an Bheatha",
"Mac an Bheithigh",
"Mac an Bhiadhtaigh",
"Mac an Bhiocáire",
"Mac an Bhreitheamhain",
"Mac an Bhreithimh",
"Mac an Bhua",
"Mac an Chrosáin",
"Mac an Deagánaigh",
"Mac an Déisigh",
"Mac an Fhailghigh",
"Mac an Fhir",
"Mac an Ghabhann",
"Mac an Ghallóglaigh",
"Mac an Ghirr",
"Mac an Ghoill",
"Mac an Iarla",
"Mac an Iascaire",
"Mac an Iomaire",
"Mac an Leagha",
"Mac an Leágha",
"Mac an Liagha",
"Mac an Luain",
"Mac an Mhadaidh",
"Mac an Mhaoir",
"Mac an Mhilidh",
"Mac an Mháistir",
"Mac an Mhíleadha",
"Mac an Mhílidh",
"Mac an Oirchinnigh",
"Mac an Oireachtaigh",
"Mac an Phearsain",
"Mac an Ridire",
"Mac an Rí",
"Mac an Ríogh",
"Mac an Ultaigh",
"Mac an tSagairt",
"Mac an tSaoi",
"Mac an tSaoir",
"Mac an tSionnaigh",
"Mac an Átha",
"Mac an Éanaigh",
"Mac mBriartaigh",
"Mac na Midhe",
"Mac Ádhaimh",
"Mac Éil",
"Mac Énrí",
"Mac Íomhair",
"Mac Íosóg",
"Mac Óda",
"Mac Ógáin",
"Mac Úgó",
"MacCrohan",
"Macnamee",
"Maguidhir",
"McGilligan",
"Meadóg",
"Meidhreach",
"Mistéal",
"Mríosáin",
"Muilleoir",
"Máirtín",
"Mártan",
"Méaláid",
"Neachtain",
"Neancól",
"Paor",
"Peircín",
"Philib",
"Piogóid",
"Pléimeann",
"Pléimionn",
"Proinnsias",
"Puirséal",
"Páirceir",
"Póil",
"Raghna",
"Raifteirí",
"Risteard",
"Ruairc",
"Ruiséal",
"Réamonn",
"Rís",
"Scannláin",
"Scribhín",
"Searlóg",
"Searraigh",
"Seitric",
"Seoighe",
"Sionainn",
"Soolachán",
"Stac",
"Standún",
"Stondún",
"Stundún",
"Suipéal",
"Sáirséal",
"Tighe",
"Traoin",
"Treoigh",
"Treó",
"Treóigh",
"Triall",
"Tréinfhear",
"Turraoin",
"Táilliúir",
"Tóibín",
"Uaithne",
"a Búrc",
"a Búrca",
"a Goireachtaigh",
"a Gíontaigh",
"a' Cillartráin",
"de Bailís",
"de Barra",
"de Bhailis",
"de Bhailís",
"de Bhaldraithe",
"de Bhial",
"de Bhosc",
"de Bhulbh",
"de Bhulf",
"de Bhál",
"de Bláca",
"de Brae",
"de Breit",
"de Brún",
"de Buadha",
"de Builtéir",
"de Buitléir",
"de Báth",
"de Béalatún",
"de Búrc",
"de Búrca",
"de Carún",
"de Ceapóg",
"de Cléir",
"de Creag",
"de Crúis",
"de Cúrsa",
"de Faoite",
"de Fréin",
"de Geard",
"de Geárd",
"de Grae",
"de Grás",
"de Hae",
"de Hindeberg",
"de Híde",
"de Hóir",
"de Hór",
"de Hóra",
"de Hórdha",
"de Liostún",
"de Londra",
"de Long",
"de Lonndra",
"de Lonndraigh",
"de Lonnradh",
"de Lás",
"de Lása",
"de Lásaidhe",
"de Léadús",
"de Léis",
"de Lóndra",
"de Lúndra",
"de Mórdha",
"de Nais",
"de Neancól",
"de Noraidh",
"de Nógla",
"de Paor",
"de Priondargás",
"de Priondragáis",
"de Róisde",
"de Róiste",
"de Rós",
"de Searlóg",
"de Siún",
"de Spáin",
"de Stac",
"de Stondún",
"de Stóc",
"de Treó",
"de hÓra",
"de nGeard",
"de nGeárd",
"Ághas",
"Ás",
"Ó Bannáin",
"Ó Banáin",
"Ó Baoighealláin",
"Ó Baoighill",
"Ó Baoill",
"Ó Beacháin",
"Ó Beaglaoich",
"Ó Beagáin",
"Ó Beannuille",
"Ó Bearnáin",
"Ó Beartlaigh",
"Ó Bearáin",
"Ó Beigg",
"Ó Beirgin",
"Ó Beirn",
"Ó Beirne",
"Ó Beoláin",
"Ó Bhaldraithe",
"Ó Bheacháin",
"Ó Bia",
"Ó Biacháin",
"Ó Biaidh",
"Ó Biasta",
"Ó Biataigh",
"Ó Bionáin",
"Ó Biorainn",
"Ó Bioráin",
"Ó Birn",
"Ó Blioscáin",
"Ó Bláthmhaic",
"Ó Bogáin",
"Ó Bolghuidhir",
"Ó Bolguidhir",
"Ó Bortacháin",
"Ó Bradáin",
"Ó Braoin",
"Ó Braonáin",
"Ó Breanndáin",
"Ó Breasail",
"Ó Breasláin",
"Ó Breisleáin",
"Ó Briain",
"Ó Brianáin",
"Ó Bric",
"Ó Brisleáin",
"Ó Broic",
"Ó Broin",
"Ó Brolcháin",
"Ó Brosnacháin",
"Ó Bruacháin",
"Ó Bruadair",
"Ó Bruic",
"Ó Brádaigh",
"Ó Bráonáin",
"Ó Bréanáin",
"Ó Bríonáin",
"Ó Brógáin",
"Ó Bróithe",
"Ó Buachalla",
"Ó Buadhacháin",
"Ó Buadhaigh",
"Ó Báidh",
"Ó Báin",
"Ó Béagáin",
"Ó Béarra",
"Ó Béice",
"Ó Cabhail",
"Ó Cabraigh",
"Ó Cadhain",
"Ó Cadhla",
"Ó Cadhlaigh",
"Ó Cafraigh",
"Ó Cafua",
"Ó Caibe",
"Ó Caidín",
"Ó Cailpín",
"Ó Cailín",
"Ó Caingne",
"Ó Cainnigh",
"Ó Cairbre",
"Ó Cairealláin",
"Ó Caiside",
"Ó Caisín",
"Ó Caithlín",
"Ó Caitigín",
"Ó Calaigh",
"Ó Calbhaigh",
"Ó Callanáin",
"Ó Calláin",
"Ó Calnáin",
"Ó Canainn",
"Ó Caobhacáin",
"Ó Caobháin",
"Ó Caochlaigh",
"Ó Caochlaí",
"Ó Caocháin",
"Ó Caodhla",
"Ó Caodháin",
"Ó Caoidheáin",
"Ó Caoile",
"Ó Caoileáin",
"Ó Caoillidhe",
"Ó Caoilte",
"Ó Caoimh",
"Ó Caoin",
"Ó Caoindealbháin",
"Ó Caoinigh",
"Ó Caoinleáin",
"Ó Caola",
"Ó Caollaidhe",
"Ó Caollaí",
"Ó Caoláin",
"Ó Caomháin",
"Ó Caomhánaigh",
"Ó Caona",
"Ó Caonaigh",
"Ó Caotháin",
"Ó Caoáin",
"Ó Capua",
"Ó Capuaigh",
"Ó Carbaire",
"Ó Carra",
"Ó Carragáin",
"Ó Carraidhin",
"Ó Carrghamhna",
"Ó Carráin",
"Ó Cartáin",
"Ó Carúin",
"Ó Casaide",
"Ó Casarlaigh",
"Ó Cathail",
"Ó Cathala",
"Ó Cathaláin",
"Ó Cathaoir",
"Ó Cathasaigh",
"Ó Cathbhuadha",
"Ó Cathbhuadhaigh",
"Ó Cathbhuaidh",
"Ó Cathláin",
"Ó Cathmhaoil",
"Ó Catháin",
"Ó Ceafarcaigh",
"Ó Ceallabhuí",
"Ó Ceallacháin",
"Ó Ceallaigh",
"Ó Ceamharcaigh",
"Ó Ceanainn",
"Ó Ceannabháin",
"Ó Ceannaigh",
"Ó Ceanndubháin",
"Ó Ceannduibh",
"Ó Ceannfhaola",
"Ó Ceannfhaolaidh",
"Ó Ceanntabhail",
"Ó Cearbhaill",
"Ó Cearbhalláin",
"Ó Cearbhláin",
"Ó Cearbháin",
"Ó Cearmada",
"Ó Cearnaigh",
"Ó Cearr",
"Ó Cearrúcáin",
"Ó Cearrúin",
"Ó Cearáin",
"Ó Ceatharnaigh",
"Ó Ceiriúcháin",
"Ó Ceithearnaigh",
"Ó Ceocháin",
"Ó Ceoinín",
"Ó Ceothánaigh",
"Ó Ceárna",
"Ó Ciabháin",
"Ó Cianaigh",
"Ó Cianáin",
"Ó Ciaragáin",
"Ó Ciaraigh",
"Ó Ciarba",
"Ó Ciardha",
"Ó Ciardhubháin",
"Ó Ciarmhacáin",
"Ó Ciarmhaic",
"Ó Ciaráin",
"Ó Ciarúcáin",
"Ó Cibhil",
"Ó Cilltráin",
"Ó Cillín",
"Ó Cinnseala",
"Ó Cinnseamáin",
"Ó Cinnéide",
"Ó Cinnéir",
"Ó Ciollabháin",
"Ó Cioltráin",
"Ó Cionnaigh",
"Ó Cionnaith",
"Ó Cionnfhaola",
"Ó Cioráin",
"Ó Ciosáin",
"Ó Ciothaigh",
"Ó Ciúrtáin",
"Ó Claimhín",
"Ó Claochlaoigh",
"Ó Claochlaí",
"Ó Claonáin",
"Ó Clocharta",
"Ó Clochartaigh",
"Ó Clochasaigh",
"Ó Cluanáin",
"Ó Cléirchín",
"Ó Cléireacháin",
"Ó Cléirigh",
"Ó Clúin",
"Ó Clúmháin",
"Ó Clúnáin",
"Ó Cnuacháin",
"Ó Cnáimhsighe",
"Ó Cnáimhsí",
"Ó Cnáimhín",
"Ó Cobhthaigh",
"Ó Cochláin",
"Ó Coighin",
"Ó Coigil",
"Ó Coigligh",
"Ó Coile",
"Ó Coileáin",
"Ó Coiligeáin",
"Ó Coillte",
"Ó Coillín",
"Ó Coiléir",
"Ó Coilín",
"Ó Coimín",
"Ó Coincheanainn",
"Ó Coineoil",
"Ó Coineáin",
"Ó Coineóil",
"Ó Coingheallaigh",
"Ó Coinghialla",
"Ó Coinghiallaigh",
"Ó Coinghíola",
"Ó Coinne",
"Ó Coinneacháin",
"Ó Coinneáin",
"Ó Coinnigh",
"Ó Coinnleáin",
"Ó Coinnéir",
"Ó Coinín",
"Ó Coirbín",
"Ó Coirnín",
"Ó Coisdeala",
"Ó Coisdealbha",
"Ó Coisteala",
"Ó Coistealbhaigh",
"Ó Coitir",
"Ó Coitirigh",
"Ó Colla",
"Ó Collaigh",
"Ó Collaráin",
"Ó Collata",
"Ó Colláin",
"Ó Colmáin",
"Ó Coluim",
"Ó Comair",
"Ó Comhdhain",
"Ó Comhghaill",
"Ó Comhghain",
"Ó Comhraí",
"Ó Comáin",
"Ó Conaill",
"Ó Conaire",
"Ó Conalláin",
"Ó Conaola",
"Ó Conaráin",
"Ó Conbhaigh",
"Ó Conbhaí",
"Ó Conbhuaidh",
"Ó Conbhuidhe",
"Ó Conbhuí",
"Ó Conbhá",
"Ó Conbá",
"Ó Conchobhair",
"Ó Conchubhair",
"Ó Conchúir",
"Ó Confhaola",
"Ó Conghaile",
"Ó Conghamhna",
"Ó Conláin",
"Ó Conmhacháin",
"Ó Conmhaí",
"Ó Conmhaídhe",
"Ó Conmhuí",
"Ó Connachtaigh",
"Ó Connachtáin",
"Ó Connacháin",
"Ó Connaigh",
"Ó Connbhuí",
"Ó Connchamháin",
"Ó Connghamhna",
"Ó Connmhacháin",
"Ó Connmhaigh",
"Ó Connmhaí",
"Ó Connollaigh",
"Ó Connóil",
"Ó Connúcháin",
"Ó Conra",
"Ó Conrach",
"Ó Conraoi",
"Ó Consaidín",
"Ó Conthra",
"Ó Contra",
"Ó Conáin",
"Ó Conóil",
"Ó Conúcháin",
"Ó Corbáin",
"Ó Corcora",
"Ó Corcoráin",
"Ó Corlaigh",
"Ó Cormacáin",
"Ó Cormaic",
"Ó Corra",
"Ó Corracháin",
"Ó Corradáin",
"Ó Corragáin",
"Ó Corraidh",
"Ó Corraidhin",
"Ó Corraigh",
"Ó Corrdhuibh",
"Ó Corrghamhna",
"Ó Corráin",
"Ó Coscair",
"Ó Cosgair",
"Ó Costagáin",
"Ó Cosáin",
"Ó Craidheáin",
"Ó Craith",
"Ó Craobháin",
"Ó Creag",
"Ó Creagáin",
"Ó Creimín",
"Ó Criagáin",
"Ó Crimín",
"Ó Criomhthain",
"Ó Criostóir",
"Ó Criostúir",
"Ó Croidheáin",
"Ó Croithín",
"Ó Crotaigh",
"Ó Cruacháin",
"Ó Cruadhlaoich",
"Ó Crucháin",
"Ó Crábháin",
"Ó Cráibhín",
"Ó Créagáin",
"Ó Críodáin",
"Ó Críogáin",
"Ó Críonáin",
"Ó Cródhal",
"Ó Cróinín",
"Ó Crónallaigh",
"Ó Crónghaile",
"Ó Cuacach",
"Ó Cuagáin",
"Ó Cualáin",
"Ó Cuana",
"Ó Cuanacháin",
"Ó Cuanaigh",
"Ó Cuanna",
"Ó Cuannaigh",
"Ó Cuanáin",
"Ó Cuarnáin",
"Ó Cuideagáin",
"Ó Cuideagánaigh",
"Ó Cuidithe",
"Ó Cuigeannaigh",
"Ó Cuileamhain",
"Ó Cuileannáin",
"Ó Cuileanáin",
"Ó Cuilinn",
"Ó Cuill",
"Ó Cuilleáin",
"Ó Cuilliudha",
"Ó Cuilliú",
"Ó Cuilín",
"Ó Cuimilín",
"Ó Cuimín",
"Ó Cuineáin",
"Ó Cuinn",
"Ó Cuinneacháin",
"Ó Cuinneagáin",
"Ó Cuinneáin",
"Ó Cuinnleáin",
"Ó Cuinnéir",
"Ó Cuirc",
"Ó Cuireáin",
"Ó Cuirleáin",
"Ó Cuirreáin",
"Ó Cuirrín",
"Ó Cuirtéir",
"Ó Cullaigh",
"Ó Cumhail",
"Ó Cumhaill",
"Ó Cunnaidh",
"Ó Curraidh",
"Ó Curraidhin",
"Ó Curraoin",
"Ó Curráin",
"Ó Cádáin",
"Ó Cápa",
"Ó Cárthaigh",
"Ó Céadagáin",
"Ó Céadaigh",
"Ó Céide",
"Ó Céidigh",
"Ó Céileachair",
"Ó Céilleachair",
"Ó Céirín",
"Ó Céitig",
"Ó Céitinn",
"Ó Céitín",
"Ó Cérúcáin",
"Ó Cíobháin",
"Ó Cíobhánaigh",
"Ó Cíoráin",
"Ó Cíosóig",
"Ó Círríc",
"Ó Cógáin",
"Ó Cómair",
"Ó Córrain",
"Ó Cúirnín",
"Ó Cúise",
"Ó Cúlacháin",
"Ó Cúláin",
"Ó Cúndúin",
"Ó Cúnúin",
"Ó Cúrnáin",
"Ó Dabhoireann",
"Ó Dabhráin",
"Ó Dabháin",
"Ó Daeid",
"Ó Daghnáin",
"Ó Daibhidh",
"Ó Daibhín",
"Ó Daimhín",
"Ó Danachair",
"Ó Daochain",
"Ó Daoda",
"Ó Daola",
"Ó Dargáin",
"Ó Deagánaigh",
"Ó Deargáin",
"Ó Dearmada",
"Ó Dearáin",
"Ó Deasmhumhna",
"Ó Deirg",
"Ó Deoraidhin",
"Ó Deoráin",
"Ó Deágha",
"Ó Deághdha",
"Ó Diarmada",
"Ó Dighe",
"Ó Diolain",
"Ó Dioláin",
"Ó Diolúin",
"Ó Dioráin",
"Ó Diothchain",
"Ó Diothcháin",
"Ó Direáin",
"Ó Dochartaigh",
"Ó Doghair",
"Ó Doibhilin",
"Ó Doighre",
"Ó Doirnín",
"Ó Dolainn",
"Ó Domhnaill",
"Ó Domhnalláin",
"Ó Donaoile",
"Ó Donchadha",
"Ó Donchú",
"Ó Donghaile",
"Ó Donnabháin",
"Ó Donnacha",
"Ó Donnagáin",
"Ó Donncha",
"Ó Donnchadha",
"Ó Donnchaidh",
"Ó Donnchú",
"Ó Donndhubhartaigh",
"Ó Donndubhartaigh",
"Ó Donnghaile",
"Ó Donnghusa",
"Ó Donnáin",
"Ó Doraí",
"Ó Dorchaidh",
"Ó Dorchaidhe",
"Ó Dorchaigh",
"Ó Dorcháin",
"Ó Dordáin",
"Ó Drisceoil",
"Ó Droighneáin",
"Ó Droma",
"Ó Druacháin",
"Ó Dríscín",
"Ó Drócháin",
"Ó Dróna",
"Ó Drónaidhe",
"Ó Duarcáin",
"Ó Dubha",
"Ó Dubhabhoireann",
"Ó Dubhagáin",
"Ó Dubhaigh",
"Ó Dubhartaigh",
"Ó Dubhchain",
"Ó Dubhda",
"Ó Dubhdháin",
"Ó Dubhdábhoireann",
"Ó Dubhghaill",
"Ó Dubhgáin",
"Ó Dubhlaigh",
"Ó Dubhlainn",
"Ó Dubhlaoich",
"Ó Dubhluachra",
"Ó Dubhláin",
"Ó Dubhshláine",
"Ó Dubhthaigh",
"Ó Dubhthaigh recte Dooly",
"Ó Dubhuidhe",
"Ó Dubháin",
"Ó Duibhealla",
"Ó Duibheannaigh",
"Ó Duibhfhinn",
"Ó Duibhgeadáin",
"Ó Duibhgeannaigh",
"Ó Duibhgeannáin",
"Ó Duibhghealla",
"Ó Duibhghiolla",
"Ó Duibhginn",
"Ó Duibhir",
"Ó Duibhleanna",
"Ó Duibhlearga",
"Ó Duibhne",
"Ó Duibhthe",
"Ó Duibhín",
"Ó Duibhínn",
"Ó Duigeannaigh",
"Ó Duigneáin",
"Ó Duilearga",
"Ó Duilleáin",
"Ó Duineacha",
"Ó Duinn",
"Ó Duinneacha",
"Ó Duinneacháin",
"Ó Duinnléi",
"Ó Duinnshlé",
"Ó Duinnshléibhe",
"Ó Duinnín",
"Ó Duirnín",
"Ó Duithche",
"Ó Dulchaointigh",
"Ó Duncáin",
"Ó Dunshléibhe",
"Ó Dáibhidh",
"Ó Dáibhis",
"Ó Dála",
"Ó Dálaigh",
"Ó Déadaigh",
"Ó Déid",
"Ó Déide",
"Ó Déisigh",
"Ó Díghe",
"Ó Díochon",
"Ó Díocháin",
"Ó Díomasaigh",
"Ó Díscín",
"Ó Dóláin",
"Ó Dúda",
"Ó Dúgáin",
"Ó Dúlaigh",
"Ó Dúnadhaighe",
"Ó Dúnaighe",
"Ó Dúnaí",
"Ó Dúnlaing",
"Ó Dúnláing",
"Ó Dúnáin",
"Ó Dúnúrta",
"Ó Dúraí",
"Ó Dúrcháin",
"Ó Dúrcáin",
"Ó Fachtna",
"Ó Faircheallaigh",
"Ó Faith",
"Ó Fallamháin",
"Ó Faodhagáin",
"Ó Faoláin",
"Ó Faranáin",
"Ó Fatha",
"Ó Fathaigh",
"Ó Fatharta",
"Ó Fathartaigh",
"Ó Fearachair",
"Ó Fearacháin",
"Ó Fearadhaigh",
"Ó Fearchair",
"Ó Feardhaigh",
"Ó Fearghail",
"Ó Fearghaile",
"Ó Fearghaíosa",
"Ó Fearghusa",
"Ó Fearraidhe",
"Ó Fearraigh",
"Ó Fearraí",
"Ó Fearáin",
"Ó Feithín",
"Ó Fiacha",
"Ó Fiachna",
"Ó Fiachra",
"Ó Fiacháin",
"Ó Fiaich",
"Ó Fiannachta",
"Ó Fiannachtaigh",
"Ó Fiannaidh",
"Ó Fiannaidhe",
"Ó Fiannaigh",
"Ó Figheadóra",
"Ó Filbín",
"Ó Finn",
"Ó Finneachta",
"Ó Finneadha",
"Ó Finnthighearn",
"Ó Fiodhabhra",
"Ó Fionnachta",
"Ó Fionnachtaigh",
"Ó Fionnagáin",
"Ó Fionnalláin",
"Ó Fionndhubhcáin",
"Ó Fionnghaile",
"Ó Fionnghalaigh",
"Ó Fionnghusa",
"Ó Fionnlaoich",
"Ó Fionnmhacáin",
"Ó Fionntáin",
"Ó Fionnáin",
"Ó Fithchealla",
"Ó Fithcheallaigh",
"Ó Flabháin",
"Ó Flaithbhearta",
"Ó Flaithbheartaigh",
"Ó Flaitheamháin",
"Ó Flaithearta",
"Ó Flaithimh",
"Ó Flaithimhín",
"Ó Flaitile",
"Ó Flanagáin",
"Ó Flannabhra",
"Ó Flannagáin",
"Ó Flannchadha",
"Ó Flannghaile",
"Ó Flathamháin",
"Ó Flatharta",
"Ó Flathartaigh",
"Ó Floinn",
"Ó Flárta",
"Ó Fodhladha",
"Ó Foghludha",
"Ó Foghlú",
"Ó Foghlúdha",
"Ó Frainclín",
"Ó Frighil",
"Ó Frithile",
"Ó Fuada",
"Ó Fuadacháin",
"Ó Fuallaigh",
"Ó Fualáin",
"Ó Fuartháin",
"Ó Fuaruisce",
"Ó Fuaráin",
"Ó Fágáin",
"Ó Fáilbhe",
"Ó Fárta",
"Ó Fátharta",
"Ó Féichín",
"Ó Féinneadha",
"Ó Féith",
"Ó Fíona",
"Ó Fíonartaigh",
"Ó Fógarta",
"Ó Fógartaigh",
"Ó Fóghladha",
"Ó Fóráin",
"Ó Fúraigh",
"Ó Gabhacháin",
"Ó Gabhann",
"Ó Gabhláin",
"Ó Gabháin",
"Ó Gacháin",
"Ó Gadhra",
"Ó Gaibhre",
"Ó Gaibhtheacháin",
"Ó Gailliúin",
"Ó Gaillín",
"Ó Gairbhia",
"Ó Gairbhighe",
"Ó Gairbhín",
"Ó Gallchobhair",
"Ó Gallchóir",
"Ó Galláin",
"Ó Galáin",
"Ó Gamhna",
"Ó Gamhnáin",
"Ó Gaoithín",
"Ó Gaora",
"Ó Garbháin",
"Ó Gatháin",
"Ó Gealabháin",
"Ó Gealagáin",
"Ó Gealbháin",
"Ó Geannáin",
"Ó Geanáin",
"Ó Gearabháin",
"Ó Geargáin",
"Ó Gibne",
"Ó Gilliúin",
"Ó Gillín",
"Ó Ginneá",
"Ó Gioballáin",
"Ó Giobaláin",
"Ó Giobláin",
"Ó Giobúin",
"Ó Giolla Rua",
"Ó Giollagáin",
"Ó Giollaruaidhe",
"Ó Giolláin",
"Ó Gionnáin",
"Ó Gionáin",
"Ó Glaisne",
"Ó Glasáin",
"Ó Gleannáin",
"Ó Gliasáin",
"Ó Glionnáin",
"Ó Gloinn",
"Ó Gloinne",
"Ó Gláibhín",
"Ó Gláimhín",
"Ó Gnímh",
"Ó Gobhann",
"Ó Gobáin",
"Ó Gogáin",
"Ó Goibín",
"Ó Goillidhe",
"Ó Goilín",
"Ó Goireachtaigh",
"Ó Golláin",
"Ó Gormáin",
"Ó Graith",
"Ó Grallaigh",
"Ó Gramhna",
"Ó Greadaigh",
"Ó Grealaigh",
"Ó Greanacháin",
"Ó Grialais",
"Ó Griallais",
"Ó Grianna",
"Ó Grianáin",
"Ó Grifín",
"Ó Gruagáin",
"Ó Gráda",
"Ó Grádaigh",
"Ó Gráinne",
"Ó Grálaigh",
"Ó Grállaigh",
"Ó Gréacháin",
"Ó Gréil",
"Ó Gréill",
"Ó Gríbhthín",
"Ó Grífín",
"Ó Gríobhtha",
"Ó Gríobhtháin",
"Ó Gríofa",
"Ó Gríofha",
"Ó Guaire",
"Ó Guairim",
"Ó Guillí",
"Ó Guithín",
"Ó Gábháin",
"Ó Gáibhtheacháin",
"Ó Gáibhín",
"Ó Gáineard",
"Ó Gánaird",
"Ó Géaráin",
"Ó Géibheannaigh",
"Ó Géibhinn",
"Ó Gíontaigh",
"Ó Gúnáin",
"Ó Hadhlairt",
"Ó Hadhra",
"Ó Haibheartaigh",
"Ó Haichir",
"Ó Haicéad",
"Ó Haidhleart",
"Ó Hailgheanáin",
"Ó Hailgheasa",
"Ó Hailpín",
"Ó Hailín",
"Ó Haimhirgín",
"Ó Hainchín",
"Ó Hainifín",
"Ó Hainion",
"Ó Hainligh",
"Ó Hainmhireach",
"Ó Hainmneach",
"Ó Hainthín",
"Ó Hainín",
"Ó Hairbheasaigh",
"Ó Hairmeasaigh",
"Ó Hairmheasaigh",
"Ó Hairt",
"Ó Hairtnéada",
"Ó Haiseadha",
"Ó Haithbheartaigh",
"Ó Haithchir",
"Ó Haitheasa",
"Ó Hallacháin",
"Ó Hallmhúráin",
"Ó Halmhain",
"Ó Hanluain",
"Ó Hannagáin",
"Ó Hannaidh",
"Ó Hannlaoigh",
"Ó Hannracháin",
"Ó Hannraoi",
"Ó Hanrachtaigh",
"Ó Hanraoi",
"Ó Haodha",
"Ó Haodhgáin",
"Ó Haogáin",
"Ó Haoidhne",
"Ó Haoilbheard",
"Ó Haoileáin",
"Ó Haolláin",
"Ó Haoláin",
"Ó Haonghuis",
"Ó Haonghusa",
"Ó Harcáin",
"Ó Hargadáin",
"Ó Hargáin",
"Ó Harrachtáin",
"Ó Harragáin",
"Ó Harta",
"Ó Hartagáin",
"Ó Heachadha",
"Ó Heachthigheirn",
"Ó Headhra",
"Ó Heaghra",
"Ó Heaghráin",
"Ó Heallaigh",
"Ó Hearbhaird",
"Ó Hearbhard",
"Ó Hearcáin",
"Ó Hearghail",
"Ó Hearghaile",
"Ó Hearnáin",
"Ó Hearráin",
"Ó Hearáin",
"Ó Heibhrín",
"Ó Heichthigheirn",
"Ó Heideagáin",
"Ó Heidhin",
"Ó Heifearnáin",
"Ó Heifrín",
"Ó Heigheartaigh",
"Ó Heilíre",
"Ó Heimhrín",
"Ó Heireamhóin",
"Ó Heislin",
"Ó Heiteagáin",
"Ó Heithchir",
"Ó Heithir",
"Ó Helaoire",
"Ó Heochach",
"Ó Heochadha",
"Ó Heochaidh",
"Ó Heodhasa",
"Ó Heodhusa",
"Ó Heoghain",
"Ó Heoghanáin",
"Ó Hiarfhlaithe",
"Ó Hiarfhlatha",
"Ó Hiarnáin",
"Ó Hiceadha",
"Ó Hicidhe",
"Ó Hicí",
"Ó Hicín",
"Ó Hicóg",
"Ó Hifearnáin",
"Ó Highne",
"Ó Hinneirghe",
"Ó Hinnéirghe",
"Ó Hinéirigh",
"Ó Hinéirí",
"Ó Hiocóg",
"Ó Hiolláin",
"Ó Hioláin",
"Ó Hionnghaile",
"Ó Hiorbhaird",
"Ó Hiorbhard",
"Ó Hodhráin",
"Ó Hoibicín",
"Ó Hoirbheaird",
"Ó Hoirbheard",
"Ó Hoirchinnigh",
"Ó Hoireabaird",
"Ó Hoireabhaird",
"Ó Hoireabhard",
"Ó Hoireachtaigh",
"Ó Hoiscín",
"Ó Hoistín",
"Ó Hoisín",
"Ó Hollaráin",
"Ó Holláin",
"Ó Hollúin",
"Ó Horcáin",
"Ó Horgáin",
"Ó Houracháin",
"Ó Huaillearan",
"Ó Huaithne",
"Ó Huaithnín",
"Ó Hualla",
"Ó Huallacháin",
"Ó Huallaigh",
"Ó Huidhir",
"Ó Huiginn",
"Ó Huigín",
"Ó Huirthille",
"Ó Huiscín",
"Ó Huitseacháin",
"Ó Hulláin",
"Ó Hurdail",
"Ó Hurmholtaigh",
"Ó Hurthuile",
"Ó Hágáin",
"Ó Hágúrtaigh",
"Ó Háilíosa",
"Ó Háinle",
"Ó Háinlí",
"Ó Hánusaigh",
"Ó Hárlaigh",
"Ó Héadtromáin",
"Ó Héaghráin",
"Ó Héalaigh",
"Ó Héalaithe",
"Ó Héamhthaigh",
"Ó Héanacháin",
"Ó Héanagáin",
"Ó Héanaigh",
"Ó Héideáin",
"Ó Héigcheartaigh",
"Ó Héigearta",
"Ó Héigeartaigh",
"Ó Héigheartaigh",
"Ó Héighne",
"Ó Héighnigh",
"Ó Héighniú",
"Ó Héilidhe",
"Ó Héiligh",
"Ó Héilí",
"Ó Héimhthigh",
"Ó Héimhín",
"Ó Héineacháin",
"Ó Héinrí",
"Ó Héiní",
"Ó Hénrí",
"Ó Hícín",
"Ó Híghne",
"Ó Híomhair",
"Ó Hóbáin",
"Ó Hódhra",
"Ó Hódhráin",
"Ó Hóghartaigh",
"Ó Hógáin",
"Ó Hóráin",
"Ó Húbáin",
"Ó Húrdail",
"Ó Labhra",
"Ó Labhradha",
"Ó Labhrú",
"Ó Lachnáin",
"Ó Lachtnáin",
"Ó Ladhradha",
"Ó Laideáin",
"Ó Laidhe",
"Ó Laidhigh",
"Ó Laidhin",
"Ó Laighin",
"Ó Laighnigh",
"Ó Lailligh",
"Ó Lailliú",
"Ó Laimhbheartaigh",
"Ó Lainn",
"Ó Laithbheartaigh",
"Ó Laithimh",
"Ó Laithmhe",
"Ó Lallaidh",
"Ó Lallaigh",
"Ó Lamhna",
"Ó Lanagáin",
"Ó Laochdha",
"Ó Laodhóg",
"Ó Laoghaire",
"Ó Laoghóg",
"Ó Laoi",
"Ó Laoidh",
"Ó Laoidhe",
"Ó Laoidhigh",
"Ó Laoingsigh",
"Ó Laoithe",
"Ó Lapáin",
"Ó Larcáin",
"Ó Leallaigh",
"Ó Leamhna",
"Ó Leannáin",
"Ó Leathaigh",
"Ó Leathlobhair",
"Ó Leidhin",
"Ó Leidhinn",
"Ó Leighin",
"Ó Leighinn",
"Ó Liadhain",
"Ó Liaghain",
"Ó Liain",
"Ó Liathaigh",
"Ó Liatháin",
"Ó Lideadha",
"Ó Lighe",
"Ó Liodáin",
"Ó Lionacháin",
"Ó Lionnáin",
"Ó Lochlainn",
"Ó Lochnáin",
"Ó Lochráin",
"Ó Lochtnáin",
"Ó Loideáin",
"Ó Loididh",
"Ó Loineacháin",
"Ó Loingscigh",
"Ó Loingse",
"Ó Loingseacháin",
"Ó Loingsigh",
"Ó Loinn",
"Ó Loinne",
"Ó Loinnigh",
"Ó Loinnsge",
"Ó Loinnsgigh",
"Ó Loirgneáin",
"Ó Lomgaigh",
"Ó Lonagáin",
"Ó Lonargáin",
"Ó Londáin",
"Ó Longaigh",
"Ó Longáin",
"Ó Lonnáin",
"Ó Lonáin",
"Ó Lorcáin",
"Ó Luachra",
"Ó Luag",
"Ó Luain",
"Ó Luaire",
"Ó Luanaigh",
"Ó Luasa",
"Ó Luasaigh",
"Ó Lubhaing",
"Ó Ludhóg",
"Ó Luineacháin",
"Ó Luinigh",
"Ó Lunaigh",
"Ó Lupáin",
"Ó Lurgáin",
"Ó Láimhín",
"Ó Lámháin",
"Ó Lás",
"Ó Lása",
"Ó Léanacháin",
"Ó Léineacháin",
"Ó Líonacháin",
"Ó Líthe",
"Ó Lócháin",
"Ó Lógáin",
"Ó Lónáin",
"Ó Lórdáin",
"Ó Lúbhaing",
"Ó Lúbhóg",
"Ó Lúing",
"Ó Lúóg",
"Ó Macasa",
"Ó Macháin",
"Ó Madadháin",
"Ó Madagáin",
"Ó Madaidh",
"Ó Madaidhe",
"Ó Madaidhin",
"Ó Madaoin",
"Ó Madáin",
"Ó Magáin",
"Ó Maicín",
"Ó Maidín",
"Ó Maille",
"Ó Mainchín",
"Ó Maine",
"Ó Maingín",
"Ó Mainichín",
"Ó Mainnín",
"Ó Mainín",
"Ó Maithnín",
"Ó Malóid",
"Ó Manacháin",
"Ó Manntáin",
"Ó Mantáin",
"Ó Maoil Aodha",
"Ó Maoil Eoin",
"Ó Maoil Mheana",
"Ó Maoilchiaráin",
"Ó Maoilchéir",
"Ó Maoilchéire",
"Ó Maoilcéir",
"Ó Maoildhia",
"Ó Maoileacháin",
"Ó Maoileagáin",
"Ó Maoileala",
"Ó Maoileanaigh",
"Ó Maoilearca",
"Ó Maoileoghain",
"Ó Maoileoin",
"Ó Maoileáin",
"Ó Maoilfheabhail",
"Ó Maoilia",
"Ó Maoiliadh",
"Ó Maoiligeáin",
"Ó Maoilmhiadhaigh",
"Ó Maoilmhichíl",
"Ó Maoilmhín",
"Ó Maoilriain",
"Ó Maoilshearcaigh",
"Ó Maoiléadaigh",
"Ó Maoiléide",
"Ó Maoilín",
"Ó Maoineacháin",
"Ó Maoinigh",
"Ó Maoir",
"Ó Maol Aodha",
"Ó Maolagáin",
"Ó Maolalaidh",
"Ó Maolalaigh",
"Ó Maolalla",
"Ó Maolallaidh",
"Ó Maolallaigh",
"Ó Maolchaoine",
"Ó Maolchatha",
"Ó Maolchathaigh",
"Ó Maolchraoibhe",
"Ó Maoldhomhnaigh",
"Ó Maoldomhnaigh",
"Ó Maoldúin",
"Ó Maolfhabhail",
"Ó Maolfhachtna",
"Ó Maolfhábhail",
"Ó Maolfhábhaill",
"Ó Maolghuala",
"Ó Maolmhochóirghe",
"Ó Maolmhuaidh",
"Ó Maolmhudhóg",
"Ó Maolmhuire",
"Ó Maolmuaidh",
"Ó Maolriagháin",
"Ó Maolriain",
"Ó Maolruaidh",
"Ó Maolruaidhe",
"Ó Maolruana",
"Ó Maolruanaigh",
"Ó Maolruanaí",
"Ó Maoltuile",
"Ó Maoláin",
"Ó Maonaigh",
"Ó Maonghaile",
"Ó Maothagáin",
"Ó Maranáin",
"Ó Marcacháin",
"Ó Marcaigh",
"Ó Marnáin",
"Ó Martain",
"Ó Mathghamhna",
"Ó Mathúna",
"Ó Meachair",
"Ó Meadhra",
"Ó Meadhraí",
"Ó Meadóg",
"Ó Mealláin",
"Ó Meardha",
"Ó Mearlaigh",
"Ó Mearáin",
"Ó Meidhir",
"Ó Meirligh",
"Ó Meirnigh",
"Ó Meiscill",
"Ó Meitheagáin",
"Ó Meádhra",
"Ó Meádhraí",
"Ó Meára",
"Ó Meáraidh",
"Ó Meáraí",
"Ó Miadha",
"Ó Miadhacháin",
"Ó Miadhaigh",
"Ó Mianaigh",
"Ó Mianáin",
"Ó Milléadha",
"Ó Miléadha",
"Ó Mionacháin",
"Ó Mocháin",
"Ó Mochóirghe",
"Ó Mochóraigh",
"Ó Modhráin",
"Ó Moghráin",
"Ó Mogáin",
"Ó Moidhe",
"Ó Moinéal",
"Ó Moithide",
"Ó Molraoghain",
"Ó Monacháin",
"Ó Monghaile",
"Ó Mongáin",
"Ó Moráin",
"Ó Mothair",
"Ó Motháin",
"Ó Mraoiligh",
"Ó Muadaigh",
"Ó Muaráin",
"Ó Mugabháin",
"Ó Mugáin",
"Ó Muichille",
"Ó Muighe",
"Ó Muilcín",
"Ó Muilleagáin",
"Ó Muilligh",
"Ó Muimhneacháin",
"Ó Muimhnigh",
"Ó Muineacháin",
"Ó Muineóg",
"Ó Muinghíle",
"Ó Muinilligh",
"Ó Muinneacháin",
"Ó Muinníle",
"Ó Muircheartaigh",
"Ó Muireadhaigh",
"Ó Muireagáin",
"Ó Muireann",
"Ó Muireáin",
"Ó Muireán",
"Ó Muirgeáin",
"Ó Muirgheasa",
"Ó Muirgheasáin",
"Ó Muirighthe",
"Ó Muirithe",
"Ó Muirneacháin",
"Ó Muirthile",
"Ó Muirthín",
"Ó Mullala",
"Ó Mulláin",
"Ó Muláin",
"Ó Muracháin",
"Ó Murachú",
"Ó Murae",
"Ó Muraoile",
"Ó Murchadha",
"Ó Murchaidhe",
"Ó Murcháin",
"Ó Murchú",
"Ó Murghaile",
"Ó Murnáin",
"Ó Murraigh",
"Ó Murthuile",
"Ó Máille",
"Ó Máirtín",
"Ó Málóid",
"Ó Máthúna",
"Ó Méalóid",
"Ó Méalóide",
"Ó Mídhia",
"Ó Míléada",
"Ó Míocháin",
"Ó Míodhacháin",
"Ó Míodhcháin",
"Ó Míonáin",
"Ó Móiníol",
"Ó Móirín",
"Ó Móracháin",
"Ó Mórdha",
"Ó Móráin",
"Ó Múrnáin",
"Ó Naoidheanáin",
"Ó Neabhail",
"Ó Neachtain",
"Ó Nearaigh",
"Ó Nia",
"Ó Niadh",
"Ó Niaidh",
"Ó Niallagáin",
"Ó Niallghuis",
"Ó Nialláin",
"Ó Nianáin",
"Ó Niatháin",
"Ó Nuadhain",
"Ó Nuadhan",
"Ó Nualláin",
"Ó Nuanáin",
"Ó Nádhraigh",
"Ó Náradhaigh",
"Ó Náraigh",
"Ó Néill",
"Ó Núin",
"Ó Núnáin",
"Ó Partlainn",
"Ó Peatáin",
"Ó Pilbín",
"Ó Piotáin",
"Ó Praoidheáil",
"Ó Priongalóid",
"Ó Rabhartaigh",
"Ó Rabhlaigh",
"Ó Rachtagáin",
"Ó Raghaill",
"Ó Raghaille",
"Ó Raghallaigh",
"Ó Raifearta",
"Ó Raifteirí",
"Ó Raighill",
"Ó Raighilligh",
"Ó Raighle",
"Ó Raighne",
"Ó Raigne",
"Ó Raithbheartaigh",
"Ó Raithile",
"Ó Rallaigh",
"Ó Rathaile",
"Ó Rathallaigh",
"Ó Reachtabhair",
"Ó Reachtabhra",
"Ó Reachtagáin",
"Ó Reachtair",
"Ó Reachtaire",
"Ó Reachtar",
"Ó Reachtúire",
"Ó Reannacháin",
"Ó Reithil",
"Ó Riabhaigh",
"Ó Riada",
"Ó Riagáin",
"Ó Riain",
"Ó Riallaigh",
"Ó Riardáin",
"Ó Rinn",
"Ó Riolláin",
"Ó Robhacháin",
"Ó Robhartaigh",
"Ó Rodacháin",
"Ó Rodaigh",
"Ó Rodaí",
"Ó Rodáin",
"Ó Roithleáin",
"Ó Rothallaigh",
"Ó Rothlainn",
"Ó Ruacháin",
"Ó Ruadhainn",
"Ó Ruadhcháin",
"Ó Ruadháin",
"Ó Ruaidhe",
"Ó Ruaidhinn",
"Ó Ruaidhrí",
"Ó Ruaidhín",
"Ó Ruairc",
"Ó Ruanadha",
"Ó Ruanaidhe",
"Ó Ruanaí",
"Ó Ruanáin",
"Ó Rudaigh",
"Ó Rághaill",
"Ó Ráighle",
"Ó Ráighne",
"Ó Ráinne",
"Ó Ránaigh",
"Ó Réagáin",
"Ó Ríle",
"Ó Ríoghbhardáin",
"Ó Ríogáin",
"Ó Ríordáin",
"Ó Rócháin",
"Ó Róláin",
"Ó Rónáin",
"Ó Rúnaidhe",
"Ó Rúnú",
"Ó Rúáin",
"Ó Saoraidhe",
"Ó Scalaidhe",
"Ó Scalaighe",
"Ó Scallaigh",
"Ó Scanaill",
"Ó Scanláin",
"Ó Scannail",
"Ó Scannaill",
"Ó Scannláin",
"Ó Scealláin",
"Ó Scolaidhe",
"Ó Scolaighe",
"Ó Scolaí",
"Ó Scollaigh",
"Ó Scolláin",
"Ó Scéacháin",
"Ó Seachnasaigh",
"Ó Seanacháin",
"Ó Seanaigh",
"Ó Seanainn",
"Ó Seanáin",
"Ó Searcaigh",
"Ó Searraigh",
"Ó Seasnáin",
"Ó Seibhleáin",
"Ó Seibhlin",
"Ó Seibhlín",
"Ó Seighin",
"Ó Seireadáin",
"Ó Seitheacháin",
"Ó Seithneacháin",
"Ó Seochfhradha",
"Ó Seochrú",
"Ó Sgulla",
"Ó Siadhacháin",
"Ó Siadhail",
"Ó Siaghail",
"Ó Siardáin",
"Ó Sibhleáin",
"Ó Sidheáil",
"Ó Simeoin",
"Ó Siochfhradha",
"Ó Siochrú",
"Ó Sionacháin",
"Ó Sionnaigh",
"Ó Sionáin",
"Ó Sioradáin",
"Ó Sith",
"Ó Siúrdáin",
"Ó Slatara",
"Ó Sluaghdháin",
"Ó Slámáin",
"Ó Sléibhín",
"Ó Smealáin",
"Ó Smoláin",
"Ó Somacháin",
"Ó Sosnáin",
"Ó Spealáin",
"Ó Spiolláin",
"Ó Spioláin",
"Ó Spoláin",
"Ó Stiofáin",
"Ó Suibhne",
"Ó Sé",
"Ó Séagha",
"Ó Síocháin",
"Ó Síoda",
"Ó Síomóin",
"Ó Síoráin",
"Ó Síothcháin",
"Ó Sírín",
"Ó Síthigh",
"Ó Síththe",
"Ó Súilleabháin",
"Ó Súilliobháin",
"Ó Taichligh",
"Ó Taidhg",
"Ó Tarlaigh",
"Ó Tarpaigh",
"Ó Teangana",
"Ó Teangnaí",
"Ó Teimhneáin",
"Ó Tiarnaigh",
"Ó Tiarnáin",
"Ó Tighearna",
"Ó Tighearnaigh",
"Ó Tighearnáin",
"Ó Tiobraide",
"Ó Tiomanaidh",
"Ó Tiomanaigh",
"Ó Tiománaidhe",
"Ó Tiománaí",
"Ó Toirbhealaigh",
"Ó Tolain",
"Ó Tomhnair",
"Ó Tomáis",
"Ó Tonra",
"Ó Tormaigh",
"Ó Traoin",
"Ó Treabhair",
"Ó Treasa",
"Ó Treasaigh",
"Ó Treasaí",
"Ó Triall",
"Ó Tréinfhear",
"Ó Tuachair",
"Ó Tuairisc",
"Ó Tuairisg",
"Ó Tuama",
"Ó Tuamáin",
"Ó Tuaraisce",
"Ó Tuaruisce",
"Ó Tuataigh",
"Ó Tuathaigh",
"Ó Tuathail",
"Ó Tuathaill",
"Ó Tuathaláin",
"Ó Tuathalín",
"Ó Tuathlainn",
"Ó Tuile",
"Ó Tuimlin",
"Ó Turraoin",
"Ó Téacháin",
"Ó Téidheacháin",
"Ó Tóláin",
"Ó Tórpaigh",
"Ó hAithchir",
"Ó hAlmhain",
"Ó hAnáin",
"Ó hAoidhgin",
"Ó hAonacháin",
"Ó hEachairn",
"Ó hEagáin",
"Ó hEanna",
"Ó hEarchaidh",
"Ó hEarchú",
"Ó hIfearnáin",
"Ó hOileáin",
"Ó hÉadhnú",
"Ó hÉalaí",
"Ó hÉaluighthe",
"Ó hÉidhniú",
"Ó hÉidhní",
"Ó hÉimhigh",
"Ó hÉinniú",
"Ó Ánusaigh",
"ÓBroinín",
)
prefixes_female = ("Mrs.", "Ms.", "Miss", "Dr.")
prefixes_male = ("Mr.", "Dr.")
| Provider |
python | getsentry__sentry | tests/sentry/api/endpoints/test_auth_login.py | {
"start": 197,
"end": 1753
} | class ____(APITestCase):
endpoint = "sentry-api-0-auth-login"
method = "post"
def setUp(self) -> None:
# Requests to set the test cookie
self.client.get(reverse("sentry-api-0-auth-config"))
def test_login_invalid_password(self) -> None:
response = self.get_error_response(
username=self.user.username, password="bizbar", status_code=400
)
assert response.data["errors"]["__all__"] == [
"Please enter a correct username and password. Note that both fields may be case-sensitive."
]
def test_login_valid_credentials(self) -> None:
response = self.get_success_response(username=self.user.username, password="admin")
assert response.data["nextUri"] == "/organizations/new/"
def test_must_reactivate(self) -> None:
self.user.update(is_active=False)
response = self.get_success_response(username=self.user.username, password="admin")
assert response.data["nextUri"] == "/auth/reactivate/"
@patch(
"sentry.api.endpoints.auth_login.ratelimiter.backend.is_limited",
autospec=True,
return_value=True,
)
def test_login_ratelimit(self, is_limited: MagicMock) -> None:
response = self.get_error_response(
username=self.user.username, password="admin", status_code=400
)
assert [str(s) for s in response.data["errors"]["__all__"]] == [
"You have made too many failed authentication attempts. Please try again later."
]
| AuthLoginEndpointTest |
python | huggingface__transformers | src/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py | {
"start": 149403,
"end": 150469
} | class ____(nn.Module):
def __init__(self, dim: int):
super().__init__()
self.dwconv = Qwen3OmniMoeCausalConvNet(
dim,
dim,
kernel_size=7,
groups=dim,
dilation=1,
)
self.norm = nn.LayerNorm(dim, eps=1e-6)
self.pwconv1 = nn.Linear(dim, 4 * dim)
self.act = nn.GELU()
self.pwconv2 = nn.Linear(4 * dim, dim)
self.gamma = nn.Parameter(1e-6 * torch.ones(dim))
def forward(self, hidden_states):
input = hidden_states
hidden_states = self.dwconv(hidden_states)
hidden_states = hidden_states.permute(0, 2, 1)
hidden_states = self.norm(hidden_states)
hidden_states = self.pwconv1(hidden_states)
hidden_states = self.act(hidden_states)
hidden_states = self.pwconv2(hidden_states)
hidden_states = self.gamma * hidden_states
hidden_states = hidden_states.permute(0, 2, 1)
hidden_states = input + hidden_states
return hidden_states
| Qwen3OmniMoeConvNeXtBlock |
python | getsentry__sentry | src/sentry/issues/grouptype.py | {
"start": 18552,
"end": 19062
} | class ____(GroupType):
type_id = 1021
slug = "query_injection_vulnerability"
description = "Potential Query Injection Vulnerability"
category = GroupCategory.PERFORMANCE.value
category_v2 = GroupCategory.DB_QUERY.value
enable_auto_resolve = False
enable_escalation_detection = False
noise_config = NoiseConfig(ignore_limit=10)
default_priority = PriorityLevel.MEDIUM
# 2000 was ProfileBlockingFunctionMainThreadType
@dataclass(frozen=True)
| QueryInjectionVulnerabilityGroupType |
python | huggingface__transformers | src/transformers/models/csm/modeling_csm.py | {
"start": 2425,
"end": 5633
} | class ____(ModelOutput):
r"""
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss (for next-token prediction).
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
`past_key_values` input) to speed up sequential decoding.
depth_decoder_loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss (for next-token prediction) of the depth decoder model.
depth_decoder_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the depth decoder (scores for each vocabulary token before SoftMax).
depth_decoder_past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache).
depth_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
depth_decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
backbone_loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss (for next-token prediction) of the backbone model.
"""
loss: Optional[torch.FloatTensor] = None
logits: Optional[torch.FloatTensor] = None
past_key_values: Optional[Cache] = None
hidden_states: Optional[tuple[torch.FloatTensor, ...]] = None
attentions: Optional[tuple[torch.FloatTensor, ...]] = None
depth_decoder_loss: Optional[torch.FloatTensor] = None
depth_decoder_logits: Optional[torch.FloatTensor] = None
depth_decoder_past_key_values: Optional[Cache] = None
depth_decoder_hidden_states: Optional[tuple[torch.FloatTensor, ...]] = None
depth_decoder_attentions: Optional[tuple[torch.FloatTensor, ...]] = None
backbone_loss: Optional[torch.FloatTensor] = None
@use_kernel_forward_from_hub("RMSNorm")
| CsmOutputWithPast |
python | falconry__falcon | tests/test_utils.py | {
"start": 25689,
"end": 46436
} | class ____:
"""Verify some branches not covered elsewhere."""
def test_path_escape_chars_in_create_environ(self):
env = testing.create_environ('/hello%20world%21')
assert env['PATH_INFO'] == '/hello world!'
def test_no_prefix_allowed_for_query_strings_in_create_environ(self):
with pytest.raises(ValueError):
testing.create_environ(query_string='?foo=bar')
def test_plus_in_path_in_create_environ(self):
env = testing.create_environ('/mnt/grub2/lost+found/inode001')
assert env['PATH_INFO'] == '/mnt/grub2/lost+found/inode001'
def test_none_header_value_in_create_environ(self):
env = testing.create_environ('/', headers={'X-Foo': None})
assert env['HTTP_X_FOO'] == ''
def test_decode_empty_result(self, app):
client = testing.TestClient(app)
response = client.simulate_request(path='/')
assert response.json == falcon.HTTPNotFound().to_dict()
def test_httpnow_alias_for_backwards_compat(self):
# Ensure that both the alias and decorated alias work
assert (
testing.httpnow is falcon.util.http_now
or inspect.unwrap(testing.httpnow) is falcon.util.http_now
)
def test_default_headers(self, app):
resource = testing.SimpleTestResource()
app.add_route('/', resource)
headers = {
'Authorization': 'Bearer 123',
}
client = testing.TestClient(app, headers=headers)
client.simulate_get()
assert resource.captured_req.auth == headers['Authorization']
client.simulate_get(headers=None)
assert resource.captured_req.auth == headers['Authorization']
def test_default_headers_with_override(self, app):
resource = testing.SimpleTestResource()
app.add_route('/', resource)
override_before = 'something-something'
override_after = 'something-something'[::-1]
headers = {
'Authorization': 'Bearer XYZ',
'Accept': 'application/vnd.siren+json',
'X-Override-Me': override_before,
}
client = testing.TestClient(app, headers=headers)
client.simulate_get(headers={'X-Override-Me': override_after})
assert resource.captured_req.auth == headers['Authorization']
assert resource.captured_req.accept == headers['Accept']
assert resource.captured_req.get_header('X-Override-Me') == override_after
def test_status(self, app):
resource = testing.SimpleTestResource(status=falcon.HTTP_702)
app.add_route('/', resource)
client = testing.TestClient(app)
result = client.simulate_get()
assert result.status == falcon.HTTP_702
@pytest.mark.parametrize(
'simulate',
[
testing.simulate_get,
testing.simulate_post,
],
)
@pytest.mark.parametrize(
'value',
(
'd\xff\xff\x00',
'quick fox jumps over the lazy dog',
'{"hello": "WORLD!"}',
'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praese',
'{"hello": "WORLD!", "greetings": "fellow traveller"}',
'\xe9\xe8',
),
)
def test_repr_result_when_body_varies(self, asgi, util, value, simulate):
if isinstance(value, str):
value = bytes(value, 'UTF-8')
if asgi:
resource = testing.SimpleTestResourceAsync(body=value)
else:
resource = testing.SimpleTestResource(body=value)
app = util.create_app(asgi)
app.add_route('/hello', resource)
result = simulate(app, '/hello')
captured_resp = resource.captured_resp
content = captured_resp.text
if len(value) > 40:
content = value[:20] + b'...' + value[-20:]
else:
content = value
args = [
captured_resp.status,
captured_resp.headers['content-type'],
str(content),
]
expected_content = ' '.join(filter(None, args))
expected_result = f'Result<{expected_content}>'
assert str(result) == expected_result
def test_repr_without_content_type_header(self, asgi):
value = b'huh'
header = [('Not-content-type', 'no!')]
result = falcon.testing.Result([value], falcon.HTTP_200, header)
expected_result = f'Result<200 OK {value}>'
assert str(result) == expected_result
@pytest.mark.parametrize(
'simulate',
[
testing.simulate_get,
testing.simulate_post,
],
)
@pytest.mark.parametrize(
'value',
(
'd\xff\xff\x00',
'quick fox jumps over the lazy dog',
'{"hello": "WORLD!"}',
'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praese',
'{"hello": "WORLD!", "greetings": "fellow traveller"}',
'\xe9\xe8',
),
)
def test_rich_repr_result_when_body_varies(self, asgi, util, value, simulate):
if isinstance(value, str):
value = bytes(value, 'UTF-8')
if asgi:
resource = testing.SimpleTestResourceAsync(body=value)
else:
resource = testing.SimpleTestResource(body=value)
app = util.create_app(asgi)
app.add_route('/hello', resource)
result: falcon.testing.Result = simulate(app, '/hello')
captured_resp = resource.captured_resp
content = captured_resp.text
if len(value) > 40:
content = value[:20] + b'...' + value[-20:]
else:
content = value
args = [
captured_resp.status,
captured_resp.headers['content-type'],
str(content),
]
status_color: str
for prefix, color in (
('1', 'blue'),
('2', 'green'),
('3', 'magenta'),
('4', 'red'),
('5', 'red'),
):
if captured_resp.status.startswith(prefix):
status_color = color
result_template = (
'[bold]Result[/]<[bold {}]{}[/] [italic yellow]{}[/] [grey50]{}[/]>'
)
expected_result = result_template.format(status_color, *args)
assert result.__rich__() == expected_result
@pytest.mark.parametrize(
'value',
(
'd\xff\xff\x00',
'quick fox jumps over the lazy dog',
'{"hello": "WORLD!"}',
'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praese',
'{"hello": "WORLD!", "greetings": "fellow traveller"}',
'\xe9\xe8',
),
)
@pytest.mark.parametrize(
'status_color_pair',
(
(falcon.HTTP_101, 'blue'),
(falcon.HTTP_200, 'green'),
(falcon.HTTP_301, 'magenta'),
(falcon.HTTP_404, 'red'),
(falcon.HTTP_500, 'red'),
),
)
def test_rich_repr_with_different_statuses(self, asgi, status_color_pair, value):
expected_status, expected_color = status_color_pair
if isinstance(value, str):
value = bytes(value, 'UTF-8')
result = falcon.testing.Result(
[value], expected_status, [('content-type', 'dummy')]
)
if len(value) > 40:
content = value[:20] + b'...' + value[-20:]
else:
content = value
expected_result_template = (
'[bold]Result[/]<[bold {}]{}[/] [italic yellow]{}[/] [grey50]{}[/]>'
)
expected_result = expected_result_template.format(
expected_color, expected_status, 'dummy', content
)
assert result.__rich__() == expected_result
def test_wsgi_iterable_not_closeable(self):
result = testing.Result([], falcon.HTTP_200, [])
assert not result.content
assert result.json is None
def test_path_must_start_with_slash(self, app):
app.add_route('/', testing.SimpleTestResource())
client = testing.TestClient(app)
with pytest.raises(ValueError):
client.simulate_get('foo')
def test_cached_text_in_result(self, app):
app.add_route('/', testing.SimpleTestResource(body='test'))
client = testing.TestClient(app)
result = client.simulate_get()
assert result.text == result.text
@pytest.mark.parametrize(
'resource_type',
[
testing.SimpleTestResource,
testing.SimpleTestResourceAsync,
],
)
def test_simple_resource_body_json_xor(self, resource_type):
with pytest.raises(ValueError):
resource_type(body='', json={})
def test_query_string(self, app):
class SomeResource:
def on_get(self, req, resp):
doc = {}
doc['oid'] = req.get_param_as_int('oid')
doc['detailed'] = req.get_param_as_bool('detailed')
doc['things'] = req.get_param_as_list('things', int)
doc['query_string'] = req.query_string
resp.text = json.dumps(doc)
app.req_options.auto_parse_qs_csv = True
app.add_route('/', SomeResource())
client = testing.TestClient(app)
result = client.simulate_get(query_string='oid=42&detailed=no&things=1')
assert result.json['oid'] == 42
assert not result.json['detailed']
assert result.json['things'] == [1]
params = {'oid': 42, 'detailed': False}
result = client.simulate_get(params=params)
assert result.json['oid'] == params['oid']
assert not result.json['detailed']
assert result.json['things'] is None
params = {'oid': 1978, 'detailed': 'yes', 'things': [1, 2, 3]}
result = client.simulate_get(params=params)
assert result.json['oid'] == params['oid']
assert result.json['detailed']
assert result.json['things'] == params['things']
expected_qs = 'things=1&things=2&things=3'
result = client.simulate_get(params={'things': [1, 2, 3]})
assert result.json['query_string'] == expected_qs
expected_qs = 'things=1,2,3'
result = client.simulate_get(params={'things': [1, 2, 3]}, params_csv=True)
assert result.json['query_string'] == expected_qs
def test_query_string_no_question(self, app):
app.add_route('/', testing.SimpleTestResource())
client = testing.TestClient(app)
with pytest.raises(ValueError):
client.simulate_get(query_string='?x=1')
def test_query_string_in_path(self, app):
resource = testing.SimpleTestResource()
app.add_route('/thing', resource)
client = testing.TestClient(app)
with pytest.raises(ValueError):
client.simulate_get(path='/thing?x=1', query_string='things=1,2,3')
with pytest.raises(ValueError):
client.simulate_get(path='/thing?x=1', params={'oid': 1978})
with pytest.raises(ValueError):
client.simulate_get(
path='/thing?x=1', query_string='things=1,2,3', params={'oid': 1978}
)
client.simulate_get(path='/thing?detailed=no&oid=1337')
assert resource.captured_req.path == '/thing'
assert resource.captured_req.query_string == 'detailed=no&oid=1337'
@pytest.mark.parametrize(
'document',
[
# NOTE(vytas): using an exact binary fraction here to avoid special
# code branch for approximate equality as it is not the focus here
16.0625,
123456789,
True,
'',
'I am a \u1d0a\ua731\u1d0f\u0274 string.',
[1, 3, 3, 7],
{'message': '\xa1Hello Unicode! \U0001f638'},
{
'count': 4,
'items': [
{'number': 'one'},
{'number': 'two'},
{'number': 'three'},
{'number': 'four'},
],
'next': None,
},
],
)
def test_simulate_json_body(self, asgi, util, document):
resource = (
testing.SimpleTestResourceAsync() if asgi else testing.SimpleTestResource()
)
app = util.create_app(asgi)
app.add_route('/', resource)
json_types = ('application/json', 'application/json; charset=UTF-8')
client = testing.TestClient(app)
client.simulate_post(
'/', json=document, headers={'capture-req-body-bytes': '-1'}
)
assert json.loads(resource.captured_req_body.decode()) == document
assert resource.captured_req.content_type in json_types
headers = {
'Content-Type': 'x-falcon/peregrine',
'X-Falcon-Type': 'peregrine',
'capture-req-media': 'y',
}
body = 'If provided, `json` parameter overrides `body`.'
client.simulate_post('/', headers=headers, body=body, json=document)
assert resource.captured_req_media == document
assert resource.captured_req.content_type in json_types
assert resource.captured_req.get_header('X-Falcon-Type') == 'peregrine'
@pytest.mark.parametrize(
'remote_addr',
[
None,
'127.0.0.1',
'8.8.8.8',
'104.24.101.85',
'2606:4700:30::6818:6455',
],
)
def test_simulate_remote_addr(self, app, remote_addr):
class ShowMyIPResource:
def on_get(self, req, resp):
resp.text = req.remote_addr
resp.content_type = falcon.MEDIA_TEXT
app.add_route('/', ShowMyIPResource())
client = testing.TestClient(app)
resp = client.simulate_get('/', remote_addr=remote_addr)
assert resp.status_code == 200
if remote_addr is None:
assert resp.text == '127.0.0.1'
else:
assert resp.text == remote_addr
def test_simulate_hostname(self, app):
resource = testing.SimpleTestResource()
app.add_route('/', resource)
client = testing.TestClient(app)
client.simulate_get('/', protocol='https', host='falcon.readthedocs.io')
assert resource.captured_req.uri == 'https://falcon.readthedocs.io/'
@pytest.mark.parametrize(
'extras,expected_headers',
[
(
{},
(('user-agent', 'falcon-client/' + falcon.__version__),),
),
(
{'HTTP_USER_AGENT': 'URL/Emacs', 'HTTP_X_FALCON': 'peregrine'},
(('user-agent', 'URL/Emacs'), ('x-falcon', 'peregrine')),
),
],
)
def test_simulate_with_environ_extras(self, extras, expected_headers):
app = falcon.App()
resource = testing.SimpleTestResource()
app.add_route('/', resource)
client = testing.TestClient(app)
client.simulate_get('/', extras=extras)
for header, value in expected_headers:
assert resource.captured_req.get_header(header) == value
def test_override_method_with_extras(self, asgi, util):
app = util.create_app(asgi)
app.add_route('/', testing.SimpleTestResource(body='test'))
client = testing.TestClient(app)
with pytest.raises(ValueError):
if asgi:
client.simulate_get('/', extras={'method': 'PATCH'})
else:
client.simulate_get('/', extras={'REQUEST_METHOD': 'PATCH'})
result = client.simulate_get('/', extras={'REQUEST_METHOD': 'GET'})
assert result.status_code == 200
assert result.text == 'test'
@pytest.mark.parametrize(
'content_type',
[
'application/json',
'application/json; charset=UTF-8',
'application/yaml',
],
)
def test_simulate_content_type(self, util, content_type):
class MediaMirror:
def on_post(self, req, resp):
resp.media = req.media
app = util.create_app(asgi=False)
app.add_route('/', MediaMirror())
client = testing.TestClient(app)
headers = {'Content-Type': content_type}
payload = b'{"hello": "world"}'
resp = client.simulate_post('/', headers=headers, body=payload)
if MEDIA_JSON in content_type:
assert resp.status_code == 200
assert resp.json == {'hello': 'world'}
else:
# JSON handler should not have been called for YAML
assert resp.status_code == 415
@pytest.mark.parametrize(
'content_type',
[
MEDIA_JSON,
MEDIA_JSON + '; charset=UTF-8',
MEDIA_YAML,
MEDIA_MSGPACK,
MEDIA_URLENCODED,
],
)
def test_simulate_content_type_extra_handler(
self, asgi, util, content_type, msgpack
):
class TestResourceAsync(testing.SimpleTestResourceAsync):
def __init__(self):
super().__init__()
async def on_post(self, req, resp):
await super().on_post(req, resp)
resp.media = {'hello': 'back'}
resp.content_type = content_type
class TestResource(testing.SimpleTestResource):
def __init__(self):
super().__init__()
def on_post(self, req, resp):
super().on_post(req, resp)
resp.media = {'hello': 'back'}
resp.content_type = content_type
resource = TestResourceAsync() if asgi else TestResource()
app = util.create_app(asgi)
app.add_route('/', resource)
json_handler = TrackingJSONHandler()
msgpack_handler = TrackingMessagePackHandler()
form_handler = TrackingFormHandler()
# NOTE(kgriffs): Do not use MEDIA_* so that we can sanity-check that
# our constants that are used in the pytest parametrization match
# up to what we expect them to be.
extra_handlers = {
'application/json': json_handler,
'application/msgpack': msgpack_handler,
'application/x-www-form-urlencoded': form_handler,
}
app.req_options.media_handlers.update(extra_handlers)
app.resp_options.media_handlers.update(extra_handlers)
client = testing.TestClient(app)
headers = {
'Content-Type': content_type,
'capture-req-media': 'y',
}
if MEDIA_JSON in content_type:
payload = b'{"hello": "world"}'
elif content_type == MEDIA_MSGPACK:
payload = b'\x81\xa5hello\xa5world'
elif content_type == MEDIA_URLENCODED:
payload = b'hello=world'
else:
payload = None
resp = client.simulate_post('/', headers=headers, body=payload)
if MEDIA_JSON in content_type:
assert resp.status_code == 200
assert resp.json == {'hello': 'back'}
# Test that our custom deserializer was called
assert json_handler.deserialize_count == 1
assert resource.captured_req_media == {'hello': 'world'}
# Verify that other handlers were not called
assert msgpack_handler.deserialize_count == 0
assert form_handler.deserialize_count == 0
elif content_type == MEDIA_MSGPACK:
assert resp.status_code == 200
assert resp.content == b'\x81\xa5hello\xa4back'
# Test that our custom deserializer was called
assert msgpack_handler.deserialize_count == 1
assert resource.captured_req_media == {'hello': 'world'}
# Verify that other handlers were not called
assert json_handler.deserialize_count == 0
assert form_handler.deserialize_count == 0
elif content_type == MEDIA_URLENCODED:
assert resp.status_code == 200
assert resp.content == b'hello=back'
# Test that our custom deserializer was called
assert form_handler.deserialize_count == 1
assert resource.captured_req_media == {'hello': 'world'}
# Verify that other handlers were not called
assert json_handler.deserialize_count == 0
assert msgpack_handler.deserialize_count == 0
else:
# YAML should not get handled
for handler in (json_handler, msgpack_handler):
assert handler.deserialize_count == 0
assert resource.captured_req_media is None
assert resp.status_code == 415
| TestFalconTestingUtils |
python | qdrant__qdrant-client | qdrant_client/http/models/models.py | {
"start": 146963,
"end": 147824
} | class ____(BaseModel):
index_name: Optional[str] = Field(default=None, description="")
unfiltered_plain: "OperationDurationStatistics" = Field(..., description="")
unfiltered_hnsw: "OperationDurationStatistics" = Field(..., description="")
unfiltered_sparse: "OperationDurationStatistics" = Field(..., description="")
filtered_plain: "OperationDurationStatistics" = Field(..., description="")
filtered_small_cardinality: "OperationDurationStatistics" = Field(..., description="")
filtered_large_cardinality: "OperationDurationStatistics" = Field(..., description="")
filtered_exact: "OperationDurationStatistics" = Field(..., description="")
filtered_sparse: "OperationDurationStatistics" = Field(..., description="")
unfiltered_exact: "OperationDurationStatistics" = Field(..., description="")
| VectorIndexSearchesTelemetry |
python | run-llama__llama_index | llama-index-integrations/evaluation/llama-index-evaluation-tonic-validate/llama_index/evaluation/tonic_validate/augmentation_accuracy.py | {
"start": 364,
"end": 2000
} | class ____(BaseEvaluator):
"""
Tonic Validate's augmentation accuracy metric.
The output score is a float between 0.0 and 1.0.
See https://docs.tonic.ai/validate/ for more details.
Args:
openai_service(OpenAIService): The OpenAI service to use. Specifies the chat
completion model to use as the LLM evaluator. Defaults to "gpt-4".
"""
def __init__(self, openai_service: Optional[Any] = None):
if openai_service is None:
openai_service = OpenAIService("gpt-4")
self.openai_service = openai_service
self.metric = AugmentationAccuracyMetric()
async def aevaluate(
self,
query: Optional[str] = None,
response: Optional[str] = None,
contexts: Optional[Sequence[str]] = None,
**kwargs: Any,
) -> EvaluationResult:
from tonic_validate.classes.benchmark import BenchmarkItem
from tonic_validate.classes.llm_response import LLMResponse
benchmark_item = BenchmarkItem(question=query)
llm_response = LLMResponse(
llm_answer=response,
llm_context_list=contexts,
benchmark_item=benchmark_item,
)
score = self.metric.score(llm_response, self.openai_service)
return EvaluationResult(
query=query, contexts=contexts, response=response, score=score
)
def _get_prompts(self) -> PromptDictType:
return {}
def _get_prompt_modules(self) -> PromptMixinType:
return {}
def _update_prompts(self, prompts_dict: PromptDictType) -> None:
return
| AugmentationAccuracyEvaluator |
python | PyCQA__pylint | tests/functional/a/alternative/alternative_union_syntax_error.py | {
"start": 2811,
"end": 2929
} | class ____:
my_var: int | str # [unsupported-binary-operation]
@my_decorator
@dataclasses.dataclass
| CustomDataClass3 |
python | apache__airflow | providers/amazon/tests/unit/amazon/aws/operators/test_dms.py | {
"start": 19081,
"end": 21304
} | class ____:
filter = [{"Name": "replication-type", "Values": ["cdc"]}]
def test_init(self):
op = DmsDescribeReplicationConfigsOperator(task_id="test_task")
assert op.filter is None
@pytest.mark.db_test
@mock.patch.object(DmsHook, "conn")
def test_template_fields_native(
self, mock_conn, session, clean_dags_dagruns_and_dagbundles, testing_dag_bundle
):
logical_date = timezone.datetime(2020, 1, 1)
Variable.set("test_filter", self.filter, session=session)
dag = DAG(
"test_dms",
schedule=None,
start_date=logical_date,
render_template_as_native_obj=True,
)
op = DmsDescribeReplicationConfigsOperator(
task_id="test_task", filter="{{ var.value.test_filter }}", dag=dag
)
if AIRFLOW_V_3_0_PLUS:
sync_dag_to_db(dag)
dag_version = DagVersion.get_latest_version(dag.dag_id)
ti = TaskInstance(task=op, dag_version_id=dag_version.id)
dag_run = DagRun(
dag_id=dag.dag_id,
run_id="test",
run_type=DagRunType.MANUAL,
state=DagRunState.RUNNING,
logical_date=logical_date,
)
else:
dag_run = DagRun(
dag_id=dag.dag_id,
run_id="test",
run_type=DagRunType.MANUAL,
state=DagRunState.RUNNING,
execution_date=logical_date,
)
ti = TaskInstance(task=op)
ti.dag_run = dag_run
session.add(ti)
session.commit()
context = ti.get_template_context(session)
ti.render_templates(context)
assert op.filter == self.filter
| TestDmsDescribeReplicationConfigsOperator |
python | pypa__pipenv | pipenv/patched/pip/_vendor/rich/live.py | {
"start": 1044,
"end": 14270
} | class ____(JupyterMixin, RenderHook):
"""Renders an auto-updating live display of any given renderable.
Args:
renderable (RenderableType, optional): The renderable to live display. Defaults to displaying nothing.
console (Console, optional): Optional Console instance. Defaults to an internal Console instance writing to stdout.
screen (bool, optional): Enable alternate screen mode. Defaults to False.
auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()` or `update()` with refresh flag. Defaults to True
refresh_per_second (float, optional): Number of times per second to refresh the live display. Defaults to 4.
transient (bool, optional): Clear the renderable on exit (has no effect when screen=True). Defaults to False.
redirect_stdout (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True.
redirect_stderr (bool, optional): Enable redirection of stderr. Defaults to True.
vertical_overflow (VerticalOverflowMethod, optional): How to handle renderable when it is too tall for the console. Defaults to "ellipsis".
get_renderable (Callable[[], RenderableType], optional): Optional callable to get renderable. Defaults to None.
"""
def __init__(
self,
renderable: Optional[RenderableType] = None,
*,
console: Optional[Console] = None,
screen: bool = False,
auto_refresh: bool = True,
refresh_per_second: float = 4,
transient: bool = False,
redirect_stdout: bool = True,
redirect_stderr: bool = True,
vertical_overflow: VerticalOverflowMethod = "ellipsis",
get_renderable: Optional[Callable[[], RenderableType]] = None,
) -> None:
assert refresh_per_second > 0, "refresh_per_second must be > 0"
self._renderable = renderable
self.console = console if console is not None else get_console()
self._screen = screen
self._alt_screen = False
self._redirect_stdout = redirect_stdout
self._redirect_stderr = redirect_stderr
self._restore_stdout: Optional[IO[str]] = None
self._restore_stderr: Optional[IO[str]] = None
self._lock = RLock()
self.ipy_widget: Optional[Any] = None
self.auto_refresh = auto_refresh
self._started: bool = False
self.transient = True if screen else transient
self._refresh_thread: Optional[_RefreshThread] = None
self.refresh_per_second = refresh_per_second
self.vertical_overflow = vertical_overflow
self._get_renderable = get_renderable
self._live_render = LiveRender(
self.get_renderable(), vertical_overflow=vertical_overflow
)
@property
def is_started(self) -> bool:
"""Check if live display has been started."""
return self._started
def get_renderable(self) -> RenderableType:
renderable = (
self._get_renderable()
if self._get_renderable is not None
else self._renderable
)
return renderable or ""
def start(self, refresh: bool = False) -> None:
"""Start live rendering display.
Args:
refresh (bool, optional): Also refresh. Defaults to False.
"""
with self._lock:
if self._started:
return
self.console.set_live(self)
self._started = True
if self._screen:
self._alt_screen = self.console.set_alt_screen(True)
self.console.show_cursor(False)
self._enable_redirect_io()
self.console.push_render_hook(self)
if refresh:
try:
self.refresh()
except Exception:
# If refresh fails, we want to stop the redirection of sys.stderr,
# so the error stacktrace is properly displayed in the terminal.
# (or, if the code that calls Rich captures the exception and wants to display something,
# let this be displayed in the terminal).
self.stop()
raise
if self.auto_refresh:
self._refresh_thread = _RefreshThread(self, self.refresh_per_second)
self._refresh_thread.start()
def stop(self) -> None:
"""Stop live rendering display."""
with self._lock:
if not self._started:
return
self.console.clear_live()
self._started = False
if self.auto_refresh and self._refresh_thread is not None:
self._refresh_thread.stop()
self._refresh_thread = None
# allow the final frame to fully render even if it overflows
self.vertical_overflow = "visible"
with self.console:
try:
if not self._alt_screen and not self.console.is_jupyter:
self.refresh()
finally:
self._disable_redirect_io()
self.console.pop_render_hook()
if not self._alt_screen and self.console.is_terminal:
self.console.line()
self.console.show_cursor(True)
if self._alt_screen:
self.console.set_alt_screen(False)
if self.transient and not self._alt_screen:
self.console.control(self._live_render.restore_cursor())
if self.ipy_widget is not None and self.transient:
self.ipy_widget.close() # pragma: no cover
def __enter__(self) -> "Live":
self.start(refresh=self._renderable is not None)
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc_val: Optional[BaseException],
exc_tb: Optional[TracebackType],
) -> None:
self.stop()
def _enable_redirect_io(self) -> None:
"""Enable redirecting of stdout / stderr."""
if self.console.is_terminal or self.console.is_jupyter:
if self._redirect_stdout and not isinstance(sys.stdout, FileProxy):
self._restore_stdout = sys.stdout
sys.stdout = cast("TextIO", FileProxy(self.console, sys.stdout))
if self._redirect_stderr and not isinstance(sys.stderr, FileProxy):
self._restore_stderr = sys.stderr
sys.stderr = cast("TextIO", FileProxy(self.console, sys.stderr))
def _disable_redirect_io(self) -> None:
"""Disable redirecting of stdout / stderr."""
if self._restore_stdout:
sys.stdout = cast("TextIO", self._restore_stdout)
self._restore_stdout = None
if self._restore_stderr:
sys.stderr = cast("TextIO", self._restore_stderr)
self._restore_stderr = None
@property
def renderable(self) -> RenderableType:
"""Get the renderable that is being displayed
Returns:
RenderableType: Displayed renderable.
"""
renderable = self.get_renderable()
return Screen(renderable) if self._alt_screen else renderable
def update(self, renderable: RenderableType, *, refresh: bool = False) -> None:
"""Update the renderable that is being displayed
Args:
renderable (RenderableType): New renderable to use.
refresh (bool, optional): Refresh the display. Defaults to False.
"""
if isinstance(renderable, str):
renderable = self.console.render_str(renderable)
with self._lock:
self._renderable = renderable
if refresh:
self.refresh()
def refresh(self) -> None:
"""Update the display of the Live Render."""
with self._lock:
self._live_render.set_renderable(self.renderable)
if self.console.is_jupyter: # pragma: no cover
try:
from IPython.display import display
from ipywidgets import Output
except ImportError:
import warnings
warnings.warn('install "ipywidgets" for Jupyter support')
else:
if self.ipy_widget is None:
self.ipy_widget = Output()
display(self.ipy_widget)
with self.ipy_widget:
self.ipy_widget.clear_output(wait=True)
self.console.print(self._live_render.renderable)
elif self.console.is_terminal and not self.console.is_dumb_terminal:
with self.console:
self.console.print(Control())
elif (
not self._started and not self.transient
): # if it is finished allow files or dumb-terminals to see final result
with self.console:
self.console.print(Control())
def process_renderables(
self, renderables: List[ConsoleRenderable]
) -> List[ConsoleRenderable]:
"""Process renderables to restore cursor and display progress."""
self._live_render.vertical_overflow = self.vertical_overflow
if self.console.is_interactive:
# lock needs acquiring as user can modify live_render renderable at any time unlike in Progress.
with self._lock:
reset = (
Control.home()
if self._alt_screen
else self._live_render.position_cursor()
)
renderables = [reset, *renderables, self._live_render]
elif (
not self._started and not self.transient
): # if it is finished render the final output for files or dumb_terminals
renderables = [*renderables, self._live_render]
return renderables
if __name__ == "__main__": # pragma: no cover
import random
import time
from itertools import cycle
from typing import Dict, List, Tuple
from .align import Align
from .console import Console
from .live import Live as Live
from .panel import Panel
from .rule import Rule
from .syntax import Syntax
from .table import Table
console = Console()
syntax = Syntax(
'''def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
"""Iterate and generate a tuple with a flag for last value."""
iter_values = iter(values)
try:
previous_value = next(iter_values)
except StopIteration:
return
for value in iter_values:
yield False, previous_value
previous_value = value
yield True, previous_value''',
"python",
line_numbers=True,
)
table = Table("foo", "bar", "baz")
table.add_row("1", "2", "3")
progress_renderables = [
"You can make the terminal shorter and taller to see the live table hide",
"Text may be printed while the progress bars are rendering.",
Panel("In fact, [i]any[/i] renderable will work"),
"Such as [magenta]tables[/]...",
table,
"Pretty printed structures...",
{"type": "example", "text": "Pretty printed"},
"Syntax...",
syntax,
Rule("Give it a try!"),
]
examples = cycle(progress_renderables)
exchanges = [
"SGD",
"MYR",
"EUR",
"USD",
"AUD",
"JPY",
"CNH",
"HKD",
"CAD",
"INR",
"DKK",
"GBP",
"RUB",
"NZD",
"MXN",
"IDR",
"TWD",
"THB",
"VND",
]
with Live(console=console) as live_table:
exchange_rate_dict: Dict[Tuple[str, str], float] = {}
for index in range(100):
select_exchange = exchanges[index % len(exchanges)]
for exchange in exchanges:
if exchange == select_exchange:
continue
time.sleep(0.4)
if random.randint(0, 10) < 1:
console.log(next(examples))
exchange_rate_dict[(select_exchange, exchange)] = 200 / (
(random.random() * 320) + 1
)
if len(exchange_rate_dict) > len(exchanges) - 1:
exchange_rate_dict.pop(list(exchange_rate_dict.keys())[0])
table = Table(title="Exchange Rates")
table.add_column("Source Currency")
table.add_column("Destination Currency")
table.add_column("Exchange Rate")
for (source, dest), exchange_rate in exchange_rate_dict.items():
table.add_row(
source,
dest,
Text(
f"{exchange_rate:.4f}",
style="red" if exchange_rate < 1.0 else "green",
),
)
live_table.update(Align.center(table))
| Live |
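With `auto_refresh` enabled, `Live.start` spawns a background `_RefreshThread` that calls `refresh()` on a timer until `stop()` is invoked, while `refresh()` itself takes a lock because the renderable can be swapped from another thread at any time. A minimal stdlib sketch of that pattern (class names here are illustrative, not Rich's actual API):

```python
import threading
import time


class RefreshThread(threading.Thread):
    """Background thread that calls live.refresh() at a fixed rate,
    mirroring the role of Rich's _RefreshThread."""

    def __init__(self, live, refresh_per_second: float) -> None:
        super().__init__(daemon=True)
        self.live = live
        self.interval = 1.0 / refresh_per_second
        self.done = threading.Event()

    def stop(self) -> None:
        self.done.set()
        self.join()

    def run(self) -> None:
        # Event.wait doubles as the sleep and the stop signal.
        while not self.done.wait(self.interval):
            self.live.refresh()


class MiniLive:
    """Tiny stand-in that counts refreshes under a lock, as Live does."""

    def __init__(self) -> None:
        self._lock = threading.RLock()
        self.refresh_count = 0

    def refresh(self) -> None:
        with self._lock:
            self.refresh_count += 1


live = MiniLive()
thread = RefreshThread(live, refresh_per_second=50)
thread.start()
time.sleep(0.2)  # let a few refresh ticks happen
thread.stop()
```

Stopping via an `Event` rather than a bare flag means `stop()` wakes the thread immediately instead of waiting out the current sleep interval.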
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/streams.py | {
"start": 23762,
"end": 25644
} | class ____(SemiIncrementalMixin, GithubStream):
"""
API docs: https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#list-pull-requests
"""
use_cache = True
large_stream = True
def __init__(self, **kwargs):
super().__init__(**kwargs)
self._first_read = True
def read_records(self, stream_state: Mapping[str, Any] = None, **kwargs) -> Iterable[Mapping[str, Any]]:
"""
Decide whether this is a first read by the presence of the state object
"""
self._first_read = not bool(stream_state)
yield from super().read_records(stream_state=stream_state, **kwargs)
def path(self, stream_slice: Mapping[str, Any] = None, **kwargs) -> str:
return f"repos/{stream_slice['repository']}/pulls"
def transform(self, record: MutableMapping[str, Any], stream_slice: Mapping[str, Any]) -> MutableMapping[str, Any]:
record = super().transform(record=record, stream_slice=stream_slice)
for nested in ("head", "base"):
entry = record.get(nested, {})
entry["repo_id"] = (entry.pop("repo", {}) or {}).get("id")
return record
def request_params(self, **kwargs) -> MutableMapping[str, Any]:
base_params = super().request_params(**kwargs)
# The very first time we read this stream we want to read ascending so we can save state in case of
# a halfway failure. But if there is state, we read descending to allow incremental behavior.
params = {"state": "all", "sort": "updated", "direction": self.is_sorted}
return {**base_params, **params}
@property
def is_sorted(self) -> str:
"""
Depending on whether there is any state, we read the stream in ascending or descending order.
"""
if self._first_read:
return "asc"
return "desc"
| PullRequests |
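The `is_sorted` property above encodes a small but important rule: read ascending on the very first sync so state can be checkpointed safely mid-run, and descending once state exists so the incremental read can stop early. Reduced to a standalone function (hypothetical name, same logic):

```python
def sort_direction(stream_state) -> str:
    """Mirror the PullRequests ordering rule: ascending on a first read
    (no saved state), descending on later incremental reads."""
    first_read = not bool(stream_state)
    return "asc" if first_read else "desc"


print(sort_direction(None))                           # first sync
print(sort_direction({"updated_at": "2024-01-01"}))   # incremental sync
```

Note that an empty state dict counts as a first read, matching `not bool(stream_state)` in `read_records`.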
python | tensorflow__tensorflow | tensorflow/python/kernel_tests/control_flow/cond_v2_test.py | {
"start": 2438,
"end": 48258
} | class ____(test.TestCase):
def _testCond(self, true_fn, false_fn, train_vals, feed_dict=None):
if not feed_dict:
feed_dict = {}
with self.session(graph=ops.get_default_graph()) as sess:
pred = array_ops.placeholder(dtypes.bool, name="pred")
expected = tf_cond.cond(
array_ops.squeeze_v2(pred), true_fn, false_fn, name="expected")
actual = cond_v2.cond_v2(pred, true_fn, false_fn, name="actual")
expected_grad = gradients_impl.gradients(expected, train_vals)
actual_grad = gradients_impl.gradients(actual, train_vals)
sess_run_args = {pred: True}
sess_run_args.update(feed_dict)
expected_val, actual_val, expected_grad_val, actual_grad_val = sess.run(
(expected, actual, expected_grad, actual_grad), sess_run_args)
self.assertEqual(expected_val, actual_val)
self.assertEqual(expected_grad_val, actual_grad_val)
sess_run_args = {pred: [[True]]}
sess_run_args.update(feed_dict)
expected_val, actual_val, expected_grad_val, actual_grad_val = sess.run(
(expected, actual, expected_grad, actual_grad), sess_run_args)
self.assertEqual(expected_val, actual_val)
self.assertEqual(expected_grad_val, actual_grad_val)
sess_run_args = {pred: False}
sess_run_args.update(feed_dict)
expected_val, actual_val, expected_grad_val, actual_grad_val = sess.run(
(expected, actual, expected_grad, actual_grad), sess_run_args)
self.assertEqual(expected_val, actual_val)
self.assertEqual(expected_grad_val, actual_grad_val)
sess_run_args = {pred: [[False]]}
sess_run_args.update(feed_dict)
expected_val, actual_val, expected_grad_val, actual_grad_val = sess.run(
(expected, actual, expected_grad, actual_grad), sess_run_args)
self.assertEqual(expected_val, actual_val)
self.assertEqual(expected_grad_val, actual_grad_val)
@test_util.run_deprecated_v1
def testBasic(self):
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
return x * 2.0
def false_fn():
return y * 3.0
self._testCond(true_fn, false_fn, [x])
self._testCond(true_fn, false_fn, [x, y])
self._testCond(true_fn, false_fn, [y])
def testReturnsIndexedSlicesAndNones(self):
@def_function.function
def build_cond_with_indexed_slices():
pred = constant_op.constant(True)
def true_fn():
return math_ops._as_indexed_slices(constant_op.constant([1.])), None
def false_fn():
return math_ops._as_indexed_slices(constant_op.constant([2.])), None
result = cond_v2.cond_v2(pred, true_fn, false_fn)
self.assertIsNone(result[1])
return ops.convert_to_tensor(result[0])
output = build_cond_with_indexed_slices()
self.assertAllEqual(output, [1.])
def testReturnsNonesAndIndexedSlices(self):
@def_function.function
def build_cond_with_indexed_slices():
pred = constant_op.constant(True)
def true_fn():
return (None, None, None,
math_ops._as_indexed_slices(constant_op.constant([1.])))
def false_fn():
return (None, None, None,
math_ops._as_indexed_slices(constant_op.constant([2.])))
result = cond_v2.cond_v2(pred, true_fn, false_fn)
self.assertIsNone(result[0])
self.assertIsNone(result[1])
self.assertIsNone(result[2])
return ops.convert_to_tensor(result[3])
output = build_cond_with_indexed_slices()
self.assertAllEqual(output, [1.])
def testCondNestedFunctionGradientWithSavedModel(self):
class Model(module_lib.Module):
def __init__(self):
self.v = resource_variable_ops.ResourceVariable([[1., 1.], [1., 1.]])
@def_function.function
def call(self, x, cond):
@def_function.function
def true_fn():
# Einsum doesn't have a symbolic gradient op registered.
# Taking gradient of an einsum op will fail if its python gradient
# function is not found after loaded from a SavedModel.
return gen_linalg_ops.einsum([x, self.v], "ab,bc->ac")
@def_function.function
def false_fn():
return x
return cond_v2.cond_v2(cond > 0, true_fn, false_fn)
model = Model()
x = constant_op.constant([[1., 1.], [1., 1.]])
cond = constant_op.constant(1.)
with backprop.GradientTape() as tape:
y = tape.gradient(model.call(x, cond), model.v)
self.assertAllEqual(y, [[2., 2.], [2., 2.]])
saved_model_dir = os.path.join(self.create_tempdir(), "saved_model")
save_lib.save(model, saved_model_dir)
loaded_model = load_lib.load(saved_model_dir)
with backprop.GradientTape() as tape:
y = tape.gradient(loaded_model.call(x, cond), loaded_model.v)
self.assertAllEqual(y, [[2., 2.], [2., 2.]])
def testCondNestedFunctionGradientWithXlaDynamicCondition(self):
v = resource_variable_ops.ResourceVariable([[1., 1.], [1., 1.]])
@def_function.function(
jit_compile=True,
input_signature=[
tensor_spec.TensorSpec([None, 2]),
tensor_spec.TensorSpec([]),
],
)
def f(x, cond):
@def_function.function
def true_fn():
return gen_linalg_ops.einsum([x, v], "ab,bc->ac")
@def_function.function
def false_fn():
return x
return cond_v2.cond_v2(cond > 0, true_fn, false_fn)
x = constant_op.constant([[1., 1.], [1., 1.]])
cond = constant_op.constant(1.)
with backprop.GradientTape() as tape:
# Shape of x in HLO graph should be [<=2, 2].
y = tape.gradient(f(x, cond), v)
self.assertAllEqual(y, [[2., 2.], [2., 2.]])
x = constant_op.constant([[1., 1.], [1., 1.], [1., 1.]])
with backprop.GradientTape() as tape:
# HLO graph should be re-compiled to handle x with shape [<=3, 2].
y = tape.gradient(f(x, cond), v)
self.assertAllEqual(y, [[3., 3.], [3., 3.]])
def testExternalControlDependencies(self):
with ops.Graph().as_default(), self.test_session():
v = variables.Variable(1.0)
self.evaluate(v.initializer)
op = v.assign_add(1.0)
def true_branch():
with ops.control_dependencies([op]):
return 1.0
cond_v2.cond_v2(array_ops.placeholder_with_default(False, None),
true_branch,
lambda: 2.0).eval()
self.assertAllEqual(self.evaluate(v), 2.0)
@test_util.run_deprecated_v1
def testMultipleOutputs(self):
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(3.0, name="y")
def true_fn():
return x * y, y
def false_fn():
return x, y * 3.0
self._testCond(true_fn, false_fn, [x])
self._testCond(true_fn, false_fn, [x, y])
self._testCond(true_fn, false_fn, [y])
@test_util.run_deprecated_v1
def testBasic2(self):
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
return x * y * 2.0
def false_fn():
return 2.0
self._testCond(true_fn, false_fn, [x])
self._testCond(true_fn, false_fn, [x, y])
self._testCond(true_fn, false_fn, [y])
@test_util.run_deprecated_v1
def testNoInputs(self):
with self.cached_session() as sess:
pred = array_ops.placeholder(dtypes.bool, name="pred")
def true_fn():
return constant_op.constant(1.0)
def false_fn():
return constant_op.constant(2.0)
out = cond_v2.cond_v2(pred, true_fn, false_fn)
self.assertEqual(sess.run(out, {pred: True}), (1.0,))
self.assertEqual(sess.run(out, {pred: False}), (2.0,))
def _createCond(self, name, use_fast_cond=False):
"""Creates a cond_v2 call and returns the output tensor and the cond op."""
pred = constant_op.constant(True, name="pred")
x = constant_op.constant(1.0, name="x")
def true_fn():
return x
def false_fn():
return x + 1
if use_fast_cond:
output = cond_v2.fast_cond_v2(pred, true_fn, false_fn, name=name)
cond_op = output.op
else:
output = cond_v2.cond_v2(pred, true_fn, false_fn, name=name)
cond_op = output.op.inputs[0].op
self.assertEqual(cond_op.type, "StatelessIf")
return output, cond_op
def _createNestedCond(self, name):
"""Like _createCond but creates a nested cond_v2 call as well."""
pred = constant_op.constant(True, name="pred")
x = constant_op.constant(1.0, name="x")
def true_fn():
return cond_v2.cond_v2(pred, lambda: x, lambda: x + 1)
def false_fn():
return x + 2
output = cond_v2.cond_v2(pred, true_fn, false_fn, name=name)
cond_op = output.op.inputs[0].op
self.assertEqual(cond_op.type, "StatelessIf")
return output, cond_op
def testDefaultName(self):
with ops.Graph().as_default():
_, cond_op = self._createCond(None)
self.assertEqual(cond_op.name, "cond")
self.assertRegex(cond_op.get_attr("then_branch").name, r"cond_true_\d*")
self.assertRegex(cond_op.get_attr("else_branch").name, r"cond_false_\d*")
with ops.Graph().as_default():
with ops.name_scope("foo"):
_, cond1_op = self._createCond("")
self.assertEqual(cond1_op.name, "foo/cond")
self.assertRegex(
cond1_op.get_attr("then_branch").name, r"foo_cond_true_\d*")
self.assertRegex(
cond1_op.get_attr("else_branch").name, r"foo_cond_false_\d*")
_, cond2_op = self._createCond(None)
self.assertEqual(cond2_op.name, "foo/cond_1")
self.assertRegex(
cond2_op.get_attr("then_branch").name, r"foo_cond_1_true_\d*")
self.assertRegex(
cond2_op.get_attr("else_branch").name, r"foo_cond_1_false_\d*")
@test_util.run_v2_only
def testInheritParentNameScope(self):
@def_function.function
def f():
with ops.name_scope("foo"):
def then_branch():
with ops.name_scope("then"):
actual_name_scope = ops.get_name_scope()
expected_name_scope = "foo/cond/then"
self.assertEqual(actual_name_scope, expected_name_scope)
return 0.
def else_branch():
with ops.name_scope("else"):
actual_name_scope = ops.get_name_scope()
expected_name_scope = "foo/cond/else"
self.assertEqual(actual_name_scope, expected_name_scope)
return 0.
return cond_v2.cond_v2(
constant_op.constant(True), then_branch, else_branch)
f()
@test_util.run_v1_only("b/120545219")
def testFunctionInCond(self):
with ops.Graph().as_default():
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
@def_function.function
def fn():
return x * y * 2.0
return fn()
def false_fn():
return 2.0
self._testCond(true_fn, false_fn, [x])
self._testCond(true_fn, false_fn, [x, y])
self._testCond(true_fn, false_fn, [y])
@test_util.run_deprecated_v1
def testNestedFunctionInCond(self):
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
return 2.0
def false_fn():
@def_function.function
def fn():
@def_function.function
def nested_fn():
return x * y * 2.0
return nested_fn()
return fn()
self._testCond(true_fn, false_fn, [x])
self._testCond(true_fn, false_fn, [x, y])
self._testCond(true_fn, false_fn, [y])
@test_util.run_deprecated_v1
def testDoubleNestedFunctionInCond(self):
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
@def_function.function
def fn():
@def_function.function
def nested_fn():
@def_function.function
def nested_nested_fn():
return x * y * 2.0
return nested_nested_fn()
return nested_fn()
return fn()
def false_fn():
return 2.0
self._testCond(true_fn, false_fn, [x])
self._testCond(true_fn, false_fn, [x, y])
self._testCond(true_fn, false_fn, [y])
def testNestedCond(self):
def run_test(pred_value):
def build_graph():
pred = array_ops.placeholder(dtypes.bool, name="pred")
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
return 2.0
def false_fn():
def false_true_fn():
return x * y * 2.0
def false_false_fn():
return x * 5.0
return _cond(pred, false_true_fn, false_false_fn, "inside_false_fn")
return x, y, pred, true_fn, false_fn
with ops.Graph().as_default():
x, y, pred, true_fn, false_fn = build_graph()
self._testCond(true_fn, false_fn, [x, y], {pred: pred_value})
self._testCond(true_fn, false_fn, [x], {pred: pred_value})
self._testCond(true_fn, false_fn, [y], {pred: pred_value})
run_test(True)
run_test(False)
def testNestedCondBothBranches(self):
def run_test(pred_value):
def build_graph():
pred = array_ops.placeholder(dtypes.bool, name="pred")
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
return _cond(pred, lambda: x + y, lambda: x * x, name=None)
def false_fn():
return _cond(pred, lambda: x - y, lambda: y * y, name=None)
return x, y, pred, true_fn, false_fn
with ops.Graph().as_default():
x, y, pred, true_fn, false_fn = build_graph()
self._testCond(true_fn, false_fn, [x, y], {pred: pred_value})
self._testCond(true_fn, false_fn, [x], {pred: pred_value})
self._testCond(true_fn, false_fn, [y], {pred: pred_value})
run_test(True)
run_test(False)
def testDoubleNestedCond(self):
def run_test(pred1_value, pred2_value):
def build_graph():
pred1 = array_ops.placeholder(dtypes.bool, name="pred1")
pred2 = array_ops.placeholder(dtypes.bool, name="pred2")
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
return 2.0
def false_fn():
def false_true_fn():
def false_true_true_fn():
return x * y * 2.0
def false_true_false_fn():
return x * 10.0
return _cond(
pred1,
false_true_true_fn,
false_true_false_fn,
name="inside_false_true_fn")
def false_false_fn():
return x * 5.0
return _cond(
pred2, false_true_fn, false_false_fn, name="inside_false_fn")
return x, y, pred1, pred2, true_fn, false_fn
with ops.Graph().as_default():
x, y, pred1, pred2, true_fn, false_fn = build_graph()
self._testCond(true_fn, false_fn, [x, y], {
pred1: pred1_value,
pred2: pred2_value
})
x, y, pred1, pred2, true_fn, false_fn = build_graph()
self._testCond(true_fn, false_fn, [x], {
pred1: pred1_value,
pred2: pred2_value
})
x, y, pred1, pred2, true_fn, false_fn = build_graph()
self._testCond(true_fn, false_fn, [y], {
pred1: pred1_value,
pred2: pred2_value
})
run_test(True, True)
run_test(True, False)
run_test(False, False)
run_test(False, True)
def testGradientFromInsideFunction(self):
def build_graph():
pred_outer = array_ops.placeholder(dtypes.bool, name="pred_outer")
pred_inner = array_ops.placeholder(dtypes.bool, name="pred_inner")
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
return 2.0
def false_fn():
def inner_true_fn():
return x * y * 2.0
def inner_false_fn():
return x * 5.0
return cond_v2.cond_v2(
pred_inner, inner_true_fn, inner_false_fn, name="inner_cond")
cond_outer = cond_v2.cond_v2(
pred_outer, true_fn, false_fn, name="outer_cond")
# Compute grads inside a tf function.
@def_function.function
def nesting_fn():
return gradients_impl.gradients(cond_outer, [x, y])
grads = nesting_fn()
return grads, pred_outer, pred_inner
with ops.Graph().as_default():
grads, pred_outer, pred_inner = build_graph()
with self.session(graph=ops.get_default_graph()) as sess:
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: True,
pred_inner: True
}), [0., 0.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: True,
pred_inner: False
}), [0., 0.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: False,
pred_inner: True
}), [4., 2.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: False,
pred_inner: False
}), [5., 0.])
def testGradientFromInsideNestedFunction(self):
def build_graph():
pred_outer = array_ops.placeholder(dtypes.bool, name="pred_outer")
pred_inner = array_ops.placeholder(dtypes.bool, name="pred_inner")
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
def true_fn():
return 2.0
def false_fn():
def inner_true_fn():
return x * y * 2.0
def inner_false_fn():
return x * 5.0
return cond_v2.cond_v2(
pred_inner, inner_true_fn, inner_false_fn, name="inner_cond")
cond_outer = cond_v2.cond_v2(
pred_outer, true_fn, false_fn, name="outer_cond")
# Compute grads inside a tf function.
@def_function.function
def nesting_fn():
@def_function.function
def inner_nesting_fn():
return gradients_impl.gradients(cond_outer, [x, y])
return inner_nesting_fn()
grads = nesting_fn()
return grads, pred_outer, pred_inner
with ops.Graph().as_default():
grads, pred_outer, pred_inner = build_graph()
with self.session(graph=ops.get_default_graph()) as sess:
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: True,
pred_inner: True
}), [0., 0.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: True,
pred_inner: False
}), [0., 0.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: False,
pred_inner: True
}), [4., 2.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: False,
pred_inner: False
}), [5., 0.])
def testBuildCondAndGradientInsideFunction(self):
def build_graph():
pred_outer = array_ops.placeholder(dtypes.bool, name="pred_outer")
pred_inner = array_ops.placeholder(dtypes.bool, name="pred_inner")
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(2.0, name="y")
# Build cond and its gradient inside a tf function.
@def_function.function
def fn():
def true_fn():
return 2.0
def false_fn():
def inner_true_fn():
return x * y * 2.0
def inner_false_fn():
return x * 5.0
return cond_v2.cond_v2(
pred_inner, inner_true_fn, inner_false_fn, name="inner_cond")
cond_outer = cond_v2.cond_v2(
pred_outer, true_fn, false_fn, name="outer_cond")
return gradients_impl.gradients(cond_outer, [x, y])
grads = fn()
return grads, pred_outer, pred_inner
with ops.Graph().as_default(), self.session(
graph=ops.get_default_graph()) as sess:
grads, pred_outer, pred_inner = build_graph()
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: True,
pred_inner: True
}), [0., 0.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: True,
pred_inner: False
}), [0., 0.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: False,
pred_inner: True
}), [4., 2.])
self.assertSequenceEqual(
sess.run(grads, {
pred_outer: False,
pred_inner: False
}), [5., 0.])
@test_util.run_deprecated_v1
def testSecondDerivative(self):
with self.cached_session() as sess:
pred = array_ops.placeholder(dtypes.bool, name="pred")
x = constant_op.constant(3.0, name="x")
def true_fn():
return math_ops.pow(x, 3)
def false_fn():
return x
cond = cond_v2.cond_v2(pred, true_fn, false_fn, name="cond")
cond_grad = gradients_impl.gradients(cond, [x])
cond_grad_grad = gradients_impl.gradients(cond_grad, [x])
# d[x^3]/dx = 3x^2
true_val = sess.run(cond_grad, {pred: True})
self.assertEqual(true_val, [27.0])
# d[x]/dx = 1
false_val = sess.run(cond_grad, {pred: False})
self.assertEqual(false_val, [1.0])
true_val = sess.run(cond_grad_grad, {pred: True})
# d2[x^3]/dx2 = 6x
self.assertEqual(true_val, [18.0])
false_val = sess.run(cond_grad_grad, {pred: False})
# d2[x]/dx2 = 0
self.assertEqual(false_val, [0.0])
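The expected values in `testSecondDerivative` are plain calculus (d[x^3]/dx = 3x^2 and d2[x^3]/dx2 = 6x, giving 27 and 18 at x = 3). They can be sanity-checked numerically without TensorFlow using a central finite difference:

```python
def second_derivative(f, x, h=1e-4):
    # Central finite difference: f''(x) ~ (f(x+h) - 2f(x) + f(x-h)) / h^2
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)


cube = lambda v: v ** 3
identity = lambda v: v

# d2[x^3]/dx2 = 6x -> 18.0 at x = 3; d2[x]/dx2 = 0
assert abs(second_derivative(cube, 3.0) - 18.0) < 1e-3
assert abs(second_derivative(identity, 3.0)) < 1e-3
```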
def testGradientOfDeserializedCond(self):
with ops.Graph().as_default():
pred = array_ops.placeholder(dtypes.bool, name="pred")
x = constant_op.constant(3.0, name="x")
ops.add_to_collection("x", x)
def true_fn():
return math_ops.pow(x, 3)
def false_fn():
return x
ops.add_to_collection("pred", pred)
cond = cond_v2.cond_v2(pred, true_fn, false_fn, name="cond")
ops.add_to_collection("cond", cond)
meta_graph = saver.export_meta_graph()
with ops.Graph().as_default() as g:
with self.session(graph=g) as sess:
saver.import_meta_graph(meta_graph)
x = ops.get_collection("x")[0]
pred = ops.get_collection("pred")[0]
cond = ops.get_collection("cond")
cond_grad = gradients_impl.gradients(cond, [x], name="cond_grad")
cond_grad_grad = gradients_impl.gradients(
cond_grad, [x], name="cond_grad_grad")
# d[x^3]/dx = 3x^2
true_val = sess.run(cond_grad, {pred: True})
self.assertEqual(true_val, [27.0])
# d[x]/dx = 1
false_val = sess.run(cond_grad, {pred: False})
self.assertEqual(false_val, [1.0])
true_val = sess.run(cond_grad_grad, {pred: True})
# d2[x^3]/dx2 = 6x
self.assertEqual(true_val, [18.0])
false_val = sess.run(cond_grad_grad, {pred: False})
# d2[x]/dx2 = 0
self.assertEqual(false_val, [0.0])
@test_util.run_deprecated_v1
def testFuncCond(self):
@def_function.function
def fn_with_cond():
cond_v2.cond_v2(
constant_op.constant(True),
lambda: array_ops.zeros([]),
lambda: array_ops.ones([]),
name="cond_1")
return cond_v2.cond_v2(
constant_op.constant(False),
lambda: array_ops.zeros([]),
lambda: array_ops.ones([]),
name="cond_2")
concrete_fn = fn_with_cond.get_concrete_function()
cond_1 = concrete_fn.graph.get_operation_by_name("cond_1")
cond_2 = concrete_fn.graph.get_operation_by_name("cond_2")
# Verify that all functional ops are stateless and cond_2 does not have
# any control inputs.
self.assertEqual(cond_1.type, "StatelessIf")
self.assertEqual(cond_2.type, "StatelessIf")
self.assertLen(cond_2.control_inputs, 0)
fn_output = concrete_fn()
self.assertEqual(fn_output.op.type, "PartitionedCall")
self.assertAllEqual(fn_output, 1.0)
@test_util.run_deprecated_v1
def testFuncCondFunc(self):
@def_function.function
def fn_with_cond():
cond_v2.cond_v2(
constant_op.constant(True),
lambda: constant_op.constant(1.),
lambda: constant_op.constant(2.),
name="cond_1")
@def_function.function
def true_branch():
return constant_op.constant(3.)
return cond_v2.cond_v2(
constant_op.constant(True),
true_branch,
lambda: constant_op.constant(4.),
name="cond_2")
concrete_fn = fn_with_cond.get_concrete_function()
cond_1 = concrete_fn.graph.get_operation_by_name("cond_1")
cond_2 = concrete_fn.graph.get_operation_by_name("cond_2")
# Verify that all functional ops are stateless and cond_2 does not have
# any control inputs.
self.assertEqual(cond_1.type, "StatelessIf")
self.assertEqual(cond_2.type, "StatelessIf")
self.assertLen(cond_2.control_inputs, 0)
cond_2_true_graph, _ = cond_v2.get_func_graphs(cond_2)
cond_2_true_graph_operations = cond_2_true_graph.get_operations()
self.assertEmpty([
op for op in cond_2_true_graph_operations
if op.type == "StatefulPartitionedCall"
])
self.assertLen([
op for op in cond_2_true_graph_operations
if op.type == "PartitionedCall"
], 1)
fn_output = concrete_fn()
self.assertEqual(fn_output.op.type, "PartitionedCall")
self.assertAllEqual(fn_output, 3.0)
@test_util.run_deprecated_v1
def testFuncCondWithVariable(self):
v1 = variables.Variable(2.)
v2 = variables.Variable(4.)
self.evaluate(variables.global_variables_initializer())
def update_v1():
v1.assign(v1)
return v1
def update_v2():
v2.assign(v2)
return v2
@def_function.function
def fn_with_cond():
cond_v2.cond_v2(
constant_op.constant(True),
update_v1,
lambda: constant_op.constant(0.),
name="cond_1")
cond_2 = cond_v2.cond_v2(
constant_op.constant(False),
lambda: constant_op.constant(0.),
update_v1,
name="cond_2")
cond_v2.cond_v2(
constant_op.constant(True),
update_v2,
lambda: constant_op.constant(0.),
name="cond_3")
cond_4 = cond_v2.cond_v2(
constant_op.constant(False),
lambda: constant_op.constant(0.),
lambda: v2,
name="cond_4")
stateless_cond = cond_v2.cond_v2(
constant_op.constant(False),
lambda: constant_op.constant(5.),
lambda: constant_op.constant(6.),
name="stateless_cond")
return cond_2, cond_4, stateless_cond
concrete_fn = fn_with_cond.get_concrete_function()
cond_1 = concrete_fn.graph.get_operation_by_name("cond_1")
cond_2 = concrete_fn.graph.get_operation_by_name("cond_2")
cond_3 = concrete_fn.graph.get_operation_by_name("cond_3")
cond_4 = concrete_fn.graph.get_operation_by_name("cond_4")
stateless_cond = concrete_fn.graph.get_operation_by_name("stateless_cond")
self.assertEqual(cond_1.type, "If")
self.assertEqual(cond_2.type, "If")
self.assertEqual(cond_3.type, "If")
self.assertEqual(cond_4.type, "If")
self.assertEqual(stateless_cond.type, "StatelessIf")
self.assertEmpty(cond_1.control_inputs)
self.assertLen(cond_2.control_inputs, 1)
self.assertIs(cond_2.control_inputs[0], cond_1)
self.assertEmpty(cond_3.control_inputs)
self.assertLen(cond_4.control_inputs, 1)
self.assertIs(cond_4.control_inputs[0], cond_3)
# Does not touch any variable so should not have any control inputs.
self.assertEmpty(stateless_cond.control_inputs)
fn_output = concrete_fn()
self.assertEqual(fn_output[0].op.type, "StatefulPartitionedCall")
self.assertAllEqual(self.evaluate(fn_output), [2.0, 4.0, 6.0])
@test_util.run_deprecated_v1
def testFuncCondFuncWithVariable(self):
v1 = variables.Variable(2.)
v2 = variables.Variable(4.)
self.evaluate(variables.global_variables_initializer())
@def_function.function
def fn_with_cond():
def update_v1():
v1.assign(v1)
return v1
def update_v2():
v2.assign(v2)
return v2
cond_v2.cond_v2(
constant_op.constant(True),
update_v1,
lambda: constant_op.constant(0.),
name="cond_1")
cond_2 = cond_v2.cond_v2(
constant_op.constant(False),
lambda: constant_op.constant(0.),
update_v1,
name="cond_2")
cond_v2.cond_v2(
constant_op.constant(True),
update_v2,
lambda: constant_op.constant(0.),
name="cond_3")
@def_function.function
def cond_4_false_branch():
v2.assign(v2)
return v2
cond_4 = cond_v2.cond_v2(
constant_op.constant(False),
lambda: constant_op.constant(0.),
cond_4_false_branch,
name="cond_4")
return cond_2, cond_4
concrete_fn = fn_with_cond.get_concrete_function()
cond_1 = concrete_fn.graph.get_operation_by_name("cond_1")
cond_2 = concrete_fn.graph.get_operation_by_name("cond_2")
cond_3 = concrete_fn.graph.get_operation_by_name("cond_3")
cond_4 = concrete_fn.graph.get_operation_by_name("cond_4")
self.assertEqual(cond_1.type, "If")
self.assertEqual(cond_2.type, "If")
self.assertEqual(cond_3.type, "If")
self.assertEqual(cond_4.type, "If")
self.assertEmpty(cond_1.control_inputs)
self.assertLen(cond_2.control_inputs, 1)
self.assertIs(cond_2.control_inputs[0], cond_1)
self.assertEmpty(cond_3.control_inputs)
self.assertLen(cond_4.control_inputs, 1)
self.assertIs(cond_4.control_inputs[0], cond_3)
_, cond_4_false_graph = cond_v2.get_func_graphs(cond_4)
cond_4_false_graph_operations = cond_4_false_graph.get_operations()
self.assertEmpty([
op for op in cond_4_false_graph_operations
if op.type == "PartitionedCall"
])
self.assertLen([
op for op in cond_4_false_graph_operations
if op.type == "StatefulPartitionedCall"
], 1)
fn_output = concrete_fn()
self.assertEqual(fn_output[0].op.type, "StatefulPartitionedCall")
self.assertAllEqual(self.evaluate(fn_output), [2.0, 4.0])
def testGradientTapeOfCondWithResourceVariableInFunction(self):
v = variables.Variable(2.)
@def_function.function
def fn_with_cond():
with backprop.GradientTape() as tape:
pred = constant_op.constant(True, dtype=dtypes.bool)
def true_fn():
return math_ops.pow(v, 3)
def false_fn():
return v
cond = cond_v2.cond_v2(pred, true_fn, false_fn, name="cond")
return tape.gradient(cond, v)
self.assertAllEqual(fn_with_cond(), 12.0)
def _CheckIteratedCosGradients(self, func):
def _grad(f):
def _grad_function(primal):
with backprop.GradientTape() as tape:
tape.watch(primal)
primal_out = f(primal)
return tape.gradient(primal_out, primal)
return _grad_function
f = func
one = constant_op.constant(1.)
for expected in [math_ops.cos,
lambda x: -math_ops.sin(x),
lambda x: -math_ops.cos(x),
math_ops.sin,
math_ops.cos]:
self.assertAllClose(expected(one), def_function.function(f)(one))
f = _grad(f)
def testIteratedGradientsCond(self):
def _func(x):
return cond_v2.cond_v2(
constant_op.constant(True),
lambda: math_ops.cos(array_ops.identity(x)),
lambda: math_ops.sin(array_ops.identity(x)))
self._CheckIteratedCosGradients(_func)
def testIteratedGradientsCase(self):
def _func(x):
return cond_v2.indexed_case(
constant_op.constant(1),
[lambda: math_ops.sin(array_ops.identity(x)),
lambda: math_ops.cos(array_ops.identity(x))])
self._CheckIteratedCosGradients(_func)
def testLowering(self):
with ops.Graph().as_default() as g:
      # Disable the loop_optimizer grappler pass for this test because it
      # replaces Switch with Identity when it's part of a dead branch.
config = config_pb2.ConfigProto()
config.graph_options.rewrite_options.loop_optimization = (
rewriter_config_pb2.RewriterConfig.OFF)
with self.session(graph=g, config=config) as sess:
cond_output, _ = self._createCond("cond")
run_options = config_pb2.RunOptions(output_partition_graphs=True)
run_metadata = config_pb2.RunMetadata()
sess.run(cond_output, options=run_options, run_metadata=run_metadata)
# If lowering was enabled, there should be a `Switch` node
self.assertTrue(
_has_node_with_op(run_metadata, "Switch"),
"A `Switch` op should exist if the graph was lowered.")
# If lowering was enabled, there should be no `If` node
self.assertFalse(
_has_node_with_op(run_metadata, "StatelessIf"),
"An `If` op was found, but it should be lowered.")
@test_util.run_deprecated_v1
def testLoweringDisabledInXLA(self):
with self.session(graph=ops.Graph()) as sess:
# Build the cond_v2 in an XLA context
xla_context = control_flow_ops.XLAControlFlowContext()
xla_context.Enter()
cond_output, cond_op = self._createCond("cond")
xla_context.Exit()
# Check lowering attr is not set.
with self.assertRaises(ValueError):
cond_op.get_attr("_lower_using_switch_merge")
# Check the actual graph that is run.
run_options = config_pb2.RunOptions(output_partition_graphs=True)
run_metadata = config_pb2.RunMetadata()
sess.run(cond_output, options=run_options, run_metadata=run_metadata)
# Lowering disabled in XLA, there should be no `Switch` node
self.assertFalse(
_has_node_with_op(run_metadata, "Switch"),
"A `Switch` op exists, but the graph should not be lowered.")
if test_util.is_xla_enabled():
# If XLA is actually enabled then we expect the StatelessIf to have been
# put inside an XLA cluster.
self.assertFalse(
_has_node_with_op(run_metadata, "StatelessIf"),
("A `StatelessIf` op was found, but the node should have been " +
"clustered."))
self.assertTrue(
_has_node_with_op(run_metadata, "_XlaCompile"),
("An `_XlaCompile` op was not found, but the `StatelessIf` (at " +
"least) op should have been clustered."))
self.assertTrue(
_has_node_with_op(run_metadata, "_XlaRun"),
("An `_XlaRun` op was not found, but the `StatelessIf` (at " +
"least) op should have been clustered."))
else:
# Lowering disabled in XLA, there should still be an `If` node
self.assertTrue(
_has_node_with_op(run_metadata, "StatelessIf"),
("A `StatelessIf` op was not found, but the graph should not be " +
"lowered."))
@test_util.run_deprecated_v1
def testNestedLoweringDisabledInXLA(self):
# Build the cond_v2 in an XLA context
xla_context = control_flow_ops.XLAControlFlowContext()
xla_context.Enter()
_, cond_op = self._createNestedCond("cond")
xla_context.Exit()
# Check lowering attr is not set for either If node.
with self.assertRaises(ValueError):
cond_op.get_attr("_lower_using_switch_merge")
nested_if_ops = []
for func in ops.get_default_graph()._functions.values():
nested_if_ops.extend(
op for op in func.graph.get_operations() if op.type == "StatelessIf")
self.assertEqual(len(nested_if_ops), 1)
with self.assertRaises(ValueError):
nested_if_ops[0].get_attr("_lower_using_switch_merge")
# TODO(skyewm): check the actual graphs that are run once we have a way to
# programmatically access those graphs.
@test_util.run_deprecated_v1
def testLoweringDisabledWithFastCond(self):
with self.session(graph=ops.Graph()) as sess:
      # Build the cond_v2 with use_fast_cond enabled (no XLA context needed).
      cond_output, cond_op = self._createCond("cond", use_fast_cond=True)
# Check lowering attr is not set.
lowering_attr = cond_op.get_attr("_lower_using_switch_merge")
self.assertFalse(
lowering_attr, f"Lowering attr is not set as expected. {cond_op}"
)
# Check the actual graph that is run.
run_options = config_pb2.RunOptions(output_partition_graphs=True)
run_metadata = config_pb2.RunMetadata()
sess.run(cond_output, options=run_options, run_metadata=run_metadata)
# Lowering disabled for `fast_cond_v2`, there should be no `Switch` node
self.assertFalse(
_has_node_with_op(run_metadata, "Switch"),
"A `Switch` op exists, but the graph should not be lowered.",
)
if test_util.is_xla_enabled():
# If XLA is actually enabled then we expect the StatelessIf to have been
# put inside an XLA cluster.
self.assertFalse(
_has_node_with_op(run_metadata, "StatelessIf"),
(
"A `StatelessIf` op was found, but the node should have been "
+ "clustered."
),
)
self.assertTrue(
_has_node_with_op(run_metadata, "_XlaCompile"),
(
"An `_XlaCompile` op was not found, but the `StatelessIf` (at "
+ "least) op should have been clustered."
),
)
self.assertTrue(
_has_node_with_op(run_metadata, "_XlaRun"),
(
"An `_XlaRun` op was not found, but the `StatelessIf` (at "
+ "least) op should have been clustered."
),
)
else:
# Lowering disabled for `fast_cond_v2`, there should still be a
# `StatelessIf` node.
self.assertTrue(
_has_node_with_op(run_metadata, "StatelessIf"),
(
"A `StatelessIf` op was not found, but the graph should not be "
+ "lowered."
+ str(run_metadata)
),
)
# b/131355614
@test_util.run_deprecated_v1
def testNoOptionalsInXla(self):
@def_function.function
def func_with_cond():
pred = constant_op.constant(True, name="pred")
x = constant_op.constant(1.0, name="x")
def true_fn():
intermediate = x + 1
return intermediate * x
def false_fn():
return x + 1
output = cond_v2.cond_v2(pred, true_fn, false_fn)
grad = gradients_impl.gradients(output, x)[0]
forward_if_op = output.op.inputs[0].op
gradient_if_op = grad.op.inputs[0].op
def verify_no_optional_ops(op, branch_name):
branch_function = ops.get_default_graph()._get_function(
op.get_attr(branch_name).name)
function_def = branch_function.cached_definition
for node_def in function_def.node_def:
self.assertNotIn(node_def.op, _OPTIONAL_OPS)
verify_no_optional_ops(forward_if_op, "then_branch")
verify_no_optional_ops(forward_if_op, "else_branch")
verify_no_optional_ops(gradient_if_op, "then_branch")
verify_no_optional_ops(gradient_if_op, "else_branch")
return grad
xla_context = control_flow_ops.XLAControlFlowContext()
xla_context.Enter()
func_with_cond()
xla_context.Exit()
@test_util.run_deprecated_v1
def testLoweringDisabledWithSingleThreadedExecutorContext(self):
# Single threaded executor does not support partitioned graphs, so we can't
# run on GPUs (running on GPU requires a mixed CPU/GPU graph).
with self.session(graph=ops.Graph(), use_gpu=False) as sess:
@def_function.function
def _add_cond(x):
return cond_v2.cond_v2(
constant_op.constant(True, name="pred"),
lambda: x,
lambda: x + 1)
x = array_ops.placeholder(shape=None, dtype=dtypes.float32)
with context.function_executor_type("SINGLE_THREADED_EXECUTOR"):
out_cond = _add_cond(x)
# The fact that sess.run() succeeds means lowering is disabled, because
# the single threaded executor does not support cond v1 ops.
sess.run(out_cond, feed_dict={x: 1.0})
@test_util.enable_control_flow_v2
def testStructuredOutputs(self):
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(3.0, name="y")
def true_fn():
return ((x * y,), y)
def false_fn():
return ((x,), y * 3.0)
output = tf_cond.cond(
constant_op.constant(False), true_fn, false_fn)
self.assertEqual(self.evaluate(output[0][0]), 1.)
self.assertEqual(self.evaluate(output[1]), 9.)
@test_util.enable_control_flow_v2
@test_util.run_deprecated_v1
def testRaisesOutputStructuresMismatch(self):
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(3.0, name="y")
def true_fn():
return x * y, y
def false_fn():
return ((x,), y * 3.0)
with self.assertRaisesRegex(
TypeError, "true_fn and false_fn arguments to tf.cond must have the "
"same number, type, and overall structure of return values."):
tf_cond.cond(constant_op.constant(False), true_fn, false_fn)
@test_util.enable_control_flow_v2
def testCondAndTensorArray(self):
x = math_ops.range(-5, 5)
output = tensor_array_ops.TensorArray(dtype=dtypes.int32, size=x.shape[0])
def loop_body(i, output):
def if_true():
return output.write(i, x[i]**2)
def if_false():
return output.write(i, x[i])
output = tf_cond.cond(x[i] > 0, if_true, if_false)
return i + 1, output
_, output = while_loop.while_loop(
lambda i, arr: i < x.shape[0],
loop_body,
loop_vars=(constant_op.constant(0), output))
output_t = output.stack()
self.assertAllEqual(
self.evaluate(output_t), [-5, -4, -3, -2, -1, 0, 1, 4, 9, 16])
@test_util.enable_control_flow_v2
def testCondAndTensorArrayInFunction(self):
@def_function.function
def f():
x = math_ops.range(-5, 5)
output = tensor_array_ops.TensorArray(dtype=dtypes.int32, size=x.shape[0])
def loop_body(i, output):
def if_true():
return output.write(i, x[i]**2)
def if_false():
return output.write(i, x[i])
output = tf_cond.cond(x[i] > 0, if_true, if_false)
return i + 1, output
_, output = while_loop.while_loop(
lambda i, arr: i < x.shape[0],
loop_body,
loop_vars=(constant_op.constant(0), output))
return output.stack()
output_t = f()
self.assertAllEqual(output_t, [-5, -4, -3, -2, -1, 0, 1, 4, 9, 16])
@test_util.run_deprecated_v1
def testForwardPassRewrite(self):
x = constant_op.constant(1.0, name="x")
y = constant_op.constant(1.0, name="y")
def true_fn():
y_plus_one = y + 1.
return x * y_plus_one
output = cond_v2.cond_v2(constant_op.constant(True), true_fn, lambda: x)
if_op = output.op.inputs[0].op
self.assertEqual(if_op.type, "StatelessIf")
# pylint: disable=g-deprecated-assert
self.assertEqual(len(if_op.outputs), 1)
gradients_impl.gradients(output, x)
# if_op should have been rewritten to output `y_plus_one`.
self.assertEqual(len(if_op.outputs), 2)
gradients_impl.gradients(output, x)
# Computing the gradient again shouldn't rewrite if_op again.
self.assertEqual(len(if_op.outputs), 2)
# pylint: enable=g-deprecated-assert
@test_util.run_deprecated_v1
def testDoNotAccumulateConstants(self):
x = constant_op.constant(1.0, name="x")
output = cond_v2.cond_v2(
constant_op.constant(True), lambda: x * 2.0, lambda: x)
if_op = output.op.inputs[0].op
self.assertEqual(if_op.type, "StatelessIf")
# pylint: disable=g-deprecated-assert
self.assertEqual(len(if_op.outputs), 1)
gradients_impl.gradients(output, x)
    # Number of outputs does not change because
    # 1. `x` is a loop input so does not need to be accumulated.
    # 2. 2.0 is a constant so it is not accumulated.
self.assertEqual(len(if_op.outputs), 1)
gradients_impl.gradients(output, x)
# Computing the gradient again shouldn't rewrite if_op again.
self.assertEqual(len(if_op.outputs), 1)
# pylint: enable=g-deprecated-assert
def testIsControlFlowGraph(self):
x = constant_op.constant(1.0, name="x")
@def_function.function
def f(c):
def then_branch():
i = x + 1
self.assertTrue(i.graph.is_control_flow_graph)
return i
def else_branch():
i = x + 1
self.assertTrue(i.graph.is_control_flow_graph)
return i
return cond_v2.cond_v2(c, then_branch, else_branch)
i = f(constant_op.constant(True))
self.assertEqual(self.evaluate(i), 2.0)
i = f(constant_op.constant(False))
self.assertEqual(self.evaluate(i), 2.0)
def testGradientOfMixedOptionals(self):
@def_function.function
def f(c):
x = constant_op.constant(1., name="x")
def then_branch():
return x**2.0, gen_optional_ops.optional_from_value(
[constant_op.constant(1)]
)
def else_branch():
return x**3.0, gen_optional_ops.optional_from_value(
[constant_op.constant(1.0)]
)
y, _ = cond_v2.cond_v2(c, then_branch, else_branch)
return gradients_impl.gradients(y, x)
self.assertAllClose([2.], f(constant_op.constant(True)))
| CondV2Test |
python | ray-project__ray | java/test/src/main/resources/test_cross_language_invocation.py | {
"start": 3329,
"end": 3541
} | class ____(object):
def __init__(self, value):
self.value = int(value)
def increase(self, delta):
self.value += int(delta)
return str(self.value).encode("utf-8")
@ray.remote
| Counter |
python | apache__airflow | airflow-core/tests/unit/utils/test_process_utils.py | {
"start": 5013,
"end": 6484
} | class ____:
def test_should_kill_process(self):
before_num_process = subprocess.check_output(["ps", "-ax", "-o", "pid="]).decode().count("\n")
process = multiprocessing.Process(target=my_sleep_subprocess, args=())
process.start()
sleep(0)
num_process = subprocess.check_output(["ps", "-ax", "-o", "pid="]).decode().count("\n")
assert before_num_process + 1 == num_process
process_utils.kill_child_processes_by_pids([process.pid])
num_process = subprocess.check_output(["ps", "-ax", "-o", "pid="]).decode().count("\n")
assert before_num_process == num_process
def test_should_force_kill_process(self, caplog):
process = multiprocessing.Process(target=my_sleep_subprocess_with_signals, args=())
process.start()
sleep(0)
all_processes = subprocess.check_output(["ps", "-ax", "-o", "pid="]).decode().splitlines()
assert str(process.pid) in (x.strip() for x in all_processes)
with caplog.at_level(logging.INFO, logger=process_utils.log.name):
caplog.clear()
process_utils.kill_child_processes_by_pids([process.pid], timeout=0)
assert f"Killing child PID: {process.pid}" in caplog.messages
sleep(0)
all_processes = subprocess.check_output(["ps", "-ax", "-o", "pid="]).decode().splitlines()
assert str(process.pid) not in (x.strip() for x in all_processes)
| TestKillChildProcessesByPids |
python | getsentry__sentry | src/sentry/api/invite_helper.py | {
"start": 1821,
"end": 9715
} | class ____:
@classmethod
def from_session_or_email(
cls,
request: HttpRequest,
organization_id: int,
email: str,
logger: Logger | None = None,
) -> ApiInviteHelper | None:
"""
Initializes the ApiInviteHelper by locating the pending organization
member via the currently set pending invite details in the session, or
via the passed email if no cookie is currently set.
"""
invite_details = get_invite_details(request)
# Came from a different organization.
if (
invite_details.invite_organization_id is not None
and invite_details.invite_organization_id != organization_id
):
invite_details = InviteDetails(None, None, None)
invite = None
if invite_details.invite_token and invite_details.invite_member_id:
invite = organization_service.get_invite_by_id(
organization_id=organization_id,
organization_member_id=invite_details.invite_member_id,
user_id=request.user.id,
)
else:
invite = organization_service.get_invite_by_id(
organization_id=organization_id, email=email, user_id=request.user.id
)
if invite is None:
# Unable to locate the pending organization member. Cannot setup
# the invite helper.
return None
return cls(
request=request,
invite_context=invite,
token=invite_details.invite_token,
logger=logger,
)
@classmethod
def from_session(
cls,
request: HttpRequest,
logger: Logger | None = None,
) -> ApiInviteHelper | None:
invite_details = get_invite_details(request)
if not invite_details.invite_token or not invite_details.invite_member_id:
return None
invite_context = organization_service.get_invite_by_id(
organization_member_id=invite_details.invite_member_id,
organization_id=invite_details.invite_organization_id,
user_id=request.user.id,
)
if invite_context is None:
if logger:
logger.exception("Invalid pending invite cookie")
return None
api_invite_helper = ApiInviteHelper(
request=request,
invite_context=invite_context,
token=invite_details.invite_token,
logger=logger,
)
return api_invite_helper
def __init__(
self,
request: HttpRequest,
invite_context: RpcUserInviteContext,
token: str | None,
logger: Logger | None = None,
) -> None:
self.request = request
self.token = token
self.logger = logger
self.invite_context = invite_context
def handle_member_already_exists(self) -> None:
if self.logger:
self.logger.info(
"Pending org invite not accepted - User already org member",
extra={
"organization_id": self.invite_context.organization.id,
"user_id": self.request.user.id,
},
)
def handle_member_has_no_sso(self) -> None:
if self.logger:
self.logger.info(
"Pending org invite not accepted - User did not have SSO",
extra={
"organization_id": self.invite_context.organization.id,
"user_id": self.request.user.id,
},
)
def handle_invite_not_approved(self) -> None:
if not self.invite_approved:
assert self.invite_context.member
organization_service.delete_organization_member(
organization_member_id=self.invite_context.member.id,
organization_id=self.invite_context.organization.id,
)
@property
def member_pending(self) -> bool:
assert self.invite_context.member
return self.invite_context.member.is_pending
@property
def invite_approved(self) -> bool:
assert self.invite_context.member
return self.invite_context.member.invite_approved
@property
def valid_token(self) -> bool:
if self.token is None:
return False
assert self.invite_context.member
if self.invite_context.member.token_expired:
return False
tokens_are_equal = constant_time_compare(
self.invite_context.member.token or self.invite_context.member.legacy_token,
self.token,
)
return tokens_are_equal
@property
def user_authenticated(self) -> bool:
return self.request.user.is_authenticated
@property
def member_already_exists(self) -> bool:
if not self.user_authenticated:
return False
return self.invite_context.user_id is not None
@property
def valid_request(self) -> bool:
return (
self.member_pending
and self.invite_approved
and self.valid_token
and self.user_authenticated
and not any(self.get_onboarding_steps().values())
)
def accept_invite(self, user: User | AnonymousUser) -> RpcOrganizationMember | None:
member = self.invite_context.member
assert member
if self.member_already_exists:
self.handle_member_already_exists()
if self.invite_context.invite_organization_member_id is not None:
organization_service.delete_organization_member(
organization_member_id=self.invite_context.invite_organization_member_id,
organization_id=self.invite_context.organization.id,
)
return None
try:
provider = AuthProvider.objects.get(organization_id=self.invite_context.organization.id)
except AuthProvider.DoesNotExist:
provider = None
# If SSO is required, check for valid AuthIdentity
if provider and not provider.flags.allow_unlinked:
# AuthIdentity has a unique constraint on provider and user
if not AuthIdentity.objects.filter(auth_provider=provider, user=user.id).exists():
self.handle_member_has_no_sso()
return None
new_om = organization_service.set_user_for_organization_member(
organization_member_id=member.id,
user_id=user.id,
organization_id=self.invite_context.organization.id,
)
if new_om:
self.invite_context.member = member = new_om
create_audit_entry(
self.request,
actor=user,
organization_id=self.invite_context.organization.id,
target_object=member.id,
target_user_id=user.id,
event=audit_log.get_event_id("MEMBER_ACCEPT"),
data=member.get_audit_log_metadata(),
)
metrics.incr("organization.invite-accepted", sample_rate=1.0)
organization_service.schedule_signal(
member_joined,
organization_id=member.organization_id,
args=dict(
user_id=member.user_id,
organization_member_id=member.id,
),
)
return member
def _needs_2fa(self) -> bool:
org_requires_2fa = self.invite_context.organization.flags.require_2fa
return org_requires_2fa and (
not self.request.user.is_authenticated or not self.request.user.has_2fa()
)
def get_onboarding_steps(self) -> dict[str, bool]:
return {
"needs2fa": self._needs_2fa(),
# needs email verification is being removed
"needsEmailVerification": False,
}
| ApiInviteHelper |
python | tensorflow__tensorflow | tensorflow/dtensor/python/tests/input_util_test.py | {
"start": 1894,
"end": 19286
} | class ____(test_util.DTensorBaseTest):
def setUp(self):
super().setUp()
self._num_devices = MESH_SIZE_BATCH * MESH_SIZE_HEIGHT * MESH_SIZE_WIDTH
self.mesh = mesh_util.create_mesh(
devices=['CPU:%d' % i for i in range(self._num_devices)],
mesh_dims=[(MESH_DIM_BATCH, MESH_SIZE_BATCH),
(MESH_DIM_HEIGHT, MESH_SIZE_HEIGHT),
(MESH_DIM_WIDTH, MESH_SIZE_WIDTH)])
self.mesh = self.configTestMesh({'CPU': self.mesh})
self.images = self._images([8, 8, 3])
self.labels = self._labels([1])
def _images(self, shape):
return stateless_random_ops.stateless_random_uniform(
shape, seed=(1, 2), minval=0, maxval=255)
def _labels(self, shape):
return stateless_random_ops.stateless_random_uniform(
shape, seed=(1, 2), minval=0, maxval=10, dtype=dtypes.float32)
def testIterableFailsWithUnknownShapeDatasetSpec(self):
def gen():
yield constant_op.constant([1, 2], dtype=dtypes.int32)
dataset = dataset_ops.DatasetV2.from_generator(
gen,
output_signature=tensor_spec.TensorSpec(
tensor_shape.TensorShape(None), dtype=dtypes.int32))
with self.assertRaisesRegex(
ValueError, 'Dataset element shape must have a valid rank'):
input_util.DTensorDataset(
dataset=dataset,
global_batch_size=8,
mesh=self.mesh,
layouts=Layout.replicated(self.mesh, rank=2))
def testIterMismatchedLayoutFails(self):
dataset = dataset_ops.DatasetV2.from_tensors(self.images).repeat()
# Mismatched rank-3 layout for rank-4 input (after batching)
images_layout = Layout(
[MESH_DIM_BATCH, MESH_DIM_HEIGHT, MESH_DIM_WIDTH], self.mesh)
with self.assertRaisesRegex(ValueError, 'Expected layout with rank 4'):
_ = input_util.DTensorDataset(
dataset=dataset,
global_batch_size=32,
mesh=self.mesh,
layouts=images_layout,
batch_dim=MESH_DIM_BATCH)
@parameterized.named_parameters(('Eager', False), ('Graph', True))
def testRangeIteration(self, is_graph):
batch_size = 8
num_batches = 4
dataset = dataset_ops.DatasetV2.from_tensor_slices(
self._images([batch_size * num_batches, 8, 8, 3]))
images_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=4)
d_dataset = input_util.DTensorDataset(
dataset=dataset,
global_batch_size=batch_size,
mesh=self.mesh,
layouts=images_layout,
batch_dim=MESH_DIM_BATCH)
def train(iterator, steps):
iters = 1
output = next(iterator)
for _ in math_ops.range(steps - 1):
output += next(iterator)
iters += 1
if not is_graph:
mesh_util.barrier(self.mesh)
return output, iters
train_fn = polymorphic_function.function(train) if is_graph else train
exception = errors_impl.OutOfRangeError if is_graph else StopIteration
iterator = iter(dataset.batch(batch_size, drop_remainder=True))
output, iters = train_fn(iterator, num_batches)
d_iterator = iter(d_dataset)
d_output, d_iters = train_fn(d_iterator, num_batches)
mesh_util.barrier(self.mesh)
# Try one more iteration which will raise an exception since the iterator is
# exhausted.
with self.assertRaises(exception):
if is_graph:
# FIXME(b/285884302): This flakily raises error
# "Cannot add 'while_cond' function, because a different function"
# Since num_batches is changed to 1, it retriggers SPMD expansion.
# Recreating polymorphic function to avoid running into the error.
train_fn = polymorphic_function.function(train)
train_fn(d_iterator, 1)
# In the graph case, we need to wait for the executor to finish all async
# calls after invoking the tf.function to ensure any pending error is
# raised.
mesh_util.barrier(self.mesh)
self.assertEqual(iters, d_iters)
self.assertDTensorEqual(output, images_layout, d_output)
@parameterized.named_parameters(('Eager', False), ('Graph', True))
def testForInIteration(self, is_graph):
batch_size = 8
num_batches = 4
dataset = dataset_ops.DatasetV2.from_tensor_slices(
self._images([batch_size * num_batches, 8, 8, 3]))
images_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=4)
d_dataset = input_util.DTensorDataset(
dataset=dataset,
global_batch_size=batch_size,
mesh=self.mesh,
layouts=images_layout,
batch_dim=MESH_DIM_BATCH)
def train(iterator):
iters = 1
output = next(iterator)
for img in iterator:
output += img
iters += 1
if not is_graph:
mesh_util.barrier(self.mesh)
return output, iters
train_fn = polymorphic_function.function(train) if is_graph else train
iterator = iter(dataset.batch(batch_size, drop_remainder=True))
output, iters = train_fn(iterator)
d_iterator = iter(d_dataset)
d_output, d_iters = train_fn(d_iterator)
self.assertEqual(iters, d_iters)
self.assertDTensorEqual(output, images_layout, d_output)
@parameterized.named_parameters(('Eager', False), ('Graph', True))
def testIterSingleInput(self, is_graph):
dataset = dataset_ops.DatasetV2.from_tensors(self.images).repeat()
batch_size = 32
images_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=4)
d_dataset = input_util.DTensorDataset(
dataset=dataset,
global_batch_size=batch_size,
mesh=self.mesh,
layouts=images_layout,
batch_dim=MESH_DIM_BATCH)
self.assertEqual(d_dataset.element_spec.shape, [batch_size, 8, 8, 3])
def train(iterator):
it = next(iterator)
return it
train_fn = polymorphic_function.function(train) if is_graph else train
d_iterator = iter(d_dataset)
self.assertEqual(d_iterator.element_spec.shape, [batch_size, 8, 8, 3])
d_images = train_fn(d_iterator)
mesh_util.barrier(self.mesh)
expected = next(iter(dataset.batch(batch_size, drop_remainder=True)))
mesh_util.barrier(self.mesh)
self.assertDTensorEqual(expected, images_layout, d_images)
@parameterized.named_parameters(('Eager', False), ('Graph', True))
def testIterTupleInputs(self, is_graph):
dataset = dataset_ops.DatasetV2.from_tensors(
(self.images, self.labels)
).repeat()
batch_size = 32
images_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=4)
labels_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=2)
layouts = (images_layout, labels_layout)
d_dataset = input_util.DTensorDataset(
dataset=dataset,
global_batch_size=batch_size,
mesh=self.mesh,
layouts=layouts,
batch_dim=MESH_DIM_BATCH)
def train(iterator):
return next(iterator)
train_fn = polymorphic_function.function(train) if is_graph else train
d_iterator = iter(d_dataset)
d_images, d_labels = train_fn(d_iterator)
expected_images, expected_labels = next(
iter(dataset.batch(batch_size, drop_remainder=True)))
self.assertDTensorEqual(expected_images, images_layout, d_images)
self.assertDTensorEqual(expected_labels, labels_layout, d_labels)
@parameterized.named_parameters(('Eager', False), ('Graph', True))
def testIterDictInputs(self, is_graph):
dataset = dataset_ops.DatasetV2.from_tensors({
'images': self.images,
'labels': self.labels,
}).repeat()
batch_size = 32
images_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=4)
labels_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=2)
layouts = {'images': images_layout, 'labels': labels_layout}
d_dataset = input_util.DTensorDataset(
dataset=dataset,
global_batch_size=batch_size,
mesh=self.mesh,
layouts=layouts,
batch_dim=MESH_DIM_BATCH)
def train(iterator):
return next(iterator)
train_fn = polymorphic_function.function(train) if is_graph else train
d_iterator = iter(d_dataset)
d_element = train_fn(d_iterator)
expected = next(iter(dataset.batch(batch_size, drop_remainder=True)))
self.assertDTensorEqual(expected['images'], images_layout,
d_element['images'])
self.assertDTensorEqual(expected['labels'], labels_layout,
d_element['labels'])
@parameterized.named_parameters(('Eager', False), ('Graph', True))
def testIterOnBatchedDataset(self, is_graph):
dataset = dataset_ops.DatasetV2.from_tensors({
'images': self.images,
'labels': self.labels,
}).repeat()
images_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=4)
labels_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=2)
layouts = {'images': images_layout, 'labels': labels_layout}
global_batch_size = 32
per_replica_batch_size = global_batch_size // MESH_SIZE_BATCH
batched_dataset = dataset.batch(per_replica_batch_size, drop_remainder=True)
d_dataset = input_util.DTensorDataset(
dataset=batched_dataset,
global_batch_size=global_batch_size,
dataset_already_batched=True,
mesh=self.mesh,
layouts=layouts,
batch_dim=MESH_DIM_BATCH)
def train(iterator):
return next(iterator)
train_fn = polymorphic_function.function(train) if is_graph else train
d_iterator = iter(d_dataset)
d_element = train_fn(d_iterator)
expected = next(iter(dataset.batch(global_batch_size, drop_remainder=True)))
self.assertDTensorEqual(expected['images'], images_layout,
d_element['images'])
self.assertDTensorEqual(expected['labels'], labels_layout,
d_element['labels'])
def testIterOnBatchedDatasetFailsOnIncorrectBatchSize(self):
dataset = dataset_ops.DatasetV2.from_tensors({
'images': self.images,
'labels': self.labels,
}).repeat()
images_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=4)
labels_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=2)
layouts = {'images': images_layout, 'labels': labels_layout}
global_batch_size = 32
per_replica_batch_size = 16 # correct value would be: 32 // 4 = 8
batched_dataset = dataset.batch(
per_replica_batch_size, drop_remainder=True)
with self.assertRaisesRegex(
ValueError,
('per_replica_batch_size does not matched expected size based on the '
'mesh, got 16 but expected 8.')):
_ = input_util.DTensorDataset(
dataset=batched_dataset,
global_batch_size=global_batch_size,
dataset_already_batched=True,
mesh=self.mesh,
layouts=layouts,
batch_dim=MESH_DIM_BATCH)
def testIterOnBatchedDatasetFailsNoDropLastBatch(self):
dataset = dataset_ops.DatasetV2.from_tensors({
'images': self.images,
'labels': self.labels,
}).repeat()
images_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=4)
labels_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=2)
layouts = {'images': images_layout, 'labels': labels_layout}
global_batch_size = 32
per_replica_batch_size = global_batch_size // MESH_SIZE_BATCH
batched_dataset = dataset.batch(
per_replica_batch_size, drop_remainder=False)
with self.assertRaisesRegex(
ValueError, 'Ensure drop_remainder=True when batching the dataset.'):
_ = input_util.DTensorDataset(
dataset=batched_dataset,
global_batch_size=global_batch_size,
dataset_already_batched=True,
mesh=self.mesh,
layouts=layouts,
batch_dim=MESH_DIM_BATCH)
@parameterized.named_parameters(('Disabled', False), ('Enabled', True))
def testIterPrefetch(self, prefetch):
condition = threading.Condition()
counter = variables.Variable(0)
def count(x):
counter.assign_add(1)
return x
num_batches = 8
batch_size = 4
total_elems = num_batches * batch_size
prefetch_buffer_size = 2 if prefetch else 0
inputs = np.arange(total_elems)
dataset = dataset_ops.DatasetV2.from_tensor_slices(inputs)
dataset = dataset.map(count)
inputs_layout = Layout.batch_sharded(
self.mesh, batch_dim=MESH_DIM_BATCH, rank=1)
d_dataset = input_util.DTensorDataset(
dataset=dataset,
global_batch_size=batch_size,
mesh=self.mesh,
layouts=inputs_layout,
batch_dim=MESH_DIM_BATCH,
prefetch=prefetch_buffer_size if prefetch else None)
# Check nothing was prefetched before iterators were created.
self.assertEqual(counter.numpy(), 0)
# Check nothing was prefetched before the first iteration.
d_iterator = iter(d_dataset)
self.assertEqual(counter.numpy(), 0)
# The number of elements that are expected to be fetched in each iteration.
multiple = batch_size * (self.mesh.size // MESH_SIZE_BATCH)
    # Check the number of elements fetched for each batch.
for i in range(num_batches):
elem = next(d_iterator)
with condition:
count = min((i + prefetch_buffer_size) * multiple,
num_batches * multiple)
result = condition.wait_for(lambda: counter.numpy() >= count, timeout=5)
self.assertTrue(result)
start_idx, end_idx = i * batch_size, (i + 1) * batch_size
self.assertDTensorEqual(inputs[start_idx:end_idx], inputs_layout, elem)
@parameterized.product(
(
dict(
images_sharding=[UNSHARDED, UNSHARDED, UNSHARDED, UNSHARDED],
labels_sharding=[UNSHARDED, UNSHARDED],
),
dict(
images_sharding=[MESH_DIM_BATCH, UNSHARDED, UNSHARDED, UNSHARDED],
labels_sharding=[MESH_DIM_BATCH, UNSHARDED],
),
dict(
images_sharding=[
UNSHARDED,
MESH_DIM_HEIGHT,
MESH_DIM_WIDTH,
UNSHARDED,
],
labels_sharding=[UNSHARDED, UNSHARDED],
),
dict(
images_sharding=[
UNSHARDED,
MESH_DIM_WIDTH,
MESH_DIM_HEIGHT,
UNSHARDED,
],
labels_sharding=[UNSHARDED, UNSHARDED],
),
dict(
images_sharding=[
MESH_DIM_BATCH,
MESH_DIM_HEIGHT,
MESH_DIM_WIDTH,
UNSHARDED,
],
labels_sharding=[MESH_DIM_BATCH, UNSHARDED],
),
dict(
images_sharding=[
MESH_DIM_BATCH,
MESH_DIM_WIDTH,
MESH_DIM_HEIGHT,
UNSHARDED,
],
labels_sharding=[MESH_DIM_BATCH, UNSHARDED],
),
),
is_graph=[False, True],
through_dtensor=[False, True],
)
def testIterWithLayouts(
self, images_sharding, labels_sharding, is_graph, through_dtensor
):
if through_dtensor:
scope = api.default_mesh(self.mesh)
else:
scope = contextlib.nullcontext()
with scope:
batch_size = 32
dataset = dataset_ops.DatasetV2.from_tensors(
(self.images, self.labels)
).repeat()
batched_dataset = dataset.batch(batch_size, drop_remainder=True)
images_layout = Layout(images_sharding, self.mesh)
labels_layout = Layout(labels_sharding, self.mesh)
layouts = (images_layout, labels_layout)
batch_dim = None
if MESH_DIM_BATCH in images_sharding or MESH_DIM_BATCH in labels_sharding:
batch_dim = MESH_DIM_BATCH
d_dataset = input_util.DTensorDataset(
dataset=dataset,
global_batch_size=batch_size,
mesh=self.mesh,
layouts=layouts,
batch_dim=batch_dim,
)
def train(iterator):
return next(iterator)
train_fn = polymorphic_function.function(train) if is_graph else train
d_iterator = iter(d_dataset)
d_images, d_labels = train_fn(d_iterator)
iterator = iter(batched_dataset)
images, labels = train_fn(iterator)
self.assertDTensorEqual(images, images_layout, d_images)
self.assertDTensorEqual(labels, labels_layout, d_labels)
def testMixedLayoutsFails(self):
dataset = dataset_ops.DatasetV2.from_tensors(
(self.images, self.labels)
).repeat()
images_layout = Layout(
[UNSHARDED, MESH_DIM_HEIGHT, MESH_DIM_WIDTH, UNSHARDED], self.mesh)
labels_layout = Layout([MESH_DIM_BATCH, UNSHARDED], self.mesh)
layouts = (images_layout, labels_layout)
with self.assertRaisesRegex(ValueError, (
f'batch_dim {MESH_DIM_BATCH} was specified but at least one layout did '
'not contain it')):
input_util.DTensorDataset(
dataset=dataset,
global_batch_size=32,
mesh=self.mesh,
layouts=layouts,
batch_dim=MESH_DIM_BATCH)
| DTensorDatasetTest |
python | pytorch__pytorch | torch/distributions/normal.py | {
"start": 352,
"end": 3990
} | class ____(ExponentialFamily):
r"""
Creates a normal (also called Gaussian) distribution parameterized by
:attr:`loc` and :attr:`scale`.
Example::
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> m = Normal(torch.tensor([0.0]), torch.tensor([1.0]))
>>> m.sample() # normally distributed with loc=0 and scale=1
tensor([ 0.1046])
Args:
loc (float or Tensor): mean of the distribution (often referred to as mu)
scale (float or Tensor): standard deviation of the distribution
(often referred to as sigma)
"""
# pyrefly: ignore [bad-override]
arg_constraints = {"loc": constraints.real, "scale": constraints.positive}
support = constraints.real
has_rsample = True
_mean_carrier_measure = 0
@property
def mean(self) -> Tensor:
return self.loc
@property
def mode(self) -> Tensor:
return self.loc
@property
def stddev(self) -> Tensor:
return self.scale
@property
def variance(self) -> Tensor:
return self.stddev.pow(2)
def __init__(
self,
loc: Union[Tensor, float],
scale: Union[Tensor, float],
validate_args: Optional[bool] = None,
) -> None:
self.loc, self.scale = broadcast_all(loc, scale)
if isinstance(loc, _Number) and isinstance(scale, _Number):
batch_shape = torch.Size()
else:
batch_shape = self.loc.size()
super().__init__(batch_shape, validate_args=validate_args)
def expand(self, batch_shape, _instance=None):
new = self._get_checked_instance(Normal, _instance)
batch_shape = torch.Size(batch_shape)
new.loc = self.loc.expand(batch_shape)
new.scale = self.scale.expand(batch_shape)
super(Normal, new).__init__(batch_shape, validate_args=False)
new._validate_args = self._validate_args
return new
def sample(self, sample_shape=torch.Size()):
shape = self._extended_shape(sample_shape)
with torch.no_grad():
return torch.normal(self.loc.expand(shape), self.scale.expand(shape))
def rsample(self, sample_shape: _size = torch.Size()) -> Tensor:
shape = self._extended_shape(sample_shape)
eps = _standard_normal(shape, dtype=self.loc.dtype, device=self.loc.device)
return self.loc + eps * self.scale
def log_prob(self, value):
if self._validate_args:
self._validate_sample(value)
# compute the variance
# pyrefly: ignore [unsupported-operation]
var = self.scale**2
log_scale = (
math.log(self.scale)
if isinstance(self.scale, _Number)
else self.scale.log()
)
return (
-((value - self.loc) ** 2) / (2 * var)
- log_scale
- math.log(math.sqrt(2 * math.pi))
)
def cdf(self, value):
if self._validate_args:
self._validate_sample(value)
return 0.5 * (
1 + torch.erf((value - self.loc) * self.scale.reciprocal() / math.sqrt(2))
)
def icdf(self, value):
return self.loc + self.scale * torch.erfinv(2 * value - 1) * math.sqrt(2)
def entropy(self):
return 0.5 + 0.5 * math.log(2 * math.pi) + torch.log(self.scale)
@property
def _natural_params(self) -> tuple[Tensor, Tensor]:
return (self.loc / self.scale.pow(2), -0.5 * self.scale.pow(2).reciprocal())
# pyrefly: ignore [bad-override]
def _log_normalizer(self, x, y):
return -0.25 * x.pow(2) / y + 0.5 * torch.log(-math.pi / y)
| Normal |
python | getsentry__sentry | tests/sentry/users/api/endpoints/test_auth_index.py | {
"start": 2143,
"end": 2874
} | class ____(APITestCase):
path = "/api/0/auth/"
def test_valid_password(self) -> None:
user = self.create_user("foo@example.com")
response = self.client.post(
self.path,
HTTP_AUTHORIZATION=self.create_basic_auth_header(user.username, "admin"),
)
assert response.status_code == 200
assert response.data["id"] == str(user.id)
def test_invalid_password(self) -> None:
user = self.create_user("foo@example.com")
response = self.client.post(
self.path,
HTTP_AUTHORIZATION=self.create_basic_auth_header(user.username, "foobar"),
)
assert response.status_code == 401
@control_silo_test
| AuthLoginEndpointTest |
python | huggingface__transformers | src/transformers/models/flaubert/modeling_flaubert.py | {
"start": 43602,
"end": 48908
} | class ____(FlaubertPreTrainedModel, GenerationMixin):
_tied_weights_keys = {"pred_layer.proj.weight": "transformer.embeddings.weight"}
def __init__(self, config):
super().__init__(config)
self.transformer = FlaubertModel(config)
self.pred_layer = FlaubertPredLayer(config)
# Initialize weights and apply final processing
self.post_init()
def get_output_embeddings(self):
return self.pred_layer.proj
def set_output_embeddings(self, new_embeddings):
self.pred_layer.proj = new_embeddings
def prepare_inputs_for_generation(self, input_ids, **kwargs):
# Overwritten -- uses a language id
mask_token_id = self.config.mask_token_id
lang_id = self.config.lang_id
effective_batch_size = input_ids.shape[0]
mask_token = torch.full((effective_batch_size, 1), mask_token_id, dtype=torch.long, device=input_ids.device)
input_ids = torch.cat([input_ids, mask_token], dim=1)
if lang_id is not None:
langs = torch.full_like(input_ids, lang_id)
else:
langs = None
return {"input_ids": input_ids, "langs": langs}
@auto_docstring
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
langs: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
lengths: Optional[torch.Tensor] = None,
cache: Optional[dict[str, torch.Tensor]] = None,
inputs_embeds: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[tuple, MaskedLMOutput]:
r"""
langs (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the *language name
to language id* mapping is in `model.config.lang2id` (which is a dictionary string to int) and the
*language id to language name* mapping is in `model.config.id2lang` (dictionary int to string).
See usage examples detailed in the [multilingual documentation](../multilingual).
lengths (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use `attention_mask` for the same result (see above), kept here for compatibility. Indices selected in
`[0, ..., input_ids.size(-1)]`:
cache (`dict[str, torch.FloatTensor]`, *optional*):
Dictionary strings to `torch.FloatTensor` that contains precomputed hidden-states (key and values in the
attention blocks) as computed by the model (see `cache` output below). Can be used to speed up sequential
decoding. The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
`labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`
are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
transformer_outputs = self.transformer(
input_ids,
attention_mask=attention_mask,
langs=langs,
token_type_ids=token_type_ids,
position_ids=position_ids,
lengths=lengths,
cache=cache,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
output = transformer_outputs[0]
outputs = self.pred_layer(output, labels) # (loss, logits) or (logits,) depending on if labels are provided.
if not return_dict:
return outputs + transformer_outputs[1:]
return MaskedLMOutput(
loss=outputs[0] if labels is not None else None,
logits=outputs[0] if labels is None else outputs[1],
hidden_states=transformer_outputs.hidden_states,
attentions=transformer_outputs.attentions,
)
@auto_docstring(
custom_intro="""
Flaubert Model with a sequence classification/regression head on top (a linear layer on top of the pooled output)
e.g. for GLUE tasks.
"""
)
# Copied from transformers.models.xlm.modeling_xlm.XLMForSequenceClassification with XLM_INPUTS->FLAUBERT_INPUTS,XLM->Flaubert
| FlaubertWithLMHeadModel |
python | google__jax | jax/_src/pallas/mosaic_gpu/primitives.py | {
"start": 81137,
"end": 114881
} | class ____:
transforms: tuple[gpu_core.MemoryRefTransform, ...] = ()
def _undo_transforms(
raw_ref: state.AbstractRef,
memory_transforms: Sequence[gpu_core.MemoryRefTransform],
):
"""Extract the `Transform`s that reverse the `MemoryRefTransform`s"""
tmp_ref = state_types.TransformedRef(raw_ref, transforms=())
tmp_ref = functools.reduce(lambda r, t: t.undo(r), reversed(memory_transforms), tmp_ref)
return tmp_ref.transforms
def inline_mgpu(*, arg_types=(), return_type=None):
r"""Returns a decorator that inlines Mosaic GPU code.
This allows using lower-level Mosaic GPU abstractions and operations, which
are otherwise not directly exposed in Pallas.
Example::
layout = plgpu.Layout.WG_STRIDED(x_ref.shape, vec_size=4)
@plgpu.inline_mgpu(
arg_types=(plgpu.RefType(),),
return_type=plgpu.ShapeDtypeStruct(
(128, 128), dtype, layout=layout
),
)
def add_one(ctx, smem_ref):
x = mgpu.FragmentedArray.load_tiled(smem_ref)
y = mgpu.FragmentedArray.splat(
mgpu.c(1, x.mlir_dtype), shape=x.shape, layout=x.layout
)
return x + y
Args:
arg_types: A sequence of pytrees where the leaves are
:class:`~jax.experimental.pallas.mosaic_gpu.RefType`\s or
:class:`~jax.experimental.pallas.mosaic_gpu.Layout`\s for reference or
array arguments respectively.
return_type: A pytree where the leaves are
:class:`~jax.experimental.pallas.mosaic_gpu.ShapeDtypeStruct`\s
representing the arrays returned by the decorated function.
"""
flat_arg_types, treedef_ty = jax.tree.flatten(tuple(arg_types))
flat_ret_ty, pytree_ret_ty = jax.tree.flatten(return_type)
if return_type and not all(isinstance(r, ShapeDtypeStruct) for r in flat_ret_ty):
raise ValueError(
"inline_mgpu_p only supports plgpu.ShapeDtypeStruct return types."
)
if not all(isinstance(r, (SomeLayout, RefType)) for r in flat_arg_types):
raise ValueError(
"inline_mgpu_p only supports only SomeLayout and RefType arg types."
)
def inner(f):
def wrapper(*args):
flat_args, treedef = jax.tree.flatten(tuple(args))
if treedef != treedef_ty:
raise ValueError(f"Mismatched type shape: {treedef} != {treedef_ty}")
# Strip the transforms from the refs since they will be recorded in
# the types.
ref_transforms = []
raw_flat_args = []
for a, t in zip(flat_args, flat_arg_types):
if isinstance(a, state_types.TransformedRef) and isinstance(t, RefType):
raw_flat_args.append(a.ref)
ref_transforms.append(a.transforms)
elif isinstance(aval := jax_core.get_aval(a), jax_core.ShapedArray) and isinstance(t, SomeLayout):
raw_flat_args.append(a)
ref_transforms.append(None)
elif isinstance(aval, state.AbstractRef) and isinstance(t, RefType):
raw_flat_args.append(a)
ref_transforms.append(())
else:
raise ValueError(f"Mismatched type: {a, t}")
flat_ref_transforms, pytree_ref_transforms = jax.tree.flatten(ref_transforms)
flat_ret = inline_mgpu_p.bind(
*raw_flat_args,
*flat_ref_transforms,
flat_arg_types=tuple(flat_arg_types),
flat_ret_ty=tuple(flat_ret_ty),
pytree_ret_ty=pytree_ret_ty,
pytree_args=treedef,
pytree_ref_transforms=pytree_ref_transforms,
mgpu_fn=f,
)
return jax.tree.unflatten(pytree_ret_ty, flat_ret)
return wrapper
return inner
@inline_mgpu_p.def_effectful_abstract_eval
def _inline_mgpu_abstract_eval(
*flat_args_and_transforms,
flat_arg_types,
flat_ret_ty,
pytree_args,
pytree_ref_transforms,
pytree_ret_ty,
mgpu_fn,
):
del flat_arg_types, pytree_ret_ty, pytree_ref_transforms, mgpu_fn # Unused.
aval_return = tuple(
jax_core.ShapedArray(x.shape, x.dtype) for x in flat_ret_ty
)
# TODO(cperivol): Let the user set the effects.
flat_args = flat_args_and_transforms[:pytree_args.num_leaves]
return aval_return, {
gpu_core._wgmma_pipeline_effect,
gpu_core._memory_effect,
*itertools.chain.from_iterable(
(state.ReadEffect(i), state.WriteEffect(i))
for i, r in enumerate(flat_args)
if isinstance(r, state.AbstractRef)
),
}
@discharge.register_partial_discharge_rule(inline_mgpu_p)
def _inline_mgpu_discharge(*args, **kwargs):
del args, kwargs
raise NotImplementedError("inline_mgpu_p does not support discharge.")
def _type_check_mgpu_lane_semantics(v, ty):
match (ty, v):
case (RefType(), ir.Value()) if ir.MemRefType.isinstance(v.type):
pass
case (ShapeDtypeStruct(), mgpu.FragmentedArray()):
mlir_dtype = mgpu_utils.dtype_to_ir_type(ty.dtype)
if v.mlir_dtype != mlir_dtype:
raise ValueError(
f"Array dtype mismatch: expected {v.mlir_dtype} got {mlir_dtype}."
)
if ty.shape != v.shape:
raise ValueError(
f"Array shape mismatch: expected {ty.shape} got {v.shape}."
)
if v.layout != ty.layout.to_mgpu():
raise ValueError(
f"Array layout mismatch: expected {v.layout} got {ty.layout.to_mgpu()}."
)
case (SomeLayout(), mgpu.FragmentedArray()):
if ty.to_mgpu() != v.layout:
raise ValueError(f"Unexpected layout for {v} (expected: {ty})")
case _:
raise ValueError(f"Unexpected type {ty} for value {v}")
def _inline_mgpu_flat_transformed_args(
ctx: lowering.LoweringRuleContext,
flat_args_and_transforms,
flat_arg_types,
pytree_args,
pytree_ref_transforms,
) -> Sequence[ir.Value]:
flat_args = flat_args_and_transforms[:pytree_args.num_leaves]
flat_arg_avals = ctx.avals_in[:pytree_args.num_leaves]
ref_transforms = pytree_ref_transforms.unflatten(flat_args_and_transforms[pytree_args.num_leaves:])
is_wg_semantics = (
ctx.module_ctx.lowering_semantics == mgpu.LoweringSemantics.Warpgroup
)
if not is_wg_semantics:
for a, t in zip(flat_args, flat_arg_types):
_type_check_mgpu_lane_semantics(a, t)
flat_transformed : list[ir.Value] = []
for a, aval, t, transforms in zip(
flat_args, flat_arg_avals, flat_arg_types, ref_transforms, strict=True
):
if not isinstance(t, RefType):
flat_transformed.append(a)
assert transforms is None
continue
assert isinstance(aval, state.AbstractRef)
a, user_transforms = lowering._handle_transforms(
ctx,
a,
transforms,
handle_transposes=is_wg_semantics,
)
if is_wg_semantics:
if user_transforms:
raise NotImplementedError(
"Not all transforms could be handled. Remaining transforms:"
f" {user_transforms}."
)
else:
# Transforms that do not originate from a MemoryRefTransform are
# applied implicitly (eg by emit-pipeline) and therefore we do not
# expect the user to pass them to the type. The transforms not
# passed by the user here will be discharged.
ty_transforms = _undo_transforms(aval, t.transforms)
if ty_transforms != tuple(user_transforms):
raise ValueError(f"Transform mismatch: got {user_transforms}, expected {ty_transforms}")
flat_transformed.append(a)
return flat_transformed
@lowering.register_lowering_rule(inline_mgpu_p, mgpu.LoweringSemantics.Lane)
def _inline_mgpu_lowering_rule(
ctx: lowering.LoweringRuleContext,
*flat_args_and_transforms,
mgpu_fn: Callable[..., Any],
flat_arg_types,
flat_ret_ty,
pytree_args,
pytree_ref_transforms,
pytree_ret_ty,
):
flat_transformed = _inline_mgpu_flat_transformed_args(
ctx,
flat_args_and_transforms,
flat_arg_types,
pytree_args,
pytree_ref_transforms,
)
args = jax.tree.unflatten(pytree_args, flat_transformed)
ret = mgpu_fn(ctx.launch_ctx, *args)
ret_leaves, ret_tree = jax.tree.flatten(
ret, lambda x: isinstance(x, mgpu.FragmentedArray)
)
if ret_tree != pytree_ret_ty:
return_type = jax.tree.unflatten(pytree_ret_ty, flat_ret_ty)
raise ValueError(
f"inline_mgpu_p return type tree mismatch: {ret} != {return_type}"
)
for ty, r in zip(flat_ret_ty, ret_leaves):
_type_check_mgpu_lane_semantics(r, ty)
return ret_leaves
def _ref_type_to_transforms(ref_type: RefType) -> ir.ArrayAttribute:
"""Returns the Mosaic GPU transforms for the given ref type."""
transform_attrs = [t.to_gpu_transform_attr() for t in ref_type.transforms]
return ir.ArrayAttr.get(transform_attrs)
def _replace_uses_in_block(old: ir.Value, new: ir.Value, block: ir.Block):
"""Replaces all uses of the `old` value with the `new` value in `block`."""
def is_contained_within_block(operand: ir.OpOperand, block: ir.Block) -> bool:
current_op = operand.owner.operation
while (parent := current_op.parent) is not None:
if current_op.block == block:
return True
current_op = parent
return False
for use in old.uses:
if is_contained_within_block(use, block):
use.owner.operands[use.operand_number] = new
def _clone_custom_op_with_extra_args(
custom_op: mgpu.dialect.CustomPrimitiveOp, extra_args: Sequence[ir.Value]
) -> mgpu.dialect.CustomPrimitiveOp:
"""Clones a CustomPrimitiveOp and its block adding the given extra_args.
The new args are not allowed to contain SMEM refs or vector types. The extra
args are added in order at the end of the existing parameter list.
The reason we need to do this is because the custom primitive op has the
"IsolatedFromAbove" trait, which requires that its block does not close
over any values defined outside of it. When lowering the provided mgpu_fn,
it's possible that it closed over values from the conext (such as the SMEM
descriptors if it calls async_copy). Post-processing the original block
with this function is therefore required to restore the isolation property.
"""
for arg in extra_args:
if ir.MemRefType.isinstance(arg.type) and mgpu_utils.is_smem_ref(arg.type):
raise ValueError(f"Extra arg {arg} must not be an SMEM ref.")
if ir.VectorType.isinstance(arg.type):
raise ValueError(f"Extra arg {arg} must not have a vector type.")
new_operands = list(custom_op.operands) + list(extra_args)
old_block = custom_op.body.blocks[0]
new_in_types = [a.type for a in list(old_block.arguments) + list(extra_args)]
# Below, we can reuse all layouts and transforms, because the extra args
# are not smem refs or vectors.
new_op = mgpu.dialect.CustomPrimitiveOp(
result=custom_op.results,
operands_=new_operands,
in_layouts=custom_op.in_layouts,
in_transforms=custom_op.in_transforms,
out_layouts=custom_op.out_layouts,
)
new_block = new_op.body.blocks.append(*new_in_types)
for op in old_block.operations:
new_block.append(op)
for old_arg, new_arg in zip(old_block.arguments, new_block.arguments):
old_arg.replace_all_uses_with(new_arg)
num_old_args = len(old_block.arguments)
for extra_arg, new_arg in zip(
extra_args, new_block.arguments[num_old_args:], strict=True
):
_replace_uses_in_block(extra_arg, new_arg, new_block)
return new_op
def _custom_primitive_in_specs(
ctx: lowering.LoweringRuleContext,
flat_arg_types,
flat_transformed_args,
pytree_args,
) -> tuple[Sequence[ir.Type], Sequence[ir.Attribute], Sequence[ir.ArrayAttr]]:
"""Returns a tuple containing the list of MLIR input types, layouts, and
transforms for the given JAX array and ref arguments."""
in_types = []
in_layouts = []
in_transforms : list[ir.ArrayAttr] = []
flat_arg_avals = ctx.avals_in[:pytree_args.num_leaves]
for aval, transformed, t in zip(
flat_arg_avals, flat_transformed_args, flat_arg_types
):
match aval:
case state.AbstractRef():
initial_ty = ir.MemRefType(transformed.type)
in_types.append(initial_ty)
if mgpu_utils.is_smem_ref(initial_ty):
in_transforms.append(_ref_type_to_transforms(t))
case jax_core.ShapedArray() if isinstance(t, SomeLayout):
el_type = mgpu_utils.dtype_to_ir_type(aval.dtype)
if len(aval.shape) == 0:
in_types.append(el_type)
else:
vector_type = ir.VectorType.get(aval.shape, el_type)
in_types.append(vector_type)
in_layouts.append(mgpu_layouts.to_layout_attr(t.to_mgpu()))
case _:
raise NotImplementedError(
f"Unsupported aval type: {aval}, {type(aval)}, {t}"
)
return in_types, in_layouts, in_transforms
def _custom_primitive_op_results(flat_ret_ty) -> tuple[
Sequence[ir.Type],
Sequence[ir.Attribute | None],
]:
"""Returns a tuple containing the list of output MLIR types, and layouts for
the given JAX return types."""
results_ty: list[ir.Type] = []
out_layouts: list[ir.Attribute | None] = []
for r in flat_ret_ty:
if not isinstance(r, ShapeDtypeStruct):
raise NotImplementedError(f"Expected a ShapeDtypeStruct, but got: {r}")
el_type = mgpu_utils.dtype_to_ir_type(r.dtype)
if not r.shape: # scalar case.
results_ty.append(el_type)
out_layouts.append(None)
else:
results_ty.append(ir.VectorType.get(r.shape, el_type))
layout = mgpu_layouts.to_layout_attr(r.layout.to_mgpu())
out_layouts.append(layout)
return results_ty, out_layouts
def _populate_custom_primitive_op_block(
ctx: lowering.LoweringRuleContext,
block: ir.Block,
mgpu_fn: Callable[..., Any],
pytree_args,
in_layouts: Sequence[ir.Attribute],
in_transforms: ir.ArrayAttr,
results_ty: Sequence[ir.Type],
out_layouts: Sequence[ir.Attribute | None],
):
"""Calls the given mgpu_fn to populate the block, handling inputs and outputs.
Block arguments that are references to SMEM or vectors are unwrapped to
transformed references and fragmented arrays before they are passed to the
python function mgpu_fn.
The resulting fragmented arrays, if any, are wrapped as vectors before they
are returned.
"""
with ir.InsertionPoint(block):
fn_inputs = []
in_layouts_it = iter(in_layouts)
in_transforms_it = iter(in_transforms)
avals_in = ctx.avals_in[:pytree_args.num_leaves]
for arg, aval in zip(block.arguments, avals_in, strict=True):
if ir.MemRefType.isinstance(arg.type):
memref_ty = ir.MemRefType(arg.type)
if not mgpu_utils.is_smem_ref(memref_ty):
fn_inputs.append(arg)
continue
_, transforms = (
mgpu.dialect_lowering.swizzle_and_transforms_from_transforms_attr(
next(in_transforms_it)
)
)
# The block arguments in the Mosaic GPU dialect are logical refs that
# wrap the transfromed refs. Since the mgpu_fn works at the lowered
# "lane" level, we need to transform (lower) the inputs before passing
# them to the mgpu_fn.
transformed_type = mgpu.dialect_lowering.transformed_smem_ref_type(
memref_ty, transforms
)
conversion_cast = builtin_dialect.UnrealizedConversionCastOp(
[transformed_type], [arg]
)
fn_inputs.append(conversion_cast.result)
elif ir.VectorType.isinstance(arg.type):
layout_attr = next(in_layouts_it)
layout = mgpu.layouts.from_layout_attr(layout_attr)
vector_ty = ir.VectorType(arg.type)
reg_shape = layout.registers_shape(vector_ty.shape)
reg_ty = layout.registers_element_type(vector_ty.element_type)
# The vector block arguments in the Mosaic GPU dialect are wrapped
# Fragmented Arrays. Since the mgpu_fn works at the lowered
# "lane" level, we need to unwrap (lower) the input vectors before
# passing them to the mgpu_fn.
conversion_cast = builtin_dialect.UnrealizedConversionCastOp(
[reg_ty] * math.prod(reg_shape), [arg]
)
conversion_cast.attributes["registers_shape"] = ir.ArrayAttr.get([
ir.IntegerAttr.get(ir.IntegerType.get_signless(64), s)
for s in reg_shape
])
conversion_cast.attributes["layout"] = layout_attr
registers = np.array(list(conversion_cast.results)).reshape(reg_shape)
is_signed = mgpu_utils.is_signed(aval.dtype)
fa = mgpu.FragmentedArray(
_registers=registers, _layout=layout, _is_signed=is_signed
)
fn_inputs.append(fa)
else:
fn_inputs.append(arg)
args = jax.tree.unflatten(pytree_args, fn_inputs)
inner_ret = mgpu_fn(ctx.launch_ctx, *args)
if inner_ret is None:
inner_ret = []
elif not isinstance(inner_ret, tuple) and not isinstance(inner_ret, list):
inner_ret = [inner_ret]
ir_ret = []
for fa, result_ty, out_layout in zip(
inner_ret, results_ty, out_layouts, strict=True
):
if not isinstance(fa, mgpu.FragmentedArray):
raise ValueError(f"Expected a FragmentedArray, but got: {fa}")
if ir.VectorType.isinstance(result_ty):
result_shape = ir.VectorType(result_ty).shape
if fa.shape != tuple(result_shape):
raise ValueError(f"Expected {result_shape} but got {fa.shape}")
if out_layout != mgpu.layouts.to_layout_attr(fa.layout):
raise ValueError(
f"Output layout {out_layout} does not match the layout of the"
f" returned fragmented array {fa.layout}."
)
ir_ret.append(
mgpu.dialect_lowering.fragmented_array_to_ir(fa, result_ty)
)
else: # scalar case.
assert out_layout is None
if fa.shape:
raise ValueError(f"Expected 0D shape, but got {fa.shape}")
if not isinstance(fa.layout, mgpu.WGSplatFragLayout):
raise ValueError(f"Expected WGSplatFragLayout, but got {fa.layout}")
value = fa.registers.item()
ir_ret.append(value)
mgpu.dialect.ReturnOp(operands_=ir_ret)
def _closed_over_values(block: ir.Block) -> list[ir.Value]:
"""Returns the values closed over in the given block."""
def _closed_over_values_inner(
block: ir.Block, vals_in_block: set[ir.Value]
) -> list[ir.Value]:
closed_over_values = []
for arg in block.arguments:
vals_in_block.add(arg)
for op in block.operations:
for o in op.operands:
if o not in vals_in_block:
closed_over_values.append(o)
for r in op.regions:
for b in r.blocks:
closed_over_values.extend(_closed_over_values_inner(b, vals_in_block))
for r in op.results:
vals_in_block.add(r)
return closed_over_values
return _closed_over_values_inner(block, set())
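The helper above walks a block, tracking values defined inside it and collecting any operand defined elsewhere. The same free-variable discovery can be sketched over a toy nested-op representation (the dict shape here is illustrative, not the MLIR Python API):

```python
def closed_over_values(block, defined=None):
    """Collect operands referenced in `block` but defined outside it.

    A block is a dict with "args" (values it defines), and "ops"; each
    op has "operands", "results", and nested "blocks". As in the helper
    above, operands are checked first, then nested regions, and only
    then are the op's results marked as defined.
    """
    if defined is None:
        defined = set()
    captured = []
    defined.update(block.get("args", ()))
    for op in block.get("ops", ()):
        for operand in op.get("operands", ()):
            if operand not in defined:
                captured.append(operand)
        for inner in op.get("blocks", ()):
            captured.extend(closed_over_values(inner, defined))
        defined.update(op.get("results", ()))
    return captured

block = {
    "args": ["a"],
    "ops": [
        {"operands": ["a", "x"], "results": ["b"], "blocks": []},
        {"operands": ["b", "y"], "results": [], "blocks": [
            {"args": [], "ops": [
                {"operands": ["z", "b"], "results": [], "blocks": []},
            ]},
        ]},
    ],
}
print(closed_over_values(block))  # ['x', 'y', 'z']
```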
@lowering.register_lowering_rule(inline_mgpu_p, mgpu.LoweringSemantics.Warpgroup)
def _inline_mgpu_lowering_rule_wg_semantics(
ctx: lowering.LoweringRuleContext,
*flat_args_and_transforms,
mgpu_fn: Callable[..., Any],
flat_arg_types,
flat_ret_ty,
pytree_args,
pytree_ref_transforms,
pytree_ret_ty,
):
del pytree_ret_ty
flat_transformed_args = _inline_mgpu_flat_transformed_args(
ctx,
flat_args_and_transforms,
flat_arg_types,
pytree_args,
pytree_ref_transforms,
)
in_types, in_layouts, in_transforms = (
_custom_primitive_in_specs(
ctx, flat_arg_types, flat_transformed_args, pytree_args
)
)
results_ty, out_layouts = _custom_primitive_op_results(flat_ret_ty)
custom_op = mgpu.dialect.CustomPrimitiveOp(
result=results_ty,
operands_=flat_transformed_args,
in_layouts=in_layouts,
in_transforms=in_transforms,
out_layouts=[l for l in out_layouts if l is not None],
)
block: ir.Block = custom_op.body.blocks.append(*in_types)
_populate_custom_primitive_op_block(
ctx,
block,
mgpu_fn,
pytree_args,
in_layouts,
in_transforms,
results_ty,
out_layouts,
)
# We need to ensure that the block doesn't capture any values from the context
# and uses args for everything instead. E.g. `LaunchContext.tma_descriptors`
# will be captured when calling `ctx.async_copy`.
captured = _closed_over_values(block)
if captured:
old_custom_op = custom_op
custom_op = _clone_custom_op_with_extra_args(custom_op, captured)
old_custom_op.erase()
return custom_op.results
load_p = jax_core.Primitive("load")
@load_p.def_effectful_abstract_eval
def _load_abstract_eval(src, *avals_flat, tree, optimized):
del optimized # Unused.
transforms = tree.unflatten(avals_flat)
dtype = lowering._transform_dtype(src.dtype, transforms)
transforms = list(transforms)
if not transforms or not isinstance(transforms[-1], indexing.NDIndexer):
ref_shape = state.get_transforms_shape(transforms, src.shape)
transforms.append(indexing.NDIndexer.make_trivial_indexer(ref_shape))
shape = transforms[-1].get_indexer_shape()
return jax_core.ShapedArray(shape, dtype), {state.ReadEffect(0)}
lowering.register_lowering_rule(load_p, mgpu.LoweringSemantics.Lane)(
lowering._get_lowering_rule
)
lowering.register_lowering_rule(
load_p, mgpu.LoweringSemantics.Lane, gpu_core.PrimitiveSemantics.Warp
)(
lowering._get_lowering_rule
)
lowering.register_lowering_rule(load_p, mgpu.LoweringSemantics.Warpgroup)(
lowering._get_lowering_rule_wg
)
def load(
src: _Ref,
idx,
*,
layout: SomeLayout | None = None,
optimized: bool = True,
) -> jax.Array:
"""Loads from a reference into an array with the specified layout.
Args:
src: The reference to load from. Can be either in SMEM or GMEM.
idx: The index to load from.
layout: The optional layout to use for the resulting array.
optimized: If True, a compilation error will be raised if no optimized
implementation for the load is available.
Returns:
The loaded array.
"""
src, src_transforms = state_primitives.get_ref_and_transforms(
src, idx, "load"
)
flat_src_transforms, src_transforms_treedef = tree_util.tree_flatten(
src_transforms
)
result = load_p.bind(
src,
*flat_src_transforms,
tree=src_transforms_treedef,
optimized=optimized,
)
if layout is not None:
result = gpu_core.layout_cast(result, layout)
return result
async_load_tmem_p = jax_core.Primitive("async_load")
def async_load_tmem(src: _Ref, *, layout: SomeLayout | None = None) -> jax.Array:
"""Performs an asynchronous load from the TMEM array.
The load operation is only partly asynchronous. The returned array can be used
immediately, without any additional synchronization. However, it cannot be
assumed that the read from TMEM has completed when the function returns. If
you ever attempt to overwrite the read region, you should ensure that
``wait_load_tmem`` has been called before that happens. Failure to do so
can result in nondeterministic data races.
For example, the following sequence of operations at the end of the kernel is
valid, even though the TMEM load is never awaited::
smem_ref[...] = plgpu.async_load_tmem(tmem_ref)
plgpu.commit_smem()
plgpu.copy_smem_to_gmem(smem_ref, gmem_ref)
plgpu.wait_smem_to_gmem(0)
However, if the kernel was persistent and might reuse the TMEM again, the
sequence should be extended with a call to ``wait_load_tmem``.
Args:
src: The TMEM reference to load from.
layout: The optional layout hint to use for the resulting array.
"""
src, src_transforms = state_primitives.get_ref_and_transforms(
src, None, "async_load_tmem"
)
flat_src_transforms, src_transforms_treedef = tree_util.tree_flatten(
src_transforms
)
result = async_load_tmem_p.bind(
src, *flat_src_transforms, tree=src_transforms_treedef
)
if layout is not None:
result = gpu_core.layout_cast(result, layout)
return result
@async_load_tmem_p.def_effectful_abstract_eval
def _async_load_tmem_abstract_eval(src, *avals_flat, tree):
if src.memory_space != gpu_core.MemorySpace.TMEM:
raise ValueError("Async load only supports TMEM refs")
return state_primitives._get_abstract_eval(src, *avals_flat, tree=tree)
@lowering.register_lowering_rule(async_load_tmem_p, mgpu.LoweringSemantics.Lane)
def _async_load_tmem_lowering_rule(
ctx: lowering.LoweringRuleContext, x_ref, *leaves, tree
):
assert isinstance(x_ref, tcgen05.TMEMRef)
transforms = jax.tree.unflatten(tree, leaves)
x_tmem, transforms = lowering._handle_transforms(
ctx, x_ref, transforms, handle_transposes=False, handle_reshapes=False,
)
if transforms:
raise NotImplementedError(
f"Unimplemented transforms for TMEM refs. {transforms=}"
)
layout_hint = None
if isinstance(ctx.out_layout_hint, mgpu.TiledLayout):
layout_hint = ctx.out_layout_hint
is_signed = mgpu_utils.is_signed(ctx.avals_out[0].dtype)
return x_tmem.load(layout=layout_hint, is_signed=is_signed)
@lowering.register_lowering_rule(
async_load_tmem_p, mgpu.LoweringSemantics.Warpgroup
)
def _async_load_tmem_lowering_rule_wg(
ctx: lowering.LoweringRuleContext, x_ref: ir.Value, *leaves, tree
):
assert isinstance(x_ref, ir.Value)
assert ir.MemRefType.isinstance(x_ref.type)
transforms = jax.tree.unflatten(tree, leaves)
x_tmem, transforms = lowering._handle_transforms(
ctx,
x_ref,
transforms,
handle_transposes=False,
handle_reshapes=False,
)
if transforms:
raise NotImplementedError(
f"Unimplemented transforms for TMEM refs. {transforms=}"
)
return mgpu.dialect.async_load_tmem(x_tmem)
wait_load_tmem_p = jax_core.Primitive("wait_load_tmem")
wait_load_tmem_p.multiple_results = True
def wait_load_tmem():
"""Awaits all previously asynchronous TMEM loads issued by the calling thread.
Once this function returns, the TMEM loads issued by the calling thread are
guaranteed to have completed. The read TMEM regions can be safely overwritten
by the calling thread, or any threads signalled through ``Barrier``s with
``orders_tensor_core=True``.
"""
wait_load_tmem_p.bind()
@wait_load_tmem_p.def_effectful_abstract_eval
def _wait_load_tmem_abstract_eval():
return (), {gpu_core._memory_effect}
@lowering.register_lowering_rule(wait_load_tmem_p, mgpu.LoweringSemantics.Lane)
def _wait_load_tmem_lowering(_):
tcgen05.wait_load_tmem()
return ()
async_store_tmem_p = jax_core.Primitive("async_store_tmem")
async_store_tmem_p.multiple_results = True
def async_store_tmem(ref: _Ref, value):
"""Stores the value to TMEM.
The store is asynchronous and is not guaranteed to be visible (e.g. by reads
or MMA operations) until ``commit_tmem`` has been called.
Args:
ref: The TMEM reference to store to.
value: The value to store.
"""
ref, ref_transforms = state_primitives.get_ref_and_transforms(
ref, None, "async_store_tmem"
)
flat_ref_transforms, ref_transforms_treedef = tree_util.tree_flatten(
ref_transforms
)
async_store_tmem_p.bind(
ref, value, *flat_ref_transforms, tree=ref_transforms_treedef
)
@async_store_tmem_p.def_effectful_abstract_eval
def _async_store_tmem_abstract_eval(ref, val, *avals_flat, tree):
if ref.memory_space != gpu_core.MemorySpace.TMEM:
raise ValueError("Async store only supports TMEM refs")
_, effects = state_primitives._swap_abstract_eval(
ref, val, *avals_flat, tree=tree
)
return (), effects
@lowering.register_lowering_rule(async_store_tmem_p, mgpu.LoweringSemantics.Lane)
def _async_store_tmem_lowering_rule(
ctx: lowering.LoweringRuleContext, x_ref, value, *leaves, tree
):
assert isinstance(x_ref, tcgen05.TMEMRef)
transforms = jax.tree.unflatten(tree, leaves)
x_tmem, transforms = lowering._handle_transforms(
ctx, x_ref, transforms, handle_transposes=False, handle_reshapes=False,
)
if transforms:
raise NotImplementedError(
f"Unimplemented transforms for TMEM refs. {transforms=}"
)
x_tmem.store(value)
return ()
@lowering.register_lowering_rule(
async_store_tmem_p, mgpu.LoweringSemantics.Warpgroup
)
def _async_store_tmem_lowering_rule_wg(
ctx: lowering.LoweringRuleContext,
x_ref: ir.Value,
value: ir.Value,
*leaves,
tree,
):
assert isinstance(x_ref, ir.Value)
assert ir.MemRefType.isinstance(x_ref.type)
assert isinstance(value, ir.Value)
assert ir.VectorType.isinstance(value.type)
transforms = jax.tree.unflatten(tree, leaves)
x_tmem, transforms = lowering._handle_transforms(
ctx,
x_ref,
transforms,
handle_transposes=False,
handle_reshapes=False,
)
if transforms:
raise NotImplementedError(
f"Unimplemented transforms for TMEM refs. {transforms=}"
)
mgpu.dialect.async_store_tmem(value, x_tmem)
return ()
async_copy_scales_to_tmem_p = jax_core.Primitive("async_copy_scales_to_tmem")
async_copy_scales_to_tmem_p.multiple_results = True
def async_copy_scales_to_tmem(smem_ref: _Ref, tmem_ref: _Ref):
"""Copies the MMA scales from SMEM to TMEM.
The copy is performed asynchronously and can be awaited by calling
``tcgen05_commit_arrive`` and waiting on the specified barrier. However, if
the copy is consumed by an MMA operation issued in the same thread, no
synchronization is necessary (except for eventually awaiting the MMA operation
itself).
"""
smem_ref, smem_transforms = state_primitives.get_ref_and_transforms(
smem_ref, None, "async_copy_scales_to_tmem"
)
flat_smem_transforms, smem_transforms_treedef = tree_util.tree_flatten(
smem_transforms
)
tmem_ref, tmem_transforms = state_primitives.get_ref_and_transforms(
tmem_ref, None, "async_copy_scales_to_tmem"
)
flat_tmem_transforms, tmem_transforms_treedef = tree_util.tree_flatten(
tmem_transforms
)
async_copy_scales_to_tmem_p.bind(
smem_ref, tmem_ref, *flat_smem_transforms, *flat_tmem_transforms,
smem_tree=smem_transforms_treedef, tmem_tree=tmem_transforms_treedef,
)
async_copy_sparse_metadata_to_tmem_p = jax_core.Primitive("async_copy_sparse_metadata_to_tmem")
async_copy_sparse_metadata_to_tmem_p.multiple_results = True
def async_copy_sparse_metadata_to_tmem(smem_ref: _Ref, tmem_ref: _Ref):
"""Copies the MMA sparse metadata from SMEM to TMEM.
The copy is performed asynchronously and can be awaited by calling
``tcgen05_commit_arrive`` and waiting on the specified barrier. However, if
the copy is consumed by an MMA operation issued in the same thread, no
synchronization is necessary (except for eventually awaiting the MMA operation
itself).
"""
smem_ref, smem_transforms = state_primitives.get_ref_and_transforms(
smem_ref, None, "async_copy_sparse_metadata_to_tmem"
)
flat_smem_transforms, smem_transforms_treedef = tree_util.tree_flatten(
smem_transforms
)
tmem_ref, tmem_transforms = state_primitives.get_ref_and_transforms(
tmem_ref, None, "async_copy_sparse_metadata_to_tmem"
)
flat_tmem_transforms, tmem_transforms_treedef = tree_util.tree_flatten(
tmem_transforms
)
async_copy_sparse_metadata_to_tmem_p.bind(
smem_ref, tmem_ref, *flat_smem_transforms, *flat_tmem_transforms,
smem_tree=smem_transforms_treedef, tmem_tree=tmem_transforms_treedef,
)
@async_copy_scales_to_tmem_p.def_effectful_abstract_eval
@async_copy_sparse_metadata_to_tmem_p.def_effectful_abstract_eval
def _async_copy_to_tmem_abstract_eval(smem_ref, tmem_ref, *avals_flat, smem_tree, tmem_tree):
if smem_ref.memory_space != gpu_core.MemorySpace.SMEM:
raise ValueError("Async copy to TMEM source must be an SMEM ref")
if tmem_ref.memory_space != gpu_core.MemorySpace.TMEM:
raise ValueError("Async copy to TMEM target must be a TMEM ref")
return (), {gpu_core._memory_effect}
def _async_copy_to_tmem_lowering_rule(
impl, ctx: lowering.LoweringRuleContext, smem_ref, tmem_ref, *leaves, smem_tree, tmem_tree
):
assert isinstance(tmem_ref, tcgen05.TMEMRef)
smem_leaves, tmem_leaves = util.split_list(leaves, [smem_tree.num_leaves])
smem_transforms = jax.tree.unflatten(smem_tree, smem_leaves)
tmem_transforms = jax.tree.unflatten(tmem_tree, tmem_leaves)
smem_ref, smem_transforms = lowering._handle_transforms(ctx, smem_ref, smem_transforms)
tmem_ref, tmem_transforms = lowering._handle_transforms(ctx, tmem_ref, tmem_transforms)
if smem_transforms:
raise NotImplementedError(f"Unimplemented transforms for SMEM refs: {smem_transforms}")
if tmem_transforms:
raise NotImplementedError(f"Unimplemented transforms for TMEM refs: {tmem_transforms}")
with mgpu.when(ctx.module_ctx.single_lane_predicate):
impl(smem_ref, tmem_ref)
return ()
@lowering.register_lowering_rule(
async_copy_scales_to_tmem_p, mgpu.LoweringSemantics.Lane
)
@lowering.register_lowering_rule(
async_copy_scales_to_tmem_p,
mgpu.LoweringSemantics.Lane,
gpu_core.PrimitiveSemantics.Warp,
)
def _async_copy_scales_to_tmem_lowering_rule(*args, **kwargs):
return _async_copy_to_tmem_lowering_rule(
tcgen05.async_copy_scales_smem_to_tmem, *args, **kwargs
)
@lowering.register_lowering_rule(
async_copy_sparse_metadata_to_tmem_p, mgpu.LoweringSemantics.Lane
)
@lowering.register_lowering_rule(
async_copy_sparse_metadata_to_tmem_p,
mgpu.LoweringSemantics.Lane,
gpu_core.PrimitiveSemantics.Warp,
)
def _async_copy_sparse_metadata_to_tmem_lowering_rule(*args, **kwargs):
return _async_copy_to_tmem_lowering_rule(
tcgen05.async_copy_sparse_metadata_smem_to_tmem, *args, **kwargs
)
semaphore_signal_parallel_p = jax_core.Primitive('semaphore_signal_parallel')
semaphore_signal_parallel_p.multiple_results = True
@dataclasses.dataclass(frozen=True)
| RefType |
python | sqlalchemy__sqlalchemy | lib/sqlalchemy/orm/context.py | {
"start": 23917,
"end": 24577
} | class ____(_DMLReturningColFilter):
"""an adapter used for the DML RETURNING case specifically
for ORM bulk insert (or any hypothetical DML that is splitting out a class
hierarchy among multiple DML statements....ORM bulk insert is the only
example right now)
its main job is to limit the columns in a RETURNING to only a specific
mapped table in a hierarchy.
"""
def adapt_check_present(self, col):
mapper = self.mapper
prop = mapper._columntoproperty.get(col, None)
if prop is None:
return None
return mapper.local_table.c.corresponding_column(col)
| _DMLBulkInsertReturningColFilter |
python | django__django | tests/delete/models.py | {
"start": 5697,
"end": 5729
} | class ____(Parent):
pass
| Child |
python | tornadoweb__tornado | tornado/locks.py | {
"start": 14127,
"end": 14840
} | class ____(Semaphore):
"""A semaphore that prevents release() being called too many times.
If `.release` would increment the semaphore's value past the initial
value, it raises `ValueError`. Semaphores are mostly used to guard
resources with limited capacity, so a semaphore released too many times
is a sign of a bug.
"""
def __init__(self, value: int = 1) -> None:
super().__init__(value=value)
self._initial_value = value
def release(self) -> None:
"""Increment the counter and wake one waiter."""
if self._value >= self._initial_value:
raise ValueError("Semaphore released too many times")
super().release()
| BoundedSemaphore |
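The same over-release guard exists in the standard library's `threading.BoundedSemaphore`; a quick synchronous sketch of the invariant the docstring describes (releasing past the initial value is rejected):

```python
import threading

sem = threading.BoundedSemaphore(1)
sem.acquire()
sem.release()          # back to the initial value: fine

try:
    sem.release()      # would exceed the initial value
    over_released = True
except ValueError:
    over_released = False

print(over_released)  # False: the extra release was rejected
```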
python | fsspec__filesystem_spec | fsspec/implementations/http.py | {
"start": 844,
"end": 19177
} | class ____(AsyncFileSystem):
"""
Simple File-System for fetching data via HTTP(S)
``ls()`` is implemented by loading the parent page and doing a regex
match on the result. If simple_link=True, anything of the form
"http(s)://server.com/stuff?thing=other"; otherwise only links within
HTML href tags will be used.
"""
protocol = ("http", "https")
sep = "/"
def __init__(
self,
simple_links=True,
block_size=None,
same_scheme=True,
size_policy=None,
cache_type="bytes",
cache_options=None,
asynchronous=False,
loop=None,
client_kwargs=None,
get_client=get_client,
encoded=False,
**storage_options,
):
"""
NB: if this is called async, you must await set_client
Parameters
----------
block_size: int
Blocks to read bytes; if 0, will default to raw requests file-like
objects instead of HTTPFile instances
simple_links: bool
If True, will consider both HTML <a> tags and anything that looks
like a URL; if False, will consider only the former.
same_scheme: True
When doing ls/glob, if this is True, only consider paths that have
http/https matching the input URLs.
size_policy: this argument is deprecated
client_kwargs: dict
Passed to aiohttp.ClientSession, see
https://docs.aiohttp.org/en/stable/client_reference.html
For example, ``{'auth': aiohttp.BasicAuth('user', 'pass')}``
get_client: Callable[..., aiohttp.ClientSession]
A callable, which takes keyword arguments and constructs
an aiohttp.ClientSession. Its state will be managed by
the HTTPFileSystem class.
storage_options: key-value
Any other parameters passed on to requests
cache_type, cache_options: defaults used in open()
"""
super().__init__(self, asynchronous=asynchronous, loop=loop, **storage_options)
self.block_size = block_size if block_size is not None else DEFAULT_BLOCK_SIZE
self.simple_links = simple_links
self.same_schema = same_scheme
self.cache_type = cache_type
self.cache_options = cache_options
self.client_kwargs = client_kwargs or {}
self.get_client = get_client
self.encoded = encoded
self.kwargs = storage_options
self._session = None
# Clean caching-related parameters from `storage_options`
# before propagating them as `request_options` through `self.kwargs`.
# TODO: Maybe rename `self.kwargs` to `self.request_options` to make
# it clearer.
request_options = copy(storage_options)
self.use_listings_cache = request_options.pop("use_listings_cache", False)
request_options.pop("listings_expiry_time", None)
request_options.pop("max_paths", None)
request_options.pop("skip_instance_cache", None)
self.kwargs = request_options
@property
def fsid(self):
return "http"
def encode_url(self, url):
return yarl.URL(url, encoded=self.encoded)
@staticmethod
def close_session(loop, session):
if loop is not None and loop.is_running():
try:
sync(loop, session.close, timeout=0.1)
return
except (TimeoutError, FSTimeoutError, NotImplementedError):
pass
connector = getattr(session, "_connector", None)
if connector is not None:
# close after loop is dead
connector._close()
async def set_session(self):
if self._session is None:
self._session = await self.get_client(loop=self.loop, **self.client_kwargs)
if not self.asynchronous:
weakref.finalize(self, self.close_session, self.loop, self._session)
return self._session
@classmethod
def _strip_protocol(cls, path):
"""For HTTP, we always want to keep the full URL"""
return path
@classmethod
def _parent(cls, path):
# override, since _strip_protocol is different for URLs
par = super()._parent(path)
if len(par) > 7: # "http://..."
return par
return ""
async def _ls_real(self, url, detail=True, **kwargs):
# ignoring URL-encoded arguments
kw = self.kwargs.copy()
kw.update(kwargs)
logger.debug(url)
session = await self.set_session()
async with session.get(self.encode_url(url), **kw) as r:
self._raise_not_found_for_status(r, url)
if "Content-Type" in r.headers:
mimetype = r.headers["Content-Type"].partition(";")[0]
else:
mimetype = None
if mimetype in ("text/html", None):
try:
text = await r.text(errors="ignore")
if self.simple_links:
links = ex2.findall(text) + [u[2] for u in ex.findall(text)]
else:
links = [u[2] for u in ex.findall(text)]
except UnicodeDecodeError:
links = [] # binary, not HTML
else:
links = []
out = set()
parts = urlparse(url)
for l in links:
if isinstance(l, tuple):
l = l[1]
if l.startswith("/") and len(l) > 1:
# absolute URL on this server
l = f"{parts.scheme}://{parts.netloc}{l}"
if l.startswith("http"):
if self.same_schema and l.startswith(url.rstrip("/") + "/"):
out.add(l)
elif l.replace("https", "http").startswith(
url.replace("https", "http").rstrip("/") + "/"
):
# allowed to cross http <-> https
out.add(l)
else:
if l not in ["..", "../"]:
# Ignore FTP-like "parent"
out.add("/".join([url.rstrip("/"), l.lstrip("/")]))
if not out and url.endswith("/"):
out = await self._ls_real(url.rstrip("/"), detail=False)
if detail:
return [
{
"name": u,
"size": None,
"type": "directory" if u.endswith("/") else "file",
}
for u in out
]
else:
return sorted(out)
async def _ls(self, url, detail=True, **kwargs):
if self.use_listings_cache and url in self.dircache:
out = self.dircache[url]
else:
out = await self._ls_real(url, detail=detail, **kwargs)
self.dircache[url] = out
return out
ls = sync_wrapper(_ls)
def _raise_not_found_for_status(self, response, url):
"""
Raises FileNotFoundError for 404s, otherwise uses raise_for_status.
"""
if response.status == 404:
raise FileNotFoundError(url)
response.raise_for_status()
async def _cat_file(self, url, start=None, end=None, **kwargs):
kw = self.kwargs.copy()
kw.update(kwargs)
logger.debug(url)
if start is not None or end is not None:
if start == end:
return b""
headers = kw.pop("headers", {}).copy()
headers["Range"] = await self._process_limits(url, start, end)
kw["headers"] = headers
session = await self.set_session()
async with session.get(self.encode_url(url), **kw) as r:
out = await r.read()
self._raise_not_found_for_status(r, url)
return out
async def _get_file(
self, rpath, lpath, chunk_size=5 * 2**20, callback=DEFAULT_CALLBACK, **kwargs
):
kw = self.kwargs.copy()
kw.update(kwargs)
logger.debug(rpath)
session = await self.set_session()
async with session.get(self.encode_url(rpath), **kw) as r:
try:
size = int(r.headers["content-length"])
except (ValueError, KeyError):
size = None
callback.set_size(size)
self._raise_not_found_for_status(r, rpath)
if isfilelike(lpath):
outfile = lpath
else:
outfile = open(lpath, "wb") # noqa: ASYNC230
try:
chunk = True
while chunk:
chunk = await r.content.read(chunk_size)
outfile.write(chunk)
callback.relative_update(len(chunk))
finally:
if not isfilelike(lpath):
outfile.close()
async def _put_file(
self,
lpath,
rpath,
chunk_size=5 * 2**20,
callback=DEFAULT_CALLBACK,
method="post",
mode="overwrite",
**kwargs,
):
if mode != "overwrite":
raise NotImplementedError("Exclusive write")
async def gen_chunks():
# Support passing arbitrary file-like objects
# and use them instead of streams.
if isinstance(lpath, io.IOBase):
context = nullcontext(lpath)
use_seek = False # might not support seeking
else:
context = open(lpath, "rb") # noqa: ASYNC230
use_seek = True
with context as f:
if use_seek:
callback.set_size(f.seek(0, 2))
f.seek(0)
else:
callback.set_size(getattr(f, "size", None))
chunk = f.read(chunk_size)
while chunk:
yield chunk
callback.relative_update(len(chunk))
chunk = f.read(chunk_size)
kw = self.kwargs.copy()
kw.update(kwargs)
session = await self.set_session()
method = method.lower()
if method not in ("post", "put"):
raise ValueError(
f"method has to be either 'post' or 'put', not: {method!r}"
)
meth = getattr(session, method)
async with meth(self.encode_url(rpath), data=gen_chunks(), **kw) as resp:
self._raise_not_found_for_status(resp, rpath)
async def _exists(self, path, strict=False, **kwargs):
kw = self.kwargs.copy()
kw.update(kwargs)
try:
logger.debug(path)
session = await self.set_session()
r = await session.get(self.encode_url(path), **kw)
async with r:
if strict:
self._raise_not_found_for_status(r, path)
return r.status < 400
except FileNotFoundError:
return False
except aiohttp.ClientError:
if strict:
raise
return False
async def _isfile(self, path, **kwargs):
return await self._exists(path, **kwargs)
def _open(
self,
path,
mode="rb",
block_size=None,
autocommit=None, # XXX: This differs from the base class.
cache_type=None,
cache_options=None,
size=None,
**kwargs,
):
"""Make a file-like object
Parameters
----------
path: str
Full URL with protocol
mode: string
must be "rb"
block_size: int or None
Bytes to download in one request; use instance value if None. If
zero, will return a streaming Requests file-like instance.
kwargs: key-value
Any other parameters, passed to requests calls
"""
if mode != "rb":
raise NotImplementedError
block_size = block_size if block_size is not None else self.block_size
kw = self.kwargs.copy()
kw["asynchronous"] = self.asynchronous
kw.update(kwargs)
info = {}
size = size or info.update(self.info(path, **kwargs)) or info["size"]
session = sync(self.loop, self.set_session)
if block_size and size and info.get("partial", True):
return HTTPFile(
self,
path,
session=session,
block_size=block_size,
mode=mode,
size=size,
cache_type=cache_type or self.cache_type,
cache_options=cache_options or self.cache_options,
loop=self.loop,
**kw,
)
else:
return HTTPStreamFile(
self,
path,
mode=mode,
loop=self.loop,
session=session,
**kw,
)
async def open_async(self, path, mode="rb", size=None, **kwargs):
session = await self.set_session()
if size is None:
try:
size = (await self._info(path, **kwargs))["size"]
except FileNotFoundError:
pass
return AsyncStreamFile(
self,
path,
loop=self.loop,
session=session,
size=size,
**kwargs,
)
def ukey(self, url):
"""Unique identifier; assume HTTP files are static, unchanging"""
return tokenize(url, self.kwargs, self.protocol)
async def _info(self, url, **kwargs):
"""Get info of URL
Tries to access location via HEAD, and then GET methods, but does
not fetch the data.
It is possible that the server does not supply any size information, in
which case size will be given as None (and certain operations on the
corresponding file will not work).
"""
info = {}
session = await self.set_session()
for policy in ["head", "get"]:
try:
info.update(
await _file_info(
self.encode_url(url),
size_policy=policy,
session=session,
**self.kwargs,
**kwargs,
)
)
if info.get("size") is not None:
break
except Exception as exc:
if policy == "get":
# If get failed, then raise a FileNotFoundError
raise FileNotFoundError(url) from exc
logger.debug("", exc_info=exc)
return {"name": url, "size": None, **info, "type": "file"}
async def _glob(self, path, maxdepth=None, **kwargs):
"""
Find files by glob-matching.
This implementation is identical to the one in AbstractFileSystem,
but "?" is not considered as a character for globbing, because it is
so common in URLs, often identifying the "query" part.
"""
if maxdepth is not None and maxdepth < 1:
raise ValueError("maxdepth must be at least 1")
import re
ends_with_slash = path.endswith("/") # _strip_protocol strips trailing slash
path = self._strip_protocol(path)
append_slash_to_dirname = ends_with_slash or path.endswith(("/**", "/*"))
idx_star = path.find("*") if path.find("*") >= 0 else len(path)
idx_brace = path.find("[") if path.find("[") >= 0 else len(path)
min_idx = min(idx_star, idx_brace)
detail = kwargs.pop("detail", False)
if not has_magic(path):
if await self._exists(path, **kwargs):
if not detail:
return [path]
else:
return {path: await self._info(path, **kwargs)}
else:
if not detail:
return [] # glob of non-existent returns empty
else:
return {}
elif "/" in path[:min_idx]:
min_idx = path[:min_idx].rindex("/")
root = path[: min_idx + 1]
depth = path[min_idx + 1 :].count("/") + 1
else:
root = ""
depth = path[min_idx + 1 :].count("/") + 1
if "**" in path:
if maxdepth is not None:
idx_double_stars = path.find("**")
depth_double_stars = path[idx_double_stars:].count("/") + 1
depth = depth - depth_double_stars + maxdepth
else:
depth = None
allpaths = await self._find(
root, maxdepth=depth, withdirs=True, detail=True, **kwargs
)
pattern = glob_translate(path + ("/" if ends_with_slash else ""))
pattern = re.compile(pattern)
out = {
(
p.rstrip("/")
if not append_slash_to_dirname
and info["type"] == "directory"
and p.endswith("/")
else p
): info
for p, info in sorted(allpaths.items())
if pattern.match(p.rstrip("/"))
}
if detail:
return out
else:
return list(out)
async def _isdir(self, path):
# override, since all URLs are (also) files
try:
return bool(await self._ls(path))
except (FileNotFoundError, ValueError):
return False
async def _pipe_file(self, path, value, mode="overwrite", **kwargs):
"""
Write bytes to a remote file over HTTP.
Parameters
----------
path : str
Target URL where the data should be written
value : bytes
Data to be written
mode : str
How to write to the file - 'overwrite' or 'append'
**kwargs : dict
Additional parameters to pass to the HTTP request
"""
url = self._strip_protocol(path)
headers = kwargs.pop("headers", {})
headers["Content-Length"] = str(len(value))
session = await self.set_session()
async with session.put(url, data=value, headers=headers, **kwargs) as r:
r.raise_for_status()
| HTTPFileSystem |
python | apache__airflow | shared/secrets_masker/tests/secrets_masker/test_secrets_masker.py | {
"start": 23589,
"end": 26118
} | class ____:
def test_kubernetes_env_var_redaction(self):
class MockV1EnvVar:
def __init__(self, name, value):
self.name = name
self.value = value
def to_dict(self):
return {"name": self.name, "value": self.value}
secret_env_var = MockV1EnvVar("password", "secret_password")
normal_env_var = MockV1EnvVar("app_name", "my_app")
secrets_masker = SecretsMasker()
configure_secrets_masker_for_test(secrets_masker)
with patch(
"airflow_shared.secrets_masker.secrets_masker._secrets_masker", return_value=secrets_masker
):
with patch(
"airflow_shared.secrets_masker.secrets_masker._is_v1_env_var",
side_effect=lambda a: isinstance(a, MockV1EnvVar),
):
redacted_secret = redact(secret_env_var)
redacted_normal = redact(normal_env_var)
assert redacted_secret["value"] == "***"
assert redacted_normal["value"] == "my_app"
def test_deeply_nested_mixed_structures(self):
nested_data = {
"level1": {
"normal_key": "normal_value",
"password": "secret_pass",
"level2": [
{"api_key": "secret_key", "user": "normal_user"},
("token", "secret_token"),
{"nested_list": ["normal", "password=secret"]},
],
}
}
secrets_masker = SecretsMasker()
configure_secrets_masker_for_test(secrets_masker)
secrets_masker.add_mask("secret_token")
secrets_masker.add_mask("password=secret")
with patch(
"airflow_shared.secrets_masker.secrets_masker._secrets_masker", return_value=secrets_masker
):
redacted_data = redact(nested_data)
assert redacted_data["level1"]["normal_key"] == "normal_value"
assert redacted_data["level1"]["password"] == "***"
assert redacted_data["level1"]["level2"][0]["api_key"] == "***"
assert redacted_data["level1"]["level2"][0]["user"] == "normal_user"
assert redacted_data["level1"]["level2"][1][1] == "***"
nested_list_str = str(redacted_data["level1"]["level2"][2]["nested_list"])
assert "password=secret" not in nested_list_str
assert "password=***" in nested_list_str or "***" in nested_list_str
| TestContainerTypesRedaction |
python | kamyu104__LeetCode-Solutions | Python/generate-parentheses.py | {
"start": 1049,
"end": 1738
} | class ____(object):
def generateParenthesis(self, n):
"""
:type n: int
:rtype: List[str]
"""
def generateParenthesisRecu(left, right, curr, result):
if left == 0 and right == 0:
result.append("".join(curr))
if left > 0:
curr.append('(')
generateParenthesisRecu(left-1, right, curr, result)
curr.pop()
if left < right:
curr.append(')')
generateParenthesisRecu(left, right-1, curr, result)
curr.pop()
result = []
generateParenthesisRecu(n, n, [], result)
return result
| Solution2 |
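A quick check of the backtracking above: for n pairs of parentheses the number of valid strings is the n-th Catalan number, so n=3 should yield 5 strings. A standalone re-implementation for illustration:

```python
def generate_parenthesis(n):
    """Backtracking over how many '(' / ')' may still be placed."""
    result = []

    def recur(left, right, curr):
        if left == 0 and right == 0:
            result.append("".join(curr))
        if left > 0:                     # can still open
            curr.append("(")
            recur(left - 1, right, curr)
            curr.pop()
        if left < right:                 # an open paren awaits closing
            curr.append(")")
            recur(left, right - 1, curr)
            curr.pop()

    recur(n, n, [])
    return result

print(len(generate_parenthesis(3)))  # 5, i.e. Catalan(3)
```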
python | keon__algorithms | tests/test_dp.py | {
"start": 1444,
"end": 1718
} | class ____(unittest.TestCase):
def test_combination_sum_topdown(self):
self.assertEqual(combination_sum_topdown([1, 2, 3], 4), 7)
def test_combination_sum_bottom_up(self):
self.assertEqual(combination_sum_bottom_up([1, 2, 3], 4), 7)
| TestCombinationSum |
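The expected value 7 in the assertions above counts ordered ways to write 4 as a sum of elements from {1, 2, 3}. A minimal bottom-up sketch (a hypothetical standalone reimplementation, not the repo's actual `combination_sum_bottom_up`):

```python
def combination_sum_bottom_up(nums, target):
    # dp[t] = number of ordered sequences drawn from nums summing to t
    dp = [0] * (target + 1)
    dp[0] = 1  # one way to reach 0: the empty sequence
    for t in range(1, target + 1):
        for x in nums:
            if x <= t:
                dp[t] += dp[t - x]
    return dp[target]

print(combination_sum_bottom_up([1, 2, 3], 4))  # → 7
```

For target 4: dp = [1, 1, 2, 4, 7], matching the tests.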
python | nedbat__coveragepy | tests/test_context.py | {
"start": 613,
"end": 5092
} | class ____(CoverageTest):
"""Tests of the static context."""
def test_no_context(self) -> None:
self.make_file("main.py", "a = 1")
cov = coverage.Coverage()
self.start_import_stop(cov, "main")
data = cov.get_data()
assert_count_equal(data.measured_contexts(), [""])
def test_static_context(self) -> None:
self.make_file("main.py", "a = 1")
cov = coverage.Coverage(context="gooey")
self.start_import_stop(cov, "main")
data = cov.get_data()
assert_count_equal(data.measured_contexts(), ["gooey"])
SOURCE = """\
a = 1
if a > 2:
a = 3
assert a == 1
"""
LINES = [1, 2, 4]
def run_red_blue(self, **options: TCovKwargs) -> tuple[CoverageData, CoverageData]:
"""Run red.py and blue.py, and return their CoverageData objects."""
self.make_file("red.py", self.SOURCE)
red_cov = coverage.Coverage(context="red", data_suffix="r", source=["."], **options)
self.start_import_stop(red_cov, "red")
red_cov.save()
red_data = red_cov.get_data()
self.make_file("blue.py", self.SOURCE)
blue_cov = coverage.Coverage(context="blue", data_suffix="b", source=["."], **options)
self.start_import_stop(blue_cov, "blue")
blue_cov.save()
blue_data = blue_cov.get_data()
return red_data, blue_data
def test_combining_line_contexts(self) -> None:
red_data, blue_data = self.run_red_blue()
for datas in [[red_data, blue_data], [blue_data, red_data]]:
combined = CoverageData(suffix="combined")
for data in datas:
combined.update(data)
assert combined.measured_contexts() == {"red", "blue"}
full_names = {os.path.basename(f): f for f in combined.measured_files()}
assert_count_equal(full_names, ["red.py", "blue.py"])
fred = full_names["red.py"]
fblue = full_names["blue.py"]
def assert_combined_lines(filename: str, context: str, lines: list[TLineNo]) -> None:
# pylint: disable=cell-var-from-loop
combined.set_query_context(context)
assert combined.lines(filename) == lines
assert_combined_lines(fred, "red", self.LINES)
assert_combined_lines(fred, "blue", [])
assert_combined_lines(fblue, "red", [])
assert_combined_lines(fblue, "blue", self.LINES)
def test_combining_arc_contexts(self) -> None:
red_data, blue_data = self.run_red_blue(branch=True)
# The exact arc data changes depending on the core and the version.
# Extract the red arc data for comparisons below.
arc_data = red_data.arcs(
next(fname for fname in red_data.measured_files() if "red.py" in fname)
)
assert arc_data is not None
for datas in [[red_data, blue_data], [blue_data, red_data]]:
combined = CoverageData(suffix="combined")
for data in datas:
combined.update(data)
assert combined.measured_contexts() == {"red", "blue"}
full_names = {os.path.basename(f): f for f in combined.measured_files()}
assert_count_equal(full_names, ["red.py", "blue.py"])
fred = full_names["red.py"]
fblue = full_names["blue.py"]
def assert_combined_lines(filename: str, context: str, lines: list[TLineNo]) -> None:
# pylint: disable=cell-var-from-loop
combined.set_query_context(context)
assert combined.lines(filename) == lines
assert_combined_lines(fred, "red", self.LINES)
assert_combined_lines(fred, "blue", [])
assert_combined_lines(fblue, "red", [])
assert_combined_lines(fblue, "blue", self.LINES)
def assert_combined_arcs(filename: str, context: str, lines: list[TArc]) -> None:
# pylint: disable=cell-var-from-loop
combined.set_query_context(context)
assert combined.arcs(filename) == lines
assert_combined_arcs(fred, "red", arc_data)
assert_combined_arcs(fred, "blue", [])
assert_combined_arcs(fblue, "red", [])
assert_combined_arcs(fblue, "blue", arc_data)
@pytest.mark.skipif(not testenv.DYN_CONTEXTS, reason="No dynamic contexts with this core")
| StaticContextTest |
python | rapidsai__cudf | python/cudf/cudf/core/series.py | {
"start": 123173,
"end": 123587
} | class ____:
"""
Base accessor class for Series values.
"""
def __init__(self, series: Series):
self.series = series
def _return_result_like_self(self, column: ColumnBase) -> Series:
"""Return the method result like self.series"""
data = ColumnAccessor({self.series.name: column}, verify=False)
return self.series._from_data_like_self(data)
| BaseDatelikeProperties |
python | jazzband__django-model-utils | tests/test_managers/test_inheritance_manager.py | {
"start": 646,
"end": 8431
} | class ____(TestCase):
def setUp(self) -> None:
self.child1 = InheritanceManagerTestChild1.objects.create()
self.child2 = InheritanceManagerTestChild2.objects.create()
self.grandchild1 = InheritanceManagerTestGrandChild1.objects.create()
self.grandchild1_2 = \
InheritanceManagerTestGrandChild1_2.objects.create()
def get_manager(self) -> InheritanceManager[InheritanceManagerTestParent]:
return InheritanceManagerTestParent.objects
def test_normal(self) -> None:
children = {
InheritanceManagerTestParent(pk=self.child1.pk),
InheritanceManagerTestParent(pk=self.child2.pk),
InheritanceManagerTestParent(pk=self.grandchild1.pk),
InheritanceManagerTestParent(pk=self.grandchild1_2.pk),
}
self.assertEqual(set(self.get_manager().all()), children)
def test_select_all_subclasses(self) -> None:
children = {self.child1, self.child2}
children.add(self.grandchild1)
children.add(self.grandchild1_2)
self.assertEqual(
set(self.get_manager().select_subclasses()), children)
def test_select_subclasses_invalid_relation(self) -> None:
"""
        If an invalid relation string is provided, we can give the user
        a list of valid relations, rather than just letting select_related()
        raise an AttributeError further in.
"""
regex = '^.+? is not in the discovered subclasses, tried:.+$'
with self.assertRaisesRegex(ValueError, regex):
self.get_manager().select_subclasses('user')
def test_select_specific_subclasses(self) -> None:
children = {
self.child1,
InheritanceManagerTestParent(pk=self.child2.pk),
InheritanceManagerTestChild1(pk=self.grandchild1.pk),
InheritanceManagerTestChild1(pk=self.grandchild1_2.pk),
}
self.assertEqual(
set(
self.get_manager().select_subclasses(
"inheritancemanagertestchild1")
),
children,
)
def test_select_specific_grandchildren(self) -> None:
children = {
InheritanceManagerTestParent(pk=self.child1.pk),
InheritanceManagerTestParent(pk=self.child2.pk),
self.grandchild1,
InheritanceManagerTestParent(pk=self.grandchild1_2.pk),
}
self.assertEqual(
set(
self.get_manager().select_subclasses(
"inheritancemanagertestchild1__inheritancemanagertestgrandchild1"
)
),
children,
)
def test_children_and_grandchildren(self) -> None:
children = {
self.child1,
InheritanceManagerTestParent(pk=self.child2.pk),
self.grandchild1,
InheritanceManagerTestChild1(pk=self.grandchild1_2.pk),
}
self.assertEqual(
set(
self.get_manager().select_subclasses(
"inheritancemanagertestchild1",
"inheritancemanagertestchild1__inheritancemanagertestgrandchild1"
)
),
children,
)
def test_get_subclass(self) -> None:
self.assertEqual(
self.get_manager().get_subclass(pk=self.child1.pk),
self.child1)
def test_get_subclass_on_queryset(self) -> None:
self.assertEqual(
self.get_manager().all().get_subclass(pk=self.child1.pk),
self.child1)
def test_prior_select_related(self) -> None:
with self.assertNumQueries(1):
obj = self.get_manager().select_related(
"inheritancemanagertestchild1").select_subclasses(
"inheritancemanagertestchild2").get(pk=self.child1.pk)
obj.inheritancemanagertestchild1
def test_manually_specifying_parent_fk_including_grandchildren(self) -> None:
"""
given a Model which inherits from another Model, but also declares
the OneToOne link manually using `related_name` and `parent_link`,
ensure that the relation names and subclasses are obtained correctly.
"""
child3 = InheritanceManagerTestChild3.objects.create()
qs = InheritanceManagerTestParent.objects.all()
results = qs.select_subclasses().order_by('pk')
expected_objs = [
self.child1,
self.child2,
self.grandchild1,
self.grandchild1_2,
child3
]
self.assertEqual(list(results), expected_objs)
expected_related_names = [
'inheritancemanagertestchild1__inheritancemanagertestgrandchild1',
'inheritancemanagertestchild1__inheritancemanagertestgrandchild1_2',
'inheritancemanagertestchild1',
'inheritancemanagertestchild2',
'manual_onetoone', # this was set via parent_link & related_name
'inheritancemanagertestchild3_1',
'child4_onetoone',
]
self.assertEqual(set(results.subclasses),
set(expected_related_names))
def test_manually_specifying_parent_fk_single_subclass(self) -> None:
"""
Using a string related_name when the relation is manually defined
instead of implicit should still work in the same way.
"""
related_name = 'manual_onetoone'
child3 = InheritanceManagerTestChild3.objects.create()
qs = InheritanceManagerTestParent.objects.all()
results = qs.select_subclasses(related_name).order_by('pk')
expected_objs = [InheritanceManagerTestParent(pk=self.child1.pk),
InheritanceManagerTestParent(pk=self.child2.pk),
InheritanceManagerTestParent(pk=self.grandchild1.pk),
InheritanceManagerTestParent(pk=self.grandchild1_2.pk),
child3]
self.assertEqual(list(results), expected_objs)
expected_related_names = [related_name]
self.assertEqual(set(results.subclasses),
set(expected_related_names))
def test_filter_on_values_queryset(self) -> None:
queryset = InheritanceManagerTestChild1.objects.values('id').filter(pk=self.child1.pk)
self.assertEqual(list(queryset), [{'id': self.child1.pk}])
def test_values_list_on_select_subclasses(self) -> None:
"""
Using `select_subclasses` in conjunction with `values_list()` raised an
        exception in `_get_sub_obj_recurse()` because the result of `values_list()`
        is either a `tuple`, or primitive objects if `flat=True` is specified,
        and no type checking was done prior to fetching child nodes.
"""
# Querysets are cast to lists to force immediate evaluation.
# No exceptions must be thrown.
# No argument to select_subclasses
objs_1 = list(
self.get_manager()
.select_subclasses()
.values_list('id')
)
# String argument to select_subclasses
objs_2 = list(
self.get_manager()
.select_subclasses(
"inheritancemanagertestchild2"
)
.values_list('id')
)
# String argument to select_subclasses
objs_3 = list(
self.get_manager()
.select_subclasses(
InheritanceManagerTestChild2
).values_list('id')
)
assert all((
isinstance(objs_1, list),
isinstance(objs_2, list),
isinstance(objs_3, list),
))
assert objs_1 == objs_2 == objs_3
| InheritanceManagerTests |
python | has2k1__plotnine | plotnine/positions/position_stack.py | {
"start": 213,
"end": 4027
} | class ____(position):
"""
Stack plotted objects on top of each other
The objects to stack are those that have
an overlapping x range.
Parameters
----------
vjust :
By what fraction to avoid overlapping the lower object,
where `0` gives a complete overlap and `1` gives no overlap.
reverse :
Reverse the order of the stacked groups if true.
"""
fill = False
def __init__(self, vjust: float = 1, reverse: bool = False):
self.params = {"vjust": vjust, "reverse": reverse}
def setup_params(self, data):
"""
Verify, modify & return a copy of the params.
"""
# Variable for which to do the stacking
if "ymax" in data:
if any((data["ymin"] != 0) & (data["ymax"] != 0)):
warn(
"Stacking not well defined when not anchored on the axis.",
PlotnineWarning,
)
var = "ymax"
elif "y" in data:
var = "y"
else:
warn(
"Stacking requires either ymin & ymax or y "
"aesthetics. Maybe you want position = 'identity'?",
PlotnineWarning,
)
var = None
params = self.params.copy()
params["var"] = var
params["fill"] = self.fill
return params
def setup_data(self, data, params):
if not params["var"]:
return data
if params["var"] == "y":
data["ymax"] = data["y"]
elif params["var"] == "ymax":
bool_idx = data["ymax"] == 0
data.loc[bool_idx, "ymax"] = data.loc[bool_idx, "ymin"]
data = remove_missing(
data, vars=("x", "xmin", "xmax", "y"), name="position_stack"
)
return data
@classmethod
def compute_panel(cls, data, scales, params):
if not params["var"]:
return data
# Positioning happens after scale has transformed the data,
# and stacking only works well for linear data.
# If the scale(transformation) is not linear, we undo it,
# do the "stacking" and redo the transformation.
from ..scales.scale_continuous import scale_continuous
if isinstance(scales.y, scale_continuous):
undo_transform = (
not scales.y.is_linear_scale and scales.y.domain_is_numerical
)
else:
undo_transform = False
if undo_transform:
data = cls.transform_position(data, trans_y=scales.y.inverse)
negative = data["ymax"] < 0
neg = data.loc[negative]
pos = data.loc[~negative]
if len(neg):
neg = cls.collide(neg, params=params)
if len(pos):
pos = cls.collide(pos, params=params)
data = pd.concat([neg, pos], axis=0, ignore_index=True, sort=True)
if undo_transform:
data = cls.transform_position(data, trans_y=scales.y.transform)
return data
@staticmethod
def strategy(data, params):
"""
Stack overlapping intervals.
Assumes that each set has the same horizontal position
"""
vjust = params["vjust"]
y = data["y"].copy()
y[np.isnan(y)] = 0
heights = np.append(0, y.cumsum())
if params["fill"]:
heights = heights / np.abs(heights[-1])
data["ymin"] = np.min([heights[:-1], heights[1:]], axis=0)
data["ymax"] = np.max([heights[:-1], heights[1:]], axis=0)
# less intuitive than (ymin + vjust(ymax-ymin)), but
# this way avoids subtracting numbers of potentially
# similar precision
data["y"] = (1 - vjust) * data["ymin"] + vjust * data["ymax"]
return data
| position_stack |
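The interval arithmetic in `strategy` above can be checked in isolation; a toy sketch with assumed heights 1, 2, 3 and `vjust=1`, using only the standard library instead of pandas/numpy:

```python
from itertools import accumulate

# Stack three bars of heights 1, 2, 3: cumulative bottoms/tops,
# mirroring position_stack.strategy (without fill normalization).
y = [1.0, 2.0, 3.0]
heights = [0.0] + list(accumulate(y))                     # [0.0, 1.0, 3.0, 6.0]
ymin = [min(a, b) for a, b in zip(heights, heights[1:])]  # [0.0, 1.0, 3.0]
ymax = [max(a, b) for a, b in zip(heights, heights[1:])]  # [1.0, 3.0, 6.0]
vjust = 1.0
y_pos = [(1 - vjust) * lo + vjust * hi for lo, hi in zip(ymin, ymax)]
print(y_pos)  # → [1.0, 3.0, 6.0]
```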
python | apache__airflow | providers/google/tests/unit/google/cloud/hooks/test_cloud_build.py | {
"start": 2378,
"end": 12832
} | class ____:
def setup_method(self):
with mock.patch(
"airflow.providers.google.common.hooks.base_google.GoogleBaseHook.__init__",
new=mock_base_gcp_hook_no_default_project_id,
):
self.hook = CloudBuildHook(gcp_conn_id="test")
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_credentials")
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildClient")
def test_cloud_build_service_client_creation(self, mock_client, mock_get_creds):
result = self.hook.get_conn()
mock_client.assert_called_once_with(
credentials=mock_get_creds.return_value,
client_info=CLIENT_INFO,
client_options=None,
)
assert mock_client.return_value == result
assert self.hook._client["global"] == result
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_cancel_build(self, get_conn):
self.hook.cancel_build(id_=BUILD_ID, project_id=PROJECT_ID)
get_conn.return_value.cancel_build.assert_called_once_with(
request={"project_id": PROJECT_ID, "id": BUILD_ID}, retry=DEFAULT, timeout=None, metadata=()
)
@mock.patch(
"airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook._get_build_id_from_operation"
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_create_build_without_waiting_for_result(self, get_conn, mock_get_id_from_operation):
get_conn.return_value.run_build_trigger.return_value = mock.MagicMock()
mock_get_id_from_operation.return_value = BUILD_ID
self.hook.create_build_without_waiting_for_result(
build=BUILD, project_id=PROJECT_ID, location=LOCATION
)
mock_operation = get_conn.return_value.create_build
mock_operation.assert_called_once_with(
request={"parent": PARENT, "project_id": PROJECT_ID, "build": BUILD},
retry=DEFAULT,
timeout=None,
metadata=(),
)
mock_get_id_from_operation.assert_called_once_with(mock_operation())
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_create_build_trigger(self, get_conn):
self.hook.create_build_trigger(trigger=BUILD_TRIGGER, project_id=PROJECT_ID)
get_conn.return_value.create_build_trigger.assert_called_once_with(
request={"project_id": PROJECT_ID, "trigger": BUILD_TRIGGER},
retry=DEFAULT,
timeout=None,
metadata=(),
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_delete_build_trigger(self, get_conn):
self.hook.delete_build_trigger(trigger_id=TRIGGER_ID, project_id=PROJECT_ID)
get_conn.return_value.delete_build_trigger.assert_called_once_with(
request={"project_id": PROJECT_ID, "trigger_id": TRIGGER_ID},
retry=DEFAULT,
timeout=None,
metadata=(),
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_get_build(self, get_conn):
self.hook.get_build(id_=BUILD_ID, project_id=PROJECT_ID)
get_conn.return_value.get_build.assert_called_once_with(
request={"project_id": PROJECT_ID, "id": BUILD_ID}, retry=DEFAULT, timeout=None, metadata=()
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_get_build_trigger(self, get_conn):
self.hook.get_build_trigger(trigger_id=TRIGGER_ID, project_id=PROJECT_ID)
get_conn.return_value.get_build_trigger.assert_called_once_with(
request={"project_id": PROJECT_ID, "trigger_id": TRIGGER_ID},
retry=DEFAULT,
timeout=None,
metadata=(),
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_list_build_triggers(self, get_conn):
self.hook.list_build_triggers(project_id=PROJECT_ID, location=LOCATION)
get_conn.return_value.list_build_triggers.assert_called_once_with(
request={"parent": PARENT, "project_id": PROJECT_ID, "page_size": None, "page_token": None},
retry=DEFAULT,
timeout=None,
metadata=(),
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_list_builds(self, get_conn):
self.hook.list_builds(project_id=PROJECT_ID, location=LOCATION)
get_conn.return_value.list_builds.assert_called_once_with(
request={
"parent": PARENT,
"project_id": PROJECT_ID,
"page_size": None,
"page_token": None,
"filter": None,
},
retry=DEFAULT,
timeout=None,
metadata=(),
)
@mock.patch(
"airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook._get_build_id_from_operation"
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.TIME_TO_SLEEP_IN_SECONDS")
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_retry_build_with_wait(self, get_conn, wait_time, mock_get_id_from_operation):
get_conn.return_value.run_build_trigger.return_value = mock.MagicMock()
mock_get_id_from_operation.return_value = BUILD_ID
wait_time.return_value = 0
self.hook.retry_build(id_=BUILD_ID, project_id=PROJECT_ID)
mock_operation = get_conn.return_value.retry_build
mock_operation.assert_called_once_with(
request={"project_id": PROJECT_ID, "id": BUILD_ID}, retry=DEFAULT, timeout=None, metadata=()
)
get_conn.return_value.retry_build.return_value.result.assert_called_once_with(
timeout=None,
polling=None,
retry=None,
)
get_conn.return_value.get_build.assert_called_once_with(
request={"project_id": PROJECT_ID, "id": BUILD_ID}, retry=DEFAULT, timeout=None, metadata=()
)
mock_get_id_from_operation.assert_called_once_with(mock_operation())
@mock.patch(
"airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook._get_build_id_from_operation"
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_retry_build_without_wait(self, get_conn, mock_get_id_from_operation):
get_conn.return_value.run_build_trigger.return_value = mock.MagicMock()
mock_get_id_from_operation.return_value = BUILD_ID
self.hook.retry_build(id_=BUILD_ID, project_id=PROJECT_ID, wait=False)
get_conn.return_value.retry_build.assert_called_once_with(
request={"project_id": PROJECT_ID, "id": BUILD_ID}, retry=DEFAULT, timeout=None, metadata=()
)
get_conn.return_value.get_build.assert_called_once_with(
request={"project_id": PROJECT_ID, "id": BUILD_ID}, retry=DEFAULT, timeout=None, metadata=()
)
@mock.patch(
"airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook._get_build_id_from_operation"
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.TIME_TO_SLEEP_IN_SECONDS")
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_run_build_trigger_with_wait(self, get_conn, wait_time, mock_get_id_from_operation):
get_conn.return_value.run_build_trigger.return_value = mock.MagicMock()
mock_get_id_from_operation.return_value = BUILD_ID
wait_time.return_value = 0
self.hook.run_build_trigger(
trigger_id=TRIGGER_ID, source=REPO_SOURCE["repo_source"], project_id=PROJECT_ID
)
mock_operation = get_conn.return_value.run_build_trigger
mock_operation.assert_called_once_with(
request={
"project_id": PROJECT_ID,
"trigger_id": TRIGGER_ID,
"source": REPO_SOURCE["repo_source"],
},
retry=DEFAULT,
timeout=None,
metadata=(),
)
get_conn.return_value.run_build_trigger.return_value.result.assert_called_once_with(
timeout=None,
polling=None,
retry=None,
)
get_conn.return_value.get_build.assert_called_once_with(
request={"project_id": PROJECT_ID, "id": BUILD_ID}, retry=DEFAULT, timeout=None, metadata=()
)
mock_get_id_from_operation.assert_called_once_with(mock_operation())
@mock.patch(
"airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook._get_build_id_from_operation"
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_run_build_trigger_without_wait(self, get_conn, mock_get_id_from_operation):
get_conn.return_value.run_build_trigger.return_value = mock.MagicMock()
mock_get_id_from_operation.return_value = BUILD_ID
self.hook.run_build_trigger(
trigger_id=TRIGGER_ID, source=REPO_SOURCE["repo_source"], project_id=PROJECT_ID, wait=False
)
get_conn.return_value.run_build_trigger.assert_called_once_with(
request={
"project_id": PROJECT_ID,
"trigger_id": TRIGGER_ID,
"source": REPO_SOURCE["repo_source"],
},
retry=DEFAULT,
timeout=None,
metadata=(),
)
get_conn.return_value.get_build.assert_called_once_with(
request={"project_id": PROJECT_ID, "id": BUILD_ID}, retry=DEFAULT, timeout=None, metadata=()
)
@mock.patch("airflow.providers.google.cloud.hooks.cloud_build.CloudBuildHook.get_conn")
def test_update_build_trigger(self, get_conn):
self.hook.update_build_trigger(trigger_id=TRIGGER_ID, trigger=BUILD_TRIGGER, project_id=PROJECT_ID)
get_conn.return_value.update_build_trigger.assert_called_once_with(
request={"project_id": PROJECT_ID, "trigger_id": TRIGGER_ID, "trigger": BUILD_TRIGGER},
retry=DEFAULT,
timeout=None,
metadata=(),
)
@pytest.mark.db_test
| TestCloudBuildHook |
python | doocs__leetcode | solution/1400-1499/1492.The kth Factor of n/Solution.py | {
"start": 0,
"end": 222
} | class ____:
def kthFactor(self, n: int, k: int) -> int:
for i in range(1, n + 1):
if n % i == 0:
k -= 1
if k == 0:
return i
return -1
| Solution |
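The linear divisor scan above can be restated as a plain function for quick checks (renamed `kth_factor`; an O(√n) divisor-pairing variant exists but is not shown here):

```python
def kth_factor(n, k):
    # Enumerate divisors of n in increasing order; the k-th hit is the answer.
    for i in range(1, n + 1):
        if n % i == 0:
            k -= 1
            if k == 0:
                return i
    return -1  # n has fewer than k factors

print(kth_factor(12, 3))  # → 3  (factors: 1, 2, 3, 4, 6, 12)
```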
python | allegroai__clearml | clearml/automation/hpbandster/bandster.py | {
"start": 955,
"end": 4638
} | class ____(Worker):
def __init__(
self,
*args: Any,
optimizer: "OptimizerBOHB",
base_task_id: str,
queue_name: str,
objective: Objective,
sleep_interval: float = 0,
budget_iteration_scale: float = 1.0,
**kwargs: Any,
    ) -> None:
super(_TrainsBandsterWorker, self).__init__(*args, **kwargs)
self.optimizer = optimizer
self.base_task_id = base_task_id
self.queue_name = queue_name
self.objective = objective
self.sleep_interval = sleep_interval
self.budget_iteration_scale = budget_iteration_scale
self._current_job = None
def compute(self, config: dict, budget: float, **kwargs: Any) -> dict:
"""
Simple example for a compute function
The loss is just a the config + some noise (that decreases with the budget)
For dramatization, the function can sleep for a given interval to emphasizes
the speed ups achievable with parallel workers.
Args:
config: dictionary containing the sampled configurations by the optimizer
budget: (float) amount of time/epochs/etc. the model can use to train.
We assume budget is iteration, as time might not be stable from machine to machine.
Returns:
dictionary with mandatory fields:
'loss' (scalar)
'info' (dict)
"""
self._current_job = self.optimizer.helper_create_job(self.base_task_id, parameter_override=config)
# noinspection PyProtectedMember
self.optimizer._current_jobs.append(self._current_job)
if not self._current_job.launch(self.queue_name):
return dict()
iteration_value = None
is_pending = True
while not self._current_job.is_stopped():
if is_pending and not self._current_job.is_pending():
is_pending = False
# noinspection PyProtectedMember
self.optimizer.budget.jobs.update(
self._current_job.task_id(),
float(self.optimizer._min_iteration_per_job) / self.optimizer._max_iteration_per_job,
)
# noinspection PyProtectedMember
iteration_value = self.optimizer._objective_metric.get_current_raw_objective(self._current_job)
if iteration_value:
# update budget
self.optimizer.budget.iterations.update(self._current_job.task_id(), iteration_value[0])
# check if we exceeded this job budget
if iteration_value[0][0] >= self.budget_iteration_scale * budget:
self._current_job.abort()
break
sleep(self.sleep_interval)
if iteration_value:
# noinspection PyProtectedMember
self.optimizer.budget.jobs.update(
self._current_job.task_id(),
float(iteration_value[0][0]) / self.optimizer._max_iteration_per_job,
)
result = {
            # this is a mandatory field to run hyperband
# remember: HpBandSter always minimizes!
"loss": float(self.objective.get_normalized_objective(self._current_job) * -1.0),
# can be used for any user-defined information - also mandatory
"info": self._current_job.task_id(),
}
print("TrainsBandsterWorker result {}, iteration {}".format(result, iteration_value[0]))
# noinspection PyProtectedMember
self.optimizer._current_jobs.remove(self._current_job)
return result
| _TrainsBandsterWorker |
python | sqlalchemy__sqlalchemy | test/ext/test_mutable.py | {
"start": 3495,
"end": 3736
} | class ____:
@classmethod
def _type_fixture(cls):
return MutableDict
def teardown_test(self):
# clear out mapper events
Mapper.dispatch._clear()
ClassManager.dispatch._clear()
| _MutableDictTestFixture |
python | apache__airflow | airflow-core/src/airflow/timetables/base.py | {
"start": 2306,
"end": 2711
} | class ____(NamedTuple):
"""
A data interval for a DagRun to operate over.
Both ``start`` and ``end`` **MUST** be "aware", i.e. contain timezone
information.
"""
start: DateTime
end: DateTime
@classmethod
def exact(cls, at: DateTime) -> DataInterval:
"""Represent an "interval" containing only an exact time."""
return cls(start=at, end=at)
| DataInterval |
python | pytorch__pytorch | torch/jit/_script.py | {
"start": 11664,
"end": 11794
} | class ____:
def __get__(self, obj, cls):
return self.__getattr__("forward") # type: ignore[attr-defined]
| _CachedForward |
python | getsentry__sentry | src/sentry/integrations/slack/requests/event.py | {
"start": 608,
"end": 2916
} | class ____(SlackDMRequest):
"""
An Event request sent from Slack.
These requests require the same Data and Token validation as all other
requests from Slack, but also event data validation.
Challenge Requests
------------------
Slack Event requests first start with a "challenge request". This is just a
request Sentry needs to verifying using it's shared key.
Challenge requests will have a ``type`` of ``url_verification``.
"""
def validate(self) -> None:
if self.is_challenge():
# Challenge requests only include the Token and data to verify the
# request, so only validate those.
self._info("slack.event.url_verification")
self.authorize()
super(SlackDMRequest, self)._validate_data()
else:
# Non-Challenge requests need to validate everything plus the data
# about the event.
super().validate()
self._validate_event()
def is_challenge(self) -> bool:
"""We need to call this before validation."""
return is_event_challenge(self.request.data)
@property
def dm_data(self) -> Mapping[str, Any]:
return self.data.get("event", {})
@property
def channel_id(self) -> str:
return self.dm_data.get("channel", "")
@property
def user_id(self) -> str:
return self.dm_data.get("user", "")
@property
def links(self) -> list[str]:
return [link["url"] for link in self.dm_data.get("links", []) if "url" in link]
def _validate_event(self) -> None:
if not self.dm_data:
self._error("slack.event.invalid-event-data")
raise SlackRequestError(status=400)
if not self.dm_data.get("type"):
self._error("slack.event.invalid-event-type")
raise SlackRequestError(status=400)
def validate_integration(self) -> None:
super().validate_integration()
if (self.text in COMMANDS) or (
self.type == "link_shared" and has_discover_links(self.links)
):
self._validate_identity()
def _log_request(self) -> None:
self._info(f"slack.event.{self.type}")
def is_bot(self) -> bool:
return bool(self.dm_data.get("bot_id"))
| SlackEventRequest |
python | pandas-dev__pandas | pandas/core/arrays/datetimelike.py | {
"start": 62232,
"end": 69947
} | class ____(DatetimeLikeArrayMixin):
"""
Common ops for DatetimeIndex/PeriodIndex, but not TimedeltaIndex.
"""
@Substitution(
URL="https://docs.python.org/3/library/datetime.html"
"#strftime-and-strptime-behavior"
)
def strftime(self, date_format: str) -> npt.NDArray[np.object_]:
"""
Convert to Index using specified date_format.
Return an Index of formatted strings specified by date_format, which
supports the same string format as the python standard library. Details
of the string format can be found in `python string format
doc <%(URL)s>`__.
Formats supported by the C `strftime` API but not by the python string format
doc (such as `"%%R"`, `"%%r"`) are not officially supported and should be
preferably replaced with their supported equivalents (such as `"%%H:%%M"`,
`"%%I:%%M:%%S %%p"`).
        Note that `PeriodIndex` supports additional directives, detailed in
`Period.strftime`.
Parameters
----------
date_format : str
Date format string (e.g. "%%Y-%%m-%%d").
Returns
-------
ndarray[object]
NumPy ndarray of formatted strings.
See Also
--------
to_datetime : Convert the given argument to datetime.
DatetimeIndex.normalize : Return DatetimeIndex with times to midnight.
DatetimeIndex.round : Round the DatetimeIndex to the specified freq.
DatetimeIndex.floor : Floor the DatetimeIndex to the specified freq.
Timestamp.strftime : Format a single Timestamp.
Period.strftime : Format a single Period.
Examples
--------
>>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"), periods=3, freq="s")
>>> rng.strftime("%%B %%d, %%Y, %%r")
Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM',
'March 10, 2018, 09:00:02 AM'],
dtype='str')
"""
result = self._format_native_types(date_format=date_format, na_rep=np.nan)
if using_string_dtype():
from pandas import StringDtype
return pd_array(result, dtype=StringDtype(na_value=np.nan)) # type: ignore[return-value]
return result.astype(object, copy=False)
_round_doc = """
Perform {op} operation on the data to the specified `freq`.

Parameters
----------
freq : str or Offset
    The frequency level to {op} the index to. Must be a fixed
    frequency like 's' (second), not 'ME' (month end). See
    :ref:`frequency aliases <timeseries.offset_aliases>` for
    a list of possible `freq` values.
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
    Only relevant for DatetimeIndex:

    - 'infer' will attempt to infer fall dst-transition hours based on
      order
    - bool-ndarray where True signifies a DST time, False designates
      a non-DST time (note that this flag is only applicable for
      ambiguous times)
    - 'NaT' will return NaT where there are ambiguous times
    - 'raise' will raise a ValueError if there are ambiguous
      times.
nonexistent : 'shift_forward', 'shift_backward', 'NaT', timedelta, default 'raise'
    A nonexistent time does not exist in a particular timezone
    where clocks moved forward due to DST.

    - 'shift_forward' will shift the nonexistent time forward to the
      closest existing time
    - 'shift_backward' will shift the nonexistent time backward to the
      closest existing time
    - 'NaT' will return NaT where there are nonexistent times
    - timedelta objects will shift nonexistent times by the timedelta
    - 'raise' will raise a ValueError if there are
      nonexistent times.

Returns
-------
DatetimeIndex, TimedeltaIndex, or Series
    Index of the same type for a DatetimeIndex or TimedeltaIndex,
    or a Series with the same index for a Series.

Raises
------
ValueError
    If the `freq` cannot be converted.

See Also
--------
DatetimeIndex.floor : Perform floor operation on the data to the specified `freq`.
DatetimeIndex.snap : Snap time stamps to nearest occurring frequency.

Notes
-----
If the timestamps have a timezone, {op}ing will take place relative to the
local ("wall") time and re-localized to the same timezone. When {op}ing
near daylight savings time, use ``nonexistent`` and ``ambiguous`` to
control the re-localization behavior.

Examples
--------
**DatetimeIndex**

>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
               '2018-01-01 12:01:00'],
              dtype='datetime64[us]', freq='min')
"""
_round_example = """>>> rng.round('h')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
               '2018-01-01 12:00:00'],
              dtype='datetime64[us]', freq=None)

**Series**

>>> pd.Series(rng).dt.round("h")
0   2018-01-01 12:00:00
1   2018-01-01 12:00:00
2   2018-01-01 12:00:00
dtype: datetime64[us]

When rounding near a daylight savings time transition, use ``ambiguous`` or
``nonexistent`` to control how the timestamp should be re-localized.

>>> rng_tz = pd.DatetimeIndex(["2021-10-31 03:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.floor("2h", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
              dtype='datetime64[us, Europe/Amsterdam]', freq=None)
>>> rng_tz.floor("2h", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
              dtype='datetime64[us, Europe/Amsterdam]', freq=None)
"""
_floor_example = """>>> rng.floor('h')
DatetimeIndex(['2018-01-01 11:00:00', '2018-01-01 12:00:00',
               '2018-01-01 12:00:00'],
              dtype='datetime64[us]', freq=None)

**Series**

>>> pd.Series(rng).dt.floor("h")
0   2018-01-01 11:00:00
1   2018-01-01 12:00:00
2   2018-01-01 12:00:00
dtype: datetime64[us]

When rounding near a daylight savings time transition, use ``ambiguous`` or
``nonexistent`` to control how the timestamp should be re-localized.

>>> rng_tz = pd.DatetimeIndex(["2021-10-31 03:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.floor("2h", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
              dtype='datetime64[us, Europe/Amsterdam]', freq=None)
>>> rng_tz.floor("2h", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
              dtype='datetime64[us, Europe/Amsterdam]', freq=None)
"""
_ceil_example = """>>> rng.ceil('h')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
               '2018-01-01 13:00:00'],
              dtype='datetime64[us]', freq=None)

**Series**

>>> pd.Series(rng).dt.ceil("h")
0   2018-01-01 12:00:00
1   2018-01-01 12:00:00
2   2018-01-01 13:00:00
dtype: datetime64[us]

When rounding near a daylight savings time transition, use ``ambiguous`` or
``nonexistent`` to control how the timestamp should be re-localized.

>>> rng_tz = pd.DatetimeIndex(["2021-10-31 01:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.ceil("h", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
              dtype='datetime64[us, Europe/Amsterdam]', freq=None)
>>> rng_tz.ceil("h", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
              dtype='datetime64[us, Europe/Amsterdam]', freq=None)
"""
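The shared `_round_doc` template is parameterized on `{op}` so that `round`, `floor`, and `ceil` can reuse one set of parameter docs while appending their own example blocks. A minimal, self-contained sketch of that composition (the short stand-in strings and `make_doc` helper below are illustrative, not the actual pandas internals):

```python
# Sketch (assumption): each of round/floor/ceil builds its docstring by
# filling the {op} placeholder in the shared template and appending its
# op-specific example block. Tiny stand-ins replace the full templates.
round_doc_template = "Perform {op} operation on the data to the specified `freq`.\n"
ceil_example = ">>> rng.ceil('h')\n"


def make_doc(op: str, example: str) -> str:
    """Fill the shared template with `op` and append that op's examples."""
    return round_doc_template.format(op=op) + example


ceil_doc = make_doc("ceil", ceil_example)
print(ceil_doc.splitlines()[0])
# Perform ceil operation on the data to the specified `freq`.
```

One template with a `{op}` placeholder keeps the `ambiguous`/`nonexistent` parameter docs identical across the three methods, so a fix in one place propagates to all of them.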