(torch.compiler_troubleshooting)= # torch.compile Troubleshooting You're trying to use `torch.compile` on your PyTorch model to enhance its performance, but it's not working as expected. Perhaps performance isn't improving, crashes are happening, or compilation time is too long. This article provides tips, workarounds, and debugging tools to help you overcome these challenges. **Contents** ```{contents} :local: true ``` ## Setting Expectations `torch.compile` is designed as a general-purpose PyTorch compiler. Unlike the previous compiler solution, TorchScript, `torch.compile` requires fewer code changes, meaning models typically don't need to be rewritten from scratch. It also manages unsupported code more gracefully - unsupported code results in a lost optimization opportunity rather than a crash. In an ideal world, one could simply apply `torch.compile` to any PyTorch model and enjoy automatic speedups. However, in reality, code complexities can lead to one of three scenarios: 1. `torch.compile` works seamlessly, providing speedups. 2. Some code modifications are necessary. `torch.compile` doesn't crash or take too long, but you might not be seeing significant performance gains. 3. Extensive changes to your code are required. We anticipate most code will fall under scenarios (1) and (2). This document provides tips, arranged by level of involvement, to help address code issues in scenario (2). ### Compile times `torch.compile` functions as a just-in-time compiler, so the initial one or two runs of the compiled function are expected to be significantly slower. Recompilations, which can occur under certain conditions (detailed below), will also make runs slower. Various `torch.compile` components cache results to reduce compilation time for future invocations, even in different processes. Cold-start (uncached) compilation time typically ranges from seconds to minutes for common or benchmarked models. Larger models may take upwards of 30 minutes to a few hours.
## Terminology The following terms are relevant to troubleshooting `torch.compile` problems. ### Graph break `torch.compile` traces your code and attempts to capture your PyTorch code into a single computation graph of PyTorch operators (FX graph). However, this is not always possible. When encountering code that can't be traced, a "graph break" occurs. A graph break involves compiling the FX graph that has been determined so far, running the unsupported code, then resuming tracing after the unsupported code with a new FX graph. Because the computation graph is broken up, we lose optimization opportunities, so model code should avoid graph breaks whenever possible. Graph breaks occur on things like: - Data-dependent if-statements - Many Python built-in functions - C functions Below is an example of a graph break due to calling the Python builtin function `open` (exact output may differ). ```py import torch @torch.compile def fn(x): x = x + 1 with open("test.txt", "r") as f: return x + len(f.read()) fn(torch.ones(3, 3)) ``` ``` $ TORCH_LOGS="graph_breaks" python playground.py Graph break in user code at /data/users/williamwen/pytorch/playground.py:7 Reason: Unsupported: builtin: open [<class 'torch._dynamo.variables.constant.ConstantVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False User code traceback: File "/data/users/williamwen/pytorch/playground.py", line 7, in fn with open("test.txt", "r") as f: Traceback (most recent call last): File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 635, in wrapper return inner_fn(self, inst) ^^^^^^^^^^^^^^^^^^^^ File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 2414, in CALL self._call(inst) File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 2408, in _call self.call_function(fn, args, kwargs) File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 962, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/users/williamwen/pytorch/torch/_dynamo/variables/builtin.py", line 997, in call_function return handler(tx, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/users/williamwen/pytorch/torch/_dynamo/variables/builtin.py", line 831, in <lambda> return lambda *args: unimplemented(error_msg) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/users/williamwen/pytorch/torch/_dynamo/exc.py", line 313, in unimplemented raise Unsupported(msg, case_name=case_name) torch._dynamo.exc.Unsupported: builtin: open [<class 'torch._dynamo.variables.constant.ConstantVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False ``` ### Guards `torch.compile` makes some assumptions about runtime values as we trace through code. During tracing, we generate "guards", which are runtime checks for these assumptions. Guards are run in future calls to the compiled function to determine if we can reuse previously compiled code. Examples of runtime checks are constant values, types, and object IDs. Below is an example of generated guards. The `TENSOR_MATCH` guard checks for the input's type, device, dtype, shape, etc. 
```py import torch @torch.compile def fn(x): return x + 1 fn(torch.ones(3, 3)) ``` ``` $ TORCH_LOGS="guards" python playground.py GUARDS: TREE_GUARD_MANAGER: +- RootGuardManager | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:471 in init_ambient_guards | +- GLOBAL_STATE: ___check_global_state() | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack() | +- GuardManager: source=L['x'], accessed_by=DictGetItemGuardAccessor(x) | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[3, 3], stride=[3, 1]) # return x + 1 # playground.py:6 in fn | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # return x + 1 # playground.py:6 in fn ``` ### Recompilation If the guards fail for every instance of previously compiled code, then `torch.compile` must "recompile" the function, requiring the original code to be traced again. In the example below, recompilation is necessary because the guard checking the tensor argument's shape failed. ```py import torch @torch.compile def fn(x): return x + 1 fn(torch.ones(3, 3)) fn(torch.ones(4, 4)) ``` ``` $ TORCH_LOGS="recompiles" python playground.py Recompiling function fn in /data/users/williamwen/pytorch/playground.py:3 triggered by the following guard failure(s): - 0/0: tensor 'L['x']' size mismatch at index 0. expected 3, actual 4 ``` ### Dynamic Shapes `torch.compile` initially assumes tensor shapes are static/constant and guards based on these assumptions. By using "dynamic shapes," we can get `torch.compile` to produce compiled code that can accept tensor inputs with different shapes - we avoid recompiling every time shapes differ. By default, automatic dynamic shapes are enabled `torch.compile(dynamic=None)` - if compilation fails due to shape mismatch, recompilation is attempted with dynamic shapes. 
Dynamic shapes can also be fully enabled `dynamic=True` or disabled `dynamic=False`. Below, we enable dynamic shapes and note that we no longer need to recompile. ```py import torch @torch.compile(dynamic=True) def fn(x): return x + 1 fn(torch.ones(3, 3)) fn(torch.ones(4, 4)) ``` ``` $ TORCH_LOGS="dynamic,recompiles" python playground.py create_symbol s0 = 3 for L['x'].size()[0] [2, int_oo] at playground.py:5 in fn (_dynamo/variables/builder.py:2718 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" produce_guards produce_guards ``` For more information on dynamic shapes, see [The dynamic shapes manual](https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.fh8zzonyw8ng). ## Logging Tools (tlparse-torch-trace)= ### tlparse / TORCH_TRACE `tlparse` / `TORCH_TRACE` are a pair of tools that produce compilation reports that look like this: <https://web.mit.edu/~ezyang/Public/bhack-20240609-tlparse/index.html>. Traces are very easy to collect. To collect a trace, run your reproduction command with ``` TORCH_TRACE="/tmp/tracedir" python foo.py pip install tlparse tlparse /tmp/tracedir ``` This approach works even if you are running a distributed job, providing a trace for each rank. It will open your browser with HTML similar to what's generated above. If you are making a bug report for a complicated problem that you don't have a standalone reproduction for, you can still greatly assist PyTorch developers by attaching the trace log generated in `/tmp/tracedir`. ```{warning} The trace log contains all of your model code. Do not share the trace log if the model you are working on is sensitive. The trace log does NOT contain weights. ``` ```{raw} html <style> .red {background-color:#ff0000;} .green {background-color:#00ff00;} .dark-green {background-color:#027f02;} </style> ``` ```{eval-rst} .. role:: red .. role:: green .. 
role:: dark-green ``` The output of `tlparse` is primarily aimed at PyTorch developers, and the log format is easy to upload and share on GitHub. However, as a non-PyTorch developer, you can still extract useful information from it. We recommend starting with the inline help text in the report, which explains its contents. Here are some insights you can gain from a `tlparse`: - What model code was compiled? You can see this by looking at the stack trie. This is especially useful if you're not familiar with the codebase being compiled! - How many graph breaks / distinct compilation regions are there? (Each distinct compile is its own color coded block like {dark-green}`[0/0]`). Frames that are potentially graph-broken are light green {green}`[2/4]`. If there are a lot of frames, that is suspicious, and suggests that you had some catastrophic graph breaks, or maybe your code isn't a good match for `torch.compile`. - How many times did I recompile a particular frame? Something that recompiled a lot will look like: {dark-green}`[10/0]` {dark-green}`[10/1]` {dark-green}`[10/2]` - if something is being recompiled a lot, that is very suspicious and worth looking into, even if it isn't the root cause of your problem. - Was there a compilation error? Frames that errored will look like {red}`[0/1]`. - What intermediate compiler products did I generate for a given frame? For example, you can look at the high-level generated FX graph or the generated Triton code. - Is there relevant information for a particular frame? You can find this in `compilation_metrics`. (torch-logs)= ### TORCH_LOGS You can use the `TORCH_LOGS` environment variable to selectively enable parts of the `torch.compile` stack to log. `TORCH_LOGS` is in fact the source of logs for `tlparse`. The format of the `TORCH_LOGS` environment variable looks like this: ``` TORCH_LOGS="<option1>,<option2>,..."
python foo.py ``` Useful high-level options include: - `graph_breaks`: logs locations of graph breaks in user code and the reason for the graph break - `guards`: logs guards that are generated - `recompiles`: logs which function recompiled and the guards that failed, leading to the recompilation - `dynamic`: logs related to dynamic shapes Also, you can programmatically set logging options using `torch._logging.set_logs`: ```py import torch torch._logging.set_logs(graph_breaks=True) ... ``` More `TORCH_LOGS` options are listed in {ref}`troubleshooting-torch-logs-options`. For the full list of options, see [torch.\_logging](https://pytorch.org/docs/stable/logging.html) and [torch.\_logging.set_logs](https://pytorch.org/docs/stable/generated/torch._logging.set_logs.html#torch._logging.set_logs). ### tlparse vs. TORCH_LOGS Generally, we suggest first using `tlparse` when encountering issues. `tlparse` is ideal for debugging large models and gaining a high-level overview of how your model was compiled. On the other hand, `TORCH_LOGS` is preferred for small examples and fine-grained debugging detail, when we already have an idea of which `torch.compile` component is causing the problem. ## Simple Workarounds Here, we describe some workarounds to `torch.compile` issues involving small code modifications or changing some `torch.compile` settings. ### Where to apply torch.compile? We recommend applying `torch.compile` to the highest-level function that doesn't cause excessive problems. Typically, it is your train or eval step with the optimizer but without the loop, your top-level `nn.Module`, or some sub-`nn.Module`s. `torch.compile` specifically doesn't handle distributed wrapper modules like DDP or FSDP very well, so consider applying `torch.compile` to the inner module passed to the wrapper. ```py # inference model = ... opt_model = torch.compile(model) for _ in range(N_ITERS): inp = ... out = opt_model(inp) ``` ```py # training model = ...
opt = torch.optim.Adam(model.parameters()) @torch.compile def train(mod, data): opt.zero_grad(True) pred = mod(data[0]) loss = torch.nn.CrossEntropyLoss()(pred, data[1]) loss.backward() opt.step() for _ in range(N_ITERS): inp = ... train(model, inp) ``` ```py # DistributedDataParallel model = ... opt_model = torch.compile(model) model_ddp = DistributedDataParallel(opt_model, ...) for _ in range(N_ITERS): inp = ... out = model_ddp(inp) ``` ### Disabling and Suppressing Errors For some model architectures, there are portions of the model which are particularly difficult to compile - either there are many graph breaks, or there are crashes. You may want to explicitly disable these portions of the model which are problematic so that you can apply `torch.compile` to the parts that work. You can do this by using the `@torch.compiler.disable` decorator. When `torch.compile` attempts to call a disabled function, it breaks the graph and skips tracing the disabled function, resuming tracing after the call. By default, all recursive calls made from a disabled function are also disabled. Use the `recursive=False` option to allow compilation for recursive calls. ```py def bad1_inner(...): # skipped @torch.compiler.disable def bad1_outer(...): # skipped bad1_inner(...) def bad2_inner(...): # traced @torch.compiler.disable(recursive=False) def bad2_outer(...): # skipped bad2_inner(...) @torch.compile def fn(...): # graph break bad1_outer(...) ... # graph break bad2_outer(...) ``` For example, we use `torch.compiler.disable` to disable `torch.compile` on sparse architecture in recommendation models, as the sparse arch is difficult to compile. Preprocessing and logging functions are other examples of functions that typically cause a lot of graph breaks and do not get value from being compiled. If you are experiencing compiler crashes and you want to continue regardless, you can set `torch._dynamo.config.suppress_errors = True`.
When the compiler crashes, we will just skip tracing the function and try again later. This is not best practice - it is better to eventually manually add disable annotations as necessary. ### Resolving graph breaks To maximize optimization opportunities, it's important to reduce the number of graph breaks. Recall that you can see what graph breaks are happening using `tlparse` or `TORCH_LOGS="graph_breaks"`. In general, graph breaks are caused by one of the following: 1. You're trying to do something that fundamentally cannot be traced, such as data-dependent control flow. 2. You're trying to do something not yet supported. For example, we currently have limited support for tracing code that uses the built-in Python `inspect` module. 3. Your code has an error in it. For example, you may have tried calling a function with an incorrect number of arguments. Graph break logs will tell you the user code location and reason for the graph break. Unfortunately, many graph breaks are not actionable without a deeper understanding of Dynamo. It can even be challenging to determine which of the three causes was the true cause of your graph break. We are working on making graph break messages more actionable. Additionally, the impact of lost optimization opportunities differs between graph breaks. For example, graph breaks that happen in the middle of your model's `forward` are likely to have a more negative impact than graph breaks in a preprocessing part at the beginning of the `forward`. So it is not crucial to prevent *every single* break, but rather to prevent the ones that cause significant performance hits. If a graph break message doesn't suggest any action, you suspect that the cause of your graph break is (2), and you believe that the graph break is causing performance hits, then please report the graph break as an issue.
If a function has many graph breaks, consider disabling compilation on that function, as the overhead cost for the graph breaks may become prohibitive. Below are some common graph breaks and some workarounds. #### Data-dependent operations `torch.compile` graph breaks on data-dependent operations such as data-dependent control flow (if-statements, loops with tensors) and direct tensor data accesses (`.item`, `.data_ptr`). ```py import torch @torch.compile def fn(x): y = x.sum() if y > 0: return x + y.item() return x - y.item() fn(torch.ones(3, 3)) ``` ``` $ TORCH_LOGS="graph_breaks" python playground.py Graph break in user code at /data/users/williamwen/pytorch/playground.py:6 Reason: Data-dependent jump User code traceback: File "/data/users/williamwen/pytorch/playground.py", line 6, in fn if y > 0: Graph break in user code at /data/users/williamwen/pytorch/playground.py:7 Reason: Unsupported: Tensor.item User code traceback: File "/data/users/williamwen/pytorch/playground.py", line 7, in torch_dynamo_resume_in_fn_at_6 return x + y.item() Traceback (most recent call last): File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 616, in wrapper return inner_fn(self, inst) ^^^^^^^^^^^^^^^^^^^^ File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 2288, in CALL self._call(inst) File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 2282, in _call self.call_function(fn, args, kwargs) File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 838, in call_function self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/users/williamwen/pytorch/torch/_dynamo/variables/misc.py", line 1038, in call_function return self.obj.call_method(tx, self.name, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/users/williamwen/pytorch/torch/_dynamo/variables/tensor.py", line 527, in call_method result = 
handler_method(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/users/williamwen/pytorch/torch/_dynamo/variables/tensor.py", line 773, in method_item unimplemented("Tensor.item") File "/data/users/williamwen/pytorch/torch/_dynamo/exc.py", line 304, in unimplemented raise Unsupported(msg, case_name=case_name) torch._dynamo.exc.Unsupported: Tensor.item ``` The general workaround for these graph breaks is to avoid doing data-dependent operations. Some specific workarounds are: - If your control flow doesn't actually depend on data values, consider modifying your code to perform control flow on constants. ```py # old x = torch.randn(3, 3) @torch.compile def fn(y): if x.sum() > 0: return y + x else: return y - x # new x = torch.randn(3, 3) cond = (x.sum() > 0).item() @torch.compile def fn(y): if cond: return y + x else: return y - x ``` - Use higher-order ops like `torch.cond` (<https://pytorch.org/docs/main/cond.html>) in place of data-dependent control flow ```py # old @torch.compile def fn(x): if x.sum() > 0: return x + 1 return x - 1 # new @torch.compile def fn(x): return torch.cond( x.sum() > 0, lambda x: x + 1, lambda x: x - 1, (x,), ) ``` - If you have a `.item()` call, try `torch._dynamo.config.capture_scalar_outputs = True` or `TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1` - Wrap problematic parts of the function in a custom op #### Custom ops If you have code that `torch.compile` has trouble tracing through, either due to missing support or fundamental incompatibility, you can consider wrapping the problematic code in a custom op. Custom ops require a little bit of additional work to get them to be compatible with `torch.compile`. See <https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html> for more details. #### Printing Printing/logging/issuing warnings will result in a graph break. 
If you have a function that makes many logging calls, for example, a function that logs data about a training iteration, consider applying `torch.compiler.disable` on it. Alternatively, you can try using `torch._dynamo.config.reorderable_logging_functions`. This config is used to reorder logging functions so that they are called at the end of the traced function, thus avoiding a graph break. However, the logged contents may differ if, for example, a mutation occurs. ```py import torch torch._dynamo.config.reorderable_logging_functions.add(print) @torch.compile def fn(x): x += 1 print("log!") return torch.sin(x) fn(torch.ones(3, 3)) ``` ``` $ TORCH_LOGS="graph_breaks" python playground.py log! ``` #### Incorrect code Your code may be wrong, or it may be encountering an error from outside `torch.compile`. In the code below, we made an error in the `torch.sin` call by providing an extra argument. ```py import torch @torch.compile def fn(x): y = torch.sin(x, x) return y fn(torch.ones(3, 3)) ``` ``` $ TORCH_LOGS="graph_breaks" python playground.py Graph break in user code at /data/users/williamwen/pytorch/playground.py:5 Reason: Unsupported: TypeError <built-in method sin of type object at 0x7fd6fd764600>: sin() takes 1 positional argument but 2 were given User code traceback: File "/data/users/williamwen/pytorch/playground.py", line 5, in fn y = torch.sin(x, x) ... ``` It can be difficult to tell from the logs if the error is caused by your code or by a `torch.compile` bug. In order to differentiate, we recommend trying to run your code without `torch.compile` to see if you still get the error. ### Dealing with recompilations You can view recompilations and their reasons using `tlparse` or `TORCH_LOGS=recompiles`. #### Is dynamic shapes enabled? Recompilations due to mismatched shapes are in the form: ``` tensor 'L['x']' size mismatch at index 0. expected 3, actual 4 ``` Make sure that the `dynamic` option of `torch.compile` is not set to `False`.
The default option, `dynamic=None`, will only attempt dynamic shapes after the first compilation. You can set `dynamic=True` to compile as dynamically as possible upfront. For more information on dynamic shapes, see [The dynamic shapes manual](https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.fh8zzonyw8ng). #### Changing the cache size limit There is a limit to how many times a function can be recompiled, determined by `torch._dynamo.config.recompile_limit` and `torch._dynamo.config.accumulated_recompile_limit`. If either limit is exceeded, then we will not attempt to compile the function again and instead will run the function eagerly. `torch.compile` will also issue a warning containing the affected function and which limit was hit. In the example below, each function call results in a recompile attempt. When we hit the cache size limit (8), we stop attempting to recompile. ```py import torch @torch.compile(dynamic=False) def fn(x): return x + 1 for i in range(1, 10): fn(torch.ones(i)) ``` ``` $ python playground.py torch._dynamo hit config.recompile_limit (8) function: 'fn' (/data/users/williamwen/pytorch/playground.py:5) last reason: 0/0: tensor 'L['x']' size mismatch at index 0. expected 1, actual 9 ``` If you know that the number of recompilations has a reasonable constant upper bound, you can raise the cache size limit. If the cost of recompilation outweighs the benefit of compilation, then you can consider lowering the cache size limit. #### Wrapping constants with tensors By default, `int` / `float` variables are treated as constants and are guarded as such. In the example below, we have a recompilation for each function call.
```py import torch @torch.compile def fn(x, c): return x + c for i in range(1, 10): fn(torch.ones(i), 0.5 + i) ``` ``` $ TORCH_LOGS="recompiles" python playground.py Recompiling function fn in /data/users/williamwen/pytorch/playground.py:3 triggered by the following guard failure(s): - 0/7: L['c'] == 8.5 - 0/6: L['c'] == 7.5 - 0/5: L['c'] == 6.5 - 0/4: L['c'] == 5.5 - 0/3: L['c'] == 4.5 - 0/2: L['c'] == 3.5 - 0/1: L['c'] == 2.5 - 0/0: L['c'] == 1.5 torch._dynamo hit config.recompile_limit (8) function: 'fn' (/data/users/williamwen/pytorch/playground.py:3) last reason: 0/0: L['c'] == 1.5 ``` In particular, for LR schedulers, initializing with a constant can lead to recompilations: ```py import torch mod = torch.nn.Linear(3, 3) opt = torch.optim.Adam(mod.parameters(), lr=0.01) sched = torch.optim.lr_scheduler.ExponentialLR(opt, 0.9) @torch.compile def fn(inp): opt.zero_grad(True) out = mod(inp).sum() out.backward() opt.step() sched.step() for i in range(1, 10): fn(torch.ones(3, 3)) ``` ``` $ TORCH_LOGS="recompiles" python playground.py Recompiling function step in /data/users/williamwen/pytorch/torch/optim/adam.py:189 triggered by the following guard failure(s): - 3/7: L['self'].param_groups[0]['lr'] == 0.004782969000000002 - 3/6: L['self'].param_groups[0]['lr'] == 0.005314410000000002 - 3/5: L['self'].param_groups[0]['lr'] == 0.005904900000000002 - 3/4: L['self'].param_groups[0]['lr'] == 0.006561000000000002 - 3/3: L['self'].param_groups[0]['lr'] == 0.007290000000000001 - 3/2: L['self'].param_groups[0]['lr'] == 0.008100000000000001 - 3/1: L['self'].param_groups[0]['lr'] == 0.009000000000000001 - 3/0: L['self'].param_groups[0]['lr'] == 0.01 torch._dynamo hit config.recompile_limit (8) function: 'step' (/data/users/williamwen/pytorch/torch/optim/adam.py:189) last reason: 3/0: L['self'].param_groups[0]['lr'] == 0.01 ``` In both examples, we can wrap float variables in tensors in order to prevent recompilations. 
```py # first example for i in range(1, 10): fn(torch.ones(i), torch.tensor(0.5 + i)) # second example opt = torch.optim.Adam(mod.parameters(), lr=torch.tensor(0.01)) sched = torch.optim.lr_scheduler.ExponentialLR(opt, torch.tensor(0.9)) ``` ## Reporting Issues If the workarounds provided above were not enough to get `torch.compile` working, then you should consider reporting the issue to PyTorch. But there are a few things that you can do to make our lives significantly easier. ### Ablation Check which component of the `torch.compile` stack is the one causing the issue using the `backend=` option for `torch.compile`. In particular, try: - `torch.compile(fn, backend="eager")`, which only runs TorchDynamo, the graph capture component of `torch.compile`. - `torch.compile(fn, backend="aot_eager")`, which runs TorchDynamo and AOTAutograd, which additionally generates the backward graph during compilation. - `torch.compile(fn, backend="aot_eager_decomp_partition")`, which runs TorchDynamo and AOTAutograd with operator decompositions/partitions. - `torch.compile(fn, backend="inductor")`, which runs TorchDynamo, AOTAutograd, and TorchInductor, the backend ML compiler that generates compiled kernels. If you only fail with the Inductor backend, you can additionally test various Inductor modes: - `torch.compile(fn, backend="inductor", mode="default")` - `torch.compile(fn, backend="inductor", mode="reduce-overhead")` - `torch.compile(fn, backend="inductor", mode="max-autotune")` You can also check if dynamic shapes is causing issues with any backend: - `torch.compile(fn, dynamic=True)` (always use dynamic shapes) - `torch.compile(fn, dynamic=False)` (never use dynamic shapes) - `torch.compile(fn, dynamic=None)` (automatic dynamic shapes) ### Bisecting Did you try on the latest nightly? Did something work in the past but now no longer works? Can you bisect to determine the first nightly where your issue occurs? 
Bisecting is especially helpful for performance, accuracy, or compile time regressions, where it is not immediately obvious where the problem originates from. ### Creating a reproducer Creating reproducers is a lot of work, and it is perfectly fine if you do not have the time to do it. However, if you are a motivated user unfamiliar with the internals of `torch.compile`, creating a standalone reproducer can have a huge impact on our ability to fix the bug. Without a reproducer, your bug report must contain enough information for us to identify the root cause of the problem and write a reproducer from scratch. Here's a list of useful reproducers, ranked from most to least preferred: 1. **Self-contained, small reproducer:** A script with no external dependencies, under 100 lines of code, that reproduces the problem when run. 2. **Self-contained, large reproducer:** Even if it's large, being self-contained is a huge advantage! 3. **Non-self-contained reproducer with manageable dependencies:** For example, if you can reproduce the problem by running a script after `pip install transformers`, that's manageable. We can likely run it and investigate. 4. **Non-self-contained reproducer requiring substantial setup:** This might involve downloading datasets, multiple environment setup steps, or specific system library versions requiring a Docker image. The more complex the setup, the harder it is for us to recreate the environment. :::{note} Docker simplifies setup but complicates changes to the environment, so it's not a perfect solution, though we'll use it if necessary. ::: Somewhat orthogonally, a reproducer that can be run in a single process is better than a reproducer that requires multiprocess training (but once again, if you only have a multiprocess reproducer, we'll take it!). Additionally, below is a non-exhaustive list of aspects to check in your issue that you can attempt to replicate in your reproducer: - **Autograd**. 
Did you have tensor inputs with `requires_grad=True`? Did you call `backward()` on the output? - **Dynamic shapes**. Did you set `dynamic=True`? Or did you run the test code multiple times with varying shapes? - **Custom operators**. Is there a custom operator involved in the real workflow? Can you replicate some of its important characteristics using the Python custom operator API? - **Configuration**. Did you set all the same configuration? This includes `torch._dynamo.config` and `torch._inductor.config` settings, as well as arguments to `torch.compile` like `backend` / `mode`. - **Context managers**. Did you replicate any active context managers? This could be `torch.no_grad`, automatic mixed precision, `TorchFunctionMode` / `TorchDispatchMode`, activation checkpointing, compiled autograd etc. - **Tensor subclasses**. Is there a tensor subclass involved? ### Minifier The minifier is an early `torch.compile` tool that, given an FX graph that crashes when we attempt to run or compile it, finds a subgraph that also crashes and outputs the code that performs that subgraph's operations. Essentially, the minifier finds a minimal repro for a certain class of `torch.compile`-related crashes. This assumes that we were able to successfully trace through code. Unfortunately, most of the time nowadays, the minifier doesn't work as expected, and alternative methods may be necessary. This is likely because bugs that can be automatically reproduced in this manner are generally easier to fix and have already been addressed, leaving more complex issues that do not reproduce easily. However, it is straightforward to attempt using the minifier, so it is worth trying even if it may not succeed. Instructions for operating the minifier can be found [here](https://pytorch.org/docs/stable/torch.compiler_troubleshooting_old.html). 
If the compiler is crashing, you can set `TORCHDYNAMO_REPRO_AFTER="dynamo"` or `TORCHDYNAMO_REPRO_AFTER="aot"`.
The `aot` option is more likely to succeed, although it may not identify issues originating in AOTAutograd.
This will generate a `repro.py` file, which may help to diagnose the problem.

For accuracy-related issues, consider setting `TORCHDYNAMO_REPRO_LEVEL=4`. Please note that this may not always successfully identify the problematic subgraph.

## Debugging Deeper

This section provides tools and techniques for independently debugging `torch.compile` issues or for gaining a deeper understanding of the `torch.compile` stack.
These methods are more involved than those presented above and are used by PyTorch developers regularly to debug real `torch.compile` issues.

Below is a high-level overview of the stack:

![Torch Dynamo Stack](../../_static/img/dynamo/td_stack.png)

The stack comprises three main components: TorchDynamo, AOTAutograd, and Inductor.
Our debugging strategy involves first identifying the component in which the error occurs and then debugging that component individually.

To determine the component responsible for the issue, see the `Ablation` section under `Reporting Issues` above.
For guidance on debugging a specific component, consult the sections below.

### TorchDynamo

#### Logging what Dynamo is tracing

The `TORCH_LOGS=trace_bytecode` option enables you to view the precise bytecode instructions that Dynamo is tracing, as well as a symbolic representation of the Python interpreter stack.
When encountering a graph break or crash, it is advisable to inspect the last few bytecode instructions traced.

You can also use `TORCH_LOGS=trace_source` to see which lines of source code Dynamo is tracing through.
This is useful in combination with `trace_bytecode` to see the line of source code each traced bytecode instruction corresponds to.

Finally, you can use `TORCH_LOGS=graph_code` to see the Python code representing the FX graph that Dynamo traced.
You can view this code to double check that the correct ops are being traced. ```py import torch def g(x, y): return x + y @torch.compile(backend="eager") def f(x): x = torch.sin(x) x = g(x, x) return x f(torch.ones(3, 3)) ``` ``` $ TORCH_LOGS="trace_bytecode,trace_source,graph_code" python playground.py TRACE starts_line /data/users/williamwen/pytorch/playground.py:6 in f () @torch.compile(backend="eager") TRACE RESUME 0 [] TRACE starts_line /data/users/williamwen/pytorch/playground.py:8 in f (f) x = torch.sin(x) TRACE LOAD_GLOBAL torch [] TRACE LOAD_ATTR sin [NullVariable(), PythonModuleVariable(<module 'torch' from '/data/users/williamwen/pytorch/torch/__init__.py'>)] TRACE LOAD_FAST x [NullVariable(), TorchInGraphFunctionVariable(<built-in method sin of type object at 0x7f00f6964600>)] TRACE CALL 1 [NullVariable(), TorchInGraphFunctionVariable(<built-in method sin of type object at 0x7f00f6964600>), LazyVariableTracker()] TRACE STORE_FAST x [TensorVariable()] TRACE starts_line /data/users/williamwen/pytorch/playground.py:9 in f (f) x = g(x, x) TRACE LOAD_GLOBAL g [] TRACE LOAD_FAST x [NullVariable(), UserFunctionVariable()] TRACE LOAD_FAST x [NullVariable(), UserFunctionVariable(), TensorVariable()] TRACE CALL 2 [NullVariable(), UserFunctionVariable(), TensorVariable(), TensorVariable()] TRACE starts_line /data/users/williamwen/pytorch/playground.py:3 in g (g) (inline depth: 1) def g(x, y): TRACE RESUME 0 [] TRACE starts_line /data/users/williamwen/pytorch/playground.py:4 in g (g) (inline depth: 1) return x + y TRACE LOAD_FAST x [] TRACE LOAD_FAST y [TensorVariable()] TRACE BINARY_OP 0 [TensorVariable(), TensorVariable()] TRACE RETURN_VALUE None [TensorVariable()] TRACE STORE_FAST x [TensorVariable()] TRACE starts_line /data/users/williamwen/pytorch/playground.py:10 in f (f) return x TRACE LOAD_FAST x [] TRACE RETURN_VALUE None [TensorVariable()] TRACED GRAPH ===== __compiled_fn_1 ===== /data/users/williamwen/pytorch/torch/fx/_lazy_graph_module.py class 
GraphModule(torch.nn.Module):
    def forward(self, L_x_: "f32[3, 3][3, 1]cpu"):
        l_x_ = L_x_

        # File: /data/users/williamwen/pytorch/playground.py:8 in f, code: x = torch.sin(x)
        x: "f32[3, 3][3, 1]cpu" = torch.sin(l_x_);  l_x_ = None

        # File: /data/users/williamwen/pytorch/playground.py:4 in g, code: return x + y
        x_1: "f32[3, 3][3, 1]cpu" = x + x;  x = None
        return (x_1,)
```

#### Breakpointing Dynamo tracing

Inserting a breakpoint is at times helpful for seeing the state of Dynamo while it is tracing through user code.
Unfortunately, inserting a breakpoint in the normal Python fashion will result in a graph break in TorchDynamo, so we will not be able to view the state of Dynamo at the point where we intended to breakpoint.

The first method for setting a breakpoint is to insert it within the Dynamo source code. Three recommended locations to place a breakpoint are:

- In `torch/_dynamo/symbolic_convert.py`, breakpoint at functions that are named after the problematic bytecode instruction, such as `def CALL_FUNCTION` and `def STORE_ATTR`. Since some bytecode opcodes are frequently used, you can conditionally breakpoint depending on inputs - for example, on the `argval` of the instruction or the name of the object at the top of the stack.
- Breakpoint where the graph break or error originates from. Typically, graph breaks are emitted from a call to `unimplemented(...)`.
- Breakpoint in `torch/_dynamo/variables/builder.py`, in the function `_wrap`. You will likely have to conditionally breakpoint on the input. This function determines how to symbolically represent a given value. Consider breakpointing here if you suspect that a value is represented incorrectly.

The second way to insert a breakpoint is to use `torch._dynamo.comptime.comptime.breakpoint`:

```py
from torch._dynamo.comptime import comptime

@torch.compile
def f(...):
    ...
    comptime.breakpoint()
    ...
```

A comptime breakpoint is convenient as it enables you to inspect the Dynamo state at a specific location within the user code being traced.
It does not require you to insert a breakpoint in the Dynamo source or to conditionally breakpoint based on variables.

When a comptime breakpoint is triggered, you can do the following:

- `ctx.print_bt()` to print the user stack trace
- `ctx.print_locals()` to print all current locals
- `ctx.print_graph()` to print the currently traced graph
- `ctx.disas()` to print the currently traced function's bytecode
- Use standard `pdb` commands, such as `bt/u/d/n/s/r`; you can go up the `pdb` stack to inspect more Dynamo internals

```py
import torch
from torch._dynamo.comptime import comptime

@torch.compile(backend="eager")
def f(x):
    y = x + 1
    comptime.breakpoint()
    y = y + 1
    return y

f(torch.ones(3, 3))
```

```
$ python playground.py
--Return--
> /data/users/williamwen/pytorch/torch/_dynamo/comptime.py(392)inner()->None
-> builtins.breakpoint()
(Pdb) ctx.print_bt()
  File "/data/users/williamwen/pytorch/playground.py", line 7, in f
    comptime.breakpoint()

(Pdb) ctx.print_locals()
x = FakeTensor(..., size=(3, 3))
y = FakeTensor(..., size=(3, 3))
(Pdb) bt
...
  /data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py(826)call_function()
-> self.push(fn.call_function(self, args, kwargs))  # type: ignore[arg-type]
  /data/users/williamwen/pytorch/torch/_dynamo/variables/misc.py(331)call_function()
-> func(ComptimeContext(tx))
> /data/users/williamwen/pytorch/torch/_dynamo/comptime.py(392)inner()->None
-> builtins.breakpoint()
(Pdb) ctx.print_graph()
def forward(self, L_x_: "f32[3, 3]"):
    l_x_ = L_x_

    # File: /data/users/williamwen/pytorch/playground.py:6 in f, code: y = x + 1
    y: "f32[3, 3]" = l_x_ + 1;  l_x_ = y = None
```

% TODO(uncomment/update once we improve this API)
% Debugging large models
% ^^^^^^^^^^^^^^^^^^^^^^
%
% Debugging TorchDynamo on large models can be tricky, mainly because Dynamo traces through large amounts of code.
% It can be difficult to find the problematic function, or to determine where to place a breakpoint.
% Even if we've found the problematic function, we don't want to deal with logging spam.
% Fortunately, you can use ``TORCHDYNAMO_DEBUG_FUNCTION=<function name>``, which limits dynamo tracing to only functions with a specific name
% (exact match). This will allow you to filter all of the functions in the model to the function(s) of interest.
% Use this in combination with the above debugging strategies.

#### Bytecode generation errors

Although uncommon, Dynamo may generate incorrect bytecode. You may be encountering such a bug if you determine the following:

- Ablation reveals the error is happening at the TorchDynamo level
- The error is not being emitted from TorchDynamo stack frames
- The error looks more like a user error rather than a Dynamo error, or is a segmentation fault
- The error does not occur without `torch.compile`

Bytecode generation bugs are generally tricky to fix, and we recommend submitting an issue instead of trying to fix them yourself.
If you are interested in seeing the bytecode that Dynamo generates, you can use `TORCH_LOGS=bytecode`.
You can see a high-level overview on what bytecode Dynamo generates [here](https://docs.google.com/presentation/d/1tMZOoAoNKF32CAm1C-WfzdVVgoEvJ3lp/edit?usp=sharing&ouid=114922067987692817315&rtpof=true&sd=true).

### AOTAutograd

AOTAutograd errors are typically difficult to debug - we recommend just submitting an issue.
AOTAutograd logging output is primarily helpful for seeing what the input to Inductor is.

% TODO
% TorchInductor
% -------------
% TODO

(troubleshooting-torch-logs-options)=
### Summary of TORCH_LOGS options

A summary of helpful `TORCH_LOGS` options is:

```{eval-rst}
..
list-table:: :widths: 25 50 :header-rows: 1 * - Option - Description * - +all - Output debug logs from all ``torch.compile`` components * - +dynamo - Output debug logs from TorchDynamo * - +aot - Output debug logs from AOTAutograd * - +inductor - Output debug logs from TorchInductor * - dynamic - Output logs from dynamic shapes * - graph_code - Output the Python code for the FX graph that Dynamo generated * - graph_sizes - Output the tensor sizes of the FX graph that Dynamo generated * - trace_bytecode - Output the bytecode instructions that Dynamo is tracing through and the symbolic interpreter stack Dynamo is keeping track of * - trace_source - Output the line of code in the original source that Dynamo is currently tracing through * - bytecode - Output Dynamo-generated bytecode * - guards - Output generated guards * - recompiles - Output recompilation reasons (only the first guard check that fails) * - recompiles_verbose - Output all guard checks that fail when a recompilation occurs * - aot_graphs - Output graph generated by AOTAutograd * - aot_joint_graphs - Output the joint forward-backward graph generated by AOTAutograd * - output_code - Output code generated by Inductor * - kernel_code - Output code generated by Inductor on a per-kernel basis * - schedule - Output Inductor scheduling logs * - perf_hints - Output Inductor perf hint logs * - fusion - Output Inductor fusion logs ``` For the full list of options, see [torch.\_logging](https://pytorch.org/docs/stable/logging.html) and [torch.\_logging.set_logs](https://pytorch.org/docs/stable/generated/torch._logging.set_logs.html#torch._logging.set_logs). 
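As a concrete sketch of combining these options (assuming a script named `playground.py`, as in the examples above): component-level options take a `+` prefix to enable debug-level logs, while artifact options are listed by name:

```shell
# Debug-level TorchDynamo logs, plus the generated FX graph code and recompilation reasons
TORCH_LOGS="+dynamo,graph_code,recompiles" python playground.py

# Inspect generated guards and Inductor output code in the same run
TORCH_LOGS="guards,output_code" python playground.py
```

The same options can also be set programmatically with `torch._logging.set_logs`, e.g. `torch._logging.set_logs(graph_code=True, recompiles=True)`.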
## Related Articles - [torch.compile tutorial](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) - [torch.compile fine-grained APIs](https://pytorch.org/docs/stable/torch.compiler_fine_grain_apis.html) - [torch.compile FAQ](https://pytorch.org/docs/stable/torch.compiler_faq.html) - [torch.compiler namespace overview](https://pytorch.org/docs/stable/torch.compiler.html#torch-compiler-overview) - [torch.compiler API reference](https://pytorch.org/docs/stable/torch.compiler_api.html) - [Profiling torch.compile](https://pytorch.org/docs/stable/torch.compiler_profiling_torch_compile.html) - [torch.compile missing manual](https://docs.google.com/document/d/1y5CRfMLdwEoF1nTk9q8qEu1mgMUuUtvhklPKJ2emLU8/edit?usp=sharing) - [The dynamic shapes manual](https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.fh8zzonyw8ng) - [TorchInductor caching tutorial](https://pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html)
unknown
github
https://github.com/pytorch/pytorch
docs/source/user_guide/torch_compiler/torch.compiler_troubleshooting.md
import pickle import time from datetime import datetime from django.template import engines from django.template.response import ( ContentNotRenderedError, SimpleTemplateResponse, TemplateResponse, ) from django.test import ( RequestFactory, SimpleTestCase, modify_settings, override_settings, ) from django.test.utils import require_jinja2 from .utils import TEMPLATE_DIR def test_processor(request): return {"processors": "yes"} test_processor_name = "template_tests.test_response.test_processor" # A test middleware that installs a temporary URLConf def custom_urlconf_middleware(get_response): def middleware(request): request.urlconf = "template_tests.alternate_urls" return get_response(request) return middleware class SimpleTemplateResponseTest(SimpleTestCase): def _response(self, template="foo", *args, **kwargs): template = engines["django"].from_string(template) return SimpleTemplateResponse(template, *args, **kwargs) def test_template_resolving(self): response = SimpleTemplateResponse("first/test.html") response.render() self.assertEqual(response.content, b"First template\n") templates = ["foo.html", "second/test.html", "first/test.html"] response = SimpleTemplateResponse(templates) response.render() self.assertEqual(response.content, b"Second template\n") response = self._response() response.render() self.assertEqual(response.content, b"foo") def test_explicit_baking(self): # explicit baking response = self._response() self.assertFalse(response.is_rendered) response.render() self.assertTrue(response.is_rendered) def test_render(self): # response is not re-rendered without the render call response = self._response().render() self.assertEqual(response.content, b"foo") # rebaking doesn't change the rendered content template = engines["django"].from_string("bar{{ baz }}") response.template_name = template response.render() self.assertEqual(response.content, b"foo") # but rendered content can be overridden by manually # setting content response.content = "bar" 
self.assertEqual(response.content, b"bar") def test_iteration_unrendered(self): # unrendered response raises an exception on iteration response = self._response() self.assertFalse(response.is_rendered) def iteration(): list(response) msg = "The response content must be rendered before it can be iterated over." with self.assertRaisesMessage(ContentNotRenderedError, msg): iteration() self.assertFalse(response.is_rendered) def test_iteration_rendered(self): # iteration works for rendered responses response = self._response().render() self.assertEqual(list(response), [b"foo"]) def test_content_access_unrendered(self): # unrendered response raises an exception when content is accessed response = self._response() self.assertFalse(response.is_rendered) with self.assertRaises(ContentNotRenderedError): response.content self.assertFalse(response.is_rendered) def test_content_access_rendered(self): # rendered response content can be accessed response = self._response().render() self.assertEqual(response.content, b"foo") def test_set_content(self): # content can be overridden response = self._response() self.assertFalse(response.is_rendered) response.content = "spam" self.assertTrue(response.is_rendered) self.assertEqual(response.content, b"spam") response.content = "baz" self.assertEqual(response.content, b"baz") def test_dict_context(self): response = self._response("{{ foo }}{{ processors }}", {"foo": "bar"}) self.assertEqual(response.context_data, {"foo": "bar"}) response.render() self.assertEqual(response.content, b"bar") def test_kwargs(self): response = self._response( content_type="application/json", status=504, charset="ascii" ) self.assertEqual(response.headers["content-type"], "application/json") self.assertEqual(response.status_code, 504) self.assertEqual(response.charset, "ascii") def test_args(self): response = SimpleTemplateResponse("", {}, "application/json", 504) self.assertEqual(response.headers["content-type"], "application/json") 
self.assertEqual(response.status_code, 504) @require_jinja2 def test_using(self): response = SimpleTemplateResponse("template_tests/using.html").render() self.assertEqual(response.content, b"DTL\n") response = SimpleTemplateResponse( "template_tests/using.html", using="django" ).render() self.assertEqual(response.content, b"DTL\n") response = SimpleTemplateResponse( "template_tests/using.html", using="jinja2" ).render() self.assertEqual(response.content, b"Jinja2\n") def test_post_callbacks(self): "Rendering a template response triggers the post-render callbacks" post = [] def post1(obj): post.append("post1") def post2(obj): post.append("post2") response = SimpleTemplateResponse("first/test.html", {}) response.add_post_render_callback(post1) response.add_post_render_callback(post2) # When the content is rendered, all the callbacks are invoked, too. response.render() self.assertEqual(response.content, b"First template\n") self.assertEqual(post, ["post1", "post2"]) def test_pickling(self): # Create a template response. The context is # known to be unpicklable (e.g., a function). response = SimpleTemplateResponse( "first/test.html", { "value": 123, "fn": datetime.now, }, ) with self.assertRaises(ContentNotRenderedError): pickle.dumps(response) # But if we render the response, we can pickle it. 
response.render() pickled_response = pickle.dumps(response) unpickled_response = pickle.loads(pickled_response) self.assertEqual(unpickled_response.content, response.content) self.assertEqual( unpickled_response.headers["content-type"], response.headers["content-type"] ) self.assertEqual(unpickled_response.status_code, response.status_code) # ...and the unpickled response doesn't have the # template-related attributes, so it can't be re-rendered template_attrs = ("template_name", "context_data", "_post_render_callbacks") for attr in template_attrs: self.assertFalse(hasattr(unpickled_response, attr)) # ...and requesting any of those attributes raises an exception for attr in template_attrs: with self.assertRaises(AttributeError): getattr(unpickled_response, attr) def test_repickling(self): response = SimpleTemplateResponse( "first/test.html", { "value": 123, "fn": datetime.now, }, ) with self.assertRaises(ContentNotRenderedError): pickle.dumps(response) response.render() pickled_response = pickle.dumps(response) unpickled_response = pickle.loads(pickled_response) pickle.dumps(unpickled_response) def test_pickling_cookie(self): response = SimpleTemplateResponse( "first/test.html", { "value": 123, "fn": datetime.now, }, ) response.cookies["key"] = "value" response.render() pickled_response = pickle.dumps(response, pickle.HIGHEST_PROTOCOL) unpickled_response = pickle.loads(pickled_response) self.assertEqual(unpickled_response.cookies["key"].value, "value") def test_headers(self): response = SimpleTemplateResponse( "first/test.html", {"value": 123, "fn": datetime.now}, headers={"X-Foo": "foo"}, ) self.assertEqual(response.headers["X-Foo"], "foo") @override_settings( TEMPLATES=[ { "BACKEND": "django.template.backends.django.DjangoTemplates", "DIRS": [TEMPLATE_DIR], "OPTIONS": { "context_processors": [test_processor_name], }, } ] ) class TemplateResponseTest(SimpleTestCase): factory = RequestFactory() def _response(self, template="foo", *args, **kwargs): self._request = 
self.factory.get("/") template = engines["django"].from_string(template) return TemplateResponse(self._request, template, *args, **kwargs) def test_render(self): response = self._response("{{ foo }}{{ processors }}").render() self.assertEqual(response.content, b"yes") def test_render_with_requestcontext(self): response = self._response("{{ foo }}{{ processors }}", {"foo": "bar"}).render() self.assertEqual(response.content, b"baryes") def test_context_processor_priority(self): # context processors should be overridden by passed-in context response = self._response( "{{ foo }}{{ processors }}", {"processors": "no"} ).render() self.assertEqual(response.content, b"no") def test_kwargs(self): response = self._response(content_type="application/json", status=504) self.assertEqual(response.headers["content-type"], "application/json") self.assertEqual(response.status_code, 504) def test_args(self): response = TemplateResponse( self.factory.get("/"), "", {}, "application/json", 504 ) self.assertEqual(response.headers["content-type"], "application/json") self.assertEqual(response.status_code, 504) @require_jinja2 def test_using(self): request = self.factory.get("/") response = TemplateResponse(request, "template_tests/using.html").render() self.assertEqual(response.content, b"DTL\n") response = TemplateResponse( request, "template_tests/using.html", using="django" ).render() self.assertEqual(response.content, b"DTL\n") response = TemplateResponse( request, "template_tests/using.html", using="jinja2" ).render() self.assertEqual(response.content, b"Jinja2\n") def test_pickling(self): # Create a template response. The context is # known to be unpicklable (e.g., a function). response = TemplateResponse( self.factory.get("/"), "first/test.html", { "value": 123, "fn": datetime.now, }, ) with self.assertRaises(ContentNotRenderedError): pickle.dumps(response) # But if we render the response, we can pickle it. 
response.render() pickled_response = pickle.dumps(response) unpickled_response = pickle.loads(pickled_response) self.assertEqual(unpickled_response.content, response.content) self.assertEqual( unpickled_response.headers["content-type"], response.headers["content-type"] ) self.assertEqual(unpickled_response.status_code, response.status_code) # ...and the unpickled response doesn't have the # template-related attributes, so it can't be re-rendered template_attrs = ( "template_name", "context_data", "_post_render_callbacks", "_request", ) for attr in template_attrs: self.assertFalse(hasattr(unpickled_response, attr)) # ...and requesting any of those attributes raises an exception for attr in template_attrs: with self.assertRaises(AttributeError): getattr(unpickled_response, attr) def test_repickling(self): response = SimpleTemplateResponse( "first/test.html", { "value": 123, "fn": datetime.now, }, ) with self.assertRaises(ContentNotRenderedError): pickle.dumps(response) response.render() pickled_response = pickle.dumps(response) unpickled_response = pickle.loads(pickled_response) pickle.dumps(unpickled_response) def test_headers(self): response = TemplateResponse( self.factory.get("/"), "first/test.html", {"value": 123, "fn": datetime.now}, headers={"X-Foo": "foo"}, ) self.assertEqual(response.headers["X-Foo"], "foo") @modify_settings( MIDDLEWARE={"append": ["template_tests.test_response.custom_urlconf_middleware"]} ) @override_settings(ROOT_URLCONF="template_tests.urls") class CustomURLConfTest(SimpleTestCase): def test_custom_urlconf(self): response = self.client.get("/template_response_view/") self.assertContains(response, "This is where you can find the snark: /snark/") @modify_settings( MIDDLEWARE={ "append": [ "django.middleware.cache.FetchFromCacheMiddleware", "django.middleware.cache.UpdateCacheMiddleware", ], }, ) @override_settings( CACHE_MIDDLEWARE_SECONDS=2, ROOT_URLCONF="template_tests.alternate_urls" ) class CacheMiddlewareTest(SimpleTestCase): def 
test_middleware_caching(self): response = self.client.get("/template_response_view/") self.assertEqual(response.status_code, 200) time.sleep(1.0) response2 = self.client.get("/template_response_view/") self.assertEqual(response2.status_code, 200) self.assertEqual(response.content, response2.content) time.sleep(2.0) # Let the cache expire and test again response2 = self.client.get("/template_response_view/") self.assertEqual(response2.status_code, 200) self.assertNotEqual(response.content, response2.content)
python
github
https://github.com/django/django
tests/template_tests/test_response.py
- name: test that import_role adds one (just one) execution of the role hosts: localhost gather_facts: false tags: ['importrole'] roles: - name: a tasks: - name: import role ignores dupe rule import_role: name=a - name: test that include_role adds one (just one) execution of the role hosts: localhost gather_facts: false tags: ['includerole'] roles: - name: a tasks: - include_role: name=a
unknown
github
https://github.com/ansible/ansible
test/integration/targets/roles/allowed_dupes.yml
#!/usr/bin/env python from __future__ import division __author__ = "Jai Ram Rideout" __copyright__ = "Copyright 2012, The QIIME project" __credits__ = ["Jai Ram Rideout", "Michael Dwan", "Logan Knecht", "Damien Coy", "Levi McCracken", "Greg Caporaso"] __license__ = "GPL" __version__ = "1.9.1-dev" __maintainer__ = "Jai Ram Rideout" __email__ = "jai.rideout@gmail.com" from os import path from skbio.stats import p_value_to_str from skbio.stats.distance import DistanceMatrix, mantel from qiime.util import make_compatible_distance_matrices from qiime.stats import MantelCorrelogram, PartialMantel def run_mantel_test(method, fps, distmats, num_perms, tail_type, comment, control_dm_fp=None, control_dm=None, sample_id_map=None): """Runs a Mantel test on all pairs of distance matrices. Returns a string suitable for writing out to a file containing the results of the test. WARNING: Only symmetric, hollow distance matrices may be used as input. Asymmetric distance matrices, such as those obtained by the UniFrac Gain metric (i.e. beta_diversity.py -m unifrac_g), should not be used as input. Arguments: method - which Mantel test to run (either 'mantel' or 'partial_mantel') fps - list of filepaths of the distance matrices distmats - list of tuples containing dm labels and dm data (i.e. the output of parse_distmat) num_perms - the number of permutations to use to calculate the p-value(s) tail_type - the type of tail test to use when calculating the p-value(s). Can be 'two-sided', 'greater', or 'less'. Only applies when method is mantel comment - comment string to add to the beginning of the results string control_dm_fp - filepath of the control distance matrix. Only applies when method is partial_mantel (it is required then) control_dm - tuple containing control distance matrix labels and matrix data. Only applies when method is partial_mantel (it is required then) sample_id_map - dict mapping sample IDs (i.e. 
what is expected by make_compatible_distance_matrices) """ if len(fps) != len(distmats): raise ValueError("Must provide the same number of filepaths as there " "are distance matrices.") if comment is None: comment = '' result = comment if method == 'mantel': result += 'DM1\tDM2\tNumber of entries\tMantel r statistic\t' + \ 'p-value\tNumber of permutations\tTail type\n' elif method == 'partial_mantel': if not control_dm_fp or not control_dm: raise ValueError("You must provide a control matrix filepath and " "control matrix when running the partial Mantel " "test.") result += 'DM1\tDM2\tCDM\tNumber of entries\t' + \ 'Mantel r statistic\tp-value\tNumber of permutations\t' +\ 'Tail type\n' else: raise ValueError("Invalid method '%s'. Must be either 'mantel' or " "'partial_mantel'." % method) # Loop over all pairs of dms. for i, (fp1, (dm1_labels, dm1_data)) in enumerate(zip(fps, distmats)): for fp2, (dm2_labels, dm2_data) in zip(fps, distmats)[i + 1:]: # Make the current pair of distance matrices compatible by only # keeping samples that match between them, and ordering them by # the same sample IDs. (dm1_labels, dm1_data), (dm2_labels, dm2_data) = \ make_compatible_distance_matrices((dm1_labels, dm1_data), (dm2_labels, dm2_data), lookup=sample_id_map) if method == 'partial_mantel': # We need to intersect three sets (three matrices). 
(dm1_labels, dm1_data), (cdm_labels, cdm_data) = \ make_compatible_distance_matrices( (dm1_labels, dm1_data), control_dm, lookup=sample_id_map) (dm1_labels, dm1_data), (dm2_labels, dm2_data) = \ make_compatible_distance_matrices( (dm1_labels, dm1_data), (dm2_labels, dm2_data), lookup=sample_id_map) if len(dm1_labels) < 3: result += '%s\t%s\t%s\t%d\tToo few samples\n' % (fp1, fp2, control_dm_fp, len(dm1_labels)) continue elif len(dm1_labels) < 3: result += '%s\t%s\t%d\tToo few samples\n' % (fp1, fp2, len(dm1_labels)) continue dm1 = DistanceMatrix(dm1_data, dm1_labels) dm2 = DistanceMatrix(dm2_data, dm2_labels) if method == 'mantel': corr_coeff, p_value, n = mantel(dm1, dm2, method='pearson', permutations=num_perms, alternative=tail_type, strict=True) p_str = p_value_to_str(p_value, num_perms) result += "%s\t%s\t%d\t%.5f\t%s\t%d\t%s\n" % ( fp1, fp2, n, corr_coeff, p_str, num_perms, tail_type) elif method == 'partial_mantel': cdm = DistanceMatrix(cdm_data, cdm_labels) results = PartialMantel(dm1, dm2, cdm)(num_perms) p_str = p_value_to_str(results['mantel_p'], num_perms) result += "%s\t%s\t%s\t%d\t%.5f\t%s\t%d\t%s\n" % ( fp1, fp2, control_dm_fp, len(dm1_labels), results['mantel_r'], p_str, num_perms, 'greater') return result def run_mantel_correlogram(fps, distmats, num_perms, comment, alpha, sample_id_map=None, variable_size_distance_classes=False): """Runs a Mantel correlogram analysis on all pairs of distance matrices. Returns a string suitable for writing out to a file containing the results of the test, a list of correlogram filepath names, and a list of matplotlib Figure objects representing each correlogram. The correlogram filepaths can have an extension string appended to the end of them and then be used to save each of the correlogram Figures to a file. Each correlogram filepath will be a combination of the two distance matrix filepaths that were used to create it. WARNING: Only symmetric, hollow distance matrices may be used as input. 
Asymmetric distance matrices, such as those obtained by the UniFrac Gain metric (i.e. beta_diversity.py -m unifrac_g), should not be used as input. Arguments: fps - list of filepaths of the distance matrices distmats - list of tuples containing dm labels and dm data (i.e. the output of parse_distmat) num_perms - the number of permutations to use to calculate the p-value(s) comment - comment string to add to the beginning of the results string alpha - the alpha value to use to determine significance in the correlogram plots sample_id_map - dict mapping sample IDs (i.e. what is expected by make_compatible_distance_matrices) variable_size_distance_classes - create distance classes that vary in size (i.e. width) but have the same number of distances in each class """ if len(fps) != len(distmats): raise ValueError("Must provide the same number of filepaths as there " "are distance matrices.") if comment is None: comment = '' result = comment + 'DM1\tDM2\tNumber of entries\t' + \ 'Number of permutations\tClass index\t' + \ 'Number of distances\tMantel r statistic\t' + \ 'p-value\tp-value (Bonferroni corrected)\tTail type\n' correlogram_fps = [] correlograms = [] # Loop over all pairs of dms. for i, (fp1, (dm1_labels, dm1_data)) in enumerate(zip(fps, distmats)): for fp2, (dm2_labels, dm2_data) in zip(fps, distmats)[i + 1:]: # Make the current pair of distance matrices compatible by only # keeping samples that match between them, and ordering them by # the same sample IDs. (dm1_labels, dm1_data), (dm2_labels, dm2_data) = \ make_compatible_distance_matrices((dm1_labels, dm1_data), (dm2_labels, dm2_data), lookup=sample_id_map) if len(dm1_labels) < 3: result += '%s\t%s\t%d\tToo few samples\n' % (fp1, fp2, len(dm1_labels)) continue dm1 = DistanceMatrix(dm1_data, dm1_labels) dm2 = DistanceMatrix(dm2_data, dm2_labels) # Create an instance of our Mantel correlogram test and run it with # the specified number of permutations. 
mc = MantelCorrelogram(dm1, dm2, alpha=alpha, variable_size_distance_classes=variable_size_distance_classes) results = mc(num_perms) # Generate a name for the current correlogram and save it and the # correlogram itself. dm1_name = path.basename(fp1) dm2_name = path.basename(fp2) correlogram_fps.append('_'.join((dm1_name, 'AND', dm2_name, 'mantel_correlogram')) + '.') correlograms.append(results['correlogram_plot']) # Iterate over the results and write them to the text file. first_time = True for class_idx, num_dist, r, p, p_corr in zip( results['class_index'], results['num_dist'], results['mantel_r'], results['mantel_p'], results['mantel_p_corr']): # Format p-values and figure out which tail type we have based # on the sign of r. p_str = None if p is not None: p_str = p_value_to_str(p, num_perms) p_corr_str = None if p_corr is not None: p_corr_str = p_value_to_str(p_corr, num_perms) if r is None: tail_type = None elif r < 0: tail_type = 'less' else: tail_type = 'greater' if first_time: result += '%s\t%s\t%d\t%d\t%s\t%d\t%s\t%s\t%s\t%s\n' % ( fp1, fp2, len(dm1_labels), num_perms, class_idx, num_dist, r, p_str, p_corr_str, tail_type) first_time = False else: result += '\t\t\t\t%s\t%d\t%s\t%s\t%s\t%s\n' % (class_idx, num_dist, r, p_str, p_corr_str, tail_type) return result, correlogram_fps, correlograms
unknown
codeparrot/codeparrot-clean
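The record above runs Mantel correlogram tests over pairs of distance matrices. The core Mantel statistic is just the Pearson correlation between corresponding off-diagonal entries of two symmetric distance matrices; a minimal pure-Python sketch of that statistic (illustrative helper names, not the QIIME API):

```python
from itertools import combinations

def upper_triangle(dm):
    """Flatten the strictly-upper triangle of a square distance matrix."""
    n = len(dm)
    return [dm[i][j] for i, j in combinations(range(n), 2)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def mantel_r(dm1, dm2):
    """Mantel statistic: correlation of corresponding pairwise distances."""
    return pearson(upper_triangle(dm1), upper_triangle(dm2))

dm1 = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
dm2 = [[0, 2, 4], [2, 0, 6], [4, 6, 0]]   # dm2 = 2 * dm1
print(mantel_r(dm1, dm2))  # perfectly correlated -> 1.0
```

The significance test in the record then permutes one matrix's rows and columns `num_perms` times and compares the observed `r` against the permuted distribution.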
/* Copyright 2014 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package persistentvolume import ( "fmt" "sort" v1 "k8s.io/api/core/v1" utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/client-go/tools/cache" "k8s.io/component-helpers/storage/volume" v1helper "k8s.io/kubernetes/pkg/apis/core/v1/helper" "k8s.io/kubernetes/pkg/features" "k8s.io/kubernetes/pkg/volume/util" ) // persistentVolumeOrderedIndex is a cache.Store that keeps persistent volumes // indexed by AccessModes and ordered by storage capacity. type persistentVolumeOrderedIndex struct { store cache.Indexer } func newPersistentVolumeOrderedIndex() persistentVolumeOrderedIndex { return persistentVolumeOrderedIndex{cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{"accessmodes": accessModesIndexFunc})} } // accessModesIndexFunc is an indexing function that returns a persistent // volume's AccessModes as a string func accessModesIndexFunc(obj interface{}) ([]string, error) { if pv, ok := obj.(*v1.PersistentVolume); ok { modes := v1helper.GetAccessModesAsString(pv.Spec.AccessModes) return []string{modes}, nil } return []string{""}, fmt.Errorf("object is not a persistent volume: %v", obj) } // listByAccessModes returns all volumes with the given set of // AccessModeTypes. The list is unsorted! 
func (pvIndex *persistentVolumeOrderedIndex) listByAccessModes(modes []v1.PersistentVolumeAccessMode) ([]*v1.PersistentVolume, error) { pv := &v1.PersistentVolume{ Spec: v1.PersistentVolumeSpec{ AccessModes: modes, }, } objs, err := pvIndex.store.Index("accessmodes", pv) if err != nil { return nil, err } volumes := make([]*v1.PersistentVolume, len(objs)) for i, obj := range objs { volumes[i] = obj.(*v1.PersistentVolume) } return volumes, nil } // find returns the nearest PV from the ordered list or nil if a match is not found func (pvIndex *persistentVolumeOrderedIndex) findByClaim(claim *v1.PersistentVolumeClaim, delayBinding bool) (*v1.PersistentVolume, error) { // PVs are indexed by their access modes to allow easier searching. Each // index is the string representation of a set of access modes. There is a // finite number of possible sets and PVs will only be indexed in one of // them (whichever index matches the PV's modes). // // A request for resources will always specify its desired access modes. // Any matching PV must have at least that number of access modes, but it // can have more. For example, a user asks for ReadWriteOnce but a GCEPD // is available, which is ReadWriteOnce+ReadOnlyMany. // // Searches are performed against a set of access modes, so we can attempt // not only the exact matching modes but also potential matches (the GCEPD // example above). 
allPossibleModes := pvIndex.allPossibleMatchingAccessModes(claim.Spec.AccessModes) for _, modes := range allPossibleModes { volumes, err := pvIndex.listByAccessModes(modes) if err != nil { return nil, err } bestVol, err := volume.FindMatchingVolume(claim, volumes, nil /* node for topology binding*/, nil /* exclusion map */, delayBinding, utilfeature.DefaultFeatureGate.Enabled(features.VolumeAttributesClass)) if err != nil { return nil, err } if bestVol != nil { return bestVol, nil } } return nil, nil } // findBestMatchForClaim is a convenience method that finds a volume by the claim's AccessModes and requests for Storage func (pvIndex *persistentVolumeOrderedIndex) findBestMatchForClaim(claim *v1.PersistentVolumeClaim, delayBinding bool) (*v1.PersistentVolume, error) { return pvIndex.findByClaim(claim, delayBinding) } // allPossibleMatchingAccessModes returns an array of AccessMode arrays that // can satisfy a user's requested modes. // // see comments in the Find func above regarding indexing. // // allPossibleMatchingAccessModes gets all stringified accessmodes from the // index and returns all those that contain at least all of the requested // mode. 
// // For example, assume the index contains 2 types of PVs where the stringified // accessmodes are: // // "RWO,ROX" -- some number of GCEPDs // "RWO,ROX,RWX" -- some number of NFS volumes // // A request for RWO could be satisfied by both sets of indexed volumes, so // allPossibleMatchingAccessModes returns: // // [][]v1.PersistentVolumeAccessMode { // []v1.PersistentVolumeAccessMode { // v1.ReadWriteOnce, v1.ReadOnlyMany, // }, // []v1.PersistentVolumeAccessMode { // v1.ReadWriteOnce, v1.ReadOnlyMany, v1.ReadWriteMany, // }, // } // // A request for RWX can be satisfied by only one set of indexed volumes, so // the return is: // // [][]v1.PersistentVolumeAccessMode { // []v1.PersistentVolumeAccessMode { // v1.ReadWriteOnce, v1.ReadOnlyMany, v1.ReadWriteMany, // }, // } // // This func returns modes with ascending levels of modes to give the user // what is closest to what they actually asked for. func (pvIndex *persistentVolumeOrderedIndex) allPossibleMatchingAccessModes(requestedModes []v1.PersistentVolumeAccessMode) [][]v1.PersistentVolumeAccessMode { matchedModes := [][]v1.PersistentVolumeAccessMode{} keys := pvIndex.store.ListIndexFuncValues("accessmodes") for _, key := range keys { indexedModes := v1helper.GetAccessModesFromString(key) if util.ContainsAllAccessModes(indexedModes, requestedModes) { matchedModes = append(matchedModes, indexedModes) } } // sort by the number of modes in each array with the fewest number of // modes coming first. this allows searching for volumes by the minimum // number of modes required of the possible matches. 
sort.Sort(byAccessModes{matchedModes}) return matchedModes } // byAccessModes is used to order access modes by size, with the fewest modes first type byAccessModes struct { modes [][]v1.PersistentVolumeAccessMode } func (c byAccessModes) Less(i, j int) bool { return len(c.modes[i]) < len(c.modes[j]) } func (c byAccessModes) Swap(i, j int) { c.modes[i], c.modes[j] = c.modes[j], c.modes[i] } func (c byAccessModes) Len() int { return len(c.modes) } func claimToClaimKey(claim *v1.PersistentVolumeClaim) string { return fmt.Sprintf("%s/%s", claim.Namespace, claim.Name) } func claimrefToClaimKey(claimref *v1.ObjectReference) string { return fmt.Sprintf("%s/%s", claimref.Namespace, claimref.Name) }
go
github
https://github.com/kubernetes/kubernetes
pkg/controller/volume/persistentvolume/index.go
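The Go code above returns every indexed access-mode set that contains at least the requested modes, sorted fewest-modes-first so the closest match is tried before broader ones. A minimal Python sketch of that matching logic (illustrative, not the Kubernetes implementation):

```python
def matching_mode_sets(indexed_sets, requested):
    """Return the indexed access-mode sets that contain every requested
    mode, ordered by size so the closest match comes first."""
    matches = [s for s in indexed_sets if set(requested) <= set(s)]
    return sorted(matches, key=len)  # fewest extra modes tried first

indexed = [
    ("RWO", "ROX"),         # e.g. some GCE PDs
    ("RWO", "ROX", "RWX"),  # e.g. some NFS volumes
]
print(matching_mode_sets(indexed, ["RWO"]))  # both sets, smaller first
print(matching_mode_sets(indexed, ["RWX"]))  # only the NFS-style set
```

This mirrors why a request for RWO yields both sets while a request for RWX yields only one, as in the comment block in the source.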
# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils as json from tempest.lib.common import rest_client from tempest.lib import exceptions as lib_exc class BackupsClient(rest_client.RestClient): """Volume V2 Backups client""" api_version = "v2" def create_backup(self, **kwargs): """Creates a backup of volume. Available params: see http://developer.openstack.org/ api-ref-blockstorage-v2.html#createBackup """ post_body = json.dumps({'backup': kwargs}) resp, body = self.post('backups', post_body) body = json.loads(body) self.expected_success(202, resp.status) return rest_client.ResponseBody(resp, body) def restore_backup(self, backup_id, **kwargs): """Restore volume from backup. 
Available params: see http://developer.openstack.org/ api-ref-blockstorage-v2.html#restoreBackup """ post_body = json.dumps({'restore': kwargs}) resp, body = self.post('backups/%s/restore' % (backup_id), post_body) body = json.loads(body) self.expected_success(202, resp.status) return rest_client.ResponseBody(resp, body) def delete_backup(self, backup_id): """Delete a backup of volume.""" resp, body = self.delete('backups/%s' % backup_id) self.expected_success(202, resp.status) return rest_client.ResponseBody(resp, body) def show_backup(self, backup_id): """Returns the details of a single backup.""" url = "backups/%s" % backup_id resp, body = self.get(url) body = json.loads(body) self.expected_success(200, resp.status) return rest_client.ResponseBody(resp, body) def list_backups(self, detail=False): """Information for all the tenant's backups.""" url = "backups" if detail: url += "/detail" resp, body = self.get(url) body = json.loads(body) self.expected_success(200, resp.status) return rest_client.ResponseBody(resp, body) def export_backup(self, backup_id): """Export backup metadata record.""" url = "backups/%s/export_record" % backup_id resp, body = self.get(url) body = json.loads(body) self.expected_success(200, resp.status) return rest_client.ResponseBody(resp, body) def import_backup(self, **kwargs): """Import backup metadata record.""" post_body = json.dumps({'backup-record': kwargs}) resp, body = self.post("backups/import_record", post_body) body = json.loads(body) self.expected_success(201, resp.status) return rest_client.ResponseBody(resp, body) def reset_backup_status(self, backup_id, status): """Reset the specified backup's status.""" post_body = json.dumps({'os-reset_status': {"status": status}}) resp, body = self.post('backups/%s/action' % backup_id, post_body) self.expected_success(202, resp.status) return rest_client.ResponseBody(resp, body) def is_resource_deleted(self, id): try: self.show_backup(id) except lib_exc.NotFound: return True return False
unknown
codeparrot/codeparrot-clean
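Every method in the backups client above follows the same pattern: wrap keyword arguments under a single JSON key, POST or GET, then assert the expected HTTP status. A small sketch of that pattern in isolation (hypothetical helper names, not the tempest API):

```python
import json

def build_backup_request(**kwargs):
    """Mirror the client's body construction: wrap kwargs under 'backup'."""
    return json.dumps({'backup': kwargs})

def expect_status(expected, actual):
    """Minimal analogue of expected_success: raise on a status mismatch."""
    if actual != expected:
        raise RuntimeError('expected %d, got %d' % (expected, actual))

body = build_backup_request(volume_id='vol-1', name='nightly')
print(body)
expect_status(202, 202)  # create/restore/delete all expect HTTP 202
```

Note the asymmetry in the real client: most actions expect 202 (accepted, asynchronous), while reads expect 200 and `import_backup` expects 201.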
from __future__ import unicode_literals from django.db.migrations.state import ProjectState from django.utils.datastructures import OrderedSet class MigrationGraph(object): """ Represents the digraph of all migrations in a project. Each migration is a node, and each dependency is an edge. There are no implicit dependencies between numbered migrations - the numbering is merely a convention to aid file listing. Every new numbered migration has a declared dependency to the previous number, meaning that VCS branch merges can be detected and resolved. Migrations files can be marked as replacing another set of migrations - this is to support the "squash" feature. The graph handler isn't responsible for these; instead, the code to load them in here should examine the migration files and if the replaced migrations are all either unapplied or not present, it should ignore the replaced ones, load in just the replacing migration, and repoint any dependencies that pointed to the replaced migrations to point to the replacing one. A node should be a tuple: (app_path, migration_name). The tree special-cases things within an app - namely, root nodes and leaf nodes ignore dependencies to other apps. """ def __init__(self): self.nodes = {} self.dependencies = {} self.dependents = {} def add_node(self, node, implementation): self.nodes[node] = implementation def add_dependency(self, migration, child, parent): if child not in self.nodes: raise NodeNotFoundError( "Migration %s dependencies reference nonexistent child node %r" % (migration, child), child ) if parent not in self.nodes: raise NodeNotFoundError( "Migration %s dependencies reference nonexistent parent node %r" % (migration, parent), parent ) self.dependencies.setdefault(child, set()).add(parent) self.dependents.setdefault(parent, set()).add(child) def forwards_plan(self, node): """ Given a node, returns a list of which previous nodes (dependencies) must be applied, ending with the node itself. 
This is the list you would follow if applying the migrations to a database. """ if node not in self.nodes: raise NodeNotFoundError("Node %r not a valid node" % (node, ), node) return self.dfs(node, lambda x: self.dependencies.get(x, set())) def backwards_plan(self, node): """ Given a node, returns a list of which dependent nodes (dependencies) must be unapplied, ending with the node itself. This is the list you would follow if removing the migrations from a database. """ if node not in self.nodes: raise NodeNotFoundError("Node %r not a valid node" % (node, ), node) return self.dfs(node, lambda x: self.dependents.get(x, set())) def root_nodes(self, app=None): """ Returns all root nodes - that is, nodes with no dependencies inside their app. These are the starting point for an app. """ roots = set() for node in self.nodes: if (not any(key[0] == node[0] for key in self.dependencies.get(node, set())) and (not app or app == node[0])): roots.add(node) return sorted(roots) def leaf_nodes(self, app=None): """ Returns all leaf nodes - that is, nodes with no dependents in their app. These are the "most current" version of an app's schema. Having more than one per app is technically an error, but one that gets handled further up, in the interactive command - it's usually the result of a VCS merge and needs some user input. """ leaves = set() for node in self.nodes: if (not any(key[0] == node[0] for key in self.dependents.get(node, set())) and (not app or app == node[0])): leaves.add(node) return sorted(leaves) def dfs(self, start, get_children): """ Dynamic programming based depth first search, for finding dependencies. 
""" visited = [] visited.append(start) path = [start] stack = sorted(get_children(start)) while stack: node = stack.pop(0) if node in path: raise CircularDependencyError() path.append(node) visited.insert(0, node) children = sorted(get_children(node)) if not children: path = [] stack = children + stack return list(OrderedSet(visited)) def __str__(self): return "Graph: %s nodes, %s edges" % ( len(self.nodes), sum(len(x) for x in self.dependencies.values()), ) def make_state(self, nodes=None, at_end=True, real_apps=None): """ Given a migration node or nodes, returns a complete ProjectState for it. If at_end is False, returns the state before the migration has run. If nodes is not provided, returns the overall most current project state. """ if nodes is None: nodes = list(self.leaf_nodes()) if len(nodes) == 0: return ProjectState() if not isinstance(nodes[0], tuple): nodes = [nodes] plan = [] for node in nodes: for migration in self.forwards_plan(node): if migration not in plan: if not at_end and migration in nodes: continue plan.append(migration) project_state = ProjectState(real_apps=real_apps) for node in plan: project_state = self.nodes[node].mutate_state(project_state) return project_state def __contains__(self, node): return node in self.nodes class CircularDependencyError(Exception): """ Raised when there's an impossible-to-resolve circular dependency. """ pass class NodeNotFoundError(LookupError): """ Raised when an attempt on a node is made that is not available in the graph. """ def __init__(self, message, node): self.message = message self.node = node def __str__(self): return self.message __unicode__ = __str__ def __repr__(self): return "NodeNotFoundError(%r)" % self.node
unknown
codeparrot/codeparrot-clean
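The migration graph's `forwards_plan` produces a dependency-first ordering: every migration appears after all of its dependencies, with cycles rejected. A minimal sketch of the same idea (not Django's implementation, which also handles caching and app-local roots):

```python
def forwards_plan(dependencies, target):
    """Return nodes in dependency-first order, ending with target."""
    plan = []

    def visit(node, path=()):
        if node in path:  # revisiting a node on the current path = a cycle
            raise ValueError('circular dependency at %r' % (node,))
        for dep in sorted(dependencies.get(node, ())):
            visit(dep, path + (node,))
        if node not in plan:
            plan.append(node)

    visit(target)
    return plan

deps = {
    ('app', '0003'): {('app', '0002')},
    ('app', '0002'): {('app', '0001')},
}
print(forwards_plan(deps, ('app', '0003')))
# [('app', '0001'), ('app', '0002'), ('app', '0003')]
```

`backwards_plan` is the same traversal run over the `dependents` map instead of `dependencies`.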
--- c: Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al. SPDX-License-Identifier: curl Title: curl_ws_start_frame Section: 3 Source: libcurl See-also: - curl_easy_getinfo (3) - curl_easy_perform (3) - curl_easy_setopt (3) - curl_ws_recv (3) - libcurl-ws (3) Protocol: - WS Added-in: 8.16.0 --- # NAME curl_ws_start_frame - start a new WebSocket frame # SYNOPSIS ~~~c #include <curl/curl.h> CURLcode curl_ws_start_frame(CURL *curl, unsigned int flags, curl_off_t frame_len); ~~~ # DESCRIPTION Add the WebSocket frame header for the given flags and length to the transfers send buffer for WebSocket encoded data. Intended for use in a CURLOPT_READFUNCTION(3) callback. When using a CURLOPT_READFUNCTION(3) in a WebSocket transfer, any data returned by that function is sent as a *CURLWS_BINARY* frame with the length being the amount of data read. To send larger frames or frames of a different type, call curl_ws_start_frame() from within the read function and then return the data belonging to the frame. The function fails, if a previous frame has not been completely read yet. Also it fails in *CURLWS_RAW_MODE*. The read function in libcurl usually treats a return value of 0 as the end of file indication and stops any further reads. This would prevent sending WebSocket frames of length 0. If the read function calls `curl_ws_start_frame()` however, a return value of 0 is *not* treated as an end of file and libcurl calls the read function again. # FLAGS Supports all flags documented in curl_ws_meta(3). # %PROTOCOLS% # EXAMPLE ~~~c #include <string.h> /* for strlen */ struct read_ctx { CURL *easy; char *message; size_t msg_len; size_t nsent; }; static size_t readcb(char *buf, size_t nitems, size_t buflen, void *p) { struct read_ctx *ctx = p; size_t len = nitems * buflen; size_t left = ctx->msg_len - ctx->nsent; if(!ctx->nsent) { CURLcode result; /* Want to send TEXT frame. 
*/ result = curl_ws_start_frame(ctx->easy, CURLWS_TEXT, (curl_off_t)ctx->msg_len); if(result != CURLE_OK) { fprintf(stderr, "error starting frame: %d\n", result); return CURL_READFUNC_ABORT; } } if(left) { if(left < len) len = left; memcpy(buf, ctx->message + ctx->nsent, len); ctx->nsent += len; return len; } return 0; } int main(void) { CURL *easy; struct read_ctx rctx; CURLcode result; easy = curl_easy_init(); if(!easy) return 1; curl_easy_setopt(easy, CURLOPT_URL, "wss://example.com"); curl_easy_setopt(easy, CURLOPT_READFUNCTION, readcb); /* tell curl that we want to send the payload */ memset(&rctx, 0, sizeof(rctx)); rctx.easy = easy; rctx.message = "Hello, friend!"; rctx.msg_len = strlen(rctx.message); curl_easy_setopt(easy, CURLOPT_READDATA, &rctx); curl_easy_setopt(easy, CURLOPT_UPLOAD, 1L); /* Perform the request, result gets the return code */ result = curl_easy_perform(easy); /* Check for errors */ if(result != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(result)); /* always cleanup */ curl_easy_cleanup(easy); return 0; } ~~~ # %AVAILABILITY% # RETURN VALUE This function returns a CURLcode indicating success or error. CURLE_OK (0) means everything was OK, non-zero means an error occurred, see libcurl-errors(3). If CURLOPT_ERRORBUFFER(3) was set with curl_easy_setopt(3) there can be an error message stored in the error buffer when non-zero is returned. Instead of blocking, the function returns **CURLE_AGAIN**. The correct behavior is then to wait for the socket to signal readability before calling this function again. Any other non-zero return value indicates an error. See the libcurl-errors(3) man page for the full list with descriptions.
unknown
github
https://github.com/curl/curl
docs/libcurl/curl_ws_start_frame.md
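The C read callback in the man page above announces one frame's length up front, then hands the payload back in buffer-sized slices across repeated invocations. The slicing state machine can be sketched in Python (illustrative only; the real callback also calls `curl_ws_start_frame()` on the first invocation):

```python
def chunked_reads(message, bufsize):
    """Simulate successive read-callback invocations: hand out
    buffer-sized slices of one frame's payload until it is exhausted."""
    sent = 0
    chunks = []
    while sent < len(message):
        chunk = message[sent:sent + bufsize]  # at most bufsize bytes per call
        chunks.append(chunk)
        sent += len(chunk)
    return chunks  # a final 0-byte read would end the frame, not the transfer

print(chunked_reads(b"Hello, friend!", 5))
# [b'Hello', b', fri', b'end!']
```

The key point from the man page survives the sketch: once a frame has been started, a 0-byte return marks the end of that frame rather than end of file, so zero-length frames become sendable.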
from __future__ import division, print_function import numpy as np import pyroomacoustics as pra import time try: import pyfftw pyfftw_available = True except ImportError: pyfftw_available = False try: import mkl_fft mkl_available = True except ImportError: mkl_available = False n_trials = 1000 nfft = 128 D = 7 x = np.random.randn(nfft, D).astype('float32') def timing(transform, n_trials): dft = pra.transform.DFT(nfft, D, transform=transform) start_time = time.time() for k in range(n_trials): dft.analysis(x) analysis_time = (time.time()-start_time)/n_trials * 1e6 start_time = time.time() for k in range(n_trials): dft.synthesis() synthesis_time = (time.time()-start_time)/n_trials * 1e6 print("avg %s : %f [1e-6 sec], (analysis, synthesis)=(%f, %f) [1e-6 sec]" % (transform, analysis_time+synthesis_time, analysis_time, synthesis_time)) res = timing('numpy', n_trials) if pyfftw_available: res = timing('fftw', n_trials) if mkl_available: res = timing('mkl', n_trials) """ test against without using class """ print() start_time = time.time() for k in range(n_trials): X = np.fft.rfft(x) analysis_time = (time.time()-start_time)/n_trials * 1e6 start_time = time.time() for k in range(n_trials): x_r = np.fft.irfft(X) synthesis_time = (time.time()-start_time)/n_trials * 1e6 print("avg numpy w/o class : %f [1e-6 sec], (analysis, synthesis)=(%f, %f) [1e-6 sec]" % (analysis_time+synthesis_time, analysis_time, synthesis_time)) if pyfftw_available: # prepare a = pyfftw.empty_aligned([nfft, D], dtype='float32') b = pyfftw.empty_aligned([nfft//2+1, D], dtype='complex64') c = pyfftw.empty_aligned([nfft, D], dtype='float32') forward = pyfftw.FFTW(a, b, axes=(0, )) backward = pyfftw.FFTW(b, c, axes=(0, ), direction='FFTW_BACKWARD') start_time = time.time() for k in range(n_trials): forward() analysis_time = (time.time()-start_time)/n_trials * 1e6 start_time = time.time() for k in range(n_trials): backward() synthesis_time = (time.time()-start_time)/n_trials * 1e6 print("avg fftw w/o class 
: %f [1e-6 sec], (analysis, synthesis)=(%f, %f) [1e-6 sec]" % (analysis_time+synthesis_time, analysis_time, synthesis_time)) if mkl_available: start_time = time.time() for k in range(n_trials): X = mkl_fft.rfft_numpy(x) analysis_time = (time.time()-start_time)/n_trials * 1e6 start_time = time.time() for k in range(n_trials): x_r = mkl_fft.irfft_numpy(X) synthesis_time = (time.time()-start_time)/n_trials * 1e6 print("avg mkl w/o class : %f [1e-6 sec], (analysis, synthesis)=(%f, %f) [1e-6 sec]" % (analysis_time+synthesis_time, analysis_time, synthesis_time))
unknown
codeparrot/codeparrot-clean
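The benchmark above times each transform by averaging `time.time()` deltas over `n_trials` runs and reporting microseconds. A reusable sketch of that harness using `time.perf_counter`, which is a steadier clock for short intervals than `time.time` (hypothetical helper name):

```python
import time

def average_microseconds(fn, n_trials=1000):
    """Average cost of calling fn, in microseconds, mirroring the
    benchmark's (elapsed / n_trials) * 1e6 pattern."""
    start = time.perf_counter()
    for _ in range(n_trials):
        fn()
    return (time.perf_counter() - start) / n_trials * 1e6

cost = average_microseconds(lambda: sum(range(100)), n_trials=100)
print('avg cost: %.3f [1e-6 sec]' % cost)
```

Averaging over many trials, as the benchmark does, amortizes timer resolution and per-call overhead; a warm-up call before timing would additionally exclude one-time setup such as FFTW plan creation.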
# ---------------------------------------------------------------------------- # Copyright (c) 2016-2017, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import collections import os import tempfile import unittest import uuid import qiime2.core.type from qiime2.sdk import Artifact from qiime2.sdk.result import ResultMetadata import qiime2.core.archive as archive from qiime2.core.testing.type import IntSequence1, FourInts, Mapping from qiime2.core.testing.util import get_dummy_plugin, ArchiveTestingMixin class TestArtifact(unittest.TestCase, ArchiveTestingMixin): def setUp(self): # Ignore the returned dummy plugin object, just run this to verify the # plugin exists as the tests rely on it being loaded. get_dummy_plugin() # TODO standardize temporary directories created by QIIME 2 self.test_dir = tempfile.TemporaryDirectory(prefix='qiime2-test-temp-') self.provenance_capture = archive.ImportProvenanceCapture() def tearDown(self): self.test_dir.cleanup() def test_private_constructor(self): with self.assertRaisesRegex( NotImplementedError, 'Artifact constructor.*private.*Artifact.load'): Artifact() # Note on testing strategy below: many of the tests for `_from_view` and # `load` are similar, with the exception that when `load`ing, the # artifact's UUID is known so more specific assertions can be performed. # While these tests appear somewhat redundant, they are important because # they exercise the same operations on Artifact objects constructed from # different sources, whose codepaths have very different internal behavior. # This internal behavior could be tested explicitly but it is safer to test # the public API behavior (e.g. as a user would interact with the object) # in case the internals change. 
def test_from_view(self): artifact = Artifact._from_view(FourInts, [-1, 42, 0, 43], list, self.provenance_capture) self.assertEqual(artifact.type, FourInts) # We don't know what the UUID is because it's generated within # Artifact._from_view. self.assertIsInstance(artifact.uuid, uuid.UUID) self.assertEqual(artifact.view(list), [-1, 42, 0, 43]) # Can produce same view if called again. self.assertEqual(artifact.view(list), [-1, 42, 0, 43]) def test_from_view_different_type_with_multiple_view_types(self): artifact = Artifact._from_view(IntSequence1, [42, 42, 43, -999, 42], list, self.provenance_capture) self.assertEqual(artifact.type, IntSequence1) self.assertIsInstance(artifact.uuid, uuid.UUID) self.assertEqual(artifact.view(list), [42, 42, 43, -999, 42]) self.assertEqual(artifact.view(list), [42, 42, 43, -999, 42]) self.assertEqual(artifact.view(collections.Counter), collections.Counter({42: 3, 43: 1, -999: 1})) self.assertEqual(artifact.view(collections.Counter), collections.Counter({42: 3, 43: 1, -999: 1})) def test_from_view_and_save(self): fp = os.path.join(self.test_dir.name, 'artifact.qza') # Using four-ints data layout because it has multiple files, some of # which are in a nested directory. 
artifact = Artifact._from_view(FourInts, [-1, 42, 0, 43], list, self.provenance_capture) artifact.save(fp) root_dir = str(artifact.uuid) expected = { 'VERSION', 'metadata.yaml', 'data/file1.txt', 'data/file2.txt', 'data/nested/file3.txt', 'data/nested/file4.txt', 'provenance/metadata.yaml', 'provenance/VERSION', 'provenance/action/action.yaml' } self.assertArchiveMembers(fp, root_dir, expected) def test_load(self): saved_artifact = Artifact.import_data(FourInts, [-1, 42, 0, 43]) fp = os.path.join(self.test_dir.name, 'artifact.qza') saved_artifact.save(fp) artifact = Artifact.load(fp) self.assertEqual(artifact.type, FourInts) self.assertEqual(artifact.uuid, saved_artifact.uuid) self.assertEqual(artifact.view(list), [-1, 42, 0, 43]) self.assertEqual(artifact.view(list), [-1, 42, 0, 43]) def test_load_different_type_with_multiple_view_types(self): saved_artifact = Artifact.import_data(IntSequence1, [42, 42, 43, -999, 42]) fp = os.path.join(self.test_dir.name, 'artifact.qza') saved_artifact.save(fp) artifact = Artifact.load(fp) self.assertEqual(artifact.type, IntSequence1) self.assertEqual(artifact.uuid, saved_artifact.uuid) self.assertEqual(artifact.view(list), [42, 42, 43, -999, 42]) self.assertEqual(artifact.view(list), [42, 42, 43, -999, 42]) self.assertEqual(artifact.view(collections.Counter), collections.Counter({42: 3, 43: 1, -999: 1})) self.assertEqual(artifact.view(collections.Counter), collections.Counter({42: 3, 43: 1, -999: 1})) def test_load_and_save(self): fp1 = os.path.join(self.test_dir.name, 'artifact1.qza') fp2 = os.path.join(self.test_dir.name, 'artifact2.qza') artifact = Artifact.import_data(FourInts, [-1, 42, 0, 43]) artifact.save(fp1) artifact = Artifact.load(fp1) # Overwriting its source file works. artifact.save(fp1) # Saving to a new file works. 
artifact.save(fp2) root_dir = str(artifact.uuid) expected = { 'VERSION', 'metadata.yaml', 'data/file1.txt', 'data/file2.txt', 'data/nested/file3.txt', 'data/nested/file4.txt', 'provenance/metadata.yaml', 'provenance/VERSION', 'provenance/action/action.yaml' } self.assertArchiveMembers(fp1, root_dir, expected) root_dir = str(artifact.uuid) expected = { 'VERSION', 'metadata.yaml', 'data/file1.txt', 'data/file2.txt', 'data/nested/file3.txt', 'data/nested/file4.txt', 'provenance/metadata.yaml', 'provenance/VERSION', 'provenance/action/action.yaml' } self.assertArchiveMembers(fp2, root_dir, expected) def test_roundtrip(self): fp1 = os.path.join(self.test_dir.name, 'artifact1.qza') fp2 = os.path.join(self.test_dir.name, 'artifact2.qza') artifact = Artifact.import_data(FourInts, [-1, 42, 0, 43]) artifact.save(fp1) artifact1 = Artifact.load(fp1) artifact1.save(fp2) artifact2 = Artifact.load(fp2) self.assertEqual(artifact1.type, artifact2.type) self.assertEqual(artifact1.format, artifact2.format) self.assertEqual(artifact1.uuid, artifact2.uuid) self.assertEqual(artifact1.view(list), artifact2.view(list)) # double view to make sure multiple views can be taken self.assertEqual(artifact1.view(list), artifact2.view(list)) def test_load_with_archive_filepath_modified(self): # Save an artifact for use in the following test case. fp = os.path.join(self.test_dir.name, 'artifact.qza') Artifact.import_data(FourInts, [-1, 42, 0, 43]).save(fp) # Load the artifact from a filepath then save a different artifact to # the same filepath. Assert that both artifacts produce the correct # views of their data. # # `load` used to be lazy, only extracting data when it needed to (e.g. # when `save` or `view` was called). This was buggy as the filepath # could have been deleted, or worse, modified to contain a different # .qza file. Thus, the wrong archive could be extracted on demand, or # the archive could be missing altogether. 
        # There isn't an easy
        # cross-platform compatible way to solve this problem, so Artifact.load
        # is no longer lazy and always extracts its data immediately. The real
        # motivation for lazy loading was for quick inspection of archives
        # without extracting/copying data, so that API is now provided through
        # Artifact.peek.
        artifact1 = Artifact.load(fp)
        Artifact.import_data(FourInts, [10, 11, 12, 13]).save(fp)
        artifact2 = Artifact.load(fp)

        self.assertEqual(artifact1.view(list), [-1, 42, 0, 43])
        self.assertEqual(artifact2.view(list), [10, 11, 12, 13])

    def test_extract(self):
        fp = os.path.join(self.test_dir.name, 'artifact.qza')
        artifact = Artifact.import_data(FourInts, [-1, 42, 0, 43])
        artifact.save(fp)

        root_dir = str(artifact.uuid)
        output_dir = os.path.join(self.test_dir.name, 'artifact-extract-test')
        result_dir = Artifact.extract(fp, output_dir=output_dir)
        self.assertEqual(result_dir, os.path.join(output_dir, root_dir))

        expected = {
            'VERSION',
            'metadata.yaml',
            'data/file1.txt',
            'data/file2.txt',
            'data/nested/file3.txt',
            'data/nested/file4.txt',
            'provenance/metadata.yaml',
            'provenance/VERSION',
            'provenance/action/action.yaml'
        }

        self.assertExtractedArchiveMembers(output_dir, root_dir, expected)

    def test_peek(self):
        artifact = Artifact.import_data(FourInts, [0, 0, 42, 1000])
        fp = os.path.join(self.test_dir.name, 'artifact.qza')
        artifact.save(fp)

        metadata = Artifact.peek(fp)

        self.assertIsInstance(metadata, ResultMetadata)
        self.assertEqual(metadata.type, 'FourInts')
        self.assertEqual(metadata.uuid, str(artifact.uuid))
        self.assertEqual(metadata.format, 'FourIntsDirectoryFormat')

    def test_import_data_invalid_type(self):
        with self.assertRaisesRegex(TypeError,
                                    'concrete semantic type.*Visualization'):
            Artifact.import_data(qiime2.core.type.Visualization, self.test_dir)

        with self.assertRaisesRegex(TypeError,
                                    'concrete semantic type.*Visualization'):
            Artifact.import_data('Visualization', self.test_dir)

    def test_import_data_with_filepath_multi_file_data_layout(self):
        fp = os.path.join(self.test_dir.name, 'test.txt')
        with open(fp, 'w') as fh:
            fh.write('42\n')

        with self.assertRaisesRegex(ValueError,
                                    "FourIntsDirectoryFormat.*directory"):
            Artifact.import_data(FourInts, fp)

    def test_import_data_with_wrong_number_of_files(self):
        data_dir = os.path.join(self.test_dir.name, 'test')
        os.mkdir(data_dir)
        error_regex = ("Missing.*MappingDirectoryFormat.*mapping.tsv")
        with self.assertRaisesRegex(ValueError, error_regex):
            Artifact.import_data(Mapping, data_dir)

    def test_import_data_with_unrecognized_files(self):
        data_dir = os.path.join(self.test_dir.name, 'test')
        os.mkdir(data_dir)
        with open(os.path.join(data_dir, 'file1.txt'), 'w') as fh:
            fh.write('42\n')
        with open(os.path.join(data_dir, 'file2.txt'), 'w') as fh:
            fh.write('43\n')
        nested = os.path.join(data_dir, 'nested')
        os.mkdir(nested)
        with open(os.path.join(nested, 'file3.txt'), 'w') as fh:
            fh.write('44\n')
        with open(os.path.join(nested, 'foo.txt'), 'w') as fh:
            fh.write('45\n')

        error_regex = ("Unrecognized.*foo.txt.*FourIntsDirectoryFormat")
        with self.assertRaisesRegex(ValueError, error_regex):
            Artifact.import_data(FourInts, data_dir)

    def test_import_data_with_unreachable_path(self):
        with self.assertRaisesRegex(ValueError, "does not exist"):
            Artifact.import_data(IntSequence1,
                                 os.path.join(self.test_dir.name, 'foo.txt'))

        with self.assertRaisesRegex(ValueError, "does not exist"):
            Artifact.import_data(FourInts,
                                 os.path.join(self.test_dir.name, 'bar', ''))

    def test_import_data_with_invalid_format_single_file(self):
        fp = os.path.join(self.test_dir.name, 'foo.txt')
        with open(fp, 'w') as fh:
            fh.write('42\n')
            fh.write('43\n')
            fh.write('abc\n')
            fh.write('123\n')

        error_regex = "foo.txt.*IntSequenceFormat"
        with self.assertRaisesRegex(ValueError, error_regex):
            Artifact.import_data(IntSequence1, fp)

    def test_import_data_with_invalid_format_multi_file(self):
        data_dir = os.path.join(self.test_dir.name, 'test')
        os.mkdir(data_dir)
        with open(os.path.join(data_dir, 'file1.txt'), 'w') as fh:
            fh.write('42\n')
        with open(os.path.join(data_dir, 'file2.txt'), 'w') as fh:
            fh.write('43\n')
        nested = os.path.join(data_dir, 'nested')
        os.mkdir(nested)
        with open(os.path.join(nested, 'file3.txt'), 'w') as fh:
            fh.write('44\n')
        with open(os.path.join(nested, 'file4.txt'), 'w') as fh:
            fh.write('foo\n')

        error_regex = "file4.txt.*SingleIntFormat"
        with self.assertRaisesRegex(ValueError, error_regex):
            Artifact.import_data(FourInts, data_dir)

    def test_import_data_with_filepath(self):
        data_dir = os.path.join(self.test_dir.name, 'test')
        os.mkdir(data_dir)
        # Filename shouldn't matter for single-file case.
        fp = os.path.join(data_dir, 'foo.txt')
        with open(fp, 'w') as fh:
            fh.write('42\n')
            fh.write('43\n')
            fh.write('42\n')
            fh.write('0\n')

        artifact = Artifact.import_data(IntSequence1, fp)

        self.assertEqual(artifact.type, IntSequence1)
        self.assertIsInstance(artifact.uuid, uuid.UUID)
        self.assertEqual(artifact.view(list), [42, 43, 42, 0])

    def test_import_data_with_directory_single_file(self):
        data_dir = os.path.join(self.test_dir.name, 'test')
        os.mkdir(data_dir)
        fp = os.path.join(data_dir, 'ints.txt')
        with open(fp, 'w') as fh:
            fh.write('-1\n')
            fh.write('-2\n')
            fh.write('10\n')
            fh.write('100\n')

        artifact = Artifact.import_data(IntSequence1, data_dir)

        self.assertEqual(artifact.type, IntSequence1)
        self.assertIsInstance(artifact.uuid, uuid.UUID)
        self.assertEqual(artifact.view(list), [-1, -2, 10, 100])

    def test_import_data_with_directory_multi_file(self):
        data_dir = os.path.join(self.test_dir.name, 'test')
        os.mkdir(data_dir)
        with open(os.path.join(data_dir, 'file1.txt'), 'w') as fh:
            fh.write('42\n')
        with open(os.path.join(data_dir, 'file2.txt'), 'w') as fh:
            fh.write('41\n')
        nested = os.path.join(data_dir, 'nested')
        os.mkdir(nested)
        with open(os.path.join(nested, 'file3.txt'), 'w') as fh:
            fh.write('43\n')
        with open(os.path.join(nested, 'file4.txt'), 'w') as fh:
            fh.write('40\n')

        artifact = Artifact.import_data(FourInts, data_dir)

        self.assertEqual(artifact.type, FourInts)
        self.assertIsInstance(artifact.uuid, uuid.UUID)
        self.assertEqual(artifact.view(list), [42, 41, 43, 40])

    def test_eq_identity(self):
        artifact = Artifact.import_data(FourInts, [-1, 42, 0, 43])

        self.assertEqual(artifact, artifact)

    def test_eq_same_uuid(self):
        fp = os.path.join(self.test_dir.name, 'artifact.qza')
        artifact1 = Artifact.import_data(FourInts, [-1, 42, 0, 43])
        artifact1.save(fp)

        artifact2 = Artifact.load(fp)

        self.assertEqual(artifact1, artifact2)

    def test_ne_same_data_different_uuid(self):
        artifact1 = Artifact.import_data(FourInts, [-1, 42, 0, 43])
        artifact2 = Artifact.import_data(FourInts, [-1, 42, 0, 43])

        self.assertNotEqual(artifact1, artifact2)

    def test_ne_different_data_different_uuid(self):
        artifact1 = Artifact.import_data(FourInts, [-1, 42, 0, 43])
        artifact2 = Artifact.import_data(FourInts, [1, 2, 3, 4])

        self.assertNotEqual(artifact1, artifact2)

    def test_ne_subclass_same_uuid(self):
        class ArtifactSubclass(Artifact):
            pass

        fp = os.path.join(self.test_dir.name, 'artifact.qza')
        artifact1 = ArtifactSubclass.import_data(FourInts, [-1, 42, 0, 43])
        artifact1.save(fp)

        artifact2 = Artifact.load(fp)

        self.assertNotEqual(artifact1, artifact2)
        self.assertNotEqual(artifact2, artifact1)

    def test_ne_different_type_same_uuid(self):
        artifact = Artifact.import_data(FourInts, [-1, 42, 0, 43])

        class Faker:
            @property
            def uuid(self):
                return artifact.uuid

        faker = Faker()

        self.assertNotEqual(artifact, faker)


if __name__ == '__main__':
    unittest.main()
unknown
codeparrot/codeparrot-clean
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.net.unix;

import java.io.Closeable;
import java.io.FileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.ReadableByteChannel;

import org.apache.commons.lang3.SystemUtils;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.VisibleForTesting;
import org.apache.hadoop.util.CloseableReferenceCount;
import org.apache.hadoop.util.NativeCodeLoader;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * The implementation of UNIX domain sockets in Java.
 *
 * See {@link DomainSocket} for more information about UNIX domain sockets.
 */
@InterfaceAudience.LimitedPrivate("HDFS")
public class DomainSocket implements Closeable {
  static {
    if (SystemUtils.IS_OS_WINDOWS) {
      loadingFailureReason = "UNIX Domain sockets are not available on Windows.";
    } else if (!NativeCodeLoader.isNativeCodeLoaded()) {
      loadingFailureReason = "libhadoop cannot be loaded.";
    } else {
      String problem;
      try {
        anchorNative();
        problem = null;
      } catch (Throwable t) {
        problem = "DomainSocket#anchorNative got error: " + t.getMessage();
      }
      loadingFailureReason = problem;
    }
  }

  static final Logger LOG = LoggerFactory.getLogger(DomainSocket.class);

  /**
   * True only if we should validate the paths used in
   * {@link DomainSocket#bindAndListen(String)}
   */
  private static boolean validateBindPaths = true;

  /**
   * The reason why DomainSocket is not available, or null if it is available.
   */
  private final static String loadingFailureReason;

  /**
   * Initialize the native library code.
   */
  private static native void anchorNative();

  /**
   * This function is designed to validate that the path chosen for a UNIX
   * domain socket is secure.  A socket path is secure if it doesn't allow
   * unprivileged users to perform a man-in-the-middle attack against it.
   * For example, one way to perform a man-in-the-middle attack would be for
   * a malicious user to move the server socket out of the way and create his
   * own socket in the same place.  Not good.
   *
   * Note that we only check the path once.  It's possible that the
   * permissions on the path could change, perhaps to something more relaxed,
   * immediately after the path passes our validation test-- hence creating a
   * security hole.  However, the purpose of this check is to spot common
   * misconfigurations.  System administrators do not commonly change
   * permissions on these paths while the server is running.
   *
   * For more information on Security exceptions see this wiki page:
   * https://wiki.apache.org/hadoop/SocketPathSecurity
   *
   * @param path the path to validate
   * @param skipComponents the number of starting path components to skip
   *                       validation for (used only for testing)
   */
  @VisibleForTesting
  native static void validateSocketPathSecurity0(String path,
      int skipComponents) throws IOException;

  /**
   * Return the reason why UNIX domain sockets are unavailable.
   *
   * @return loadingFailureReason, or null if they are available.
   */
  public static String getLoadingFailureReason() {
    return loadingFailureReason;
  }

  /**
   * Disable validation of the server bind paths.
   */
  @VisibleForTesting
  public static void disableBindPathValidation() {
    validateBindPaths = false;
  }

  /**
   * Given a path and a port, compute the effective path by replacing
   * occurrences of _PORT with the port.  This is mainly to make it
   * possible to run multiple DataNodes locally for testing purposes.
   *
   * @param path The source path
   * @param port Port number to use
   *
   * @return The effective path
   */
  public static String getEffectivePath(String path, int port) {
    return path.replace("_PORT", String.valueOf(port));
  }

  /**
   * The socket reference count and closed bit.
   */
  final CloseableReferenceCount refCount;

  /**
   * The file descriptor associated with this UNIX domain socket.
   */
  final int fd;

  /**
   * The path associated with this UNIX domain socket.
   */
  private final String path;

  /**
   * The InputStream associated with this socket.
   */
  private final DomainInputStream inputStream = new DomainInputStream();

  /**
   * The OutputStream associated with this socket.
   */
  private final DomainOutputStream outputStream = new DomainOutputStream();

  /**
   * The Channel associated with this socket.
   */
  private final DomainChannel channel = new DomainChannel();

  private DomainSocket(String path, int fd) {
    this.refCount = new CloseableReferenceCount();
    this.fd = fd;
    this.path = path;
  }

  private static native int bind0(String path) throws IOException;

  private void unreference(boolean checkClosed) throws ClosedChannelException {
    if (checkClosed) {
      refCount.unreferenceCheckClosed();
    } else {
      refCount.unreference();
    }
  }

  /**
   * Create a new DomainSocket listening on the given path.
   *
   * @param path         The path to bind and listen on.
   * @return             The new DomainSocket.
   * @throws IOException raised on errors performing I/O.
   */
  public static DomainSocket bindAndListen(String path) throws IOException {
    if (loadingFailureReason != null) {
      throw new UnsupportedOperationException(loadingFailureReason);
    }
    if (validateBindPaths) {
      validateSocketPathSecurity0(path, 0);
    }
    int fd = bind0(path);
    return new DomainSocket(path, fd);
  }

  /**
   * Create a pair of UNIX domain sockets which are connected to each other
   * by calling socketpair(2).
   *
   * @return             An array of two UNIX domain sockets connected to
   *                     each other.
   * @throws IOException on error.
   */
  public static DomainSocket[] socketpair() throws IOException {
    int fds[] = socketpair0();
    return new DomainSocket[] {
      new DomainSocket("(anonymous0)", fds[0]),
      new DomainSocket("(anonymous1)", fds[1])
    };
  }

  private static native int[] socketpair0() throws IOException;

  private static native int accept0(int fd) throws IOException;

  /**
   * Accept a new UNIX domain connection.
   *
   * This method can only be used on sockets that were bound with bind().
   *
   * @return             The new connection.
   * @throws IOException If there was an I/O error performing the accept--
   *                     such as the socket being closed from under us.
   *                     Particularly when the accept is timed out, it throws
   *                     SocketTimeoutException.
   */
  public DomainSocket accept() throws IOException {
    refCount.reference();
    boolean exc = true;
    try {
      DomainSocket ret = new DomainSocket(path, accept0(fd));
      exc = false;
      return ret;
    } finally {
      unreference(exc);
    }
  }

  private static native int connect0(String path) throws IOException;

  /**
   * Create a new DomainSocket connected to the given path.
   *
   * @param path         The path to connect to.
   * @throws IOException If there was an I/O error performing the connect.
   *
   * @return             The new DomainSocket.
   */
  public static DomainSocket connect(String path) throws IOException {
    if (loadingFailureReason != null) {
      throw new UnsupportedOperationException(loadingFailureReason);
    }
    int fd = connect0(path);
    return new DomainSocket(path, fd);
  }

  /**
   * Return true if the file descriptor is currently open.
   *
   * @return True if the file descriptor is currently open.
   */
  public boolean isOpen() {
    return refCount.isOpen();
  }

  /**
   * @return The socket path.
   */
  public String getPath() {
    return path;
  }

  /**
   * @return The socket InputStream
   */
  public DomainInputStream getInputStream() {
    return inputStream;
  }

  /**
   * @return The socket OutputStream
   */
  public DomainOutputStream getOutputStream() {
    return outputStream;
  }

  /**
   * @return The socket Channel
   */
  public DomainChannel getChannel() {
    return channel;
  }

  public static final int SEND_BUFFER_SIZE = 1;
  public static final int RECEIVE_BUFFER_SIZE = 2;
  public static final int SEND_TIMEOUT = 3;
  public static final int RECEIVE_TIMEOUT = 4;

  private static native void setAttribute0(int fd, int type, int val)
      throws IOException;

  public void setAttribute(int type, int size) throws IOException {
    refCount.reference();
    boolean exc = true;
    try {
      setAttribute0(fd, type, size);
      exc = false;
    } finally {
      unreference(exc);
    }
  }

  private native int getAttribute0(int fd, int type) throws IOException;

  public int getAttribute(int type) throws IOException {
    refCount.reference();
    int attribute;
    boolean exc = true;
    try {
      attribute = getAttribute0(fd, type);
      exc = false;
      return attribute;
    } finally {
      unreference(exc);
    }
  }

  private static native void close0(int fd) throws IOException;

  private static native void closeFileDescriptor0(FileDescriptor fd)
      throws IOException;

  private static native void shutdown0(int fd) throws IOException;

  /**
   * Close the Server Socket without checking refCount.
   * When the Server Socket is blocked on accept(), its refCount is 1, so a
   * plain close() call would be stuck in the reference-count check loop.
   * @param force if true, do not check refCount before closing the socket.
   * @throws IOException raised on errors performing I/O.
   */
  public void close(boolean force) throws IOException {
    // Set the closed bit on this DomainSocket
    int count;
    try {
      count = refCount.setClosed();
    } catch (ClosedChannelException e) {
      // Someone else already closed the DomainSocket.
      return;
    }
    boolean interrupted = false;
    if (force) {
      try {
        // Calling shutdown on the socket will interrupt blocking system
        // calls like accept, write, and read that are going on in a
        // different thread.
        shutdown0(fd);
      } catch (IOException e) {
        LOG.error("shutdown error: ", e);
      }
    } else {
      // Wait for all references to go away
      boolean didShutdown = false;
      while (count > 0) {
        if (!didShutdown) {
          try {
            // Calling shutdown on the socket will interrupt blocking system
            // calls like accept, write, and read that are going on in a
            // different thread.
            shutdown0(fd);
          } catch (IOException e) {
            LOG.error("shutdown error: ", e);
          }
          didShutdown = true;
        }
        try {
          Thread.sleep(10);
        } catch (InterruptedException e) {
          interrupted = true;
        }
        count = refCount.getReferenceCount();
      }
    }

    // At this point, nobody has a reference to the file descriptor,
    // and nobody will be able to get one in the future either.
    // We now call close(2) on the file descriptor.
    // After this point, the file descriptor number will be reused by
    // something else.  Although this DomainSocket object continues to hold
    // the old file descriptor number (it's a final field), we never use it
    // again because this DomainSocket is closed.
    close0(fd);
    if (interrupted) {
      Thread.currentThread().interrupt();
    }
  }

  /**
   * Close the Socket.
   */
  @Override
  public void close() throws IOException {
    close(false);
  }

  /**
   * Call shutdown(SHUT_RDWR) on the UNIX domain socket.
   *
   * @throws IOException raised on errors performing I/O.
   */
  public void shutdown() throws IOException {
    refCount.reference();
    boolean exc = true;
    try {
      shutdown0(fd);
      exc = false;
    } finally {
      unreference(exc);
    }
  }

  private native static void sendFileDescriptors0(int fd,
      FileDescriptor descriptors[],
      byte jbuf[], int offset, int length) throws IOException;

  /**
   * Send some FileDescriptor objects to the process on the other side of this
   * socket.
   *
   * @param descriptors  The file descriptors to send.
   * @param jbuf         Some bytes to send.  You must send at least
   *                     one byte.
   * @param offset       The offset in the jbuf array to start at.
   * @param length       Length of the jbuf array to use.
   * @throws IOException raised on errors performing I/O.
   */
  public void sendFileDescriptors(FileDescriptor descriptors[],
      byte jbuf[], int offset, int length) throws IOException {
    refCount.reference();
    boolean exc = true;
    try {
      sendFileDescriptors0(fd, descriptors, jbuf, offset, length);
      exc = false;
    } finally {
      unreference(exc);
    }
  }

  private static native int receiveFileDescriptors0(int fd,
      FileDescriptor[] descriptors,
      byte[] buf, int offset, int length) throws IOException;

  /**
   * Receive some FileDescriptor objects from the process on the other side of
   * this socket, and wrap them in FileInputStream objects.
   *
   * @param streams      input stream.
   * @param buf          input buf.
   * @param offset       input offset.
   * @param length       input length.
   * @return             wrap them in FileInputStream objects.
   * @throws IOException raised on errors performing I/O.
   */
  public int recvFileInputStreams(FileInputStream[] streams, byte buf[],
      int offset, int length) throws IOException {
    FileDescriptor descriptors[] = new FileDescriptor[streams.length];
    boolean success = false;
    for (int i = 0; i < streams.length; i++) {
      streams[i] = null;
    }
    refCount.reference();
    try {
      int ret = receiveFileDescriptors0(fd, descriptors, buf, offset, length);
      for (int i = 0, j = 0; i < descriptors.length; i++) {
        if (descriptors[i] != null) {
          streams[j++] = new FileInputStream(descriptors[i]);
          descriptors[i] = null;
        }
      }
      success = true;
      return ret;
    } finally {
      if (!success) {
        for (int i = 0; i < descriptors.length; i++) {
          if (descriptors[i] != null) {
            try {
              closeFileDescriptor0(descriptors[i]);
            } catch (Throwable t) {
              LOG.warn(t.toString());
            }
          } else if (streams[i] != null) {
            try {
              streams[i].close();
            } catch (Throwable t) {
              LOG.warn(t.toString());
            } finally {
              streams[i] = null;
            }
          }
        }
      }
      unreference(!success);
    }
  }

  private native static int readArray0(int fd, byte b[], int off, int len)
      throws IOException;

  private native static int available0(int fd) throws IOException;

  private static native void write0(int fd, int b) throws IOException;

  private static native void writeArray0(int fd, byte b[], int offset,
      int length) throws IOException;

  private native static int readByteBufferDirect0(int fd, ByteBuffer dst,
      int position, int remaining) throws IOException;

  /**
   * Input stream for UNIX domain sockets.
   */
  @InterfaceAudience.LimitedPrivate("HDFS")
  public class DomainInputStream extends InputStream {
    @Override
    public int read() throws IOException {
      refCount.reference();
      boolean exc = true;
      try {
        byte b[] = new byte[1];
        int ret = DomainSocket.readArray0(DomainSocket.this.fd, b, 0, 1);
        exc = false;
        return (ret >= 0) ? b[0] : -1;
      } finally {
        unreference(exc);
      }
    }

    @Override
    public int read(byte b[], int off, int len) throws IOException {
      refCount.reference();
      boolean exc = true;
      try {
        int nRead = DomainSocket.readArray0(DomainSocket.this.fd, b, off, len);
        exc = false;
        return nRead;
      } finally {
        unreference(exc);
      }
    }

    @Override
    public int available() throws IOException {
      refCount.reference();
      boolean exc = true;
      try {
        int nAvailable = DomainSocket.available0(DomainSocket.this.fd);
        exc = false;
        return nAvailable;
      } finally {
        unreference(exc);
      }
    }

    @Override
    public void close() throws IOException {
      DomainSocket.this.close();
    }
  }

  /**
   * Output stream for UNIX domain sockets.
   */
  @InterfaceAudience.LimitedPrivate("HDFS")
  public class DomainOutputStream extends OutputStream {
    @Override
    public void close() throws IOException {
      DomainSocket.this.close();
    }

    @Override
    public void write(int val) throws IOException {
      refCount.reference();
      boolean exc = true;
      try {
        byte b[] = new byte[1];
        b[0] = (byte)val;
        DomainSocket.writeArray0(DomainSocket.this.fd, b, 0, 1);
        exc = false;
      } finally {
        unreference(exc);
      }
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
      refCount.reference();
      boolean exc = true;
      try {
        DomainSocket.writeArray0(DomainSocket.this.fd, b, off, len);
        exc = false;
      } finally {
        unreference(exc);
      }
    }
  }

  @InterfaceAudience.LimitedPrivate("HDFS")
  public class DomainChannel implements ReadableByteChannel {
    @Override
    public boolean isOpen() {
      return DomainSocket.this.isOpen();
    }

    @Override
    public void close() throws IOException {
      DomainSocket.this.close();
    }

    @Override
    public int read(ByteBuffer dst) throws IOException {
      refCount.reference();
      boolean exc = true;
      try {
        int nread = 0;
        if (dst.isDirect()) {
          nread = DomainSocket.readByteBufferDirect0(DomainSocket.this.fd,
              dst, dst.position(), dst.remaining());
        } else if (dst.hasArray()) {
          nread = DomainSocket.readArray0(DomainSocket.this.fd,
              dst.array(), dst.position() + dst.arrayOffset(),
              dst.remaining());
        } else {
          throw new AssertionError("we don't support " +
              "using ByteBuffers that aren't either direct or backed by " +
              "arrays");
        }
        if (nread > 0) {
          dst.position(dst.position() + nread);
        }
        exc = false;
        return nread;
      } finally {
        unreference(exc);
      }
    }
  }

  @Override
  public String toString() {
    return String.format("DomainSocket(fd=%d,path=%s)", fd, path);
  }
}
java
github
https://github.com/apache/hadoop
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocket.java
import numpy as np
import pytest

from pandas import (
    DataFrame,
    Index,
    MultiIndex,
)
import pandas._testing as tm


class TestDataFrameRenameAxis:
    def test_rename_axis_inplace(self, float_frame):
        # GH#15704
        expected = float_frame.rename_axis("foo")
        result = float_frame.copy()
        return_value = no_return = result.rename_axis("foo", inplace=True)
        assert return_value is None

        assert no_return is None
        tm.assert_frame_equal(result, expected)

        expected = float_frame.rename_axis("bar", axis=1)
        result = float_frame.copy()
        return_value = no_return = result.rename_axis("bar", axis=1, inplace=True)
        assert return_value is None

        assert no_return is None
        tm.assert_frame_equal(result, expected)

    def test_rename_axis_with_allows_duplicate_labels_false(self):
        # GH#44958
        df = DataFrame([[1, 2], [3, 4]], columns=["a", "b"]).set_flags(
            allows_duplicate_labels=False
        )
        result = df.rename_axis("idx", axis=0)
        expected = DataFrame(
            [[1, 2], [3, 4]], index=Index([0, 1], name="idx"), columns=["a", "b"]
        )
        tm.assert_frame_equal(result, expected, check_flags=False)

    def test_rename_axis_raises(self):
        # GH#17833
        df = DataFrame({"A": [1, 2], "B": [1, 2]})
        with pytest.raises(ValueError, match="Use `.rename`"):
            df.rename_axis(id, axis=0)

        with pytest.raises(ValueError, match="Use `.rename`"):
            df.rename_axis({0: 10, 1: 20}, axis=0)

        with pytest.raises(ValueError, match="Use `.rename`"):
            df.rename_axis(id, axis=1)

        with pytest.raises(ValueError, match="Use `.rename`"):
            df["A"].rename_axis(id)

    def test_rename_axis_mapper(self):
        # GH#19978
        mi = MultiIndex.from_product([["a", "b", "c"], [1, 2]], names=["ll", "nn"])
        df = DataFrame(
            {"x": list(range(len(mi))), "y": [i * 10 for i in range(len(mi))]}, index=mi
        )

        # Test for rename of the Index object of columns
        result = df.rename_axis("cols", axis=1)
        tm.assert_index_equal(result.columns, Index(["x", "y"], name="cols"))

        # Test for rename of the Index object of columns using dict
        result = result.rename_axis(columns={"cols": "new"}, axis=1)
        tm.assert_index_equal(result.columns, Index(["x", "y"], name="new"))

        # Test for renaming index using dict
        result = df.rename_axis(index={"ll": "foo"})
        assert result.index.names == ["foo", "nn"]

        # Test for renaming index using a function
        result = df.rename_axis(index=str.upper, axis=0)
        assert result.index.names == ["LL", "NN"]

        # Test for renaming index providing complete list
        result = df.rename_axis(index=["foo", "goo"])
        assert result.index.names == ["foo", "goo"]

        # Test for changing index and columns at same time
        sdf = df.reset_index().set_index("nn").drop(columns=["ll", "y"])
        result = sdf.rename_axis(index="foo", columns="meh")
        assert result.index.name == "foo"
        assert result.columns.name == "meh"

        # Test different error cases
        with pytest.raises(TypeError, match="Must pass"):
            df.rename_axis(index="wrong")

        with pytest.raises(ValueError, match="Length of names"):
            df.rename_axis(index=["wrong"])

        with pytest.raises(TypeError, match="bogus"):
            df.rename_axis(bogus=None)

    @pytest.mark.parametrize(
        "kwargs, rename_index, rename_columns",
        [
            ({"mapper": None, "axis": 0}, True, False),
            ({"mapper": None, "axis": 1}, False, True),
            ({"index": None}, True, False),
            ({"columns": None}, False, True),
            ({"index": None, "columns": None}, True, True),
            ({}, False, False),
        ],
    )
    def test_rename_axis_none(self, kwargs, rename_index, rename_columns):
        # GH 25034
        index = Index(list("abc"), name="foo")
        columns = Index(["col1", "col2"], name="bar")
        data = np.arange(6).reshape(3, 2)
        df = DataFrame(data, index, columns)

        result = df.rename_axis(**kwargs)
        expected_index = index.rename(None) if rename_index else index
        expected_columns = columns.rename(None) if rename_columns else columns
        expected = DataFrame(data, expected_index, expected_columns)
        tm.assert_frame_equal(result, expected)
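The `rename_axis` behaviors exercised by the tests above can be summarized in a minimal standalone sketch. This is a quick illustration of the public API, not part of the pandas test suite; it assumes pandas is installed:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Name the row index; the underlying data is unchanged.
named = df.rename_axis("idx")
print(named.index.name)         # idx

# Name the column axis instead.
named_cols = df.rename_axis("cols", axis=1)
print(named_cols.columns.name)  # cols

# Passing None for an axis removes its existing name (GH 25034 above).
cleared = named.rename_axis(index=None)
print(cleared.index.name)       # None
```

The non-inplace calls return new frames, which is why the tests assert that the `inplace=True` variant returns `None` instead.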
python
github
https://github.com/pandas-dev/pandas
pandas/tests/frame/methods/test_rename_axis.py
#!/usr/bin/env python

"""
Test suite for Morphism classes.
"""

import inspect
import os
import re

import pytest

from ..rom import ROM
from ..morphism import Morphism
from ..thousandcurses.codec import ASCII, ReverseASCII

package_dir = os.path.dirname(os.path.abspath(__file__))


@pytest.mark.skip()
class TestASCIIMorphism:
    def setup(self):
        self.morphism = Morphism(b"abcdefghi", ASCII)

    def test_repr(self):
        rpr = self.morphism.__repr__()
        repr_regex = r"<pyromhackit\.morphism.Morphism object at 0x(\d|\w)+>"
        assert re.search(repr_regex, rpr)

    def test_str(self):
        expected = ("Morphism(b'abcdefghi',\n"
                    "         'abcdefghi')")
        assert str(self.morphism) == expected

    @pytest.mark.parametrize("arg, expected", [
        (0, ord(b'a')),
        (slice(0, 1), (b'a', 'a')),
        (slice(0, 2), (b'ab', 'ab')),
        (slice(6, 9), (b'ghi', 'ghi')),
        (slice(6, 9), (b'ghi', 'ghi')),
        ((6, 9), TypeError),
    ])
    def test_subscripting(self, arg, expected):
        if inspect.isclass(expected) and issubclass(expected, Exception):
            with pytest.raises(TypeError) as excinfo:
                self.morphism[arg]
        else:
            returned = self.morphism[arg]
            if isinstance(expected, int):
                assert returned == expected
            elif isinstance(expected, tuple):
                bytestring, string = expected
                assert returned.src == ROM(bytestring)
                assert returned.dst == string
            else:
                raise Exception("Something is wrong with the test.")

    @pytest.mark.parametrize("searchitem, expected", [
        ("", (0, 0)),
        ("a", (0, 0)),
        ("abcdefghi", (0, 0)),
        ("bcdefghi", (1, 1)),
        ("fgh", (5, 5)),
    ])
    def test_index(self, searchitem, expected):
        returned = self.morphism.index(searchitem)
        assert returned == expected

    @pytest.mark.parametrize("srcindex, expected", [
        (0, {0}),
        (1, {1}),
        (8, {8}),
    ])
    def test_source_diffusion(self, srcindex, expected):
        assert self.morphism.source_diffusion(srcindex) == expected

    @pytest.mark.parametrize("byteidx, stridx, c, expected", [
        (0, 0, 'K', ord('K')),
        (0, 1, 'K', None),
        (8, 8, 'a', ord('a')),
        (0, 0, 'ä', None),
    ])
    @pytest.mark.skip()
    def test_impose_character(self, byteidx, stridx, c, expected):
        if expected is None:
            assert self.morphism.impose_character(byteidx, stridx, c) is None
        else:
            returned = self.morphism.impose_character(byteidx, stridx, c)
            msg = "Could not find a value of r[{}] so that s[{}] == {}".format(
                byteidx, stridx, repr(c))
            assert isinstance(returned, int), msg
            assert returned == expected

    @pytest.mark.skip("TODO (?)")
    def test_impose_decoding(self):
        newdecoder = self.morphism.impose_decoding(97, 'b')
        assert newdecoder


@pytest.mark.skip()
class TestReverseASCIIMorphism:
    def setup(self):
        self.morphism = Morphism(b"hello", ReverseASCII)

    def test_str(self):
        expected = ("Morphism(b'hello',\n"
                    "         'olleh')")
        assert str(self.morphism) == expected

    @pytest.mark.parametrize("searchitem, expected", [
        ("", (0, 0)),
        ("h", (0, 0)),
        ("ll", (2, 0)),
        ("bcdefghi", (1, 1)),
        ("fgh", (5, 5)),
    ])
    @pytest.mark.skip()
    def test_index(self, searchitem, expected):
        returned = self.morphism.index(searchitem)
        assert returned == expected

    @pytest.mark.parametrize("srcindexpath, expected", [
        ((0,), {(4,)}),
        # (1, {3}),
        # (2, {2}),
        # (3, {1}),
        # (4, {0}),
    ])
    @pytest.mark.skip()
    def test_source_tree_diffusion(self, srcindexpath, expected):
        assert self.morphism.source_tree_diffusion(srcindexpath) == expected

    @pytest.mark.parametrize("srcindex, expected", [
        (0, {4}),
        # (1, {3}),
        # (2, {2}),
        # (3, {1}),
        # (4, {0}),
    ])
    @pytest.mark.skip()
    def test_source_diffusion(self, srcindex, expected):
        assert self.morphism.source_diffusion(srcindex) == expected
unknown
codeparrot/codeparrot-clean
---
navigation_title: "Circle"
mapped_pages:
  - https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-circle-processor.html
---

# Circle processor [ingest-circle-processor]

Converts circle definitions of shapes to regular polygons which approximate them.

$$$circle-processor-options$$$

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `field` | yes | - | The field to interpret as a circle. Either a string in WKT format or a map for GeoJSON. |
| `target_field` | no | `field` | The field to assign the polygon shape to, by default `field` is updated in-place |
| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document |
| `error_distance` | yes | - | The difference between the resulting inscribed distance from center to side and the circle’s radius (measured in meters for `geo_shape`, unit-less for `shape`) |
| `shape_type` | yes | - | Which field mapping type is to be used when processing the circle: `geo_shape` or `shape` |
| `description` | no | - | Description of the processor. Useful for describing the purpose of the processor or its configuration. |
| `if` | no | - | Conditionally execute the processor. See [Conditionally run a processor](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md#conditionally-run-processor). |
| `ignore_failure` | no | `false` | Ignore failures for the processor. See [Handling pipeline failures](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md#handling-pipeline-failures). |
| `on_failure` | no | - | Handle failures for the processor. See [Handling pipeline failures](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md#handling-pipeline-failures). |
| `tag` | no | - | Identifier for the processor. Useful for debugging and metrics. |

![error distance](images/error_distance.png "")

```console
PUT circles
{
  "mappings": {
    "properties": {
      "circle": {
        "type": "geo_shape"
      }
    }
  }
}

PUT _ingest/pipeline/polygonize_circles
{
  "description": "translate circle to polygon",
  "processors": [
    {
      "circle": {
        "field": "circle",
        "error_distance": 28.0,
        "shape_type": "geo_shape"
      }
    }
  ]
}
```

Using the above pipeline, we can attempt to index a document into the `circles` index. The circle can be represented as either a WKT circle or a GeoJSON circle. The resulting polygon will be represented and indexed using the same format as the input circle. WKT will be translated to a WKT polygon, and GeoJSON circles will be translated to GeoJSON polygons.

::::{important}
Circles that contain a pole are not supported.
::::

## Example: Circle defined in Well Known Text [_example_circle_defined_in_well_known_text]

In this example a circle defined in WKT format is indexed

```console
PUT circles/_doc/1?pipeline=polygonize_circles
{
  "circle": "CIRCLE (30 10 40)"
}

GET circles/_doc/1
```

% TEST[continued]

The response from the above index request:

```console-result
{
  "found": true,
  "_index": "circles",
  "_id": "1",
  "_version": 1,
  "_seq_no": 22,
  "_primary_term": 1,
  "_source": {
    "circle": "POLYGON ((30.000365257263184 10.0, 30.000111397193788 10.00034284530941, 29.999706043744222 10.000213571721195, 29.999706043744222 9.999786428278805, 30.000111397193788 9.99965715469059, 30.000365257263184 10.0))"
  }
}
```

% TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term": 1/"_primary_term" : $body._primary_term/]

## Example: Circle defined in GeoJSON [_example_circle_defined_in_geojson]

In this example a circle defined in GeoJSON format is indexed

```console
PUT circles/_doc/2?pipeline=polygonize_circles
{
  "circle": {
    "type": "circle",
    "radius": "40m",
    "coordinates": [30, 10]
  }
}

GET circles/_doc/2
```

% TEST[continued]

The response from the above index request:

```console-result
{
  "found": true,
  "_index": "circles",
  "_id": "2",
  "_version": 1,
  "_seq_no": 22,
  "_primary_term": 1,
  "_source": {
    "circle": {
      "coordinates": [
        [
          [30.000365257263184, 10.0],
          [30.000111397193788, 10.00034284530941],
          [29.999706043744222, 10.000213571721195],
          [29.999706043744222, 9.999786428278805],
          [30.000111397193788, 9.99965715469059],
          [30.000365257263184, 10.0]
        ]
      ],
      "type": "Polygon"
    }
  }
}
```

% TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term": 1/"_primary_term" : $body._primary_term/]

## Notes on Accuracy [circle-processor-notes]

Accuracy of the polygon that represents the circle is defined as `error_distance`. The smaller this difference is, the closer to a perfect circle the polygon is.

Below is a table that aims to help capture how the radius of the circle affects the resulting number of sides of the polygon given different inputs. The minimum number of sides is `4` and the maximum is `1000`.

$$$circle-processor-accuracy$$$

| error_distance | radius in meters | number of sides of polygon |
| --- | --- | --- |
| 1.00 | 1.0 | 4 |
| 1.00 | 10.0 | 14 |
| 1.00 | 100.0 | 45 |
| 1.00 | 1000.0 | 141 |
| 1.00 | 10000.0 | 445 |
| 1.00 | 100000.0 | 1000 |
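The accuracy table above can be reproduced with a small back-of-the-envelope calculation. The sketch below is a reverse-engineered approximation fitted to the table, not Elasticsearch's actual implementation: it chooses the smallest regular polygon whose side subtends an angle small enough that the deviation stays within `error_distance`, clamped to the documented [4, 1000] range.

```python
import math

def polygon_sides(error_distance, radius, min_sides=4, max_sides=1000):
    """Estimate the number of polygon sides used to approximate a circle.

    Matches the documented accuracy table via n = ceil(2*pi / acos(1 - e/r)),
    clamped to [min_sides, max_sides]. This is an illustrative fit, not the
    processor's source code.
    """
    # Clamp the cosine argument so very large error distances stay in domain.
    cos_term = max(-1.0, 1.0 - error_distance / radius)
    n = math.ceil(2 * math.pi / math.acos(cos_term))
    return max(min_sides, min(max_sides, n))

# Reproduce the table rows for error_distance = 1.00:
for r in (1.0, 10.0, 100.0, 1000.0, 10000.0, 100000.0):
    print(r, polygon_sides(1.0, r))
```

Running this prints 4, 14, 45, 141, 445, and 1000 for the six radii, matching the table.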
unknown
github
https://github.com/elastic/elasticsearch
docs/reference/enrich-processor/ingest-circle-processor.md
# -*- coding: utf-8 -*-
#
# Copyright 2012-2015 Spotify AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
'''
Supports sending emails when tasks fail.

This needs some more documentation.
See :doc:`/configuration` for configuration options.
In particular using the config `error-email` should set up Luigi so that it
will send emails when tasks fail.

::

    [core]
    error-email: foo@bar.baz

TODO: Eventually, all email configuration should move into the [email]
section.
'''

import logging
import socket
import sys
import textwrap

from luigi import configuration
import luigi.task
import luigi.parameter

logger = logging.getLogger("luigi-interface")

DEFAULT_CLIENT_EMAIL = 'luigi-client@%s' % socket.gethostname()
DEBUG = False


class TestNotificationsTask(luigi.task.Task):
    """
    You may invoke this task to quickly check if you correctly have
    setup your notifications Configuration. You can run:

    .. code:: console

            $ luigi TestNotifications --local-scheduler

    And then check your email inbox to see if you got an error email or
    any other kind of notifications that you expected.
    """
    raise_in_complete = luigi.parameter.BoolParameter(
        description='If true, fail in complete() instead of run()')

    def run(self):
        raise ValueError('Testing notifications triggering')

    def complete(self):
        if self.raise_in_complete:
            raise ValueError('Testing notifications triggering')
        return False


def email_type():
    return configuration.get_config().get('core', 'email-type', 'plain')


def generate_email(sender, subject, message, recipients, image_png):
    import email
    import email.mime
    import email.mime.multipart
    import email.mime.text
    import email.mime.image

    msg_root = email.mime.multipart.MIMEMultipart('related')

    msg_text = email.mime.text.MIMEText(message, email_type())
    msg_text.set_charset('utf-8')
    msg_root.attach(msg_text)

    if image_png:
        with open(image_png, 'rb') as fp:
            msg_image = email.mime.image.MIMEImage(fp.read(), 'png')
        msg_root.attach(msg_image)

    msg_root['Subject'] = subject
    msg_root['From'] = sender
    msg_root['To'] = ','.join(recipients)

    return msg_root


def wrap_traceback(traceback):
    """
    For internal use only (until further notice)
    """
    if email_type() == 'html':
        try:
            from pygments import highlight
            from pygments.lexers import PythonTracebackLexer
            from pygments.formatters import HtmlFormatter
            with_pygments = True
        except ImportError:
            with_pygments = False

        if with_pygments:
            formatter = HtmlFormatter(noclasses=True)
            wrapped = highlight(traceback, PythonTracebackLexer(), formatter)
        else:
            wrapped = '<pre>%s</pre>' % traceback
    else:
        wrapped = traceback

    return wrapped


def send_email_smtp(config, sender, subject, message, recipients, image_png):
    import smtplib

    smtp_ssl = config.getboolean('core', 'smtp_ssl', False)
    smtp_without_tls = config.getboolean('core', 'smtp_without_tls', False)
    smtp_host = config.get('core', 'smtp_host', 'localhost')
    smtp_port = config.getint('core', 'smtp_port', 0)
    smtp_local_hostname = config.get('core', 'smtp_local_hostname', None)
    smtp_timeout = config.getfloat('core', 'smtp_timeout', None)
    kwargs = dict(host=smtp_host, port=smtp_port,
                  local_hostname=smtp_local_hostname)
    if smtp_timeout:
        kwargs['timeout'] = smtp_timeout

    smtp_login = config.get('core', 'smtp_login', None)
    smtp_password = config.get('core', 'smtp_password', None)
    smtp = smtplib.SMTP(**kwargs) if not smtp_ssl else smtplib.SMTP_SSL(**kwargs)
    smtp.ehlo_or_helo_if_needed()
    if smtp.has_extn('starttls') and not smtp_without_tls:
        smtp.starttls()
    if smtp_login and smtp_password:
        smtp.login(smtp_login, smtp_password)

    msg_root = generate_email(sender, subject, message, recipients, image_png)

    smtp.sendmail(sender, recipients, msg_root.as_string())


def send_email_ses(config, sender, subject, message, recipients, image_png):
    """
    Sends notification through AWS SES.

    Does not handle access keys. Use either
    1/ configuration file
    2/ EC2 instance profile

    See also http://boto3.readthedocs.org/en/latest/guide/configuration.html.
    """
    from boto3 import client as boto3_client

    client = boto3_client('ses')

    msg_root = generate_email(sender, subject, message, recipients, image_png)
    response = client.send_raw_email(Source=sender,
                                     Destinations=recipients,
                                     RawMessage={'Data': msg_root.as_string()})

    logger.debug(("Message sent to SES.\nMessageId: {},\nRequestId: {},\n"
                  "HTTPSStatusCode: {}").format(response['MessageId'],
                                                response['ResponseMetadata']['RequestId'],
                                                response['ResponseMetadata']['HTTPStatusCode']))


def send_email_sendgrid(config, sender, subject, message, recipients, image_png):
    import sendgrid
    client = sendgrid.SendGridClient(config.get('email', 'SENDGRID_USERNAME', None),
                                     config.get('email', 'SENDGRID_PASSWORD', None),
                                     raise_errors=True)
    to_send = sendgrid.Mail()
    to_send.add_to(recipients)
    to_send.set_from(sender)
    to_send.set_subject(subject)
    if email_type() == 'html':
        to_send.set_html(message)
    else:
        to_send.set_text(message)
    if image_png:
        to_send.add_attachment(image_png)

    client.send(to_send)


def _email_disabled():
    if email_type() == 'none':
        logger.info("Not sending email when email-type is none")
        return True
    elif configuration.get_config().getboolean('email', 'force-send', False):
        return False
    elif sys.stdout.isatty():
        logger.info("Not sending email when running from a tty")
        return True
    elif DEBUG:
        logger.info("Not sending email when running in debug mode")
        # Explicitly return True here; falling through would return None
        # (falsy) and send email despite the log message above.
        return True
    else:
        return False


def send_email_sns(config, sender, subject, message, topic_ARN, image_png):
    """
    Sends notification through AWS SNS. Takes Topic ARN from recipients.

    Does not handle access keys. Use either
    1/ configuration file
    2/ EC2 instance profile

    See also http://boto3.readthedocs.org/en/latest/guide/configuration.html.
    """
    from boto3 import resource as boto3_resource

    sns = boto3_resource('sns')
    topic = sns.Topic(topic_ARN[0])

    # Subject is max 100 chars
    if len(subject) > 100:
        subject = subject[0:48] + '...' + subject[-49:]

    response = topic.publish(Subject=subject, Message=message)

    logger.debug(("Message sent to SNS.\nMessageId: {},\nRequestId: {},\n"
                  "HTTPSStatusCode: {}").format(response['MessageId'],
                                                response['ResponseMetadata']['RequestId'],
                                                response['ResponseMetadata']['HTTPStatusCode']))


def send_email(subject, message, sender, recipients, image_png=None):
    """
    Decides whether to send notification. Notification is cancelled if there
    are no recipients or if stdout is onto tty or if in debug mode.

    Dispatches on config value email.type. Default is 'smtp'.
    """
    config = configuration.get_config()
    notifiers = {'ses': send_email_ses,
                 'sendgrid': send_email_sendgrid,
                 'smtp': send_email_smtp,
                 'sns': send_email_sns}

    subject = _prefix(subject)
    if not recipients or recipients == (None,):
        return
    if _email_disabled():
        return

    # Clean the recipients lists to allow multiple error-email addresses,
    # comma separated in luigi.cfg
    recipients_tmp = []
    for r in recipients:
        recipients_tmp.extend(r.split(','))

    # Replace original recipients with the clean list
    recipients = recipients_tmp

    # Get appropriate sender and call it to send the notification
    email_sender_type = config.get('email', 'type', None)
    email_sender = notifiers.get(email_sender_type, send_email_smtp)
    email_sender(config, sender, subject, message, recipients, image_png)


def _email_recipients(additional_recipients=None):
    config = configuration.get_config()
    receiver = config.get('core', 'error-email', None)
    recipients = [receiver] if receiver else []
    if additional_recipients:
        if isinstance(additional_recipients, str):
            recipients.append(additional_recipients)
        else:
            recipients.extend(additional_recipients)
    return recipients


def send_error_email(subject, message, additional_recipients=None):
    """
    Sends an email to the configured error-email.

    If no error-email is configured, then a message is logged.
    """
    config = configuration.get_config()
    recipients = _email_recipients(additional_recipients)
    if recipients:
        sender = config.get('core', 'email-sender', DEFAULT_CLIENT_EMAIL)
        logger.info("Sending warning email to %r", recipients)
        send_email(
            subject=subject,
            message=message,
            sender=sender,
            recipients=recipients
        )
    else:
        logger.info("Skipping error email. Set `error-email` in the `core` "
                    "section of the luigi config file or override "
                    "`owner_email` in the task to receive error emails.")


def _prefix(subject):
    """
    If the config has a special prefix for emails then this function adds
    this prefix.
    """
    config = configuration.get_config()
    email_prefix = config.get('core', 'email-prefix', None)
    if email_prefix is not None:
        subject = "%s %s" % (email_prefix, subject)
    return subject


def format_task_error(headline, task, formatted_exception=None):
    """
    Format a message body for an error email related to a luigi.task.Task

    :param headline: Summary line for the message
    :param task: `luigi.task.Task` instance where this error occurred
    :param formatted_exception: optional string showing traceback
    :return: message body
    """
    typ = email_type()
    if formatted_exception:
        formatted_exception = wrap_traceback(formatted_exception)
    else:
        formatted_exception = ""

    if typ == 'html':
        msg_template = textwrap.dedent('''
        <html>
        <body>
        <h2>{headline}</h2>
        <table style="border-top: 1px solid black; border-bottom: 1px solid black">
        <thead>
        <tr><th>name</th><td>{name}</td></tr>
        </thead>
        <tbody>
        {param_rows}
        </tbody>
        </table>
        <h2>Traceback</h2>
        {traceback}
        </body>
        </html>
        ''')

        str_params = task.to_str_params()
        params = '\n'.join('<tr><th>{}</th><td>{}</td></tr>'.format(*items)
                           for items in str_params.items())
        body = msg_template.format(headline=headline, name=task.task_family,
                                   param_rows=params,
                                   traceback=formatted_exception)
    else:
        msg_template = textwrap.dedent('''\
        {headline}

        Name: {name}

        Parameters:
        {params}

        {traceback}
        ''')

        str_params = task.to_str_params()
        max_width = max([0] + [len(x) for x in str_params.keys()])
        params = '\n'.join('  {:{width}}: {}'.format(*items, width=max_width)
                           for items in str_params.items())
        body = msg_template.format(headline=headline, name=task.task_family,
                                   params=params,
                                   traceback=formatted_exception)

    return body
unknown
codeparrot/codeparrot-clean
import sys


def query_yes_no(question, default="yes"):
    """Ask a yes/no question via input() and return the answer.

    "question" is a string that is presented to the user.
    "default" is the presumed answer if the user just hits <Enter>.
    It must be "yes" (the default), "no" or None (meaning
    an answer is required of the user).

    The return value is True for "yes" or False for "no".
    """
    valid = {
        "yes": True,
        "y": True,
        "ye": True,
        "no": False,
        "n": False,
    }
    if default is None:
        prompt = " [y/n] "
    elif default == "yes":
        prompt = " [Y/n] "
    elif default == "no":
        prompt = " [y/N] "
    else:
        raise ValueError("invalid default answer: '%s'" % default)

    while True:
        sys.stdout.write(question + prompt)
        choice = input().lower()  # raw_input() on Python 2
        if default is not None and choice == '':
            return valid[default]
        elif choice in valid:
            return valid[choice]
        else:
            sys.stdout.write("Please respond with 'yes' or 'no' "
                             "(or 'y' or 'n').\n")
unknown
codeparrot/codeparrot-clean
from __future__ import absolute_import

from django.test import TestCase
from django.contrib.contenttypes.models import ContentType

from ..models import Author, Article, UrlArticle


class DefaultsTests(TestCase):
    """Test django views in django/views/defaults.py"""
    fixtures = ['testdata.json']
    non_existing_urls = ['/views/non_existing_url/',  # this is in urls.py
                         '/views/other_non_existing_url/']  # this NOT in urls.py

    def test_shortcut_with_absolute_url(self):
        "Can view a shortcut for an Author object that has a get_absolute_url method"
        for obj in Author.objects.all():
            short_url = '/views/shortcut/%s/%s/' % (
                ContentType.objects.get_for_model(Author).id, obj.pk)
            response = self.client.get(short_url)
            self.assertRedirects(response,
                                 'http://testserver%s' % obj.get_absolute_url(),
                                 status_code=302, target_status_code=404)

    def test_shortcut_no_absolute_url(self):
        "Shortcuts for an object that has no get_absolute_url method raises 404"
        for obj in Article.objects.all():
            short_url = '/views/shortcut/%s/%s/' % (
                ContentType.objects.get_for_model(Article).id, obj.pk)
            response = self.client.get(short_url)
            self.assertEqual(response.status_code, 404)

    def test_wrong_type_pk(self):
        short_url = '/views/shortcut/%s/%s/' % (
            ContentType.objects.get_for_model(Author).id, 'nobody/expects')
        response = self.client.get(short_url)
        self.assertEqual(response.status_code, 404)

    def test_shortcut_bad_pk(self):
        short_url = '/views/shortcut/%s/%s/' % (
            ContentType.objects.get_for_model(Author).id, '42424242')
        response = self.client.get(short_url)
        self.assertEqual(response.status_code, 404)

    def test_nonint_content_type(self):
        an_author = Author.objects.all()[0]
        short_url = '/views/shortcut/%s/%s/' % ('spam', an_author.pk)
        response = self.client.get(short_url)
        self.assertEqual(response.status_code, 404)

    def test_bad_content_type(self):
        an_author = Author.objects.all()[0]
        short_url = '/views/shortcut/%s/%s/' % (42424242, an_author.pk)
        response = self.client.get(short_url)
        self.assertEqual(response.status_code, 404)

    def test_page_not_found(self):
        "A 404 status is returned by the page_not_found view"
        for url in self.non_existing_urls:
            response = self.client.get(url)
            self.assertEqual(response.status_code, 404)

    def test_csrf_token_in_404(self):
        """
        The 404 page should have the csrf_token available in the context
        """
        # See ticket #14565
        for url in self.non_existing_urls:
            response = self.client.get(url)
            csrf_token = response.context['csrf_token']
            self.assertNotEqual(str(csrf_token), 'NOTPROVIDED')
            self.assertNotEqual(str(csrf_token), '')

    def test_server_error(self):
        "The server_error view raises a 500 status"
        response = self.client.get('/views/server_error/')
        self.assertEqual(response.status_code, 500)

    def test_get_absolute_url_attributes(self):
        "A model can set attributes on the get_absolute_url method"
        self.assertTrue(getattr(UrlArticle.get_absolute_url, 'purge', False),
                        'The attributes of the original get_absolute_url must be added.')
        article = UrlArticle.objects.get(pk=1)
        self.assertTrue(getattr(article.get_absolute_url, 'purge', False),
                        'The attributes of the original get_absolute_url must be added.')
unknown
codeparrot/codeparrot-clean
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2012-2021 SoftBank Robotics. All rights reserved.
# Use of this source code is governed by a BSD-style license (see the COPYING file).
""" QiBuild """
from __future__ import absolute_import
from __future__ import unicode_literals
from __future__ import print_function

from qipy.test.conftest import *
from qisys.test.conftest import *
from qitoolchain.test.conftest import *
from qibuild.test.conftest import QiBuildAction


class QiPkgAction(TestAction):
    """ QiPkgAction Class """

    def __init__(self):
        """ QiPkgAction Init """
        super(QiPkgAction, self).__init__("qipkg.actions")

    def add_test_project(self, src):
        """ Add Test Project """
        this_dir = os.path.dirname(__file__)
        src_path = os.path.join(this_dir, "projects", src)
        dest_path = os.path.join(self.worktree.root, src)
        qisys.sh.copy_git_src(src_path, dest_path)
        worktree_project = self.worktree.add_project(src)
        return worktree_project


@pytest.fixture
def qipkg_action(cd_to_tmpdir):
    """ QiPkg Action """
    return QiPkgAction()
unknown
codeparrot/codeparrot-clean
% This is generated by ESQL's AbstractFunctionTestCase. Do not edit it. See ../README.md for how to regenerate it.

```esql
ROW language_code = "1"
| ENRICH languages_policy
```

| language_code:keyword | language_name:keyword |
| --- | --- |
| 1 | English |
unknown
github
https://github.com/elastic/elasticsearch
docs/reference/query-languages/esql/_snippets/commands/examples/enrich.csv-spec/enrich.md
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.security;

import java.io.IOException;
import java.util.LinkedList;
import java.util.List;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.util.Shell;
import org.apache.hadoop.util.Shell.ExitCodeException;
import org.apache.hadoop.security.NetgroupCache;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * A simple shell-based implementation of {@link GroupMappingServiceProvider}
 * that exec's the <code>groups</code> shell command to fetch the group
 * memberships of a given user.
 */
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public class ShellBasedUnixGroupsNetgroupMapping
    extends ShellBasedUnixGroupsMapping {

  private static final Logger LOG =
      LoggerFactory.getLogger(ShellBasedUnixGroupsNetgroupMapping.class);

  /**
   * Get unix groups (parent) and netgroups for given user
   *
   * @param user get groups and netgroups for this user
   * @return groups and netgroups for user
   */
  @Override
  public List<String> getGroups(String user) throws IOException {
    // parent get unix groups
    List<String> groups = new LinkedList<String>(super.getGroups(user));
    NetgroupCache.getNetgroups(user, groups);
    return groups;
  }

  /**
   * Refresh the netgroup cache
   */
  @Override
  public void cacheGroupsRefresh() throws IOException {
    List<String> groups = NetgroupCache.getNetgroupNames();
    NetgroupCache.clear();
    cacheGroupsAdd(groups);
  }

  /**
   * Add a group to cache, only netgroups are cached
   *
   * @param groups list of group names to add to cache
   */
  @Override
  public void cacheGroupsAdd(List<String> groups) throws IOException {
    for (String group : groups) {
      if (group.length() == 0) {
        // better safe than sorry (should never happen)
      } else if (group.charAt(0) == '@') {
        if (!NetgroupCache.isCached(group)) {
          NetgroupCache.add(group, getUsersForNetgroup(group));
        }
      } else {
        // unix group, not caching
      }
    }
  }

  /**
   * Gets users for a netgroup
   *
   * @param netgroup return users for this netgroup
   * @return list of users for a given netgroup
   * @throws IOException raised on errors performing I/O.
   */
  protected List<String> getUsersForNetgroup(String netgroup)
      throws IOException {

    List<String> users = new LinkedList<String>();

    // returns a string similar to this:
    // group ( , user, ) ( domain, user1, host.com )
    String usersRaw = execShellGetUserForNetgroup(netgroup);
    // get rid of spaces, makes splitting much easier
    usersRaw = usersRaw.replaceAll(" +", "");
    // remove netgroup name at the beginning of the string
    usersRaw = usersRaw.replaceFirst(
        netgroup.replaceFirst("@", "") + "[()]+", "");
    // split string into user infos
    String[] userInfos = usersRaw.split("[()]+");
    for (String userInfo : userInfos) {
      // userInfo: xxx,user,yyy (xxx, yyy can be empty strings)
      // get rid of everything before first and after last comma
      String user = userInfo.replaceFirst("[^,]*,", "");
      user = user.replaceFirst(",.*$", "");
      // voila! got username!
      users.add(user);
    }
    return users;
  }

  /**
   * Calls shell to get users for a netgroup by calling getent
   * netgroup; this is a low level function that just returns the
   * raw output string
   *
   * @param netgroup get users for this netgroup
   * @return string of users for a given netgroup in getent netgroups format
   * @throws IOException raised on errors performing I/O.
   */
  protected String execShellGetUserForNetgroup(final String netgroup)
      throws IOException {
    String result = "";
    try {
      // shell command does not expect '@' at the beginning of the group name
      result = Shell.execCommand(
          Shell.getUsersForNetgroupCommand(netgroup.substring(1)));
    } catch (ExitCodeException e) {
      // if we didn't get the group - just return an empty list
      LOG.warn("error getting users for netgroup " + netgroup, e);
    }
    return result;
  }
}
java
github
https://github.com/apache/hadoop
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsNetgroupMapping.java
import {Component, inject} from '@angular/core';
import {CarService} from './car.service';

@Component({
  selector: 'app-root',
  template: ``,
})
export class App {
  display = '';
}
typescript
github
https://github.com/angular/angular
adev/src/content/tutorials/learn-angular/steps/20-inject-based-di/src/app/app.ts
#! /usr/bin/env python
"""Test script for the dbm.open function based on testdumbdbm.py"""

import os
import unittest
import glob
import test.support

# Skip tests if dbm module doesn't exist.
dbm = test.support.import_module('dbm')

_fname = test.support.TESTFN


#
# Iterates over every database module supported by dbm currently available,
# setting dbm to use each in turn, and yielding that module
#
def dbm_iterator():
    for name in dbm._names:
        try:
            mod = __import__(name, fromlist=['open'])
        except ImportError:
            continue
        dbm._modules[name] = mod
        yield mod


#
# Clean up all scratch databases we might have created during testing
#
def delete_files():
    # we don't know the precise name the underlying database uses
    # so we use glob to locate all names
    for f in glob.glob(_fname + "*"):
        test.support.unlink(f)


class AnyDBMTestCase(unittest.TestCase):
    _dict = {'0': b'',
             'a': b'Python:',
             'b': b'Programming',
             'c': b'the',
             'd': b'way',
             'f': b'Guido',
             'g': b'intended',
             }

    def init_db(self):
        f = dbm.open(_fname, 'n')
        for k in self._dict:
            f[k.encode("ascii")] = self._dict[k]
        f.close()

    def keys_helper(self, f):
        keys = sorted(k.decode("ascii") for k in f.keys())
        dkeys = sorted(self._dict.keys())
        self.assertEqual(keys, dkeys)
        return keys

    def test_error(self):
        self.assert_(issubclass(self.module.error, IOError))

    def test_anydbm_not_existing(self):
        self.assertRaises(dbm.error, dbm.open, _fname)

    def test_anydbm_creation(self):
        f = dbm.open(_fname, 'c')
        self.assertEqual(list(f.keys()), [])
        for key in self._dict:
            f[key.encode("ascii")] = self._dict[key]
        self.read_helper(f)
        f.close()

    def test_anydbm_modification(self):
        self.init_db()
        f = dbm.open(_fname, 'c')
        self._dict['g'] = f[b'g'] = b"indented"
        self.read_helper(f)
        f.close()

    def test_anydbm_read(self):
        self.init_db()
        f = dbm.open(_fname, 'r')
        self.read_helper(f)
        f.close()

    def test_anydbm_keys(self):
        self.init_db()
        f = dbm.open(_fname, 'r')
        keys = self.keys_helper(f)
        f.close()

    def test_anydbm_access(self):
        self.init_db()
        f = dbm.open(_fname, 'r')
        key = "a".encode("ascii")
        assert key in f
        assert f[key] == b"Python:"
        f.close()

    def read_helper(self, f):
        keys = self.keys_helper(f)
        for key in self._dict:
            self.assertEqual(self._dict[key], f[key.encode("ascii")])

    def tearDown(self):
        delete_files()

    def setUp(self):
        dbm._defaultmod = self.module
        delete_files()


class WhichDBTestCase(unittest.TestCase):
    # Actual test methods are added to namespace after class definition.

    def __init__(self, *args):
        unittest.TestCase.__init__(self, *args)

    def test_whichdb(self):
        for module in dbm_iterator():
            # Check whether whichdb correctly guesses module name
            # for databases opened with "module" module.
            # Try with empty files first
            name = module.__name__
            if name == 'dbm.dumb':
                continue   # whichdb can't support dbm.dumb
            test.support.unlink(_fname)
            f = module.open(_fname, 'c')
            f.close()
            self.assertEqual(name, dbm.whichdb(_fname))
            # Now add a key
            f = module.open(_fname, 'w')
            f[b"1"] = b"1"
            # and test that we can find it
            self.assertTrue(b"1" in f)
            # and read it
            self.assertTrue(f[b"1"] == b"1")
            f.close()
            self.assertEqual(name, dbm.whichdb(_fname))

    def tearDown(self):
        delete_files()

    def setUp(self):
        delete_files()
        self.filename = test.support.TESTFN
        self.d = dbm.open(self.filename, 'c')
        self.d.close()

    def test_keys(self):
        self.d = dbm.open(self.filename, 'c')
        self.assertEqual(self.d.keys(), [])
        a = [(b'a', b'b'), (b'12345678910', b'019237410982340912840198242')]
        for k, v in a:
            self.d[k] = v
        self.assertEqual(sorted(self.d.keys()), sorted(k for (k, v) in a))
        for k, v in a:
            self.assert_(k in self.d)
            self.assertEqual(self.d[k], v)
        self.assert_(b'xxx' not in self.d)
        self.assertRaises(KeyError, lambda: self.d[b'xxx'])
        self.d.close()


def test_main():
    classes = [WhichDBTestCase]
    for mod in dbm_iterator():
        classes.append(type("TestCase-" + mod.__name__, (AnyDBMTestCase,),
                            {'module': mod}))
    test.support.run_unittest(*classes)


if __name__ == "__main__":
    test_main()
unknown
codeparrot/codeparrot-clean
# coding: utf-8
#
# Copyright 2014 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Jobs operating on explorations that can be used for production tests.

To use these jobs, you first need to register them in jobs_registry (at the
moment they are not displayed there to avoid accidental use)."""

from core import jobs
from core.domain import exp_domain
from core.domain import exp_services
from core.domain import rights_manager
from core.platform import models
import feconf

(base_models, exp_models,) = models.Registry.import_models([
    models.NAMES.base_model, models.NAMES.exploration])


class ExpCopiesRealtimeModel(
        jobs.BaseRealtimeDatastoreClassForContinuousComputations):
    pass


class ExpCopiesAggregator(jobs.BaseContinuousComputationManager):
    """A continuous-computation job creating 10 published copies of every
    existing exploration, with the eid being '[old_eid]copy[copy_number]',
    title 'Copy' and category 'Copies'.
    """

    @classmethod
    def get_event_types_listened_to(cls):
        return []

    @classmethod
    def _get_realtime_datastore_class(cls):
        return ExpCopiesRealtimeModel

    @classmethod
    def _get_batch_job_manager_class(cls):
        return ExpCopiesMRJobManager

    @classmethod
    def _handle_incoming_event(cls, active_realtime_layer, event_type, *args):
        pass


class ExpCopiesMRJobManager(
        jobs.BaseMapReduceJobManagerForContinuousComputations):
    """A continuous-computation job creating 10 published copies of every
    existing exploration, with the eid being '[old_eid]copy[copy_number]',
    title 'Copy' and category 'Copies'.
    """

    @classmethod
    def _get_continuous_computation_class(cls):
        return ExpCopiesAggregator

    @classmethod
    def entity_classes_to_map_over(cls):
        return [exp_models.ExplorationModel]

    @staticmethod
    def map(item):
        if ExpCopiesMRJobManager._entity_created_before_job_queued(item):
            for count in range(10):
                yield ('%scopy%d' % (item.id, count),
                       exp_services.get_exploration_from_model(item).to_yaml())

    @staticmethod
    def reduce(exp_id, list_of_exps):
        for stringified_exp in list_of_exps:
            exploration = exp_domain.Exploration.from_untitled_yaml(
                exp_id, 'Copy', 'Copies', stringified_exp)
            exp_services.save_new_exploration(
                feconf.SYSTEM_COMMITTER_ID, exploration)
            rights_manager.publish_exploration(
                feconf.SYSTEM_COMMITTER_ID, exp_id)


# Job to delete all copied explorations.
class DeleteExpCopiesRealtimeModel(
        jobs.BaseRealtimeDatastoreClassForContinuousComputations):
    pass


class DeleteExpCopiesAggregator(jobs.BaseContinuousComputationManager):
    """A continuous-computation job deleting all explorations in category
    'Copies'.
    """

    @classmethod
    def get_event_types_listened_to(cls):
        return []

    @classmethod
    def _get_realtime_datastore_class(cls):
        return DeleteExpCopiesRealtimeModel

    @classmethod
    def _get_batch_job_manager_class(cls):
        return DeleteExpCopiesMRJobManager

    @classmethod
    def _handle_incoming_event(cls, active_realtime_layer, event_type, *args):
        pass


class DeleteExpCopiesMRJobManager(
        jobs.BaseMapReduceJobManagerForContinuousComputations):
    """Job that deletes all explorations in category 'Copies'.
    """

    @classmethod
    def _get_continuous_computation_class(cls):
        return DeleteExpCopiesAggregator

    @classmethod
    def entity_classes_to_map_over(cls):
        return [exp_models.ExplorationModel]

    @staticmethod
    def map(item):
        if item.category == 'Copies':
            exp_services.delete_exploration(
                feconf.SYSTEM_COMMITTER_ID, item.id, force_deletion=True)

    @staticmethod
    def reduce(exp_id, list_of_exps):
        pass
unknown
codeparrot/codeparrot-clean
import json
import math
import re
from decimal import Decimal

from django.contrib.gis.db.models import GeometryField, PolygonField, functions
from django.contrib.gis.geos import (
    GEOSGeometry,
    LineString,
    MultiLineString,
    MultiPoint,
    MultiPolygon,
    Point,
    Polygon,
    fromstr,
)
from django.contrib.gis.measure import Area
from django.db import NotSupportedError, connection
from django.db.models import F, IntegerField, Sum, Value
from django.test import TestCase, skipUnlessDBFeature

from ..utils import FuncTestMixin, can_save_multipoint
from .models import (
    City,
    Country,
    CountryWebMercator,
    Feature,
    ManyPointModel,
    State,
    ThreeDimensionalFeature,
    Track,
)


class GISFunctionsTests(FuncTestMixin, TestCase):
    """
    Testing functions from django/contrib/gis/db/models/functions.py.
    Area/Distance/Length/Perimeter are tested in distapp/tests.

    Please keep the tests in function's alphabetic order.
    """

    fixtures = ["initial"]

    def test_asgeojson(self):
        if not connection.features.has_AsGeoJSON_function:
            with self.assertRaises(NotSupportedError):
                list(Country.objects.annotate(json=functions.AsGeoJSON("mpoly")))
            return

        pueblo_json = '{"type":"Point","coordinates":[-104.609252,38.255001]}'
        houston_json = json.loads(
            '{"type":"Point","crs":{"type":"name","properties":'
            '{"name":"EPSG:4326"}},"coordinates":[-95.363151,29.763374]}'
        )
        victoria_json = json.loads(
            '{"type":"Point",'
            '"bbox":[-123.30519600,48.46261100,-123.30519600,48.46261100],'
            '"coordinates":[-123.305196,48.462611]}'
        )
        chicago_json = json.loads(
            '{"type":"Point","crs":{"type":"name","properties":{"name":"EPSG:4326"}},'
            '"bbox":[-87.65018,41.85039,-87.65018,41.85039],'
            '"coordinates":[-87.65018,41.85039]}'
        )
        if "crs" in connection.features.unsupported_geojson_options:
            del houston_json["crs"]
            del chicago_json["crs"]
        if "bbox" in connection.features.unsupported_geojson_options:
            del chicago_json["bbox"]
            del victoria_json["bbox"]
        if "precision" in connection.features.unsupported_geojson_options:
            chicago_json["coordinates"] = [-87.650175, 41.850385]
        # Precision argument should only be an integer
        with self.assertRaises(TypeError):
            City.objects.annotate(geojson=functions.AsGeoJSON("point", precision="foo"))
        # Reference queries and values.
        # SELECT ST_AsGeoJson("geoapp_city"."point", 8, 0)
        # FROM "geoapp_city" WHERE "geoapp_city"."name" = 'Pueblo';
        self.assertJSONEqual(
            pueblo_json,
            City.objects.annotate(geojson=functions.AsGeoJSON("point"))
            .get(name="Pueblo")
            .geojson,
        )
        # SELECT ST_AsGeoJson("geoapp_city"."point", 8, 2) FROM "geoapp_city"
        # WHERE "geoapp_city"."name" = 'Houston';
        # This time we want to include the CRS by using the `crs` keyword.
        self.assertJSONEqual(
            City.objects.annotate(json=functions.AsGeoJSON("point", crs=True))
            .get(name="Houston")
            .json,
            houston_json,
        )
        # SELECT ST_AsGeoJson("geoapp_city"."point", 8, 1) FROM "geoapp_city"
        # WHERE "geoapp_city"."name" = 'Houston';
        # This time we include the bounding box by using the `bbox` keyword.
        self.assertJSONEqual(
            City.objects.annotate(geojson=functions.AsGeoJSON("point", bbox=True))
            .get(name="Victoria")
            .geojson,
            victoria_json,
        )
        # SELECT ST_AsGeoJson("geoapp_city"."point", 5, 3) FROM "geoapp_city"
        # WHERE "geoapp_city"."name" = 'Chicago';
        # Finally, we set every available keyword.
        # MariaDB doesn't limit the number of decimals in bbox.
        if connection.ops.mariadb:
            chicago_json["bbox"] = [-87.650175, 41.850385, -87.650175, 41.850385]
        try:
            self.assertJSONEqual(
                City.objects.annotate(
                    geojson=functions.AsGeoJSON(
                        "point", bbox=True, crs=True, precision=5
                    )
                )
                .get(name="Chicago")
                .geojson,
                chicago_json,
            )
        except AssertionError:
            # Give a second chance with different coords rounding.
            chicago_json["coordinates"][1] = 41.85038
            self.assertJSONEqual(
                City.objects.annotate(
                    geojson=functions.AsGeoJSON(
                        "point", bbox=True, crs=True, precision=5
                    )
                )
                .get(name="Chicago")
                .geojson,
                chicago_json,
            )

    @skipUnlessDBFeature("has_AsGeoJSON_function")
    def test_asgeojson_option_0(self):
        p1 = Point(1, 1, srid=4326)
        p2 = Point(-87.65018, 41.85039, srid=4326)
        obj = ManyPointModel.objects.create(
            point1=p1,
            point2=p2,
            point3=p2.transform(3857, clone=True),
        )
        self.assertJSONEqual(
            ManyPointModel.objects.annotate(geojson=functions.AsGeoJSON("point3"))
            .get(pk=obj.pk)
            .geojson,
            # GeoJSON without CRS.
            json.loads(
                '{"type":"Point","coordinates":[-9757173.40553877, 5138594.87034608]}'
            ),
        )

    @skipUnlessDBFeature("has_AsGML_function")
    def test_asgml(self):
        # Should throw a TypeError when trying to obtain GML from a
        # non-geometry field.
        qs = City.objects.all()
        with self.assertRaises(TypeError):
            qs.annotate(gml=functions.AsGML("name"))
        ptown = City.objects.annotate(gml=functions.AsGML("point", precision=9)).get(
            name="Pueblo"
        )

        if connection.ops.oracle:
            # No precision parameter for Oracle :-/
            gml_regex = re.compile(
                r'^<gml:Point srsName="EPSG:4326" '
                r'xmlns:gml="http://www.opengis.net/gml">'
                r'<gml:coordinates decimal="\." cs="," ts=" ">'
                r"-104.60925\d+,38.25500\d+ "
                r"</gml:coordinates></gml:Point>"
            )
        else:
            gml_regex = re.compile(
                r'^<gml:Point srsName="(urn:ogc:def:crs:)?EPSG:4326"><gml:coordinates>'
                r"-104\.60925\d+,38\.255001</gml:coordinates></gml:Point>"
            )
        self.assertTrue(gml_regex.match(ptown.gml))
        self.assertIn(
            '<gml:pos srsDimension="2">',
            City.objects.annotate(gml=functions.AsGML("point", version=3))
            .get(name="Pueblo")
            .gml,
        )

    @skipUnlessDBFeature("has_AsKML_function")
    def test_askml(self):
        # Should throw a TypeError when trying to obtain KML from a
        # non-geometry field.
        with self.assertRaises(TypeError):
            City.objects.annotate(kml=functions.AsKML("name"))

        # Ensuring the KML is as expected.
        ptown = City.objects.annotate(kml=functions.AsKML("point", precision=9)).get(
            name="Pueblo"
        )
        self.assertEqual(
            "<Point><coordinates>-104.609252,38.255001</coordinates></Point>",
            ptown.kml,
        )

    @skipUnlessDBFeature("has_AsSVG_function")
    def test_assvg(self):
        with self.assertRaises(TypeError):
            City.objects.annotate(svg=functions.AsSVG("point", precision="foo"))
        # SELECT AsSVG(geoapp_city.point, 0, 8) FROM geoapp_city
        # WHERE name = 'Pueblo';
        svg1 = 'cx="-104.609252" cy="-38.255001"'
        # Even though relative, only one point so it's practically the same
        # except for the 'c' letter prefix on the x,y values.
        svg2 = svg1.replace("c", "")
        self.assertEqual(
            svg1,
            City.objects.annotate(svg=functions.AsSVG("point")).get(name="Pueblo").svg,
        )
        self.assertEqual(
            svg2,
            City.objects.annotate(svg=functions.AsSVG("point", relative=5))
            .get(name="Pueblo")
            .svg,
        )

    @skipUnlessDBFeature("has_AsWKB_function")
    def test_aswkb(self):
        wkb = (
            City.objects.annotate(
                wkb=functions.AsWKB(Point(1, 2, srid=4326)),
            )
            .first()
            .wkb
        )
        # WKB is either XDR or NDR encoded.
        self.assertIn(
            bytes(wkb),
            (
                b"\x00\x00\x00\x00\x01?\xf0\x00\x00\x00\x00\x00\x00@\x00\x00"
                b"\x00\x00\x00\x00\x00",
                b"\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf0?\x00\x00"
                b"\x00\x00\x00\x00\x00@",
            ),
        )

    @skipUnlessDBFeature("has_AsWKT_function")
    def test_aswkt(self):
        wkt = (
            City.objects.annotate(
                wkt=functions.AsWKT(Point(1, 2, srid=4326)),
            )
            .first()
            .wkt
        )
        self.assertEqual(
            wkt, "POINT (1.0 2.0)" if connection.ops.oracle else "POINT(1 2)"
        )

    @skipUnlessDBFeature("has_Azimuth_function")
    def test_azimuth(self):
        # Returns the azimuth in radians.
        azimuth_expr = functions.Azimuth(Point(0, 0, srid=4326), Point(1, 1, srid=4326))
        self.assertAlmostEqual(
            City.objects.annotate(azimuth=azimuth_expr).first().azimuth,
            math.pi / 4,
            places=2,
        )
        # Returns None if the two points are coincident.
azimuth_expr = functions.Azimuth(Point(0, 0, srid=4326), Point(0, 0, srid=4326)) self.assertIsNone(City.objects.annotate(azimuth=azimuth_expr).first().azimuth) @skipUnlessDBFeature("has_BoundingCircle_function") def test_bounding_circle(self): def circle_num_points(num_seg): # num_seg is the number of segments per quarter circle. return (4 * num_seg) + 1 if connection.ops.postgis: expected_area = 169 elif connection.ops.spatialite: expected_area = 168 else: # Oracle. expected_area = 171 country = Country.objects.annotate( circle=functions.BoundingCircle("mpoly") ).order_by("name")[0] self.assertAlmostEqual(country.circle.area, expected_area, 0) if connection.ops.postgis: # By default num_seg=48. self.assertEqual(country.circle.num_points, circle_num_points(48)) tests = [12, Value(12, output_field=IntegerField())] for num_seq in tests: with self.subTest(num_seq=num_seq): country = Country.objects.annotate( circle=functions.BoundingCircle("mpoly", num_seg=num_seq), ).order_by("name")[0] if connection.ops.postgis: self.assertGreater(country.circle.area, 168.4, 0) self.assertLess(country.circle.area, 169.5, 0) self.assertEqual(country.circle.num_points, circle_num_points(12)) else: self.assertAlmostEqual(country.circle.area, expected_area, 0) @skipUnlessDBFeature("has_Centroid_function") def test_centroid(self): qs = State.objects.exclude(poly__isnull=True).annotate( centroid=functions.Centroid("poly") ) tol = ( 1.8 if connection.ops.mysql else (0.1 if connection.ops.oracle else 0.00001) ) for state in qs: self.assertTrue(state.poly.centroid.equals_exact(state.centroid, tol)) with self.assertRaisesMessage( TypeError, "'Centroid' takes exactly 1 argument (2 given)" ): State.objects.annotate(centroid=functions.Centroid("poly", "poly")) @skipUnlessDBFeature("has_Difference_function") def test_difference(self): geom = Point(5, 23, srid=4326) qs = Country.objects.annotate(diff=functions.Difference("mpoly", geom)) # Oracle does something screwy with the Texas geometry. 
if connection.ops.oracle: qs = qs.exclude(name="Texas") for c in qs: self.assertTrue(c.mpoly.difference(geom).equals(c.diff)) @skipUnlessDBFeature("has_Difference_function", "has_Transform_function") def test_difference_mixed_srid(self): """Testing with mixed SRID (Country has default 4326).""" geom = Point(556597.4, 2632018.6, srid=3857) # Spherical Mercator qs = Country.objects.annotate(difference=functions.Difference("mpoly", geom)) # Oracle does something screwy with the Texas geometry. if connection.ops.oracle: qs = qs.exclude(name="Texas") for c in qs: self.assertTrue(c.mpoly.difference(geom).equals(c.difference)) @skipUnlessDBFeature("has_Envelope_function") def test_envelope(self): countries = Country.objects.annotate(envelope=functions.Envelope("mpoly")) for country in countries: self.assertTrue(country.envelope.equals(country.mpoly.envelope)) @skipUnlessDBFeature("has_ForcePolygonCW_function") def test_force_polygon_cw(self): rings = ( ((0, 0), (5, 0), (0, 5), (0, 0)), ((1, 1), (1, 3), (3, 1), (1, 1)), ) rhr_rings = ( ((0, 0), (0, 5), (5, 0), (0, 0)), ((1, 1), (3, 1), (1, 3), (1, 1)), ) State.objects.create(name="Foo", poly=Polygon(*rings)) st = State.objects.annotate( force_polygon_cw=functions.ForcePolygonCW("poly") ).get(name="Foo") self.assertEqual(rhr_rings, st.force_polygon_cw.coords) @skipUnlessDBFeature("has_FromWKB_function") def test_fromwkb(self): g = Point(56.811078, 60.608647) pt1, pt2 = City.objects.values_list( functions.FromWKB(Value(g.wkb.tobytes())), functions.FromWKB(Value(g.wkb.tobytes()), srid=4326), )[0] self.assertIs(g.equals_exact(pt1, 0.00001), True) self.assertIsNone(pt1.srid) self.assertEqual(pt2.srid, 4326) @skipUnlessDBFeature("has_FromWKT_function") def test_fromwkt(self): g = Point(56.811078, 60.608647) pt1, pt2 = City.objects.values_list( functions.FromWKT(Value(g.wkt)), functions.FromWKT(Value(g.wkt), srid=4326), )[0] self.assertIs(g.equals_exact(pt1, 0.00001), True) self.assertIsNone(pt1.srid) self.assertEqual(pt2.srid, 
4326) @skipUnlessDBFeature("has_GeoHash_function") def test_geohash(self): # Reference query: # SELECT ST_GeoHash(point) FROM geoapp_city WHERE name='Houston'; # SELECT ST_GeoHash(point, 5) FROM geoapp_city WHERE name='Houston'; ref_hash = "9vk1mfq8jx0c8e0386z6" h1 = City.objects.annotate(geohash=functions.GeoHash("point")).get( name="Houston" ) h2 = City.objects.annotate(geohash=functions.GeoHash("point", precision=5)).get( name="Houston" ) self.assertEqual(ref_hash, h1.geohash[: len(ref_hash)]) self.assertEqual(ref_hash[:5], h2.geohash) @skipUnlessDBFeature("has_GeometryDistance_function") def test_geometry_distance(self): point = Point(-90, 40, srid=4326) qs = City.objects.annotate( distance=functions.GeometryDistance("point", point) ).order_by("distance") distances = ( 2.99091995527296, 5.33507274054713, 9.33852187483721, 9.91769193646233, 11.556465744884, 14.713098433352, 34.3635252198568, 276.987855073372, ) for city, expected_distance in zip(qs, distances): with self.subTest(city=city): self.assertAlmostEqual(city.distance, expected_distance) @skipUnlessDBFeature("has_Intersection_function") def test_intersection(self): geom = Point(5, 23, srid=4326) qs = Country.objects.annotate(inter=functions.Intersection("mpoly", geom)) for c in qs: if connection.features.empty_intersection_returns_none: self.assertIsNone(c.inter) else: self.assertIs(c.inter.empty, True) @skipUnlessDBFeature("supports_empty_geometries", "has_IsEmpty_function") def test_isempty_geometry_empty(self): empty = City.objects.create(name="Nowhere", point=Point(srid=4326)) City.objects.create(name="Somewhere", point=Point(6.825, 47.1, srid=4326)) self.assertSequenceEqual( City.objects.annotate(isempty=functions.IsEmpty("point")).filter( isempty=True ), [empty], ) self.assertSequenceEqual(City.objects.filter(point__isempty=True), [empty]) @skipUnlessDBFeature("has_IsEmpty_function") def test_isempty_geometry_null(self): nowhere = State.objects.create(name="Nowhere", poly=None) qs = 
State.objects.annotate(isempty=functions.IsEmpty("poly")) self.assertSequenceEqual(qs.filter(isempty=None), [nowhere]) self.assertSequenceEqual( qs.filter(isempty=False).order_by("name").values_list("name", flat=True), ["Colorado", "Kansas"], ) self.assertSequenceEqual(qs.filter(isempty=True), []) self.assertSequenceEqual(State.objects.filter(poly__isempty=True), []) @skipUnlessDBFeature("has_IsValid_function") def test_isvalid(self): valid_geom = fromstr("POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))") invalid_geom = fromstr("POLYGON((0 0, 0 1, 1 1, 1 0, 1 1, 1 0, 0 0))") State.objects.create(name="valid", poly=valid_geom) State.objects.create(name="invalid", poly=invalid_geom) valid = ( State.objects.filter(name="valid") .annotate(isvalid=functions.IsValid("poly")) .first() ) invalid = ( State.objects.filter(name="invalid") .annotate(isvalid=functions.IsValid("poly")) .first() ) self.assertIs(valid.isvalid, True) self.assertIs(invalid.isvalid, False) @skipUnlessDBFeature("has_Area_function") def test_area_with_regular_aggregate(self): # Create projected country objects, for this test to work on all # backends. for c in Country.objects.all(): CountryWebMercator.objects.create( name=c.name, mpoly=c.mpoly.transform(3857, clone=True) ) # Test in projected coordinate system qs = CountryWebMercator.objects.annotate(area_sum=Sum(functions.Area("mpoly"))) # Some backends (e.g. Oracle) cannot group by multipolygon values, so # defer such fields in the aggregation query. for c in qs.defer("mpoly"): result = c.area_sum # If the result is a measure object, get value. if isinstance(result, Area): result = result.sq_m self.assertAlmostEqual((result - c.mpoly.area) / c.mpoly.area, 0) @skipUnlessDBFeature("has_Area_function") def test_area_lookups(self): # Create projected countries so the test works on all backends. 
CountryWebMercator.objects.bulk_create( CountryWebMercator(name=c.name, mpoly=c.mpoly.transform(3857, clone=True)) for c in Country.objects.all() ) qs = CountryWebMercator.objects.annotate(area=functions.Area("mpoly")) self.assertEqual( qs.get(area__lt=Area(sq_km=500000)), CountryWebMercator.objects.get(name="New Zealand"), ) with self.assertRaisesMessage( ValueError, "AreaField only accepts Area measurement objects." ): qs.get(area__lt=500000) @skipUnlessDBFeature("has_ClosestPoint_function") def test_closest_point(self): qs = Country.objects.annotate( closest_point=functions.ClosestPoint("mpoly", functions.Centroid("mpoly")) ) for country in qs: self.assertIsInstance(country.closest_point, Point) self.assertEqual( country.mpoly.intersection(country.closest_point), country.closest_point, ) @skipUnlessDBFeature("has_LineLocatePoint_function") def test_line_locate_point(self): pos_expr = functions.LineLocatePoint( LineString((0, 0), (0, 3), srid=4326), Point(0, 1, srid=4326) ) self.assertAlmostEqual( State.objects.annotate(pos=pos_expr).first().pos, 0.3333333 ) @skipUnlessDBFeature("has_MakeValid_function") def test_make_valid(self): invalid_geom = fromstr("POLYGON((0 0, 0 1, 1 1, 1 0, 1 1, 1 0, 0 0))") State.objects.create(name="invalid", poly=invalid_geom) invalid = ( State.objects.filter(name="invalid") .annotate(repaired=functions.MakeValid("poly")) .first() ) self.assertIs(invalid.repaired.valid, True) self.assertTrue( invalid.repaired.equals( fromstr("POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))", srid=invalid.poly.srid) ) ) @skipUnlessDBFeature("has_MakeValid_function") def test_make_valid_multipolygon(self): invalid_geom = fromstr( "POLYGON((0 0, 0 1 , 1 1 , 1 0, 0 0), (10 0, 10 1, 11 1, 11 0, 10 0))" ) State.objects.create(name="invalid", poly=invalid_geom) invalid = ( State.objects.filter(name="invalid") .annotate( repaired=functions.MakeValid("poly"), ) .get() ) self.assertIs(invalid.repaired.valid, True) self.assertTrue( invalid.repaired.equals( fromstr( 
"MULTIPOLYGON (((0 0, 0 1, 1 1, 1 0, 0 0)), " "((10 0, 10 1, 11 1, 11 0, 10 0)))", srid=invalid.poly.srid, ) ) ) self.assertEqual(len(invalid.repaired), 2) @skipUnlessDBFeature("has_MakeValid_function") def test_make_valid_output_field(self): # output_field is GeometryField instance because different geometry # types can be returned. output_field = functions.MakeValid( Value(Polygon(), PolygonField(srid=42)), ).output_field self.assertIs(output_field.__class__, GeometryField) self.assertEqual(output_field.srid, 42) @skipUnlessDBFeature("has_MemSize_function") def test_memsize(self): ptown = City.objects.annotate(size=functions.MemSize("point")).get( name="Pueblo" ) # Exact value depends on database and version. self.assertTrue(20 <= ptown.size <= 105) @skipUnlessDBFeature("has_NumGeometries_function") def test_num_geom(self): # Both 'countries' only have two geometries. for c in Country.objects.annotate(num_geom=functions.NumGeometries("mpoly")): self.assertEqual(2, c.num_geom) qs = City.objects.filter(point__isnull=False).annotate( num_geom=functions.NumGeometries("point") ) for city in qs: # The results for the number of geometries on non-collections # depends on the database. 
if connection.ops.mysql or connection.ops.mariadb: self.assertIsNone(city.num_geom) else: self.assertEqual(1, city.num_geom) @skipUnlessDBFeature("has_NumDimensions_function") def test_num_dimensions(self): for c in Country.objects.annotate(num_dims=functions.NumDimensions("mpoly")): self.assertEqual(2, c.num_dims) ThreeDimensionalFeature.objects.create( name="London", geom=Point(-0.126418, 51.500832, 0) ) qs = ThreeDimensionalFeature.objects.annotate( num_dims=functions.NumDimensions("geom") ) self.assertEqual(qs[0].num_dims, 3) qs = ThreeDimensionalFeature.objects.annotate( num_dims=F("geom__num_dimensions") ) self.assertEqual(qs[0].num_dims, 3) msg = "'NumDimensions' takes exactly 1 argument (2 given)" with self.assertRaisesMessage(TypeError, msg): Country.objects.annotate(num_dims=functions.NumDimensions("point", "error")) @skipUnlessDBFeature("has_NumPoints_function") def test_num_points(self): coords = [(-95.363151, 29.763374), (-95.448601, 29.713803)] Track.objects.create(name="Foo", line=LineString(coords)) qs = Track.objects.annotate(num_points=functions.NumPoints("line")) self.assertEqual(qs.first().num_points, 2) mpoly_qs = Country.objects.annotate(num_points=functions.NumPoints("mpoly")) if not connection.features.supports_num_points_poly: for c in mpoly_qs: self.assertIsNone(c.num_points) return for c in mpoly_qs: self.assertEqual(c.mpoly.num_points, c.num_points) for c in City.objects.annotate(num_points=functions.NumPoints("point")): self.assertEqual(c.num_points, 1) @skipUnlessDBFeature("has_PointOnSurface_function") def test_point_on_surface(self): qs = Country.objects.annotate( point_on_surface=functions.PointOnSurface("mpoly") ) for country in qs: self.assertTrue(country.mpoly.intersection(country.point_on_surface)) @skipUnlessDBFeature("has_Reverse_function") def test_reverse_geom(self): coords = [(-95.363151, 29.763374), (-95.448601, 29.713803)] Track.objects.create(name="Foo", line=LineString(coords)) track = 
Track.objects.annotate(reverse_geom=functions.Reverse("line")).get( name="Foo" ) coords.reverse() self.assertEqual(tuple(coords), track.reverse_geom.coords) @skipUnlessDBFeature("has_Rotate_function") def test_rotate(self): angle = math.pi tests = [ {"angle": angle}, {"angle": angle, "origin": Point(0, 0)}, {"angle": angle, "origin": Point(1, 1)}, ] for params in tests: with self.subTest(params=params): qs = Country.objects.annotate( rotated=functions.Rotate("mpoly", **params) ) for country in qs: for p1, p2 in zip(country.mpoly, country.rotated): for r1, r2 in zip(p1, p2): for c1, c2 in zip(r1.coords, r2.coords): origin = params.get("origin") if origin is None: origin = Point(0, 0) self.assertAlmostEqual(-c1[0] + 2 * origin.x, c2[0], 5) self.assertAlmostEqual(-c1[1] + 2 * origin.y, c2[1], 5) @skipUnlessDBFeature("has_Rotate_function") def test_rotate_invalid_params(self): angle = math.pi bad_params_tests = [ {"angle": angle, "origin": 0}, {"angle": angle, "origin": [0, 0]}, ] msg = "origin argument must be a Point" for params in bad_params_tests: with self.subTest(params=params), self.assertRaisesMessage(TypeError, msg): functions.Rotate("mpoly", **params) @skipUnlessDBFeature("has_Scale_function") def test_scale(self): xfac, yfac = 2, 3 tol = 5 # The low precision tolerance is for SpatiaLite qs = Country.objects.annotate(scaled=functions.Scale("mpoly", xfac, yfac)) for country in qs: for p1, p2 in zip(country.mpoly, country.scaled): for r1, r2 in zip(p1, p2): for c1, c2 in zip(r1.coords, r2.coords): self.assertAlmostEqual(c1[0] * xfac, c2[0], tol) self.assertAlmostEqual(c1[1] * yfac, c2[1], tol) # Test float/Decimal values qs = Country.objects.annotate( scaled=functions.Scale("mpoly", 1.5, Decimal("2.5")) ) self.assertGreater(qs[0].scaled.area, qs[0].mpoly.area) @skipUnlessDBFeature("has_SnapToGrid_function") def test_snap_to_grid(self): # Let's try and break snap_to_grid() with bad combinations of # arguments. 
for bad_args in ((), range(3), range(5)): with self.assertRaises(ValueError): Country.objects.annotate(snap=functions.SnapToGrid("mpoly", *bad_args)) for bad_args in (("1.0",), (1.0, None), tuple(map(str, range(4)))): with self.assertRaises(TypeError): Country.objects.annotate(snap=functions.SnapToGrid("mpoly", *bad_args)) # Boundary for San Marino, courtesy of Bjorn Sandvik of # thematicmapping.org from the world borders dataset he provides. wkt = ( "MULTIPOLYGON(((12.41580 43.95795,12.45055 43.97972,12.45389 43.98167," "12.46250 43.98472,12.47167 43.98694,12.49278 43.98917," "12.50555 43.98861,12.51000 43.98694,12.51028 43.98277," "12.51167 43.94333,12.51056 43.93916,12.49639 43.92333," "12.49500 43.91472,12.48778 43.90583,12.47444 43.89722," "12.46472 43.89555,12.45917 43.89611,12.41639 43.90472," "12.41222 43.90610,12.40782 43.91366,12.40389 43.92667," "12.40500 43.94833,12.40889 43.95499,12.41580 43.95795)))" ) Country.objects.create(name="San Marino", mpoly=fromstr(wkt)) # Because floating-point arithmetic isn't exact, we set a tolerance # to pass into GEOS `equals_exact`. 
tol = 0.000000001 # SELECT AsText(ST_SnapToGrid("geoapp_country"."mpoly", 0.1)) # FROM "geoapp_country" # WHERE "geoapp_country"."name" = 'San Marino'; ref = fromstr("MULTIPOLYGON(((12.4 44,12.5 44,12.5 43.9,12.4 43.9,12.4 44)))") self.assertTrue( ref.equals_exact( Country.objects.annotate(snap=functions.SnapToGrid("mpoly", 0.1)) .get(name="San Marino") .snap, tol, ) ) # SELECT AsText(ST_SnapToGrid("geoapp_country"."mpoly", 0.05, 0.23)) # FROM "geoapp_country" # WHERE "geoapp_country"."name" = 'San Marino'; ref = fromstr( "MULTIPOLYGON(((12.4 43.93,12.45 43.93,12.5 43.93,12.45 43.93,12.4 43.93)))" ) self.assertTrue( ref.equals_exact( Country.objects.annotate(snap=functions.SnapToGrid("mpoly", 0.05, 0.23)) .get(name="San Marino") .snap, tol, ) ) # SELECT AsText( # ST_SnapToGrid("geoapp_country"."mpoly", 0.5, 0.17, 0.05, 0.23)) # FROM "geoapp_country" # WHERE "geoapp_country"."name" = 'San Marino'; ref = fromstr( "MULTIPOLYGON(((12.4 43.87,12.45 43.87,12.45 44.1,12.5 44.1,12.5 43.87," "12.45 43.87,12.4 43.87)))" ) self.assertTrue( ref.equals_exact( Country.objects.annotate( snap=functions.SnapToGrid("mpoly", 0.05, 0.23, 0.5, 0.17) ) .get(name="San Marino") .snap, tol, ) ) @skipUnlessDBFeature("has_SymDifference_function") def test_sym_difference(self): geom = Point(5, 23, srid=4326) qs = Country.objects.annotate( sym_difference=functions.SymDifference("mpoly", geom) ) # Oracle does something screwy with the Texas geometry. if connection.ops.oracle: qs = qs.exclude(name="Texas") for country in qs: self.assertTrue( country.mpoly.sym_difference(geom).equals(country.sym_difference) ) @skipUnlessDBFeature("has_Transform_function") def test_transform(self): # Pre-transformed points for Houston and Pueblo. ptown = fromstr("POINT(992363.390841912 481455.395105533)", srid=2774) # Asserting the result of the transform operation with the values in # the pre-transformed points. 
h = City.objects.annotate(pt=functions.Transform("point", ptown.srid)).get( name="Pueblo" ) self.assertEqual(2774, h.pt.srid) # Precision is low due to version variations in PROJ and GDAL. self.assertLess(ptown.x - h.pt.x, 1) self.assertLess(ptown.y - h.pt.y, 1) @skipUnlessDBFeature("has_Translate_function") def test_translate(self): xfac, yfac = 5, -23 qs = Country.objects.annotate( translated=functions.Translate("mpoly", xfac, yfac) ) for c in qs: for p1, p2 in zip(c.mpoly, c.translated): for r1, r2 in zip(p1, p2): for c1, c2 in zip(r1.coords, r2.coords): # The low precision is for SpatiaLite self.assertAlmostEqual(c1[0] + xfac, c2[0], 5) self.assertAlmostEqual(c1[1] + yfac, c2[1], 5) # Some combined function tests @skipUnlessDBFeature( "has_Difference_function", "has_Intersection_function", "has_SymDifference_function", "has_Union_function", ) def test_diff_intersection_union(self): geom = Point(5, 23, srid=4326) qs = Country.objects.annotate( difference=functions.Difference("mpoly", geom), sym_difference=functions.SymDifference("mpoly", geom), union=functions.Union("mpoly", geom), intersection=functions.Intersection("mpoly", geom), ) if connection.ops.oracle: # Should be able to execute the queries; however, they won't be the # same as GEOS (because Oracle doesn't use GEOS internally like # PostGIS or SpatiaLite). 
return for c in qs: self.assertTrue(c.mpoly.difference(geom).equals(c.difference)) if connection.features.empty_intersection_returns_none: self.assertIsNone(c.intersection) else: self.assertIs(c.intersection.empty, True) self.assertTrue(c.mpoly.sym_difference(geom).equals(c.sym_difference)) self.assertTrue(c.mpoly.union(geom).equals(c.union)) @skipUnlessDBFeature("has_Union_function") def test_union(self): """Union with all combinations of geometries/geometry fields.""" geom = Point(-95.363151, 29.763374, srid=4326) union = ( City.objects.annotate(union=functions.Union("point", geom)) .get(name="Dallas") .union ) expected = fromstr( "MULTIPOINT(-96.801611 32.782057,-95.363151 29.763374)", srid=4326 ) self.assertTrue(expected.equals(union)) union = ( City.objects.annotate(union=functions.Union(geom, "point")) .get(name="Dallas") .union ) self.assertTrue(expected.equals(union)) union = ( City.objects.annotate(union=functions.Union("point", "point")) .get(name="Dallas") .union ) expected = GEOSGeometry("POINT(-96.801611 32.782057)", srid=4326) self.assertTrue(expected.equals(union)) union = ( City.objects.annotate(union=functions.Union(geom, geom)) .get(name="Dallas") .union ) self.assertTrue(geom.equals(union)) @skipUnlessDBFeature("has_Union_function", "has_Transform_function") def test_union_mixed_srid(self): """The result SRID depends on the order of parameters.""" geom = Point(61.42915, 55.15402, srid=4326) geom_3857 = geom.transform(3857, clone=True) tol = 0.001 for city in City.objects.annotate(union=functions.Union("point", geom_3857)): expected = city.point | geom self.assertTrue(city.union.equals_exact(expected, tol)) self.assertEqual(city.union.srid, 4326) for city in City.objects.annotate(union=functions.Union(geom_3857, "point")): expected = geom_3857 | city.point.transform(3857, clone=True) self.assertTrue(expected.equals_exact(city.union, tol)) self.assertEqual(city.union.srid, 3857) def test_argument_validation(self): with self.assertRaisesMessage( 
ValueError, "SRID is required for all geometries." ): City.objects.annotate(geo=functions.GeoFunc(Point(1, 1))) msg = "GeoFunc function requires a GeometryField in position 1, got CharField." with self.assertRaisesMessage(TypeError, msg): City.objects.annotate(geo=functions.GeoFunc("name")) msg = "GeoFunc function requires a geometric argument in position 1." with self.assertRaisesMessage(TypeError, msg): City.objects.annotate(union=functions.GeoFunc(1, "point")).get( name="Dallas" ) @skipUnlessDBFeature("has_GeometryType_function") def test_geometry_type(self): test_features = [ Feature(name="Point", geom=Point(0, 0)), Feature(name="LineString", geom=LineString((0, 0), (1, 1))), Feature(name="Polygon", geom=Polygon(((0, 0), (1, 0), (1, 1), (0, 0)))), Feature( name="MultiLineString", geom=MultiLineString( LineString((0, 0), (1, 1)), LineString((1, 1), (2, 2)) ), ), Feature( name="MultiPolygon", geom=MultiPolygon( Polygon(((0, 0), (1, 0), (1, 1), (0, 0))), Polygon(((1, 1), (2, 1), (2, 2), (1, 1))), ), ), ] expected_results = [ ("POINT", Point), ("LINESTRING", LineString), ("POLYGON", Polygon), ("MULTILINESTRING", MultiLineString), ("MULTIPOLYGON", MultiPolygon), ] if can_save_multipoint: test_features.append( Feature(name="MultiPoint", geom=MultiPoint(Point(0, 0), Point(1, 1))) ) expected_results.append(("MULTIPOINT", MultiPoint)) for test_feature, (geom_type, geom_class) in zip( test_features, expected_results, strict=True ): with self.subTest(geom_type=geom_type, geom=test_feature.geom.wkt): test_feature.save() obj = ( Feature.objects.annotate( geometry_type=functions.GeometryType("geom") ) .filter(geom__geom_type=geom_type) .get() ) self.assertIsInstance(obj.geom, geom_class) self.assertEqual(obj.geometry_type, geom_type)
language: python
source: github
repo: https://github.com/django/django
path: tests/gis_tests/geoapp/test_functions.py
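The `AsGeoJSON` tests above round coordinates when a `precision` argument is given and reject non-integer precisions. As a minimal, self-contained sketch of that behavior in plain Python (the `round_geojson_point` helper is illustrative only and not part of Django's GIS API):

```python
import json


def round_geojson_point(geojson_str, precision):
    """Round a GeoJSON Point's coordinates to `precision` decimal places,
    mimicking client-side what a database AsGeoJSON precision argument does.
    Illustrative helper, not a Django API."""
    if not isinstance(precision, int):
        # Mirror the tests above: precision must be an integer.
        raise TypeError("precision must be an integer")
    data = json.loads(geojson_str)
    data["coordinates"] = [round(c, precision) for c in data["coordinates"]]
    return data


chicago = '{"type":"Point","coordinates":[-87.65018,41.85039]}'
print(round_geojson_point(chicago, 3))  # coordinates become [-87.65, 41.85]
```

A non-integer precision fails the same way the annotated query does, with a `TypeError` raised before any parsing happens.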
# -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals

from gaebusiness.business import CommandExecutionException
from tekton.gae.middleware.json_middleware import JsonResponse

from servico_app import facade


def index():
    cmd = facade.list_servicos_cmd()
    servico_list = cmd()
    short_form = facade.servico_short_form()
    servico_short = [short_form.fill_with_model(m) for m in servico_list]
    return JsonResponse(servico_short)


def save(**servico_properties):
    cmd = facade.save_servico_cmd(**servico_properties)
    return _save_or_update_json_response(cmd)


def update(servico_id, **servico_properties):
    cmd = facade.update_servico_cmd(servico_id, **servico_properties)
    return _save_or_update_json_response(cmd)


def delete(servico_id):
    facade.delete_servico_cmd(servico_id)()


def _save_or_update_json_response(cmd):
    try:
        servico = cmd()
    except CommandExecutionException:
        return JsonResponse({'errors': cmd.errors})
    short_form = facade.servico_short_form()
    return JsonResponse(short_form.fill_with_model(servico))
language: unknown
source: codeparrot/codeparrot-clean
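The SymPy free-group module reproduced next stores words as tuples of `(symbol, exponent)` pairs and keeps them freely reduced (adjacent powers of the same generator merged, zero exponents dropped). As a self-contained sketch of that reduction idea under a toy representation (the `reduce_word` name and plain-string symbols are illustrative, not SymPy's API):

```python
def reduce_word(word):
    """Freely reduce a word given as a tuple of (symbol, exponent) pairs,
    e.g. x**2 * x**-1 * y -> x * y.  Uses the standard stack-based pass:
    merge each pair into the top of the stack, popping when exponents
    cancel so newly adjacent pairs can merge too."""
    out = []
    for sym, exp in word:
        if out and out[-1][0] == sym:
            merged = out[-1][1] + exp
            out.pop()
            if merged:
                out.append((sym, merged))
        elif exp:
            out.append((sym, exp))
    return tuple(out)


# x**2 * x**-1 * y reduces to x * y:
print(reduce_word((("x", 2), ("x", -1), ("y", 1))))  # (('x', 1), ('y', 1))
```

Because cancellations can expose new adjacencies (as in `x * y * y**-1 * x`, which reduces to `x**2`), the pop-then-recheck step is what makes a single left-to-right pass sufficient.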
# -*- coding: utf-8 -*-
from __future__ import print_function, division

from sympy.core.basic import Basic
from sympy.core.compatibility import is_sequence, as_int, string_types
from sympy.core.expr import Expr
from sympy.core.symbol import Symbol, symbols as _symbols
from sympy.core.sympify import CantSympify
from mpmath import isint
from sympy.core import S
from sympy.printing.defaults import DefaultPrinting
from sympy.utilities import public
from sympy.utilities.iterables import flatten
from sympy.utilities.magic import pollute
from sympy import sign


@public
def free_group(symbols):
    """Construct a free group returning ``(FreeGroup, (f_0, f_1, ..., f_(n-1)))``.

    Parameters
    ----------
    symbols : str, Symbol/Expr or sequence of str, Symbol/Expr (may be empty)

    Examples
    ========

    >>> from sympy.combinatorics.free_groups import free_group
    >>> F, x, y, z = free_group("x, y, z")
    >>> F
    <free group on the generators (x, y, z)>
    >>> x**2*y**-1
    x**2*y**-1
    >>> type(_)
    <class 'sympy.combinatorics.free_groups.FreeGroupElement'>

    """
    _free_group = FreeGroup(symbols)
    return (_free_group,) + tuple(_free_group.generators)


@public
def xfree_group(symbols):
    """Construct a free group returning ``(FreeGroup, (f_0, f_1, ..., f_(n-1)))``.

    Parameters
    ----------
    symbols : str, Symbol/Expr or sequence of str, Symbol/Expr (may be empty)

    Examples
    ========

    >>> from sympy.combinatorics.free_groups import xfree_group
    >>> F, (x, y, z) = xfree_group("x, y, z")
    >>> F
    <free group on the generators (x, y, z)>
    >>> y**2*x**-2*z**-1
    y**2*x**-2*z**-1
    >>> type(_)
    <class 'sympy.combinatorics.free_groups.FreeGroupElement'>

    """
    _free_group = FreeGroup(symbols)
    return (_free_group, _free_group.generators)


@public
def vfree_group(symbols):
    """Construct a free group and inject ``f_0, f_1, ..., f_(n-1)`` as symbols
    into the global namespace.

    Parameters
    ----------
    symbols : str, Symbol/Expr or sequence of str, Symbol/Expr (may be empty)

    Examples
    ========

    >>> from sympy.combinatorics.free_groups import vfree_group
    >>> vfree_group("x, y, z")
    <free group on the generators (x, y, z)>
    >>> x**2*y**-2*z
    x**2*y**-2*z
    >>> type(_)
    <class 'sympy.combinatorics.free_groups.FreeGroupElement'>

    """
    _free_group = FreeGroup(symbols)
    pollute([sym.name for sym in _free_group.symbols], _free_group.generators)
    return _free_group


def _parse_symbols(symbols):
    if not symbols:
        return tuple()
    if isinstance(symbols, string_types):
        return _symbols(symbols, seq=True)
    elif isinstance(symbols, Expr):
        return (symbols,)
    elif is_sequence(symbols):
        if all(isinstance(s, string_types) for s in symbols):
            return _symbols(symbols)
        elif all(isinstance(s, Expr) for s in symbols):
            return symbols
    raise ValueError(
        "`symbols` must be a str, Symbol/Expr or a sequence of these types"
    )


##############################################################################
#                          FREE GROUP                                        #
##############################################################################

_free_group_cache = {}


class FreeGroup(DefaultPrinting):
    """
    Free group with finite or infinite number of generators. Its input API
    is that of an str, Symbol/Expr or a sequence of str, Symbol/Expr (may be
    empty).

    References
    ==========

    [1] http://www.gap-system.org/Manuals/doc/ref/chap37.html

    [2] https://en.wikipedia.org/wiki/Free_group

    See Also
    ========

    sympy.polys.rings.PolyRing

    """
    is_associative = True
    is_group = True
    is_FreeGroup = True
    is_PermutationGroup = False

    def __new__(cls, symbols):
        symbols = tuple(_parse_symbols(symbols))
        rank = len(symbols)
        _hash = hash((cls.__name__, symbols, rank))
        obj = _free_group_cache.get(_hash)

        if obj is None:
            obj = object.__new__(cls)
            obj._hash = _hash
            obj._rank = rank
            # dtype method is used to create new instances of FreeGroupElement
            obj.dtype = type("FreeGroupElement", (FreeGroupElement,), {"group": obj})
            obj.symbols = symbols
            obj.generators = obj._generators()
            obj._gens_set = set(obj.generators)
            for symbol, generator in zip(obj.symbols, obj.generators):
                if isinstance(symbol, Symbol):
                    name = symbol.name
                    if hasattr(obj, name):
                        setattr(obj, name, generator)

            _free_group_cache[_hash] = obj

        return obj

    def _generators(group):
        """Returns the generators of the FreeGroup.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> F, x, y, z = free_group("x, y, z")
        >>> F.generators
        (x, y, z)

        """
        gens = []
        for sym in group.symbols:
            elm = ((sym, 1),)
            gens.append(group.dtype(elm))
        return tuple(gens)

    def clone(self, symbols=None):
        return self.__class__(symbols or self.symbols)

    def __contains__(self, i):
        """Return True if ``i`` is contained in FreeGroup."""
        if not isinstance(i, FreeGroupElement):
            raise TypeError("FreeGroup contains only FreeGroupElement as "
                            "elements, not elements of type %s" % type(i))
        group = i.group
        return self == group

    def __hash__(self):
        return self._hash

    def __len__(self):
        return self.rank

    def __str__(self):
        if self.rank > 30:
            str_form = "<free group with %s generators>" % self.rank
        else:
            str_form = "<free group on the generators "
            gens = self.generators
            str_form += str(gens) + ">"
        return str_form

    __repr__ = __str__

    def __getitem__(self, index):
        symbols = self.symbols[index]
        return self.clone(symbols=symbols)

    def __eq__(self, other):
        """No ``FreeGroup`` is equal to any "other" ``FreeGroup``."""
        return self is other

    def index(self, gen):
        """Returns the index of the generator `gen` from ``(f_0, ..., f_(n-1))``.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> F, x, y = free_group("x, y")
        >>> F.index(y)
        1

        """
        if isinstance(gen, self.dtype):
            return self.generators.index(gen)
        else:
            raise ValueError("expected a generator of Free Group %s, got %s"
                             % (self, gen))

    def order(self):
        """Returns the order of the free group."""
        if self.rank == 0:
            return 1
        else:
            return S.Infinity

    @property
    def elements(self):
        if self.rank == 0:
            # A set containing Identity element of `FreeGroup` self is
            # returned
            return {self.identity}
        else:
            raise ValueError("Group contains infinitely many elements"
                             ", hence can't be represented")

    @property
    def rank(self):
        r"""
        In group theory, the `rank` of a group `G`, denoted `G.rank`,
        can refer to the smallest cardinality of a generating set
        for G, that is

        \operatorname{rank}(G)=\min\{ |X|: X\subseteq G, \langle X\rangle =G\}.

        """
        return self._rank

    def _symbol_index(self, symbol):
        """Returns the index of a generator for free group `self`, while
        returns the -ve index of the inverse generator.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> from sympy import Symbol
        >>> F, x, y = free_group("x, y")
        >>> F._symbol_index(-Symbol('x'))
        0

        """
        try:
            return self.symbols.index(symbol)
        except ValueError:
            return -self.symbols.index(-symbol)

    @property
    def is_abelian(self):
        """Returns if the group is Abelian.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, x, y, z = free_group("x y z")
        >>> f.is_abelian
        False

        """
        if self.rank == 0 or self.rank == 1:
            return True
        else:
            return False

    @property
    def identity(self):
        """Returns the identity element of free group."""
        return self.dtype()

    def contains(self, g):
        """Tests if Free Group element ``g`` belong to self, ``G``.

        In mathematical terms any linear combination of generators
        of a Free Group is contained in it.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, x, y, z = free_group("x y z")
        >>> f.contains(x**3*y**2)
        True

        """
        if not isinstance(g, FreeGroupElement):
            return False
        elif self != g.group:
            return False
        else:
            return True

    def is_subgroup(self, F):
        """Return True if all elements of `self` belong to `F`."""
        return F.is_group and all([self.contains(gen) for gen in F.generators])

    def center(self):
        """Returns the center of the free group `self`."""
        return {self.identity}


############################################################################
#                          FreeGroupElement                                #
############################################################################


class FreeGroupElement(CantSympify, DefaultPrinting, tuple):
    """Used to create elements of FreeGroup. It can not be used directly to
    create a free group element. It is called by the `dtype` method of the
    `FreeGroup` class.

    """
    is_assoc_word = True

    def new(self, init):
        return self.__class__(init)

    _hash = None

    def __hash__(self):
        _hash = self._hash
        if _hash is None:
            self._hash = _hash = hash((self.group, frozenset(tuple(self))))
        return _hash

    def copy(self):
        return self.new(self)

    @property
    def is_identity(self):
        if self.array_form == tuple():
            return True
        else:
            return False

    @property
    def array_form(self):
        """
        SymPy provides two different internal kinds of representation
        of associative words. The first one is called the `array_form`
        which is a tuple containing `tuples` as its elements, where the
        size of each tuple is two. At the first position the tuple
        contains the `symbol-generator`, while at the second position
        of tuple contains the exponent of that generator at the position.
        Since elements (i.e. words) don't commute, the indexing of tuple
        makes that property to stay.

        The structure in ``array_form`` of ``FreeGroupElement`` is of form:

        ``( ( symbol_of_gen , exponent ), ( , ), ...
( , ) )`` Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> f, x, y, z = free_group("x y z") >>> (x*z).array_form ((x, 1), (z, 1)) >>> (x**2*z*y*x**2).array_form ((x, 2), (z, 1), (y, 1), (x, 2)) See Also ======== letter_repr """ return tuple(self) @property def letter_form(self): """ The letter representation of a ``FreeGroupElement`` is a tuple of generator symbols, with each entry corresponding to a group generator. Inverses of the generators are represented by negative generator symbols. Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> f, a, b, c, d = free_group("a b c d") >>> (a**3).letter_form (a, a, a) >>> (a**2*d**-2*a*b**-4).letter_form (a, a, -d, -d, a, -b, -b, -b, -b) >>> (a**-2*b**3*d).letter_form (-a, -a, b, b, b, d) See Also ======== array_form """ return tuple(flatten([(i,)*j if j > 0 else (-i,)*(-j) for i, j in self.array_form])) def __getitem__(self, i): group = self.group r = self.letter_form[i] if r.is_Symbol: return group.dtype(((r, 1),)) else: return group.dtype(((-r, -1),)) def index(self, gen): if len(gen) != 1: raise ValueError() return (self.letter_form).index(gen.letter_form[0]) @property def letter_form_elm(self): """ """ group = self.group r = self.letter_form return [group.dtype(((elm,1),)) if elm.is_Symbol \ else group.dtype(((-elm,-1),)) for elm in r] @property def ext_rep(self): """This is called the External Representation of ``FreeGroupElement`` """ return tuple(flatten(self.array_form)) def __contains__(self, gen): return gen.array_form[0][0] in tuple([r[0] for r in self.array_form]) def __str__(self): if self.is_identity: return "<identity>" symbols = self.group.symbols str_form = "" array_form = self.array_form for i in range(len(array_form)): if i == len(array_form) - 1: if array_form[i][1] == 1: str_form += str(array_form[i][0]) else: str_form += str(array_form[i][0]) + \ "**" + str(array_form[i][1]) else: if array_form[i][1] == 1: str_form += str(array_form[i][0]) 
                    str_form += "*"
                else:
                    str_form += str(array_form[i][0]) + \
                        "**" + str(array_form[i][1]) + "*"
        return str_form

    __repr__ = __str__

    def __pow__(self, n):
        n = as_int(n)
        group = self.group
        if n == 0:
            return group.identity
        if n < 0:
            n = -n
            return (self.inverse())**n
        result = self
        for i in range(n - 1):
            result = result*self
        # this method can be improved instead of just returning the
        # multiplication of elements
        return result

    def __mul__(self, other):
        """Returns the product of elements belonging to the same ``FreeGroup``.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, x, y, z = free_group("x y z")
        >>> x*y**2*y**-4
        x*y**-2
        >>> z*y**-2
        z*y**-2
        >>> x**2*y*y**-1*x**-2
        <identity>

        """
        group = self.group
        if not isinstance(other, group.dtype):
            raise TypeError("only FreeGroup elements of same FreeGroup can "
                            "be multiplied")
        if self.is_identity:
            return other
        if other.is_identity:
            return self
        r = list(self.array_form + other.array_form)
        zero_mul_simp(r, len(self.array_form) - 1)
        return group.dtype(tuple(r))

    def __div__(self, other):
        group = self.group
        if not isinstance(other, group.dtype):
            raise TypeError("only FreeGroup elements of same FreeGroup can "
                            "be divided")
        return self*(other.inverse())

    def __rdiv__(self, other):
        group = self.group
        if not isinstance(other, group.dtype):
            raise TypeError("only FreeGroup elements of same FreeGroup can "
                            "be divided")
        return other*(self.inverse())

    __truediv__ = __div__

    __rtruediv__ = __rdiv__

    def __add__(self, other):
        return NotImplemented

    def inverse(self):
        """
        Returns the inverse of a ``FreeGroupElement`` element

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, x, y, z = free_group("x y z")
        >>> x.inverse()
        x**-1
        >>> (x*y).inverse()
        y**-1*x**-1

        """
        group = self.group
        r = tuple([(i, -j) for i, j in self.array_form[::-1]])
        return group.dtype(r)

    def order(self):
        """Find the order of a ``FreeGroupElement``.
        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, x, y = free_group("x y")
        >>> (x**2*y*y**-1*x**-2).order()
        1

        """
        if self.is_identity:
            return 1
        else:
            return S.Infinity

    def commutator(self, other):
        """
        Returns the commutator of `self` and `other`:
        ``self**-1*other**-1*self*other``

        """
        group = self.group
        if not isinstance(other, group.dtype):
            raise ValueError("commutator of only FreeGroupElement of the same "
                             "FreeGroup exists")
        else:
            return self.inverse()*other.inverse()*self*other

    def eliminate_word(self, gen, by):
        """
        For an associative word `self`, a generator `gen`, and an associative
        word `by`, ``eliminate_word`` returns the associative word obtained by
        replacing each occurrence of `gen` in `self` by `by`.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, x, y = free_group("x y")
        >>> w = x**5*y*x**2*y**-4*x
        >>> w.eliminate_word( x, x**2 )
        x**10*y*x**4*y**-4*x**2
        >>> w.eliminate_word( x, y**-1 )
        y**-11

        See Also
        ========

        substituted_word

        """
        group = self.group
        r = Symbol(str(gen))
        arr = self.array_form
        array = []
        by_arr = list(by.array_form)
        l_by = len(by_arr)
        for i in range(len(arr)):
            if arr[i][0] == r:
                # TODO: this shouldn't be checked again and again, since `by`
                # is fixed
                if l_by == 1:
                    array.append((by_arr[0][0], by_arr[0][1]*arr[i][1]))
                    zero_mul_simp(array, len(array) - l_by - 1)
                else:
                    k = arr[i][1]
                    sig = sign(k)
                    for j in range(sig*k):
                        array.extend(list((by**sig).array_form))
                        zero_mul_simp(array, len(array) - l_by - 1)
            else:
                array.append(arr[i])
                zero_mul_simp(array, len(array) - 2)
        return group.dtype(tuple(array))

    def __len__(self):
        """
        For an associative word `self`, returns the number of letters in it.
Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> f, a, b = free_group("a b") >>> w = a**5*b*a**2*b**-4*a >>> len(w) 13 >>> len(a**17) 17 >>> len(w**0) 0 """ return sum([abs(j) for (i, j) in self]) def __eq__(self, other): """ Two associative words are equal if they are words over the same alphabet and if they are sequences of the same letters. This is equivalent to saying that the external representations of the words are equal. There is no "universal" empty word, every alphabet has its own empty word. Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> f, swapnil0, swapnil1 = free_group("swapnil0 swapnil1") >>> f <free group on the generators (swapnil0, swapnil1)> >>> g, swap0, swap1 = free_group("swap0 swap1") >>> g <free group on the generators (swap0, swap1)> >>> swapnil0 == swapnil1 False >>> swapnil0*swapnil1 == swapnil1/swapnil1*swapnil0*swapnil1 True >>> swapnil0*swapnil1 == swapnil1*swapnil0 False >>> swapnil1**0 == swap0**0 False """ group = self.group if not isinstance(other, group.dtype): return False return tuple.__eq__(self, other) def __lt__(self, other): """ The ordering of associative words is defined by length and lexicography (this ordering is called short-lex ordering), that is, shorter words are smaller than longer words, and words of the same length are compared w.r.t. the lexicographical ordering induced by the ordering of generators. Generators are sorted according to the order in which they were created. If the generators are invertible then each generator `g` is larger than its inverse `g^{-1}`, and `g^{-1}` is larger than every generator that is smaller than `g`. 
Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> f, a, b = free_group("a b") >>> b < a False >>> a < a.inverse() False """ group = self.group if not isinstance(other, group.dtype): raise TypeError("only FreeGroup elements of same FreeGroup can " "be compared") l = len(self) m = len(other) # implement lenlex order if l < m: return True elif l > m: return False a = self.letter_form b = other.letter_form for i in range(l): p = group._symbol_index(a[i]) q = group._symbol_index(b[i]) if abs(p) < abs(q): return True elif abs(p) > abs(q): return False elif p < q: return True elif p > q: return False return False def __le__(self, other): return (self == other or self < other) def __gt__(self, other): """ Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> f, x, y, z = free_group("x y z") >>> y**2 > x**2 True >>> y*z > z*y False >>> x > x.inverse() True """ group = self.group if not isinstance(other, group.dtype): raise TypeError("only FreeGroup elements of same FreeGroup can " "be compared") return not self <= other def __ge__(self, other): return not self < other def exponent_sum(self, gen): """ For an associative word `self` and a generator or inverse of generator `gen`, ``exponent_sum`` returns the number of times `gen` appears in `self` minus the number of times its inverse appears in `self`. If neither `gen` nor its inverse occur in `self` then 0 is returned. 
        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> F, x, y = free_group("x, y")
        >>> w = x**2*y**3
        >>> w.exponent_sum(x)
        2
        >>> w.exponent_sum(x**-1)
        -2
        >>> w = x**2*y**4*x**-3
        >>> w.exponent_sum(x)
        -1

        See Also
        ========

        generator_count

        """
        if len(gen) != 1:
            raise ValueError("gen must be a generator or inverse of a generator")
        s = gen.array_form[0]
        return s[1]*sum([i[1] for i in self.array_form if i[0] == s[0]])

    def generator_count(self, gen):
        """
        For an associative word `self` and a generator `gen`,
        ``generator_count`` returns the multiplicity of generator
        `gen` in `self`.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> F, x, y = free_group("x, y")
        >>> w = x**2*y**3
        >>> w.generator_count(x)
        2
        >>> w = x**2*y**4*x**-3
        >>> w.generator_count(x)
        5

        See Also
        ========

        exponent_sum

        """
        if len(gen) != 1 or gen.array_form[0][1] < 0:
            raise ValueError("gen must be a generator")
        s = gen.array_form[0]
        return s[1]*sum([abs(i[1]) for i in self.array_form if i[0] == s[0]])

    def subword(self, from_i, to_j):
        """
        For an associative word `self` and two positive integers `from_i` and
        `to_j`, subword returns the subword of `self` that begins at position
        `from_i` and ends at `to_j`, indexing is done with origin 0.
        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, a, b = free_group("a b")
        >>> w = a**5*b*a**2*b**-4*a
        >>> w.subword(2, 6)
        a**3*b

        """
        group = self.group
        if from_i < 0 or to_j > len(self):
            raise ValueError("`from_i`, `to_j` must be positive and no "
                             "greater than the length of associative word")
        if to_j <= from_i:
            return group.identity
        else:
            letter_form = self.letter_form[from_i: to_j]
            array_form = letter_form_to_array_form(letter_form, group)
            return group.dtype(array_form)

    def is_dependent(self, word):
        """
        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> F, x, y = free_group("x, y")
        >>> (x**4*y**-3).is_dependent(x**4*y**-2)
        True
        >>> (x**2*y**-1).is_dependent(x*y)
        False
        >>> (x*y**2*x*y**2).is_dependent(x*y**2)
        True
        >>> (x**12).is_dependent(x**-4)
        True

        See Also
        ========

        is_independent

        """
        self_st = str(self.letter_form)[1: -1]
        return str(word.letter_form)[1: -1] in self_st or \
            str((word**-1).letter_form)[1: -1] in self_st

    def is_independent(self, word):
        """

        See Also
        ========

        is_dependent

        """
        return not self.is_dependent(word)

    def contains_generators(self):
        """
        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> F, x, y, z = free_group("x, y, z")
        >>> (x**2*y**-1).contains_generators()
        {x, y}
        >>> (x**3*z).contains_generators()
        {x, z}

        """
        group = self.group
        gens = set()
        for syllable in self.array_form:
            gens.add(group.dtype(((syllable[0], 1),)))
        return set(gens)

    def cyclic_subword(self, from_i, to_j):
        group = self.group
        l = len(self)
        letter_form = self.letter_form
        period1 = int(from_i/l)
        if from_i >= l:
            from_i -= l*period1
            to_j -= l*period1
        diff = to_j - from_i
        word = letter_form[from_i: to_j]
        period2 = int(to_j/l) - 1
        word += letter_form*period2 + letter_form[:diff-l+from_i-l*period2]
        word = letter_form_to_array_form(word, group)
        return group.dtype(word)

    def cyclic_conjugates(self):
        """Returns all words cyclically conjugate to the word `self`.
References ========== http://planetmath.org/cyclicpermutation Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> F, x, y = free_group("x, y") >>> w = x*y*x*y*x >>> w.cyclic_conjugates() {x*y*x**2*y, x**2*y*x*y, y*x*y*x**2, y*x**2*y*x, x*y*x*y*x} >>> s = x*y*x**2*y*x >>> s.cyclic_conjugates() {x**2*y*x**2*y, y*x**2*y*x**2, x*y*x**2*y*x} """ return {self.cyclic_subword(i, i+len(self)) for i in range(len(self))} def is_cyclic_conjugate(self, w): """ Checks whether words ``self``, ``w`` are cyclic conjugates. Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> F, x, y = free_group("x, y") >>> w1 = x**2*y**5 >>> w2 = x*y**5*x >>> w1.is_cyclic_conjugate(w2) True >>> w3 = x**-1*y**5*x**-1 >>> w3.is_cyclic_conjugate(w2) False """ l1 = len(self) l2 = len(w) if l1 != l2: return False w1 = self.identity_cyclic_reduction() w2 = w.identity_cyclic_reduction() letter1 = w1.letter_form letter2 = w2.letter_form str1 = ' '.join(map(str, letter1)) str2 = ' '.join(map(str, letter2)) if len(str1) != len(str2): return False return str1 in str2 + ' ' + str2 def number_syllables(self): """Returns the number of syllables of the associative word `self`. Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> f, swapnil0, swapnil1 = free_group("swapnil0 swapnil1") >>> (swapnil1**3*swapnil0*swapnil1**-1).number_syllables() 3 """ return len(self.array_form) def exponent_syllable(self, i): """ Returns the exponent of the `i`-th syllable of the associative word `self`. Examples ======== >>> from sympy.combinatorics.free_groups import free_group >>> f, a, b = free_group("a b") >>> w = a**5*b*a**2*b**-4*a >>> w.exponent_syllable( 2 ) 2 """ return self.array_form[i][1] def generator_syllable(self, i): """ Returns the number of the generator that is involved in the i-th syllable of the associative word `self`. 
        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, a, b = free_group("a b")
        >>> w = a**5*b*a**2*b**-4*a
        >>> w.generator_syllable( 3 )
        b

        """
        return self.array_form[i][0]

    def sub_syllables(self, from_i, to_j):
        """
        `sub_syllables` returns the subword of the associative word `self`
        that consists of syllables from positions `from_i` to `to_j`, where
        `from_i` and `to_j` must be positive integers and indexing is done
        with origin 0.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> f, a, b = free_group("a, b")
        >>> w = a**5*b*a**2*b**-4*a
        >>> w.sub_syllables(1, 2)
        b
        >>> w.sub_syllables(3, 3)
        <identity>

        """
        if not isinstance(from_i, int) or not isinstance(to_j, int):
            raise ValueError("both arguments should be integers")
        group = self.group
        if to_j <= from_i:
            return group.identity
        else:
            r = tuple(self.array_form[from_i: to_j])
            return group.dtype(r)

    def substituted_word(self, from_i, to_j, by):
        """
        Returns the associative word obtained by replacing the subword of
        `self` that begins at position `from_i` and ends at position `to_j`
        by the associative word `by`. `from_i` and `to_j` must be positive
        integers, indexing is done with origin 0. In other words,
        ``w.substituted_word(from_i, to_j, by)`` is the product of the three
        words: ``w.subword(0, from_i - 1)``, ``by``, and
        ``w.subword(to_j + 1, len(w))``.
        See Also
        ========

        eliminate_word

        """
        lw = len(self)
        if from_i > to_j or from_i > lw or to_j > lw:
            raise ValueError("values should be within bounds")

        # otherwise there are four possibilities

        # first if from=1 and to=lw then
        if from_i == 0 and to_j == lw - 1:
            return by
        # second if from_i=1 (and to_j < lw) then
        elif from_i == 0:
            return by*self.subword(to_j, lw - 1)
        # third if to_j=lw (and from_i > 1) then
        elif to_j == lw:
            return self.subword(0, from_i - 1)*by
        # finally
        else:
            return self.subword(0, from_i - 1)*by*self.subword(to_j + 1, lw)

    def is_cyclically_reduced(self):
        r"""Returns whether the word is cyclically reduced or not.
        A word is cyclically reduced if by forming the cycle of the
        word, the word is not reduced, i.e a word w = `a_1 ... a_n`
        is called cyclically reduced if `a_1 \ne a_n^{-1}`.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> F, x, y = free_group("x, y")
        >>> (x**2*y**-1*x**-1).is_cyclically_reduced()
        False
        >>> (y*x**2*y**2).is_cyclically_reduced()
        True

        """
        if not self:
            return True
        return self[0] != self[-1]**-1

    def identity_cyclic_reduction(self):
        """Return a unique cyclically reduced version of the word.

        Examples
        ========

        >>> from sympy.combinatorics.free_groups import free_group
        >>> F, x, y = free_group("x, y")
        >>> (x**2*y**2*x**-1).identity_cyclic_reduction()
        x*y**2
        >>> (x**-3*y**-1*x**5).identity_cyclic_reduction()
        x**2*y**-1

        References
        ==========

        http://planetmath.org/cyclicallyreduced

        """
        if self.is_cyclically_reduced():
            return self.copy()
        group = self.group
        exp1 = self.exponent_syllable(0)
        exp2 = self.exponent_syllable(-1)
        r = exp1 + exp2
        if r == 0:
            rep = self.array_form[1: self.number_syllables() - 1]
        else:
            rep = ((self.generator_syllable(0), exp1 + exp2),) + \
                self.array_form[1: self.number_syllables() - 1]
        return group.dtype(rep)


def letter_form_to_array_form(array_form, group):
    """
    This method converts a list given with possible repetitions of elements
    in it.
It returns a new list such that repetitions of consecutive elements is removed and replace with a tuple element of size two such that the first index contains `value` and the second index contains the number of consecutive repetitions of `value`. """ a = list(array_form[:]) new_array = [] n = 1 symbols = group.symbols for i in range(len(a)): if i == len(a) - 1: if a[i] == a[i - 1]: if (-a[i]) in symbols: new_array.append((-a[i], -n)) else: new_array.append((a[i], n)) else: if (-a[i]) in symbols: new_array.append((-a[i], -1)) else: new_array.append((a[i], 1)) return new_array elif a[i] == a[i + 1]: n += 1 else: if (-a[i]) in symbols: new_array.append((-a[i], -n)) else: new_array.append((a[i], n)) n = 1 def zero_mul_simp(l, index): """Used to combine two reduced words.""" while index >=0 and index < len(l) - 1 and l[index][0] is l[index + 1][0]: exp = l[index][1] + l[index + 1][1] base = l[index][0] l[index] = (base, exp) del l[index + 1] if l[index][1] == 0: del l[index] index -= 1
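The `zero_mul_simp` helper above is the heart of word reduction: after concatenating two words in `array_form`, adjacent syllables over the same generator are merged and zero exponents dropped. A minimal self-contained sketch of that behaviour (plain strings stand in for SymPy symbols; this is an illustration, not the SymPy implementation itself):

```python
# Standalone sketch of the syllable-merging step used by __mul__ above.
# A word is a tuple of (generator, exponent) pairs, mirroring array_form.

def zero_mul_simp(l, index):
    """Merge adjacent syllables over the same generator; drop zero exponents."""
    while 0 <= index < len(l) - 1 and l[index][0] == l[index + 1][0]:
        exp = l[index][1] + l[index + 1][1]
        l[index] = (l[index][0], exp)
        del l[index + 1]
        if exp == 0:
            del l[index]
            index -= 1

def multiply(w1, w2):
    """Concatenate two words and reduce at the seam, as __mul__ does."""
    r = list(w1) + list(w2)
    zero_mul_simp(r, len(w1) - 1)
    return tuple(r)

# x*y**2 times y**-2*z reduces to x*z
print(multiply((("x", 1), ("y", 2)), (("y", -2), ("z", 1))))
```

Note that a cancellation at the seam can expose a new cancellation (hence the `while` loop and the `index -= 1` step), which is why `x * x**-1` collapses all the way to the empty word.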
unknown
codeparrot/codeparrot-clean
def switch_first_and_last(sequence):
    '''return a sequence with the first and last items exchanged.'''
    first = sequence[0]
    last = sequence[-1]
    # For a list, sequence[0] is a bare element (e.g. an int), and a bare
    # element can't be concatenated onto a list with +, so it has to be
    # wrapped in a one-item list first.  Strings don't need this, because
    # indexing a string gives back another string.
    if isinstance(sequence, list):
        return [last] + sequence[1:-1] + [first]
    else:
        return last + sequence[1:-1] + first

def remove_every_other(sequence):
    '''return a sequence with every other item removed'''
    return sequence[::2]

def middle_skip(sequence):
    '''return a sequence with the first and last 4 items removed,
    and every other item in between'''
    return sequence[4:-4][::2]

def reverse(sequence):
    '''return a sequence reversed (just with slicing)'''
    return sequence[::-1]

def mix_thirds(sequence):
    '''return a sequence with the middle third, then last third,
    then the first third in the new order'''
    third = len(sequence) // 3
    return sequence[third:2*third] + sequence[2*third:] + sequence[0:third]
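The two least obvious recipes above can be sanity-checked inline. This sketch restates the slices directly rather than importing the functions, so it runs on its own:

```python
# Standalone check of the slicing patterns used above.
seq = list(range(12))
third = len(seq) // 3

# middle third, then last third, then first third -- the mix_thirds recipe
mixed = seq[third:2 * third] + seq[2 * third:] + seq[:third]
print(mixed)  # [4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3]

# trim 4 items off each end, then keep every other item -- the middle_skip recipe
print(seq[4:-4][::2])  # [4, 6]
```

Note that when the length is not divisible by 3, `seq[2 * third:]` quietly absorbs the remainder into the "last third".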
# -*- coding: utf-8 -*- """ celery.utils.mail ~~~~~~~~~~~~~~~~~ How task error emails are formatted and sent. :copyright: (c) 2009 - 2011 by Ask Solem. :license: BSD, see LICENSE for more details. """ from __future__ import absolute_import import sys import smtplib try: from email.mime.text import MIMEText except ImportError: from email.MIMEText import MIMEText # noqa from celery.utils import get_symbol_by_name supports_timeout = sys.version_info >= (2, 6) class SendmailWarning(UserWarning): """Problem happened while sending the email message.""" class Message(object): def __init__(self, to=None, sender=None, subject=None, body=None, charset="us-ascii"): self.to = to self.sender = sender self.subject = subject self.body = body self.charset = charset if not isinstance(self.to, (list, tuple)): self.to = [self.to] def __repr__(self): return "<Email: To:%r Subject:%r>" % (self.to, self.subject) def __str__(self): msg = MIMEText(self.body, "plain", self.charset) msg["Subject"] = self.subject msg["From"] = self.sender msg["To"] = ", ".join(self.to) return msg.as_string() class Mailer(object): def __init__(self, host="localhost", port=0, user=None, password=None, timeout=2, use_ssl=False, use_tls=False): self.host = host self.port = port self.user = user self.password = password self.timeout = timeout self.use_ssl = use_ssl self.use_tls = use_tls def send(self, message): if supports_timeout: self._send(message, timeout=self.timeout) else: import socket old_timeout = socket.getdefaulttimeout() socket.setdefaulttimeout(self.timeout) try: self._send(message) finally: socket.setdefaulttimeout(old_timeout) def _send(self, message, **kwargs): if (self.use_ssl): client = smtplib.SMTP_SSL(self.host, self.port, **kwargs) else: client = smtplib.SMTP(self.host, self.port, **kwargs) if self.use_tls: client.ehlo() client.starttls() client.ehlo() if self.user and self.password: client.login(self.user, self.password) client.sendmail(message.sender, message.to, str(message)) 
            client.quit()


class ErrorMail(object):
    """Defines how and when task error e-mails should be sent.

    :param task: The task instance that raised the error.

    :attr:`subject` and :attr:`body` are format strings which
    are passed a context containing the following keys:

    * name

        Name of the task.

    * id

        UUID of the task.

    * exc

        String representation of the exception.

    * args

        Positional arguments.

    * kwargs

        Keyword arguments.

    * traceback

        String representation of the traceback.

    * hostname

        Worker hostname.

    """

    # pep8.py borks on a inline signature separator and
    # says "trailing whitespace" ;)
    EMAIL_SIGNATURE_SEP = "-- "

    #: Format string used to generate error email subjects.
    subject = """\
[celery@%(hostname)s] Error: Task %(name)s (%(id)s): %(exc)s
"""

    #: Format string used to generate error email content.
    body = """
Task %%(name)s with id %%(id)s raised exception:\n%%(exc)r


Task was called with args: %%(args)s kwargs: %%(kwargs)s.

The contents of the full traceback was:

%%(traceback)s

%(EMAIL_SIGNATURE_SEP)s
Just to let you know,
celeryd at %%(hostname)s.
""" % {"EMAIL_SIGNATURE_SEP": EMAIL_SIGNATURE_SEP}

    error_whitelist = None

    def __init__(self, task, **kwargs):
        self.task = task
        # store overrides on the same attributes format_subject/format_body
        # read, so a "subject"/"body" kwarg actually takes effect
        self.subject = kwargs.get("subject", self.subject)
        self.body = kwargs.get("body", self.body)
        self.error_whitelist = getattr(task, "error_whitelist", None)

    def should_send(self, context, exc):
        """Returns true or false depending on if a task error mail
        should be sent for this type of error."""
        if not self.error_whitelist:
            return True
        allow_classes = tuple(map(get_symbol_by_name, self.error_whitelist))
        return isinstance(exc, allow_classes)

    def format_subject(self, context):
        return self.subject.strip() % context

    def format_body(self, context):
        return self.body.strip() % context

    def send(self, context, exc, fail_silently=True):
        if self.should_send(context, exc):
            self.task.app.mail_admins(self.format_subject(context),
                                      self.format_body(context),
                                      fail_silently=fail_silently)
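The `body` template above relies on two-pass `%` formatting: `%%` survives the class-level substitution as a literal `%`, leaving per-task placeholders for `format_body` to fill later. A minimal standalone sketch of that pattern (the names here are illustrative, not celery's):

```python
# Two-pass %-formatting, as used by ErrorMail.body above.
SEP = "-- "

# Pass one runs at class-definition time: %%(name)s becomes %(name)s,
# while %(SEP)s is substituted immediately.
template = "Task %%(name)s failed.\n%(SEP)s\nregards" % {"SEP": SEP}
print(template)

# Pass two runs per error, as format_body() does with the task context.
print(template % {"name": "add"})
```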
# -*- test-case-name: twisted.test.test_internet -*-
# $Id: default.py,v 1.90 2004/01/06 22:35:22 warner Exp $
#
# Copyright (c) 2001-2004 Twisted Matrix Laboratories.
# See LICENSE for details.


"""Select reactor

API Stability: stable

Maintainer: U{Itamar Shtull-Trauring<mailto:twisted@itamarst.org>}
"""

from time import sleep
import sys

from zope.interface import implements

from twisted.internet.interfaces import IReactorFDSet
from twisted.internet import error
from twisted.internet import posixbase
from twisted.python import log, components
from twisted.persisted import styles
from twisted.python.runtime import platformType

import select
from errno import EINTR, EBADF

# global state for selector
reads = {}
writes = {}


def win32select(r, w, e, timeout=None):
    """Win32 select wrapper."""
    if not (r or w):
        # windows select() exits immediately when no sockets
        if timeout is None:
            timeout = 0.01
        else:
            timeout = min(timeout, 0.001)
        sleep(timeout)
        return [], [], []
    # windows doesn't process 'signals' inside select(), so we set a max
    # time or ctrl-c will never be recognized
    if timeout is None or timeout > 0.5:
        timeout = 0.5
    r, w, e = select.select(r, w, w, timeout)
    return r, w + e, []

if platformType == "win32":
    _select = win32select
else:
    _select = select.select

# Exceptions that doSelect might return frequently
_NO_FILENO = error.ConnectionFdescWentAway('Handler has no fileno method')
_NO_FILEDESC = error.ConnectionFdescWentAway('Filedescriptor went away')


class SelectReactor(posixbase.PosixReactorBase):
    """A select() based reactor - runs on all POSIX platforms and on Win32.
    """
    implements(IReactorFDSet)

    def _preenDescriptors(self):
        log.msg("Malformed file descriptor found.  Preening lists.")
        readers = reads.keys()
        writers = writes.keys()
        reads.clear()
        writes.clear()
        for selDict, selList in ((reads, readers), (writes, writers)):
            for selectable in selList:
                try:
                    select.select([selectable], [selectable], [selectable], 0)
                except:
                    log.msg("bad descriptor %s" % selectable)
                else:
                    selDict[selectable] = 1

    def doSelect(self, timeout,
                 # Since this loop should really be as fast as possible,
                 # I'm caching these global attributes so the interpreter
                 # will hit them in the local namespace.
                 reads=reads,
                 writes=writes):
        """Run one iteration of the I/O monitor loop.

        This will run all selectables who had input or output readiness
        waiting for them.
        """
        while 1:
            try:
                r, w, ignored = _select(reads.keys(),
                                        writes.keys(),
                                        [], timeout)
                break
            except ValueError, ve:
                # Possibly a file descriptor has gone negative?
                log.err()
                self._preenDescriptors()
            except TypeError, te:
                # Something *totally* invalid (object w/o fileno, non-integral
                # result) was passed
                log.err()
                self._preenDescriptors()
            except (select.error, IOError), se:
                # select(2) encountered an error
                if se.args[0] in (0, 2):
                    # windows does this if it got an empty list
                    if (not reads) and (not writes):
                        return
                    else:
                        raise
                elif se.args[0] == EINTR:
                    return
                elif se.args[0] == EBADF:
                    self._preenDescriptors()
                else:
                    # OK, I really don't know what's going on.  Blow up.
                    raise
        _drdw = self._doReadOrWrite
        _logrun = log.callWithLogger
        for selectables, method, dict in ((r, "doRead", reads),
                                          (w, "doWrite", writes)):
            hkm = dict.has_key
            for selectable in selectables:
                # if this was disconnected in another thread, kill it.
                if not hkm(selectable):
                    continue
                # This for pausing input when we're not ready for more.
_logrun(selectable, _drdw, selectable, method, dict) doIteration = doSelect def _doReadOrWrite(self, selectable, method, dict): try: why = getattr(selectable, method)() handfn = getattr(selectable, 'fileno', None) if not handfn: why = _NO_FILENO elif handfn() == -1: why = _NO_FILEDESC except: why = sys.exc_info()[1] log.err() if why: self._disconnectSelectable(selectable, why, method=="doRead") def addReader(self, reader): """Add a FileDescriptor for notification of data available to read. """ reads[reader] = 1 def addWriter(self, writer): """Add a FileDescriptor for notification of data available to write. """ writes[writer] = 1 def removeReader(self, reader): """Remove a Selectable for notification of data available to read. """ if reads.has_key(reader): del reads[reader] def removeWriter(self, writer): """Remove a Selectable for notification of data available to write. """ if writes.has_key(writer): del writes[writer] def removeAll(self): return self._removeAll(reads, writes) components.backwardsCompatImplements(SelectReactor) def install(): """Configure the twisted mainloop to be run using the select() reactor. """ reactor = SelectReactor() from twisted.internet.main import installReactor installReactor(reactor) __all__ = ['install']
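The reactor above ultimately reduces to calling `select.select()` on its fd maps each iteration. A minimal modern-Python illustration of that readiness check, using a `socketpair` in place of the reactor plumbing (this is a standalone sketch, not Twisted code):

```python
import select
import socket

# A connected pair of sockets: writing to one end makes the other readable.
a, b = socket.socketpair()

readers = [b]  # analogous to the module-level `reads` map above
a.send(b"ping")

# select() returns the subset of watched descriptors that are ready;
# the reactor would then dispatch doRead() on each of them.
r, w, e = select.select(readers, [], [], 1.0)
assert r == [b]  # b became readable once data arrived
data = r[0].recv(4)
print(data)

a.close()
b.close()
```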
/* global QUnit, SelectFilter */ 'use strict'; QUnit.module('admin.SelectFilter2'); QUnit.test('init', function(assert) { const $ = django.jQuery; $('<form id="test"></form>').appendTo('#qunit-fixture'); $('<label for="id_id">Test</label>').appendTo('#test'); $('<div class="helptext">This is helpful.</div>').appendTo('#test'); $('<select id="id"><option value="0">A</option></select>').appendTo('#test'); SelectFilter.init('id', 'things', 0); assert.deepEqual( Array.from($('#test')[0].children).map(child => child.tagName), ["LABEL", "DIV", "DIV"] ); assert.equal($('.helptext')[0].nextSibling.getAttribute("class"), "selector"); assert.equal($('.selector-available label').text().trim(), "Available things"); assert.equal($('.selector-available label').attr("id"), "id_from_label"); assert.equal($('.selector-chosen label').text().trim(), "Chosen things"); assert.equal($('.selector-chosen label').attr("id"), "id_to_label"); assert.equal($('.selector-chosen select')[0].getAttribute('multiple'), ''); assert.equal($('.selector-chooseall').text(), "Choose all things"); assert.equal($('.selector-chooseall').prop("tagName"), "BUTTON"); assert.equal($('.selector-add').text(), "Choose selected things"); assert.equal($('.selector-add').prop("tagName"), "BUTTON"); assert.equal($('.selector-remove').text(), "Remove selected things"); assert.equal($('.selector-remove').prop("tagName"), "BUTTON"); assert.equal($('.selector-clearall').text(), "Remove all things"); assert.equal($('.selector-clearall').prop("tagName"), "BUTTON"); assert.equal($('.selector-available .filtered').attr("aria-labelledby"), "id_from_label"); assert.equal($('.selector-available .filtered').attr("aria-describedby"), "id_helptext id_choose_helptext"); assert.equal($('.selector-available .selector-available-title label').text(), "Available things "); assert.equal($('.selector-available .selector-available-title .helptext').text(), 'Choose things by selecting them and then select the "Choose" arrow button.'); 
assert.equal($('.selector-chosen .filtered').attr("aria-labelledby"), "id_to_label"); assert.equal($('.selector-chosen .filtered').attr("aria-describedby"), "id_helptext id_remove_helptext"); assert.equal($('.selector-chosen .selector-chosen-title label').text(), "Chosen things "); assert.equal($('.selector-chosen .selector-chosen-title .helptext').text(), 'Remove things by selecting them and then select the "Remove" arrow button.'); assert.equal($('.selector-filter label .help-tooltip')[0].getAttribute("aria-label"), "Type into this box to filter down the list of available things."); assert.equal($('.selector-filter label .help-tooltip')[1].getAttribute("aria-label"), "Type into this box to filter down the list of selected things."); assert.equal($('#test button:not([type="button"])').length, 0); }); QUnit.test('filtering available options', function(assert) { const $ = django.jQuery; $('<form><select multiple id="select"></select></form>').appendTo('#qunit-fixture'); $('<option value="1" title="Red">Red</option>').appendTo('#select'); $('<option value="2" title="Blue">Blue</option>').appendTo('#select'); $('<option value="3" title="Green">Green</option>').appendTo('#select'); SelectFilter.init('select', 'items', 0); assert.equal($('#select_from option').length, 3); assert.equal($('#select_to option').length, 0); const done = assert.async(); const search_term = 'r'; const event = new KeyboardEvent('keyup', {'key': search_term}); $('#select_input').val(search_term); SelectFilter.filter_key_up(event, 'select', '_from'); setTimeout(() => { assert.equal($('#select_from option').length, 2); assert.equal($('#select_to option').length, 0); assert.equal($('#select_from option')[0].value, '1'); assert.equal($('#select_from option')[1].value, '3'); done(); }); }); QUnit.test('filtering selected options', function(assert) { const $ = django.jQuery; $('<form><select multiple id="select"></select></form>').appendTo('#qunit-fixture'); $('<option selected value="1" 
title="Red">Red</option>').appendTo('#select'); $('<option selected value="2" title="Blue">Blue</option>').appendTo('#select'); $('<option selected value="3" title="Green">Green</option>').appendTo('#select'); SelectFilter.init('select', 'items', 0); assert.equal($('#select_from option').length, 0); assert.equal($('#select_to option').length, 3); const done = assert.async(); const search_term = 'r'; const event = new KeyboardEvent('keyup', {'key': search_term}); $('#select_selected_input').val(search_term); SelectFilter.filter_key_up(event, 'select', '_to', '_selected_input'); setTimeout(() => { assert.equal($('#select_from option').length, 0); assert.equal($('#select_to option').length, 2); assert.equal($('#select_to option')[0].value, '1'); assert.equal($('#select_to option')[1].value, '3'); done(); }); }); QUnit.test('filtering available options to nothing', function(assert) { const $ = django.jQuery; $('<form><select multiple id="select"></select></form>').appendTo('#qunit-fixture'); $('<option value="1" title="Red">Red</option>').appendTo('#select'); $('<option value="2" title="Blue">Blue</option>').appendTo('#select'); $('<option value="3" title="Green">Green</option>').appendTo('#select'); SelectFilter.init('select', 'items', 0); assert.equal($('#select_from option').length, 3); assert.equal($('#select_to option').length, 0); const done = assert.async(); const search_term = 'x'; const event = new KeyboardEvent('keyup', {'key': search_term}); $('#select_input').val(search_term); SelectFilter.filter_key_up(event, 'select', '_from'); setTimeout(() => { assert.equal($('#select_from option').length, 0); assert.equal($('#select_to option').length, 0); done(); }); }); QUnit.test('filtering selected options to nothing', function(assert) { const $ = django.jQuery; $('<form><select multiple id="select"></select></form>').appendTo('#qunit-fixture'); $('<option selected value="1" title="Red">Red</option>').appendTo('#select'); $('<option selected value="2" 
title="Blue">Blue</option>').appendTo('#select'); $('<option selected value="3" title="Green">Green</option>').appendTo('#select'); SelectFilter.init('select', 'items', 0); assert.equal($('#select_from option').length, 0); assert.equal($('#select_to option').length, 3); const done = assert.async(); const search_term = 'x'; const event = new KeyboardEvent('keyup', {'key': search_term}); $('#select_selected_input').val(search_term); SelectFilter.filter_key_up(event, 'select', '_to', '_selected_input'); setTimeout(() => { assert.equal($('#select_from option').length, 0); assert.equal($('#select_to option').length, 0); done(); }); }); QUnit.test('selecting option', function(assert) { const $ = django.jQuery; $('<form><select multiple id="select"></select></form>').appendTo('#qunit-fixture'); $('<option value="1" title="Red">Red</option>').appendTo('#select'); $('<option value="2" title="Blue">Blue</option>').appendTo('#select'); $('<option value="3" title="Green">Green</option>').appendTo('#select'); SelectFilter.init('select', 'items', 0); assert.equal($('#select_from option').length, 3); assert.equal($('#select_to option').length, 0); // move to the right const done = assert.async(); $('#select_from')[0].selectedIndex = 0; const event = new KeyboardEvent('keydown', {'keyCode': 39, 'charCode': 39}); SelectFilter.filter_key_down(event, 'select', '_from', '_to'); setTimeout(() => { assert.equal($('#select_from option').length, 2); assert.equal($('#select_to option').length, 1); assert.equal($('#select_to option')[0].value, '1'); done(); }); }); QUnit.test('deselecting option', function(assert) { const $ = django.jQuery; $('<form><select multiple id="select"></select></form>').appendTo('#qunit-fixture'); $('<option selected value="1" title="Red">Red</option>').appendTo('#select'); $('<option value="2" title="Blue">Blue</option>').appendTo('#select'); $('<option value="3" title="Green">Green</option>').appendTo('#select'); SelectFilter.init('select', 'items', 0); 
assert.equal($('#select_from option').length, 2); assert.equal($('#select_to option').length, 1); assert.equal($('#select_to option')[0].value, '1'); // move back to the left const done_left = assert.async(); $('#select_to')[0].selectedIndex = 0; const event_left = new KeyboardEvent('keydown', {'keyCode': 37, 'charCode': 37}); SelectFilter.filter_key_down(event_left, 'select', '_to', '_from'); setTimeout(() => { assert.equal($('#select_from option').length, 3); assert.equal($('#select_to option').length, 0); done_left(); }); });
javascript
github
https://github.com/django/django
js_tests/admin/SelectFilter2.test.js
""" Scheduler queues """ from __future__ import annotations import marshal import pickle from pathlib import Path from typing import TYPE_CHECKING, Any from queuelib import queue from scrapy.utils.request import request_from_dict if TYPE_CHECKING: from collections.abc import Callable from os import PathLike # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy import Request from scrapy.crawler import Crawler def _with_mkdir(queue_class: type[queue.BaseQueue]) -> type[queue.BaseQueue]: class DirectoriesCreated(queue_class): # type: ignore[valid-type,misc] def __init__(self, path: str | PathLike, *args: Any, **kwargs: Any): dirname = Path(path).parent if not dirname.exists(): dirname.mkdir(parents=True, exist_ok=True) super().__init__(path, *args, **kwargs) return DirectoriesCreated def _serializable_queue( queue_class: type[queue.BaseQueue], serialize: Callable[[Any], bytes], deserialize: Callable[[bytes], Any], ) -> type[queue.BaseQueue]: class SerializableQueue(queue_class): # type: ignore[valid-type,misc] def push(self, obj: Any) -> None: s = serialize(obj) super().push(s) def pop(self) -> Any | None: s = super().pop() if s: return deserialize(s) return None def peek(self) -> Any | None: """Returns the next object to be returned by :meth:`pop`, but without removing it from the queue. Raises :exc:`NotImplementedError` if the underlying queue class does not implement a ``peek`` method, which is optional for queues. 
""" try: s = super().peek() except AttributeError as ex: raise NotImplementedError( "The underlying queue class does not implement 'peek'" ) from ex if s: return deserialize(s) return None return SerializableQueue def _scrapy_serialization_queue( queue_class: type[queue.BaseQueue], ) -> type[queue.BaseQueue]: class ScrapyRequestQueue(queue_class): # type: ignore[valid-type,misc] def __init__(self, crawler: Crawler, key: str): self.spider = crawler.spider super().__init__(key) @classmethod def from_crawler( cls, crawler: Crawler, key: str, *args: Any, **kwargs: Any ) -> Self: return cls(crawler, key) def push(self, request: Request) -> None: request_dict = request.to_dict(spider=self.spider) super().push(request_dict) def pop(self) -> Request | None: request = super().pop() if not request: return None return request_from_dict(request, spider=self.spider) def peek(self) -> Request | None: """Returns the next object to be returned by :meth:`pop`, but without removing it from the queue. Raises :exc:`NotImplementedError` if the underlying queue class does not implement a ``peek`` method, which is optional for queues. """ request = super().peek() if not request: return None return request_from_dict(request, spider=self.spider) return ScrapyRequestQueue def _scrapy_non_serialization_queue( queue_class: type[queue.BaseQueue], ) -> type[queue.BaseQueue]: class ScrapyRequestQueue(queue_class): # type: ignore[valid-type,misc] @classmethod def from_crawler(cls, crawler: Crawler, *args: Any, **kwargs: Any) -> Self: return cls() def peek(self) -> Any | None: """Returns the next object to be returned by :meth:`pop`, but without removing it from the queue. Raises :exc:`NotImplementedError` if the underlying queue class does not implement a ``peek`` method, which is optional for queues. 
""" try: s = super().peek() except AttributeError as ex: raise NotImplementedError( "The underlying queue class does not implement 'peek'" ) from ex return s return ScrapyRequestQueue def _pickle_serialize(obj: Any) -> bytes: try: return pickle.dumps(obj, protocol=4) # Both pickle.PicklingError and AttributeError can be raised by pickle.dump(s) # TypeError is raised from parsel.Selector except (pickle.PicklingError, AttributeError, TypeError) as e: raise ValueError(str(e)) from e # queue.*Queue aren't subclasses of queue.BaseQueue _PickleFifoSerializationDiskQueue = _serializable_queue( _with_mkdir(queue.FifoDiskQueue), # type: ignore[arg-type] _pickle_serialize, pickle.loads, ) _PickleLifoSerializationDiskQueue = _serializable_queue( _with_mkdir(queue.LifoDiskQueue), # type: ignore[arg-type] _pickle_serialize, pickle.loads, ) _MarshalFifoSerializationDiskQueue = _serializable_queue( _with_mkdir(queue.FifoDiskQueue), # type: ignore[arg-type] marshal.dumps, marshal.loads, ) _MarshalLifoSerializationDiskQueue = _serializable_queue( _with_mkdir(queue.LifoDiskQueue), # type: ignore[arg-type] marshal.dumps, marshal.loads, ) # public queue classes PickleFifoDiskQueue = _scrapy_serialization_queue(_PickleFifoSerializationDiskQueue) PickleLifoDiskQueue = _scrapy_serialization_queue(_PickleLifoSerializationDiskQueue) MarshalFifoDiskQueue = _scrapy_serialization_queue(_MarshalFifoSerializationDiskQueue) MarshalLifoDiskQueue = _scrapy_serialization_queue(_MarshalLifoSerializationDiskQueue) FifoMemoryQueue = _scrapy_non_serialization_queue(queue.FifoMemoryQueue) # type: ignore[arg-type] LifoMemoryQueue = _scrapy_non_serialization_queue(queue.LifoMemoryQueue) # type: ignore[arg-type]
python
github
https://github.com/scrapy/scrapy
scrapy/squeues.py
# -*- coding: utf-8 -*-
"""
    unit test for streaming interface
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: (c) 2009 by the Jinja Team.
    :license: BSD, see LICENSE for more details.
"""
from jinja2 import Environment

env = Environment()


def test_basic_streaming():
    r"""
    >>> tmpl = env.from_string("<ul>{% for item in seq %}<li>{{ loop.index "
    ...                        "}} - {{ item }}</li>{%- endfor %}</ul>")
    >>> stream = tmpl.stream(seq=range(4))
    >>> stream.next()
    u'<ul>'
    >>> stream.next()
    u'<li>1 - 0</li>'
    >>> stream.next()
    u'<li>2 - 1</li>'
    >>> stream.next()
    u'<li>3 - 2</li>'
    >>> stream.next()
    u'<li>4 - 3</li>'
    >>> stream.next()
    u'</ul>'
    """


def test_buffered_streaming():
    r"""
    >>> tmpl = env.from_string("<ul>{% for item in seq %}<li>{{ loop.index "
    ...                        "}} - {{ item }}</li>{%- endfor %}</ul>")
    >>> stream = tmpl.stream(seq=range(4))
    >>> stream.enable_buffering(size=3)
    >>> stream.next()
    u'<ul><li>1 - 0</li><li>2 - 1</li>'
    >>> stream.next()
    u'<li>3 - 2</li><li>4 - 3</li></ul>'
    """


def test_streaming_behavior():
    r"""
    >>> tmpl = env.from_string("")
    >>> stream = tmpl.stream()
    >>> stream.buffered
    False
    >>> stream.enable_buffering(20)
    >>> stream.buffered
    True
    >>> stream.disable_buffering()
    >>> stream.buffered
    False
    """
python
codeparrot/codeparrot-clean
# Accelerator Integration

Since PyTorch 2.1, the community has made significant progress in streamlining the process of integrating new accelerators into the PyTorch ecosystem. These improvements include, but are not limited to: refinements to the `PrivateUse1` Dispatch Key, the introduction and enhancement of core subsystem extension mechanisms, and the device-agnostic refactoring of key modules (e.g., `torch.accelerator`, memory management). Taken together, these advances provide the foundation for a **robust**, **flexible**, and **developer-friendly** pathway for accelerator integration.

```{note}
This guide is a work in progress. For more details, please refer to the [roadmap](https://github.com/pytorch/pytorch/issues/158917).
```

## Why Does This Matter?

This integration pathway offers several major benefits:

* **Speed**: Extensibility is built into all core PyTorch modules. Developers can integrate new accelerators into their downstream codebases independently, without modifying upstream code and without being limited by community review bandwidth.
* **Future-proofing**: This is the default integration path for all future PyTorch features; as new modules and features are added, they will automatically support scaling to new accelerators if this path is followed.
* **Autonomy**: Vendors maintain full control over their accelerator integration timelines, enabling fast iteration cycles and reducing reliance on upstream coordination.

## Target Audience

This document is intended for:

* **Accelerator Developers** who are integrating accelerators into PyTorch;
* **Advanced PyTorch Users** interested in the inner workings of key modules.

## About This Document

This guide aims to provide a **comprehensive overview of the modern integration pathway** for new accelerators in PyTorch. It walks through the full integration surface, from low-level device primitives to higher-level domain modules like compilation and quantization.
The structure follows a **modular and scenario-driven approach**, where each topic is paired with corresponding code examples from [torch_openreg][OpenReg URL], an official reference implementation. The series is structured around four major axes:

* **Runtime**: Covers core components such as Event, Stream, Memory, Generator, Guard, and Hooks, as well as the supporting C++ scaffolding.
* **Operators**: Covers the minimum necessary set of operators (forward and backward operators, fallback operators, fallthroughs, STUBs, etc.) in both C++ and Python implementations.
* **Python Frontend**: Focuses on Python bindings for modules and device-agnostic APIs.
* **High-level Modules**: Explores integration with major subsystems such as `AMP`, `Compiler`, `ONNX`, and `Distributed`.

The goal is to help developers:

* Understand the full scope of accelerator integration;
* Follow best practices to quickly launch new accelerators;
* Avoid common pitfalls through clear, targeted examples.

Next, we will delve into each chapter of this guide. Each chapter focuses on a key aspect of integration, providing detailed explanations and illustrative examples. Since some chapters build upon previous ones, readers are encouraged to follow the sequence for a more coherent understanding.

```{toctree}
:glob:
:maxdepth: 1

device
hooks
guard
autoload
operators
amp
profiler
```

[OpenReg URL]: https://github.com/pytorch/pytorch/tree/main/test/cpp_extensions/open_registration_extension/torch_openreg "OpenReg URL"
markdown
github
https://github.com/pytorch/pytorch
docs/source/accelerator/index.md
// Copyright The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package marathon import ( "context" "errors" "io" "net/http" "net/http/httptest" "testing" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/common/model" "github.com/stretchr/testify/require" "github.com/prometheus/prometheus/discovery" "github.com/prometheus/prometheus/discovery/targetgroup" ) var ( marathonValidLabel = map[string]string{"prometheus": "yes"} testServers = []string{"http://localhost:8080"} ) func testConfig() SDConfig { return SDConfig{Servers: testServers} } func testUpdateServices(client appListClient) ([]*targetgroup.Group, error) { cfg := testConfig() reg := prometheus.NewRegistry() refreshMetrics := discovery.NewRefreshMetrics(reg) metrics := cfg.NewDiscovererMetrics(reg, refreshMetrics) err := metrics.Register() if err != nil { return nil, err } defer metrics.Unregister() defer refreshMetrics.Unregister() md, err := NewDiscovery(cfg, discovery.DiscovererOptions{ Logger: nil, Metrics: metrics, SetName: "marathon", }) if err != nil { return nil, err } if client != nil { md.appsClient = client } return md.refresh(context.Background()) } func TestMarathonSDHandleError(t *testing.T) { var ( errTesting = errors.New("testing failure") client = func(context.Context, *http.Client, string) (*appList, error) { return nil, errTesting } ) tgs, err := testUpdateServices(client) require.ErrorIs(t, err, errTesting) require.Empty(t, tgs, "Expected no target 
groups.") } func TestMarathonSDEmptyList(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return &appList{}, nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Empty(t, tgs, "Expected no target groups.") } func marathonTestAppList(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: "test-task-1", Host: "mesos-slave1", } docker = dockerContainer{ Image: "repo/image:tag", } portMappings = []portMapping{ {Labels: labels, HostPort: 31000}, } container = container{Docker: docker, PortMappings: portMappings} a = app{ ID: "test-service", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, } ) return &appList{ Apps: []app{a}, } } func TestMarathonSDSendGroup(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppList(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service", tg.Source, "Wrong target group name.") require.Len(t, tg.Targets, 1, "Expected 1 target.") tgt := tg.Targets[0] require.Equal(t, "mesos-slave1:31000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Equal(t, "yes", string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the first port.") } func TestMarathonSDRemoveApp(t *testing.T) { cfg := testConfig() reg := prometheus.NewRegistry() refreshMetrics := discovery.NewRefreshMetrics(reg) metrics := cfg.NewDiscovererMetrics(reg, refreshMetrics) require.NoError(t, metrics.Register()) defer metrics.Unregister() defer refreshMetrics.Unregister() md, err := NewDiscovery(cfg, discovery.DiscovererOptions{ Logger: nil, Metrics: metrics, SetName: "marathon", }) require.NoError(t, err) md.appsClient = func(context.Context, *http.Client, string) (*appList, error) { return 
marathonTestAppList(marathonValidLabel, 1), nil } tgs, err := md.refresh(context.Background()) require.NoError(t, err, "Got error on first update.") require.Len(t, tgs, 1, "Expected 1 targetgroup.") tg1 := tgs[0] md.appsClient = func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppList(marathonValidLabel, 0), nil } tgs, err = md.refresh(context.Background()) require.NoError(t, err, "Got error on second update.") require.Len(t, tgs, 1, "Expected 1 targetgroup.") tg2 := tgs[0] require.NotEmpty(t, tg2.Targets, "Got a non-empty target set.") require.Equal(t, tg1.Source, tg2.Source, "Source is different.") } func marathonTestAppListWithMultiplePorts(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: "test-task-1", Host: "mesos-slave1", } docker = dockerContainer{ Image: "repo/image:tag", } portMappings = []portMapping{ {Labels: labels, HostPort: 31000}, {Labels: make(map[string]string), HostPort: 32000}, } container = container{Docker: docker, PortMappings: portMappings} a = app{ ID: "test-service", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, } ) return &appList{ Apps: []app{a}, } } func TestMarathonSDSendGroupWithMultiplePort(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppListWithMultiplePorts(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service", tg.Source, "Wrong target group name.") require.Len(t, tg.Targets, 2, "Wrong number of targets.") tgt := tg.Targets[0] require.Equal(t, "mesos-slave1:31000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Equal(t, "yes", string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the first port: %s", tgt[model.AddressLabel]) tgt = tg.Targets[1] require.Equal(t, 
"mesos-slave1:32000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the second port: %s", tgt[model.AddressLabel]) } func marathonTestZeroTaskPortAppList(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: "test-task-2", Host: "mesos-slave-2", Ports: []uint32{}, } docker = dockerContainer{Image: "repo/image:tag"} container = container{Docker: docker} a = app{ ID: "test-service-zero-ports", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, } ) return &appList{ Apps: []app{a}, } } func TestMarathonZeroTaskPorts(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestZeroTaskPortAppList(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service-zero-ports", tg.Source, "Wrong target group name.") require.Empty(t, tg.Targets, "Wrong number of targets.") } func Test500ErrorHttpResponseWithValidJSONBody(t *testing.T) { // Simulate 500 error with a valid JSON response. respHandler := func(w http.ResponseWriter, _ *http.Request) { w.WriteHeader(http.StatusInternalServerError) w.Header().Set("Content-Type", "application/json") io.WriteString(w, `{}`) } // Create a test server with mock HTTP handler. ts := httptest.NewServer(http.HandlerFunc(respHandler)) defer ts.Close() // Execute test case and validate behavior. 
_, err := testUpdateServices(nil) require.Error(t, err, "Expected error for 5xx HTTP response from marathon server.") } func marathonTestAppListWithPortDefinitions(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: "test-task-1", Host: "mesos-slave1", // Auto-generated ports when requirePorts is false Ports: []uint32{1234, 5678}, } docker = dockerContainer{ Image: "repo/image:tag", } container = container{Docker: docker} a = app{ ID: "test-service", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, PortDefinitions: []portDefinition{ {Labels: make(map[string]string), Port: 31000}, {Labels: labels, Port: 32000}, }, RequirePorts: false, // default } ) return &appList{ Apps: []app{a}, } } func TestMarathonSDSendGroupWithPortDefinitions(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppListWithPortDefinitions(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service", tg.Source, "Wrong target group name.") require.Len(t, tg.Targets, 2, "Wrong number of targets.") tgt := tg.Targets[0] require.Equal(t, "mesos-slave1:1234", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the first port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the first port.") tgt = tg.Targets[1] require.Equal(t, "mesos-slave1:5678", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, tgt[model.LabelName(portMappingLabelPrefix+"prometheus")], "Wrong portMappings label from the second port.") require.Equal(t, "yes", string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the second 
port.") } func marathonTestAppListWithPortDefinitionsRequirePorts(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: "test-task-1", Host: "mesos-slave1", Ports: []uint32{31000, 32000}, } docker = dockerContainer{ Image: "repo/image:tag", } container = container{Docker: docker} a = app{ ID: "test-service", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, PortDefinitions: []portDefinition{ {Labels: make(map[string]string), Port: 31000}, {Labels: labels, Port: 32000}, }, RequirePorts: true, } ) return &appList{ Apps: []app{a}, } } func TestMarathonSDSendGroupWithPortDefinitionsRequirePorts(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppListWithPortDefinitionsRequirePorts(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service", tg.Source, "Wrong target group name.") require.Len(t, tg.Targets, 2, "Wrong number of targets.") tgt := tg.Targets[0] require.Equal(t, "mesos-slave1:31000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the first port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the first port.") tgt = tg.Targets[1] require.Equal(t, "mesos-slave1:32000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the second port.") require.Equal(t, "yes", string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the second port.") } func marathonTestAppListWithPorts(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: 
"test-task-1", Host: "mesos-slave1", Ports: []uint32{31000, 32000}, } docker = dockerContainer{ Image: "repo/image:tag", } container = container{Docker: docker} a = app{ ID: "test-service", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, } ) return &appList{ Apps: []app{a}, } } func TestMarathonSDSendGroupWithPorts(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppListWithPorts(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service", tg.Source, "Wrong target group name.") require.Len(t, tg.Targets, 2, "Wrong number of targets.") tgt := tg.Targets[0] require.Equal(t, "mesos-slave1:31000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the first port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the first port.") tgt = tg.Targets[1] require.Equal(t, "mesos-slave1:32000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the second port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the second port.") } func marathonTestAppListWithContainerPortMappings(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: "test-task-1", Host: "mesos-slave1", Ports: []uint32{ 12345, // 'Automatically-generated' port 32000, }, } docker = dockerContainer{ Image: "repo/image:tag", } container = container{ Docker: docker, PortMappings: []portMapping{ {Labels: labels, HostPort: 0}, {Labels: make(map[string]string), HostPort: 32000}, }, } a = app{ 
ID: "test-service", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, } ) return &appList{ Apps: []app{a}, } } func TestMarathonSDSendGroupWithContainerPortMappings(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppListWithContainerPortMappings(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service", tg.Source, "Wrong target group name.") require.Len(t, tg.Targets, 2, "Wrong number of targets.") tgt := tg.Targets[0] require.Equal(t, "mesos-slave1:12345", string(tgt[model.AddressLabel]), "Wrong target address.") require.Equal(t, "yes", string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the first port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the first port.") tgt = tg.Targets[1] require.Equal(t, "mesos-slave1:32000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the second port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the second port.") } func marathonTestAppListWithDockerContainerPortMappings(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: "test-task-1", Host: "mesos-slave1", Ports: []uint32{ 31000, 12345, // 'Automatically-generated' port }, } docker = dockerContainer{ Image: "repo/image:tag", PortMappings: []portMapping{ {Labels: labels, HostPort: 31000}, {Labels: make(map[string]string), HostPort: 0}, }, } container = container{ Docker: docker, } a = app{ ID: "test-service", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, } ) return &appList{ 
Apps: []app{a}, } } func TestMarathonSDSendGroupWithDockerContainerPortMappings(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppListWithDockerContainerPortMappings(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service", tg.Source, "Wrong target group name.") require.Len(t, tg.Targets, 2, "Wrong number of targets.") tgt := tg.Targets[0] require.Equal(t, "mesos-slave1:31000", string(tgt[model.AddressLabel]), "Wrong target address.") require.Equal(t, "yes", string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the first port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the first port.") tgt = tg.Targets[1] require.Equal(t, "mesos-slave1:12345", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the second port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the second port.") } func marathonTestAppListWithContainerNetworkAndPortMappings(labels map[string]string, runningTasks int) *appList { var ( t = task{ ID: "test-task-1", Host: "mesos-slave1", IPAddresses: []ipAddress{ {Address: "1.2.3.4"}, }, } docker = dockerContainer{ Image: "repo/image:tag", } portMappings = []portMapping{ {Labels: labels, ContainerPort: 8080, HostPort: 31000}, {Labels: make(map[string]string), ContainerPort: 1234, HostPort: 32000}, } container = container{ Docker: docker, PortMappings: portMappings, } networks = []network{ {Mode: "container", Name: "test-network"}, } a = app{ ID: "test-service", Tasks: []task{t}, RunningTasks: runningTasks, Labels: labels, Container: container, 
Networks: networks, } ) return &appList{ Apps: []app{a}, } } func TestMarathonSDSendGroupWithContainerNetworkAndPortMapping(t *testing.T) { client := func(context.Context, *http.Client, string) (*appList, error) { return marathonTestAppListWithContainerNetworkAndPortMappings(marathonValidLabel, 1), nil } tgs, err := testUpdateServices(client) require.NoError(t, err) require.Len(t, tgs, 1, "Expected 1 target group.") tg := tgs[0] require.Equal(t, "test-service", tg.Source, "Wrong target group name.") require.Len(t, tg.Targets, 2, "Wrong number of targets.") tgt := tg.Targets[0] require.Equal(t, "1.2.3.4:8080", string(tgt[model.AddressLabel]), "Wrong target address.") require.Equal(t, "yes", string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the first port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the first port.") tgt = tg.Targets[1] require.Equal(t, "1.2.3.4:1234", string(tgt[model.AddressLabel]), "Wrong target address.") require.Empty(t, string(tgt[model.LabelName(portMappingLabelPrefix+"prometheus")]), "Wrong portMappings label from the second port.") require.Empty(t, string(tgt[model.LabelName(portDefinitionLabelPrefix+"prometheus")]), "Wrong portDefinitions label from the second port.") }
go
github
https://github.com/prometheus/prometheus
discovery/marathon/marathon_test.go
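The Marathon tests above check how port mappings and their labels become scrape targets. A minimal Python sketch of the same address-and-label construction rule, under the assumption that each mapping contributes one `host:host_port` target and that mapping labels are copied under a meta-label prefix (the prefix name here mirrors the Go constant and is an assumption):

```python
# Sketch of building scrape targets from Marathon-style port mappings.
# The label prefix mirrors the Go portMappingLabelPrefix constant and is
# an assumption for illustration, not the library API.

def build_targets(host, port_mappings):
    """Return (address, labels) pairs, one per port mapping."""
    port_mapping_label_prefix = "__meta_marathon_port_mapping_label_"
    targets = []
    for mapping in port_mappings:
        # One target per mapping, addressed by the task host and host port.
        address = "%s:%d" % (host, mapping["host_port"])
        labels = {
            port_mapping_label_prefix + k: v
            for k, v in mapping.get("labels", {}).items()
        }
        targets.append((address, labels))
    return targets
```

This matches the shape the tests assert: the first mapping carries its `prometheus: yes` label under the prefix, while a label-less mapping yields an empty label set.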
# !/usr/bin/env python # -*- coding: utf-8 -*- # vispy: gallery 2 """ Simple example plotting 2D points. """ from vispy import gloo from vispy import app import numpy as np VERT_SHADER = """ attribute vec2 a_position; attribute vec3 a_color; attribute float a_size; varying vec4 v_fg_color; varying vec4 v_bg_color; varying float v_radius; varying float v_linewidth; varying float v_antialias; void main (void) { v_radius = a_size; v_linewidth = 1.0; v_antialias = 1.0; v_fg_color = vec4(0.0,0.0,0.0,0.5); v_bg_color = vec4(a_color, 1.0); gl_Position = vec4(a_position, 0.0, 1.0); gl_PointSize = 2.0*(v_radius + v_linewidth + 1.5*v_antialias); } """ FRAG_SHADER = """ #version 120 varying vec4 v_fg_color; varying vec4 v_bg_color; varying float v_radius; varying float v_linewidth; varying float v_antialias; void main() { float size = 2.0*(v_radius + v_linewidth + 1.5*v_antialias); float t = v_linewidth/2.0-v_antialias; float r = length((gl_PointCoord.xy - vec2(0.5,0.5))*size); float d = abs(r - v_radius) - t; if( d < 0.0 ) gl_FragColor = v_fg_color; else { float alpha = d/v_antialias; alpha = exp(-alpha*alpha); if (r > v_radius) gl_FragColor = vec4(v_fg_color.rgb, alpha*v_fg_color.a); else gl_FragColor = mix(v_bg_color, v_fg_color, alpha); } } """ class Canvas(app.Canvas): def __init__(self): app.Canvas.__init__(self, keys='interactive') ps = self.pixel_scale # Create vertices n = 10000 v_position = 0.25 * np.random.randn(n, 2).astype(np.float32) v_color = np.random.uniform(0, 1, (n, 3)).astype(np.float32) v_size = np.random.uniform(2*ps, 12*ps, (n, 1)).astype(np.float32) self.program = gloo.Program(VERT_SHADER, FRAG_SHADER) # Set uniform and attribute self.program['a_color'] = gloo.VertexBuffer(v_color) self.program['a_position'] = gloo.VertexBuffer(v_position) self.program['a_size'] = gloo.VertexBuffer(v_size) gloo.set_state(clear_color='white', blend=True, blend_func=('src_alpha', 'one_minus_src_alpha')) self.show() def on_resize(self, event): gloo.set_viewport(0, 0, 
*event.physical_size) def on_draw(self, event): gloo.clear(color=True, depth=True) self.program.draw('points') if __name__ == '__main__': c = Canvas() app.run()
unknown
codeparrot/codeparrot-clean
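The vertex shader in the vispy example sizes each point from its radius plus the line width and antialias margin. A one-line Python restatement of that formula (default `linewidth` and `antialias` values taken from the shader):

```python
# Point-size rule from the vertex shader above:
# gl_PointSize = 2 * (radius + linewidth + 1.5 * antialias)
def point_size(radius, linewidth=1.0, antialias=1.0):
    return 2.0 * (radius + linewidth + 1.5 * antialias)
```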
import * as ts from "../../_namespaces/ts.js"; import { dedent } from "../../_namespaces/Utils.js"; import { baselineTsserverLogs, closeFilesForSession, openFilesForSession, setCompilerOptionsForInferredProjectsRequestForSession, TestSession, } from "../helpers/tsserver.js"; import { File, TestServerHost, } from "../helpers/virtualFileSystemWithWatch.js"; describe("unittests:: tsserver:: maxNodeModuleJsDepth:: for inferred projects", () => { it("should be set to 2 if the project has js root files", () => { const file1: File = { path: "/home/src/projects/project/file1.js", content: `var t = require("test"); t.`, }; const moduleFile: File = { path: "/home/src/projects/project/node_modules/test/index.js", content: `var v = 10; module.exports = v;`, }; const host = TestServerHost.createServerHost([file1, moduleFile]); const session = new TestSession(host); openFilesForSession([file1], session); session.logger.log(`maxNodeModuleJsDepth: ${session.getProjectService().inferredProjects[0].getCompilationSettings().maxNodeModuleJsDepth}`); // Assert the option sticks setCompilerOptionsForInferredProjectsRequestForSession({ target: ts.server.protocol.ScriptTarget.ES2016 }, session); session.logger.log(`maxNodeModuleJsDepth: ${session.getProjectService().inferredProjects[0].getCompilationSettings().maxNodeModuleJsDepth}`); baselineTsserverLogs("maxNodeModuleJsDepth", "should be set to 2 if the project has js root files", session); }); it("should return to normal state when all js root files are removed from project", () => { const file1 = { path: "/home/src/projects/project/file1.ts", content: "let x =1;", }; const file2 = { path: "/home/src/projects/project/file2.js", content: "let x =1;", }; const host = TestServerHost.createServerHost([file1, file2]); const session = new TestSession({ host, useSingleInferredProject: true }); openFilesForSession([file1], session); session.logger.log(`maxNodeModuleJsDepth: 
${session.getProjectService().inferredProjects[0].getCompilationSettings().maxNodeModuleJsDepth}`); openFilesForSession([file2], session); session.logger.log(`maxNodeModuleJsDepth: ${session.getProjectService().inferredProjects[0].getCompilationSettings().maxNodeModuleJsDepth}`); closeFilesForSession([file2], session); session.logger.log(`maxNodeModuleJsDepth: ${session.getProjectService().inferredProjects[0].getCompilationSettings().maxNodeModuleJsDepth}`); baselineTsserverLogs("maxNodeModuleJsDepth", "should return to normal state when all js root files are removed from project", session); }); it("handles resolutions when currentNodeModulesDepth changes when referencing file from another file", () => { const host = TestServerHost.createServerHost({ "/user/username/projects/project1/src/file1.js": dedent` import {x} from 'glob'; import {y} from 'minimatch'; // This imported file will add imports from minimatch to program `, "/user/username/projects/project1/src/node_modules/glob/index.js": dedent` import { y } from "minimatch"; // This import is will put minimatch at maxNodeModuleJsDepth so its imports are not added to program export const x = y; `, "/user/username/projects/project1/src/node_modules/minimatch/index.js": dedent` import { z } from "path"; // This will be resolved two times export const y = z; `, "/user/username/projects/project1/src/node_modules/path/index.js": dedent` export const z = 10; `, }); const session = new TestSession({ host, useSingleInferredProject: true }); openFilesForSession(["/user/username/projects/project1/src/file1.js"], session); baselineTsserverLogs("maxNodeModuleJsDepth", "handles resolutions when currentNodeModulesDepth changes when referencing file from another file", session); }); });
typescript
github
https://github.com/microsoft/TypeScript
src/testRunner/unittests/tsserver/maxNodeModuleJsDepth.ts
# -*- coding: utf-8 -*- # Part of Odoo. See LICENSE file for full copyright and licensing details. import time from openerp.osv import osv from openerp.report import report_sxw def titlize(journal_name): words = journal_name.split() while words.pop() != 'journal': continue return ' '.join(words) class order(report_sxw.rml_parse): def __init__(self, cr, uid, name, context): super(order, self).__init__(cr, uid, name, context=context) user = self.pool['res.users'].browse(cr, uid, uid, context=context) partner = user.company_id.partner_id self.localcontext.update({ 'time': time, 'disc': self.discount, 'net': self.netamount, 'get_journal_amt': self._get_journal_amt, 'address': partner or False, 'titlize': titlize }) def netamount(self, order_line_id): sql = 'select (qty*price_unit) as net_price from pos_order_line where id = %s' self.cr.execute(sql, (order_line_id,)) res = self.cr.fetchone() return res[0] def discount(self, order_id): sql = 'select discount, price_unit, qty from pos_order_line where order_id = %s ' self.cr.execute(sql, (order_id,)) res = self.cr.fetchall() dsum = 0 for line in res: if line[0] != 0: dsum = dsum +(line[2] * (line[0]*line[1]/100)) return dsum def _get_journal_amt(self, order_id): data={} sql = """ select aj.name,absl.amount as amt from account_bank_statement as abs LEFT JOIN account_bank_statement_line as absl ON abs.id = absl.statement_id LEFT JOIN account_journal as aj ON aj.id = abs.journal_id WHERE absl.pos_statement_id =%d"""%(order_id) self.cr.execute(sql) data = self.cr.dictfetchall() return data class report_order_receipt(osv.AbstractModel): _name = 'report.point_of_sale.report_receipt' _inherit = 'report.abstract_report' _template = 'point_of_sale.report_receipt' _wrapped_report_class = order
unknown
codeparrot/codeparrot-clean
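The `discount()` method above sums, per order line, the quantity times the discounted fraction of the unit price. A plain-Python sketch of that aggregation, with the tuple layout `(discount, price_unit, qty)` matching the columns of the SQL `SELECT` in the report:

```python
# Plain-Python sketch of the discount aggregation performed by
# order.discount() above: each line contributes qty * (discount% of the
# unit price); lines with zero discount contribute nothing.

def total_discount(lines):
    """lines: iterable of (discount, price_unit, qty) tuples."""
    dsum = 0.0
    for discount, price_unit, qty in lines:
        if discount != 0:
            dsum += qty * (discount * price_unit / 100.0)
    return dsum
```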
// Copyright The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package almost import ( "math" "github.com/prometheus/prometheus/model/value" ) var minNormal = math.Float64frombits(0x0010000000000000) // The smallest positive normal value of type float64. // Equal returns true if a and b differ by less than their sum // multiplied by epsilon, or if both are StaleNaN, or if both are any other NaN. func Equal(a, b, epsilon float64) bool { // StaleNaN is a special value that is used as staleness maker, and // we don't want it to compare equal to any other NaN. if value.IsStaleNaN(a) || value.IsStaleNaN(b) { return value.IsStaleNaN(a) && value.IsStaleNaN(b) } // NaN has no equality but for testing we still want to know whether both values // are NaN. if math.IsNaN(a) && math.IsNaN(b) { return true } // Cf. http://floating-point-gui.de/errors/comparison/ if a == b { return true } absSum := math.Abs(a) + math.Abs(b) diff := math.Abs(a - b) if a == 0 || b == 0 || absSum < minNormal { return diff < epsilon*minNormal } return diff/math.Min(absSum, math.MaxFloat64) < epsilon }
go
github
https://github.com/prometheus/prometheus
util/almost/almost.go
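The comparison logic in `almost.Equal` above translates directly to Python. A sketch of the same relative-epsilon test, omitting the Prometheus-specific `StaleNaN` case; `sys.float_info.min` is the smallest positive normal float64, matching the Go `minNormal` constant:

```python
import math
import sys

# Python sketch of the relative-epsilon comparison implemented by
# almost.Equal above (without the StaleNaN special case).

def almost_equal(a, b, epsilon):
    # NaN has no equality, but for testing we treat two NaNs as equal.
    if math.isnan(a) and math.isnan(b):
        return True
    if a == b:
        return True
    abs_sum = abs(a) + abs(b)
    diff = abs(a - b)
    # Near zero the relative error blows up; fall back to an absolute bound.
    if a == 0 or b == 0 or abs_sum < sys.float_info.min:
        return diff < epsilon * sys.float_info.min
    return diff / min(abs_sum, sys.float_info.max) < epsilon
```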
<?php /* * This file is part of the Symfony package. * * (c) Fabien Potencier <fabien@symfony.com> * * For the full copyright and license information, please view the LICENSE * file that was distributed with this source code. */ namespace Symfony\Bundle\FrameworkBundle\Routing; use Symfony\Component\Config\Exception\LoaderLoadException; use Symfony\Component\Config\Loader\DelegatingLoader as BaseDelegatingLoader; use Symfony\Component\Config\Loader\LoaderResolverInterface; use Symfony\Component\Routing\RouteCollection; /** * DelegatingLoader delegates route loading to other loaders using a loader resolver. * * This implementation resolves the _controller attribute from the short notation * to the fully-qualified form (from a:b:c to class::method). * * @author Fabien Potencier <fabien@symfony.com> * * @final */ class DelegatingLoader extends BaseDelegatingLoader { private bool $loading = false; public function __construct( LoaderResolverInterface $resolver, private array $defaultOptions = [], private array $defaultRequirements = [], ) { parent::__construct($resolver); } public function load(mixed $resource, ?string $type = null): RouteCollection { if ($this->loading) { // This can happen if a fatal error occurs in parent::load(). // Here is the scenario: // - while routes are being loaded by parent::load() below, a fatal error // occurs (e.g. parse error in a controller while loading annotations); // - PHP abruptly empties the stack trace, bypassing all catch/finally blocks; // it then calls the registered shutdown functions; // - the ErrorHandler catches the fatal error and re-injects it for rendering // thanks to HttpKernel->terminateWithException() (that calls handleException()); // - at this stage, if we try to load the routes again, we must prevent // the fatal error from occurring a second time, // otherwise the PHP process would be killed immediately; // - while rendering the exception page, the router can be required // (by e.g. 
the web profiler that needs to generate a URL); // - this handles the case and prevents the second fatal error // by triggering an exception beforehand. throw new LoaderLoadException($resource, null, 0, null, $type); } $this->loading = true; try { $collection = parent::load($resource, $type); } finally { $this->loading = false; } foreach ($collection->all() as $route) { if ($this->defaultOptions) { $route->setOptions($route->getOptions() + $this->defaultOptions); } if ($this->defaultRequirements) { $route->setRequirements($route->getRequirements() + $this->defaultRequirements); } if (!\is_string($controller = $route->getDefault('_controller'))) { continue; } if (str_contains($controller, '::')) { continue; } $route->setDefault('_controller', $controller); } return $collection; } }
php
github
https://github.com/symfony/symfony
src/Symfony/Bundle/FrameworkBundle/Routing/DelegatingLoader.php
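The long comment in `DelegatingLoader::load()` above explains a reentrancy guard: a boolean flag that throws on recursive entry and is always reset in a `finally` block. The same pattern, sketched in Python with a hypothetical loader wrapper:

```python
# Sketch of the reentrancy guard used by DelegatingLoader.load() above:
# a boolean flag rejects recursive entry (e.g. re-loading routes while an
# earlier load is still on the stack) and is always reset in finally.

class GuardedLoader:
    def __init__(self, load_fn):
        self._loading = False
        self._load_fn = load_fn

    def load(self, resource):
        if self._loading:
            # Fail fast instead of re-triggering whatever broke the
            # first load.
            raise RuntimeError("circular load of %r" % (resource,))
        self._loading = True
        try:
            return self._load_fn(resource)
        finally:
            self._loading = False
```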
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) %YAML 1.2 --- $id: http://devicetree.org/schemas/auxdisplay/maxim,max6959.yaml# $schema: http://devicetree.org/meta-schemas/core.yaml# title: MAX6958/6959 7-segment LED display controller maintainers: - Andy Shevchenko <andriy.shevchenko@linux.intel.com> description: The Maxim MAX6958/6959 7-segment LED display controller provides an I2C interface to up to four 7-segment LED digits. The MAX6959, in comparison to MAX6958, adds input support. Type of the chip can be autodetected via specific register read, and hence the features may be enabled in the driver at run-time, in case they are requested via Device Tree. A given hardware is simple and does not provide any additional pins, such as reset or power enable. properties: compatible: const: maxim,max6959 reg: maxItems: 1 required: - compatible - reg additionalProperties: false examples: - | i2c { #address-cells = <1>; #size-cells = <0>; display-controller@38 { compatible = "maxim,max6959"; reg = <0x38>; }; };
unknown
github
https://github.com/torvalds/linux
Documentation/devicetree/bindings/auxdisplay/maxim,max6959.yaml
# coding=utf-8 """InaSAFE Disaster risk assessment tool developed by AusAid Contact : ole.moller.nielsen@gmail.com .. note:: This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. """ import json class FlatTable(): """ Flat table object - used as a source of data for pivot tables. After constructing the object, repeatedly call "add_value" method for each row of the input table. FlatTable stores only fields that are important for the creation of pivot tables later. It also aggregates values of rows where specified fields have the same value, saving memory by not storing all source data. An example of use for the flat table - afterwards it can be converted into a pivot table: flat_table = FlatTable('hazard_type', 'road_type', 'district') for f in layer.getFeatures(): flat_table.add_value( f.geometry().length(), hazard_type=f['hazard'], road_type=f['road'], zone=f['zone']) """ def __init__(self, *args): """ Construct flat table, fields are passed""" self.groups = args self.data = {} # Store keys, allowing for consistent ordered self.data_keys = [] def add_value(self, value, **kwargs): key = tuple(kwargs[group] for group in self.groups) if key not in self.data: self.data[key] = 0 self.data_keys.append(key) self.data[key] += value def get_value(self, **kwargs): """Return the value for a specific key.""" key = tuple(kwargs[group] for group in self.groups) if key not in self.data: self.data[key] = 0 return self.data[key] def group_values(self, group_name): """Return all distinct group values for given group.""" group_index = self.groups.index(group_name) values = [] for key in self.data_keys: if key[group_index] not in values: values.append(key[group_index]) return values def to_json(self): """Return json representation of FlatTable :returns: JSON string. 
:rtype: str """ return json.dumps(self.to_dict()) def from_json(self, json_string): """Override current FlatTable using data from json. :param json_string: JSON String :type json_string: str """ obj = json.loads(json_string) self.from_dict(obj['groups'], obj['data']) return self def to_dict(self): """Return common list python object. :returns: Dictionary of groups and data :rtype: dict """ list_data = [] for key, value in list(self.data.items()): row = list(key) row.append(value) list_data.append(row) return { 'groups': self.groups, 'data': list_data } def from_dict(self, groups, data): """Populate FlatTable based on groups and data. :param groups: List of group name. :type groups: list :param data: Dictionary of raw table. :type data: list example of groups: ["road_type", "hazard"] example of data: [ ["residential", "low", 50], ["residential", "medium", 30], ["secondary", "low", 40], ["primary", "high", 10], ["primary", "medium", 20] ] """ self.groups = tuple(groups) for item in data: kwargs = {} for i in range(len(self.groups)): kwargs[self.groups[i]] = item[i] self.add_value(item[-1], **kwargs) return self class PivotTable(): """ Pivot tables as known from spreadsheet software. Pivot table restructures the input data table. For example, a table with fields "hazard_type", "road_type", "district", "length" and rows where each row tells length of roads of particular road type affected by a particular hazard in a particular district. 
Now if we need any kind of summary table from these data, we use create a pivot table: PivotTable(flat_table, row_field="road_type", column_field="hazard_type") This will generate a table like this: High Medium Low Total Highway 3.5 4.3 0.2 8.0 Residential 1.2 2.2 1.0 4.4 Total 4.7 6.5 1.2 12.4 The returned pivot table will have attributes defined as follows (assuming "t" is the returned table): >>> t.total 12.4 >>> t.total_rows [8.0, 4.4] >>> t.total_columns [4.7, 6.5, 1.2] >>> t.rows ["Highway", "Residential"] >>> t.columns ["High", "Medium", "Low"] >>> t.data [[3.5, 4.3, 0.2], [1.2, 2.2, 1.0]] The summary table includes data from all districts. If we wanted to focus only on district named "West Side": PivotTable(flat_table, row_field="road_type", column_field="hazard_type", filter_field="district", filter_value="West Side") """ def __init__(self, flat_table, row_field=None, column_field=None, filter_field=None, filter_value=None, columns=None, affected_columns=None): """ Make a pivot table out of the source data :param flat_table: Flat table with input data for pivot table :type flat_table: FlatTable :param row_field: Field name from flat table to use for rows. If None, there will be just one row in the pivot table :type row_field: str :param column_field: Field name from flat table to use for columns. If None, there will be just one column in the pivot table :type column_field: str :param filter_field: Field name from flat table which will be used for filtering. To be used together with filter_value. If None, no filtering will be applied. :type filter_field: str :param filter_value: Value of filter_field that will pass filtering, all other values will be skipped for pivot table :type filter_value: any :param columns: List of columns to be present. If not defined, the list of columns will be determined from unique column_field values. If defined, it explicitly defines order of columns and it includes columns even if they were not in input data. 
:param columns: list :param affected_columns: List of columns which are considered affected. It has to used with column_field. :type affected_columns: list """ if affected_columns is None: affected_columns = [] if len(flat_table.data) == 0: raise ValueError('No input data') if row_field is not None: flat_row_index = flat_table.groups.index(row_field) if column_field is not None: flat_column_index = flat_table.groups.index(column_field) if filter_field is not None: flat_filter_index = flat_table.groups.index(filter_field) sums = {} # key = (row, column), value = sum sums_affected = {} # key = row, value = sum for flat_key, flat_value in list(flat_table.data.items()): # apply filtering if filter_field is not None: if flat_key[flat_filter_index] != filter_value: continue if column_field is not None: current_value = flat_key[flat_column_index] if current_value in affected_columns: if row_field is not None: row_key = flat_key[flat_row_index] else: row_key = '' if row_key not in list(sums_affected.keys()): sums_affected[row_key] = 0 sums_affected[row_key] += flat_value if column_field is not None and row_field is not None: key = flat_key[flat_row_index], flat_key[flat_column_index] elif row_field is not None: key = (flat_key[flat_row_index], '') elif column_field is not None: key = ('', flat_key[flat_column_index]) if key not in sums: sums[key] = 0 sums[key] += flat_value # TODO: configurable order of rows # - undefined # - using row label # - using column's values # - custom (using function) # determine rows if row_field is None: self.rows = [''] else: self.rows = flat_table.group_values(row_field) # determine columns if columns is not None: self.columns = columns elif column_field is None: self.columns = [''] else: self.columns = flat_table.group_values(column_field) self.affected_columns = affected_columns self.total = 0.0 self.total_rows = [0.0] * len(self.rows) self.total_columns = [0.0] * len(self.columns) self.data = [[] for i in range(len(self.rows))] for i in 
range(len(self.rows)): self.data[i] = [0.0] * len(self.columns) for (sum_row, sum_column), sum_value in list(sums.items()): sum_row_index = self.rows.index(sum_row) sum_column_index = self.columns.index(sum_column) self.data[sum_row_index][sum_column_index] = sum_value self.total_rows[sum_row_index] += sum_value self.total_columns[sum_column_index] += sum_value self.total += sum_value self.total_rows_affected = [0.0] * len(self.rows) self.total_affected = 0.0 for row, value in list(sums_affected.items()): self.total_affected += value sum_row_index = self.rows.index(row) self.total_rows_affected[sum_row_index] = value self.total_percent_rows_affected = [0.0] * len(self.rows) for row, value in enumerate(self.total_rows_affected): try: percent = value * 100 / self.total_rows[row] self.total_percent_rows_affected[row] = percent except ZeroDivisionError: pass try: percent = self.total_affected * 100 / self.total self.total_percent_affected = percent except ZeroDivisionError: self.total_percent_affected = None def __repr__(self): """Dump object content in a readable format.""" pivot = '<PivotTable ' \ 'total=%f\n ' \ 'total_rows=%s\n ' \ 'total_columns=%s\n ' \ 'total_rows_affected=%s\n ' \ 'total_affected=%s\n ' \ 'rows=%s\n ' \ 'columns=%s\n ' \ 'affected columns=%s\n' \ 'data=%s>' % ( self.total, repr(self.total_rows), repr(self.total_columns), repr(self.total_rows_affected), repr(self.total_affected), repr(self.rows), repr(self.columns), repr(self.affected_columns), repr(self.data)) return pivot
unknown
codeparrot/codeparrot-clean
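The `FlatTable` → `PivotTable` flow described in the docstrings above reduces to: sum values per `(row, column)` key, then derive row, column, and grand totals. A plain-dict sketch of that aggregation, tested against the Highway/Residential numbers from the `PivotTable` docstring (the data is illustrative):

```python
# Plain-dict sketch of the FlatTable -> PivotTable aggregation described
# in the docstrings above: group (row, column) keys, sum their values,
# then derive per-row, per-column, and grand totals.

def pivot(rows_in):
    """rows_in: iterable of (row_key, column_key, value) triples."""
    sums = {}
    for row, col, value in rows_in:
        sums[(row, col)] = sums.get((row, col), 0.0) + value
    total_rows, total_columns, total = {}, {}, 0.0
    for (row, col), value in sums.items():
        total_rows[row] = total_rows.get(row, 0.0) + value
        total_columns[col] = total_columns.get(col, 0.0) + value
        total += value
    return sums, total_rows, total_columns, total
```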
/* * Copyright 2012-present the original author or authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * https://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.springframework.boot.configurationsample.method; public class NestedProperty { private String myNestedProperty; public String getMyNestedProperty() { return this.myNestedProperty; } public void setMyNestedProperty(String myNestedProperty) { this.myNestedProperty = myNestedProperty; } }
java
github
https://github.com/spring-projects/spring-boot
configuration-metadata/spring-boot-configuration-processor/src/test/java/org/springframework/boot/configurationsample/method/NestedProperty.java
# Copyright 2017 The Chromium Authors. All rights reserved. # Use of this source code is governed by a BSD-style license that can be # found in the LICENSE file. """URL endpoint for adding new histograms to the dashboard.""" from __future__ import print_function from __future__ import division from __future__ import absolute_import import cloudstorage import decimal import ijson import json import logging import StringIO import sys import uuid import zlib from google.appengine.api import taskqueue from dashboard.api import api_request_handler from dashboard.common import datastore_hooks from dashboard.common import histogram_helpers from dashboard.common import request_handler from dashboard.common import timing from dashboard.common import utils from dashboard.models import graph_data from dashboard.models import histogram from tracing.value import histogram_set from tracing.value.diagnostics import diagnostic from tracing.value.diagnostics import reserved_infos SUITE_LEVEL_SPARSE_DIAGNOSTIC_NAMES = set([ reserved_infos.ARCHITECTURES.name, reserved_infos.BENCHMARKS.name, reserved_infos.BENCHMARK_DESCRIPTIONS.name, reserved_infos.BOTS.name, reserved_infos.BUG_COMPONENTS.name, reserved_infos.DOCUMENTATION_URLS.name, reserved_infos.GPUS.name, reserved_infos.MASTERS.name, reserved_infos.MEMORY_AMOUNTS.name, reserved_infos.OS_NAMES.name, reserved_infos.OS_VERSIONS.name, reserved_infos.OWNERS.name, reserved_infos.PRODUCT_VERSIONS.name, ]) HISTOGRAM_LEVEL_SPARSE_DIAGNOSTIC_NAMES = set([ reserved_infos.DEVICE_IDS.name, reserved_infos.STORIES.name, reserved_infos.STORYSET_REPEATS.name, reserved_infos.STORY_TAGS.name, ]) SPARSE_DIAGNOSTIC_NAMES = SUITE_LEVEL_SPARSE_DIAGNOSTIC_NAMES.union( HISTOGRAM_LEVEL_SPARSE_DIAGNOSTIC_NAMES) TASK_QUEUE_NAME = 'histograms-queue' _RETRY_PARAMS = cloudstorage.RetryParams(backoff_factor=1.1) _TASK_RETRY_LIMIT = 4 _ZLIB_BUFFER_SIZE = 4096 def _CheckRequest(condition, msg): if not condition: raise api_request_handler.BadRequestError(msg) 
class DecompressFileWrapper(object): """A file-like object implementing inline decompression. This class wraps a file-like object and does chunk-based decoding of the data. We only implement the read() function supporting fixed-chunk reading, capped to a predefined constant buffer length. Example with open('filename', 'r') as input: decompressor = DecompressFileWrapper(input) while True: chunk = decompressor.read(4096) if len(chunk) == 0: break // handle the chunk with size <= 4096 """ def __init__(self, source_file, buffer_size=_ZLIB_BUFFER_SIZE): self.source_file = source_file self.decompressor = zlib.decompressobj() self.buffer_size = buffer_size def __enter__(self): return self def read(self, size=None): # pylint: disable=invalid-name if size is None or size < 0: size = self.buffer_size # We want to read chunks of data from the buffer, chunks at a time. temporary_buffer = self.decompressor.unconsumed_tail if len(temporary_buffer) < self.buffer_size / 2: raw_buffer = self.source_file.read(size) if raw_buffer != '': temporary_buffer += raw_buffer if len(temporary_buffer) == 0: return u'' decompressed_data = self.decompressor.decompress(temporary_buffer, size) return decompressed_data def close(self): # pylint: disable=invalid-name self.decompressor.flush() def __exit__(self, exception_type, exception_value, execution_traceback): self.close() return False def _LoadHistogramList(input_file): """Incremental file decoding and JSON parsing when handling new histograms. This helper function takes an input file which yields fragments of JSON encoded histograms then incrementally builds the list of histograms to return the fully formed list in the end. Returns This function returns an instance of a list() containing dict()s decoded from the input_file. Raises This function may raise ValueError instances if we end up not finding valid JSON fragments inside the file. 
""" try: with timing.WallTimeLogger('json.load'): def NormalizeDecimals(obj): # Traverse every object in obj to turn Decimal objects into floats. if isinstance(obj, decimal.Decimal): return float(obj) if isinstance(obj, dict): for k, v in obj.items(): obj[k] = NormalizeDecimals(v) if isinstance(obj, list): obj = [NormalizeDecimals(x) for x in obj] return obj objects = [NormalizeDecimals(x) for x in ijson.items(input_file, 'item')] except ijson.JSONError as e: # Wrap exception in a ValueError raise ValueError('Failed to parse JSON: %s' % (e)) return objects class AddHistogramsProcessHandler(request_handler.RequestHandler): def post(self): datastore_hooks.SetPrivilegedRequest() try: params = json.loads(self.request.body) gcs_file_path = params['gcs_file_path'] try: logging.debug('Loading %s', gcs_file_path) gcs_file = cloudstorage.open( gcs_file_path, 'r', retry_params=_RETRY_PARAMS) with DecompressFileWrapper(gcs_file) as decompressing_file: histogram_dicts = _LoadHistogramList(decompressing_file) gcs_file.close() ProcessHistogramSet(histogram_dicts) finally: cloudstorage.delete(gcs_file_path, retry_params=_RETRY_PARAMS) except Exception as e: # pylint: disable=broad-except logging.error('Error processing histograms: %r', e.message) self.response.out.write(json.dumps({'error': e.message})) class AddHistogramsHandler(api_request_handler.ApiRequestHandler): def _CheckUser(self): self._CheckIsInternalUser() def Post(self): if utils.IsDevAppserver(): # Don't require developers to zip the body. # In prod, the data will be written to cloud storage and processed on the # taskqueue, so the caller will not see any errors. In dev_appserver, # process the data immediately so the caller will see errors. ProcessHistogramSet( _LoadHistogramList(StringIO.StringIO(self.request.body))) return with timing.WallTimeLogger('decompress'): try: data_str = self.request.body # Try to decompress at most 100 bytes from the data, only to determine # if we've been given compressed payload. 
zlib.decompressobj().decompress(data_str, 100) logging.info('Received compressed data.') except zlib.error: data_str = self.request.get('data') if not data_str: raise api_request_handler.BadRequestError( 'Missing or uncompressed data.') data_str = zlib.compress(data_str) logging.info('Received uncompressed data.') if not data_str: raise api_request_handler.BadRequestError('Missing "data" parameter') filename = uuid.uuid4() params = {'gcs_file_path': '/add-histograms-cache/%s' % filename} gcs_file = cloudstorage.open( params['gcs_file_path'], 'w', content_type='application/octet-stream', retry_params=_RETRY_PARAMS) gcs_file.write(data_str) gcs_file.close() retry_options = taskqueue.TaskRetryOptions( task_retry_limit=_TASK_RETRY_LIMIT) queue = taskqueue.Queue('default') queue.add( taskqueue.Task( url='/add_histograms/process', payload=json.dumps(params), retry_options=retry_options)) def _LogDebugInfo(histograms): hist = histograms.GetFirstHistogram() if not hist: logging.info('No histograms in data.') return log_urls = hist.diagnostics.get(reserved_infos.LOG_URLS.name) if log_urls: log_urls = list(log_urls) msg = 'Buildbot URL: %s' % str(log_urls) logging.info(msg) else: logging.info('No LOG_URLS in data.') build_urls = hist.diagnostics.get(reserved_infos.BUILD_URLS.name) if build_urls: build_urls = list(build_urls) msg = 'Build URL: %s' % str(build_urls) logging.info(msg) else: logging.info('No BUILD_URLS in data.') def ProcessHistogramSet(histogram_dicts): if not isinstance(histogram_dicts, list): raise api_request_handler.BadRequestError( 'HistogramSet JSON must be a list of dicts') histograms = histogram_set.HistogramSet() with timing.WallTimeLogger('hs.ImportDicts'): histograms.ImportDicts(histogram_dicts) with timing.WallTimeLogger('hs.DeduplicateDiagnostics'): histograms.DeduplicateDiagnostics() if len(histograms) == 0: raise api_request_handler.BadRequestError( 'HistogramSet JSON must contain at least one histogram.') with 
timing.WallTimeLogger('hs._LogDebugInfo'): _LogDebugInfo(histograms) with timing.WallTimeLogger('InlineDenseSharedDiagnostics'): InlineDenseSharedDiagnostics(histograms) # TODO(#4242): Get rid of this. # https://github.com/catapult-project/catapult/issues/4242 with timing.WallTimeLogger('_PurgeHistogramBinData'): _PurgeHistogramBinData(histograms) with timing.WallTimeLogger('_GetDiagnosticValue calls'): master = _GetDiagnosticValue( reserved_infos.MASTERS.name, histograms.GetFirstHistogram()) bot = _GetDiagnosticValue( reserved_infos.BOTS.name, histograms.GetFirstHistogram()) benchmark = _GetDiagnosticValue( reserved_infos.BENCHMARKS.name, histograms.GetFirstHistogram()) benchmark_description = _GetDiagnosticValue( reserved_infos.BENCHMARK_DESCRIPTIONS.name, histograms.GetFirstHistogram(), optional=True) with timing.WallTimeLogger('_ValidateMasterBotBenchmarkName'): _ValidateMasterBotBenchmarkName(master, bot, benchmark) with timing.WallTimeLogger('ComputeRevision'): suite_key = utils.TestKey('%s/%s/%s' % (master, bot, benchmark)) logging.info('Suite: %s', suite_key.id()) revision = ComputeRevision(histograms) logging.info('Revision: %s', revision) internal_only = graph_data.Bot.GetInternalOnlySync(master, bot) revision_record = histogram.HistogramRevisionRecord.GetOrCreate( suite_key, revision) revision_record.put() last_added = histogram.HistogramRevisionRecord.GetLatest( suite_key).get_result() # On first upload, a query immediately following a put may return nothing. if not last_added: last_added = revision_record _CheckRequest(last_added, 'No last revision') # We'll skip the histogram-level sparse diagnostics because we need to # handle those with the histograms, below, so that we can properly assign # test paths. 
with timing.WallTimeLogger('FindSuiteLevelSparseDiagnostics'): suite_level_sparse_diagnostic_entities = FindSuiteLevelSparseDiagnostics( histograms, suite_key, revision, internal_only) # TODO(896856): Refactor master/bot computation to happen above this line # so that we can replace with a DiagnosticRef rather than a full diagnostic. with timing.WallTimeLogger('DeduplicateAndPut'): new_guids_to_old_diagnostics = ( histogram.SparseDiagnostic.FindOrInsertDiagnostics( suite_level_sparse_diagnostic_entities, suite_key, revision, last_added.revision).get_result()) with timing.WallTimeLogger('ReplaceSharedDiagnostic calls'): for new_guid, old_diagnostic in new_guids_to_old_diagnostics.items(): histograms.ReplaceSharedDiagnostic( new_guid, diagnostic.Diagnostic.FromDict(old_diagnostic)) with timing.WallTimeLogger('_CreateHistogramTasks'): tasks = _CreateHistogramTasks( suite_key.id(), histograms, revision, benchmark_description) with timing.WallTimeLogger('_QueueHistogramTasks'): _QueueHistogramTasks(tasks) def _ValidateMasterBotBenchmarkName(master, bot, benchmark): for n in (master, bot, benchmark): if '/' in n: raise api_request_handler.BadRequestError('Illegal slash in %s' % n) def _QueueHistogramTasks(tasks): queue = taskqueue.Queue(TASK_QUEUE_NAME) futures = [] for i in range(0, len(tasks), taskqueue.MAX_TASKS_PER_ADD): f = queue.add_async(tasks[i:i + taskqueue.MAX_TASKS_PER_ADD]) futures.append(f) for f in futures: f.get_result() def _MakeTask(params): return taskqueue.Task( url='/add_histograms_queue', payload=json.dumps(params), _size_check=False) def _CreateHistogramTasks( suite_path, histograms, revision, benchmark_description): tasks = [] duplicate_check = set() for hist in histograms: diagnostics = FindHistogramLevelSparseDiagnostics(hist) test_path = '%s/%s' % (suite_path, histogram_helpers.ComputeTestPath(hist)) # Log the information here so we can see which histograms are being queued. 
logging.debug('Queueing: %s', test_path) if test_path in duplicate_check: raise api_request_handler.BadRequestError( 'Duplicate histogram detected: %s' % test_path) duplicate_check.add(test_path) # We create one task per histogram, so that we can get as much time as we # need for processing each histogram per task. task_dict = _MakeTaskDict( hist, test_path, revision, benchmark_description, diagnostics) tasks.append(_MakeTask([task_dict])) return tasks def _MakeTaskDict( hist, test_path, revision, benchmark_description, diagnostics): # TODO(simonhatch): "revision" is common to all tasks, as is the majority of # the test path params = { 'test_path': test_path, 'revision': revision, 'benchmark_description': benchmark_description } # By changing the GUID just before serializing the task, we're making it # unique for each histogram. This avoids each histogram trying to write the # same diagnostic out (datastore contention), at the cost of copyin the # data. These are sparsely written to datastore anyway, so the extra # storage should be minimal. 
for d in diagnostics.values(): d.ResetGuid() diagnostics = {k: d.AsDict() for k, d in diagnostics.items()} params['diagnostics'] = diagnostics params['data'] = hist.AsDict() return params def FindSuiteLevelSparseDiagnostics( histograms, suite_key, revision, internal_only): diagnostics = {} for hist in histograms: for name, diag in hist.diagnostics.items(): if name in SUITE_LEVEL_SPARSE_DIAGNOSTIC_NAMES: existing_entity = diagnostics.get(name) if existing_entity is None: diagnostics[name] = histogram.SparseDiagnostic( id=diag.guid, data=diag.AsDict(), test=suite_key, start_revision=revision, end_revision=sys.maxsize, name=name, internal_only=internal_only) elif existing_entity.key.id() != diag.guid: raise ValueError( name + ' diagnostics must be the same for all histograms') return list(diagnostics.values()) def FindHistogramLevelSparseDiagnostics(hist): diagnostics = {} for name, diag in hist.diagnostics.items(): if name in HISTOGRAM_LEVEL_SPARSE_DIAGNOSTIC_NAMES: diagnostics[name] = diag return diagnostics def _GetDiagnosticValue(name, hist, optional=False): if optional: if name not in hist.diagnostics: return None _CheckRequest( name in hist.diagnostics, 'Histogram [%s] missing "%s" diagnostic' % (hist.name, name)) value = hist.diagnostics[name] _CheckRequest( len(value) == 1, 'Histograms must have exactly 1 "%s"' % name) return value.GetOnlyElement() def ComputeRevision(histograms): _CheckRequest(len(histograms) > 0, 'Must upload at least one histogram') rev = _GetDiagnosticValue( reserved_infos.POINT_ID.name, histograms.GetFirstHistogram(), optional=True) if rev is None: rev = _GetDiagnosticValue( reserved_infos.CHROMIUM_COMMIT_POSITIONS.name, histograms.GetFirstHistogram(), optional=True) if rev is None: revision_timestamps = histograms.GetFirstHistogram().diagnostics.get( reserved_infos.REVISION_TIMESTAMPS.name) _CheckRequest(revision_timestamps is not None, 'Must specify REVISION_TIMESTAMPS, CHROMIUM_COMMIT_POSITIONS,' ' or POINT_ID') rev = 
revision_timestamps.max_timestamp if not isinstance(rev, int): raise api_request_handler.BadRequestError( 'Point ID must be an integer.') return rev def InlineDenseSharedDiagnostics(histograms): # TODO(896856): Delete inlined diagnostics from the set for hist in histograms: diagnostics = hist.diagnostics for name, diag in diagnostics.items(): if name not in SPARSE_DIAGNOSTIC_NAMES: diag.Inline() def _PurgeHistogramBinData(histograms): # We do this because RelatedEventSet and Breakdown data in bins is # enormous in their current implementation. for cur_hist in histograms: for cur_bin in cur_hist.bins: for dm in cur_bin.diagnostic_maps: keys = list(dm.keys()) for k in keys: del dm[k]
unknown
codeparrot/codeparrot-clean
import {
    codeFixAll,
    createCodeFixAction,
    registerCodeFix,
} from "../_namespaces/ts.codefix.js";
import {
    cast,
    Diagnostics,
    Expression,
    factory,
    getTokenAtPosition,
    isShorthandPropertyAssignment,
    ShorthandPropertyAssignment,
    SourceFile,
    textChanges,
} from "../_namespaces/ts.js";

const fixId = "fixPropertyAssignment";
const errorCodes = [
    Diagnostics.Did_you_mean_to_use_a_Colon_An_can_only_follow_a_property_name_when_the_containing_object_literal_is_part_of_a_destructuring_pattern.code,
];

registerCodeFix({
    errorCodes,
    fixIds: [fixId],
    getCodeActions(context) {
        const { sourceFile, span } = context;
        const property = getProperty(sourceFile, span.start);
        const changes = textChanges.ChangeTracker.with(context, t => doChange(t, context.sourceFile, property));
        return [createCodeFixAction(fixId, changes, [Diagnostics.Change_0_to_1, "=", ":"], fixId, [Diagnostics.Switch_each_misused_0_to_1, "=", ":"])];
    },
    getAllCodeActions: context => codeFixAll(context, errorCodes, (changes, diag) => doChange(changes, diag.file, getProperty(diag.file, diag.start))),
});

function doChange(changes: textChanges.ChangeTracker, sourceFile: SourceFile, node: ShorthandPropertyAssignment): void {
    changes.replaceNode(sourceFile, node, factory.createPropertyAssignment(node.name, node.objectAssignmentInitializer as Expression));
}

function getProperty(sourceFile: SourceFile, pos: number): ShorthandPropertyAssignment {
    return cast(getTokenAtPosition(sourceFile, pos).parent, isShorthandPropertyAssignment);
}
typescript
github
https://github.com/microsoft/TypeScript
src/services/codefixes/fixPropertyAssignment.ts
"""Grading tests""" import unittest from xmodule import graders from xmodule.graders import Score, aggregate_scores class GradesheetTest(unittest.TestCase): '''Tests the aggregate_scores method''' def test_weighted_grading(self): scores = [] Score.__sub__ = lambda me, other: (me.earned - other.earned) + (me.possible - other.possible) all_total, graded_total = aggregate_scores(scores) self.assertEqual(all_total, Score(earned=0, possible=0, graded=False, section="summary")) self.assertEqual(graded_total, Score(earned=0, possible=0, graded=True, section="summary")) scores.append(Score(earned=0, possible=5, graded=False, section="summary")) all_total, graded_total = aggregate_scores(scores) self.assertEqual(all_total, Score(earned=0, possible=5, graded=False, section="summary")) self.assertEqual(graded_total, Score(earned=0, possible=0, graded=True, section="summary")) scores.append(Score(earned=3, possible=5, graded=True, section="summary")) all_total, graded_total = aggregate_scores(scores) self.assertAlmostEqual(all_total, Score(earned=3, possible=10, graded=False, section="summary")) self.assertAlmostEqual(graded_total, Score(earned=3, possible=5, graded=True, section="summary")) scores.append(Score(earned=2, possible=5, graded=True, section="summary")) all_total, graded_total = aggregate_scores(scores) self.assertAlmostEqual(all_total, Score(earned=5, possible=15, graded=False, section="summary")) self.assertAlmostEqual(graded_total, Score(earned=5, possible=10, graded=True, section="summary")) class GraderTest(unittest.TestCase): '''Tests grader implementations''' empty_gradesheet = { } incomplete_gradesheet = { 'Homework': [], 'Lab': [], 'Midterm': [], } test_gradesheet = { 'Homework': [Score(earned=2, possible=20.0, graded=True, section='hw1'), Score(earned=16, possible=16.0, graded=True, section='hw2')], # The dropped scores should be from the assignments that don't exist yet 'Lab': [Score(earned=1, possible=2.0, graded=True, section='lab1'), # Dropped 
Score(earned=1, possible=1.0, graded=True, section='lab2'), Score(earned=1, possible=1.0, graded=True, section='lab3'), Score(earned=5, possible=25.0, graded=True, section='lab4'), # Dropped Score(earned=3, possible=4.0, graded=True, section='lab5'), # Dropped Score(earned=6, possible=7.0, graded=True, section='lab6'), Score(earned=5, possible=6.0, graded=True, section='lab7')], 'Midterm': [Score(earned=50.5, possible=100, graded=True, section="Midterm Exam"), ], } def test_single_section_grader(self): midterm_grader = graders.SingleSectionGrader("Midterm", "Midterm Exam") lab4_grader = graders.SingleSectionGrader("Lab", "lab4") bad_lab_grader = graders.SingleSectionGrader("Lab", "lab42") for graded in [midterm_grader.grade(self.empty_gradesheet), midterm_grader.grade(self.incomplete_gradesheet), bad_lab_grader.grade(self.test_gradesheet)]: self.assertEqual(len(graded['section_breakdown']), 1) self.assertEqual(graded['percent'], 0.0) graded = midterm_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.505) self.assertEqual(len(graded['section_breakdown']), 1) graded = lab4_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.2) self.assertEqual(len(graded['section_breakdown']), 1) def test_assignment_format_grader(self): homework_grader = graders.AssignmentFormatGrader("Homework", 12, 2) no_drop_grader = graders.AssignmentFormatGrader("Homework", 12, 0) # Even though the minimum number is 3, this should grade correctly when 7 assignments are found overflow_grader = graders.AssignmentFormatGrader("Lab", 3, 2) lab_grader = graders.AssignmentFormatGrader("Lab", 7, 3) # Test the grading of an empty gradesheet for graded in [homework_grader.grade(self.empty_gradesheet), no_drop_grader.grade(self.empty_gradesheet), homework_grader.grade(self.incomplete_gradesheet), no_drop_grader.grade(self.incomplete_gradesheet)]: self.assertAlmostEqual(graded['percent'], 0.0) # Make sure the breakdown includes 12 sections, plus one 
summary self.assertEqual(len(graded['section_breakdown']), 12 + 1) graded = homework_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.11) # 100% + 10% / 10 assignments self.assertEqual(len(graded['section_breakdown']), 12 + 1) graded = no_drop_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.0916666666666666) # 100% + 10% / 12 assignments self.assertEqual(len(graded['section_breakdown']), 12 + 1) graded = overflow_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.8880952380952382) # 100% + 10% / 5 assignments self.assertEqual(len(graded['section_breakdown']), 7 + 1) graded = lab_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.9226190476190477) self.assertEqual(len(graded['section_breakdown']), 7 + 1) def test_assignment_format_grader_on_single_section_entry(self): midterm_grader = graders.AssignmentFormatGrader("Midterm", 1, 0) # Test the grading on a section with one item: for graded in [midterm_grader.grade(self.empty_gradesheet), midterm_grader.grade(self.incomplete_gradesheet)]: self.assertAlmostEqual(graded['percent'], 0.0) # Make sure the breakdown includes just the one summary self.assertEqual(len(graded['section_breakdown']), 0 + 1) self.assertEqual(graded['section_breakdown'][0]['label'], 'Midterm') graded = midterm_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.505) self.assertEqual(len(graded['section_breakdown']), 0 + 1) def test_weighted_subsections_grader(self): # First, a few sub graders homework_grader = graders.AssignmentFormatGrader("Homework", 12, 2) lab_grader = graders.AssignmentFormatGrader("Lab", 7, 3) # phasing out the use of SingleSectionGraders, and instead using AssignmentFormatGraders that # will act like SingleSectionGraders on single sections. 
midterm_grader = graders.AssignmentFormatGrader("Midterm", 1, 0) weighted_grader = graders.WeightedSubsectionsGrader([(homework_grader, homework_grader.category, 0.25), (lab_grader, lab_grader.category, 0.25), (midterm_grader, midterm_grader.category, 0.5)]) over_one_weights_grader = graders.WeightedSubsectionsGrader([(homework_grader, homework_grader.category, 0.5), (lab_grader, lab_grader.category, 0.5), (midterm_grader, midterm_grader.category, 0.5)]) # The midterm should have all weight on this one zero_weights_grader = graders.WeightedSubsectionsGrader([(homework_grader, homework_grader.category, 0.0), (lab_grader, lab_grader.category, 0.0), (midterm_grader, midterm_grader.category, 0.5)]) # This should always have a final percent of zero all_zero_weights_grader = graders.WeightedSubsectionsGrader([(homework_grader, homework_grader.category, 0.0), (lab_grader, lab_grader.category, 0.0), (midterm_grader, midterm_grader.category, 0.0)]) empty_grader = graders.WeightedSubsectionsGrader([]) graded = weighted_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.5106547619047619) self.assertEqual(len(graded['section_breakdown']), (12 + 1) + (7 + 1) + 1) self.assertEqual(len(graded['grade_breakdown']), 3) graded = over_one_weights_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.7688095238095238) self.assertEqual(len(graded['section_breakdown']), (12 + 1) + (7 + 1) + 1) self.assertEqual(len(graded['grade_breakdown']), 3) graded = zero_weights_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.2525) self.assertEqual(len(graded['section_breakdown']), (12 + 1) + (7 + 1) + 1) self.assertEqual(len(graded['grade_breakdown']), 3) graded = all_zero_weights_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.0) self.assertEqual(len(graded['section_breakdown']), (12 + 1) + (7 + 1) + 1) self.assertEqual(len(graded['grade_breakdown']), 3) for graded in 
[weighted_grader.grade(self.empty_gradesheet), weighted_grader.grade(self.incomplete_gradesheet), zero_weights_grader.grade(self.empty_gradesheet), all_zero_weights_grader.grade(self.empty_gradesheet)]: self.assertAlmostEqual(graded['percent'], 0.0) self.assertEqual(len(graded['section_breakdown']), (12 + 1) + (7 + 1) + 1) self.assertEqual(len(graded['grade_breakdown']), 3) graded = empty_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.0) self.assertEqual(len(graded['section_breakdown']), 0) self.assertEqual(len(graded['grade_breakdown']), 0) def test_grader_from_conf(self): # Confs always produce a graders.WeightedSubsectionsGrader, so we test this by repeating the test # in test_graders.WeightedSubsectionsGrader, but generate the graders with confs. weighted_grader = graders.grader_from_conf([ { 'type': "Homework", 'min_count': 12, 'drop_count': 2, 'short_label': "HW", 'weight': 0.25, }, { 'type': "Lab", 'min_count': 7, 'drop_count': 3, 'category': "Labs", 'weight': 0.25 }, { 'type': "Midterm", 'name': "Midterm Exam", 'short_label': "Midterm", 'weight': 0.5, }, ]) empty_grader = graders.grader_from_conf([]) graded = weighted_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.5106547619047619) self.assertEqual(len(graded['section_breakdown']), (12 + 1) + (7 + 1) + 1) self.assertEqual(len(graded['grade_breakdown']), 3) graded = empty_grader.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.0) self.assertEqual(len(graded['section_breakdown']), 0) self.assertEqual(len(graded['grade_breakdown']), 0) # Test that graders can also be used instead of lists of dictionaries homework_grader = graders.AssignmentFormatGrader("Homework", 12, 2) homework_grader2 = graders.grader_from_conf(homework_grader) graded = homework_grader2.grade(self.test_gradesheet) self.assertAlmostEqual(graded['percent'], 0.11) self.assertEqual(len(graded['section_breakdown']), 12 + 1) # TODO: How do we test failure 
cases? The parser only logs an error when # it can't parse something. Maybe it should throw exceptions?
unknown
codeparrot/codeparrot-clean
import base64
import os
import email
import urllib.parse
import urllib.request
import http.server
import threading
import unittest
import hashlib

from test import support
from test.support import hashlib_helper
from test.support import threading_helper

try:
    import ssl
except ImportError:
    ssl = None

support.requires_working_socket(module=True)

here = os.path.dirname(__file__)
# Self-signed cert file for 'localhost'
CERT_localhost = os.path.join(here, 'certdata', 'keycert.pem')
# Self-signed cert file for 'fakehostname'
CERT_fakehostname = os.path.join(here, 'certdata', 'keycert2.pem')


# Loopback http server infrastructure

class LoopbackHttpServer(http.server.HTTPServer):
    """HTTP server w/ a few modifications that make it useful for
    loopback testing purposes.
    """

    def __init__(self, server_address, RequestHandlerClass):
        http.server.HTTPServer.__init__(self,
                                        server_address,
                                        RequestHandlerClass)

        # Set the timeout of our listening socket really low so
        # that we can stop the server easily.
        self.socket.settimeout(0.1)

    def get_request(self):
        """HTTPServer method, overridden."""

        request, client_address = self.socket.accept()

        # It's a loopback connection, so setting the timeout
        # really low shouldn't affect anything, but should make
        # deadlocks less likely to occur.
        request.settimeout(10.0)

        return (request, client_address)


class LoopbackHttpServerThread(threading.Thread):
    """Stoppable thread that runs a loopback http server."""

    def __init__(self, request_handler):
        threading.Thread.__init__(self)
        self._stop_server = False
        self.ready = threading.Event()
        request_handler.protocol_version = "HTTP/1.0"
        self.httpd = LoopbackHttpServer(("127.0.0.1", 0),
                                        request_handler)
        self.port = self.httpd.server_port

    def stop(self):
        """Stops the webserver if it's currently running."""

        self._stop_server = True

        self.join()
        self.httpd.server_close()

    def run(self):
        self.ready.set()
        while not self._stop_server:
            self.httpd.handle_request()


# Authentication infrastructure

class DigestAuthHandler:
    """Handler for performing digest authentication."""

    def __init__(self):
        self._request_num = 0
        self._nonces = []
        self._users = {}
        self._realm_name = "Test Realm"
        self._qop = "auth"

    def set_qop(self, qop):
        self._qop = qop

    def set_users(self, users):
        assert isinstance(users, dict)
        self._users = users

    def set_realm(self, realm):
        self._realm_name = realm

    def _generate_nonce(self):
        self._request_num += 1
        nonce = hashlib.md5(str(self._request_num).encode("ascii")).hexdigest()
        self._nonces.append(nonce)
        return nonce

    def _create_auth_dict(self, auth_str):
        first_space_index = auth_str.find(" ")
        auth_str = auth_str[first_space_index+1:]

        parts = auth_str.split(",")

        auth_dict = {}
        for part in parts:
            name, value = part.split("=")
            name = name.strip()
            if value[0] == '"' and value[-1] == '"':
                value = value[1:-1]
            else:
                value = value.strip()
            auth_dict[name] = value
        return auth_dict

    def _validate_auth(self, auth_dict, password, method, uri):
        final_dict = {}
        final_dict.update(auth_dict)
        final_dict["password"] = password
        final_dict["method"] = method
        final_dict["uri"] = uri
        HA1_str = "%(username)s:%(realm)s:%(password)s" % final_dict
        HA1 = hashlib.md5(HA1_str.encode("ascii")).hexdigest()
        HA2_str = "%(method)s:%(uri)s" % final_dict
        HA2 = hashlib.md5(HA2_str.encode("ascii")).hexdigest()
        final_dict["HA1"] = HA1
        final_dict["HA2"] = HA2
        response_str = "%(HA1)s:%(nonce)s:%(nc)s:" \
                       "%(cnonce)s:%(qop)s:%(HA2)s" % final_dict
        response = hashlib.md5(response_str.encode("ascii")).hexdigest()

        return response == auth_dict["response"]

    def _return_auth_challenge(self, request_handler):
        request_handler.send_response(407, "Proxy Authentication Required")
        request_handler.send_header("Content-Type", "text/html")
        request_handler.send_header(
            'Proxy-Authenticate', 'Digest realm="%s", '
            'qop="%s",'
            'nonce="%s", ' % \
            (self._realm_name, self._qop, self._generate_nonce()))
        # XXX: Not sure if we're supposed to add this next header or
        # not.
        #request_handler.send_header('Connection', 'close')
        request_handler.end_headers()
        request_handler.wfile.write(b"Proxy Authentication Required.")
        return False

    def handle_request(self, request_handler):
        """Performs digest authentication on the given HTTP request
        handler.  Returns True if authentication was successful, False
        otherwise.

        If no users have been set, then digest auth is effectively
        disabled and this method will always return True.
        """

        if len(self._users) == 0:
            return True

        if "Proxy-Authorization" not in request_handler.headers:
            return self._return_auth_challenge(request_handler)
        else:
            auth_dict = self._create_auth_dict(
                request_handler.headers["Proxy-Authorization"]
                )
            if auth_dict["username"] in self._users:
                password = self._users[ auth_dict["username"] ]
            else:
                return self._return_auth_challenge(request_handler)
            if not auth_dict.get("nonce") in self._nonces:
                return self._return_auth_challenge(request_handler)
            else:
                self._nonces.remove(auth_dict["nonce"])

            auth_validated = False

            # MSIE uses short_path in its validation, but Python's
            # urllib.request uses the full path, so we're going to see if
            # either of them works here.
            for path in [request_handler.path, request_handler.short_path]:
                if self._validate_auth(auth_dict,
                                       password,
                                       request_handler.command,
                                       path):
                    auth_validated = True

            if not auth_validated:
                return self._return_auth_challenge(request_handler)
            return True


class BasicAuthHandler(http.server.BaseHTTPRequestHandler):
    """Handler for performing basic authentication."""

    # Server side values
    USER = 'testUser'
    PASSWD = 'testPass'
    REALM = 'Test'
    USER_PASSWD = "%s:%s" % (USER, PASSWD)
    ENCODED_AUTH = base64.b64encode(USER_PASSWD.encode('ascii')).decode('ascii')

    def __init__(self, *args, **kwargs):
        http.server.BaseHTTPRequestHandler.__init__(self, *args, **kwargs)

    def log_message(self, format, *args):
        # Suppress console log message
        pass

    def do_HEAD(self):
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()

    def do_AUTHHEAD(self):
        self.send_response(401)
        self.send_header("WWW-Authenticate",
                         "Basic realm=\"%s\"" % self.REALM)
        self.send_header("Content-type", "text/html")
        self.end_headers()

    def do_GET(self):
        if not self.headers.get("Authorization", ""):
            self.do_AUTHHEAD()
            self.wfile.write(b"No Auth header received")
        elif self.headers.get(
                "Authorization", "") == "Basic " + self.ENCODED_AUTH:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"It works")
        else:
            # Request Unauthorized
            self.do_AUTHHEAD()


# Proxy test infrastructure

class FakeProxyHandler(http.server.BaseHTTPRequestHandler):
    """This is a 'fake proxy' that makes it look like the entire
    internet has gone down due to a sudden zombie invasion.  Its main
    utility is in providing us with authentication support for
    testing.
    """

    def __init__(self, digest_auth_handler, *args, **kwargs):
        # This has to be set before calling our parent's __init__(), which will
        # try to call do_GET().
        self.digest_auth_handler = digest_auth_handler
        http.server.BaseHTTPRequestHandler.__init__(self, *args, **kwargs)

    def log_message(self, format, *args):
        # Uncomment the next line for debugging.
        # sys.stderr.write(format % args)
        pass

    def do_GET(self):
        (scm, netloc, path, params, query, fragment) = urllib.parse.urlparse(
            self.path, "http")
        self.short_path = path
        if self.digest_auth_handler.handle_request(self):
            self.send_response(200, "OK")
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(bytes("You've reached %s!<BR>" % self.path,
                                   "ascii"))
            self.wfile.write(b"Our apologies, but our server is down due to "
                             b"a sudden zombie invasion.")


# Test cases

class BasicAuthTests(unittest.TestCase):
    USER = "testUser"
    PASSWD = "testPass"
    INCORRECT_PASSWD = "Incorrect"
    REALM = "Test"

    def setUp(self):
        super(BasicAuthTests, self).setUp()
        # With Basic Authentication
        def http_server_with_basic_auth_handler(*args, **kwargs):
            return BasicAuthHandler(*args, **kwargs)
        self.server = LoopbackHttpServerThread(http_server_with_basic_auth_handler)
        self.addCleanup(self.stop_server)
        self.server_url = 'http://127.0.0.1:%s' % self.server.port
        self.server.start()
        self.server.ready.wait()

    def stop_server(self):
        self.server.stop()
        self.server = None

    def tearDown(self):
        super(BasicAuthTests, self).tearDown()

    def test_basic_auth_success(self):
        ah = urllib.request.HTTPBasicAuthHandler()
        ah.add_password(self.REALM, self.server_url, self.USER, self.PASSWD)
        urllib.request.install_opener(urllib.request.build_opener(ah))
        try:
            self.assertTrue(urllib.request.urlopen(self.server_url))
        except urllib.error.HTTPError:
            self.fail("Basic auth failed for the url: %s" % self.server_url)

    def test_basic_auth_httperror(self):
        ah = urllib.request.HTTPBasicAuthHandler()
        ah.add_password(self.REALM, self.server_url, self.USER,
                        self.INCORRECT_PASSWD)
        urllib.request.install_opener(urllib.request.build_opener(ah))
        with self.assertRaises(urllib.error.HTTPError) as cm:
            urllib.request.urlopen(self.server_url)
        cm.exception.close()


@hashlib_helper.requires_hashdigest("md5", openssl=True)
class ProxyAuthTests(unittest.TestCase):
    URL = "http://localhost"

    USER = "tester"
    PASSWD = "test123"
    REALM = "TestRealm"

    def setUp(self):
        super(ProxyAuthTests, self).setUp()
        # Ignore proxy bypass settings in the environment.
        def restore_environ(old_environ):
            os.environ.clear()
            os.environ.update(old_environ)
        self.addCleanup(restore_environ, os.environ.copy())
        os.environ['NO_PROXY'] = ''
        os.environ['no_proxy'] = ''

        self.digest_auth_handler = DigestAuthHandler()
        self.digest_auth_handler.set_users({self.USER: self.PASSWD})
        self.digest_auth_handler.set_realm(self.REALM)
        # With Digest Authentication.
        def create_fake_proxy_handler(*args, **kwargs):
            return FakeProxyHandler(self.digest_auth_handler, *args, **kwargs)
        self.server = LoopbackHttpServerThread(create_fake_proxy_handler)
        self.addCleanup(self.stop_server)
        self.server.start()
        self.server.ready.wait()

        proxy_url = "http://127.0.0.1:%d" % self.server.port
        handler = urllib.request.ProxyHandler({"http" : proxy_url})
        self.proxy_digest_handler = urllib.request.ProxyDigestAuthHandler()
        self.opener = urllib.request.build_opener(
            handler, self.proxy_digest_handler)

    def stop_server(self):
        self.server.stop()
        self.server = None

    def test_proxy_with_bad_password_raises_httperror(self):
        self.proxy_digest_handler.add_password(self.REALM, self.URL,
                                               self.USER, self.PASSWD+"bad")
        self.digest_auth_handler.set_qop("auth")
        with self.assertRaises(urllib.error.HTTPError) as cm:
            self.opener.open(self.URL)
        cm.exception.close()

    def test_proxy_with_no_password_raises_httperror(self):
        self.digest_auth_handler.set_qop("auth")
        with self.assertRaises(urllib.error.HTTPError) as cm:
            self.opener.open(self.URL)
        cm.exception.close()

    def test_proxy_qop_auth_works(self):
        self.proxy_digest_handler.add_password(self.REALM, self.URL,
                                               self.USER, self.PASSWD)
        self.digest_auth_handler.set_qop("auth")
        with self.opener.open(self.URL) as result:
            while result.read():
                pass

    def test_proxy_qop_auth_int_works_or_throws_urlerror(self):
        self.proxy_digest_handler.add_password(self.REALM, self.URL,
                                               self.USER, self.PASSWD)
        self.digest_auth_handler.set_qop("auth-int")
        try:
            result = self.opener.open(self.URL)
        except urllib.error.URLError:
            # It's okay if we don't support auth-int, but we certainly
            # shouldn't receive any kind of exception here other than
            # a URLError.
            pass
        else:
            with result:
                while result.read():
                    pass


def GetRequestHandler(responses):

    class FakeHTTPRequestHandler(http.server.BaseHTTPRequestHandler):

        server_version = "TestHTTP/"
        requests = []
        headers_received = []
        port = 80

        def do_GET(self):
            body = self.send_head()
            while body:
                done = self.wfile.write(body)
                body = body[done:]

        def do_POST(self):
            content_length = self.headers["Content-Length"]
            post_data = self.rfile.read(int(content_length))
            self.do_GET()
            self.requests.append(post_data)

        def send_head(self):
            FakeHTTPRequestHandler.headers_received = self.headers
            self.requests.append(self.path)
            response_code, headers, body = responses.pop(0)

            self.send_response(response_code)

            for (header, value) in headers:
                self.send_header(header, value % {'port':self.port})
            if body:
                self.send_header("Content-type", "text/plain")
                self.end_headers()
                return body
            self.end_headers()

        def log_message(self, *args):
            pass

    return FakeHTTPRequestHandler


class TestUrlopen(unittest.TestCase):
    """Tests urllib.request.urlopen using the network.

    These tests are not exhaustive.  Assuming that testing using files does a
    good job overall of some of the basic interface features.  There are no
    tests exercising the optional 'data' and 'proxies' arguments.  No tests
    for transparent redirection have been written.
    """

    def setUp(self):
        super(TestUrlopen, self).setUp()

        # clear _opener global variable
        self.addCleanup(urllib.request.urlcleanup)

        # Ignore proxies for localhost tests.
        def restore_environ(old_environ):
            os.environ.clear()
            os.environ.update(old_environ)
        self.addCleanup(restore_environ, os.environ.copy())
        os.environ['NO_PROXY'] = '*'
        os.environ['no_proxy'] = '*'

    def urlopen(self, url, data=None, **kwargs):
        l = []
        f = urllib.request.urlopen(url, data, **kwargs)
        try:
            # Exercise various methods
            l.extend(f.readlines(200))
            l.append(f.readline())
            l.append(f.read(1024))
            l.append(f.read())
        finally:
            f.close()
        return b"".join(l)

    def stop_server(self):
        self.server.stop()
        self.server = None

    def start_server(self, responses=None):
        if responses is None:
            responses = [(200, [], b"we don't care")]
        handler = GetRequestHandler(responses)

        self.server = LoopbackHttpServerThread(handler)
        self.addCleanup(self.stop_server)
        self.server.start()
        self.server.ready.wait()
        port = self.server.port
        handler.port = port
        return handler

    def start_https_server(self, responses=None, **kwargs):
        if not hasattr(urllib.request, 'HTTPSHandler'):
            self.skipTest('ssl support required')
        from test.ssl_servers import make_https_server
        if responses is None:
            responses = [(200, [], b"we care a bit")]
        handler = GetRequestHandler(responses)
        server = make_https_server(self, handler_class=handler, **kwargs)
        handler.port = server.port
        return handler

    def test_redirection(self):
        expected_response = b"We got here..."
        responses = [
            (302, [("Location", "http://localhost:%(port)s/somewhere_else")],
             ""),
            (200, [], expected_response)
        ]

        handler = self.start_server(responses)
        data = self.urlopen("http://localhost:%s/" % handler.port)
        self.assertEqual(data, expected_response)
        self.assertEqual(handler.requests, ["/", "/somewhere_else"])

    def test_chunked(self):
        expected_response = b"hello world"
        chunked_start = (
            b'a\r\n'
            b'hello worl\r\n'
            b'1\r\n'
            b'd\r\n'
            b'0\r\n'
        )
        response = [(200, [("Transfer-Encoding", "chunked")], chunked_start)]
        handler = self.start_server(response)
        data = self.urlopen("http://localhost:%s/" % handler.port)
        self.assertEqual(data, expected_response)

    def test_404(self):
        expected_response = b"Bad bad bad..."
        handler = self.start_server([(404, [], expected_response)])

        try:
            self.urlopen("http://localhost:%s/weeble" % handler.port)
        except urllib.error.URLError as f:
            data = f.read()
            f.close()
        else:
            self.fail("404 should raise URLError")

        self.assertEqual(data, expected_response)
        self.assertEqual(handler.requests, ["/weeble"])

    def test_200(self):
        expected_response = b"pycon 2008..."
        handler = self.start_server([(200, [], expected_response)])
        data = self.urlopen("http://localhost:%s/bizarre" % handler.port)
        self.assertEqual(data, expected_response)
        self.assertEqual(handler.requests, ["/bizarre"])

    def test_200_with_parameters(self):
        expected_response = b"pycon 2008..."
        handler = self.start_server([(200, [], expected_response)])
        data = self.urlopen("http://localhost:%s/bizarre" % handler.port,
                            b"get=with_feeling")
        self.assertEqual(data, expected_response)
        self.assertEqual(handler.requests, ["/bizarre", b"get=with_feeling"])

    def test_https(self):
        handler = self.start_https_server()
        context = ssl.create_default_context(cafile=CERT_localhost)
        data = self.urlopen("https://localhost:%s/bizarre" % handler.port,
                            context=context)
        self.assertEqual(data, b"we care a bit")

    def test_https_sni(self):
        if ssl is None:
            self.skipTest("ssl module required")
        if not ssl.HAS_SNI:
            self.skipTest("SNI support required in OpenSSL")
        sni_name = None
        def cb_sni(ssl_sock, server_name, initial_context):
            nonlocal sni_name
            sni_name = server_name
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        context.set_servername_callback(cb_sni)
        handler = self.start_https_server(context=context,
                                          certfile=CERT_localhost)
        context = ssl.create_default_context(cafile=CERT_localhost)
        self.urlopen("https://localhost:%s" % handler.port, context=context)
        self.assertEqual(sni_name, "localhost")

    def test_sending_headers(self):
        handler = self.start_server()
        req = urllib.request.Request("http://localhost:%s/" % handler.port,
                                     headers={"Range": "bytes=20-39"})
        with urllib.request.urlopen(req):
            pass
        self.assertEqual(handler.headers_received["Range"], "bytes=20-39")

    def test_sending_headers_camel(self):
        handler = self.start_server()
        req = urllib.request.Request("http://localhost:%s/" % handler.port,
                                     headers={"X-SoMe-hEader": "foobar"})
        with urllib.request.urlopen(req):
            pass
        self.assertIn("X-Some-Header", handler.headers_received.keys())
        self.assertNotIn("X-SoMe-hEader", handler.headers_received.keys())

    def test_basic(self):
        handler = self.start_server()
        with urllib.request.urlopen("http://localhost:%s" % handler.port) as open_url:
            for attr in ("read", "close", "info", "geturl"):
                self.assertHasAttr(open_url, attr)
            self.assertTrue(open_url.read(), "calling 'read' failed")

    def
test_info(self): handler = self.start_server() open_url = urllib.request.urlopen( "http://localhost:%s" % handler.port) with open_url: info_obj = open_url.info() self.assertIsInstance(info_obj, email.message.Message, "object returned by 'info' is not an " "instance of email.message.Message") self.assertEqual(info_obj.get_content_subtype(), "plain") def test_geturl(self): # Make sure same URL as opened is returned by geturl. handler = self.start_server() open_url = urllib.request.urlopen("http://localhost:%s" % handler.port) with open_url: url = open_url.geturl() self.assertEqual(url, "http://localhost:%s" % handler.port) def test_iteration(self): expected_response = b"pycon 2008..." handler = self.start_server([(200, [], expected_response)]) data = urllib.request.urlopen("http://localhost:%s" % handler.port) for line in data: self.assertEqual(line, expected_response) def test_line_iteration(self): lines = [b"We\n", b"got\n", b"here\n", b"verylong " * 8192 + b"\n"] expected_response = b"".join(lines) handler = self.start_server([(200, [], expected_response)]) data = urllib.request.urlopen("http://localhost:%s" % handler.port) for index, line in enumerate(data): self.assertEqual(line, lines[index], "Fetched line number %s doesn't match expected:\n" " Expected length was %s, got %s" % (index, len(lines[index]), len(line))) self.assertEqual(index + 1, len(lines)) def test_issue16464(self): # See https://bugs.python.org/issue16464 # and https://bugs.python.org/issue46648 handler = self.start_server([ (200, [], b'any'), (200, [], b'any'), ]) opener = urllib.request.build_opener() request = urllib.request.Request("http://localhost:%s" % handler.port) self.assertEqual(None, request.data) opener.open(request, "1".encode("us-ascii")) self.assertEqual(b"1", request.data) self.assertEqual("1", request.get_header("Content-length")) opener.open(request, "1234567890".encode("us-ascii")) self.assertEqual(b"1234567890", request.data) self.assertEqual("10", 
request.get_header("Content-length")) def setUpModule(): thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main()
python
github
https://github.com/python/cpython
Lib/test/test_urllib2_localnet.py
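The `test_chunked` case above hand-builds a chunked transfer-encoded body (`a\r\nhello worl\r\n1\r\nd\r\n0\r\n`) and relies on urllib to decode it. As a standalone illustration of the wire format being tested, here is a minimal sketch of a chunked-body decoder; it assumes a well-formed body and ignores chunk extensions and trailer headers, which a real HTTP client must also handle.

```python
def decode_chunked(raw: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body (minimal sketch)."""
    out = []
    pos = 0
    while True:
        # Each chunk starts with its size in hex, terminated by CRLF.
        crlf = raw.index(b"\r\n", pos)
        size = int(raw[pos:crlf], 16)
        if size == 0:
            # A zero-size chunk terminates the body.
            break
        start = crlf + 2
        out.append(raw[start:start + size])
        pos = start + size + 2  # skip the CRLF that follows the chunk data
    return b"".join(out)

# The same body used by test_chunked above:
body = b"a\r\nhello worl\r\n1\r\nd\r\n0\r\n"
assert decode_chunked(body) == b"hello world"
```

The test server can send this body verbatim because the handler writes the `Transfer-Encoding: chunked` header and the raw bytes directly, leaving the decoding to the client under test.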
<?php $container->loadFromExtension('framework', [ 'php_errors' => [ 'throw' => true, ], ]);
php
github
https://github.com/symfony/symfony
src/Symfony/Bundle/FrameworkBundle/Tests/DependencyInjection/Fixtures/php/php_errors_enabled.php
/*! * @license * Copyright Google LLC All Rights Reserved. * * Use of this source code is governed by an MIT-style license that can be * found in the LICENSE file at https://angular.dev/license */ export interface AlgoliaConfig { apiKey: string; appId: string; indexName: string; }
typescript
github
https://github.com/angular/angular
adev/shared-docs/interfaces/algolia-config.ts
/* * rmgr.h * * Resource managers definition * * src/include/access/rmgr.h */ #ifndef RMGR_H #define RMGR_H typedef uint8 RmgrId; /* * Built-in resource managers * * The actual numerical values for each rmgr ID are defined by the order * of entries in rmgrlist.h. * * Note: RM_MAX_ID must fit in RmgrId; widening that type will affect the XLOG * file format. */ #define PG_RMGR(symname,name,redo,desc,identify,startup,cleanup,mask,decode) \ symname, typedef enum RmgrIds { #include "access/rmgrlist.h" RM_NEXT_ID } RmgrIds; #undef PG_RMGR #define RM_MAX_ID UINT8_MAX #define RM_MAX_BUILTIN_ID (RM_NEXT_ID - 1) #define RM_MIN_CUSTOM_ID 128 #define RM_MAX_CUSTOM_ID UINT8_MAX #define RM_N_IDS (UINT8_MAX + 1) #define RM_N_BUILTIN_IDS (RM_MAX_BUILTIN_ID + 1) #define RM_N_CUSTOM_IDS (RM_MAX_CUSTOM_ID - RM_MIN_CUSTOM_ID + 1) static inline bool RmgrIdIsBuiltin(int rmid) { return rmid <= RM_MAX_BUILTIN_ID; } static inline bool RmgrIdIsCustom(int rmid) { return rmid >= RM_MIN_CUSTOM_ID && rmid <= RM_MAX_CUSTOM_ID; } #define RmgrIdIsValid(rmid) (RmgrIdIsBuiltin((rmid)) || RmgrIdIsCustom((rmid))) /* * RmgrId to use for extensions that require an RmgrId, but are still in * development and have not reserved their own unique RmgrId yet. See: * https://wiki.postgresql.org/wiki/CustomWALResourceManagers */ #define RM_EXPERIMENTAL_ID 128 #endif /* RMGR_H */
c
github
https://github.com/postgres/postgres
src/include/access/rmgr.h
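The header above partitions the 8-bit `RmgrId` space: builtin IDs run from 0 through `RM_MAX_BUILTIN_ID`, custom IDs from 128 through 255, and the gap between them is invalid. A small Python sketch of the same range checks; `RM_NEXT_ID` is an assumed placeholder here, since the real value is generated from the entries in `rmgrlist.h`.

```python
# Assumed builtin count for illustration only; the real RM_NEXT_ID comes
# from the number of PG_RMGR entries in rmgrlist.h.
RM_NEXT_ID = 23
RM_MAX_BUILTIN_ID = RM_NEXT_ID - 1
RM_MIN_CUSTOM_ID = 128
RM_MAX_CUSTOM_ID = 255  # UINT8_MAX

def rmgr_id_is_builtin(rmid: int) -> bool:
    return rmid <= RM_MAX_BUILTIN_ID

def rmgr_id_is_custom(rmid: int) -> bool:
    return RM_MIN_CUSTOM_ID <= rmid <= RM_MAX_CUSTOM_ID

def rmgr_id_is_valid(rmid: int) -> bool:
    # IDs between the builtin range and RM_MIN_CUSTOM_ID are reserved.
    return rmgr_id_is_builtin(rmid) or rmgr_id_is_custom(rmid)
```

This mirrors the header's `RmgrIdIsBuiltin`/`RmgrIdIsCustom` inline functions; widening `RmgrId` beyond `uint8` would change the on-disk XLOG format, which is why the ranges are pinned to `UINT8_MAX`.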
#!/usr/bin/python # (c) 2019, NetApp, Inc # GNU General Public License v3.0+ # (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' module: na_ontap_volume_autosize short_description: NetApp ONTAP manage volume autosize extends_documentation_fragment: - netapp.na_ontap version_added: '2.9' author: NetApp Ansible Team (@carchi8py) <ng-ansibleteam@netapp.com> description: - Modify Volume AutoSize options: volume: description: - The name of the flexible volume for which we want to set autosize. type: str required: true mode: description: - Specify the flexible volume's autosize mode of operation. type: str choices: ['grow', 'grow_shrink', 'off'] vserver: description: - Name of the vserver to use. required: true type: str grow_threshold_percent: description: - Specifies the percentage of the flexible volume's capacity at which autogrow is initiated. - The default grow threshold varies from 85% to 98%, depending on the volume size. - It is an error for the grow threshold to be less than or equal to the shrink threshold. - Range between 0 and 100 type: int increment_size: description: - Specify the flexible volume's increment size using the following format < number > [k|m|g|t] - The amount is the absolute size to set. - The trailing 'k', 'm', 'g', and 't' indicates the desired units, namely 'kilobytes', 'megabytes', 'gigabytes', and 'terabytes' (respectively). type: str maximum_size: description: - Specify the flexible volume's maximum allowed size using the following format < number > [k|m|g|t] - The amount is the absolute size to set. - The trailing 'k', 'm', 'g', and 't' indicates the desired units, namely 'kilobytes', 'megabytes', 'gigabytes', and 'terabytes' (respectively). 
- The default value is 20% greater than the volume size at the time autosize was enabled. - It is an error for the maximum volume size to be less than the current volume size. - It is also an error for the maximum size to be less than or equal to the minimum size. type: str minimum_size: description: - Specify the flexible volume's minimum allowed size using the following format < number > [k|m|g|t] The amount is the absolute size to set. - The trailing 'k', 'm', 'g', and 't' indicates the desired units, namely 'kilobytes', 'megabytes', 'gigabytes', and 'terabytes' (respectively). - The default value is the size of the volume at the time the 'grow_shrink' mode was enabled. - It is an error for the minimum size to be greater than or equal to the maximum size. type: str reset: description: - "Sets the values of maximum_size, increment_size, minimum_size, grow_threshold_percent, shrink_threshold_percent and mode to their defaults" type: bool shrink_threshold_percent: description: - Specifies the percentage of the flexible volume's capacity at which autoshrink is initiated. - The default shrink threshold is 50%. It is an error for the shrink threshold to be greater than or equal to the grow threshold. - Range between 0 and 100 type: int ''' EXAMPLES = """ - name: Modify volume autosize na_ontap_volume_autosize: hostname: 10.193.79.189 username: admin password: netapp1! volume: ansibleVolumesize12 mode: grow grow_threshold_percent: 99 increment_size: 50m maximum_size: 10g minimum_size: 21m shrink_threshold_percent: 40 vserver: ansible_vserver - name: Reset volume autosize na_ontap_volume_autosize: hostname: 10.193.79.189 username: admin password: netapp1! 
volume: ansibleVolumesize12 reset: true vserver: ansible_vserver """ RETURN = """ """ import sys import copy import traceback import ansible.module_utils.netapp as netapp_utils from ansible.module_utils.netapp_module import NetAppModule from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.netapp import OntapRestAPI from ansible.module_utils._text import to_native HAS_NETAPP_LIB = netapp_utils.has_netapp_lib() class NetAppOntapVolumeAutosize(object): def __init__(self): self.use_rest = False # Volume_autosize returns KB and not B like Volume so values are shifted down 1 self._size_unit_map = dict( k=1, m=1024, g=1024 ** 2, t=1024 ** 3, ) self.argument_spec = netapp_utils.na_ontap_host_argument_spec() self.argument_spec.update(dict( volume=dict(required=True, type="str"), mode=dict(required=False, choices=['grow', 'grow_shrink', 'off']), vserver=dict(required=True, type='str'), grow_threshold_percent=dict(required=False, type='int'), increment_size=dict(required=False, type='str'), maximum_size=dict(required=False, type='str'), minimum_size=dict(required=False, type='str'), reset=dict(required=False, type='bool'), shrink_threshold_percent=dict(required=False, type='int') )) self.module = AnsibleModule( argument_spec=self.argument_spec, supports_check_mode=True, mutually_exclusive=[ ['reset', 'maximum_size'], ['reset', 'increment_size'], ['reset', 'minimum_size'], ['reset', 'grow_threshold_percent'], ['reset', 'shrink_threshold_percent'], ['reset', 'mode'] ] ) self.na_helper = NetAppModule() self.parameters = self.na_helper.set_parameters(self.module.params) # API should be used for ONTAP 9.6 or higher, ZAPI for lower version self.restApi = OntapRestAPI(self.module) if self.restApi.is_rest(): self.use_rest = True # increment size and reset are not supported with rest api if self.parameters.get('increment_size'): self.module.fail_json(msg="Rest API does not support increment size, please switch to ZAPI") if self.parameters.get('reset'): 
self.module.fail_json(msg="Rest API does not support reset, please switch to ZAPI") else: if HAS_NETAPP_LIB is False: self.module.fail_json(msg="the python NetApp-Lib module is required") else: self.server = netapp_utils.setup_na_ontap_zapi(module=self.module, vserver=self.parameters['vserver']) def get_volume_autosize(self, uuid=None): """ Get volume_autosize information from the ONTAP system :return: """ if self.use_rest: params = {'fields': 'autosize'} api = 'storage/volumes/' + uuid message, error = self.restApi.get(api, params) if error is not None: self.module.fail_json(msg="%s" % error) return self._create_get_volume_return(message['autosize']) else: volume_autosize_info = netapp_utils.zapi.NaElement('volume-autosize-get') volume_autosize_info.add_new_child('volume', self.parameters['volume']) try: result = self.server.invoke_successfully(volume_autosize_info, True) except netapp_utils.zapi.NaApiError as error: self.module.fail_json(msg='Error fetching volume autosize info for %s : %s' % (self.parameters['volume'], to_native(error)), exception=traceback.format_exc()) return self._create_get_volume_return(result) def _create_get_volume_return(self, results): """ Create a return value from volume-autosize-get info file :param results: :return: """ return_value = {} if self.use_rest: if 'mode' in results: return_value['mode'] = results['mode'] if 'grow_threshold' in results: return_value['grow_threshold_percent'] = results['grow_threshold'] if 'maximum' in results: return_value['maximum_size'] = results['maximum'] if 'minimum' in results: return_value['minimum_size'] = results['minimum'] if 'shrink_threshold' in results: return_value['shrink_threshold_percent'] = results['shrink_threshold'] else: if results.get_child_by_name('mode'): return_value['mode'] = results.get_child_content('mode') if results.get_child_by_name('grow-threshold-percent'): return_value['grow_threshold_percent'] = int(results.get_child_content('grow-threshold-percent')) if
results.get_child_by_name('increment-size'): return_value['increment_size'] = results.get_child_content('increment-size') if results.get_child_by_name('maximum-size'): return_value['maximum_size'] = results.get_child_content('maximum-size') if results.get_child_by_name('minimum-size'): return_value['minimum_size'] = results.get_child_content('minimum-size') if results.get_child_by_name('shrink-threshold-percent'): return_value['shrink_threshold_percent'] = int(results.get_child_content('shrink-threshold-percent')) if return_value == {}: return_value = None return return_value def modify_volume_autosize(self, uuid=None): """ Modify a Volumes autosize :return: """ if self.use_rest: params = {} data = {} autosize = {} if self.parameters.get('mode'): autosize['mode'] = self.parameters['mode'] if self.parameters.get('grow_threshold_percent'): autosize['grow_threshold'] = self.parameters['grow_threshold_percent'] if self.parameters.get('maximum_size'): autosize['maximum'] = self.parameters['maximum_size'] if self.parameters.get('minimum_size'): autosize['minimum'] = self.parameters['minimum_size'] if self.parameters.get('shrink_threshold_percent'): autosize['shrink_threshold'] = self.parameters['shrink_threshold_percent'] data['autosize'] = autosize api = "storage/volumes/" + uuid message, error = self.restApi.patch(api, data, params) if error is not None: self.module.fail_json(msg="%s" % error) else: volume_autosize_info = netapp_utils.zapi.NaElement('volume-autosize-set') volume_autosize_info.add_new_child('volume', self.parameters['volume']) if self.parameters.get('mode'): volume_autosize_info.add_new_child('mode', self.parameters['mode']) if self.parameters.get('grow_threshold_percent'): volume_autosize_info.add_new_child('grow-threshold-percent', str(self.parameters['grow_threshold_percent'])) if self.parameters.get('increment_size'): volume_autosize_info.add_new_child('increment-size', self.parameters['increment_size']) if self.parameters.get('reset') is not None: 
volume_autosize_info.add_new_child('reset', str(self.parameters['reset'])) if self.parameters.get('maximum_size'): volume_autosize_info.add_new_child('maximum-size', self.parameters['maximum_size']) if self.parameters.get('minimum_size'): volume_autosize_info.add_new_child('minimum-size', self.parameters['minimum_size']) if self.parameters.get('shrink_threshold_percent'): volume_autosize_info.add_new_child('shrink-threshold-percent', str(self.parameters['shrink_threshold_percent'])) try: self.server.invoke_successfully(volume_autosize_info, True) except netapp_utils.zapi.NaApiError as error: self.module.fail_json(msg="Error modifying volume autosize for %s: %s" % (self.parameters["volume"], to_native(error)), exception=traceback.format_exc()) def modify_to_kb(self, converted_parameters): """ Convert size parameters to KB (ZAPI) or bytes (REST) :param converted_parameters: Dict of all parameters :return: """ for attr in ['maximum_size', 'minimum_size', 'increment_size']: if converted_parameters.get(attr): if self.use_rest: converted_parameters[attr] = self.convert_to_byte(attr, converted_parameters) else: converted_parameters[attr] = str(self.convert_to_kb(attr, converted_parameters)) return converted_parameters def convert_to_kb(self, variable, converted_parameters): """ Convert a value like 10m into its correct KB size :param variable: the parameter we are going to convert :param converted_parameters: Dict of all parameters :return: """ if converted_parameters.get(variable)[-1] not in ['k', 'm', 'g', 't']: self.module.fail_json(msg="%s must end with a k, m, g or t" % variable) return self._size_unit_map[converted_parameters.get(variable)[-1]] * int(converted_parameters.get(variable)[:-1]) def convert_to_byte(self, variable, converted_parameters): if converted_parameters.get(variable)[-1] not in ['k', 'm', 'g', 't']: self.module.fail_json(msg="%s must end with a k, m, g or t" % variable) return (self._size_unit_map[converted_parameters.get(variable)[-1]] *
int(converted_parameters.get(variable)[:-1])) * 1024 def get_volume_uuid(self): """ Get a volume's UUID :return: uuid of the volume """ params = {'fields': '*', 'name': self.parameters['volume'], 'svm.name': self.parameters['vserver']} api = "storage/volumes" message, error = self.restApi.get(api, params) if error is not None: self.module.fail_json(msg="%s" % error) return message['records'][0]['uuid'] def apply(self): # TODO Logging for rest uuid = None if not self.use_rest: netapp_utils.ems_log_event("na_ontap_volume_autosize", self.server) if self.use_rest: # we only have the volume name, we need to get the uuid for the volume uuid = self.get_volume_uuid() current = self.get_volume_autosize(uuid=uuid) converted_parameters = copy.deepcopy(self.parameters) converted_parameters = self.modify_to_kb(converted_parameters) self.na_helper.get_modified_attributes(current, converted_parameters) if self.na_helper.changed: if self.module.check_mode: pass else: self.modify_volume_autosize(uuid=uuid) if self.parameters.get('reset') is True: self.modify_volume_autosize(uuid=uuid) self.na_helper.changed = True self.module.exit_json(changed=self.na_helper.changed) def main(): """ Apply volume autosize operations from playbook :return: """ obj = NetAppOntapVolumeAutosize() obj.apply() if __name__ == '__main__': main()
python
codeparrot/codeparrot-clean
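The module above converts size suffixes with a unit map shifted down one step (`'k': 1`) because the volume-autosize ZAPI reports values in KB rather than bytes, while the REST path multiplies by a further 1024 to get bytes. A standalone sketch of that conversion logic, using hypothetical free-function names in place of the module's methods:

```python
# Mirrors the module's _size_unit_map: autosize ZAPI values are in KB,
# so 'k' maps to 1 rather than 1024.
SIZE_UNIT_MAP = {'k': 1, 'm': 1024, 'g': 1024 ** 2, 't': 1024 ** 3}

def convert_to_kb(value: str) -> int:
    """Convert a size like '50m' to KB, as the module's convert_to_kb does."""
    unit = value[-1]
    if unit not in SIZE_UNIT_MAP:
        raise ValueError("%s must end with a k, m, g or t" % value)
    return SIZE_UNIT_MAP[unit] * int(value[:-1])

def convert_to_byte(value: str) -> int:
    """The REST API expects bytes, so shift the KB value up once more."""
    return convert_to_kb(value) * 1024

assert convert_to_kb('50m') == 51200          # 50 MB expressed in KB
assert convert_to_byte('21m') == 22020096     # 21 * 1024 * 1024 bytes
```

Keeping the map in KB units means the ZAPI path needs no extra scaling, and only the REST path pays the additional `* 1024`.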
# -*- coding: utf-8 -*- from __future__ import unicode_literals """ For django channels controller (akin to views.py). Activated by config/routing.py """ import logging, pprint, time # from django.contrib.sites.models import Site from django.core.mail import EmailMessage from django.utils import timezone from email_app.models import Invitation # logger = logging.getLogger('email') log = logging.getLogger(__name__) log.debug( 'consumers.py loaded' ) def send_invite( message_obj ): log.debug( 'starting send_invite()' ) time.sleep( 1 ) try: log.debug( 'message_obj, `{}`'.format(message_obj) ) log.debug( 'message_obj.__dict__, `{}`'.format(pprint.pformat(message_obj.__dict__)) ) notification_dct = message_obj.content log.debug( 'notification_dct, `{}`'.format(notification_dct) ) invite_key = notification_dct['key'] log.debug( 'invite_key, `{}`'.format(invite_key) ) invite = Invitation.objects.get( key=invite_key ) except Exception as e: log.error( 'e, ```{}```'.format(unicode(repr(e))) ) log.error("Invitation to send not found") return subject = "You've been invited!" body = "Go to https://%s/invites/accept/%s/ to join!" % ( 'foo', invite.key, ) try: message = EmailMessage( subject=subject, body=body, from_email="from_email", to=[invite.email,], ) message.send() invite.sent = timezone.now() invite.save() except: log.exception('Problem sending invite %s' % (invite.id)) # def send_invite(message): # log.debug( 'starting send_invite()' ) # try: # invite = Invitation.objects.get( # id=message.content.get('id'), # ) # except Invitation.DoesNotExist: # log.error("Invitation to send not found") # return # subject = "You've been invited!" # body = "Go to https://%s/invites/accept/%s/ to join!" 
% ( # Site.objects.get_current().domain, # invite.key, # ) # try: # message = EmailMessage( # subject=subject, # body=body, # from_email="from_email", # to=[invite.email,], # ) # message.send() # invite.sent = timezone.now() # invite.save() # except: # log.exception('Problem sending invite %s' % (invite.id))
python
codeparrot/codeparrot-clean
import os import numpy import vigra from functools import partial from StringIO import StringIO ## Instead of importing requests and PIL here, ## use late imports (below) so people who don't use TiledVolume don't have to have them # New dependency: requests is way more convenient than urllib or httplib #import requests # Use PIL instead of vigra since it allows us to open images in-memory #from PIL import Image from lazyflow.utility.timer import Timer from lazyflow.utility.jsonConfig import JsonConfigParser, AutoEval, FormattedField from lazyflow.roi import getIntersectingBlocks, getBlockBounds, roiToSlice, getIntersection from lazyflow.request import Request, RequestPool import logging logger = logging.getLogger(__name__) class TiledVolume(object): """ Given a directory of image tiles that make up a volume, produces numpy array volumes for arbitrary roi requests. """ #: These fields describe the schema of the description file. #: See the source code comments for a description of each field. DescriptionFields = \ { "_schema_name" : "tiled-volume-description", "_schema_version" : 1.0, "name" : str, "format" : str, "dtype" : AutoEval(), "bounds_zyx" : AutoEval(numpy.array), # Maximum coordinates (+1) "view_origin_zyx" : AutoEval(numpy.array), # Optional offset for output 'view' "view_shape_zyx" : AutoEval(numpy.array), # Shape of the output 'view'. If not provided, defaults to bounds - origin "resolution_zyx" : AutoEval(numpy.array), "tile_shape_2d_yx" : AutoEval(numpy.array), "is_rgb" : bool, # Indicates that we must convert to grayscale "username" : str, "password" : str, # This doesn't change how the data is read from the server, # but instead specifies the indexing order of the numpy volumes produced. "output_axes" : str, "cache_tiles" : bool, # Offset not supported for now... #"origin_offset" : AutoEval(numpy.array), # For now we support 3D-only, sliced across Z (TODO: Support 5D?) 
# We allow multiple url schemes: tiles might be addressed via pixel coordinates or row/column indexing # (z_index and z_start are synonyms here -- either is allowed) # Example: pixel-wise tile names: # "tile_url_format" : "http://my.tiles.org/my_tiles/{z_start}-{z_stop}/{y_start}-{y_stop}/{x_start}-{x_stop}.jpg" # Example: row/column-wise tile names # "tile_url_format" : "http://my.tiles.org/my_tiles/{z_index}/{y_index}/{x_index}.jpg" # Also, local tile sources (filesystem, not http) are okay: # "tile_url_format" : "/my_hard_disk/my_tiles/{z_index}/{y_index}/{x_index}.jpg" "tile_url_format" : FormattedField( requiredFields=[], optionalFields=["x_start", "y_start", "z_start", "x_stop", "y_stop", "z_stop", "x_index", "y_index", "z_index", "raveler_z_base"] ), # Special keyword for Raveler session directories. See notes below. "invert_y_axis" : bool, # For raveler volumes, the y-axis coordinate is inverted. # A list of lists, mapping src slices to destination slices (for "filling in" missing slices) # Example If slices 101,102,103 are missing data, you might want to simply repeat the data from slice 100: # "extend_slices" : [ [100, [101, 102, 103]] ] "extend_slices" : list, # Some tiled volumes have complicated mappings from "real" or "global" coordinates to url/filepath coordinates. # This field will be eval()'d before the tile is retrieved # For example, if the slices were named according to their position in nanometers instead of pixels, this might do the trick: # "z_translation_function" : "lambda z: 40*z" "z_translation_function" : str, # Optional data transform. 
For example: # "data_transform_function" : "lambda a: a == 0", "data_transform_function" : str } DescriptionSchema = JsonConfigParser( DescriptionFields ) @classmethod def readDescription(cls, descriptionFilePath): # Read file description = TiledVolume.DescriptionSchema.parseConfigFile( descriptionFilePath ) cls.updateDescription(description) return description @classmethod def updateDescription(cls, description): """ Some description fields are optional. If they aren't provided in the description JSON file, then this function provides them with default values, based on the other description fields. """ # Augment with default parameters. logger.debug(str(description)) if description.view_origin_zyx is None: description.view_origin_zyx = numpy.array( [0]*len(description.bounds_zyx) ) if description.view_shape_zyx is None: description.view_shape_zyx = description.bounds_zyx - description.view_origin_zyx if not description.output_axes: description.output_axes = "zyx" assert description.output_axes is None or set(description.output_axes) == set("zyx"), \ "Axis order must include x,y,z (and nothing else)" if not description.extend_slices: description.extend_slices = [] if description.cache_tiles is None: description.cache_tiles = False def __init__( self, descriptionFilePath ): self.description = TiledVolume.readDescription( descriptionFilePath ) self._session = None assert self.description.format in vigra.impex.listExtensions().split(), \ "Unknown tile format: {}".format( self.description.format ) assert self.description.tile_shape_2d_yx.shape == (2,) assert self.description.bounds_zyx.shape == (3,) assert self.description.view_shape_zyx.shape == (3,) shape_dict = dict( zip('zyx', self.description.view_shape_zyx) ) self.output_shape = tuple( shape_dict[k] for k in self.description.output_axes ) self._slice_remapping = {} for source, destinations in self.description.extend_slices: for dest in destinations: self._slice_remapping[dest] = source def close(self): if 
self._session: self._session.close() def read(self, view_roi, result_out): """ roi: (start, stop) tuples, ordered according to description.output_axes roi should be relative to the view """ output_axes = self.description.output_axes roi_transposed = zip(*view_roi) roi_dict = dict( zip(output_axes, roi_transposed) ) view_roi = zip( *(roi_dict['z'], roi_dict['y'], roi_dict['x']) ) # First, normalize roi and result to zyx order result_out = vigra.taggedView(result_out, output_axes) result_out = result_out.withAxes(*'zyx') assert numpy.array(view_roi).shape == (2,3), "Invalid roi for 3D volume: {}".format( view_roi ) view_roi = numpy.array(view_roi) assert (result_out.shape == (view_roi[1] - view_roi[0])).all() # User gave roi according to the view output. # Now offset it find global roi. roi = view_roi + self.description.view_origin_zyx tile_blockshape = (1,) + tuple(self.description.tile_shape_2d_yx) tile_starts = getIntersectingBlocks( tile_blockshape, roi ) pool = RequestPool() for tile_start in tile_starts: tile_roi_in = getBlockBounds( self.description.bounds_zyx, tile_blockshape, tile_start ) tile_roi_in = numpy.array(tile_roi_in) # This tile's portion of the roi intersecting_roi = getIntersection( roi, tile_roi_in ) intersecting_roi = numpy.array( intersecting_roi ) # Compute slicing within destination array and slicing within this tile destination_relative_intersection = numpy.subtract(intersecting_roi, roi[0]) tile_relative_intersection = intersecting_roi - tile_roi_in[0] # Get a view to the output slice result_region = result_out[roiToSlice(*destination_relative_intersection)] rest_args = self._get_rest_args(tile_blockshape, tile_roi_in) if self.description.tile_url_format.startswith('http'): retrieval_fn = partial( self._retrieve_remote_tile, rest_args, tile_relative_intersection, result_region ) else: retrieval_fn = partial( self._retrieve_local_tile, rest_args, tile_relative_intersection, result_region ) PARALLEL_REQ = True if PARALLEL_REQ: pool.add( 
unknown
codeparrot/codeparrot-clean
# # Secret Labs' Regular Expression Engine # # convert re-style regular expression to sre pattern # # Copyright (c) 1998-2001 by Secret Labs AB. All rights reserved. # # See the sre.py file for information on usage and redistribution. # """Internal support module for sre""" # XXX: show string offset and offending character for all errors import sys from sre_constants import * SPECIAL_CHARS = ".\\[{()*+?^$|" REPEAT_CHARS = "*+?{" DIGITS = set("0123456789") OCTDIGITS = set("01234567") HEXDIGITS = set("0123456789abcdefABCDEF") WHITESPACE = set(" \t\n\r\v\f") ESCAPES = { r"\a": (LITERAL, ord("\a")), r"\b": (LITERAL, ord("\b")), r"\f": (LITERAL, ord("\f")), r"\n": (LITERAL, ord("\n")), r"\r": (LITERAL, ord("\r")), r"\t": (LITERAL, ord("\t")), r"\v": (LITERAL, ord("\v")), r"\\": (LITERAL, ord("\\")) } CATEGORIES = { r"\A": (AT, AT_BEGINNING_STRING), # start of string r"\b": (AT, AT_BOUNDARY), r"\B": (AT, AT_NON_BOUNDARY), r"\d": (IN, [(CATEGORY, CATEGORY_DIGIT)]), r"\D": (IN, [(CATEGORY, CATEGORY_NOT_DIGIT)]), r"\s": (IN, [(CATEGORY, CATEGORY_SPACE)]), r"\S": (IN, [(CATEGORY, CATEGORY_NOT_SPACE)]), r"\w": (IN, [(CATEGORY, CATEGORY_WORD)]), r"\W": (IN, [(CATEGORY, CATEGORY_NOT_WORD)]), r"\Z": (AT, AT_END_STRING), # end of string } FLAGS = { # standard flags "i": SRE_FLAG_IGNORECASE, "L": SRE_FLAG_LOCALE, "m": SRE_FLAG_MULTILINE, "s": SRE_FLAG_DOTALL, "x": SRE_FLAG_VERBOSE, # extensions "a": SRE_FLAG_ASCII, "t": SRE_FLAG_TEMPLATE, "u": SRE_FLAG_UNICODE, } class Pattern: # master pattern object. 
keeps track of global attributes def __init__(self): self.flags = 0 self.open = [] self.groups = 1 self.groupdict = {} def opengroup(self, name=None): gid = self.groups self.groups = gid + 1 if name is not None: ogid = self.groupdict.get(name, None) if ogid is not None: raise error("redefinition of group name %s as group %d; " "was group %d" % (repr(name), gid, ogid)) self.groupdict[name] = gid self.open.append(gid) return gid def closegroup(self, gid): self.open.remove(gid) def checkgroup(self, gid): return gid < self.groups and gid not in self.open class SubPattern: # a subpattern, in intermediate form def __init__(self, pattern, data=None): self.pattern = pattern if data is None: data = [] self.data = data self.width = None def dump(self, level=0): nl = 1 seqtypes = (tuple, list) for op, av in self.data: print(level*" " + op, end=' '); nl = 0 if op == "in": # member sublanguage print(); nl = 1 for op, a in av: print((level+1)*" " + op, a) elif op == "branch": print(); nl = 1 i = 0 for a in av[1]: if i > 0: print(level*" " + "or") a.dump(level+1); nl = 1 i = i + 1 elif isinstance(av, seqtypes): for a in av: if isinstance(a, SubPattern): if not nl: print() a.dump(level+1); nl = 1 else: print(a, end=' ') ; nl = 0 else: print(av, end=' ') ; nl = 0 if not nl: print() def __repr__(self): return repr(self.data) def __len__(self): return len(self.data) def __delitem__(self, index): del self.data[index] def __getitem__(self, index): if isinstance(index, slice): return SubPattern(self.pattern, self.data[index]) return self.data[index] def __setitem__(self, index, code): self.data[index] = code def insert(self, index, code): self.data.insert(index, code) def append(self, code): self.data.append(code) def getwidth(self): # determine the width (min, max) for this subpattern if self.width: return self.width lo = hi = 0 UNITCODES = (ANY, RANGE, IN, LITERAL, NOT_LITERAL, CATEGORY) REPEATCODES = (MIN_REPEAT, MAX_REPEAT) for op, av in self.data: if op is BRANCH: i = sys.maxsize j 
= 0 for av in av[1]: l, h = av.getwidth() i = min(i, l) j = max(j, h) lo = lo + i hi = hi + j elif op is CALL: i, j = av.getwidth() lo = lo + i hi = hi + j elif op is SUBPATTERN: i, j = av[1].getwidth() lo = lo + i hi = hi + j elif op in REPEATCODES: i, j = av[2].getwidth() lo = lo + int(i) * av[0] hi = hi + int(j) * av[1] elif op in UNITCODES: lo = lo + 1 hi = hi + 1 elif op == SUCCESS: break self.width = int(min(lo, sys.maxsize)), int(min(hi, sys.maxsize)) return self.width class Tokenizer: def __init__(self, string): self.string = string self.index = 0 self.__next() def __next(self): if self.index >= len(self.string): self.next = None return char = self.string[self.index:self.index+1] # Special case for the str8, since indexing returns a integer # XXX This is only needed for test_bug_926075 in test_re.py if char and isinstance(char, bytes): char = chr(char[0]) if char == "\\": try: c = self.string[self.index + 1] except IndexError: raise error("bogus escape (end of line)") if isinstance(self.string, bytes): c = chr(c) char = char + c self.index = self.index + len(char) self.next = char def match(self, char, skip=1): if char == self.next: if skip: self.__next() return 1 return 0 def get(self): this = self.next self.__next() return this def tell(self): return self.index, self.next def seek(self, index): self.index, self.next = index def isident(char): return "a" <= char <= "z" or "A" <= char <= "Z" or char == "_" def isdigit(char): return "0" <= char <= "9" def isname(name): # check that group name is a valid string if not isident(name[0]): return False for char in name[1:]: if not isident(char) and not isdigit(char): return False return True def _class_escape(source, escape): # handle escape code inside character class code = ESCAPES.get(escape) if code: return code code = CATEGORIES.get(escape) if code: return code try: c = escape[1:2] if c == "x": # hexadecimal escape (exactly two digits) while source.next in HEXDIGITS and len(escape) < 4: escape = escape + 
source.get() escape = escape[2:] if len(escape) != 2: raise error("bogus escape: %s" % repr("\\" + escape)) return LITERAL, int(escape, 16) & 0xff elif c in OCTDIGITS: # octal escape (up to three digits) while source.next in OCTDIGITS and len(escape) < 4: escape = escape + source.get() escape = escape[1:] return LITERAL, int(escape, 8) & 0xff elif c in DIGITS: raise error("bogus escape: %s" % repr(escape)) if len(escape) == 2: return LITERAL, ord(escape[1]) except ValueError: pass raise error("bogus escape: %s" % repr(escape)) def _escape(source, escape, state): # handle escape code in expression code = CATEGORIES.get(escape) if code: return code code = ESCAPES.get(escape) if code: return code try: c = escape[1:2] if c == "x": # hexadecimal escape while source.next in HEXDIGITS and len(escape) < 4: escape = escape + source.get() if len(escape) != 4: raise ValueError return LITERAL, int(escape[2:], 16) & 0xff elif c == "0": # octal escape while source.next in OCTDIGITS and len(escape) < 4: escape = escape + source.get() return LITERAL, int(escape[1:], 8) & 0xff elif c in DIGITS: # octal escape *or* decimal group reference (sigh) if source.next in DIGITS: escape = escape + source.get() if (escape[1] in OCTDIGITS and escape[2] in OCTDIGITS and source.next in OCTDIGITS): # got three octal digits; this is an octal escape escape = escape + source.get() return LITERAL, int(escape[1:], 8) & 0xff # not an octal escape, so this is a group reference group = int(escape[1:]) if group < state.groups: if not state.checkgroup(group): raise error("cannot refer to open group") return GROUPREF, group raise ValueError if len(escape) == 2: return LITERAL, ord(escape[1]) except ValueError: pass raise error("bogus escape: %s" % repr(escape)) def _parse_sub(source, state, nested=1): # parse an alternation: a|b|c items = [] itemsappend = items.append sourcematch = source.match while 1: itemsappend(_parse(source, state)) if sourcematch("|"): continue if not nested: break if not source.next 
or sourcematch(")", 0): break else: raise error("pattern not properly closed") if len(items) == 1: return items[0] subpattern = SubPattern(state) subpatternappend = subpattern.append # check if all items share a common prefix while 1: prefix = None for item in items: if not item: break if prefix is None: prefix = item[0] elif item[0] != prefix: break else: # all subitems start with a common "prefix". # move it out of the branch for item in items: del item[0] subpatternappend(prefix) continue # check next one break # check if the branch can be replaced by a character set for item in items: if len(item) != 1 or item[0][0] != LITERAL: break else: # we can store this as a character set instead of a # branch (the compiler may optimize this even more) set = [] setappend = set.append for item in items: setappend(item[0]) subpatternappend((IN, set)) return subpattern subpattern.append((BRANCH, (None, items))) return subpattern def _parse_sub_cond(source, state, condgroup): item_yes = _parse(source, state) if source.match("|"): item_no = _parse(source, state) if source.match("|"): raise error("conditional backref with more than two branches") else: item_no = None if source.next and not source.match(")", 0): raise error("pattern not properly closed") subpattern = SubPattern(state) subpattern.append((GROUPREF_EXISTS, (condgroup, item_yes, item_no))) return subpattern _PATTERNENDERS = set("|)") _ASSERTCHARS = set("=!<") _LOOKBEHINDASSERTCHARS = set("=!") _REPEATCODES = set([MIN_REPEAT, MAX_REPEAT]) def _parse(source, state): # parse a simple pattern subpattern = SubPattern(state) # precompute constants into local variables subpatternappend = subpattern.append sourceget = source.get sourcematch = source.match _len = len PATTERNENDERS = _PATTERNENDERS ASSERTCHARS = _ASSERTCHARS LOOKBEHINDASSERTCHARS = _LOOKBEHINDASSERTCHARS REPEATCODES = _REPEATCODES while 1: if source.next in PATTERNENDERS: break # end of subpattern this = sourceget() if this is None: break # end of pattern if 
state.flags & SRE_FLAG_VERBOSE: # skip whitespace and comments if this in WHITESPACE: continue if this == "#": while 1: this = sourceget() if this in (None, "\n"): break continue if this and this[0] not in SPECIAL_CHARS: subpatternappend((LITERAL, ord(this))) elif this == "[": # character set set = [] setappend = set.append ## if sourcematch(":"): ## pass # handle character classes if sourcematch("^"): setappend((NEGATE, None)) # check remaining characters start = set[:] while 1: this = sourceget() if this == "]" and set != start: break elif this and this[0] == "\\": code1 = _class_escape(source, this) elif this: code1 = LITERAL, ord(this) else: raise error("unexpected end of regular expression") if sourcematch("-"): # potential range this = sourceget() if this == "]": if code1[0] is IN: code1 = code1[1][0] setappend(code1) setappend((LITERAL, ord("-"))) break elif this: if this[0] == "\\": code2 = _class_escape(source, this) else: code2 = LITERAL, ord(this) if code1[0] != LITERAL or code2[0] != LITERAL: raise error("bad character range") lo = code1[1] hi = code2[1] if hi < lo: raise error("bad character range") setappend((RANGE, (lo, hi))) else: raise error("unexpected end of regular expression") else: if code1[0] is IN: code1 = code1[1][0] setappend(code1) # XXX: <fl> should move set optimization to compiler! 
if _len(set)==1 and set[0][0] is LITERAL: subpatternappend(set[0]) # optimization elif _len(set)==2 and set[0][0] is NEGATE and set[1][0] is LITERAL: subpatternappend((NOT_LITERAL, set[1][1])) # optimization else: # XXX: <fl> should add charmap optimization here subpatternappend((IN, set)) elif this and this[0] in REPEAT_CHARS: # repeat previous item if this == "?": min, max = 0, 1 elif this == "*": min, max = 0, MAXREPEAT elif this == "+": min, max = 1, MAXREPEAT elif this == "{": if source.next == "}": subpatternappend((LITERAL, ord(this))) continue here = source.tell() min, max = 0, MAXREPEAT lo = hi = "" while source.next in DIGITS: lo = lo + source.get() if sourcematch(","): while source.next in DIGITS: hi = hi + sourceget() else: hi = lo if not sourcematch("}"): subpatternappend((LITERAL, ord(this))) source.seek(here) continue if lo: min = int(lo) if hi: max = int(hi) if max < min: raise error("bad repeat interval") else: raise error("not supported") # figure out which item to repeat if subpattern: item = subpattern[-1:] else: item = None if not item or (_len(item) == 1 and item[0][0] == AT): raise error("nothing to repeat") if item[0][0] in REPEATCODES: raise error("multiple repeat") if sourcematch("?"): subpattern[-1] = (MIN_REPEAT, (min, max, item)) else: subpattern[-1] = (MAX_REPEAT, (min, max, item)) elif this == ".": subpatternappend((ANY, None)) elif this == "(": group = 1 name = None condgroup = None if sourcematch("?"): group = 0 # options if sourcematch("P"): # python extensions if sourcematch("<"): # named group: skip forward to end of name name = "" while 1: char = sourceget() if char is None: raise error("unterminated name") if char == ">": break name = name + char group = 1 if not isname(name): raise error("bad character in group name") elif sourcematch("="): # named backreference name = "" while 1: char = sourceget() if char is None: raise error("unterminated name") if char == ")": break name = name + char if not isname(name): raise error("bad 
character in group name") gid = state.groupdict.get(name) if gid is None: raise error("unknown group name") subpatternappend((GROUPREF, gid)) continue else: char = sourceget() if char is None: raise error("unexpected end of pattern") raise error("unknown specifier: ?P%s" % char) elif sourcematch(":"): # non-capturing group group = 2 elif sourcematch("#"): # comment while 1: if source.next is None or source.next == ")": break sourceget() if not sourcematch(")"): raise error("unbalanced parenthesis") continue elif source.next in ASSERTCHARS: # lookahead assertions char = sourceget() dir = 1 if char == "<": if source.next not in LOOKBEHINDASSERTCHARS: raise error("syntax error") dir = -1 # lookbehind char = sourceget() p = _parse_sub(source, state) if not sourcematch(")"): raise error("unbalanced parenthesis") if char == "=": subpatternappend((ASSERT, (dir, p))) else: subpatternappend((ASSERT_NOT, (dir, p))) continue elif sourcematch("("): # conditional backreference group condname = "" while 1: char = sourceget() if char is None: raise error("unterminated name") if char == ")": break condname = condname + char group = 2 if isname(condname): condgroup = state.groupdict.get(condname) if condgroup is None: raise error("unknown group name") else: try: condgroup = int(condname) except ValueError: raise error("bad character in group name") else: # flags if not source.next in FLAGS: raise error("unexpected end of pattern") while source.next in FLAGS: state.flags = state.flags | FLAGS[sourceget()] if group: # parse group contents if group == 2: # anonymous group group = None else: group = state.opengroup(name) if condgroup: p = _parse_sub_cond(source, state, condgroup) else: p = _parse_sub(source, state) if not sourcematch(")"): raise error("unbalanced parenthesis") if group is not None: state.closegroup(group) subpatternappend((SUBPATTERN, (group, p))) else: while 1: char = sourceget() if char is None: raise error("unexpected end of pattern") if char == ")": break raise 
error("unknown extension") elif this == "^": subpatternappend((AT, AT_BEGINNING)) elif this == "$": subpattern.append((AT, AT_END)) elif this and this[0] == "\\": code = _escape(source, this, state) subpatternappend(code) else: raise error("parser error") return subpattern def fix_flags(src, flags): # Check and fix flags according to the type of pattern (str or bytes) if isinstance(src, str): if not flags & SRE_FLAG_ASCII: flags |= SRE_FLAG_UNICODE elif flags & SRE_FLAG_UNICODE: raise ValueError("ASCII and UNICODE flags are incompatible") else: if flags & SRE_FLAG_UNICODE: raise ValueError("can't use UNICODE flag with a bytes pattern") return flags def parse(str, flags=0, pattern=None): # parse 're' pattern into list of (opcode, argument) tuples source = Tokenizer(str) if pattern is None: pattern = Pattern() pattern.flags = flags pattern.str = str p = _parse_sub(source, pattern, 0) p.pattern.flags = fix_flags(str, p.pattern.flags) tail = source.get() if tail == ")": raise error("unbalanced parenthesis") elif tail: raise error("bogus characters at end of regular expression") if flags & SRE_FLAG_DEBUG: p.dump() if not (flags & SRE_FLAG_VERBOSE) and p.pattern.flags & SRE_FLAG_VERBOSE: # the VERBOSE flag was switched on inside the pattern. to be # on the safe side, we'll parse the whole thing again... 
return parse(str, p.pattern.flags) return p def parse_template(source, pattern): # parse 're' replacement string into list of literals and # group references s = Tokenizer(source) sget = s.get p = [] a = p.append def literal(literal, p=p, pappend=a): if p and p[-1][0] is LITERAL: p[-1] = LITERAL, p[-1][1] + literal else: pappend((LITERAL, literal)) sep = source[:0] if isinstance(sep, str): makechar = chr else: makechar = chr while 1: this = sget() if this is None: break # end of replacement string if this and this[0] == "\\": # group c = this[1:2] if c == "g": name = "" if s.match("<"): while 1: char = sget() if char is None: raise error("unterminated group name") if char == ">": break name = name + char if not name: raise error("bad group name") try: index = int(name) if index < 0: raise error("negative group number") except ValueError: if not isname(name): raise error("bad character in group name") try: index = pattern.groupindex[name] except KeyError: raise IndexError("unknown group name") a((MARK, index)) elif c == "0": if s.next in OCTDIGITS: this = this + sget() if s.next in OCTDIGITS: this = this + sget() literal(makechar(int(this[1:], 8) & 0xff)) elif c in DIGITS: isoctal = False if s.next in DIGITS: this = this + sget() if (c in OCTDIGITS and this[2] in OCTDIGITS and s.next in OCTDIGITS): this = this + sget() isoctal = True literal(makechar(int(this[1:], 8) & 0xff)) if not isoctal: a((MARK, int(this[1:]))) else: try: this = makechar(ESCAPES[this][1]) except KeyError: pass literal(this) else: literal(this) # convert template to groups and literals lists i = 0 groups = [] groupsappend = groups.append literals = [None] * len(p) if isinstance(source, str): encode = lambda x: x else: # The tokenizer implicitly decodes bytes objects as latin-1, we must # therefore re-encode the final representation. 
encode = lambda x: x.encode('latin1') for c, s in p: if c is MARK: groupsappend((i, s)) # literal[i] is already None else: literals[i] = encode(s) i = i + 1 return groups, literals def expand_template(template, match): g = match.group sep = match.string[:0] groups, literals = template literals = literals[:] try: for index, group in groups: literals[index] = s = g(group) if s is None: raise error("unmatched group") except IndexError: raise error("invalid group reference") return sep.join(literals)
unknown
codeparrot/codeparrot-clean
/* * parallel.c * * multi-process support * * Copyright (c) 2010-2026, PostgreSQL Global Development Group * src/bin/pg_upgrade/parallel.c */ #include "postgres_fe.h" #include <sys/wait.h> #ifdef WIN32 #include <io.h> #endif #include "pg_upgrade.h" static int parallel_jobs; #ifdef WIN32 /* * Array holding all active threads. There can't be any gaps/zeros so * it can be passed to WaitForMultipleObjects(). We use two arrays * so the thread_handles array can be passed to WaitForMultipleObjects(). */ static HANDLE *thread_handles; typedef struct { char *log_file; char *opt_log_file; char *cmd; } exec_thread_arg; typedef struct { DbInfoArr *old_db_arr; DbInfoArr *new_db_arr; char *old_pgdata; char *new_pgdata; char *old_tablespace; char *new_tablespace; } transfer_thread_arg; static exec_thread_arg **exec_thread_args; static transfer_thread_arg **transfer_thread_args; /* track current thread_args struct so reap_child() can be used for all cases */ static void **cur_thread_args; DWORD win32_exec_prog(exec_thread_arg *args); DWORD win32_transfer_all_new_dbs(transfer_thread_arg *args); #endif /* * parallel_exec_prog * * This has the same API as exec_prog, except it does parallel execution, * and therefore must throw errors and doesn't return an error status. */ void parallel_exec_prog(const char *log_file, const char *opt_log_file, const char *fmt,...) 
{ va_list args; char cmd[MAX_STRING]; #ifndef WIN32 pid_t child; #else HANDLE child; exec_thread_arg *new_arg; #endif va_start(args, fmt); vsnprintf(cmd, sizeof(cmd), fmt, args); va_end(args); if (user_opts.jobs <= 1) /* exit_on_error must be true to allow jobs */ exec_prog(log_file, opt_log_file, true, true, "%s", cmd); else { /* parallel */ #ifdef WIN32 if (thread_handles == NULL) thread_handles = pg_malloc(user_opts.jobs * sizeof(HANDLE)); if (exec_thread_args == NULL) { int i; exec_thread_args = pg_malloc(user_opts.jobs * sizeof(exec_thread_arg *)); /* * For safety and performance, we keep the args allocated during * the entire life of the process, and we don't free the args in a * thread different from the one that allocated it. */ for (i = 0; i < user_opts.jobs; i++) exec_thread_args[i] = pg_malloc0(sizeof(exec_thread_arg)); } cur_thread_args = (void **) exec_thread_args; #endif /* harvest any dead children */ while (reap_child(false) == true) ; /* must we wait for a dead child? */ if (parallel_jobs >= user_opts.jobs) reap_child(true); /* set this before we start the job */ parallel_jobs++; /* Ensure stdio state is quiesced before forking */ fflush(NULL); #ifndef WIN32 child = fork(); if (child == 0) /* use _exit to skip atexit() functions */ _exit(!exec_prog(log_file, opt_log_file, true, true, "%s", cmd)); else if (child < 0) /* fork failed */ pg_fatal("could not create worker process: %m"); #else /* empty array element are always at the end */ new_arg = exec_thread_args[parallel_jobs - 1]; /* Can only pass one pointer into the function, so use a struct */ pg_free(new_arg->log_file); new_arg->log_file = pg_strdup(log_file); pg_free(new_arg->opt_log_file); new_arg->opt_log_file = opt_log_file ? 
pg_strdup(opt_log_file) : NULL; pg_free(new_arg->cmd); new_arg->cmd = pg_strdup(cmd); child = (HANDLE) _beginthreadex(NULL, 0, (void *) win32_exec_prog, new_arg, 0, NULL); if (child == 0) pg_fatal("could not create worker thread: %m"); thread_handles[parallel_jobs - 1] = child; #endif } } #ifdef WIN32 DWORD win32_exec_prog(exec_thread_arg *args) { int ret; ret = !exec_prog(args->log_file, args->opt_log_file, true, true, "%s", args->cmd); /* terminates thread */ return ret; } #endif /* * parallel_transfer_all_new_dbs * * This has the same API as transfer_all_new_dbs, except it does parallel execution * by transferring multiple tablespaces in parallel */ void parallel_transfer_all_new_dbs(DbInfoArr *old_db_arr, DbInfoArr *new_db_arr, char *old_pgdata, char *new_pgdata, char *old_tablespace, char *new_tablespace) { #ifndef WIN32 pid_t child; #else HANDLE child; transfer_thread_arg *new_arg; #endif if (user_opts.jobs <= 1) transfer_all_new_dbs(old_db_arr, new_db_arr, old_pgdata, new_pgdata, NULL, NULL); else { /* parallel */ #ifdef WIN32 if (thread_handles == NULL) thread_handles = pg_malloc(user_opts.jobs * sizeof(HANDLE)); if (transfer_thread_args == NULL) { int i; transfer_thread_args = pg_malloc(user_opts.jobs * sizeof(transfer_thread_arg *)); /* * For safety and performance, we keep the args allocated during * the entire life of the process, and we don't free the args in a * thread different from the one that allocated it. */ for (i = 0; i < user_opts.jobs; i++) transfer_thread_args[i] = pg_malloc0(sizeof(transfer_thread_arg)); } cur_thread_args = (void **) transfer_thread_args; #endif /* harvest any dead children */ while (reap_child(false) == true) ; /* must we wait for a dead child? 
*/ if (parallel_jobs >= user_opts.jobs) reap_child(true); /* set this before we start the job */ parallel_jobs++; /* Ensure stdio state is quiesced before forking */ fflush(NULL); #ifndef WIN32 child = fork(); if (child == 0) { transfer_all_new_dbs(old_db_arr, new_db_arr, old_pgdata, new_pgdata, old_tablespace, new_tablespace); /* if we take another exit path, it will be non-zero */ /* use _exit to skip atexit() functions */ _exit(0); } else if (child < 0) /* fork failed */ pg_fatal("could not create worker process: %m"); #else /* empty array element are always at the end */ new_arg = transfer_thread_args[parallel_jobs - 1]; /* Can only pass one pointer into the function, so use a struct */ new_arg->old_db_arr = old_db_arr; new_arg->new_db_arr = new_db_arr; pg_free(new_arg->old_pgdata); new_arg->old_pgdata = pg_strdup(old_pgdata); pg_free(new_arg->new_pgdata); new_arg->new_pgdata = pg_strdup(new_pgdata); pg_free(new_arg->old_tablespace); new_arg->old_tablespace = old_tablespace ? pg_strdup(old_tablespace) : NULL; new_arg->new_tablespace = new_tablespace ? pg_strdup(new_tablespace) : NULL; child = (HANDLE) _beginthreadex(NULL, 0, (void *) win32_transfer_all_new_dbs, new_arg, 0, NULL); if (child == 0) pg_fatal("could not create worker thread: %m"); thread_handles[parallel_jobs - 1] = child; #endif } } #ifdef WIN32 DWORD win32_transfer_all_new_dbs(transfer_thread_arg *args) { transfer_all_new_dbs(args->old_db_arr, args->new_db_arr, args->old_pgdata, args->new_pgdata, args->old_tablespace, args->new_tablespace); /* terminates thread */ return 0; } #endif /* * collect status from a completed worker child */ bool reap_child(bool wait_for_child) { #ifndef WIN32 int work_status; pid_t child; #else int thread_num; DWORD res; #endif if (user_opts.jobs <= 1 || parallel_jobs == 0) return false; #ifndef WIN32 child = waitpid(-1, &work_status, wait_for_child ? 
0 : WNOHANG); if (child == (pid_t) -1) pg_fatal("%s() failed: %m", "waitpid"); if (child == 0) return false; /* no children, or no dead children */ if (work_status != 0) pg_fatal("child process exited abnormally: status %d", work_status); #else /* wait for one to finish */ thread_num = WaitForMultipleObjects(parallel_jobs, thread_handles, false, wait_for_child ? INFINITE : 0); if (thread_num == WAIT_TIMEOUT || thread_num == WAIT_FAILED) return false; /* compute thread index in active_threads */ thread_num -= WAIT_OBJECT_0; /* get the result */ GetExitCodeThread(thread_handles[thread_num], &res); if (res != 0) pg_fatal("child worker exited abnormally: %m"); /* dispose of handle to stop leaks */ CloseHandle(thread_handles[thread_num]); /* Move last slot into dead child's position */ if (thread_num != parallel_jobs - 1) { void *tmp_args; thread_handles[thread_num] = thread_handles[parallel_jobs - 1]; /* * Move last active thread arg struct into the now-dead slot, and the * now-dead slot to the end for reuse by the next thread. Though the * thread struct is in use by another thread, we can safely swap the * struct pointers within the array. */ tmp_args = cur_thread_args[thread_num]; cur_thread_args[thread_num] = cur_thread_args[parallel_jobs - 1]; cur_thread_args[parallel_jobs - 1] = tmp_args; } #endif /* do this after job has been removed */ parallel_jobs--; return true; }
c
github
https://github.com/postgres/postgres
src/bin/pg_upgrade/parallel.c
#ifndef RBIMPL_ARITHMETIC_ST_DATA_T_H /*-*-C++-*-vi:se ft=cpp:*/
#define RBIMPL_ARITHMETIC_ST_DATA_T_H
/**
 * @file
 * @author     Ruby developers <ruby-core@ruby-lang.org>
 * @copyright  This file is a part of the programming language Ruby.
 *             Permission is hereby granted, to either redistribute and/or
 *             modify this file, provided that the conditions mentioned in the
 *             file COPYING are met.  Consult the file for details.
 * @warning    Symbols prefixed with either `RBIMPL` or `rbimpl` are
 *             implementation details.  Don't take them as canon.  They could
 *             rapidly appear then vanish.  The name (path) of this header file
 *             is also an implementation detail.  Do not expect it to persist
 *             at the place it is now.  Developers are free to move it anywhere
 *             anytime at will.
 * @note       To ruby-core: remember that this header can be possibly
 *             recursively included from extension libraries written in C++.
 *             Do not expect for instance `__VA_ARGS__` is always available.
 *             We assume C99 for ruby itself but we don't assume languages of
 *             extension libraries.  They could be written in C++98.
 * @brief      Arithmetic conversion between C's `st_data_t` and Ruby's.
 */
#include "ruby/internal/arithmetic/fixnum.h"
#include "ruby/internal/arithmetic/long.h"
#include "ruby/internal/attr/artificial.h"
#include "ruby/internal/attr/const.h"
#include "ruby/internal/attr/constexpr.h"
#include "ruby/internal/cast.h"
#include "ruby/internal/value.h"
#include "ruby/assert.h"
#include "ruby/st.h"

#define ST2FIX RB_ST2FIX /**< @old{RB_ST2FIX} */
/** @cond INTERNAL_MACRO */
#define RB_ST2FIX RB_ST2FIX
/** @endcond */

RBIMPL_ATTR_CONST_UNLESS_DEBUG()
RBIMPL_ATTR_CONSTEXPR_UNLESS_DEBUG(CXX14)
RBIMPL_ATTR_ARTIFICIAL()
/**
 * Converts a C's `st_data_t` into an instance of ::rb_cInteger.
 *
 * @param[in]  i  The data in question.
 * @return     A converted result
 * @warning    THIS CONVERSION LOSES DATA!  Be warned.
 * @see        https://bugs.ruby-lang.org/issues/13877
 * @see        https://bugs.ruby-lang.org/issues/14218
 *
 * @internal
 *
 * This is needed because of hash functions.  Hash functions return
 * `st_data_t`, which could theoretically be bigger than Fixnums.  However
 * allocating Bignums for them every time we calculate hash values is just too
 * heavy.  To avoid penalty we need to ignore some upper bit(s) and stick to
 * Fixnums.  This function is used for that purpose.
 */
static inline VALUE
RB_ST2FIX(st_data_t i)
{
    SIGNED_VALUE x = RBIMPL_CAST((SIGNED_VALUE)i);

    if (x >= 0) {
        x &= RUBY_FIXNUM_MAX;
    }
    else {
        x |= RUBY_FIXNUM_MIN;
    }

    RBIMPL_ASSERT_OR_ASSUME(RB_FIXABLE(x));
    unsigned long y = RBIMPL_CAST((unsigned long)x);
    return RB_LONG2FIX(RBIMPL_CAST((long)y));
}
#endif /* RBIMPL_ARITHMETIC_ST_DATA_T_H */
c
github
https://github.com/ruby/ruby
include/ruby/internal/arithmetic/st_data_t.h
########################################################################
# File name: anonymous.py
# This file is part of: aiosasl
#
# LICENSE
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program.  If not, see
# <http://www.gnu.org/licenses/>.
#
########################################################################
import logging
import typing

from . import common, statemachine, stringprep

logger = logging.getLogger()


class ANONYMOUS(statemachine.SASLMechanism):
    """
    The ANONYMOUS SASL mechanism (see :rfc:`4505`).

    .. versionadded:: 0.3
    """

    def __init__(self, token: str) -> None:
        super().__init__()
        self._token = stringprep.trace(token).encode("utf-8")

    @classmethod
    def any_supported(
            self,
            mechanisms: typing.Iterable[str],
    ) -> typing.Optional[str]:
        if "ANONYMOUS" in mechanisms:
            return "ANONYMOUS"
        return None

    async def authenticate(
            self,
            sm: statemachine.SASLStateMachine,
            mechanism: typing.Any) -> None:
        logger.info("attempting ANONYMOUS mechanism")

        state, _ = await sm.initiate(
            mechanism="ANONYMOUS",
            payload=self._token
        )

        if state != common.SASLState.SUCCESS:
            raise common.SASLFailure(
                None,
                text="SASL protocol violation")
unknown
codeparrot/codeparrot-clean
////////////////////////////////////////////////////////////////////////////
//
// Copyright 2014 Realm Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
////////////////////////////////////////////////////////////////////////////

#import <Realm/Realm.h>

@interface Place : RLMObject

@property NSString *postalCode;
@property NSString *placeName;
@property NSString *state;
@property NSString *stateAbbreviation;
@property NSString *county;
@property double latitude;
@property double longitude;

@end
objective-c
github
https://github.com/realm/realm-swift
examples/tvos/objc/PreloadedData/Place.h
{ "$schema": "https://docs.renovatebot.com/renovate-schema.json", "extends": [ "config:recommended" ], "includePaths": [".github/**"], "schedule": "* 0 * * 1", "minimumReleaseAge": "3 days", "assignees": ["boomanaiden154"], "ignorePaths": [".github/workflows/containers/**"], "groupName": "[Github] Update GHA Dependencies", "packageRules": [ { "matchPackageNames": ["windows", "macos"], "matchManagers": ["github-actions"], "enabled": false }, { "matchPackageNames": ["python"], "matchManagers": ["github-actions"], "matchFileNames": ["release-binaries.yml"], "enabled": false } ] }
json
github
https://github.com/llvm/llvm-project
.github/renovate.json
// Copyright IBM Corp. 2016, 2025 // SPDX-License-Identifier: BUSL-1.1 package okta import ( "encoding/json" "fmt" "os" "strings" "time" "github.com/hashicorp/go-secure-stdlib/base62" pwd "github.com/hashicorp/go-secure-stdlib/password" "github.com/hashicorp/vault/api" ) // CLIHandler struct type CLIHandler struct{} // Auth cli method func (h *CLIHandler) Auth(c *api.Client, m map[string]string) (*api.Secret, error) { mount, ok := m["mount"] if !ok { mount = "okta" } username, ok := m["username"] if !ok { return nil, fmt.Errorf("'username' var must be set") } password, ok := m["password"] if !ok { fmt.Fprintf(os.Stderr, "Password (will be hidden): ") var err error password, err = pwd.Read(os.Stdin) fmt.Fprintf(os.Stderr, "\n") if err != nil { return nil, err } } data := map[string]interface{}{ "password": password, } // Okta or Google totp code if totp, ok := m["totp"]; ok { data["totp"] = totp } // provider is an optional parameter if provider, ok := m["provider"]; ok { data["provider"] = provider } nonce := base62.MustRandom(20) data["nonce"] = nonce // Create a done channel to signal termination of the login so that we can // clean up the goroutine doneCh := make(chan struct{}) defer close(doneCh) go func() { for { timer := time.NewTimer(time.Second) select { case <-doneCh: timer.Stop() return case <-timer.C: } resp, _ := c.Logical().Read(fmt.Sprintf("auth/%s/verify/%s", mount, nonce)) if resp != nil { fmt.Fprintf(os.Stderr, "In Okta Verify, tap the number %q\n", resp.Data["correct_answer"].(json.Number)) return } } }() path := fmt.Sprintf("auth/%s/login/%s", mount, username) secret, err := c.Logical().Write(path, data) if err != nil { return nil, err } if secret == nil { return nil, fmt.Errorf("empty response from credential provider") } return secret, nil } // Help method for okta cli func (h *CLIHandler) Help() string { help := ` Usage: vault login -method=okta [CONFIG K=V...] The Okta auth method allows users to authenticate using Okta. 
Authenticate as "sally": $ vault login -method=okta username=sally Password (will be hidden): Authenticate as "bob": $ vault login -method=okta username=bob password=password Configuration: password=<string> Okta password to use for authentication. If not provided, the CLI will prompt for this on stdin. username=<string> Okta username to use for authentication. ` return strings.TrimSpace(help) }
go
github
https://github.com/hashicorp/vault
builtin/credential/okta/cli.go
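The goroutine in `cli.go` above polls the verify endpoint once per second until either the login finishes (the done channel closes) or Okta reports the number to tap. The same poll-until-answer-or-done pattern, sketched in Python with a `threading.Event` standing in for the done channel (`poll_until_answer` and `fetch` are hypothetical names, and the interval is shortened from the Go code's one second):

```python
import threading

def poll_until_answer(done, fetch, on_answer, interval=0.05):
    """Poll ``fetch()`` every ``interval`` seconds until it returns a
    value or ``done`` is set -- a sketch of the Go goroutine above."""
    while not done.wait(interval):       # False -> timed out, keep polling
        resp = fetch()
        if resp is not None:
            on_answer(resp)
            return

# Simulated verify endpoint: answers on the third poll.
calls = []
def fetch():
    calls.append(1)
    return "70" if len(calls) >= 3 else None

answers = []
poll_until_answer(threading.Event(), fetch, answers.append)
print(answers)  # ['70']
```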
""" Access to the Unix group database. Group entries are reported as 4-tuples containing the following fields from the group database, in order: name - name of the group passwd - group password (encrypted); often empty gid - numeric ID of the group mem - list of members The gid is an integer, name and password are strings. (Note that most users are not explicitly listed as members of the groups they are in according to the password database. Check both databases to get complete membership information.) """ __all__ = ['getgrgid', 'getgrnam', 'getgrall'] from os import _name, _posix_impl from org.python.core.Py import newString if _name == 'nt': raise ImportError, 'grp module not supported on Windows' class struct_group(tuple): """ grp.struct_group: Results from getgr*() routines. This object may be accessed either as a tuple of (gr_name,gr_passwd,gr_gid,gr_mem) or via the object attributes as named in the above tuple. """ attrs = ['gr_name', 'gr_passwd', 'gr_gid', 'gr_mem'] def __new__(cls, grp): grp = (newString(grp.name), newString(grp.password), int(grp.GID), [newString(member) for member in grp.members]) return tuple.__new__(cls, grp) def __getattr__(self, attr): try: return self[self.attrs.index(attr)] except ValueError: raise AttributeError def getgrgid(uid): """ getgrgid(id) -> tuple Return the group database entry for the given numeric group ID. If id is not valid, raise KeyError. """ entry = _posix_impl.getgrgid(uid) if not entry: raise KeyError(uid) return struct_group(entry) def getgrnam(name): """ getgrnam(name) -> tuple Return the group database entry for the given group name. If name is not valid, raise KeyError. """ entry = _posix_impl.getgrnam(name) if not entry: raise KeyError(name) return struct_group(entry) def getgrall(): """ getgrall() -> list of tuples Return a list of all available group database entries, in arbitrary order. 
""" groups = [] while True: group = _posix_impl.getgrent() if not group: break groups.append(struct_group(group)) return groups
python
codeparrot/codeparrot-clean
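The API documented above mirrors CPython's standard `grp` module (this file is the Jython port), so the documented behavior can be checked against the standard module directly. A usage sketch, assuming a Unix system with a populated group database:

```python
import grp

# All entries, in arbitrary order; each is a struct_group 4-tuple.
groups = grp.getgrall()
first = groups[0]

# Tuple access and attribute access are interchangeable.
assert first[0] == first.gr_name
assert first[2] == first.gr_gid

# Round-trip: looking the group up by numeric ID finds the same gid;
# an invalid id would raise KeyError instead.
assert grp.getgrgid(first.gr_gid).gr_gid == first.gr_gid
```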
### # Copyright (c) 2004, Brett Kelly # Copyright (c) 2010, James McCoy # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright notice, # this list of conditions, and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions, and the following disclaimer in the # documentation and/or other materials provided with the distribution. # * Neither the name of the author of this software nor the name of # contributors to this software may be used to endorse or promote products # derived from this software without specific prior written consent. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE # POSSIBILITY OF SUCH DAMAGE. 
### import re import time import operator import supybot.dbi as dbi import supybot.log as log import supybot.conf as conf import supybot.utils as utils import supybot.ircdb as ircdb from supybot.commands import * import supybot.ircmsgs as ircmsgs import supybot.plugins as plugins import supybot.ircutils as ircutils import supybot.callbacks as callbacks from supybot import commands from supybot.i18n import PluginInternationalization, internationalizeDocstring _ = PluginInternationalization('Note') class NoteRecord(dbi.Record): __fields__ = [ 'frm', 'to', 'at', 'notified', 'read', 'public', 'text', ] class DbiNoteDB(dbi.DB): Mapping = 'flat' Record = NoteRecord def __init__(self, *args, **kwargs): dbi.DB.__init__(self, *args, **kwargs) self.unRead = {} self.unNotified = {} for record in self: self._addCache(record) def _addCache(self, record): if not record.read: self.unRead.setdefault(record.to, []).append(record.id) if not record.notified: self.unNotified.setdefault(record.to, []).append(record.id) def _removeCache(self, record): if record.notified: try: self.unNotified[record.to].remove(record.id) except (KeyError, ValueError): pass if record.read: try: self.unRead[record.to].remove(record.id) except (KeyError, ValueError): pass def setRead(self, id): n = self.get(id) n.read = True n.notified = True self._removeCache(n) self.set(id, n) def setNotified(self, id): n = self.get(id) n.notified = True self._removeCache(n) self.set(id, n) def getUnnotifiedIds(self, to): return self.unNotified.get(to, []) def getUnreadIds(self, to): return self.unRead.get(to, []) def send(self, frm, to, public, text): n = self.Record(frm=frm, to=to, text=text, at=time.time(), public=public) id = self.add(n) self._addCache(n) return id def unsend(self, id): self.remove(id) for cache in self.unRead, self.unNotified: for (to, ids) in cache.items(): while id in ids: ids.remove(id) NoteDB = plugins.DB('Note', {'flat': DbiNoteDB}) class Note(callbacks.Plugin): """Allows you to send notes to 
other users.""" def __init__(self, irc): self.__parent= super(Note, self) self.__parent.__init__(irc) self.db = NoteDB() def die(self): self.__parent.die() self.db.close() def doPrivmsg(self, irc, msg): if ircmsgs.isCtcp(msg) and not ircmsgs.isAction(msg): return self._notify(irc, msg) def doJoin(self, irc, msg): if self.registryValue('notify.onJoin'): repeatedly = self.registryValue('notify.onJoin.repeatedly') self._notify(irc, msg, repeatedly) def _notify(self, irc, msg, repeatedly=False): irc = callbacks.SimpleProxy(irc, msg) try: to = ircdb.users.getUserId(msg.prefix) except KeyError: return ids = self.db.getUnnotifiedIds(to) if len(ids) <= self.registryValue('notify.autoSend'): for id in ids: irc.reply(self._formatNote(self.db.get(id), to), private=True) self.db.setRead(id) return unnotifiedIds = ['#%s' % nid for nid in ids] unnotified = len(unnotifiedIds) if unnotified or repeatedly: unreadIds = ['#%s' % nid for nid in self.db.getUnreadIds(to)] unread = len(unreadIds) s = format('You have %n; %i that I haven\'t told you about ' 'before now. %L %b still unread.', (unread, 'unread', 'note'), unnotified, unreadIds, unread) # Later we'll have a user value for allowing this to be a NOTICE. irc.reply(s, private=True) for nid in unnotifiedIds: id = int(nid[1:]) self.db.setNotified(id) def _getUserId(self, irc, name): if ircdb.users.hasUser(name): return ircdb.users.getUserId(name) else: try: hostmask = irc.state.nickToHostmask(name) return ircdb.users.getUserId(hostmask) except KeyError: return None def send(self, irc, msg, args, user, targets, text): """<recipient>,[<recipient>,[...]] <text> Sends a new note to the user specified. Multiple recipients may be specified by separating their names by commas. """ # Let's get the from user. 
public = irc.isChannel(msg.args[0]) sent = [] for target in targets: id = self.db.send(user.id, target.id, public, text) s = format('note #%i sent to %s', id, target.name) sent.append(s) irc.reply(format('%L.', sent).capitalize()) send = wrap(send, ['user', commalist('otherUser'), 'text']) def reply(self, irc, msg, args, user, id, text): """<id> <text> Sends a note in reply to <id>. """ try: note = self.db.get(id) except dbi.NoRecordError: irc.error('That\'s not a note in my database.', Raise=True) if note.to != user.id: irc.error('You may only reply to notes ' 'that have been sent to you.', Raise=True) self.db.setRead(id) text += ' (in reply to #%s)' % id public = irc.isChannel(msg.args[0]) try: target = ircdb.users.getUser(note.frm) except KeyError: irc.error('The user who sent you that note ' 'is no longer in my user database.', Raise=True) id = self.db.send(user.id, note.frm, public, text) irc.reply(format('Note #%i sent to %s.', id, target.name)) reply = wrap(reply, ['user', ('id', 'note'), 'text']) def unsend(self, irc, msg, args, user, id): """<id> Unsends the note with the id given. You must be the author of the note, and it must be unread. """ try: note = self.db.get(id) except dbi.NoRecordError: irc.errorInvalid('note id') if note.frm == user.id: if not note.read: self.db.unsend(id) irc.replySuccess() else: irc.error('That note has been read already.') else: irc.error('That note wasn\'t sent by you.') unsend = wrap(unsend, ['user', ('id', 'note')]) def _formatNote(self, note, to): elapsed = utils.timeElapsed(time.time() - note.at) if note.to == to: author = plugins.getUserName(note.frm) return format('#%i: %s (Sent by %s %s ago)', note.id, note.text, author, elapsed) else: assert note.frm == to, 'Odd, userid isn\'t frm either.' recipient = plugins.getUserName(note.to) return format('#%i: %s (Sent to %s %s ago)', note.id, note.text, recipient, elapsed) def note(self, irc, msg, args, user, id): """<id> Retrieves a single note by its unique note id. 
Use the 'note list' command to see what unread notes you have. """ try: note = self.db.get(id) except dbi.NoRecordError: irc.errorInvalid('note id') if user.id != note.frm and user.id != note.to: s = 'You may only retrieve notes you\'ve sent or received.' irc.error(s) return newnote = self._formatNote(note, user.id) irc.reply(newnote, private=(not note.public)) self.db.setRead(id) note = wrap(note, ['user', ('id', 'note')]) def _formatNoteId(self, msg, note, sent=False): if note.public or not ircutils.isChannel(msg.args[0]): if sent: sender = plugins.getUserName(note.to) return format('#%i to %s', note.id, sender) else: sender = plugins.getUserName(note.frm) return format('#%i from %s', note.id, sender) else: return format('#%i (private)', note.id) def search(self, irc, msg, args, user, optlist, glob): """[--{regexp} <value>] [--sent] [<glob>] Searches your received notes for ones matching <glob>. If --regexp is given, its associated value is taken as a regexp and matched against the notes. If --sent is specified, only search sent notes. 
""" criteria = [] def to(note): return note.to == user.id def frm(note): return note.frm == user.id own = to for (option, arg) in optlist: if option == 'regexp': criteria.append(lambda s: regexp_wrapper(s, reobj=arg, timeout=0.1, plugin_name=self.name(), fcn_name='search')) elif option == 'sent': own = frm if glob: glob = utils.python.glob2re(glob) criteria.append(re.compile(glob).search) def match(note): for p in criteria: if not p(note.text): return False return True notes = list(self.db.select(lambda n: match(n) and own(n))) if not notes: irc.reply('No matching notes were found.') else: utils.sortBy(operator.attrgetter('id'), notes) ids = [self._formatNoteId(msg, note) for note in notes] ids = self._condense(ids) irc.reply(format('%L', ids)) search = wrap(search, ['user', getopts({'regexp': ('regexpMatcher', True), 'sent': ''}), additional('glob')]) def list(self, irc, msg, args, user, optlist): """[--{old,sent}] [--{from,to} <user>] Retrieves the ids of all your unread notes. If --old is given, list read notes. If --sent is given, list notes that you have sent. If --from is specified, only lists notes sent to you from <user>. If --to is specified, only lists notes sent by you to <user>. 
""" (sender, receiver, old, sent) = (None, None, False, False) for (option, arg) in optlist: if option == 'old': old = True if option == 'sent': sent = True if option == 'from': sender = arg if option == 'to': receiver = arg sent = True if old: return self._oldnotes(irc, msg, sender) if sent: return self._sentnotes(irc, msg, receiver) def p(note): return not note.read and note.to == user.id if sender: originalP = p def p(note): return originalP(note) and note.frm == sender.id notes = list(self.db.select(p)) if not notes: irc.reply('You have no unread notes.') else: utils.sortBy(operator.attrgetter('id'), notes) ids = [self._formatNoteId(msg, note) for note in notes] ids = self._condense(ids) irc.reply(format('%L.', ids)) list = wrap(list, ['user', getopts({'old': '', 'sent': '', 'from': 'otherUser', 'to': 'otherUser'})]) def next(self, irc, msg, args, user): """takes no arguments Retrieves your next unread note, if any. """ notes = self.db.getUnreadIds(user.id) if not notes: irc.reply('You have no unread notes.') else: found = False for id in notes: try: note = self.db.get(id) except KeyError: continue found = True break if not found: irc.reply('You have no unread notes.') else: irc.reply(self._formatNote(note, user.id), private=(not note.public)) self.db.setRead(note.id) next = wrap(next, ['user']) def _condense(self, notes): temp = {} for note in notes: note = note.split(' ', 1) if note[1] in temp: temp[note[1]].append(note[0]) else: temp[note[1]] = [note[0]] notes = [] for (k,v) in temp.items(): if '(private)' in k: k = k.replace('(private)', format('%b private', len(v))) notes.append(format('%L %s', v, k)) return notes def _sentnotes(self, irc, msg, receiver): try: user = ircdb.users.getUser(msg.prefix) except KeyError: irc.errorNotRegistered() return def p(note): return note.frm == user.id if receiver: originalP = p def p(note): return originalP(note) and note.to == receiver.id notes = list(self.db.select(p)) if not notes: irc.error('I couldn\'t find any sent 
notes for your user.') else: utils.sortBy(operator.attrgetter('id'), notes) notes.reverse() # Most recently sent first. ids = [self._formatNoteId(msg, note, sent=True) for note in notes] ids = self._condense(ids) irc.reply(format('%L.', ids)) def _oldnotes(self, irc, msg, sender): try: user = ircdb.users.getUser(msg.prefix) except KeyError: irc.errorNotRegistered() return def p(note): return note.to == user.id and note.read if sender: originalP = p def p(note): return originalP(note) and note.frm == sender.id notes = list(self.db.select(p)) if not notes: irc.reply('I couldn\'t find any matching read notes ' 'for your user.') else: utils.sortBy(operator.attrgetter('id'), notes) notes.reverse() ids = [self._formatNoteId(msg, note) for note in notes] ids = self._condense(ids) irc.reply(format('%L.', ids)) Class = Note # vim: shiftwidth=4 softtabstop=4 expandtab textwidth=79:
python
codeparrot/codeparrot-clean
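`Note._condense` above merges note ids that share the same "from X"/"to X" description so the listing reads compactly. A standalone sketch of just the grouping step (without supybot's `%L` list formatting or the `(private)` pluralization; `condense` here is a hypothetical helper, not the plugin method):

```python
def condense(notes):
    # Group '#id <description>' strings by their shared description.
    temp = {}
    for note in notes:
        note_id, desc = note.split(' ', 1)
        temp.setdefault(desc, []).append(note_id)
    return ['%s %s' % (' and '.join(ids), desc)
            for desc, ids in temp.items()]

print(condense(['#1 from alice', '#2 from alice', '#3 from bob']))
# ['#1 and #2 from alice', '#3 from bob']
```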
import re from guessit.patterns import sep, build_or_pattern from guessit.patterns.numeral import parse_numeral range_separators = ['-', 'to', 'a'] discrete_separators = ['&', 'and', 'et'] excluded_separators = ['.'] # Dot cannot serve as a discrete_separator discrete_sep = sep for range_separator in range_separators: discrete_sep = discrete_sep.replace(range_separator, '') for excluded_separator in excluded_separators: discrete_sep = discrete_sep.replace(excluded_separator, '') discrete_separators.append(discrete_sep) all_separators = list(range_separators) all_separators.extend(discrete_separators) range_separators_re = re.compile(build_or_pattern(range_separators), re.IGNORECASE) discrete_separators_re = re.compile(build_or_pattern(discrete_separators), re.IGNORECASE) all_separators_re = re.compile(build_or_pattern(all_separators), re.IGNORECASE) def list_parser(value, property_list_name, discrete_separators_re=discrete_separators_re, range_separators_re=range_separators_re, allow_discrete=False, fill_gaps=False): discrete_elements = filter(lambda x: x != '', discrete_separators_re.split(value)) discrete_elements = [x.strip() for x in discrete_elements] proper_discrete_elements = [] i = 0 while i < len(discrete_elements): if i < len(discrete_elements) - 2 and range_separators_re.match(discrete_elements[i+1]): proper_discrete_elements.append(discrete_elements[i] + discrete_elements[i+1] + discrete_elements[i+2]) i += 3 else: match = range_separators_re.search(discrete_elements[i]) if match and match.start() == 0: proper_discrete_elements[i - 1] += discrete_elements[i] elif match and match.end() == len(discrete_elements[i]): proper_discrete_elements.append(discrete_elements[i] + discrete_elements[i + 1]) else: proper_discrete_elements.append(discrete_elements[i]) i += 1 discrete_elements = proper_discrete_elements ret = [] for discrete_element in discrete_elements: range_values = filter(lambda x: x != '', range_separators_re.split(discrete_element)) range_values = 
[x.strip() for x in range_values] if len(range_values) > 1: for x in range(0, len(range_values) - 1): start_range_ep = parse_numeral(range_values[x]) end_range_ep = parse_numeral(range_values[x+1]) for range_ep in range(start_range_ep, end_range_ep + 1): if range_ep not in ret: ret.append(range_ep) else: discrete_value = parse_numeral(discrete_element) if discrete_value not in ret: ret.append(discrete_value) if len(ret) > 1: if not allow_discrete: valid_ret = list() # replace discrete elements by ranges valid_ret.append(ret[0]) for i in range(0, len(ret) - 1): previous = valid_ret[len(valid_ret) - 1] if ret[i+1] < previous: pass else: valid_ret.append(ret[i+1]) ret = valid_ret if fill_gaps: ret = list(range(min(ret), max(ret) + 1)) if len(ret) > 1: return {None: ret[0], property_list_name: ret} if len(ret) > 0: return ret[0] return None
python
codeparrot/codeparrot-clean
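`list_parser` above splits a raw string on discrete separators (`&`, `and`, ...) and expands range separators (`-`, `to`, ...) into every value in between. A minimal self-contained sketch of that idea (a hypothetical helper, much simpler than guessit's actual parser: no gap filling, no `parse_numeral`, digits only):

```python
import re

def parse_episode_list(value):
    """Expand '1-3 & 5' into [1, 2, 3, 5]: split on discrete
    separators, then expand any 'a-b' / 'a to b' ranges."""
    episodes = []
    for part in re.split(r'\s*(?:&|and)\s*', value):
        m = re.match(r'(\d+)\s*(?:-|to)\s*(\d+)$', part.strip())
        if m:  # range: include both endpoints
            episodes.extend(range(int(m.group(1)), int(m.group(2)) + 1))
        else:  # single discrete value
            episodes.append(int(part))
    return episodes

print(parse_episode_list('1-3 & 5'))  # [1, 2, 3, 5]
```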
# -*- coding: utf-8 -*-
from django.contrib import auth, messages
from django.contrib.auth.decorators import login_required
from django.core.urlresolvers import reverse
from django.forms import formset_factory
from django.shortcuts import get_object_or_404, redirect, render
from django.template.context_processors import csrf

from polls.forms import PollForm, PollSubjectForm
from polls.models import PollSubject
from threads.models import Post, Subject, Thread

from .forms import PostForm, ThreadForm


# Create your views here.
def forum(request):
    """
    forum(): pull all the subjects from the database and pass them to our view
    """
    return render(request, 'forum/forum.html',
                  {'subjects': Subject.objects.all()})


def threads(request, subject_id):
    """
    threads():
    """
    subject = get_object_or_404(Subject, pk=subject_id)
    return render(request, 'forum/threads.html', {'subject': subject})


@login_required
def new_thread(request, subject_id):
    """
    new_thread():
    """
    subject = get_object_or_404(Subject, pk=subject_id)
    poll_subject_formset_class = formset_factory(PollSubjectForm, extra=3)
    if request.method == "POST":
        thread_form = ThreadForm(request.POST)
        post_form = PostForm(request.POST)
        poll_form = PollForm(request.POST)
        poll_subject_formset = poll_subject_formset_class(request.POST)
        if (thread_form.is_valid() and post_form.is_valid() and
                poll_form.is_valid() and poll_subject_formset.is_valid()):
            thread = thread_form.save(False)
            thread.subject = subject
            thread.user = request.user
            thread.save()

            post = post_form.save(False)
            post.user = request.user
            post.thread = thread
            post.save()

            if 'is_a_poll' in request.POST:
                poll = poll_form.save(False)
                poll.thread = thread
                poll.save()

                for subject_form in poll_subject_formset:
                    subject = subject_form.save(False)
                    subject.poll = poll
                    subject.save()

                messages.success(request, "You have created a new poll!")
            else:
                messages.success(request, "You have created a new thread!")

            return redirect(reverse('thread', args={thread.pk}))
    else:
        thread_form = ThreadForm()
post_form = PostForm() poll_form = PollForm() poll_subject_formset = poll_subject_formset_class() args = { 'thread_form': thread_form, 'post_form': post_form, 'subject': subject, 'poll_form': poll_form, 'poll_subject_formset': poll_subject_formset, } args.update(csrf(request)) return render(request, 'forum/thread_form.html', args) def thread(request, thread_id): """ thread(): """ thread_ = get_object_or_404(Thread, pk=thread_id) args = {'thread': thread_} args.update(csrf(request)) return render(request, 'forum/thread.html', args) @login_required def new_post(request, thread_id): """ new_post(): """ thread = get_object_or_404(Thread, pk=thread_id) if request.method == "POST": form = PostForm(request.POST) if form.is_valid(): post = form.save(False) post.thread = thread post.user = request.user post.save() messages.success( request, "Your post has been added to the thread!") return redirect(reverse('thread', args={thread.pk})) else: form = PostForm() args = { 'form': form, 'form_action': reverse('new_post', args={thread.id}), 'button_text': 'Update Post' } args.update(csrf(request)) return render(request, 'forum/post_form.html', args) @login_required def edit_post(request, thread_id, post_id): thread = get_object_or_404(Thread, pk=thread_id) post = get_object_or_404(Post, pk=post_id) if request.method == "POST": form = PostForm(request.POST, instance=post) if form.is_valid(): form.save() messages.success(request, "You have updated your thread!") return redirect(reverse('thread', args={thread.pk})) else: form = PostForm(instance=post) args = { 'form': form, 'form_action': reverse('edit_post', kwargs={"thread_id": thread.id, "post_id": post.id}), 'button_text': 'Update Post' } args.update(csrf(request)) return render(request, 'forum/post_form.html', args) @login_required def delete_post(request, thread_id, post_id): post = get_object_or_404(Post, pk=post_id) thread_id = post.thread.id post.delete() messages.success(request, "Your post was deleted!") return 
redirect(reverse('thread', args={thread_id})) @login_required def thread_vote(request, thread_id, subject_id): thread = Thread.objects.get(id=thread_id) subject = thread.poll.votes.filter(user=request.user) if subject: messages.error(request, "You already voted on this!... You're not trying to cheat are you?") return redirect(reverse('thread', args={thread_id})) subject = PollSubject.objects.get(id=subject_id) subject.votes.create(poll=subject.poll, user=request.user) messages.success(request, "We've registered your vote!") return redirect(reverse('thread', args={thread_id}))
python
codeparrot/codeparrot-clean
from __future__ import unicode_literals import re import json import os from .common import InfoExtractor from ..compat import ( compat_urlparse, compat_urllib_parse, compat_urllib_parse_urlparse ) from ..utils import ( unified_strdate, ) class NHLBaseInfoExtractor(InfoExtractor): @staticmethod def _fix_json(json_string): return json_string.replace('\\\'', '\'') def _extract_video(self, info): video_id = info['id'] self.report_extraction(video_id) initial_video_url = info['publishPoint'] if info['formats'] == '1': parsed_url = compat_urllib_parse_urlparse(initial_video_url) filename, ext = os.path.splitext(parsed_url.path) path = '%s_sd%s' % (filename, ext) data = compat_urllib_parse.urlencode({ 'type': 'fvod', 'path': compat_urlparse.urlunparse(parsed_url[:2] + (path,) + parsed_url[3:]) }) path_url = 'http://video.nhl.com/videocenter/servlets/encryptvideopath?' + data path_doc = self._download_xml( path_url, video_id, 'Downloading final video url') video_url = path_doc.find('path').text else: video_url = initial_video_url join = compat_urlparse.urljoin return { 'id': video_id, 'title': info['name'], 'url': video_url, 'description': info['description'], 'duration': int(info['duration']), 'thumbnail': join(join(video_url, '/u/'), info['bigImage']), 'upload_date': unified_strdate(info['releaseDate'].split('.')[0]), } class NHLIE(NHLBaseInfoExtractor): IE_NAME = 'nhl.com' _VALID_URL = r'https?://video(?P<team>\.[^.]*)?\.nhl\.com/videocenter/console(?:\?(?:.*?[?&])?)id=(?P<id>[-0-9a-zA-Z]+)' _TESTS = [{ 'url': 'http://video.canucks.nhl.com/videocenter/console?catid=6?id=453614', 'md5': 'db704a4ea09e8d3988c85e36cc892d09', 'info_dict': { 'id': '453614', 'ext': 'mp4', 'title': 'Quick clip: Weise 4-3 goal vs Flames', 'description': 'Dale Weise scores his first of the season to put the Canucks up 4-3.', 'duration': 18, 'upload_date': '20131006', }, }, { 'url': 'http://video.nhl.com/videocenter/console?id=2014020024-628-h', 'md5': 'd22e82bc592f52d37d24b03531ee9696', 
'info_dict': { 'id': '2014020024-628-h', 'ext': 'mp4', 'title': 'Alex Galchenyuk Goal on Ray Emery (14:40/3rd)', 'description': 'Home broadcast - Montreal Canadiens at Philadelphia Flyers - October 11, 2014', 'duration': 0, 'upload_date': '20141011', }, }, { 'url': 'http://video.mapleleafs.nhl.com/videocenter/console?id=58665&catid=802', 'md5': 'c78fc64ea01777e426cfc202b746c825', 'info_dict': { 'id': '58665', 'ext': 'flv', 'title': 'Classic Game In Six - April 22, 1979', 'description': 'It was the last playoff game for the Leafs in the decade, and the last time the Leafs and Habs played in the playoffs. Great game, not a great ending.', 'duration': 400, 'upload_date': '20100129' }, }, { 'url': 'http://video.flames.nhl.com/videocenter/console?id=630616', 'only_matching': True, }] def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) video_id = mobj.group('id') json_url = 'http://video.nhl.com/videocenter/servlets/playlist?ids=%s&format=json' % video_id data = self._download_json( json_url, video_id, transform_source=self._fix_json) return self._extract_video(data[0]) class NHLVideocenterIE(NHLBaseInfoExtractor): IE_NAME = 'nhl.com:videocenter' IE_DESC = 'NHL videocenter category' _VALID_URL = r'https?://video\.(?P<team>[^.]*)\.nhl\.com/videocenter/(console\?[^(id=)]*catid=(?P<catid>[0-9]+)(?![&?]id=).*?)?$' _TEST = { 'url': 'http://video.canucks.nhl.com/videocenter/console?catid=999', 'info_dict': { 'id': '999', 'title': 'Highlights', }, 'playlist_count': 12, } def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) team = mobj.group('team') webpage = self._download_webpage(url, team) cat_id = self._search_regex( [r'var defaultCatId = "(.+?)";', r'{statusIndex:0,index:0,.*?id:(.*?),'], webpage, 'category id') playlist_title = self._html_search_regex( r'tab0"[^>]*?>(.*?)</td>', webpage, 'playlist title', flags=re.DOTALL).lower().capitalize() data = compat_urllib_parse.urlencode({ 'cid': cat_id, # This is the default value 'count': 12, 
            'ptrs': 3,
            'format': 'json',
        })
        path = '/videocenter/servlets/browse?' + data
        request_url = compat_urlparse.urljoin(url, path)
        response = self._download_webpage(request_url, playlist_title)
        response = self._fix_json(response)
        if not response.strip():
            self._downloader.report_warning('Got an empty response, retrying '
                                            'with the "newvideos" parameter')
            response = self._download_webpage(request_url + '&newvideos=true',
                                              playlist_title)
            response = self._fix_json(response)
        videos = json.loads(response)

        return {
            '_type': 'playlist',
            'title': playlist_title,
            'id': cat_id,
            'entries': [self._extract_video(v) for v in videos],
        }
python
codeparrot/codeparrot-clean
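The `_VALID_URL` patterns above do the routing work: the named groups capture the team subdomain and the video id. Checking the `NHLIE` pattern against one of its own test URLs:

```python
import re

# _VALID_URL from NHLIE above, split across two raw strings for width.
VALID_URL = (r'https?://video(?P<team>\.[^.]*)?\.nhl\.com/videocenter/'
             r'console(?:\?(?:.*?[?&])?)id=(?P<id>[-0-9a-zA-Z]+)')

m = re.match(VALID_URL,
             'http://video.canucks.nhl.com/videocenter/console?catid=6?id=453614')
print(m.group('team'), m.group('id'))  # .canucks 453614
```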
import logging import os import sys import threading import time chroma_logger = logging.getLogger("test") chroma_logger.setLevel(logging.DEBUG) try: import nose nose_installed = True except ImportError: nose_installed = False if nose_installed: # Monkey patch TextTestResult to print errors as they occur def monkeyPatchedAddError(self, test, err): super(nose.result.TextTestResult, self).addError(test, err) if self.showAll: self.stream.writeln("ERROR") self.stream.writeln(self._exc_info_to_string(err, test)) chroma_logger.error(self._exc_info_to_string(err, test)) elif self.dots: self.stream.write("E") self.stream.flush() def monkeyPatchedAddFailure(self, test, err): super(nose.result.TextTestResult, self).addFailure(test, err) if self.showAll: self.stream.writeln("FAIL") self.stream.writeln(self._exc_info_to_string(err, test)) chroma_logger.error(self._exc_info_to_string(err, test)) elif self.dots: self.stream.write("F") self.stream.flush() nose.result.TextTestResult.chroma_logger = chroma_logger nose.result.TextTestResult.addError = monkeyPatchedAddError nose.result.TextTestResult.addFailure = monkeyPatchedAddFailure # Monkey patch TextTestRunner to exit hard if there are hanging threads def monkeyPatchedRun(self, test): self.descriptions = 0 threads_at_beginning_of_test_run = threading.enumerate() chroma_logger.info("Starting tests with these threads running: '%s'" % threads_at_beginning_of_test_run) wrapper = self.config.plugins.prepareTest(test) if wrapper is not None: test = wrapper wrapped = self.config.plugins.setOutputStream(self.stream) if wrapped is not None: self.stream = wrapped result = self._makeResult() start = time.time() test(result) stop = time.time() result.printErrors() result.printSummary(start, stop) self.config.plugins.finalize(result) def get_hanging_threads(): ending_threads = threading.enumerate() hanging_threads = [] for thread in ending_threads: if thread not in threads_at_beginning_of_test_run and thread.is_alive(): 
hanging_threads.append(thread) return hanging_threads # Give the threads some time to stop running_time = 0 while running_time < 300 and get_hanging_threads(): time.sleep(5) running_time += 5 chroma_logger.info("Ending tests with these threads running: '%s'" % threading.enumerate()) hanging_threads = get_hanging_threads() if hanging_threads: sys.stderr.write( "\n********************\n\nTERMINATING TEST RUN - NOT ALL THREADS STOPPED AT END OF TESTS: '%s'\n\n********************\n" % hanging_threads ) os._exit(1) return result nose.core.TextTestRunner.chroma_logger = chroma_logger nose.core.TextTestRunner.run = monkeyPatchedRun
unknown
codeparrot/codeparrot-clean
# icu_provider [![crates.io](https://img.shields.io/crates/v/icu_provider)](https://crates.io/crates/icu_provider)

<!-- cargo-rdme start -->

`icu_provider` is one of the `ICU4X` components.

Unicode's experience with ICU4X's parent projects, ICU4C and ICU4J, led the team to realize that data management is the most critical aspect of deploying internationalization, and that it requires a high level of customization for the needs of the platform it is embedded in. As a result ICU4X comes with a selection of providers that should allow for ICU4X to naturally fit into different business and technological needs of customers.

`icu_provider` defines traits and structs for transmitting data through the ICU4X locale data pipeline. The primary trait is [`DataProvider`]. It is parameterized by a [`DataMarker`], which is the type-system-level data identifier. [`DataProvider`] has a single method, [`DataProvider::load`], which transforms a [`DataRequest`] into a [`DataResponse`].

- [`DataRequest`] contains selectors to choose a specific variant of the marker, such as a locale.
- [`DataResponse`] contains the data if the request was successful.

The most common types required for this crate are included via the prelude:

```rust
use icu_provider::prelude::*;
```

### Dynamic Data Providers

If the type system cannot be leveraged to load data (such as when dynamically loading from I/O), there's another form of the [`DataProvider`]: [`DynamicDataProvider`]. While [`DataProvider`] is parametrized on the type-system level by a [`DataMarker`] (which are distinct types implementing this trait), [`DynamicDataProvider`]s are parametrized at runtime by a [`DataMarkerInfo`] struct, which essentially is the runtime representation of the [`DataMarker`] type.

The [`DynamicDataProvider`] is still type-level parametrized by the type that it loads, and there are two implementations that should be called out

- [`DynamicDataProvider<BufferMarker>`], a.k.a. [`BufferProvider`](buf::BufferProvider) returns data as `[u8]` buffers.

#### BufferProvider

These providers are able to return unstructured data typically represented as [`serde`]-serialized buffers. Users can call [`as_deserializing()`] to get an object implementing [`DataProvider`] by invoking Serde Deserialize.

Examples of BufferProviders:

- [`FsDataProvider`] reads individual buffers from the filesystem.
- [`BlobDataProvider`] reads buffers from a large in-memory blob.

### Provider Adapters

ICU4X offers several built-in modules to combine providers in interesting ways. These can be found in the [`icu_provider_adapters`] crate.

### Testing Provider

This crate also contains a concrete provider for demonstration purposes:

- [`HelloWorldProvider`] returns "hello world" strings in several languages.

### Types and Lifetimes

Types compatible with [`Yokeable`] can be passed through the data provider, so long as they are associated with a marker type implementing [`DynamicDataMarker`].

Data structs should generally have one lifetime argument: `'data`. This lifetime allows data structs to borrow zero-copy data.

[`FixedProvider`]: https://docs.rs/icu_provider_adapters/latest/fixed/any_payload/struct.FixedProvider.html
[`HelloWorldProvider`]: hello_world::HelloWorldProvider
[`Yokeable`]: yoke::Yokeable
[`impl_dynamic_data_provider!`]: dynutil::impl_dynamic_data_provider
[`icu_provider_adapters`]: https://docs.rs/icu_provider_adapters/latest/icu_provider_adapters/index.html
[`SourceDataProvider`]: https://docs.rs/icu_provider_source/latest/icu_provider_source/struct.SourceDataProvider.html
[`as_deserializing()`]: buf::AsDeserializingBufferProvider::as_deserializing
[`FsDataProvider`]: https://docs.rs/icu_provider_fs/latest/icu_provider_fs/struct.FsDataProvider.html
[`BlobDataProvider`]: https://docs.rs/icu_provider_blob/latest/icu_provider_blob/struct.BlobDataProvider.html

<!-- cargo-rdme end -->

## More Information

For more information on development, authorship, contributing etc. please visit [`ICU4X home page`](https://github.com/unicode-org/icu4x).
unknown
github
https://github.com/nodejs/node
deps/crates/vendor/icu_provider/README.md
#!/usr/bin/python # Copyright: (c) 2018, Rob White (@wimnat) # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: elb_network_lb short_description: Manage a Network Load Balancer description: - Manage an AWS Network Elastic Load Balancer. See U(https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/) for details. version_added: "2.6" requirements: [ boto3 ] author: "Rob White (@wimnat)" options: cross_zone_load_balancing: description: - Indicates whether cross-zone load balancing is enabled. required: false default: no type: bool deletion_protection: description: - Indicates whether deletion protection for the ELB is enabled. required: false default: no type: bool listeners: description: - A list of dicts containing listeners to attach to the ELB. See examples for detail of the dict required. Note that listener keys are CamelCased. required: false name: description: - The name of the load balancer. This name must be unique within your AWS account, can have a maximum of 32 characters, must contain only alphanumeric characters or hyphens, and must not begin or end with a hyphen. required: true purge_listeners: description: - If yes, existing listeners will be purged from the ELB to match exactly what is defined by I(listeners) parameter. If the I(listeners) parameter is not set then listeners will not be modified default: yes type: bool purge_tags: description: - If yes, existing tags will be purged from the resource to match exactly what is defined by I(tags) parameter. If the I(tags) parameter is not set then tags will not be modified. required: false default: yes type: bool subnet_mappings: description: - A list of dicts containing the IDs of the subnets to attach to the load balancer. 
You can also specify the allocation ID of an Elastic IP to attach to the load balancer. You can specify one Elastic IP address per subnet. This parameter is mutually exclusive with I(subnets) required: false subnets: description: - A list of the IDs of the subnets to attach to the load balancer. You can specify only one subnet per Availability Zone. You must specify subnets from at least two Availability Zones. Required if state=present. This parameter is mutually exclusive with I(subnet_mappings) required: false scheme: description: - Internet-facing or internal load balancer. An ELB scheme can not be modified after creation. required: false default: internet-facing choices: [ 'internet-facing', 'internal' ] state: description: - Create or destroy the load balancer. required: true choices: [ 'present', 'absent' ] tags: description: - A dictionary of one or more tags to assign to the load balancer. required: false wait: description: - Whether or not to wait for the network load balancer to reach the desired state. type: bool wait_timeout: description: - The duration in seconds to wait, used in conjunction with I(wait). extends_documentation_fragment: - aws - ec2 notes: - Listeners are matched based on port. If a listener's port is changed then a new listener will be created. - Listener rules are matched based on priority. If a rule's priority is changed then a new rule will be created. ''' EXAMPLES = ''' # Note: These examples do not set authentication details, see the AWS Guide for details. # Create an ELB and attach a listener - elb_network_lb: name: myelb subnets: - subnet-012345678 - subnet-abcdef000 listeners: - Protocol: TCP # Required. The protocol for connections from clients to the load balancer (TCP or TLS) (case-sensitive). Port: 80 # Required. The port on which the load balancer is listening. DefaultActions: - Type: forward # Required. Only 'forward' is accepted at this time TargetGroupName: mytargetgroup # Required. 
The name of the target group state: present # Create an ELB with an attached Elastic IP address - elb_network_lb: name: myelb subnet_mappings: - SubnetId: subnet-012345678 AllocationId: eipalloc-aabbccdd listeners: - Protocol: TCP # Required. The protocol for connections from clients to the load balancer (TCP or TLS) (case-sensitive). Port: 80 # Required. The port on which the load balancer is listening. DefaultActions: - Type: forward # Required. Only 'forward' is accepted at this time TargetGroupName: mytargetgroup # Required. The name of the target group state: present # Remove an ELB - elb_network_lb: name: myelb state: absent ''' RETURN = ''' availability_zones: description: The Availability Zones for the load balancer. returned: when state is present type: list sample: "[{'subnet_id': 'subnet-aabbccddff', 'zone_name': 'ap-southeast-2a', 'load_balancer_addresses': []}]" canonical_hosted_zone_id: description: The ID of the Amazon Route 53 hosted zone associated with the load balancer. returned: when state is present type: str sample: ABCDEF12345678 created_time: description: The date and time the load balancer was created. returned: when state is present type: str sample: "2015-02-12T02:14:02+00:00" deletion_protection_enabled: description: Indicates whether deletion protection is enabled. returned: when state is present type: str sample: true dns_name: description: The public DNS name of the load balancer. returned: when state is present type: str sample: internal-my-elb-123456789.ap-southeast-2.elb.amazonaws.com idle_timeout_timeout_seconds: description: The idle timeout value, in seconds. returned: when state is present type: str sample: 60 ip_address_type: description: The type of IP addresses used by the subnets for the load balancer. returned: when state is present type: str sample: ipv4 listeners: description: Information about the listeners. 
returned: when state is present type: complex contains: listener_arn: description: The Amazon Resource Name (ARN) of the listener. returned: when state is present type: str sample: "" load_balancer_arn: description: The Amazon Resource Name (ARN) of the load balancer. returned: when state is present type: str sample: "" port: description: The port on which the load balancer is listening. returned: when state is present type: int sample: 80 protocol: description: The protocol for connections from clients to the load balancer. returned: when state is present type: str sample: HTTPS certificates: description: The SSL server certificate. returned: when state is present type: complex contains: certificate_arn: description: The Amazon Resource Name (ARN) of the certificate. returned: when state is present type: str sample: "" ssl_policy: description: The security policy that defines which ciphers and protocols are supported. returned: when state is present type: str sample: "" default_actions: description: The default actions for the listener. returned: when state is present type: str contains: type: description: The type of action. returned: when state is present type: str sample: "" target_group_arn: description: The Amazon Resource Name (ARN) of the target group. returned: when state is present type: str sample: "" load_balancer_arn: description: The Amazon Resource Name (ARN) of the load balancer. returned: when state is present type: str sample: arn:aws:elasticloadbalancing:ap-southeast-2:0123456789:loadbalancer/app/my-elb/001122334455 load_balancer_name: description: The name of the load balancer. returned: when state is present type: str sample: my-elb load_balancing_cross_zone_enabled: description: Indicates whether cross-zone load balancing is enabled. returned: when state is present type: str sample: true scheme: description: Internet-facing or internal load balancer. 
returned: when state is present type: str sample: internal state: description: The state of the load balancer. returned: when state is present type: dict sample: "{'code': 'active'}" tags: description: The tags attached to the load balancer. returned: when state is present type: dict sample: "{ 'Tag': 'Example' }" type: description: The type of load balancer. returned: when state is present type: str sample: network vpc_id: description: The ID of the VPC for the load balancer. returned: when state is present type: str sample: vpc-0011223344 ''' from ansible.module_utils.aws.core import AnsibleAWSModule from ansible.module_utils.ec2 import camel_dict_to_snake_dict, boto3_tag_list_to_ansible_dict, compare_aws_tags from ansible.module_utils.aws.elbv2 import NetworkLoadBalancer, ELBListeners, ELBListener def create_or_update_elb(elb_obj): """Create ELB or modify main attributes. json_exit here""" if elb_obj.elb: # ELB exists so check subnets, security groups and tags match what has been passed # Subnets if not elb_obj.compare_subnets(): elb_obj.modify_subnets() # Tags - only need to play with tags if tags parameter has been set to something if elb_obj.tags is not None: # Delete necessary tags tags_need_modify, tags_to_delete = compare_aws_tags(boto3_tag_list_to_ansible_dict(elb_obj.elb['tags']), boto3_tag_list_to_ansible_dict(elb_obj.tags), elb_obj.purge_tags) if tags_to_delete: elb_obj.delete_tags(tags_to_delete) # Add/update tags if tags_need_modify: elb_obj.modify_tags() else: # Create load balancer elb_obj.create_elb() # ELB attributes elb_obj.update_elb_attributes() elb_obj.modify_elb_attributes() # Listeners listeners_obj = ELBListeners(elb_obj.connection, elb_obj.module, elb_obj.elb['LoadBalancerArn']) listeners_to_add, listeners_to_modify, listeners_to_delete = listeners_obj.compare_listeners() # Delete listeners for listener_to_delete in listeners_to_delete: listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_delete, 
elb_obj.elb['LoadBalancerArn']) listener_obj.delete() listeners_obj.changed = True # Add listeners for listener_to_add in listeners_to_add: listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_add, elb_obj.elb['LoadBalancerArn']) listener_obj.add() listeners_obj.changed = True # Modify listeners for listener_to_modify in listeners_to_modify: listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_modify, elb_obj.elb['LoadBalancerArn']) listener_obj.modify() listeners_obj.changed = True # If listeners changed, mark ELB as changed if listeners_obj.changed: elb_obj.changed = True # Get the ELB again elb_obj.update() # Get the ELB listeners again listeners_obj.update() # Update the ELB attributes elb_obj.update_elb_attributes() # Convert to snake_case and merge in everything we want to return to the user snaked_elb = camel_dict_to_snake_dict(elb_obj.elb) snaked_elb.update(camel_dict_to_snake_dict(elb_obj.elb_attributes)) snaked_elb['listeners'] = [] for listener in listeners_obj.current_listeners: snaked_elb['listeners'].append(camel_dict_to_snake_dict(listener)) # Change tags to ansible friendly dict snaked_elb['tags'] = boto3_tag_list_to_ansible_dict(snaked_elb['tags']) elb_obj.module.exit_json(changed=elb_obj.changed, **snaked_elb) def delete_elb(elb_obj): if elb_obj.elb: elb_obj.delete() elb_obj.module.exit_json(changed=elb_obj.changed) def main(): argument_spec = ( dict( cross_zone_load_balancing=dict(type='bool'), deletion_protection=dict(type='bool'), listeners=dict(type='list', elements='dict', options=dict( Protocol=dict(type='str', required=True), Port=dict(type='int', required=True), SslPolicy=dict(type='str'), Certificates=dict(type='list'), DefaultActions=dict(type='list', required=True) ) ), name=dict(required=True, type='str'), purge_listeners=dict(default=True, type='bool'), purge_tags=dict(default=True, type='bool'), subnets=dict(type='list'), subnet_mappings=dict(type='list'), 
scheme=dict(default='internet-facing', choices=['internet-facing', 'internal']), state=dict(choices=['present', 'absent'], type='str'), tags=dict(type='dict'), wait_timeout=dict(type='int'), wait=dict(type='bool') ) ) module = AnsibleAWSModule(argument_spec=argument_spec, mutually_exclusive=[['subnets', 'subnet_mappings']]) # Check for subnets or subnet_mappings if state is present state = module.params.get("state") if state == 'present': if module.params.get("subnets") is None and module.params.get("subnet_mappings") is None: module.fail_json(msg="'subnets' or 'subnet_mappings' is required when state=present") # Quick check of listeners parameters listeners = module.params.get("listeners") if listeners is not None: for listener in listeners: for key in listener.keys(): if key == 'Protocol' and listener[key] not in ['TCP', 'TLS']: module.fail_json(msg="'Protocol' must be either 'TCP' or 'TLS'") connection = module.client('elbv2') connection_ec2 = module.client('ec2') elb = NetworkLoadBalancer(connection, connection_ec2, module) if state == 'present': create_or_update_elb(elb) else: delete_elb(elb) if __name__ == '__main__': main()
unknown
codeparrot/codeparrot-clean
---
title: Pre-Rendering
---

# Pre-Rendering

[MODES: framework]

<br/>
<br/>

Pre-Rendering allows you to speed up page loads for static content by rendering pages at build time instead of at runtime.

## Configuration

Pre-rendering is enabled via the `prerender` config in `react-router.config.ts`. The simplest configuration is a boolean `true`, which will pre-render all of the application's static paths based on `routes.ts`:

```ts filename=react-router.config.ts
import type { Config } from "@react-router/dev/config";

export default {
  prerender: true,
} satisfies Config;
```

The boolean `true` will not include any dynamic paths (i.e., `/blog/:slug`) because the parameter values are unknown. To configure specific paths including dynamic values, you can specify an array of paths:

```ts filename=react-router.config.ts
import type { Config } from "@react-router/dev/config";

let slugs = getPostSlugs();

export default {
  prerender: [
    "/",
    "/blog",
    ...slugs.map((s) => `/blog/${s}`),
  ],
} satisfies Config;
```

If you need to perform more complex and/or asynchronous logic to determine the paths, you can also provide a function that returns an array of paths. This function provides you with a `getStaticPaths` method you can use to avoid manually adding all of the static paths in your application:

```ts filename=react-router.config.ts
import type { Config } from "@react-router/dev/config";

export default {
  async prerender({ getStaticPaths }) {
    let slugs = await getPostSlugsFromCMS();
    return [
      ...getStaticPaths(), // "/" and "/blog"
      ...slugs.map((s) => `/blog/${s}`),
    ];
  },
} satisfies Config;
```

### Concurrency (unstable)

<docs-warning>This API is experimental and subject to breaking changes in minor/patch releases. Please use with caution and pay **very** close attention to release notes for relevant changes.</docs-warning>

By default, pages are pre-rendered one path at a time. You can enable concurrency to pre-render multiple paths in parallel, which can speed up build times in many cases. You should experiment with the value that provides the best performance for your app.

To specify concurrency, move your `prerender` config down into a `prerender.paths` field and specify the concurrency in `prerender.unstable_concurrency`:

```ts filename=react-router.config.ts
import type { Config } from "@react-router/dev/config";

let slugs = getPostSlugs();

export default {
  prerender: {
    paths: [
      "/",
      "/blog",
      ...slugs.map((s) => `/blog/${s}`),
    ],
    unstable_concurrency: 4,
  },
} satisfies Config;
```

## Pre-Rendering with/without a Runtime Server

Pre-Rendering can be used in two ways based on the `ssr` config value:

- Alongside a runtime SSR server with `ssr:true` (the default value)
- Deployed to a static file server with `ssr:false`

### Pre-rendering with `ssr:true`

When pre-rendering with `ssr:true`, you're indicating you will still have a runtime server but you are choosing to pre-render certain paths for quicker response times.

```ts filename=react-router.config.ts
import type { Config } from "@react-router/dev/config";

export default {
  // Can be omitted - defaults to true
  ssr: true,
  prerender: ["/", "/blog", "/blog/popular-post"],
} satisfies Config;
```

#### Data Loading and Pre-rendering

There is no extra application API for pre-rendering. Routes being pre-rendered use the same route `loader` functions as server rendering:

```tsx
export async function loader({ request, params }) {
  let post = await getPost(params.slug);
  return post;
}

export function Post({ loaderData }) {
  return <div>{loaderData.title}</div>;
}
```

Instead of a request coming to your route on a deployed server, the build creates a `new Request()` and runs it through your app just like a server would.

When server rendering, requests to paths that have not been pre-rendered will be server rendered as usual.

#### Static File Output

The rendered result will be written out to your `build/client` directory. You'll notice two files for each path:

- `[url].html` HTML file for initial document requests
- `[url].data` file for client side navigation browser requests

The output of your build will indicate what files were pre-rendered:

```sh
> react-router build
vite v5.2.11 building for production...
...
vite v5.2.11 building SSR bundle for production...
...
Prerender: Generated build/client/index.html
Prerender: Generated build/client/blog.data
Prerender: Generated build/client/blog/index.html
Prerender: Generated build/client/blog/my-first-post.data
Prerender: Generated build/client/blog/my-first-post/index.html
...
```

During development, pre-rendering doesn't save the rendered results to the public directory; this only happens for `react-router build`.

### Pre-rendering with `ssr:false`

The above examples assume you are deploying a runtime server but are pre-rendering some static pages to avoid hitting the server, resulting in faster loads.

To disable runtime SSR and configure pre-rendering to be served from a static file server, you can set the `ssr:false` config flag:

```ts filename=react-router.config.ts
import type { Config } from "@react-router/dev/config";

export default {
  ssr: false, // disable runtime server rendering
  prerender: true, // pre-render all static routes
} satisfies Config;
```

If you specify `ssr:false` without a `prerender` config, React Router refers to that as [SPA Mode](./spa). In SPA Mode, we render a single HTML file that is capable of hydrating for _any_ of your application paths. It can do this because it only renders the `root` route into the HTML file and then determines which child routes to load based on the browser URL during hydration. This means you can use a `loader` on the root route, but not on any other routes because we don't know which routes to load until hydration in the browser.

If you want to pre-render paths with `ssr:false`, those matched routes _can_ have loaders because we'll pre-render all of the matched routes for those paths, not just the root.

You cannot include `actions` or `headers` functions in any routes when `ssr:false` is set because there will be no runtime server to run them on.

#### Pre-rendering with a SPA Fallback

If you want `ssr:false` but don't want to pre-render _all_ of your routes - that's fine too! You may have some paths where you need the performance/SEO benefits of pre-rendering, but other pages where a SPA would be fine.

You can do this using the combination of config options as well - just limit your `prerender` config to the paths that you want to pre-render and React Router will also output a "SPA Fallback" HTML file that can be served to hydrate any other paths (using the same approach as [SPA Mode](./spa)).

This will be written to one of the following paths:

- `build/client/index.html` - If the `/` path is not pre-rendered
- `build/client/__spa-fallback.html` - If the `/` path is pre-rendered

```ts filename=react-router.config.ts
import type { Config } from "@react-router/dev/config";

export default {
  ssr: false,

  // SPA fallback will be written to build/client/index.html
  prerender: ["/about-us"],

  // SPA fallback will be written to build/client/__spa-fallback.html
  prerender: ["/", "/about-us"],
} satisfies Config;
```

You can configure your deployment server to serve this file for any path that otherwise would 404. Some hosts do this by default, but others don't. As an example, a host may support a `_redirects` file to do this:

```
# If you did not pre-render the `/` route
/* /index.html 200

# If you pre-rendered the `/` route
/* /__spa-fallback.html 200
```

If you're getting 404s at valid routes for your app, it's likely you need to configure your host. Here's another example of how you can do this with the [`sirv-cli`](https://www.npmjs.com/package/sirv-cli#user-content-single-page-applications) tool:

```sh
# If you did not pre-render the `/` route
sirv-cli build/client --single index.html

# If you pre-rendered the `/` route
sirv-cli build/client --single __spa-fallback.html
```

#### Invalid Exports

When pre-rendering with `ssr:false`, React Router will error at build time if you have invalid exports to help prevent some mistakes that can be easily overlooked.

- `headers`/`action` functions are prohibited in all routes because there will be no runtime server on which to run them
- When using `ssr:false` without a `prerender` config (SPA Mode), a `loader` is permitted on the root route only
- When using `ssr:false` with a `prerender` config, a `loader` is permitted on any route matched by a `prerender` path
- If you are using a `loader` on a pre-rendered route that has child routes, you will need to make sure the parent `loaderData` can be determined at run-time properly by either:
  - Pre-rendering all child routes so that the parent `loader` can be called at build-time for each child route path and rendered into a `.data` file, or
  - Using a `clientLoader` on the parent that can be called at run-time for non-pre-rendered child paths
unknown
github
https://github.com/remix-run/react-router
docs/how-to/pre-rendering.md
"""
WSGI config for djember project.

This module contains the WSGI application used by Django's development server
and any production WSGI deployments. It should expose a module-level variable
named ``application``. Django's ``runserver`` and ``runfcgi`` commands discover
this application via the ``WSGI_APPLICATION`` setting.

Usually you will have the standard Django WSGI application here, but it also
might make sense to replace the whole Django WSGI application with a custom one
that later delegates to the Django one. For example, you could introduce WSGI
middleware here, or combine a Django application with an application of another
framework.
"""
import os
from os.path import abspath, dirname
from sys import path

SITE_ROOT = dirname(dirname(abspath(__file__)))
path.append(SITE_ROOT)

# We defer to a DJANGO_SETTINGS_MODULE already in the environment. This breaks
# if running multiple sites in the same mod_wsgi process. To fix this, use
# mod_wsgi daemon mode with each site in its own daemon process, or use
# os.environ["DJANGO_SETTINGS_MODULE"] = "jajaja.settings"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "djember.settings.production")

# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
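The commented-out lines at the end of the config show where WSGI middleware would wrap the application. A minimal, Django-free sketch of that wrapping pattern (both the stand-in app and the middleware class are illustrative, not part of the djember project):

```python
def simple_app(environ, start_response):
    # Stand-in for the Django WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

class HeaderMiddleware:
    """WSGI middleware that injects an extra response header."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def patched_start_response(status, headers, exc_info=None):
            # Pass through, adding one header to the response.
            headers = list(headers) + [("X-Wrapped", "yes")]
            return start_response(status, headers, exc_info)

        return self.app(environ, patched_start_response)

application = HeaderMiddleware(simple_app)
```

Wrapping at this point works because any WSGI server only needs the module-level `application` callable; it never knows whether it is talking to Django directly or through layers of middleware.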
unknown
codeparrot/codeparrot-clean
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Reuters newswire topic classification dataset."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.contrib.keras.python.keras.datasets.reuters import get_word_index
from tensorflow.contrib.keras.python.keras.datasets.reuters import load_data

del absolute_import
del division
del print_function
unknown
codeparrot/codeparrot-clean
import threading

__author__ = "Mateusz Kobos"

"""
This code is a derivative of the code from ActiveState Code service at the
address:
http://code.activestate.com/recipes/577803-reader-writer-lock-with-priority-for-writers
and is licensed under the MIT license.
"""


class RWLock:
    """Synchronization object used in a solution of so-called second
    readers-writers problem. In this problem, many readers can simultaneously
    access a share, and a writer has exclusive access to this share.
    Additionally, the following constraints should be met:
    - no reader should be kept waiting if the share is currently opened for
      reading unless a writer is also waiting for the share,
    - no writer should be kept waiting for the share longer than absolutely
      necessary.

    The implementation is based on [1, secs. 4.2.2, 4.2.6, 4.2.7] with a
    modification -- adding an additional lock (C{self.__readers_queue}) --
    in accordance with [2].

    Sources:
    1. A.B. Downey: "The little book of semaphores", Version 2.1.5, 2008
    2. P.J. Courtois, F. Heymans, D.L. Parnas:
       "Concurrent Control with 'Readers' and 'Writers'",
       Communications of the ACM, 1971 (via [3])
    3. http://en.wikipedia.org/wiki/Readers-writers_problem
    """

    def __init__(self):
        self.__read_switch = _LightSwitch()
        self.__write_switch = _LightSwitch()
        self.__no_readers = threading.Lock()
        self.__no_writers = threading.Lock()
        self.__readers_queue = threading.Lock()
        """A lock giving an even higher priority to the writer in certain
        cases (see [2] for a discussion)"""

    def reader_acquire(self):
        self.__readers_queue.acquire()
        self.__no_readers.acquire()
        self.__read_switch.acquire(self.__no_writers)
        self.__no_readers.release()
        self.__readers_queue.release()

    def reader_release(self):
        self.__read_switch.release(self.__no_writers)

    def writer_acquire(self):
        self.__write_switch.acquire(self.__no_readers)
        self.__no_writers.acquire()

    def writer_release(self):
        self.__no_writers.release()
        self.__write_switch.release(self.__no_readers)


class _LightSwitch:
    """An auxiliary "light switch"-like object. The first thread turns on the
    "switch", the last one turns it off (see [1, sec. 4.2.2] for details)."""

    def __init__(self):
        self.__counter = 0
        self.__mutex = threading.Lock()

    def acquire(self, lock):
        self.__mutex.acquire()
        self.__counter += 1
        if self.__counter == 1:
            lock.acquire()
        self.__mutex.release()

    def release(self, lock):
        self.__mutex.acquire()
        self.__counter -= 1
        if self.__counter == 0:
            lock.release()
        self.__mutex.release()
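The "light switch" at the heart of this lock can be exercised on its own: the first acquirer takes the underlying lock, the last releaser gives it back, so a whole group of readers holds `no_writers` collectively. A self-contained sketch mirroring `_LightSwitch` (names are illustrative):

```python
import threading

class LightSwitch:
    """First caller of acquire() locks `lock`; last caller of release() unlocks it."""

    def __init__(self):
        self.counter = 0
        self.mutex = threading.Lock()

    def acquire(self, lock):
        with self.mutex:
            self.counter += 1
            if self.counter == 1:
                lock.acquire()

    def release(self, lock):
        with self.mutex:
            self.counter -= 1
            if self.counter == 0:
                lock.release()

no_writers = threading.Lock()
switch = LightSwitch()

switch.acquire(no_writers)   # first reader locks out writers
switch.acquire(no_writers)   # second reader: lock already held, just counted
assert no_writers.locked()

switch.release(no_writers)   # one reader leaves; writers still blocked
assert no_writers.locked()

switch.release(no_writers)   # last reader leaves; writers may proceed
assert not no_writers.locked()
```

`RWLock` composes two of these switches: readers share one switch on `no_writers`, writers share another on `no_readers`, and the extra `readers_queue` lock gives waiting writers priority over newly arriving readers.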
unknown
codeparrot/codeparrot-clean
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, SimpleMovingAverage
from quantopian.pipeline.data import morningstar
import pandas as pd
import numpy as np

v = morningstar.valuation

# --- Liquidity Factor ---
class AvgDailyDollarVolumeTraded(CustomFactor):
    inputs = [USEquityPricing.close, USEquityPricing.volume]
    window_length = 20

    def compute(self, today, assets, out, close_price, volume):
        out[:] = np.mean(close_price * volume, axis=0)

# --- Value & Growth Factor ---
class Value(CustomFactor):
    # EV_To_Sales_SalesGrowth_12M
    inputs = [morningstar.income_statement.total_revenue, v.enterprise_value]
    window_length = 252

    def compute(self, today, assets, out, sales, ev):
        out[:] = ev[-1] / ((sales[-1] * 4) /
                           (((sales[-1] * 4) - (sales[0]) * 4) / (sales[0] * 4)))

# --- Momentum Factor ---
# 9/13: Modified Momentum factor to include (I/S)*LT scheme (I=50d, S=20d, LT=140d)
class Momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 140

    def compute(self, today, assets, out, close):
        out[:] = ((close[-1] / close[-50]) / (close[-1] / (close[-20])) * close[-1])

# --- Quality Factor ---
class Quality(CustomFactor):
    inputs = [morningstar.operation_ratios.roe]
    window_length = 1

    def compute(self, today, assets, out, roe):
        out[:] = roe[-1]

# --- Volatility Factor ---
# 9/13: High Alpha Mean Reversion on 12M & 3M volatility
class Volatility(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 252

    def compute(self, today, assets, out, close):
        close = pd.DataFrame(data=close, columns=assets)
        # Since we are going to rank largest-is-best, we need to invert the sdev.
        out[:] = 1 / np.log(close).diff().std()

# Compute final rank and assign long and short baskets.
def before_trading_start(context, data):
    results = pipeline_output('factors').dropna()
    ranks = results.rank().mean(axis=1).order()

    context.shorts = 1 / ranks.head(200)
    context.shorts /= context.shorts.sum()

    context.longs = ranks.tail(200)
    context.longs /= context.longs.sum()

    update_universe(context.longs.index + context.shorts.index)

# Put any initialization logic here. The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
    pipe = Pipeline()
    pipe = attach_pipeline(pipe, name='factors')
    pipe.add(Value(), "value")
    pipe.add(Momentum(), "momentum")
    pipe.add(Quality(), "quality")
    pipe.add(Volatility(), "volatility")

    sma_200 = SimpleMovingAverage(inputs=[USEquityPricing.close],
                                  window_length=200)
    dollar_volume = AvgDailyDollarVolumeTraded()

    # Screen out penny stocks and low liquidity securities.
    pipe.set_screen((sma_200 > 5) & (dollar_volume > 10**7))

    context.spy = sid(8554)
    context.shorts = None
    context.longs = None

    schedule_function(rebalance, date_rules.month_start())
    schedule_function(cancel_open_orders, date_rules.every_day(),
                      time_rules.market_close())

# Will be called on every trade event for the securities you specify.
def handle_data(context, data):
    record(lever=context.account.leverage,
           exposure=context.account.net_leverage,
           num_pos=len(context.portfolio.positions),
           oo=len(get_open_orders()))

def cancel_open_orders(context, data):
    for security in get_open_orders():
        for order in get_open_orders(security):
            cancel_order(order)

def rebalance(context, data):
    for security in context.shorts.index:
        if get_open_orders(security):
            continue
        if security in data:
            order_target_percent(security, -context.shorts[security])

    for security in context.longs.index:
        if get_open_orders(security):
            continue
        if security in data:
            order_target_percent(security, context.longs[security])

    for security in context.portfolio.positions:
        if get_open_orders(security):
            continue
        if security in data:
            if security not in (context.longs.index + context.shorts.index):
                order_target_percent(security, 0)
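The ranking scheme in `before_trading_start` (rank each factor column, average the ranks, short the bottom of the combined ranking, long the top, then normalize each basket's weights to sum to 1) can be sketched without the Quantopian runtime. Everything here is illustrative: the toy tickers, the factor values, and the two-asset basket size:

```python
# Toy factor table; after ranking, higher is better for every factor.
factors = {
    "AAA": {"value": 0.8, "momentum": 0.1},
    "BBB": {"value": 0.3, "momentum": 0.9},
    "CCC": {"value": 0.5, "momentum": 0.4},
    "DDD": {"value": 0.1, "momentum": 0.2},
}

def mean_rank(table):
    """Average the per-factor ranks (1 = worst) for each asset."""
    names = sorted(table)
    keys = sorted(next(iter(table.values())))
    ranks = {name: 0.0 for name in names}
    for key in keys:
        ordered = sorted(names, key=lambda n: table[n][key])
        for pos, name in enumerate(ordered, start=1):
            ranks[name] += pos / len(keys)
    return ranks

ranks = mean_rank(factors)
ordered = sorted(ranks, key=ranks.get)  # worst combined rank first

# Bottom half -> shorts (weighted by inverse rank), top half -> longs.
shorts = {n: 1.0 / ranks[n] for n in ordered[:2]}
longs = {n: ranks[n] for n in ordered[2:]}

# Normalize each basket so its weights sum to 1, as the algorithm does.
short_total = sum(shorts.values())
long_total = sum(longs.values())
shorts = {n: w / short_total for n, w in shorts.items()}
longs = {n: w / long_total for n, w in longs.items()}
```

Weighting shorts by inverse rank (as the original does with `1 / ranks.head(200)`) concentrates short exposure in the worst-ranked names.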
unknown
codeparrot/codeparrot-clean
# Copyright 2003 Iddo Friedberg. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license.  Please see the LICENSE file that should have been included
# as part of this package.
"""A parser for the NCBI blastpgp version 2.2.5 output format.

Currently only supports the '-m 9' option (table w/ annotations).
Returns a BlastTableRec instance.
"""

import sys

# Add path to Bio
sys.path.append('../..')


class BlastTableEntry(object):
    def __init__(self, in_rec):
        bt_fields = in_rec.split()
        self.qid = bt_fields[0].split('|')
        self.sid = bt_fields[1].split('|')
        self.pid = float(bt_fields[2])
        self.ali_len = int(bt_fields[3])
        self.mis = int(bt_fields[4])
        self.gaps = int(bt_fields[5])
        self.q_bounds = (int(bt_fields[6]), int(bt_fields[7]))
        self.s_bounds = (int(bt_fields[8]), int(bt_fields[9]))
        self.e_value = float(bt_fields[10])
        self.bit_score = float(bt_fields[11])


class BlastTableRec(object):
    def __init__(self):
        self.program = None
        self.version = None
        self.date = None
        self.iteration = None
        self.query = None
        self.database = None
        self.entries = []

    def add_entry(self, entry):
        self.entries.append(entry)


reader_keywords = {'BLASTP': 'version',
                   'Iteration': 'iteration',
                   'Query': 'query',
                   'Database': 'database',
                   'Fields': 'fields'}


class BlastTableReader(object):
    def __init__(self, handle):
        self.handle = handle
        inline = self.handle.readline()
        # Zip forward to the start of a record.
        while inline and 'BLASTP' not in inline:
            inline = self.handle.readline()
        self._lookahead = inline
        self._n = 0
        self._in_header = 1

    def __next__(self):
        self.table_record = BlastTableRec()
        self._n += 1
        inline = self._lookahead
        if not inline:
            return None
        while inline:
            if inline[0] == '#':
                if self._in_header:
                    self._in_header = self._consume_header(inline)
                else:
                    break
            else:
                self._consume_entry(inline)
                self._in_header = 0
            inline = self.handle.readline()
        self._lookahead = inline
        self._in_header = 1
        return self.table_record

    if sys.version_info[0] < 3:
        def next(self):
            """Python 2 style alias for Python 3 style __next__ method."""
            return self.__next__()

    def _consume_entry(self, inline):
        current_entry = BlastTableEntry(inline)
        self.table_record.add_entry(current_entry)

    def _consume_header(self, inline):
        # Default to the current state so a header line matching no keyword
        # doesn't leave ``in_header`` unbound (a latent bug in the original).
        in_header = self._in_header
        for keyword in reader_keywords:
            if keyword in inline:
                in_header = self._Parse('_parse_%s' % reader_keywords[keyword],
                                        inline)
                break
        return in_header

    def _parse_version(self, inline):
        program, version, date = inline.split()[1:]
        self.table_record.program = program
        self.table_record.version = version
        self.table_record.date = date
        return 1

    def _parse_iteration(self, inline):
        self.table_record.iteration = int(inline.split()[2])
        return 1

    def _parse_query(self, inline):
        self.table_record.query = inline.split()[2:]
        return 1

    def _parse_database(self, inline):
        self.table_record.database = inline.split()[2]
        return 1

    def _parse_fields(self, inline):
        return 0

    def _Parse(self, method_name, inline):
        return getattr(self, method_name)(inline)
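The 12-column `-m 9` hit line that `BlastTableEntry` consumes can be illustrated standalone. The function and key names below are descriptive stand-ins for the attributes above, and the sample line is fabricated for illustration:

```python
def parse_m9_row(line):
    """Split one whitespace-delimited '-m 9' hit line into typed fields."""
    f = line.split()
    return {
        "query": f[0].split("|"),          # query id, split on '|'
        "subject": f[1].split("|"),        # subject id, split on '|'
        "pct_identity": float(f[2]),
        "align_len": int(f[3]),
        "mismatches": int(f[4]),
        "gaps": int(f[5]),
        "q_bounds": (int(f[6]), int(f[7])),
        "s_bounds": (int(f[8]), int(f[9])),
        "e_value": float(f[10]),
        "bit_score": float(f[11]),
    }

row = parse_m9_row(
    "gi|123|ref|NP_1 gi|456|ref|NP_2 98.50 100 2 0 1 100 5 104 1e-50 200.0"
)
```

This mirrors the field order `BlastTableEntry.__init__` expects: ids, percent identity, alignment length, mismatches, gaps, query/subject bounds, e-value, bit score.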
unknown
codeparrot/codeparrot-clean
# Orca
#
# Copyright 2005-2008 Google Inc.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the
# Free Software Foundation, Inc., Franklin Street, Fifth Floor,
# Boston MA  02110-1301 USA.

"""Dectalk voice definitions using ACSS.

This module encapsulates Dectalk-specific voice definitions.  It maps
device-independent ACSS voice definitions into appropriate Dectalk voice
parameter settings.
"""

__id__ = "$Id$"
__author__ = "T. V. Raman"
__version__ = "$Revision$"
__date__ = "$Date$"
__copyright__ = "Copyright (c) 2005-2008 Google Inc."
__license__ = "LGPL"

import chnames

# Handling of special characters
#
# Emacspeak uses Tcl syntax to communicate with its speech servers.  It
# embraces text in curly braces, so that at least {, }, and \ must be quoted
# when sending text to the speech server.  But individual speech engines have
# their own special characters in addition to those of Tcl.  Dectalk
# perceives speech parameters enclosed in square brackets, and Emacspeak
# exploits this to transmit speech settings to Dectalk.  Thus we must quote
# [ and ] too.

def makeSpecialCharMap():
    """Returns list of pairs mapping characters which are special for the
    Dectalk speech server to their replacements.
    """
    chars = r'{\}[]'
    return [(ch, ' ' + chnames.getCharacterName(ch) + ' ') for ch in chars]

# Speech parameters

_defined_voices = {}

# Map from ACSS dimensions to Dectalk settings:
_table = {}

# Family codes:
_table['family'] = {
    'male':   ' :np ',
    'paul':   ':np',
    'man':    ':nh',
    'harry':  ' :nh ',
    'dennis': ':nd',
    'frank':  ':nf',
    'betty':  ':nb',
    'female': ' :nb ',
    'ursula': ':nu',
    'wendy':  ':nw',
    'rita':   ':nr',
    'kid':    ':nk',
    'child':  ' :nk '
}

# average-pitch:
# Average pitch for standard male voice is 122hz --this is mapped to
# a setting of 5.
# Average pitch varies inversely with speaker head size --a child
# has a small head and a higher pitched voice.
# We change parameter head-size in conjunction with average pitch to
# produce a more natural change on the Dectalk.

def _update_map(table, key, format, settings):
    """Internal function to update acss->synth mapping."""
    table[key] = {}
    for setting in settings:
        table[key][setting[0]] = format % setting[1:]

# Male average pitch
_male_ap = [
    (0, 96, 115), (1, 101, 112), (2, 108, 109), (3, 112, 106), (4, 118, 103),
    (5, 122, 100), (6, 128, 98), (7, 134, 96), (8, 140, 94), (9, 147, 91)
]
_update_map(_table, ('male', 'average-pitch'), " ap %s hs %s ", _male_ap)
_update_map(_table, ('paul', 'average-pitch'), " ap %s hs %s ", _male_ap)

# Man has a big head --and a lower pitch for the middle setting
_man_ap = [
    (0, 50, 125), (1, 59, 123), (2, 68, 121), (3, 77, 120), (4, 83, 118),
    (5, 89, 115), (6, 95, 112), (7, 110, 105), (8, 125, 100), (9, 140, 95)
]
_update_map(_table, ('man', 'average-pitch'), " ap %s hs %s ", _man_ap)
_update_map(_table, ('harry', 'average-pitch'), " ap %s hs %s ", _man_ap)

_female_ap = [
    (0, 160, 115), (1, 170, 112), (2, 181, 109), (3, 192, 106), (4, 200, 103),
    (5, 208, 100), (6, 219, 98), (7, 225, 96), (8, 240, 94), (9, 260, 91)
]
_update_map(_table, ('female', 'average-pitch'), " ap %s hs %s ", _female_ap)
_update_map(_table, ('betty', 'average-pitch'), " ap %s hs %s ", _female_ap)

# The default DECtalk values for the pitch of the other voices seem
# to be as follows:
# Frank = 155, Dennis = 110, Ursula = 240, Rita = 106, Wendy = 200
# Kit = Child = 306
# Therefore, follow TV Raman's lead:
_frank_ap = [
    (0, 129, 115), (1, 134, 112), (2, 141, 109), (3, 145, 106), (4, 151, 103),
    (5, 155, 100), (6, 159, 98), (7, 165, 96), (8, 171, 94), (9, 178, 91)
]
_update_map(_table, ('frank', 'average-pitch'), " ap %s hs %s ", _frank_ap)

_dennis_ap = [
    (0, 84, 115), (1, 89, 112), (2, 96, 109), (3, 100, 106), (4, 106, 103),
    (5, 110, 100), (6, 116, 98), (7, 122, 96), (8, 128, 94), (9, 135, 91)
]
_update_map(_table, ('dennis', 'average-pitch'), " ap %s hs %s ", _dennis_ap)

_ursula_ap = [
    (0, 196, 115), (1, 206, 112), (2, 215, 109), (3, 224, 106), (4, 232, 103),
    (5, 240, 100), (6, 251, 98), (7, 265, 96), (8, 280, 94), (9, 300, 91)
]
_update_map(_table, ('ursula', 'average-pitch'), " ap %s hs %s ", _ursula_ap)

_rita_ap = [
    (0, 62, 115), (1, 72, 112), (2, 81, 109), (3, 90, 106), (4, 98, 103),
    (5, 106, 100), (6, 117, 98), (7, 131, 96), (8, 146, 94), (9, 166, 91)
]
_update_map(_table, ('rita', 'average-pitch'), " ap %s hs %s ", _rita_ap)

# For some reason, Wendy at a high pitch causes the
# synthesizer to click and eventually make a feedback sound!
# It doesn't seem to be the result of the pitch.
# Keeping head size constant for higher pitch seems to eliminate
# the problem.
_wendy_ap = [
    (0, 156, 115), (1, 166, 112), (2, 175, 109), (3, 184, 106), (4, 192, 103),
    (5, 200, 100), (6, 211, 100), (7, 225, 100), (8, 240, 100), (9, 260, 100)
]
_update_map(_table, ('wendy', 'average-pitch'), " ap %s hs %s ", _wendy_ap)

# Kit/Child can't have the traditional adult head size.
# The largest head size setting here is the smallest adult
# female head size.
_child_ap = [
    (0, 256, 91), (1, 266, 89), (2, 276, 87), (3, 286, 85), (4, 296, 83),
    (5, 306, 81), (6, 316, 79), (7, 326, 77), (8, 336, 75), (9, 346, 73)
]
_update_map(_table, ('child', 'average-pitch'), " ap %s hs %s ", _child_ap)
_update_map(_table, ('kit', 'average-pitch'), " ap %s hs %s ", _child_ap)

# pitch-range for male:
# Standard pitch range is 100 and is mapped to a setting of 5.
# A value of 0 produces a flat monotone voice --maximum value of 250
# produces a highly animated voice.
# Additionally, we also set the assertiveness of the voice so the
# voice is less assertive at lower pitch ranges.
_male_pr = [
    (0, 0, 0), (1, 20, 10), (2, 40, 20), (3, 60, 30), (4, 80, 40),
    (5, 100, 50), (6, 137, 60), (7, 174, 70), (8, 211, 80), (9, 250, 100)
]
_update_map(_table, ('male', 'pitch-range'), " pr %s as %s ", _male_pr)
_update_map(_table, ('paul', 'pitch-range'), " pr %s as %s ", _male_pr)
# For now, assume that standard pitch range is reasonably
# consistent for all male voices with the exception of harry
_update_map(_table, ('frank', 'pitch-range'), " pr %s as %s ", _male_pr)
_update_map(_table, ('dennis', 'pitch-range'), " pr %s as %s ", _male_pr)

_man_pr = [
    (0, 0, 0), (1, 16, 20), (2, 32, 40), (3, 48, 60), (4, 64, 80),
    (5, 80, 100), (6, 137, 100), (7, 174, 100), (8, 211, 100), (9, 250, 100)
]
_update_map(_table, ('man', 'pitch-range'), " pr %s as %s ", _man_pr)
_update_map(_table, ('harry', 'pitch-range'), " pr %s as %s ", _man_pr)

_female_pr = [
    (0, 0, 0), (1, 50, 10), (2, 80, 20), (3, 100, 25), (4, 110, 30),
    (5, 140, 35), (6, 165, 57), (7, 190, 75), (8, 220, 87), (9, 250, 100)
]
_update_map(_table, ('female', 'pitch-range'), " pr %s as %s ", _female_pr)
_update_map(_table, ('betty', 'pitch-range'), " pr %s as %s ", _female_pr)
# For now, assume that standard pitch range is reasonably
# consistent for all female voices, including kit
_update_map(_table, ('ursula', 'pitch-range'), " pr %s as %s ", _female_pr)
_update_map(_table, ('rita', 'pitch-range'), " pr %s as %s ", _female_pr)
_update_map(_table, ('wendy', 'pitch-range'), " pr %s as %s ", _female_pr)
_update_map(_table, ('kit', 'pitch-range'), " pr %s as %s ", _female_pr)
_update_map(_table, ('child', 'pitch-range'), " pr %s as %s ", _female_pr)

# Stress:
# On the Dectalk we vary four parameters:
# The hat rise, which controls the overall shape of the F0 contour
# for sentence level intonation and stress,
# The stress rise, which controls the level of stress on stressed
# syllables,
# the baseline fall for paragraph level intonation,
# and the quickness --a parameter that controls whether the final
# frequency targets are completely achieved in the phonetic transitions.
_male_stress = [
    (0, 0, 0, 0, 0), (1, 3, 6, 20, 3), (2, 6, 12, 40, 6), (3, 9, 18, 60, 9),
    (4, 12, 24, 80, 14), (5, 18, 32, 100, 18), (6, 34, 50, 100, 20),
    (7, 48, 65, 100, 35), (8, 63, 82, 100, 60), (9, 80, 90, 100, 40)
]
_update_map(_table, ('male', 'stress'), " hr %s sr %s qu %s bf %s ", _male_stress)
_update_map(_table, ('paul', 'stress'), " hr %s sr %s qu %s bf %s ", _male_stress)
# For now, grabbing these values for all males but Harry
_update_map(_table, ('frank', 'stress'), " hr %s sr %s qu %s bf %s ", _male_stress)
_update_map(_table, ('dennis', 'stress'), " hr %s sr %s qu %s bf %s ", _male_stress)

_man_stress = [
    (0, 0, 0, 0, 0), (1, 4, 6, 2, 2), (2, 8, 12, 4, 4), (3, 12, 18, 6, 6),
    (4, 16, 24, 8, 8), (5, 20, 30, 10, 9), (6, 40, 48, 32, 16),
    (7, 60, 66, 54, 22), (8, 80, 78, 77, 34), (9, 100, 100, 100, 40)
]
_update_map(_table, ('man', 'stress'), " hr %s sr %s qu %s bf %s ", _man_stress)
_update_map(_table, ('harry', 'stress'), " hr %s sr %s qu %s bf %s ", _man_stress)

_female_stress = [
    (0, 1, 1, 0, 0), (1, 3, 4, 11, 0), (2, 5, 8, 22, 0), (3, 8, 12, 33, 0),
    (4, 11, 16, 44, 0), (5, 14, 20, 55, 0), (6, 35, 40, 65, 10),
    (7, 56, 80, 75, 20), (8, 77, 90, 85, 30), (9, 100, 100, 100, 40)
]
_update_map(_table, ('female', 'stress'), " hr %s sr %s qu %s bf %s ", _female_stress)
_update_map(_table, ('betty', 'stress'), " hr %s sr %s qu %s bf %s ", _female_stress)
# For now, grabbing these values for all females including kit
_update_map(_table, ('ursula', 'stress'), " hr %s sr %s qu %s bf %s ", _female_stress)
_update_map(_table, ('rita', 'stress'), " hr %s sr %s qu %s bf %s ", _female_stress)
_update_map(_table, ('wendy', 'stress'), " hr %s sr %s qu %s bf %s ", _female_stress)
_update_map(_table, ('kit', 'stress'), " hr %s sr %s qu %s bf %s ", _female_stress)
_update_map(_table, ('child', 'stress'), " hr %s sr %s qu %s bf %s ", _female_stress)

# Richness:
# Smoothness and richness vary inversely.
# A maximally smooth voice produces a quieter effect;
# a rich voice is "bright" in contrast.
_male_richness = [
    (0, 0, 100), (1, 14, 80), (2, 28, 60), (3, 42, 40), (4, 56, 30),
    (5, 70, 28), (6, 60, 24), (7, 70, 16), (8, 80, 8), (9, 100, 0)
]
_update_map(_table, ('male', 'richness'), " ri %s sm %s ", _male_richness)
_update_map(_table, ('paul', 'richness'), " ri %s sm %s ", _male_richness)
# For now, grabbing these values for all males but Harry
_update_map(_table, ('frank', 'richness'), " ri %s sm %s ", _male_richness)
_update_map(_table, ('dennis', 'richness'), " ri %s sm %s ", _male_richness)

_man_richness = [
    (0, 100, 0), (1, 96, 3), (2, 93, 6), (3, 90, 9), (4, 88, 11),
    (5, 86, 12), (6, 60, 24), (7, 40, 44), (8, 20, 65), (9, 0, 70)
]
_update_map(_table, ('man', 'richness'), " ri %s sm %s ", _man_richness)
_update_map(_table, ('harry', 'richness'), " ri %s sm %s ", _man_richness)

_female_richness = [
    (0, 0, 100), (1, 8, 76), (2, 16, 52), (3, 24, 28), (4, 32, 10),
    (5, 40, 4), (6, 50, 3), (7, 65, 3), (8, 80, 2), (9, 100, 0)
]
_update_map(_table, ('female', 'richness'), " ri %s sm %s ", _female_richness)
_update_map(_table, ('betty', 'richness'), " ri %s sm %s ", _female_richness)
# For now, grabbing these values for all females including kit
_update_map(_table, ('ursula', 'richness'), " ri %s sm %s ", _female_richness)
_update_map(_table, ('rita', 'richness'), " ri %s sm %s ", _female_richness)
_update_map(_table, ('wendy', 'richness'), " ri %s sm %s ", _female_richness)
_update_map(_table, ('kit', 'richness'), " ri %s sm %s ", _female_richness)
_update_map(_table, ('child', 'richness'), " ri %s sm %s ", _female_richness)


def getrate(r):
    return int(180 + 4 * r)


def getvolume(v):
    return int(10 * v)


def getvoicelist():
    return _table['family'].keys()


def getvoice(acss):
    """Memoized function that returns synthesizer code for the specified
    ACSS setting.  Synthesizer code is a tuple of the form (open, close)
    where open sets the voice, and close resets it."""
    name = acss.name()
    if name in _defined_voices:
        return _defined_voices[name]
    _defined_voices[name] = acss2voice(acss)
    return _defined_voices[name]


def acss2voice(acss):
    """Return synthesizer code."""
    code = ""
    familyName = 'male'
    if 'family' in acss:
        familyName = acss['family']['name']
    if familyName in _table['family']:
        code += _table['family'][familyName]
    if 'rate' in acss:
        code += " :ra %s" % getrate(acss['rate'])
    if 'punctuations' in acss:
        code += " :punc %s" % acss['punctuations']
    if 'gain' in acss:
        code += " :volume set %s" % getvolume(acss['gain'])
    voice = ""
    dv = ""
    for d in ['average-pitch', 'pitch-range', 'richness', 'stress']:
        if d in acss:
            if (familyName, d) in _table:
                voice += _table[(familyName, d)][int(acss[d])]
    if voice:
        dv = " :dv %s" % voice
    if code or voice:
        code = "[%s %s]" % (code, dv)
    return (code, " [:np] ")
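The `_update_map` pattern above (a format string plus rows of `(setting, value, value, ...)` tuples) can be shown in isolation. The three sample rows reuse entries from `_male_ap`; the standalone `update_map` name is illustrative:

```python
# Mapping table from an abstract 0-9 scale to engine-specific control
# strings, mirroring the _update_map pattern in the Dectalk module.
table = {}

def update_map(table, key, fmt, settings):
    """Interpolate each row's tail into fmt, keyed by the row's first item."""
    table[key] = {s[0]: fmt % s[1:] for s in settings}

# Average-pitch rows: (setting, pitch-in-Hz, head-size).
update_map(table, ("male", "average-pitch"),
           " ap %s hs %s ", [(0, 96, 115), (5, 122, 100), (9, 147, 91)])

code = table[("male", "average-pitch")][5]
print(code)  # " ap 122 hs 100 "
```

Because `s[1:]` is a tuple, `fmt % s[1:]` fills all the `%s` placeholders at once, which is why every row must have exactly one more element than the format string has placeholders.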
unknown
codeparrot/codeparrot-clean
import distutils
import os

from setuptools import Command
from distutils.util import convert_path
from distutils import log
from distutils.errors import *

__all__ = ['config_file', 'edit_config', 'option_base', 'setopt']


def config_file(kind="local"):
    """Get the filename of the distutils, local, global, or per-user config

    `kind` must be one of "local", "global", or "user"
    """
    if kind == 'local':
        return 'setup.cfg'
    if kind == 'global':
        return os.path.join(
            os.path.dirname(distutils.__file__), 'distutils.cfg'
        )
    if kind == 'user':
        dot = os.name == 'posix' and '.' or ''
        return os.path.expanduser(convert_path("~/%spydistutils.cfg" % dot))
    raise ValueError(
        "config_file() type must be 'local', 'global', or 'user'", kind
    )


def edit_config(filename, settings, dry_run=False):
    """Edit a configuration file to include `settings`

    `settings` is a dictionary of dictionaries or ``None`` values, keyed by
    command/section name.  A ``None`` value means to delete the entire
    section, while a dictionary lists settings to be changed or deleted in
    that section.  A setting of ``None`` means to delete that setting.
    """
    from ConfigParser import RawConfigParser
    log.debug("Reading configuration from %s", filename)
    opts = RawConfigParser()
    opts.read([filename])
    for section, options in settings.items():
        if options is None:
            log.info("Deleting section [%s] from %s", section, filename)
            opts.remove_section(section)
        else:
            if not opts.has_section(section):
                log.debug("Adding new section [%s] to %s", section, filename)
                opts.add_section(section)
            for option, value in options.items():
                if value is None:
                    log.debug("Deleting %s.%s from %s",
                              section, option, filename)
                    opts.remove_option(section, option)
                    if not opts.options(section):
                        log.info("Deleting empty [%s] section from %s",
                                 section, filename)
                        opts.remove_section(section)
                else:
                    log.debug("Setting %s.%s to %r in %s",
                              section, option, value, filename)
                    opts.set(section, option, value)
    log.info("Writing %s", filename)
    if not dry_run:
        f = open(filename, 'w')
        opts.write(f)
        f.close()


class option_base(Command):
    """Abstract base class for commands that mess with config files"""

    user_options = [
        ('global-config', 'g',
         "save options to the site-wide distutils.cfg file"),
        ('user-config', 'u',
         "save options to the current user's pydistutils.cfg file"),
        ('filename=', 'f',
         "configuration file to use (default=setup.cfg)"),
    ]

    boolean_options = [
        'global-config', 'user-config',
    ]

    def initialize_options(self):
        self.global_config = None
        self.user_config = None
        self.filename = None

    def finalize_options(self):
        filenames = []
        if self.global_config:
            filenames.append(config_file('global'))
        if self.user_config:
            filenames.append(config_file('user'))
        if self.filename is not None:
            filenames.append(self.filename)
        if not filenames:
            filenames.append(config_file('local'))
        if len(filenames) > 1:
            raise DistutilsOptionError(
                "Must specify only one configuration file option", filenames
            )
        self.filename, = filenames


class setopt(option_base):
    """Save command-line options to a file"""

    description = "set an option in setup.cfg or another config file"

    user_options = [
        ('command=', 'c', 'command to set an option for'),
        ('option=', 'o', 'option to set'),
        ('set-value=', 's', 'value of the option'),
        ('remove', 'r', 'remove (unset) the value'),
    ] + option_base.user_options

    boolean_options = option_base.boolean_options + ['remove']

    def initialize_options(self):
        option_base.initialize_options(self)
        self.command = None
        self.option = None
        self.set_value = None
        self.remove = None

    def finalize_options(self):
        option_base.finalize_options(self)
        if self.command is None or self.option is None:
            raise DistutilsOptionError("Must specify --command *and* --option")
        if self.set_value is None and not self.remove:
            raise DistutilsOptionError("Must specify --set-value or --remove")

    def run(self):
        edit_config(
            self.filename,
            {self.command: {self.option.replace('-', '_'): self.set_value}},
            self.dry_run
        )
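The `edit_config` idea (apply a nested dict of section/option changes to an INI file, with `None` meaning "delete") can be sketched in Python 3 using `configparser`. This is a simplified port for illustration: it omits logging, `dry_run`, and the original's removal of sections left empty after an option delete:

```python
import configparser
import os
import tempfile

def edit_config(filename, settings):
    """Apply {section: {option: value-or-None} | None} edits to an INI file."""
    opts = configparser.RawConfigParser()
    opts.read([filename])  # missing files are silently skipped
    for section, options in settings.items():
        if options is None:
            opts.remove_section(section)
            continue
        if not opts.has_section(section):
            opts.add_section(section)
        for option, value in options.items():
            if value is None:
                opts.remove_option(section, option)
            else:
                opts.set(section, option, value)
    with open(filename, "w") as f:
        opts.write(f)

# Round-trip: set an option, then delete it again.
path = os.path.join(tempfile.mkdtemp(), "setup.cfg")
edit_config(path, {"easy_install": {"index_url": "https://example.invalid/simple"}})
edit_config(path, {"easy_install": {"index_url": None}})

check = configparser.RawConfigParser()
check.read([path])
```

After the second call the section still exists but holds no options, which is exactly the case the original's empty-section cleanup handles.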
unknown
codeparrot/codeparrot-clean
# -*- coding: utf-8 -*-
##############################################################################
#
#    OpenERP, Open Source Management Solution
#    Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
#    This program is free software: you can redistribute it and/or modify
#    it under the terms of the GNU Affero General Public License as
#    published by the Free Software Foundation, either version 3 of the
#    License, or (at your option) any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU Affero General Public License for more details.
#
#    You should have received a copy of the GNU Affero General Public License
#    along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################

from openerp.osv import fields, osv
from openerp.tools.translate import _


class account_fiscalyear_close_state(osv.osv_memory):
    """Closes Account Fiscalyear"""
    _name = "account.fiscalyear.close.state"
    _description = "Fiscalyear Close state"
    _columns = {
        'fy_id': fields.many2one('account.fiscalyear',
            'Fiscal Year to Close', required=True,
            help="Select a fiscal year to close"),
    }

    def data_save(self, cr, uid, ids, context=None):
        """
        This function closes the account fiscalyear.

        @param cr: the current row, from the database cursor,
        @param uid: the current user's ID for security checks,
        @param ids: List of Account fiscalyear close state's IDs
        """
        journal_period_obj = self.pool.get('account.journal.period')
        period_obj = self.pool.get('account.period')
        fiscalyear_obj = self.pool.get('account.fiscalyear')
        account_move_obj = self.pool.get('account.move')

        for data in self.read(cr, uid, ids, context=context):
            fy_id = data['fy_id'][0]
            account_move_ids = account_move_obj.search(cr, uid,
                [('period_id.fiscalyear_id', '=', fy_id),
                 ('state', '=', "draft")], context=context)
            if account_move_ids:
                raise osv.except_osv(_('Invalid Action!'),
                    _('In order to close a fiscalyear, you must first post '
                      'related journal entries.'))

            cr.execute('UPDATE account_journal_period '
                       'SET state = %s '
                       'WHERE period_id IN (SELECT id FROM account_period '
                       'WHERE fiscalyear_id = %s)', ('done', fy_id))
            cr.execute('UPDATE account_period SET state = %s '
                       'WHERE fiscalyear_id = %s', ('done', fy_id))
            cr.execute('UPDATE account_fiscalyear '
                       'SET state = %s WHERE id = %s', ('done', fy_id))
        self.invalidate_cache(cr, uid, context=context)
        return {'type': 'ir.actions.act_window_close'}

# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
unknown
codeparrot/codeparrot-clean
/*-------------------------------------------------------------------------
 *
 * joininfo.c
 *	  joininfo list manipulation routines
 *
 * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/optimizer/util/joininfo.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include "nodes/makefuncs.h"
#include "optimizer/joininfo.h"
#include "optimizer/pathnode.h"
#include "optimizer/paths.h"
#include "optimizer/planmain.h"
#include "optimizer/restrictinfo.h"

/*
 * have_relevant_joinclause
 *		Detect whether there is a joinclause that involves
 *		the two given relations.
 *
 * Note: the joinclause does not have to be evaluable with only these two
 * relations.  This is intentional.  For example consider
 *		SELECT * FROM a, b, c WHERE a.x = (b.y + c.z)
 * If a is much larger than the other tables, it may be worthwhile to
 * cross-join b and c and then use an inner indexscan on a.x.  Therefore
 * we should consider this joinclause as reason to join b to c, even though
 * it can't be applied at that join step.
 */
bool
have_relevant_joinclause(PlannerInfo *root,
						 RelOptInfo *rel1, RelOptInfo *rel2)
{
	bool		result = false;
	List	   *joininfo;
	Relids		other_relids;
	ListCell   *l;

	/*
	 * We could scan either relation's joininfo list; may as well use the
	 * shorter one.
	 */
	if (list_length(rel1->joininfo) <= list_length(rel2->joininfo))
	{
		joininfo = rel1->joininfo;
		other_relids = rel2->relids;
	}
	else
	{
		joininfo = rel2->joininfo;
		other_relids = rel1->relids;
	}

	foreach(l, joininfo)
	{
		RestrictInfo *rinfo = (RestrictInfo *) lfirst(l);

		if (bms_overlap(other_relids, rinfo->required_relids))
		{
			result = true;
			break;
		}
	}

	/*
	 * We also need to check the EquivalenceClass data structure, which might
	 * contain relationships not emitted into the joininfo lists.
	 */
	if (!result && rel1->has_eclass_joins && rel2->has_eclass_joins)
		result = have_relevant_eclass_joinclause(root, rel1, rel2);

	return result;
}

/*
 * add_join_clause_to_rels
 *	  Add 'restrictinfo' to the joininfo list of each relation it requires.
 *
 * Note that the same copy of the restrictinfo node is linked to by all the
 * lists it is in.  This allows us to exploit caching of information about
 * the restriction clause (but we must be careful that the information does
 * not depend on context).
 *
 * 'restrictinfo' describes the join clause
 * 'join_relids' is the set of relations participating in the join clause
 *				 (some of these could be outer joins)
 */
void
add_join_clause_to_rels(PlannerInfo *root,
						RestrictInfo *restrictinfo,
						Relids join_relids)
{
	int			cur_relid;

	/* Don't add the clause if it is always true */
	if (restriction_is_always_true(root, restrictinfo))
		return;

	/*
	 * Substitute the origin qual with constant-FALSE if it is provably always
	 * false.
	 *
	 * Note that we need to keep the same rinfo_serial, since it is in
	 * practice the same condition.  We also need to reset the
	 * last_rinfo_serial counter, which is essential to ensure that the
	 * RestrictInfos for the "same" qual condition get identical serial
	 * numbers (see deconstruct_distribute_oj_quals).
	 */
	if (restriction_is_always_false(root, restrictinfo))
	{
		int			save_rinfo_serial = restrictinfo->rinfo_serial;
		int			save_last_rinfo_serial = root->last_rinfo_serial;

		restrictinfo = make_restrictinfo(root,
										 (Expr *) makeBoolConst(false, false),
										 restrictinfo->is_pushed_down,
										 restrictinfo->has_clone,
										 restrictinfo->is_clone,
										 restrictinfo->pseudoconstant,
										 0, /* security_level */
										 restrictinfo->required_relids,
										 restrictinfo->incompatible_relids,
										 restrictinfo->outer_relids);
		restrictinfo->rinfo_serial = save_rinfo_serial;
		root->last_rinfo_serial = save_last_rinfo_serial;
	}

	cur_relid = -1;
	while ((cur_relid = bms_next_member(join_relids, cur_relid)) >= 0)
	{
		RelOptInfo *rel = find_base_rel_ignore_join(root, cur_relid);

		/* We only need to add the clause to baserels */
		if (rel == NULL)
			continue;
		rel->joininfo = lappend(rel->joininfo, restrictinfo);
	}
}

/*
 * remove_join_clause_from_rels
 *	  Delete 'restrictinfo' from all the joininfo lists it is in
 *
 * This reverses the effect of add_join_clause_to_rels.  It's used when we
 * discover that a relation need not be joined at all.
 *
 * 'restrictinfo' describes the join clause
 * 'join_relids' is the set of relations participating in the join clause
 *				 (some of these could be outer joins)
 */
void
remove_join_clause_from_rels(PlannerInfo *root,
							 RestrictInfo *restrictinfo,
							 Relids join_relids)
{
	int			cur_relid;

	cur_relid = -1;
	while ((cur_relid = bms_next_member(join_relids, cur_relid)) >= 0)
	{
		RelOptInfo *rel = find_base_rel_ignore_join(root, cur_relid);

		/* We would only have added the clause to baserels */
		if (rel == NULL)
			continue;

		/*
		 * Remove the restrictinfo from the list.  Pointer comparison is
		 * sufficient.
		 */
		Assert(list_member_ptr(rel->joininfo, restrictinfo));
		rel->joininfo = list_delete_ptr(rel->joininfo, restrictinfo);
	}
}
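The `have_relevant_joinclause` logic (scan the shorter relation's clause list, test each clause's required relids for overlap with the other relation's relid set) can be modeled in a few lines of Python. The dict-based relation shape and the toy relid numbers are illustrative, not PostgreSQL's actual structures:

```python
# Minimal model of have_relevant_joinclause: relids are sets of ints, and
# each joininfo entry is the set of relids a clause requires.
def have_relevant_joinclause(rel1, rel2):
    # Scan whichever relation has the shorter clause list.
    if len(rel1["joininfo"]) <= len(rel2["joininfo"]):
        joininfo, other = rel1["joininfo"], rel2["relids"]
    else:
        joininfo, other = rel2["joininfo"], rel1["relids"]
    # bms_overlap analogue: non-empty set intersection.
    return any(other & required for required in joininfo)

# Clause from the comment's example: a.x = (b.y + c.z) requires {a, b, c}.
clause = {1, 2, 3}
a = {"relids": {1}, "joininfo": [clause]}
b = {"relids": {2}, "joininfo": [clause]}
c = {"relids": {3}, "joininfo": [clause]}
d = {"relids": {4}, "joininfo": []}  # no clause mentions d
```

As the C comment explains, the clause counts as a reason to join `b` to `c` even though it can't be evaluated until `a` joins in.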
c
github
https://github.com/postgres/postgres
src/backend/optimizer/util/joininfo.c
/*
 * Copyright (C) 2013 The Guava Authors
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License
 * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
 * or implied. See the License for the specific language governing permissions and limitations under
 * the License.
 */

package com.google.common.eventbus;

/**
 * Handler for exceptions thrown by event subscribers.
 *
 * @since 16.0
 */
public interface SubscriberExceptionHandler {
  /** Handles exceptions thrown by subscribers. */
  void handleException(Throwable exception, SubscriberExceptionContext context);
}
java
github
https://github.com/google/guava
android/guava/src/com/google/common/eventbus/SubscriberExceptionHandler.java
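The record above is only an interface, so a quick sketch may help show how it is meant to be used: a handler that records subscriber failures instead of letting them propagate, which is the usual pattern for an event bus where one failing subscriber should not break dispatch. The `LoggingHandler` and `SubscriberExceptionContextStub` classes below are hypothetical stand-ins written for this sketch so it compiles without a Guava dependency; the real `SubscriberExceptionContext` lives in `com.google.common.eventbus` and also exposes the `EventBus` and the subscriber method.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for Guava's SubscriberExceptionContext, reduced to
// the single field this sketch needs.
class SubscriberExceptionContextStub {
    private final Object event;

    SubscriberExceptionContextStub(Object event) {
        this.event = event;
    }

    Object getEvent() {
        return event;
    }
}

// The interface from the record above, restated against the stub type so
// this file is self-contained.
interface SubscriberExceptionHandlerSketch {
    void handleException(Throwable exception, SubscriberExceptionContextStub context);
}

public class LoggingHandlerDemo {
    // A handler that logs failures and swallows them, so event dispatch
    // continues for the remaining subscribers.
    static final class LoggingHandler implements SubscriberExceptionHandlerSketch {
        final List<String> log = new ArrayList<>();

        @Override
        public void handleException(Throwable exception, SubscriberExceptionContextStub context) {
            log.add("subscriber failed on " + context.getEvent() + ": " + exception.getMessage());
        }
    }

    public static void main(String[] args) {
        LoggingHandler handler = new LoggingHandler();
        handler.handleException(new IllegalStateException("boom"),
                new SubscriberExceptionContextStub("UserCreated"));
        System.out.println(handler.log.get(0));
        // -> subscriber failed on UserCreated: boom
    }
}
```

In real Guava code, an equivalent handler would be passed to the `EventBus(SubscriberExceptionHandler)` constructor.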
```typescript
import type { HydrationState } from "../../lib/router/router";

import { createMemoryHistory } from "../../lib/router/history";
import { createRouter, IDLE_NAVIGATION } from "../../lib/router/router";
import type {
  AgnosticDataRouteObject,
  AgnosticRouteObject,
} from "../../lib/router/utils";
import { data, ErrorResponseImpl, redirect } from "../../lib/router/utils";

import { urlMatch } from "./utils/custom-matchers";
import {
  cleanup,
  createDeferred,
  getFetcherData,
  setup,
  TASK_ROUTES,
} from "./utils/data-router-setup";
import { createFormData, tick } from "./utils/utils";

interface CustomMatchers<R = jest.Expect> {
  urlMatch(url: string): R;
}

declare global {
  namespace jest {
    interface Expect extends CustomMatchers {}
    interface Matchers<R> extends CustomMatchers<R> {}
    interface InverseAsymmetricMatchers extends CustomMatchers {}
  }
}

expect.extend({
  urlMatch,
});

function initializeTest(init?: {
  url?: string;
  hydrationData?: HydrationState;
}) {
  return setup({
    routes: [
      {
        path: "",
        id: "root",
        hasErrorBoundary: true,
        loader: true,
        children: [
          { path: "/", id: "index", loader: true, action: true },
          { path: "/foo", id: "foo", loader: true, action: true },
          { path: "/bar", id: "bar", loader: true, action: true },
          { path: "/no-loader", id: "noLoader" },
        ],
      },
    ],
    hydrationData: init?.hydrationData || {
      loaderData: { root: "ROOT", index: "INDEX" },
    },
    ...(init?.url ? { initialEntries: [init.url] } : {}),
  });
}

beforeEach(() => {
  jest.spyOn(console, "warn").mockImplementation(() => {});
});

// Detect any failures inside the router navigate code
afterEach(() => {
  cleanup();

  // @ts-ignore
  console.warn.mockReset();
});

describe("a router", () => {
  describe("init", () => {
    it("requires routes", async () => {
      let history = createMemoryHistory({ initialEntries: ["/"] });
      expect(() =>
        createRouter({ routes: [], history, hydrationData: {} }),
      ).toThrowErrorMatchingInlineSnapshot(
        `"You must provide a non-empty routes array to createRouter"`,
      );
    });

    it("converts routes to data routes", async () => {
      let history = createMemoryHistory({
        initialEntries: ["/child/grandchild"],
      });
      let routes = [
        {
          path: "/",
          children: [
            {
              id: "child-keep-me",
              path: "child",
              children: [{ path: "grandchild" }],
            },
          ],
        },
      ];
      let originalRoutes = JSON.parse(JSON.stringify(routes));
      let router = createRouter({ routes, history, hydrationData: {} });

      // routes are not mutated in place
      expect(routes).toEqual(originalRoutes);
      expect(router.state.matches).toMatchObject([
        { route: { id: "0" } },
        { route: { id: "child-keep-me" } },
        { route: { id: "0-0-0" } },
      ]);
    });

    it("throws if it finds duplicate route ids", async () => {
      let history = createMemoryHistory({
        initialEntries: ["/child/grandchild"],
      });
      let routes = [
        {
          path: "/",
          children: [
            {
              id: "child",
              path: "child",
              children: [{ id: "child", path: "grandchild" }],
            },
          ],
        },
      ];
      expect(() =>
        createRouter({ routes, history, hydrationData: {} }),
      ).toThrowErrorMatchingInlineSnapshot(
        `"Found a route id collision on id "child". Route id's must be globally unique within Data Router usages"`,
      );
    });

    it("throws if it finds index routes with children", async () => {
      let routes: AgnosticRouteObject[] = [
        // @ts-expect-error
        {
          index: true,
          children: [{ path: "nope" }],
        },
      ];
      expect(() =>
        createRouter({ routes, history: createMemoryHistory() }),
      ).toThrowErrorMatchingInlineSnapshot(
        `"Cannot specify children on an index route"`,
      );
    });

    it("supports a basename prop for route matching", async () => {
      let history = createMemoryHistory({
        initialEntries: ["/base/name/path"],
      });
      let router = createRouter({
        basename: "/base/name",
        routes: [{ path: "path" }],
        history,
      });
      expect(router.state).toMatchObject({
        location: {
          hash: "",
          key: expect.any(String),
          pathname: "/base/name/path",
          search: "",
          state: null,
        },
        matches: [
          {
            params: {},
            pathname: "/path",
            pathnameBase: "/path",
            route: { id: "0", path: "path" },
          },
        ],
        initialized: true,
      });
    });

    it("supports a basename prop for route matching without a leading slash", async () => {
      let history = createMemoryHistory({
        initialEntries: ["/base/name/path"],
      });
      let router = createRouter({
        basename: "base/name",
        routes: [{ path: "path" }],
        history,
      });
      expect(router.state).toMatchObject({
        location: {
          hash: "",
          key: expect.any(String),
          pathname: "/base/name/path",
          search: "",
          state: null,
        },
        matches: [
          {
            params: {},
            pathname: "/path",
            pathnameBase: "/path",
            route: { id: "0", path: "path" },
          },
        ],
        initialized: true,
      });
    });

    it("supports subscribers", async () => {
      let history = createMemoryHistory({ initialEntries: ["/"] });
      let count = 0;
      let router = createRouter({
        routes: [
          {
            id: "root",
            path: "/",
            hasErrorBoundary: true,
            loader: () => ++count,
          },
        ],
        history,
        hydrationData: { loaderData: { root: 0 } },
      }).initialize();

      expect(router.state.loaderData).toEqual({ root: 0 });

      let subscriber = jest.fn();
      let unsubscribe = router.subscribe(subscriber);
      let subscriber2 = jest.fn();
      let unsubscribe2 = router.subscribe(subscriber2);

      await router.navigate("/?key=a");
      expect(subscriber.mock.calls[0][0].navigation.state).toBe("loading");
      expect(subscriber.mock.calls[0][0].navigation.location.search).toBe(
        "?key=a",
      );
      expect(subscriber.mock.calls[1][0].navigation.state).toBe("idle");
      expect(subscriber.mock.calls[1][0].location.search).toBe("?key=a");
      expect(subscriber2.mock.calls[0][0].navigation.state).toBe("loading");
      expect(subscriber2.mock.calls[0][0].navigation.location.search).toBe(
        "?key=a",
      );
      expect(subscriber2.mock.calls[1][0].navigation.state).toBe("idle");
      expect(subscriber2.mock.calls[1][0].location.search).toBe("?key=a");

      unsubscribe2();
      await router.navigate("/?key=b");
      expect(subscriber.mock.calls[2][0].navigation.state).toBe("loading");
      expect(subscriber.mock.calls[2][0].navigation.location.search).toBe(
        "?key=b",
      );
      expect(subscriber.mock.calls[3][0].navigation.state).toBe("idle");
      expect(subscriber.mock.calls[3][0].location.search).toBe("?key=b");

      unsubscribe();
      await router.navigate("/?key=c");
      expect(subscriber).toHaveBeenCalledTimes(4);
      expect(subscriber2).toHaveBeenCalledTimes(2);
    });
  });

  describe("no route match", () => {
    it("navigations to root catch", () => {
      let t = initializeTest();
      t.navigate("/not-found");
      expect(t.router.state.loaderData).toEqual({ root: "ROOT" });
      expect(t.router.state.errors).toEqual({
        root: new ErrorResponseImpl(
          404,
          "Not Found",
          new Error('No route matches URL "/not-found"'),
          true,
        ),
      });
      expect(t.router.state.matches).toMatchObject([
        {
          params: {},
          pathname: "",
          route: {
            hasErrorBoundary: true,
            children: expect.any(Array),
            id: "root",
            loader: expect.any(Function),
            path: "",
          },
        },
      ]);
    });

    it("matches root pathless route", () => {
      let t = setup({
        routes: [{ id: "root", children: [{ path: "foo" }] }],
      });

      t.navigate("/not-found");
      expect(t.router.state.errors).toEqual({
        root: new ErrorResponseImpl(
          404,
          "Not Found",
          new Error('No route matches URL "/not-found"'),
          true,
        ),
      });
      expect(t.router.state.matches).toMatchObject([
        {
          params: {},
          pathname: "",
          route: {
            id: "root",
            children: expect.any(Array),
          },
        },
      ]);
    });

    it("clears prior loader/action data", async () => {
      let t = initializeTest();
      expect(t.router.state.loaderData).toEqual({
        root: "ROOT",
        index: "INDEX",
      });

      let A = await t.navigate("/foo", {
        formMethod: "post",
        formData: createFormData({ key: "value" }),
      });
      await A.actions.foo.resolve("ACTION");
      await A.loaders.root.resolve("ROOT*");
      await A.loaders.foo.resolve("LOADER");
      expect(t.router.state.actionData).toEqual({ foo: "ACTION" });
      expect(t.router.state.loaderData).toEqual({
        root: "ROOT*",
        foo: "LOADER",
      });

      t.navigate("/not-found");
      expect(t.router.state.actionData).toBe(null);
      expect(t.router.state.loaderData).toEqual({ root: "ROOT*" });
      expect(t.router.state.errors).toEqual({
        root: new ErrorResponseImpl(
          404,
          "Not Found",
          new Error('No route matches URL "/not-found"'),
          true,
        ),
      });
      expect(t.router.state.matches).toMatchObject([
        {
          params: {},
          pathname: "",
          route: {
            hasErrorBoundary: true,
            children: expect.any(Array),
            id: "root",
            loader: expect.any(Function),
            path: "",
          },
        },
      ]);
    });
  });

  describe("navigation (new)", () => {
    it("navigates through a history stack without data loading", async () => {
      let t = setup({
        routes: [
          { id: "index", index: true },
          { id: "tasks", path: "tasks" },
          { id: "tasksId", path: "tasks/:id" },
        ],
        initialEntries: ["/"],
      });

      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: {
          pathname: "/",
          search: "",
          hash: "",
          state: null,
          key: expect.any(String),
        },
        navigation: IDLE_NAVIGATION,
        loaderData: {},
        matches: [expect.objectContaining({ pathname: "/" })],
      });
      expect(t.history.action).toEqual("POP");
      expect(t.history.location.pathname).toEqual("/");

      await t.navigate("/tasks");
      expect(t.router.state).toMatchObject({
        historyAction: "PUSH",
        location: {
          pathname: "/tasks",
          search: "",
          hash: "",
          state: null,
          key: expect.any(String),
        },
        navigation: IDLE_NAVIGATION,
        loaderData: {},
        matches: [expect.objectContaining({ pathname: "/tasks" })],
      });
      expect(t.history.action).toEqual("PUSH");
      expect(t.history.location.pathname).toEqual("/tasks");

      await t.navigate("/tasks/1", { replace: true });
      expect(t.router.state).toMatchObject({
        historyAction: "REPLACE",
        location: {
          pathname: "/tasks/1",
          search: "",
          hash: "",
          state: null,
          key: expect.any(String),
        },
        navigation: IDLE_NAVIGATION,
        loaderData: {},
        matches: [expect.objectContaining({ pathname: "/tasks/1" })],
      });
      expect(t.history.action).toEqual("REPLACE");
      expect(t.history.location.pathname).toEqual("/tasks/1");

      t.router.navigate(-1);
      await tick();
      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: {
          pathname: "/",
          search: "",
          hash: "",
          state: null,
          key: expect.any(String),
        },
        navigation: IDLE_NAVIGATION,
        loaderData: {},
        matches: [expect.objectContaining({ pathname: "/" })],
      });
      expect(t.history.action).toEqual("POP");
      expect(t.history.location.pathname).toEqual("/");

      await t.navigate("/tasks?foo=bar#hash");
      expect(t.router.state).toMatchObject({
        historyAction: "PUSH",
        location: {
          pathname: "/tasks",
          search: "?foo=bar",
          hash: "#hash",
          state: null,
          key: expect.any(String),
        },
        navigation: IDLE_NAVIGATION,
        loaderData: {},
        matches: [expect.objectContaining({ pathname: "/tasks" })],
      });
      expect(t.history.action).toEqual("PUSH");
      expect(t.history.location).toEqual({
        pathname: "/tasks",
        search: "?foo=bar",
        hash: "#hash",
        state: null,
        key: expect.any(String),
      });
    });

    it("navigates through a history stack without data loading (with a basename)", async () => {
      let t = setup({
        basename: "/base/name",
        routes: [
          { id: "index", index: true },
          { id: "tasks", path: "tasks" },
          { id: "tasksId", path: "tasks/:id" },
        ],
        initialEntries: ["/base/name"],
      });

      expect(t.router.state).toMatchObject({
        location: { pathname: "/base/name" },
        matches: [{ route: { id: "index" } }],
      });
      expect(t.history.action).toEqual("POP");
      expect(t.history.location.pathname).toEqual("/base/name");

      await t.navigate("/tasks");
      expect(t.router.state).toMatchObject({
        location: { pathname: "/base/name/tasks" },
        matches: [{ route: { id: "tasks" } }],
      });
      expect(t.history.action).toEqual("PUSH");
      expect(t.history.location.pathname).toEqual("/base/name/tasks");

      await t.navigate("/tasks/1");
      expect(t.router.state).toMatchObject({
        location: { pathname: "/base/name/tasks/1" },
        matches: [{ route: { id: "tasksId" } }],
      });
      expect(t.history.location.pathname).toEqual("/base/name/tasks/1");
    });

    it("handles 404 routes", () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
      });
      t.navigate("/junk");
      expect(t.router.state).toMatchObject({
        location: { pathname: "/junk" },
        navigation: IDLE_NAVIGATION,
        loaderData: {},
        errors: {
          root: new ErrorResponseImpl(
            404,
            "Not Found",
            new Error('No route matches URL "/junk"'),
            true,
          ),
        },
      });
    });

    it("handles 404 routes when the root route contains a path (initialization)", () => {
      let t = setup({
        routes: [
          {
            id: "root",
            path: "/path",
            children: [{ index: true }],
          },
        ],
        initialEntries: ["/junk"],
      });
      expect(t.router.state).toMatchObject({
        errors: {
          root: new ErrorResponseImpl(
            404,
            "Not Found",
            new Error('No route matches URL "/junk"'),
            true,
          ),
        },
        initialized: true,
        location: { pathname: "/junk" },
        matches: [{ route: { id: "root" } }],
      });
    });

    it("handles 404 routes when the root route contains a path (navigation)", () => {
      let t = setup({
        routes: [
          {
            id: "root",
            path: "/path",
            children: [{ index: true }],
          },
        ],
        initialEntries: ["/path"],
      });
      expect(t.router.state).toMatchObject({ errors: null });
      t.navigate("/junk");
      expect(t.router.state).toMatchObject({
        errors: {
          root: new ErrorResponseImpl(
            404,
            "Not Found",
            new Error('No route matches URL "/junk"'),
            true,
          ),
        },
        location: { pathname: "/junk" },
        matches: [{ route: { id: "root" } }],
      });
    });

    it("converts formData to URLSearchParams for unspecified formMethod", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
      });
      await t.navigate("/tasks", {
        formData: createFormData({ key: "value" }),
      });
      expect(t.router.state.navigation.state).toBe("loading");
      expect(t.router.state.navigation.location).toMatchObject({
        pathname: "/tasks",
        search: "?key=value",
      });
      expect(t.router.state.navigation.formMethod).toBe("GET");
      expect(t.router.state.navigation.formData).toEqual(
        createFormData({ key: "value" }),
      );
    });

    it("converts formData to URLSearchParams for formMethod=get", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
      });
      await t.navigate("/tasks", {
        formMethod: "get",
        formData: createFormData({ key: "value" }),
      });
      expect(t.router.state.navigation.state).toBe("loading");
      expect(t.router.state.navigation.location).toMatchObject({
        pathname: "/tasks",
        search: "?key=value",
      });
      expect(t.router.state.navigation.formMethod).toBe("GET");
      expect(t.router.state.navigation.formData).toEqual(
        createFormData({ key: "value" }),
      );
    });

    it("does not preserve existing 'action' URLSearchParams for formMethod='get'", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
      });
      await t.navigate("/tasks?key=1", {
        formMethod: "get",
        formData: createFormData({ key: "2" }),
      });
      expect(t.router.state.navigation.state).toBe("loading");
      expect(t.router.state.navigation.location).toMatchObject({
        pathname: "/tasks",
        search: "?key=2",
      });
      expect(t.router.state.navigation.formMethod).toBe("GET");
      expect(t.router.state.navigation.formData).toEqual(
        createFormData({ key: "2" }),
      );
    });

    it("preserves existing 'action' URLSearchParams for formMethod='post'", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
      });
      await t.navigate("/tasks?key=1", {
        formMethod: "post",
        formData: createFormData({ key: "2" }),
      });
      expect(t.router.state.navigation.state).toBe("submitting");
      expect(t.router.state.navigation.location).toMatchObject({
        pathname: "/tasks",
        search: "?key=1",
      });
      expect(t.router.state.navigation.formMethod).toBe("POST");
      expect(t.router.state.navigation.formData).toEqual(
        createFormData({ key: "2" }),
      );
    });

    it("url-encodes File names on GET submissions", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT DATA",
            index: "INDEX DATA",
          },
        },
      });
      let formData = new FormData();
      formData.append(
        "blob",
        new Blob(["<h1>Some html file contents</h1>"], {
          type: "text/html",
        }),
        "blob.html",
      );
      let A = await t.navigate("/tasks", {
        formMethod: "get",
        formData: formData,
      });
      let params = new URL(A.loaders.tasks.stub.mock.calls[0][0].request.url)
        .searchParams;
      expect(params.get("blob")).toEqual("blob.html");
    });

    it("returns a 405 error if attempting to submit with method=HEAD", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT DATA",
            index: "INDEX DATA",
          },
        },
      });
      let formData = new FormData();
      formData.append(
        "blob",
        new Blob(["<h1>Some html file contents</h1>"], {
          type: "text/html",
        }),
      );
      await t.navigate("/tasks", {
        // @ts-expect-error
        formMethod: "head",
        formData,
      });
      expect(t.router.state.navigation.state).toBe("idle");
      expect(t.router.state.location).toMatchObject({
        pathname: "/tasks",
        search: "",
      });
      expect(t.router.state.errors).toEqual({
        tasks: new ErrorResponseImpl(
          405,
          "Method Not Allowed",
          new Error('Invalid request method "HEAD"'),
          true,
        ),
      });
    });

    it("returns a 405 error if attempting to submit with method=OPTIONS", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT DATA",
            index: "INDEX DATA",
          },
        },
      });
      let formData = new FormData();
      formData.append(
        "blob",
        new Blob(["<h1>Some html file contents</h1>"], {
          type: "text/html",
        }),
      );
      await t.navigate("/tasks", {
        // @ts-expect-error
        formMethod: "options",
        formData: formData,
      });
      expect(t.router.state.navigation.state).toBe("idle");
      expect(t.router.state.location).toMatchObject({
        pathname: "/tasks",
        search: "",
      });
      expect(t.router.state.errors).toEqual({
        tasks: new ErrorResponseImpl(
          405,
          "Method Not Allowed",
          new Error('Invalid request method "OPTIONS"'),
          true,
        ),
      });
    });

    it("handles promises for navigations", async () => {
      let aDfd = createDeferred();
      let router = createRouter({
        history: createMemoryHistory(),
        routes: [
          { id: "index", path: "/" },
          { id: "a", path: "/a", loader: () => aDfd.promise },
        ],
      });

      let sequence: string[] = [];
      router.navigate("/a").then(() => sequence.push("/a complete"));
      await tick();
      expect(sequence).toEqual([]);

      aDfd.resolve("A DATA");
      await tick();
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);
      expect(sequence).toEqual(["/a complete"]);
    });

    it("handles promises for popstate navigations", async () => {
      let indexDfd = createDeferred();
      let router = createRouter({
        history: createMemoryHistory(),
        routes: [
          { id: "index", path: "/", loader: () => indexDfd.promise },
          { id: "a", path: "/a" },
        ],
        hydrationData: {
          loaderData: { index: "INDEX DATA" },
        },
      }).initialize();

      let sequence: string[] = [];
      await router.navigate("/a");
      expect(router.state.location.pathname).toBe("/a");
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);

      router.navigate(-1).then(() => sequence.push("back complete"));
      await tick();
      expect(sequence).toEqual([]);

      indexDfd.resolve("INDEX DATA");
      await tick();
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);
      expect(sequence).toEqual(["back complete"]);
    });

    it("handles promises for interrupted navigations", async () => {
      let indexDfd = createDeferred();
      let aDfd = createDeferred();
      let bDfd = createDeferred();
      let cDfd = createDeferred();
      let router = createRouter({
        history: createMemoryHistory(),
        routes: [
          { id: "index", path: "/", loader: () => indexDfd.promise },
          { id: "a", path: "/a", loader: () => aDfd.promise },
          { id: "b", path: "/b", loader: () => bDfd.promise },
          { id: "c", path: "/c", loader: () => cDfd.promise },
        ],
        hydrationData: {
          loaderData: { index: "INDEX DATA" },
        },
      });

      let sequence: string[] = [];
      router.navigate("/a").then(() => sequence.push("/a complete"));
      aDfd.resolve("A DATA");
      await tick();
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);

      router.navigate("/b").then(() => sequence.push("/b complete"));
      await tick();
      expect(sequence).toEqual(["/a complete"]);

      router.navigate("/c").then(() => sequence.push("/c complete"));
      await tick();
      expect(sequence).toEqual(["/a complete", "/b complete"]);

      bDfd.resolve("B DATA"); // no-op
      await tick();
      expect(router.state.navigation.state).toBe("loading");
      expect(sequence).toEqual(["/a complete", "/b complete"]);

      cDfd.resolve("C DATA");
      await tick();
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);
      expect(sequence).toEqual(["/a complete", "/b complete", "/c complete"]);
    });

    it("handles promises for interrupted popstate navigations", async () => {
      let indexDfd = createDeferred();
      let aDfd = createDeferred();
      let bDfd = createDeferred();
      let router = createRouter({
        history: createMemoryHistory(),
        routes: [
          { id: "index", path: "/", loader: () => indexDfd.promise },
          { id: "a", path: "/a", loader: () => aDfd.promise },
          { id: "b", path: "/b", loader: () => bDfd.promise },
        ],
        hydrationData: {
          loaderData: { index: "INDEX DATA" },
        },
      });

      let sequence: string[] = [];
      router.navigate("/a").then(() => sequence.push("/a complete"));
      aDfd.resolve("A DATA");
      await tick();
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);

      router.navigate(-1).then(() => sequence.push("back complete"));
      await tick();
      expect(sequence).toEqual(["/a complete"]);

      router.navigate("/b").then(() => sequence.push("/b complete"));
      await tick();
      expect(sequence).toEqual(["/a complete", "back complete"]);

      indexDfd.resolve("A DATA");
      await tick();
      expect(router.state.navigation.state).toBe("loading");
      expect(sequence).toEqual(["/a complete", "back complete"]);

      bDfd.resolve("B DATA");
      await tick();
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);
      expect(sequence).toEqual(["/a complete", "back complete", "/b complete"]);
    });

    it("handles promises for fetcher redirect interrupted popstate navigations", async () => {
      let indexDfd = createDeferred();
      let bDfd = createDeferred();
      let router = createRouter({
        history: createMemoryHistory(),
        routes: [
          { id: "index", path: "/", loader: () => indexDfd.promise },
          { id: "a", path: "/a" },
          { id: "b", path: "/b", loader: () => bDfd.promise },
          { id: "fetch", path: "/fetch", loader: () => redirect("/b") },
        ],
        hydrationData: {
          loaderData: { index: "INDEX DATA" },
        },
      });

      let sequence: string[] = [];
      router.navigate("/a").then(() => sequence.push("/a complete"));
      await tick();
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);

      router.navigate(-1).then(() => sequence.push("back complete"));
      await tick();
      expect(sequence).toEqual(["/a complete"]);

      router
        .fetch("key", "a", "/fetch")
        .then(() => sequence.push("fetch redirect complete"));
      await tick();
      expect(sequence).toEqual(["/a complete", "back complete"]);

      indexDfd.resolve("A DATA"); // no-op
      await tick();
      expect(router.state.navigation.state).toBe("loading");
      expect(sequence).toEqual(["/a complete", "back complete"]);

      bDfd.resolve("B DATA");
      await tick();
      expect(router.state.navigation).toBe(IDLE_NAVIGATION);
      expect(sequence).toEqual([
        "/a complete",
        "back complete",
        "fetch redirect complete",
      ]);
    });
  });

  describe("data loading (new)", () => {
    it("marks as initialized immediately when no loaders are present", async () => {
      let t = setup({
        routes: [{ id: "root", path: "/" }],
        initialEntries: ["/"],
      });
      expect(console.warn).not.toHaveBeenCalled();
      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: { pathname: "/" },
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: {},
      });
    });

    it("hydrates initial data", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT DATA",
            index: "INDEX DATA",
          },
        },
      });
      expect(console.warn).not.toHaveBeenCalled();
      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: { pathname: "/" },
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: {
          root: "ROOT DATA",
          index: "INDEX DATA",
        },
      });
    });

    it("does not run middlewares when complete hydrationData exists", async () => {
      let middlewareSpy = jest.fn();
      let loaderSpy = jest.fn();
      let router = createRouter({
        history: createMemoryHistory(),
        routes: [
          {
            id: "index",
            path: "/",
            middleware: [middlewareSpy],
            loader: loaderSpy,
          },
        ],
        hydrationData: {
          loaderData: { index: "INDEX DATA" },
        },
      });
      router.initialize();

      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: { pathname: "/" },
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: { index: "INDEX DATA" },
      });
      expect(middlewareSpy).not.toHaveBeenCalled();
      expect(loaderSpy).not.toHaveBeenCalled();
    });

    it("kicks off initial data load if no hydration data is provided", async () => {
      let parentDfd = createDeferred();
      let parentSpy = jest.fn(() => parentDfd.promise);
      let childDfd = createDeferred();
      let childSpy = jest.fn(() => childDfd.promise);
      let router = createRouter({
        history: createMemoryHistory({ initialEntries: ["/child"] }),
        routes: [
          {
            path: "/",
            loader: parentSpy,
            children: [{ path: "child", loader: childSpy }],
          },
        ],
      });
      router.initialize();

      expect(console.warn).not.toHaveBeenCalled();
      expect(parentSpy.mock.calls.length).toBe(1);
      expect(childSpy.mock.calls.length).toBe(1);
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/child" }),
        initialized: false,
        navigation: IDLE_NAVIGATION,
      });
      expect(router.state.loaderData).toEqual({});

      await parentDfd.resolve("PARENT DATA");
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/child" }),
        initialized: false,
        navigation: IDLE_NAVIGATION,
      });
      expect(router.state.loaderData).toEqual({});

      await childDfd.resolve("CHILD DATA");
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/child" }),
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: {
          "0": "PARENT DATA",
          "0-0": "CHILD DATA",
        },
      });

      router.dispose();
    });

    it("run middlewares without loaders on initial load if no hydration data is provided", async () => {
      let parentDfd = createDeferred();
      let parentSpy = jest.fn(() => parentDfd.promise);
      let childDfd = createDeferred();
      let childSpy = jest.fn(() => childDfd.promise);
      let router = createRouter({
        history: createMemoryHistory(),
        routes: [
          {
            path: "/",
            middleware: [parentSpy],
            children: [{ index: true, middleware: [childSpy] }],
          },
        ],
      });
      router.initialize();
      await tick();

      expect(console.warn).not.toHaveBeenCalled();
      expect(parentSpy.mock.calls.length).toBe(1);
      expect(childSpy.mock.calls.length).toBe(0);
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/" }),
        initialized: false,
        navigation: IDLE_NAVIGATION,
      });
      expect(router.state.loaderData).toEqual({});

      await parentDfd.resolve(undefined);
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/" }),
        initialized: false,
        navigation: IDLE_NAVIGATION,
      });
      expect(router.state.loaderData).toEqual({});

      await childDfd.resolve(undefined);
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/" }),
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: {},
      });

      router.dispose();
    });

    it("allows routes to be initialized with undefined loaderData", async () => {
      let t = setup({
        routes: [{ id: "root", path: "/", loader: true }],
        hydrationData: {
          loaderData: { root: undefined },
        },
      });
      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: { pathname: "/" },
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: { root: undefined },
      });
    });

    it("handles interruptions of initial data load", async () => {
      let parentDfd = createDeferred();
      let parentSpy = jest.fn(() => parentDfd.promise);
      let childDfd = createDeferred();
      let childSpy = jest.fn(() => childDfd.promise);
      let child2Dfd = createDeferred();
      let child2Spy = jest.fn(() => child2Dfd.promise);
      let router = createRouter({
        history: createMemoryHistory({ initialEntries: ["/child"] }),
        routes: [
          {
            path: "/",
            loader: parentSpy,
            children: [
              { path: "child", loader: childSpy },
              { path: "child2", loader: child2Spy },
            ],
          },
        ],
      });
      router.initialize();

      expect(console.warn).not.toHaveBeenCalled();
      expect(parentSpy.mock.calls.length).toBe(1);
      expect(childSpy.mock.calls.length).toBe(1);
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/child" }),
        initialized: false,
        navigation: IDLE_NAVIGATION,
      });
      expect(router.state.loaderData).toEqual({});

      await parentDfd.resolve("PARENT DATA");
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/child" }),
        initialized: false,
        navigation: IDLE_NAVIGATION,
      });
      expect(router.state.loaderData).toEqual({});

      router.navigate("/child2");
      await childDfd.resolve("CHILD DATA");
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/child" }),
        initialized: false,
        navigation: {
          state: "loading",
          location: { pathname: "/child2" },
        },
      });
      expect(router.state.loaderData).toEqual({});

      await child2Dfd.resolve("CHILD2 DATA");
      expect(router.state).toMatchObject({
        historyAction: "PUSH",
        location: expect.objectContaining({ pathname: "/child2" }),
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: {
          "0": "PARENT DATA",
          "0-1": "CHILD2 DATA",
        },
      });

      router.dispose();
    });

    it("handles errors in initial data load", async () => {
      let router = createRouter({
        history: createMemoryHistory({ initialEntries: ["/child"] }),
        routes: [
          {
            path: "/",
            loader: () => Promise.reject("Kaboom!"),
            children: [
              { path: "child", loader: () => Promise.resolve("child") },
            ],
          },
        ],
      });
      router.initialize();
      await tick();

      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/child" }),
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: { "0-0": "child" },
        errors: { "0": "Kaboom!" },
      });

      router.dispose();
    });

    it("handles initial load 404s when the error boundary router has a loader", async () => {
      let router = createRouter({
        history: createMemoryHistory({ initialEntries: ["/404"] }),
        routes: [
          {
            path: "/",
            hasErrorBoundary: true,
            loader: () => {},
          },
        ],
      });

      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/404" }),
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: {},
        errors: {
          "0": new ErrorResponseImpl(
            404,
            "Not Found",
            new Error('No route matches URL "/404"'),
            true,
          ),
        },
      });

      router.dispose();
    });

    it("kicks off initial data load when hash is present", async () => {
      let loaderDfd = createDeferred();
      let loaderSpy = jest.fn(() => loaderDfd.promise);
      let router = createRouter({
        history: createMemoryHistory({ initialEntries: ["/#hash"] }),
        routes: [{ path: "/", loader: loaderSpy }],
      });
      router.initialize();

      expect(console.warn).not.toHaveBeenCalled();
      expect(loaderSpy.mock.calls.length).toBe(1);
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/", hash: "#hash" }),
        initialized: false,
        navigation: IDLE_NAVIGATION,
      });
      expect(router.state.loaderData).toEqual({});

      await loaderDfd.resolve("DATA");
      expect(router.state).toMatchObject({
        historyAction: "POP",
        location: expect.objectContaining({ pathname: "/", hash: "#hash" }),
        initialized: true,
        navigation: IDLE_NAVIGATION,
        loaderData: { "0": "DATA" },
      });

      router.dispose();
    });

    it("executes loaders on push navigations", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT_DATA",
            index: "INDEX_DATA",
          },
        },
      });

      let nav1 = await t.navigate("/tasks");
      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: { pathname: "/" },
        navigation: {
          location: { pathname: "/tasks" },
          state: "loading",
        },
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      });
      expect(t.history.action).toEqual("POP");
      expect(t.history.location.pathname).toEqual("/");

      await nav1.loaders.tasks.resolve("TASKS_DATA");
      expect(t.router.state).toMatchObject({
        historyAction: "PUSH",
        location: { pathname: "/tasks" },
        navigation: IDLE_NAVIGATION,
        loaderData: {
          root: "ROOT_DATA",
          tasks: "TASKS_DATA",
        },
      });
      expect(t.history.action).toEqual("PUSH");
      expect(t.history.location.pathname).toEqual("/tasks");

      let nav2 = await t.navigate("/tasks/1");
      await nav2.loaders.tasksId.resolve("TASKS_ID_DATA");
      expect(t.router.state).toMatchObject({
        historyAction: "PUSH",
        location: { pathname: "/tasks/1" },
        navigation: IDLE_NAVIGATION,
        loaderData: {
          root: "ROOT_DATA",
          tasksId: "TASKS_ID_DATA",
        },
      });
      expect(t.history.action).toEqual("PUSH");
      expect(t.history.location.pathname).toEqual("/tasks/1");
    });

    it("executes loaders on replace navigations", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT_DATA",
            index: "INDEX_DATA",
          },
        },
      });

      let nav = await t.navigate("/tasks", { replace: true });
      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: { pathname: "/" },
        navigation: {
          location: { pathname: "/tasks" },
          state: "loading",
        },
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      });
      expect(t.history.action).toEqual("POP");
      expect(t.history.location.pathname).toEqual("/");

      await nav.loaders.tasks.resolve("TASKS_DATA");
      expect(t.router.state).toMatchObject({
        historyAction: "REPLACE",
        location: { pathname: "/tasks" },
        navigation: IDLE_NAVIGATION,
        loaderData: {
          root: "ROOT_DATA",
          tasks: "TASKS_DATA",
        },
      });
      expect(t.history.action).toEqual("REPLACE");
      expect(t.history.location.pathname).toEqual("/tasks");
    });

    it("executes loaders on go navigations", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/", "/tasks"],
        initialIndex: 0,
        hydrationData: {
          loaderData: {
            root: "ROOT_DATA",
            index: "INDEX_DATA",
          },
        },
      });

      // pop forward to /tasks
      let nav2 = await t.navigate(1);
      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: { pathname: "/" },
        navigation: {
          location: { pathname: "/tasks" },
          state: "loading",
        },
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      });
      expect(t.history.action).toEqual("POP");
      expect(t.history.location.pathname).toEqual("/tasks");

      await nav2.loaders.tasks.resolve("TASKS_DATA");
      expect(t.router.state).toMatchObject({
        historyAction: "POP",
        location: { pathname: "/tasks" },
        navigation: IDLE_NAVIGATION,
        loaderData: {
          root: "ROOT_DATA",
          tasks: "TASKS_DATA",
        },
      });
      expect(t.history.action).toEqual("POP");
      expect(t.history.location.pathname).toEqual("/tasks");
    });

    it("persists location keys throughout navigations", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT_DATA",
            index: "INDEX_DATA",
          },
        },
      });

      expect(t.router.state.location.key).toBe("default");
      let A = await t.navigate("/tasks");
      let navigationKey = t.router.state.navigation.location?.key;
      expect(t.router.state.location.key).toBe("default");
      expect(t.router.state.navigation.state).toBe("loading");
      expect(navigationKey).not.toBe("default");
      expect(Number(navigationKey?.length) > 0).toBe(true);

      await A.loaders.tasks.resolve("TASKS");
      expect(t.router.state.navigation.state).toBe("idle");

      // Make sure we keep the same location.key throughout the navigation and
      // history isn't creating a new one in history.push
      expect(t.router.state.location.key).toBe(navigationKey);
      expect(t.history.location.key).toBe(navigationKey);
    });

    it("sends proper arguments to loaders", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT_DATA",
            index: "INDEX_DATA",
          },
        },
      });

      let nav = await t.navigate("/tasks");
      expect(nav.loaders.tasks.stub).toHaveBeenCalledWith({
        params: {},
        request: new Request("http://localhost/tasks", {
          signal: nav.loaders.tasks.stub.mock.calls[0][0].request.signal,
        }),
        unstable_pattern: "/tasks",
        context: {},
      });

      let nav2 = await t.navigate("/tasks/1");
      expect(nav2.loaders.tasksId.stub).toHaveBeenCalledWith({
        params: { id: "1" },
        request: new Request("http://localhost/tasks/1", {
          signal: nav2.loaders.tasksId.stub.mock.calls[0][0].request.signal,
        }),
        unstable_pattern: "/tasks/:id",
        context: {},
      });

      let nav3 = await t.navigate("/tasks?foo=bar#hash");
      expect(nav3.loaders.tasks.stub).toHaveBeenCalledWith({
        params: {},
        request: new Request("http://localhost/tasks?foo=bar", {
          signal: nav3.loaders.tasks.stub.mock.calls[0][0].request.signal,
        }),
        unstable_pattern: "/tasks",
        context: {},
      });

      let nav4 = await t.navigate("/tasks#hash", {
        formData: createFormData({ foo: "bar" }),
      });
      expect(nav4.loaders.tasks.stub).toHaveBeenCalledWith({
        params: {},
        request: new Request("http://localhost/tasks?foo=bar", {
          signal: nav4.loaders.tasks.stub.mock.calls[0][0].request.signal,
        }),
        unstable_pattern: "/tasks",
        context: {},
      });
      expect(t.router.state.navigation.formAction).toBe("/tasks");
      expect(t.router.state.navigation?.location?.pathname).toBe("/tasks");
      expect(t.router.state.navigation?.location?.search).toBe("?foo=bar");
    });

    it("handles errors thrown from loaders", async () => {
      let t = setup({
        routes: TASK_ROUTES,
        initialEntries: ["/"],
        hydrationData: {
          loaderData: {
            root: "ROOT_DATA",
            index: "INDEX_DATA",
          },
        },
      });

      // Throw from tasks, handled by tasks
      let nav = await t.navigate("/tasks");
      await nav.loaders.tasks.reject("TASKS_ERROR");
      expect(t.router.state.navigation).toEqual(IDLE_NAVIGATION);
      expect(t.router.state.loaderData).toEqual({ root: "ROOT_DATA" });
      expect(t.router.state.errors).toEqual({ tasks: "TASKS_ERROR" });

      // Throw from index, handled by root
      let nav2 = await t.navigate("/");
      await nav2.loaders.index.reject("INDEX_ERROR");
      expect(t.router.state.navigation).toEqual(IDLE_NAVIGATION);
      expect(t.router.state.loaderData).toEqual({ root: "ROOT_DATA" });
```
    expect(t.router.state.errors).toEqual({
      root: "INDEX_ERROR",
    });
  });

  it("re-runs loaders on post-error navigations", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        errors: {
          root: "ROOT_ERROR",
        },
      },
    });

    // If a route has an error, we should call the loader if that route is
    // re-used on a navigation
    let nav = await t.navigate("/tasks");
    await nav.loaders.tasks.resolve("TASKS_DATA");
    expect(t.router.state.navigation.state).toEqual("loading");
    expect(t.router.state.loaderData).toEqual({});
    expect(t.router.state.errors).toEqual({
      root: "ROOT_ERROR",
    });

    await nav.loaders.root.resolve("ROOT_DATA");
    expect(t.router.state.navigation).toEqual(IDLE_NAVIGATION);
    expect(t.router.state.loaderData).toEqual({
      root: "ROOT_DATA",
      tasks: "TASKS_DATA",
    });
    expect(t.router.state.errors).toBe(null);
  });

  it("handles interruptions during navigations", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    let historySpy = jest.spyOn(t.history, "push");

    let nav = await t.navigate("/tasks");
    expect(t.router.state.navigation.state).toEqual("loading");
    expect(t.router.state.location.pathname).toEqual("/");
    expect(nav.loaders.tasks.signal.aborted).toBe(false);
    expect(t.history.action).toEqual("POP");
    expect(t.history.location.pathname).toEqual("/");

    // Interrupt and confirm prior loader was aborted
    let nav2 = await t.navigate("/tasks/1");
    expect(t.router.state.navigation.state).toEqual("loading");
    expect(t.router.state.location.pathname).toEqual("/");
    expect(nav.loaders.tasks.signal.aborted).toBe(true);
    expect(t.history.action).toEqual("POP");
    expect(t.history.location.pathname).toEqual("/");

    // Complete second navigation
    await nav2.loaders.tasksId.resolve("TASKS_ID_DATA");
    expect(t.router.state.navigation).toEqual(IDLE_NAVIGATION);
    expect(t.router.state.location.pathname).toEqual("/tasks/1");
    expect(t.history.location.pathname).toEqual("/tasks/1");
    expect(t.router.state.loaderData).toEqual({
      root: "ROOT_DATA",
      tasksId: "TASKS_ID_DATA",
    });
    expect(t.history.action).toEqual("PUSH");
    expect(t.history.location.pathname).toEqual("/tasks/1");

    // Resolve first navigation - should no-op
    await nav.loaders.tasks.resolve("TASKS_DATA");
    expect(t.router.state.navigation).toEqual(IDLE_NAVIGATION);
    expect(t.router.state.location.pathname).toEqual("/tasks/1");
    expect(t.history.location.pathname).toEqual("/tasks/1");
    expect(t.router.state.loaderData).toEqual({
      root: "ROOT_DATA",
      tasksId: "TASKS_ID_DATA",
    });
    expect(t.history.action).toEqual("PUSH");
    expect(t.history.location.pathname).toEqual("/tasks/1");

    expect(historySpy.mock.calls).toEqual([
      [
        expect.objectContaining({
          pathname: "/tasks/1",
        }),
        null,
      ],
    ]);
  });

  it("handles redirects thrown from loaders", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    let nav1 = await t.navigate("/tasks");
    expect(t.router.state).toMatchObject({
      historyAction: "POP",
      location: {
        pathname: "/",
      },
      navigation: {
        location: {
          pathname: "/tasks",
        },
        state: "loading",
      },
      loaderData: {
        root: "ROOT_DATA",
      },
      errors: null,
    });
    expect(t.history.action).toEqual("POP");
    expect(t.history.location.pathname).toEqual("/");

    let nav2 = await nav1.loaders.tasks.redirect("/tasks/1");
    // Should not abort if it redirected
    expect(nav1.loaders.tasks.signal.aborted).toBe(false);
    expect(t.router.state).toMatchObject({
      historyAction: "POP",
      location: {
        pathname: "/",
      },
      navigation: {
        location: {
          pathname: "/tasks/1",
        },
        state: "loading",
      },
      loaderData: {
        root: "ROOT_DATA",
      },
      errors: null,
    });
    expect(t.history.action).toEqual("POP");
    expect(t.history.location.pathname).toEqual("/");

    await nav2.loaders.tasksId.resolve("TASKS_ID_DATA");
    expect(t.router.state).toMatchObject({
      historyAction: "PUSH",
      location: {
        pathname: "/tasks/1",
      },
      navigation: IDLE_NAVIGATION,
      loaderData: {
        root: "ROOT_DATA",
        tasksId: "TASKS_ID_DATA",
      },
      errors: null,
    });
    expect(t.history.action).toEqual("PUSH");
    expect(t.history.location.pathname).toEqual("/tasks/1");
  });

  it("handles redirects returned from loaders", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    let nav1 = await t.navigate("/tasks");
    expect(t.router.state).toMatchObject({
      historyAction: "POP",
      location: {
        pathname: "/",
      },
      navigation: {
        location: {
          pathname: "/tasks",
        },
        state: "loading",
      },
      loaderData: {
        root: "ROOT_DATA",
      },
      errors: null,
    });
    expect(t.history.action).toEqual("POP");
    expect(t.history.location.pathname).toEqual("/");

    let nav2 = await nav1.loaders.tasks.redirectReturn("/tasks/1");
    // Should not abort if it redirected
    expect(nav1.loaders.tasks.signal.aborted).toBe(false);
    expect(t.router.state).toMatchObject({
      historyAction: "POP",
      location: {
        pathname: "/",
      },
      navigation: {
        location: {
          pathname: "/tasks/1",
        },
        state: "loading",
      },
      loaderData: {
        root: "ROOT_DATA",
      },
      errors: null,
    });
    expect(t.history.action).toEqual("POP");
    expect(t.history.location.pathname).toEqual("/");

    await nav2.loaders.tasksId.resolve("TASKS_ID_DATA");
    expect(t.router.state).toMatchObject({
      historyAction: "PUSH",
      location: {
        pathname: "/tasks/1",
      },
      navigation: IDLE_NAVIGATION,
      loaderData: {
        root: "ROOT_DATA",
        tasksId: "TASKS_ID_DATA",
      },
      errors: null,
    });
    expect(t.history.action).toEqual("PUSH");
    expect(t.history.location.pathname).toEqual("/tasks/1");
  });

  it("handles thrown non-redirect Responses as ErrorResponse's (text)", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    // Throw from tasks, handled by tasks
    let nav = await t.navigate("/tasks");
    await nav.loaders.tasks.reject(
      new Response("broken", { status: 400, statusText: "Bad Request" }),
    );
    expect(t.router.state).toMatchObject({
      navigation: IDLE_NAVIGATION,
      loaderData: {
        root: "ROOT_DATA",
      },
      actionData: null,
      errors: {
        tasks: new ErrorResponseImpl(400, "Bad Request", "broken"),
      },
    });
  });

  it("handles thrown non-redirect Responses as ErrorResponse's (json)", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    // Throw from tasks, handled by tasks
    let nav = await t.navigate("/tasks");
    await nav.loaders.tasks.reject(
      new Response(JSON.stringify({ key: "value" }), {
        status: 400,
        statusText: "Bad Request",
        headers: {
          "Content-Type": "application/json",
        },
      }),
    );
    expect(t.router.state).toMatchObject({
      navigation: IDLE_NAVIGATION,
      loaderData: {
        root: "ROOT_DATA",
      },
      actionData: null,
      errors: {
        tasks: new ErrorResponseImpl(400, "Bad Request", { key: "value" }),
      },
    });
  });

  it("handles thrown non-redirect Responses as ErrorResponse's (json utf8)", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    // Throw from tasks, handled by tasks
    let nav = await t.navigate("/tasks");
    await nav.loaders.tasks.reject(
      new Response(JSON.stringify({ key: "value" }), {
        status: 400,
        statusText: "Bad Request",
        headers: {
          "Content-Type": "application/json; charset=utf-8",
        },
      }),
    );
    expect(t.router.state).toMatchObject({
      navigation: IDLE_NAVIGATION,
      loaderData: {
        root: "ROOT_DATA",
      },
      actionData: null,
      errors: {
        tasks: new ErrorResponseImpl(400, "Bad Request", { key: "value" }),
      },
    });
  });

  it("handles thrown data() values as ErrorResponse's", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    let nav = await t.navigate("/tasks");
    await nav.loaders.tasks.reject(
      data("broken", { status: 400, statusText: "Bad Request" }),
    );
    expect(t.router.state).toMatchObject({
      navigation: IDLE_NAVIGATION,
      loaderData: {
        root: "ROOT_DATA",
      },
      errors: {
        tasks: new ErrorResponseImpl(400, "Bad Request", "broken"),
      },
    });
  });

  it("sends proper arguments to actions", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    let nav = await t.navigate("/tasks", {
      formMethod: "post",
      formData: createFormData({ query: "params" }),
    });
    expect(nav.actions.tasks.stub).toHaveBeenCalledWith({
      params: {},
      request: expect.any(Request),
      unstable_pattern: "/tasks",
      context: {},
    });

    // Assert request internals, cannot do a deep comparison above since some
    // internals aren't the same on separate creations
    let request = nav.actions.tasks.stub.mock.calls[0][0].request;
    expect(request.method).toBe("POST");
    expect(request.url).toBe("http://localhost/tasks");
    expect(request.headers.get("Content-Type")).toBe(
      "application/x-www-form-urlencoded;charset=UTF-8",
    );
    expect((await request.formData()).get("query")).toBe("params");

    await nav.actions.tasks.resolve("TASKS ACTION");

    let rootLoaderRequest = nav.loaders.root.stub.mock.calls[0][0].request;
    expect(rootLoaderRequest.method).toBe("GET");
    expect(rootLoaderRequest.url).toBe("http://localhost/tasks");

    let tasksLoaderRequest = nav.loaders.tasks.stub.mock.calls[0][0].request;
    expect(tasksLoaderRequest.method).toBe("GET");
    expect(tasksLoaderRequest.url).toBe("http://localhost/tasks");
  });

  it("sends proper arguments to actions (using query string)", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    let formData = createFormData({ query: "params" });
    let nav = await t.navigate("/tasks?foo=bar", {
      formMethod: "post",
      formData,
    });
    expect(nav.actions.tasks.stub).toHaveBeenCalledWith({
      params: {},
      request: expect.any(Request),
      unstable_pattern: expect.any(String),
      context: {},
    });

    // Assert request internals, cannot do a deep comparison above since some
    // internals aren't the same on separate creations
    let request = nav.actions.tasks.stub.mock.calls[0][0].request;
    expect(request.url).toBe("http://localhost/tasks?foo=bar");
    expect(request.method).toBe("POST");
    expect(request.headers.get("Content-Type")).toBe(
      "application/x-www-form-urlencoded;charset=UTF-8",
    );
    expect((await request.formData()).get("query")).toBe("params");
  });

  // https://fetch.spec.whatwg.org/#concept-method
  it("properly handles method=PATCH weirdness", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
          index: "INDEX_DATA",
        },
      },
    });

    let nav = await t.navigate("/tasks", {
      formMethod: "patch",
      formData: createFormData({ query: "params" }),
    });
    expect(nav.actions.tasks.stub).toHaveBeenCalledWith({
      params: {},
      request: expect.any(Request),
      unstable_pattern: expect.any(String),
      context: {},
    });

    // Assert request internals, cannot do a deep comparison above since some
    // internals aren't the same on separate creations
    let request = nav.actions.tasks.stub.mock.calls[0][0].request;
    expect(request.method).toBe("PATCH");
    expect(request.url).toBe("http://localhost/tasks");
    expect(request.headers.get("Content-Type")).toBe(
      "application/x-www-form-urlencoded;charset=UTF-8",
    );
    expect((await request.formData()).get("query")).toBe("params");

    await nav.actions.tasks.resolve("TASKS ACTION");

    let rootLoaderRequest = nav.loaders.root.stub.mock.calls[0][0].request;
    expect(rootLoaderRequest.method).toBe("GET");
    expect(rootLoaderRequest.url).toBe("http://localhost/tasks");

    let tasksLoaderRequest = nav.loaders.tasks.stub.mock.calls[0][0].request;
    expect(tasksLoaderRequest.method).toBe("GET");
    expect(tasksLoaderRequest.url).toBe("http://localhost/tasks");
  });

  it("handles multipart/form-data submissions", async () => {
    let t = setup({
      routes: [
        {
          id: "root",
          path: "/",
          action: true,
        },
      ],
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
        },
      },
    });

    let fd = new FormData();
    fd.append("key", "value");
    fd.append("file", new Blob(["1", "2", "3"]), "file.txt");

    let A = await t.navigate("/", {
      formMethod: "post",
      formEncType: "multipart/form-data",
      formData: fd,
    });

    expect(
      A.actions.root.stub.mock.calls[0][0].request.headers.get(
        "Content-Type",
      ),
    ).toMatch(
      /^multipart\/form-data; boundary=----formdata-undici-[a-z0-9]+/,
    );
  });

  it("url-encodes File names on x-www-form-urlencoded submissions", async () => {
    let t = setup({
      routes: [
        {
          id: "root",
          path: "/",
          action: true,
        },
      ],
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT_DATA",
        },
      },
    });

    let fd = new FormData();
    fd.append("key", "value");
    fd.append("file", new Blob(["1", "2", "3"]), "file.txt");

    let A = await t.navigate("/", {
      formMethod: "post",
      formEncType: "application/x-www-form-urlencoded",
      formData: fd,
    });

    let req = A.actions.root.stub.mock.calls[0][0].request.clone();
    expect((await req.formData()).get("file")).toEqual("file.txt");
  });

  it("races actions and loaders against abort signals", async () => {
    let loaderDfd = createDeferred();
    let actionDfd = createDeferred();
    let router = createRouter({
      routes: [
        {
          index: true,
        },
        {
          path: "foo",
          loader: () => loaderDfd.promise,
          action: () => actionDfd.promise,
        },
        {
          path: "bar",
        },
      ],
      hydrationData: { loaderData: { "0": null } },
      history: createMemoryHistory(),
    });
    expect(router.state.initialized).toBe(true);

    let fooPromise = router.navigate("/foo");
    expect(router.state.navigation.state).toBe("loading");

    let barPromise = router.navigate("/bar");

    // This should resolve _without_ us resolving the loader
    await fooPromise;
    await barPromise;

    expect(router.state.navigation.state).toBe("idle");
    expect(router.state.location.pathname).toBe("/bar");

    let fooPromise2 = router.navigate("/foo", {
      formMethod: "post",
      formData: createFormData({ key: "value" }),
    });
    expect(router.state.navigation.state).toBe("submitting");

    let barPromise2 = router.navigate("/bar");

    // This should resolve _without_ us resolving the action
    await fooPromise2;
    await barPromise2;

    expect(router.state.navigation.state).toBe("idle");
    expect(router.state.location.pathname).toBe("/bar");

    router.dispose();
  });

  it("allows returning undefined from actions/loaders", async () => {
    let t = setup({
      routes: [
        {
          index: true,
        },
        {
          id: "path",
          path: "/path",
          loader: true,
          action: true,
        },
      ],
    });

    let nav1 = await t.navigate("/path");
    await nav1.loaders.path.resolve(undefined);
    expect(t.router.state).toMatchObject({
      location: {
        pathname: "/path",
      },
      loaderData: {
        path: undefined,
      },
      errors: null,
    });

    await t.navigate("/");
    expect(t.router.state).toMatchObject({
      location: {
        pathname: "/",
      },
      errors: {},
    });

    let nav3 = await t.navigate("/path", {
      formMethod: "post",
      formData: createFormData({}),
    });
    await nav3.actions.path.resolve(undefined);
    await nav3.loaders.path.resolve("PATH");
    expect(t.router.state).toMatchObject({
      location: {
        pathname: "/path",
      },
      actionData: {
        path: undefined,
      },
      loaderData: {
        path: "PATH",
      },
      errors: null,
    });
  });
});

describe("router.enhanceRoutes", () => {
  // Detect any failures inside the router navigate code
  afterEach(() => cleanup());

  it("should retain existing routes until revalidation completes on loader removal", async () => {
    let t = initializeTest();
    let ogRoutes = t.router.routes;

    let A = await t.navigate("/foo");
    await A.loaders.foo.resolve("foo");
    expect(t.router.state.loaderData).toMatchObject({
      root: "ROOT",
      foo: "foo",
    });

    let newRoutes = t.enhanceRoutes([
      {
        path: "",
        id: "root",
        hasErrorBoundary: true,
        loader: true,
        children: [
          {
            path: "/",
            id: "index",
            loader: true,
            action: true,
            hasErrorBoundary: false,
          },
          {
            path: "/foo",
            id: "foo",
            loader: false,
            action: true,
            hasErrorBoundary: false,
          },
        ],
      },
    ]);
    t._internalSetRoutes(newRoutes);

    // Get a new revalidation helper that should use the updated routes
    let R = await t.revalidate();
    expect(t.router.state.revalidation).toBe("loading");
    // Should still expose the og routes until revalidation completes
    expect(t.router.routes).toBe(ogRoutes);

    // Resolve any loaders that should have run (foo's loader has been removed)
    await R.loaders.root.resolve("ROOT*");
    expect(t.router.state.revalidation).toBe("idle");

    // Routes should be updated
    expect(t.router.routes).not.toBe(ogRoutes);
    expect(t.router.routes).toEqual(newRoutes);

    // Loader data should be updated and foo removed
    expect(t.router.state.loaderData).toEqual({
      root: "ROOT*",
    });
    expect(t.router.state.errors).toEqual(null);
  });

  it("should retain existing routes until revalidation completes on loader addition", async () => {
    let t = initializeTest();
    let ogRoutes = t.router.routes;

    await t.navigate("/no-loader");
    expect(t.router.state.loaderData).toMatchObject({
      root: "ROOT",
    });

    let newRoutes = t.enhanceRoutes([
      {
        path: "",
        id: "root",
        hasErrorBoundary: true,
        loader: true,
        children: [
          {
            path: "/no-loader",
            id: "noLoader",
            loader: true,
            action: true,
            hasErrorBoundary: false,
          },
        ],
      },
    ]);
    t._internalSetRoutes(newRoutes);

    // Get a new revalidation helper that should use the updated routes
    let R = await t.revalidate();
    expect(t.router.state.revalidation).toBe("loading");
    // Should still expose the og routes until revalidation completes
    expect(t.router.routes).toBe(ogRoutes);

    // Resolve any loaders that should have run
    await R.loaders.root.resolve("ROOT*");
    await R.loaders.noLoader.resolve("NO_LOADER*");
    expect(t.router.state.revalidation).toBe("idle");

    // Routes should be updated
    expect(t.router.routes).not.toBe(ogRoutes);
    expect(t.router.routes).toEqual(newRoutes);

    // Loader data should be updated
    expect(t.router.state.loaderData).toEqual({
      root: "ROOT*",
      noLoader: "NO_LOADER*",
    });
    expect(t.router.state.errors).toEqual(null);
  });

  it("should retain existing routes until interrupting navigation completes", async () => {
    let t = initializeTest();
    let ogRoutes = t.router.routes;

    let A = await t.navigate("/foo");
    await A.loaders.foo.resolve("foo");
    expect(t.router.state.loaderData).toMatchObject({
      root: "ROOT",
      foo: "foo",
    });

    let newRoutes = t.enhanceRoutes([
      {
        path: "",
        id: "root",
        hasErrorBoundary: true,
        loader: true,
        children: [
          {
            path: "/",
            id: "index",
            loader: false,
            action: true,
            hasErrorBoundary: false,
          },
          {
            path: "/foo",
            id: "foo",
            loader: false,
            action: true,
            hasErrorBoundary: false,
          },
        ],
      },
    ]);
    t._internalSetRoutes(newRoutes);

    // Revalidate and interrupt with a navigation
    let R = await t.revalidate();
    let N = await t.navigate("/?revalidate");
    expect(t.router.state.navigation.state).toBe("loading");
    expect(t.router.state.revalidation).toBe("loading");
    // Should still expose the og routes until navigation completes
    expect(t.router.routes).toBe(ogRoutes);

    // Revalidation cancelled so this shouldn't make it through
    await R.loaders.root.resolve("ROOT STALE");

    // Resolve any loaders that should have run (foo's loader has been removed)
    await N.loaders.root.resolve("ROOT*");
    expect(t.router.state.navigation.state).toBe("idle");
    expect(t.router.state.revalidation).toBe("idle");

    // Routes should be updated
    expect(t.router.routes).not.toBe(ogRoutes);
    expect(t.router.routes).toEqual(newRoutes);

    // Loader data should be updated
    expect(t.router.state.loaderData).toEqual({
      root: "ROOT*",
    });
    expect(t.router.state.errors).toEqual(null);
  });

  it("should retain existing routes until interrupted navigation completes", async () => {
    let t = initializeTest();
    let ogRoutes = t.router.routes;

    let N = await t.navigate("/foo");

    let newRoutes = t.enhanceRoutes([
      {
        path: "",
        id: "root",
        hasErrorBoundary: true,
        loader: true,
        children: [
          {
            path: "/",
            id: "index",
            loader: false,
            action: true,
            hasErrorBoundary: false,
          },
          {
            path: "/foo",
            id: "foo",
            loader: false,
            action: true,
            hasErrorBoundary: false,
          },
        ],
      },
    ]);
    t._internalSetRoutes(newRoutes);

    // Interrupt /foo navigation with a revalidation
    let R = await t.revalidate();
    expect(t.router.state.navigation.state).toBe("loading");
    expect(t.router.state.revalidation).toBe("loading");
    // Should still expose the og routes until navigation completes
    expect(t.router.routes).toBe(ogRoutes);

    // Navigation interrupted so this shouldn't make it through
    await N.loaders.root.resolve("ROOT STALE");

    // Resolve any loaders that should have run (foo's loader has been removed)
    await R.loaders.root.resolve("ROOT*");
    expect(t.router.state.navigation.state).toBe("idle");
    expect(t.router.state.revalidation).toBe("idle");

    // Routes should be updated
    expect(t.router.routes).not.toBe(ogRoutes);
    expect(t.router.routes).toEqual(newRoutes);

    // Loader data should be updated
    expect(t.router.state.loaderData).toEqual({
      root: "ROOT*",
    });
    expect(t.router.state.errors).toEqual(null);
  });

  it("should retain existing routes until revalidation completes on loader removal (fetch)", async () => {
    let rootDfd = createDeferred();
    let fooDfd = createDeferred();
    let ogRoutes: AgnosticDataRouteObject[] = [
      {
        path: "/",
        id: "root",
        hasErrorBoundary: true,
        loader: () => rootDfd.promise,
        children: [
          {
            index: true,
            id: "index",
            hasErrorBoundary: false,
          },
          {
            path: "foo",
            id: "foo",
            loader: () => fooDfd.promise,
            children: undefined,
            hasErrorBoundary: false,
          },
        ],
      },
    ];
    let router = createRouter({
      routes: ogRoutes,
      history: createMemoryHistory(),
      hydrationData: {
        loaderData: {
          root: "ROOT INITIAL",
        },
      },
    });
    let fetcherData = getFetcherData(router);
    router.initialize();

    let key = "key";
    router.fetch(key, "root", "/foo");
    await fooDfd.resolve("FOO");
    await tick();
    expect(fetcherData.get(key)).toBe("FOO");

    let rootDfd2 = createDeferred();
    let newRoutes: AgnosticDataRouteObject[] = [
      {
        path: "/",
        id: "root",
        loader: () => rootDfd2.promise,
        hasErrorBoundary: true,
        children: [
          {
            index: true,
            id: "index",
            hasErrorBoundary: false,
          },
          {
            path: "foo",
            id: "foo",
            children: undefined,
            hasErrorBoundary: false,
          },
        ],
      },
    ];
    router._internalSetRoutes(newRoutes);

    // Interrupt /foo navigation with a revalidation
    router.revalidate();
    expect(router.state.revalidation).toBe("loading");
    // Should still expose the og routes until navigation completes
    expect(router.routes).toEqual(ogRoutes);

    // Resolve any loaders that should have run (foo's loader has been removed)
    await rootDfd2.resolve("ROOT*");
    await tick();
    expect(router.state.revalidation).toBe("idle");

    // Routes should be updated
    expect(router.routes).not.toEqual(ogRoutes);
    expect(router.routes).toEqual(newRoutes);

    // Loader data should be updated
    expect(router.state.loaderData).toEqual({
      root: "ROOT*",
    });

    // Fetcher should have been revalidated but throw an error since the
    // loader was removed
    // The data remains in the UI layer in this test setup since it hasn't
    // unmounted - but normally it would unmount and the data would be removed
    expect(fetcherData.get("key")).toBe("FOO");
    expect(router.state.errors).toMatchInlineSnapshot(`
      {
        "root": ErrorResponseImpl {
          "data": "Error: No route matches URL \"/foo\"",
          "error": [Error: No route matches URL "/foo"],
          "internal": true,
          "status": 404,
          "statusText": "Not Found",
        },
      }
    `);

    cleanup(router);
  });

  it("should retain existing routes until revalidation completes on route removal (fetch)", async () => {
    let rootDfd = createDeferred();
    let fooDfd = createDeferred();
    let ogRoutes: AgnosticDataRouteObject[] = [
      {
        path: "/",
        id: "root",
        hasErrorBoundary: true,
        loader: () => rootDfd.promise,
        children: [
          {
            index: true,
            id: "index",
            hasErrorBoundary: false,
          },
          {
            path: "foo",
            id: "foo",
            loader: () => fooDfd.promise,
            children: undefined,
            hasErrorBoundary: false,
          },
        ],
      },
    ];
    let router = createRouter({
      routes: ogRoutes,
      history: createMemoryHistory(),
      hydrationData: {
        loaderData: {
          root: "ROOT INITIAL",
        },
      },
    });
    let fetcherData = getFetcherData(router);
    router.initialize();

    let key = "key";
    router.fetch(key, "root", "/foo");
    await fooDfd.resolve("FOO");
    expect(fetcherData.get(key)).toBe("FOO");

    let rootDfd2 = createDeferred();
    let newRoutes: AgnosticDataRouteObject[] = [
      {
        path: "/",
        id: "root",
        loader: () => rootDfd2.promise,
        hasErrorBoundary: true,
        children: [
          {
            index: true,
            id: "index",
            hasErrorBoundary: false,
          },
        ],
      },
    ];
    router._internalSetRoutes(newRoutes);

    // Interrupt /foo navigation with a revalidation
    router.revalidate();
    expect(router.state.revalidation).toBe("loading");
    // Should still expose the og routes until navigation completes
    expect(router.routes).toEqual(ogRoutes);

    // Resolve any loaders that should have run (foo's loader has been removed)
    await rootDfd2.resolve("ROOT*");
    expect(router.state.revalidation).toBe("idle");

    // Routes should be updated
    expect(router.routes).not.toEqual(ogRoutes);
    expect(router.routes).toEqual(newRoutes);

    // Loader data should be updated
    expect(router.state.loaderData).toEqual({
      root: "ROOT*",
    });

    // Fetcher should have been revalidated but thrown a 404 since the route was removed
    // The data remains in the UI layer in this test setup since it hasn't
    // unmounted - but normally it would unmount and the data would be removed
    expect(fetcherData.get(key)).toBe("FOO");
    expect(router.state.errors).toEqual({
      root: new ErrorResponseImpl(
        404,
        "Not Found",
        new Error('No route matches URL "/foo"'),
        true,
      ),
    });

    cleanup(router);
  });
});

describe("router.dispose", () => {
  it("should cancel pending navigations", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT DATA",
          index: "INDEX DATA",
        },
      },
    });

    let A = await t.navigate("/tasks");
    expect(t.router.state.navigation.state).toBe("loading");

    t.router.dispose();
    expect(A.loaders.tasks.signal.aborted).toBe(true);
  });

  it("should cancel pending fetchers", async () => {
    let t = setup({
      routes: TASK_ROUTES,
      initialEntries: ["/"],
      hydrationData: {
        loaderData: {
          root: "ROOT DATA",
          index: "INDEX DATA",
        },
      },
    });

    let A = await t.fetch("/tasks");
    let B = await t.fetch("/tasks");

    t.router.dispose();
    expect(A.loaders.tasks.signal.aborted).toBe(true);
    expect(B.loaders.tasks.signal.aborted).toBe(true);
  });
});
});
Language: TypeScript
Source: GitHub, https://github.com/remix-run/react-router
Path: packages/react-router/__tests__/router/router-test.ts
"""
WSGI config for awesome-app project.

This module contains the WSGI application used by Django's development server
and any production WSGI deployments. It should expose a module-level variable
named ``application``. Django's ``runserver`` and ``runfcgi`` commands discover
this application via the ``WSGI_APPLICATION`` setting.

Usually you will have the standard Django WSGI application here, but it also
might make sense to replace the whole Django WSGI application with a custom one
that later delegates to the Django one. For example, you could introduce WSGI
middleware here, or combine a Django application with an application of another
framework.
"""
import os
import sys

from django.core.wsgi import get_wsgi_application

# This allows easy placement of apps within the interior
# awesome_app directory.
app_path = os.path.dirname(os.path.abspath(__file__)).replace('/config', '')
sys.path.append(os.path.join(app_path, 'awesome_app'))

if os.environ.get('DJANGO_SETTINGS_MODULE') == 'config.settings.production':
    from raven.contrib.django.raven_compat.middleware.wsgi import Sentry

# We defer to a DJANGO_SETTINGS_MODULE already in the environment. This breaks
# if running multiple sites in the same mod_wsgi process. To fix this, use
# mod_wsgi daemon mode with each site in its own daemon process, or use
# os.environ["DJANGO_SETTINGS_MODULE"] = "config.settings.production"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")

# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
application = get_wsgi_application()

if os.environ.get('DJANGO_SETTINGS_MODULE') == 'config.settings.production':
    application = Sentry(application)

# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
Language: Python
Source: codeparrot/codeparrot-clean
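The WSGI config above notes that you can "introduce WSGI middleware here" by wrapping `application` in a callable that delegates to the Django one, which is exactly what the `Sentry(application)` line does. A minimal, framework-agnostic sketch of that wrapping pattern (the `HeaderMiddleware` name and the header it adds are invented for illustration, not part of this project):

```python
class HeaderMiddleware:
    """Sketch of a WSGI middleware: wraps any WSGI app and adds one
    response header to every response before delegating downstream."""

    def __init__(self, app, header_name, header_value):
        self.app = app  # the wrapped WSGI application (e.g. Django's)
        self.header_name = header_name
        self.header_value = header_value

    def __call__(self, environ, start_response):
        # Intercept start_response so we can append our header to the
        # header list the inner application produces.
        def custom_start_response(status, headers, exc_info=None):
            headers = list(headers) + [(self.header_name, self.header_value)]
            return start_response(status, headers, exc_info)

        return self.app(environ, custom_start_response)
```

In a config like the one above it would be applied the same way as `Sentry`, e.g. `application = HeaderMiddleware(application, "X-Example", "1")`.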
# -*- coding: utf-8 -*- """ Pelican Comment System ====================== A Pelican plugin, which allows you to add comments to your articles. Author: Bernhard Scheirle """ from __future__ import unicode_literals import logging import os import copy logger = logging.getLogger(__name__) from itertools import chain from pelican import signals from pelican.readers import Readers from pelican.writers import Writer from . comment import Comment from . import avatars _all_comments = [] def setdefault(pelican, settings): from pelican.settings import DEFAULT_CONFIG for key, value in settings: DEFAULT_CONFIG.setdefault(key, value) if not pelican: return for key, value in settings: pelican.settings.setdefault(key, value) def pelican_initialized(pelican): from pelican.settings import DEFAULT_CONFIG settings = [ ('PELICAN_COMMENT_SYSTEM', False), ('PELICAN_COMMENT_SYSTEM_DIR', 'comments'), ('PELICAN_COMMENT_SYSTEM_IDENTICON_OUTPUT_PATH', 'images/identicon'), ('PELICAN_COMMENT_SYSTEM_IDENTICON_DATA', ()), ('PELICAN_COMMENT_SYSTEM_IDENTICON_SIZE', 72), ('PELICAN_COMMENT_SYSTEM_AUTHORS', {}), ('PELICAN_COMMENT_SYSTEM_FEED', os.path.join('feeds', 'comment.%s.atom.xml')), ('PELICAN_COMMENT_SYSTEM_FEED_ALL', os.path.join('feeds', 'comments.all.atom.xml')), ('COMMENT_URL', '#comment-{slug}') ] setdefault(pelican, settings) DEFAULT_CONFIG['PAGE_EXCLUDES'].append( DEFAULT_CONFIG['PELICAN_COMMENT_SYSTEM_DIR']) DEFAULT_CONFIG['ARTICLE_EXCLUDES'].append( DEFAULT_CONFIG['PELICAN_COMMENT_SYSTEM_DIR']) if pelican: pelican.settings['PAGE_EXCLUDES'].append( pelican.settings['PELICAN_COMMENT_SYSTEM_DIR']) pelican.settings['ARTICLE_EXCLUDES'].append( pelican.settings['PELICAN_COMMENT_SYSTEM_DIR']) def initialize(article_generator): avatars.init( article_generator.settings['OUTPUT_PATH'], article_generator.settings[ 'PELICAN_COMMENT_SYSTEM_IDENTICON_OUTPUT_PATH'], article_generator.settings['PELICAN_COMMENT_SYSTEM_IDENTICON_DATA'], article_generator.settings[ 
'PELICAN_COMMENT_SYSTEM_IDENTICON_SIZE'] / 3, article_generator.settings['PELICAN_COMMENT_SYSTEM_AUTHORS'], ) def warn_on_slug_collision(items): slugs = {} for comment in items: if not comment.slug in slugs: slugs[comment.slug] = [comment] else: slugs[comment.slug].append(comment) for slug, itemList in slugs.items(): len_ = len(itemList) if len_ > 1: logger.warning('There are %s comments with the same slug: %s', len_, slug) for x in itemList: logger.warning(' %s', x.source_path) def write_feed_all(gen, writer): if gen.settings['PELICAN_COMMENT_SYSTEM'] is not True: return if gen.settings['PELICAN_COMMENT_SYSTEM_FEED_ALL'] is None: return context = copy.copy(gen.context) context['SITENAME'] += " - All Comments" context['SITESUBTITLE'] = "" path = gen.settings['PELICAN_COMMENT_SYSTEM_FEED_ALL'] global _all_comments _all_comments = sorted(_all_comments) _all_comments.reverse() for com in _all_comments: com.title = com.article.title + " - " + com.title com.override_url = com.article.url + com.url writer = Writer(gen.output_path, settings=gen.settings) writer.write_feed(_all_comments, context, path) def write_feed(gen, items, context, slug): if gen.settings['PELICAN_COMMENT_SYSTEM_FEED'] is None: return path = gen.settings['PELICAN_COMMENT_SYSTEM_FEED'] % slug writer = Writer(gen.output_path, settings=gen.settings) writer.write_feed(items, context, path) def add_static_comments(gen, content): if gen.settings['PELICAN_COMMENT_SYSTEM'] is not True: return global _all_comments content.comments_count = 0 content.comments = [] # Modify the local context, so we get proper values for the feed context = copy.copy(gen.context) context['SITEURL'] += "/" + content.url context['SITENAME'] += " - Comments: " + content.title context['SITESUBTITLE'] = "" folder = os.path.join( gen.settings['PATH'], gen.settings['PELICAN_COMMENT_SYSTEM_DIR'], content.slug ) if not os.path.isdir(folder): logger.debug("No comments found for: %s", content.slug) write_feed(gen, [], context, content.slug) 
        return

    reader = Readers(gen.settings)
    comments = []
    replies = []

    for file in os.listdir(folder):
        name, extension = os.path.splitext(file)
        if extension[1:].lower() in reader.extensions:
            com = reader.read_file(
                base_path=folder, path=file,
                content_class=Comment, context=context)
            com.article = content
            _all_comments.append(com)
            if hasattr(com, 'replyto'):
                replies.append(com)
            else:
                comments.append(com)

    feed_items = sorted(comments + replies)
    feed_items.reverse()
    warn_on_slug_collision(feed_items)
    write_feed(gen, feed_items, context, content.slug)

    # Attach replies to their parents via a slug lookup table. This replaces
    # the previous O(n^2) nested loop; slug collisions are already reported by
    # warn_on_slug_collision above.
    by_slug = {comment.slug: comment for comment in chain(comments, replies)}
    for reply in replies:
        parent = by_slug.get(reply.replyto)
        if parent is not None:
            parent.addReply(reply)

    count = 0
    for comment in comments:
        comment.sortReplies()
        count += comment.countReplies()

    comments = sorted(comments)

    content.comments_count = len(comments) + count
    content.comments = comments


def writeIdenticonsToDisk(gen, writer):
    avatars.generateAndSaveMissingAvatars()


def pelican_finalized(pelican):
    if pelican.settings['PELICAN_COMMENT_SYSTEM'] is not True:
        return
    global _all_comments
    print('Processed %s comment(s)' % len(_all_comments))
    _all_comments = []


def register():
    signals.initialized.connect(pelican_initialized)
    signals.article_generator_init.connect(initialize)
    signals.article_generator_write_article.connect(add_static_comments)
    signals.article_writer_finalized.connect(writeIdenticonsToDisk)
    signals.article_writer_finalized.connect(write_feed_all)
    signals.finalized.connect(pelican_finalized)
unknown
codeparrot/codeparrot-clean
(torch.compiler_ir)=

# IRs

PyTorch 2.0 offers two sets of IRs for backends to interface with: Core ATen IR and Prims IR.

## Core Aten IR

Core ATen ops are the core subset of ATen operators that can be used to compose other operators.
The Core ATen IR is fully functional: there are no in-place or `_out` variants in this opset.
In contrast to the Prims IR, Core ATen ops reuse the existing ATen ops defined in
`native_functions.yaml`, and they are not further decomposed into explicit type-promotion and
broadcasting ops. This opset is designed to serve as the functional IR for interfacing with
backends.

```{warning}
This opset is still under active development; more ops will be added in the future.
```

```{csv-table}
:file: ../../../build/ir/aten_ops.csv
:widths: auto
:header-rows: 1
```

## Prims IR

Prims IR is a set of primitive operators that can be used to compose other operators. It is a
lower-level opset than the Core ATen IR, and it further decomposes ops into explicit
type-promotion and broadcasting ops such as `prims.convert_element_type` and
`prims.broadcast_in_dim`. This opset is designed to interface with compiler backends.

```{warning}
This opset is still under active development; more ops will be added in the future.
```

```{csv-table}
:file: ../../../build/ir/prims_ops.csv
:widths: auto
:header-rows: 1
```
unknown
github
https://github.com/pytorch/pytorch
docs/source/user_guide/torch_compiler/torch.compiler_ir.md
# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright (c) 2010 Citrix Systems, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Network-related utilities for supporting libvirt connection code.""" import os import jinja2 import netaddr from oslo.config import cfg from nova.network import model from nova import paths CONF = cfg.CONF netutils_opts = [ cfg.StrOpt('injected_network_template', default=paths.basedir_def('nova/virt/interfaces.template'), help='Template file for injected network'), ] CONF.register_opts(netutils_opts) CONF.import_opt('use_ipv6', 'nova.netconf') def get_net_and_mask(cidr): net = netaddr.IPNetwork(cidr) return str(net.ip), str(net.netmask) def get_net_and_prefixlen(cidr): net = netaddr.IPNetwork(cidr) return str(net.ip), str(net._prefixlen) def get_ip_version(cidr): net = netaddr.IPNetwork(cidr) return int(net.version) def _get_first_network(network, version): # Using a generator expression with a next() call for the first element # of a list since we don't want to evaluate the whole list as we can # have a lot of subnets try: return (i for i in network['subnets'] if i['version'] == version).next() except StopIteration: pass def get_injected_network_template(network_info, use_ipv6=None, template=None, libvirt_virt_type=None): """Returns a rendered network template for the given network_info. 
:param network_info: :py:meth:`~nova.network.manager.NetworkManager.get_instance_nw_info` :param use_ipv6: If False, do not return IPv6 template information even if an IPv6 subnet is present in network_info. :param template: Path to the interfaces template file. :param libvirt_virt_type: The Libvirt `virt_type`, will be `None` for other hypervisors.. """ if use_ipv6 is None: use_ipv6 = CONF.use_ipv6 if not template: template = CONF.injected_network_template if not (network_info and template): return nets = [] ifc_num = -1 ipv6_is_available = False for vif in network_info: if not vif['network'] or not vif['network']['subnets']: continue network = vif['network'] # NOTE(bnemec): The template only supports a single subnet per # interface and I'm not sure how/if that can be fixed, so this # code only takes the first subnet of the appropriate type. subnet_v4 = _get_first_network(network, 4) subnet_v6 = _get_first_network(network, 6) ifc_num += 1 if not network.get_meta('injected'): continue address = None netmask = None gateway = '' broadcast = None dns = None if subnet_v4: if subnet_v4.get_meta('dhcp_server') is not None: continue if subnet_v4['ips']: ip = subnet_v4['ips'][0] address = ip['address'] netmask = model.get_netmask(ip, subnet_v4) if subnet_v4['gateway']: gateway = subnet_v4['gateway']['address'] broadcast = str(subnet_v4.as_netaddr().broadcast) dns = ' '.join([i['address'] for i in subnet_v4['dns']]) address_v6 = None gateway_v6 = '' netmask_v6 = None have_ipv6 = (use_ipv6 and subnet_v6) if have_ipv6: if subnet_v6.get_meta('dhcp_server') is not None: continue if subnet_v6['ips']: ipv6_is_available = True ip_v6 = subnet_v6['ips'][0] address_v6 = ip_v6['address'] netmask_v6 = model.get_netmask(ip_v6, subnet_v6) if subnet_v6['gateway']: gateway_v6 = subnet_v6['gateway']['address'] net_info = {'name': 'eth%d' % ifc_num, 'address': address, 'netmask': netmask, 'gateway': gateway, 'broadcast': broadcast, 'dns': dns, 'address_v6': address_v6, 'gateway_v6': 
gateway_v6, 'netmask_v6': netmask_v6, } nets.append(net_info) if not nets: return tmpl_path, tmpl_file = os.path.split(CONF.injected_network_template) env = jinja2.Environment(loader=jinja2.FileSystemLoader(tmpl_path), trim_blocks=True) template = env.get_template(tmpl_file) return template.render({'interfaces': nets, 'use_ipv6': ipv6_is_available, 'libvirt_virt_type': libvirt_virt_type})
unknown
codeparrot/codeparrot-clean
#!/usr/bin/python
#             __________               __   ___.
#   Open      \______   \ ____   ____ |  | _\_ |__   _______  ___
#   Source     |       _//  _ \_/ ___\|  |/ /| __ \ /  _ \  \/  /
#   Jukebox    |    |   (  <_> )  \___|    < | \_\ (  <_> > <  <
#   Firmware   |____|_  /\____/ \___  >__|_ \|___  /\____/__/\_ \
#                     \/            \/     \/    \/            \/
# $Id: deploy-themeeditor.py 28153 2010-09-23 18:04:57Z bluebrother $
#
# Copyright (c) 2010 Dominik Riebeling
#
# All files in this archive are subject to the GNU General Public License.
# See the file COPYING in the source tree root for full license agreement.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#

import deploy
import sys

deploy.program = "rbthemeeditor"
deploy.project = "utils/themeeditor/themeeditor.pro"
deploy.svnserver = "svn://svn.rockbox.org/rockbox/"
deploy.svnpaths = \
    ["utils/themeeditor/",
     "lib/skin_parser/",
     "docs/COPYING"]
deploy.useupx = False
deploy.bundlecopy = {
    "resources/windowicon.icns": "Contents/Resources/",
    "Info.plist": "Contents/"
}

# Windows needs some special treatment. Differentiate between program name
# and executable filename.
if sys.platform == "win32":
    deploy.progexe = "Release/" + deploy.program + ".exe"
    deploy.make = "mingw32-make"
elif sys.platform == "darwin":
    deploy.progexe = deploy.program + ".app"
    # OS X 10.6 defaults to gcc 4.2. Building universal binaries that are
    # compatible with 10.4 requires using gcc-4.0.
    if "QMAKESPEC" not in deploy.environment:
        deploy.environment["QMAKESPEC"] = "macx-g++40"
else:
    deploy.progexe = deploy.program

# all files of the program. Will get put into an archive after building
# (zip on w32, tar.bz2 on Linux). Does not apply on Mac which uses dmg.
deploy.programfiles = [deploy.progexe]

deploy.nsisscript = "utils/themeeditor/themeeditor.nsi"

deploy.deploy()
unknown
codeparrot/codeparrot-clean
//// [tests/cases/conformance/async/es2017/asyncArrowFunction/arrowFunctionWithParameterNameAsync_es2017.ts] //// //// [arrowFunctionWithParameterNameAsync_es2017.ts] const x = async => async; //// [arrowFunctionWithParameterNameAsync_es2017.js] "use strict"; var x = function (async) { return async; };
javascript
github
https://github.com/microsoft/TypeScript
tests/baselines/reference/arrowFunctionWithParameterNameAsync_es2017(target=es5).js
/* * Copyright (C) 2014 The Guava Authors * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.google.common.graph; import com.google.common.annotations.Beta; import com.google.errorprone.annotations.DoNotMock; import java.util.Collection; import java.util.Set; import org.jspecify.annotations.Nullable; /** * An interface for <a * href="https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)">graph</a>-structured data, * whose edges are anonymous entities with no identity or information of their own. * * <p>A graph is composed of a set of nodes and a set of edges connecting pairs of nodes. * * <p>There are three primary interfaces provided to represent graphs. In order of increasing * complexity they are: {@link Graph}, {@link ValueGraph}, and {@link Network}. You should generally * prefer the simplest interface that satisfies your use case. See the <a * href="https://github.com/google/guava/wiki/GraphsExplained#choosing-the-right-graph-type"> * "Choosing the right graph type"</a> section of the Guava User Guide for more details. 
 *
 * <h3>Capabilities</h3>
 *
 * <p>{@code Graph} supports the following use cases (<a
 * href="https://github.com/google/guava/wiki/GraphsExplained#definitions">definitions of
 * terms</a>):
 *
 * <ul>
 *   <li>directed graphs
 *   <li>undirected graphs
 *   <li>graphs that do/don't allow self-loops
 *   <li>graphs whose nodes/edges are insertion-ordered, sorted, or unordered
 * </ul>
 *
 * <p>{@code Graph} explicitly does not support parallel edges, and forbids implementations or
 * extensions with parallel edges. If you need parallel edges, use {@link Network}.
 *
 * <h3>Building a {@code Graph}</h3>
 *
 * <p>The implementation classes that {@code common.graph} provides are not public, by design. To
 * create an instance of one of the built-in implementations of {@code Graph}, use the {@link
 * GraphBuilder} class:
 *
 * {@snippet :
 * MutableGraph<Integer> graph = GraphBuilder.undirected().build();
 * }
 *
 * <p>{@link GraphBuilder#build()} returns an instance of {@link MutableGraph}, which is a subtype
 * of {@code Graph} that provides methods for adding and removing nodes and edges. If you do not
 * need to mutate a graph (e.g. if you write a method that runs a read-only algorithm on the
 * graph), you should use the non-mutating {@link Graph} interface, or an {@link ImmutableGraph}.
 *
 * <p>You can create an immutable copy of an existing {@code Graph} using {@link
 * ImmutableGraph#copyOf(Graph)}:
 *
 * {@snippet :
 * ImmutableGraph<Integer> immutableGraph = ImmutableGraph.copyOf(graph);
 * }
 *
 * <p>Instances of {@link ImmutableGraph} do not implement {@link MutableGraph} (obviously!) and are
 * contractually guaranteed to be unmodifiable and thread-safe.
 *
 * <p>The Guava User Guide has <a
 * href="https://github.com/google/guava/wiki/GraphsExplained#building-graph-instances">more
 * information on (and examples of) building graphs</a>.
* * <h3>Additional documentation</h3> * * <p>See the Guava User Guide for the {@code common.graph} package (<a * href="https://github.com/google/guava/wiki/GraphsExplained">"Graphs Explained"</a>) for * additional documentation, including: * * <ul> * <li><a * href="https://github.com/google/guava/wiki/GraphsExplained#equals-hashcode-and-graph-equivalence"> * {@code equals()}, {@code hashCode()}, and graph equivalence</a> * <li><a href="https://github.com/google/guava/wiki/GraphsExplained#synchronization"> * Synchronization policy</a> * <li><a href="https://github.com/google/guava/wiki/GraphsExplained#notes-for-implementors">Notes * for implementors</a> * </ul> * * @author James Sexton * @author Joshua O'Madadhain * @param <N> Node parameter type * @since 20.0 */ @Beta @DoNotMock("Use GraphBuilder to create a real instance") public interface Graph<N> extends BaseGraph<N> { // // Graph-level accessors // /** Returns all nodes in this graph, in the order specified by {@link #nodeOrder()}. */ @Override Set<N> nodes(); /** Returns all edges in this graph. */ @Override Set<EndpointPair<N>> edges(); // // Graph properties // /** * Returns true if the edges in this graph are directed. Directed edges connect a {@link * EndpointPair#source() source node} to a {@link EndpointPair#target() target node}, while * undirected edges connect a pair of nodes to each other. */ @Override boolean isDirected(); /** * Returns true if this graph allows self-loops (edges that connect a node to itself). Attempting * to add a self-loop to a graph that does not allow them will throw an {@link * IllegalArgumentException}. */ @Override boolean allowsSelfLoops(); /** Returns the order of iteration for the elements of {@link #nodes()}. 
*/ @Override ElementOrder<N> nodeOrder(); /** * Returns an {@link ElementOrder} that specifies the order of iteration for the elements of * {@link #edges()}, {@link #adjacentNodes(Object)}, {@link #predecessors(Object)}, {@link * #successors(Object)} and {@link #incidentEdges(Object)}. * * @since 29.0 */ @Override ElementOrder<N> incidentEdgeOrder(); // // Element-level accessors // /** * Returns a live view of the nodes which have an incident edge in common with {@code node} in * this graph. * * <p>This is equal to the union of {@link #predecessors(Object)} and {@link #successors(Object)}. * * <p>If {@code node} is removed from the graph after this method is called, the {@code Set} * {@code view} returned by this method will be invalidated, and will throw {@code * IllegalStateException} if it is accessed in any way, with the following exceptions: * * <ul> * <li>{@code view.equals(view)} evaluates to {@code true} (but any other {@code equals()} * expression involving {@code view} will throw) * <li>{@code hashCode()} does not throw * <li>if {@code node} is re-added to the graph after having been removed, {@code view}'s * behavior is undefined * </ul> * * @throws IllegalArgumentException if {@code node} is not an element of this graph */ @Override Set<N> adjacentNodes(N node); /** * Returns a live view of all nodes in this graph adjacent to {@code node} which can be reached by * traversing {@code node}'s incoming edges <i>against</i> the direction (if any) of the edge. * * <p>In an undirected graph, this is equivalent to {@link #adjacentNodes(Object)}. 
* * <p>If {@code node} is removed from the graph after this method is called, the {@code Set} * {@code view} returned by this method will be invalidated, and will throw {@code * IllegalStateException} if it is accessed in any way, with the following exceptions: * * <ul> * <li>{@code view.equals(view)} evaluates to {@code true} (but any other {@code equals()} * expression involving {@code view} will throw) * <li>{@code hashCode()} does not throw * <li>if {@code node} is re-added to the graph after having been removed, {@code view}'s * behavior is undefined * </ul> * * @throws IllegalArgumentException if {@code node} is not an element of this graph */ @Override Set<N> predecessors(N node); /** * Returns a live view of all nodes in this graph adjacent to {@code node} which can be reached by * traversing {@code node}'s outgoing edges in the direction (if any) of the edge. * * <p>In an undirected graph, this is equivalent to {@link #adjacentNodes(Object)}. * * <p>This is <i>not</i> the same as "all nodes reachable from {@code node} by following outgoing * edges". For that functionality, see {@link Graphs#reachableNodes(Graph, Object)}. * * <p>If {@code node} is removed from the graph after this method is called, the {@code Set} * {@code view} returned by this method will be invalidated, and will throw {@code * IllegalStateException} if it is accessed in any way, with the following exceptions: * * <ul> * <li>{@code view.equals(view)} evaluates to {@code true} (but any other {@code equals()} * expression involving {@code view} will throw) * <li>{@code hashCode()} does not throw * <li>if {@code node} is re-added to the graph after having been removed, {@code view}'s * behavior is undefined * </ul> * * @throws IllegalArgumentException if {@code node} is not an element of this graph */ @Override Set<N> successors(N node); /** * Returns a live view of the edges in this graph whose endpoints include {@code node}. 
* * <p>This is equal to the union of incoming and outgoing edges. * * <p>If {@code node} is removed from the graph after this method is called, the {@code Set} * {@code view} returned by this method will be invalidated, and will throw {@code * IllegalStateException} if it is accessed in any way, with the following exceptions: * * <ul> * <li>{@code view.equals(view)} evaluates to {@code true} (but any other {@code equals()} * expression involving {@code view} will throw) * <li>{@code hashCode()} does not throw * <li>if {@code node} is re-added to the graph after having been removed, {@code view}'s * behavior is undefined * </ul> * * @throws IllegalArgumentException if {@code node} is not an element of this graph * @since 24.0 */ @Override Set<EndpointPair<N>> incidentEdges(N node); /** * Returns the count of {@code node}'s incident edges, counting self-loops twice (equivalently, * the number of times an edge touches {@code node}). * * <p>For directed graphs, this is equal to {@code inDegree(node) + outDegree(node)}. * * <p>For undirected graphs, this is equal to {@code incidentEdges(node).size()} + (number of * self-loops incident to {@code node}). * * <p>If the count is greater than {@code Integer.MAX_VALUE}, returns {@code Integer.MAX_VALUE}. * * @throws IllegalArgumentException if {@code node} is not an element of this graph */ @Override int degree(N node); /** * Returns the count of {@code node}'s incoming edges (equal to {@code predecessors(node).size()}) * in a directed graph. In an undirected graph, returns the {@link #degree(Object)}. * * <p>If the count is greater than {@code Integer.MAX_VALUE}, returns {@code Integer.MAX_VALUE}. * * @throws IllegalArgumentException if {@code node} is not an element of this graph */ @Override int inDegree(N node); /** * Returns the count of {@code node}'s outgoing edges (equal to {@code successors(node).size()}) * in a directed graph. In an undirected graph, returns the {@link #degree(Object)}. 
* * <p>If the count is greater than {@code Integer.MAX_VALUE}, returns {@code Integer.MAX_VALUE}. * * @throws IllegalArgumentException if {@code node} is not an element of this graph */ @Override int outDegree(N node); /** * Returns true if there is an edge that directly connects {@code nodeU} to {@code nodeV}. This is * equivalent to {@code nodes().contains(nodeU) && successors(nodeU).contains(nodeV)}. * * <p>In an undirected graph, this is equal to {@code hasEdgeConnecting(nodeV, nodeU)}. * * @since 23.0 */ @Override boolean hasEdgeConnecting(N nodeU, N nodeV); /** * Returns true if there is an edge that directly connects {@code endpoints} (in the order, if * any, specified by {@code endpoints}). This is equivalent to {@code * edges().contains(endpoints)}. * * <p>Unlike the other {@code EndpointPair}-accepting methods, this method does not throw if the * endpoints are unordered and the graph is directed; it simply returns {@code false}. This is for * consistency with the behavior of {@link Collection#contains(Object)} (which does not generally * throw if the object cannot be present in the collection), and the desire to have this method's * behavior be compatible with {@code edges().contains(endpoints)}. * * @since 27.1 */ @Override boolean hasEdgeConnecting(EndpointPair<N> endpoints); // // Graph identity // /** * Returns {@code true} iff {@code object} is a {@link Graph} that has the same elements and the * same structural relationships as those in this graph. * * <p>Thus, two graphs A and B are equal if <b>all</b> of the following are true: * * <ul> * <li>A and B have equal {@link #isDirected() directedness}. * <li>A and B have equal {@link #nodes() node sets}. * <li>A and B have equal {@link #edges() edge sets}. * </ul> * * <p>Graph properties besides {@link #isDirected() directedness} do <b>not</b> affect equality. * For example, two graphs may be considered equal even if one allows self-loops and the other * doesn't. 
Additionally, the order in which nodes or edges are added to the graph, and the order * in which they are iterated over, are irrelevant. * * <p>A reference implementation of this is provided by {@link AbstractGraph#equals(Object)}. */ @Override boolean equals(@Nullable Object object); /** * Returns the hash code for this graph. The hash code of a graph is defined as the hash code of * the set returned by {@link #edges()}. * * <p>A reference implementation of this is provided by {@link AbstractGraph#hashCode()}. */ @Override int hashCode(); }
java
github
https://github.com/google/guava
android/guava/src/com/google/common/graph/Graph.java
/* __next_internal_action_entry_do_not_use__ {"001ab723c80dcca470e0410b4b2a2fc2bf21f41476":{"name":"c"},"006a88810ecce4a4e8b59d53b8327d7e98bbf251d7":{"name":"$$RSC_SERVER_ACTION_0"},"006e7bc104e4d6e7fda190c4a51be969cfd0be6d6d":{"name":"a"},"00d1f7eb64271d7c601dfef7d4d7053de1c2ca4338":{"name":"b"}} */ import { registerServerReference } from "private-next-rsc-server-reference"; export async function a() {} export async function b() {} export async function c() {} function d() {} export const $$RSC_SERVER_ACTION_0 = async function e() {}; registerServerReference($$RSC_SERVER_ACTION_0, "006a88810ecce4a4e8b59d53b8327d7e98bbf251d7", null); function Foo() { var e = $$RSC_SERVER_ACTION_0; } import { ensureServerEntryExports } from "private-next-rsc-action-validate"; ensureServerEntryExports([ a, b, c ]); registerServerReference(a, "006e7bc104e4d6e7fda190c4a51be969cfd0be6d6d", null); registerServerReference(b, "00d1f7eb64271d7c601dfef7d4d7053de1c2ca4338", null); registerServerReference(c, "001ab723c80dcca470e0410b4b2a2fc2bf21f41476", null);
javascript
github
https://github.com/vercel/next.js
crates/next-custom-transforms/tests/fixture/server-actions/server-graph/4/output.js
#!/usr/bin/env python # Copyright 2014 the V8 project authors. All rights reserved. # Use of this source code is governed by a BSD-style license that can be # found in the LICENSE file. import argparse import os import subprocess import sys BOTS = { '--arm32': 'v8_arm32_perf_try', '--linux32': 'v8_linux32_perf_try', '--linux64': 'v8_linux64_perf_try', '--linux64_atom': 'v8_linux64_atom_perf_try', '--linux64_haswell': 'v8_linux64_haswell_perf_try', '--nexus5': 'v8_nexus5_perf_try', '--nexus7': 'v8_nexus7_perf_try', '--nexus9': 'v8_nexus9_perf_try', '--nexus10': 'v8_nexus10_perf_try', } DEFAULT_BOTS = [ 'v8_arm32_perf_try', 'v8_linux32_perf_try', 'v8_linux64_haswell_perf_try', 'v8_nexus10_perf_try', ] PUBLIC_BENCHMARKS = [ 'arewefastyet', 'embenchen', 'emscripten', 'compile', 'jetstream', 'jetstream-ignition', 'jsbench', 'jstests', 'kraken_orig', 'kraken_orig-ignition', 'massive', 'memory', 'octane', 'octane-noopt', 'octane-ignition', 'octane-pr', 'octane-tf', 'octane-tf-pr', 'simdjs', 'sunspider', 'sunspider-ignition', 'wasm', ] V8_BASE = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) def main(): parser = argparse.ArgumentParser(description='') parser.add_argument('benchmarks', nargs='+', help='The benchmarks to run.') parser.add_argument('--extra-flags', default='', help='Extra flags to be passed to the executable.') parser.add_argument('-r', '--revision', type=str, default=None, help='Revision (use full hash!) to use for the try job; ' 'default: the revision will be determined by the ' 'try server; see its waterfall for more info') for option in sorted(BOTS): parser.add_argument( option, dest='bots', action='append_const', const=BOTS[option], help='Add %s trybot.' % BOTS[option]) options = parser.parse_args() if not options.bots: print 'No trybots specified. Using default %s.' % ','.join(DEFAULT_BOTS) options.bots = DEFAULT_BOTS if not options.benchmarks: print 'Please specify the benchmarks to run as arguments.' 
return 1 for benchmark in options.benchmarks: if benchmark not in PUBLIC_BENCHMARKS: print ('%s not found in our benchmark list. The respective trybot might ' 'fail, unless you run something this script isn\'t aware of. ' 'Available public benchmarks: %s' % (benchmark, PUBLIC_BENCHMARKS)) print 'Proceed anyways? [Y/n] ', answer = sys.stdin.readline().strip() if answer != "" and answer != "Y" and answer != "y": return 1 assert '"' not in options.extra_flags and '\'' not in options.extra_flags, ( 'Invalid flag specification.') # Ensure depot_tools are updated. subprocess.check_output( 'gclient', shell=True, stderr=subprocess.STDOUT, cwd=V8_BASE) cmd = ['git cl try -m internal.client.v8'] cmd += ['-b %s' % bot for bot in options.bots] if options.revision: cmd += ['-r %s' % options.revision] benchmarks = ['"%s"' % benchmark for benchmark in options.benchmarks] cmd += ['-p \'testfilter=[%s]\'' % ','.join(benchmarks)] if options.extra_flags: cmd += ['-p \'extra_flags="%s"\'' % options.extra_flags] subprocess.check_call(' '.join(cmd), shell=True, cwd=V8_BASE) if __name__ == '__main__': # pragma: no cover sys.exit(main())
unknown
codeparrot/codeparrot-clean
# -*- coding: utf-8 -*- # Generated by Django 1.10.6 on 2017-05-24 23:40 from __future__ import unicode_literals from django.conf import settings from django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration): initial = True dependencies = [ ('contenttypes', '0002_remove_content_type_name'), migrations.swappable_dependency(settings.AUTH_USER_MODEL), ] operations = [ migrations.CreateModel( name='Action', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('object_id', models.PositiveIntegerField()), ('action', models.CharField(max_length=512)), ('timestamp', models.DateTimeField(auto_now_add=True)), ('category', models.IntegerField(blank=True, choices=[(1, 'Communicate'), (2, 'Assign'), (3, 'Refer'), (4, 'Issue'), (5, 'Decline'), (6, 'Publish'), (7, 'Lodge'), (8, 'Next Step')], null=True)), ('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')), ('user', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)), ], ), ]
unknown
codeparrot/codeparrot-clean
# -*- coding: utf-8 -*-
"""
Created on Thu Oct 02 08:41:08 2014

@author: Acer
"""
import sys
import pycd3
import math


class NodeFactory(pycd3.INodeFactory):
    def __init__(self, node):
        pycd3.INodeFactory.__init__(self)
        self.node = node
        print "NodeFactory.__init__"

    def getNodeName(self):
        print "NodeFactory.getName"
        return self.node.__name__

    def createNode(self):
        print "NodeFactory.createNode"
        n = self.node()
        n.__disown__()
        print "NodeFactory.disowned"
        return n

    def getSource(self):
        print "NodeFactory.getSource"
        return "Practice.py"


class Muskingum(pycd3.Node):
    def __init__(self):
        pycd3.Node.__init__(self)
        self.rain = pycd3.Flow()
        self.runoff = pycd3.Flow()
        self.inflow = pycd3.Flow()
        # dir(self.inf)
        print "init node"
        self.addInPort("rain", self.rain)
        self.addInPort("inflow", self.inflow)
        self.addOutPort("runoff", self.runoff)

        # Catchment area + fraction info of pervious and impervious parts
        self.area_width = pycd3.Double(10)
        self.addParameter("area_width [m]", self.area_width)
        self.area_length = pycd3.Double(100)
        self.addParameter("area_length [m]", self.area_length)
        self.perv_area = pycd3.Double(0.4)
        self.addParameter("perv_area [-]", self.perv_area)
        self.imp_area_stormwater = pycd3.Double(0.4)
        self.addParameter("imp_area_stormwater [-]", self.imp_area_stormwater)
        self.imp_area_raintank = pycd3.Double(1)
        self.addParameter("imp_area_raintank [-]", self.imp_area_raintank)

        # Number of subareas for flow concentration
        self.amount_subareas = pycd3.Double(1)
        self.addParameter("amount_subareas [-]", self.amount_subareas)

        # Muskingum parameters: K is the flow time for the entire catchment
        self.muskingum_veloc = pycd3.Double(0.4)
        self.addParameter("muskingum_vel [m/s]", self.muskingum_veloc)
        self.muskingum_X = pycd3.Double(0.07)
        self.addParameter("muskingum_X [-]", self.muskingum_X)

    def init(self, start, stop, dt):
        print start
        print stop
        print dt
        # Calculate the catchment area
        self.area_property = self.area_length * self.area_width
        # Calculate the K value for a single subreach
        self.muskingum_K_single_subreach = (self.area_length / self.amount_subareas) / self.muskingum_veloc
        # Calculate the Muskingum coefficients
        self.C_x = (dt / 2 - self.muskingum_K_single_subreach * self.muskingum_X) / (dt / 2 + self.muskingum_K_single_subreach * (1 - self.muskingum_X))
        self.C_y = 1 / (dt / 2 + self.muskingum_K_single_subreach * (1 - self.muskingum_X))
        # Prepare the storage coefficients for the stored volume in each subreach
        self.Q_i_storage_1 = []
        self.Q_i_storage_2 = []
        for i in range(self.amount_subareas):
            self.Q_i_storage_1.append(0)
            self.Q_i_storage_2.append(0)
        return True

    def f(self, current, dt):
        # Divide the area into 'amount_subareas' parts of equal size
        self.subarea_size = self.area_property * self.imp_area_raintank / self.amount_subareas
        # Prepare the flow array
        self.Q_i = []
        for i in range(self.amount_subareas):
            self.Q_i.append(0)
            # Calculate the flow into each subreach
            if i == 0:
                self.Q_i[i] = (self.inflow[0] * 1000 + self.rain[0] * self.subarea_size) * self.C_x + self.Q_i_storage_2[i] * self.C_y
                self.Q_i_storage_2[i] = self.Q_i[i] * (1 - self.C_x) * dt + self.Q_i_storage_1[i] * (1 - self.C_y * dt)
                self.Q_i_storage_1[i] = self.Q_i_storage_2[i]
            else:
                self.Q_i[i] = (self.Q_i[i - 1] + self.rain[0] * self.subarea_size) * self.C_x + self.Q_i_storage_2[i] * self.C_y
                self.Q_i_storage_2[i] = self.Q_i[i] * (1 - self.C_x) * dt + self.Q_i_storage_1[i] * (1 - self.C_y * dt)
                self.Q_i_storage_1[i] = self.Q_i_storage_2[i]
        # The outflow of the last subreach represents the inflow into the node
        self.runoff[0] = self.Q_i[-1] / 1000
        return dt

    def getClassName(self):
        # print "getClassName"
        return "Muskingum"


def register(nr):
    for c in pycd3.Node.__subclasses__():
        nf = NodeFactory(c)
        nf.__disown__()
        nr.addNodeFactory(nf)

# def test():
#     nr = pycd3.NodeRegistry()
#     nf = NodeFactory(Household).__disown__()
#     nr.addNodeFactory(nf)
#     node = nr.createNode("Household")
# test()
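The routing scheme in the node above can be exercised without pycd3. This is a minimal standalone sketch, assuming plain floats in place of `pycd3.Double`/`pycd3.Flow` and hypothetical parameter values; it collapses the node's two storage lists into one, which is equivalent because the node synchronizes them at the end of every step:

```python
def muskingum_coefficients(K, X, dt):
    """Muskingum storage-routing coefficients, as computed in init() above."""
    C_x = (dt / 2.0 - K * X) / (dt / 2.0 + K * (1 - X))
    C_y = 1.0 / (dt / 2.0 + K * (1 - X))
    return C_x, C_y


def route_step(inflow, rain_load, C_x, C_y, storage, dt):
    """One f()-style update over a cascade of subreaches.

    `storage` holds the stored volume per subreach and is mutated in place.
    Returns the outflow of the last subreach.
    """
    Q = [0.0] * len(storage)
    for i in range(len(storage)):
        # The first subreach receives the external inflow; the rest chain together.
        upstream = inflow if i == 0 else Q[i - 1]
        Q[i] = (upstream + rain_load) * C_x + storage[i] * C_y
        storage[i] = Q[i] * (1 - C_x) * dt + storage[i] * (1 - C_y * dt)
    return Q[-1]


# Hypothetical numbers: K = (area_length / amount_subareas) / velocity, with a
# time step chosen on the order of K so the coefficients stay well-behaved.
K = (100.0 / 1) / 0.4
C_x, C_y = muskingum_coefficients(K, X=0.07, dt=50.0)
storage = [0.0]
out = route_step(inflow=10.0, rain_load=0.0, C_x=C_x, C_y=C_y, storage=storage, dt=50.0)
```

With empty initial storage, the first outflow is simply `inflow * C_x`, which makes the coefficient's role easy to check by hand.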
unknown
codeparrot/codeparrot-clean
# This must be a play as we need to invoke it with the ANSIBLE_SCP_IF_SSH env
# to control the mechanism used. Unfortunately while ansible_scp_if_ssh is
# documented, it isn't actually used hence the separate invocation
---
- name: further fetch tests with metachar characters in filename
  hosts: windows
  force_handlers: yes
  serial: 1
  gather_facts: no

  tasks:
  - name: setup remote tmp dir
    import_role:
      name: ../../setup_remote_tmp_dir

  - name: create remote file with metachar in name
    win_copy:
      content: some content
      dest: '{{ remote_tmp_dir }}\file ^with &whoami'

  - name: test fetch against a file with cmd metacharacters
    block:
    - name: fetch file with metachar in name
      fetch:
        src: '{{ remote_tmp_dir }}\file ^with &whoami'
        dest: ansible-test.txt
        flat: yes
      register: fetch_res

    - name: assert fetch file with metachar in name
      assert:
        that:
        - fetch_res is changed
        - fetch_res.checksum == '94e66df8cd09d410c62d9e0dc59d3a884e458e05'

    always:
    - name: remove local copy of file
      file:
        path: ansible-test.txt
        state: absent
      delegate_to: localhost
unknown
github
https://github.com/ansible/ansible
test/integration/targets/connection_windows_ssh/tests_fetch.yml
/** Add your relevant code here for the issue to reproduce */
export default function Home() {
  return null;
}
typescript
github
https://github.com/vercel/next.js
examples/reproduction-template-pages/pages/index.tsx
---
navigation_title: Query across clusters
mapped_pages:
  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-cross-clusters.html
applies_to:
  stack: preview =9.0, ga 9.1+
  serverless: unavailable
products:
  - id: elasticsearch
---

# Use ES|QL across clusters [esql-cross-clusters]

With {{esql}}, you can execute a single query across multiple clusters.

## Prerequisites [esql-ccs-prerequisites]

* {{ccs-cap}} requires remote clusters. To set up remote clusters, see [*Remote clusters*](docs-content://deploy-manage/remote-clusters.md). To ensure your remote cluster configuration supports {{ccs}}, see [Supported {{ccs}} configurations](docs-content://explore-analyze/cross-cluster-search.md#ccs-supported-configurations).
* For full {{ccs}} capabilities, the local and remote cluster must be on the same [subscription level](https://www.elastic.co/subscriptions).
* The local coordinating node must have the [`remote_cluster_client`](docs-content://deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#remote-node) node role.
* If you use [sniff mode](docs-content:///deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode), the local coordinating node must be able to connect to seed and gateway nodes on the remote cluster. We recommend using gateway nodes capable of serving as coordinating nodes. The seed nodes can be a subset of these gateway nodes.
* If you use [proxy mode](docs-content:///deploy-manage/remote-clusters/remote-clusters-self-managed.md#proxy-mode), the local coordinating node must be able to connect to the configured `proxy_address`. The proxy at this address must be able to route connections to gateway and coordinating nodes on the remote cluster.
* {{ccs-cap}} requires different security privileges on the local cluster and remote cluster. See [Configure privileges for {{ccs}}](docs-content://deploy-manage/remote-clusters/remote-clusters-cert.md#remote-clusters-privileges-ccs) and [*Remote clusters*](docs-content://deploy-manage/remote-clusters.md).

## Security model [esql-ccs-security-model]

{{es}} supports two security models for cross-cluster search (CCS):

* [TLS certificate authentication](#esql-ccs-security-model-certificate)
* [API key authentication](#esql-ccs-security-model-api-key)

::::{tip}
To check which security model is being used to connect your clusters, run `GET _remote/info`. If you’re using the API key authentication method, you’ll see the `"cluster_credentials"` key in the response.
::::

### TLS certificate authentication [esql-ccs-security-model-certificate]

::::{admonition} Deprecated in 9.0.0.
:class: warning
Use [API key authentication](#esql-ccs-security-model-api-key) instead.
::::

TLS certificate authentication secures remote clusters with mutual TLS. This could be the preferred model when a single administrator has full control over both clusters. We generally recommend that roles and their privileges be identical in both clusters. Refer to [TLS certificate authentication](docs-content://deploy-manage/remote-clusters/remote-clusters-cert.md) for prerequisites and detailed setup instructions.

### API key authentication [esql-ccs-security-model-api-key]

The following information pertains to using {{esql}} across clusters with the [**API key based security model**](docs-content://deploy-manage/remote-clusters/remote-clusters-api-key.md). You’ll need to follow the steps on that page for the **full setup instructions**. This page only contains additional information specific to {{esql}}.

API key based cross-cluster search (CCS) enables more granular control over allowed actions between clusters. This may be the preferred model when you have different administrators for different clusters and want more control over who can access what data.
In this model, cluster administrators must explicitly define the access given to clusters and users. You will need to:

* Create an API key on the **remote cluster** using the [Create cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) API or using the [Kibana API keys UI](docs-content://deploy-manage/api-keys/elasticsearch-api-keys.md).
* Add the API key to the keystore on the **local cluster**, as part of the steps in [configuring the local cluster](docs-content://deploy-manage/remote-clusters/remote-clusters-api-key.md#remote-clusters-security-api-key-local-actions). All cross-cluster requests from the local cluster are bound by the API key’s privileges.

Using {{esql}} with the API key based security model requires some additional permissions that may not be needed when using the traditional query DSL based search. The following example API call creates a role that can query remote indices using {{esql}} when using the API key based security model. The final privilege, `remote_cluster`, is required to allow remote enrich operations.

```console
POST /_security/role/remote1
{
  "cluster": ["cross_cluster_search"], <1>
  "indices": [
    {
      "names" : [""], <2>
      "privileges": ["read"]
    }
  ],
  "remote_indices": [ <3>
    {
      "names": [ "logs-*" ],
      "privileges": [ "read","read_cross_cluster" ], <4>
      "clusters" : ["my_remote_cluster"] <5>
    }
  ],
  "remote_cluster": [ <6>
    {
      "privileges": [ "monitor_enrich" ],
      "clusters": [ "my_remote_cluster" ]
    }
  ]
}
```

1. The `cross_cluster_search` cluster privilege is required for the *local* cluster.
2. Typically, users will have permissions to read both local and remote indices. However, for cases where the role is intended to ONLY search the remote cluster, the `read` permission is still required for the local cluster. To provide read access to the local cluster, but disallow reading any indices in the local cluster, the `names` field may be an empty string.
3. The indices allowed read access to the remote cluster. The configured [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) must also allow this index to be read.
4. The `read_cross_cluster` privilege is always required when using {{esql}} across clusters with the API key based security model.
5. The remote clusters to which these privileges apply. This remote cluster must be configured with a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) and connected to the remote cluster before the remote index can be queried. Verify connection using the [Remote cluster info](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) API.
6. Required to allow remote enrichment. Without this, the user cannot read from the `.enrich` indices on the remote cluster. The `remote_cluster` security privilege was introduced in version **8.15.0**.

You will then need a user or API key with the permissions you created above. The following example API call creates a user with the `remote1` role.

```console
POST /_security/user/remote_user
{
  "password" : "<PASSWORD>",
  "roles" : [ "remote1" ]
}
```

Remember that all cross-cluster requests from the local cluster are bound by the cross cluster API key’s privileges, which are controlled by the remote cluster’s administrator.

::::{tip}
Cross cluster API keys created in versions prior to 8.15.0 will need to be replaced or updated to add the new permissions required for {{esql}} with ENRICH.
::::

## Remote cluster setup [ccq-remote-cluster-setup]

Once the security model is configured, you can add remote clusters. The following [cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) API request adds three remote clusters: `cluster_one`, `cluster_two`, and `cluster_three`.
```console
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_one": {
          "seeds": [ "35.238.149.1:9300" ],
          "skip_unavailable": true
        },
        "cluster_two": {
          "seeds": [ "35.238.149.2:9300" ],
          "skip_unavailable": false
        },
        "cluster_three": { <1>
          "seeds": [ "35.238.149.3:9300" ]
        }
      }
    }
  }
}
```

% TEST[setup:host]
% TEST[s/35.238.149.\d+:930\d+/\${transport_host}/]
% end::ccs-remote-cluster-setup[]

1. Since `skip_unavailable` was not set on `cluster_three`, it uses the default of `true`. See the [Optional remote clusters](#ccq-skip-unavailable-clusters) section for details.

## Query across multiple clusters [ccq-from]

In the `FROM` command, specify data streams and indices on remote clusters using the format `<remote_cluster_name>:<target>`. For instance, the following {{esql}} request queries the `my-index-000001` index on a single remote cluster named `cluster_one`:

```esql
FROM cluster_one:my-index-000001
| LIMIT 10
```

Similarly, this {{esql}} request queries the `my-index-000001` index from three clusters:

* The local ("querying") cluster
* Two remote clusters, `cluster_one` and `cluster_two`

```esql
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| LIMIT 10
```

Likewise, this {{esql}} request queries the `my-index-000001` index from all remote clusters (`cluster_one`, `cluster_two`, and `cluster_three`):

```esql
FROM *:my-index-000001
| LIMIT 10
```

## Cross-cluster metadata [ccq-cluster-details]

Using the `"include_ccs_metadata": true` option, users can request that ES|QL {{ccs}} responses include metadata about the search on each cluster (when the response format is JSON). Here we show an example using the async search endpoint. {{ccs-cap}} metadata is also present in the synchronous search endpoint response when requested.

If the search returns partial results and there are partial shard or remote cluster failures, `_clusters` metadata containing the failures will be included in the response regardless of the `include_ccs_metadata` parameter.

```console
POST /_query/async?format=json
{
  "query": """
    FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index*
    | STATS COUNT(http.response.status_code) BY user.id
    | LIMIT 2
  """,
  "include_ccs_metadata": true
}
```

% TEST[setup:my_index]
% TEST[s/cluster_one:my-index-000001,cluster_two:my-index//]

Which returns:

```console-result
{
  "is_running": false,
  "took": 42, <1>
  "is_partial": false, <7>
  "columns" : [
    {
      "name" : "COUNT(http.response.status_code)",
      "type" : "long"
    },
    {
      "name" : "user.id",
      "type" : "keyword"
    }
  ],
  "values" : [
    [4, "elkbee"],
    [1, "kimchy"]
  ],
  "_clusters": { <2>
    "total": 3,
    "successful": 3,
    "running": 0,
    "skipped": 0,
    "partial": 0,
    "failed": 0,
    "details": { <3>
      "(local)": { <4>
        "status": "successful",
        "indices": "blogs",
        "took": 41, <5>
        "_shards": { <6>
          "total": 13,
          "successful": 13,
          "skipped": 0,
          "failed": 0
        }
      },
      "cluster_one": {
        "status": "successful",
        "indices": "cluster_one:my-index-000001",
        "took": 38,
        "_shards": {
          "total": 4,
          "successful": 4,
          "skipped": 0,
          "failed": 0
        }
      },
      "cluster_two": {
        "status": "successful",
        "indices": "cluster_two:my-index*",
        "took": 40,
        "_shards": {
          "total": 18,
          "successful": 18,
          "skipped": 1,
          "failed": 0
        }
      }
    }
  }
}
```

% TEST[skip: cross-cluster testing env not set up]

1. How long the entire search (across all clusters) took, in milliseconds.
2. This section of counters shows all possible cluster search states and how many cluster searches are currently in that state. The clusters can have one of the following statuses: **running**, **successful** (searches on all shards were successful), **skipped** (the search failed on a cluster marked with `skip_unavailable`=`true`), **failed** (the search failed on a cluster marked with `skip_unavailable`=`false`) or **partial** (the search was [interrupted](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql) before finishing or has partially failed).
3. The `_clusters/details` section shows metadata about the search on each cluster.
4. If you included indices from the local cluster you sent the request to in your {{ccs}}, it is identified as "(local)".
5. How long (in milliseconds) the search took on each cluster. This can be useful to determine which clusters have slower response times than others.
6. The shard details for the search on that cluster, including a count of shards that were skipped due to the can-match phase results. Shards are skipped when they cannot have any matching data and therefore are not included in the full ES|QL query.
7. The `is_partial` field is set to `true` if the search has partial results for any reason, for example due to partial shard failures, failures in remote clusters, or if the async query was stopped by calling the [async query stop API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql).

The cross-cluster metadata can be used to determine whether any data came back from a cluster. For instance, in the query below, the wildcard expression for `cluster-two` did not resolve to a concrete index (or indices). The cluster is, therefore, marked as *skipped* and the total number of shards searched is set to zero.
```console
POST /_query/async?format=json
{
  "query": """
    FROM cluster_one:my-index*,cluster_two:logs*
    | STATS COUNT(http.response.status_code) BY user.id
    | LIMIT 2
  """,
  "include_ccs_metadata": true
}
```

% TEST[continued]
% TEST[s/cluster_one:my-index\*,cluster_two:logs\*/my-index-000001/]

Which returns:

```console-result
{
  "is_running": false,
  "took": 55,
  "is_partial": true, <3>
  "columns": [
    ...
  ],
  "values": [
    ...
  ],
  "_clusters": {
    "total": 2,
    "successful": 1,
    "running": 0,
    "skipped": 1, <1>
    "partial": 0,
    "failed": 0,
    "details": {
      "cluster_one": {
        "status": "successful",
        "indices": "cluster_one:my-index*",
        "took": 38,
        "_shards": {
          "total": 4,
          "successful": 4,
          "skipped": 0,
          "failed": 0
        }
      },
      "cluster_two": {
        "status": "skipped", <1>
        "indices": "cluster_two:logs*",
        "took": 0,
        "_shards": {
          "total": 0, <2>
          "successful": 0,
          "skipped": 0,
          "failed": 0
        }
      }
    }
  }
}
```

% TEST[skip: cross-cluster testing env not set up]

1. This cluster is marked as *skipped*, since there were no matching indices on that cluster.
2. Indicates that no shards were searched (due to not having any matching indices).
3. Since one of the clusters is skipped, the search result is marked as partial.

## Enrich across clusters [ccq-enrich]

Enrich in {{esql}} across clusters operates similarly to [local enrich](commands/enrich.md). If the enrich policy and its enrich indices are consistent across all clusters, simply write the enrich command as you would without remote clusters. In this default mode, {{esql}} can execute the enrich command on either the local cluster or the remote clusters, aiming to minimize computation or inter-cluster data transfer. Ensuring that the policy exists with consistent data on both the local cluster and the remote clusters is critical for ES|QL to produce a consistent query result.

::::{tip}
Enrich in {{esql}} across clusters using the API key based security model was introduced in version **8.15.0**. Cross cluster API keys created in versions prior to 8.15.0 will need to be replaced or updated to use the new required permissions. Refer to the example in the [API key authentication](#esql-ccs-security-model-api-key) section.
::::

In the following example, the enrich with `hosts` policy can be executed on either the local cluster or the remote cluster `cluster_one`.

```esql
FROM my-index-000001,cluster_one:my-index-000001
| ENRICH hosts ON ip
| LIMIT 10
```

Enrich with an {{esql}} query against remote clusters only can also happen on the local cluster. This means the below query requires the `hosts` enrich policy to exist on the local cluster as well.

```esql
FROM cluster_one:my-index-000001,cluster_two:my-index-000001
| LIMIT 10
| ENRICH hosts ON ip
```

### Enrich with coordinator mode [esql-enrich-coordinator]

{{esql}} provides the enrich `_coordinator` mode to force {{esql}} to execute the enrich command on the local cluster. This mode should be used when the enrich policy is not available on the remote clusters or maintaining consistency of enrich indices across clusters is challenging.

```esql
FROM my-index-000001,cluster_one:my-index-000001
| ENRICH _coordinator:hosts ON ip
| SORT host_name
| LIMIT 10
```

::::{important}
Enrich with the `_coordinator` mode usually increases inter-cluster data transfer and workload on the local cluster.
::::

### Enrich with remote mode [esql-enrich-remote]

{{esql}} also provides the enrich `_remote` mode to force {{esql}} to execute the enrich command independently on each remote cluster where the target indices reside. This mode is useful for managing different enrich data on each cluster, such as detailed information of hosts for each region where the target (main) indices contain log events from these hosts.

In the below example, the `hosts` enrich policy is required to exist on all clusters: the `querying` cluster (as local indices are included), the remote cluster `cluster_one`, and `cluster_two`.
```esql
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH _remote:hosts ON ip
| SORT host_name
| LIMIT 10
```

A `_remote` enrich cannot be executed after a [`STATS`](commands/stats-by.md) command. The following example would result in an error:

```esql
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| STATS COUNT(*) BY ip
| ENRICH _remote:hosts ON ip
| SORT host_name
| LIMIT 10
```

### Multiple enrich commands [esql-multi-enrich]

You can include multiple enrich commands in the same query with different modes. {{esql}} will attempt to execute them accordingly. For example, this query performs two enriches, first with the `hosts` policy on any cluster and then with the `vendors` policy on the local cluster.

```esql
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH hosts ON ip
| ENRICH _coordinator:vendors ON os
| LIMIT 10
```

A `_remote` enrich command can’t be executed after a `_coordinator` enrich command. The following example would result in an error.

```esql
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH _coordinator:hosts ON ip
| ENRICH _remote:vendors ON os
| LIMIT 10
```

## Excluding clusters or indices from {{esql}} query [ccq-exclude]

To exclude an entire cluster, prefix the cluster alias with a minus sign in the `FROM` command, for example: `-my_cluster:*`:

```esql
FROM my-index-000001,cluster*:my-index-000001,-cluster_three:*
| LIMIT 10
```

To exclude a specific remote index, prefix the index with a minus sign in the `FROM` command, such as `my_cluster:-my_index`:

```esql
FROM my-index-000001,cluster*:my-index-*,cluster_three:-my-index-000001
| LIMIT 10
```

## Skipping problematic remote clusters [ccq-skip-unavailable-clusters]

{{ccs-cap}} for {{esql}} behavior when there are problems connecting to or running queries on remote clusters differs between versions.

::::{applies-switch}

:::{applies-item} stack: ga 9.1+
Remote clusters are configured with the `skip_unavailable: true` setting by default. With this setting, clusters are marked as `skipped` or `partial` rather than causing queries to fail in the following scenarios:

* The remote cluster is disconnected from the querying cluster, either before or during the query execution.
* The remote cluster does not have the requested index, or it is not accessible due to security settings.
* An error happened while processing the query on the remote cluster.

The `partial` status means the remote query either has errors or was interrupted by an explicit user action, but some data may be returned.

Queries will still fail when `skip_unavailable` is set to `true`, if none of the specified indices exist. For example, the following queries will fail:

```esql
FROM cluster_one:missing-index | LIMIT 10
FROM cluster_one:missing-index* | LIMIT 10
FROM cluster_one:missing-index*,cluster_two:missing-index | LIMIT 10
```
:::

:::{applies-item} stack: ga =9.0
If a remote cluster disconnects from the querying cluster, {{ccs}} for {{esql}} will set it to `skipped` and continue the query with other clusters, unless the remote cluster's `skip_unavailable` setting is set to `false`, in which case the query will fail.
:::

::::

## Query across clusters during an upgrade [ccq-during-upgrade]

You can still search a remote cluster while performing a rolling upgrade on the local cluster. However, the local coordinating node’s "upgrade from" and "upgrade to" version must be compatible with the remote cluster’s gateway node.

::::{warning}
Running multiple versions of {{es}} in the same cluster beyond the duration of an upgrade is not supported.
::::

For more information about upgrades, see [Upgrading {{es}}](docs-content://deploy-manage/upgrade/deployment-or-cluster.md).
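The `<remote_cluster_name>:<target>` convention used throughout the `FROM` examples above, including the minus-prefix exclusion forms, is straightforward to generate programmatically. A small Python sketch (the helper names are my own, not part of any Elasticsearch client library):

```python
def from_operand(target, cluster=None, exclude_cluster=False, exclude_index=False):
    """Render one FROM operand in the <remote_cluster_name>:<target> format.

    Exclusions use a minus prefix: `-cluster:*` excludes a whole cluster,
    while `cluster:-index` excludes a single remote index.
    """
    if exclude_index:
        target = "-" + target
    operand = target if cluster is None else "%s:%s" % (cluster, target)
    if exclude_cluster:
        operand = "-" + operand
    return operand


def build_from(*operands):
    """Join rendered operands into an ES|QL FROM command."""
    return "FROM " + ",".join(operands)


# Mirrors the cluster-exclusion example from the section above.
query = build_from(
    from_operand("my-index-000001"),
    from_operand("my-index-000001", "cluster*"),
    from_operand("*", "cluster_three", exclude_cluster=True),
)
```

The rendered string matches the documented syntax exactly, so it can be dropped into the `"query"` field of a `POST /_query` request body.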
unknown
github
https://github.com/elastic/elasticsearch
docs/reference/query-languages/esql/esql-cross-clusters.md
@file:OptIn(ExperimentalContracts::class)
@file:Suppress("LEAKED_IN_PLACE_LAMBDA", "WRONG_INVOCATION_KIND")

package kotlinx.coroutines

import kotlinx.coroutines.internal.*
import kotlinx.coroutines.intrinsics.*
import kotlinx.coroutines.selects.*
import kotlin.contracts.*
import kotlin.coroutines.*
import kotlin.coroutines.intrinsics.*
import kotlin.jvm.*
import kotlin.time.*
import kotlin.time.Duration.Companion.milliseconds

/**
 * Runs a given suspending [block] of code inside a coroutine with a specified [timeout][timeMillis] and throws
 * a [TimeoutCancellationException] if the timeout was exceeded.
 * If the given [timeMillis] is non-positive, [TimeoutCancellationException] is thrown immediately.
 *
 * The code that is executing inside the [block] is cancelled on timeout and the active or next invocation of
 * the cancellable suspending function inside the block throws a [TimeoutCancellationException].
 *
 * The sibling function that does not throw an exception on timeout is [withTimeoutOrNull].
 * Note that the timeout action can be specified for a [select] invocation with [onTimeout][SelectBuilder.onTimeout] clause.
 *
 * **The timeout event is asynchronous with respect to the code running in the block** and may happen at any time,
 * even right before the return from inside the timeout [block]. Keep this in mind if you open or acquire some
 * resource inside the [block] that needs closing or release outside the block.
 * See the
 * [Asynchronous timeout and resources](https://kotlinlang.org/docs/reference/coroutines/cancellation-and-timeouts.html#asynchronous-timeout-and-resources)
 * section of the coroutines guide for details.
 *
 * > Implementation note: how the time is tracked exactly is an implementation detail of the context's [CoroutineDispatcher].
 *
 * @param timeMillis timeout time in milliseconds.
 */
public suspend fun <T> withTimeout(timeMillis: Long, block: suspend CoroutineScope.() -> T): T {
    contract {
        callsInPlace(block, InvocationKind.EXACTLY_ONCE)
    }
    if (timeMillis <= 0L) throw TimeoutCancellationException("Timed out immediately")
    return suspendCoroutineUninterceptedOrReturn { uCont ->
        setupTimeout(TimeoutCoroutine(timeMillis, uCont), block)
    }
}

/**
 * Runs a given suspending [block] of code inside a coroutine with the specified [timeout] and throws
 * a [TimeoutCancellationException] if the timeout was exceeded.
 * If the given [timeout] is non-positive, [TimeoutCancellationException] is thrown immediately.
 *
 * The code that is executing inside the [block] is cancelled on timeout and the active or next invocation of
 * the cancellable suspending function inside the block throws a [TimeoutCancellationException].
 *
 * The sibling function that does not throw an exception on timeout is [withTimeoutOrNull].
 * Note that the timeout action can be specified for a [select] invocation with [onTimeout][SelectBuilder.onTimeout] clause.
 *
 * **The timeout event is asynchronous with respect to the code running in the block** and may happen at any time,
 * even right before the return from inside the timeout [block]. Keep this in mind if you open or acquire some
 * resource inside the [block] that needs closing or release outside the block.
 * See the
 * [Asynchronous timeout and resources](https://kotlinlang.org/docs/reference/coroutines/cancellation-and-timeouts.html#asynchronous-timeout-and-resources)
 * section of the coroutines guide for details.
 *
 * > Implementation note: how the time is tracked exactly is an implementation detail of the context's [CoroutineDispatcher].
 */
public suspend fun <T> withTimeout(timeout: Duration, block: suspend CoroutineScope.() -> T): T {
    contract {
        callsInPlace(block, InvocationKind.EXACTLY_ONCE)
    }
    return withTimeout(timeout.toDelayMillis(), block)
}

/**
 * Runs a given suspending block of code inside a coroutine with a specified [timeout][timeMillis] and returns
 * `null` if this timeout was exceeded.
 * If the given [timeMillis] is non-positive, `null` is returned immediately.
 *
 * The code that is executing inside the [block] is cancelled on timeout and the active or next invocation of
 * cancellable suspending function inside the block throws a [TimeoutCancellationException].
 *
 * The sibling function that throws an exception on timeout is [withTimeout].
 * Note that the timeout action can be specified for a [select] invocation with [onTimeout][SelectBuilder.onTimeout] clause.
 *
 * **The timeout event is asynchronous with respect to the code running in the block** and may happen at any time,
 * even right before the return from inside the timeout [block]. Keep this in mind if you open or acquire some
 * resource inside the [block] that needs closing or release outside the block.
 * See the
 * [Asynchronous timeout and resources](https://kotlinlang.org/docs/reference/coroutines/cancellation-and-timeouts.html#asynchronous-timeout-and-resources)
 * section of the coroutines guide for details.
 *
 * > Implementation note: how the time is tracked exactly is an implementation detail of the context's [CoroutineDispatcher].
 *
 * @param timeMillis timeout time in milliseconds.
 */
public suspend fun <T> withTimeoutOrNull(timeMillis: Long, block: suspend CoroutineScope.() -> T): T? {
    if (timeMillis <= 0L) return null

    var coroutine: TimeoutCoroutine<T?, T?>? = null
    try {
        return suspendCoroutineUninterceptedOrReturn { uCont ->
            val timeoutCoroutine = TimeoutCoroutine(timeMillis, uCont)
            coroutine = timeoutCoroutine
            setupTimeout<T?, T?>(timeoutCoroutine, block)
        }
    } catch (e: TimeoutCancellationException) {
        // Return null if it's our exception, otherwise propagate it upstream (e.g. in case of nested withTimeouts)
        if (e.coroutine === coroutine) {
            return null
        }
        throw e
    }
}

/**
 * Runs a given suspending block of code inside a coroutine with the specified [timeout] and returns
 * `null` if this timeout was exceeded.
 * If the given [timeout] is non-positive, `null` is returned immediately.
 *
 * The code that is executing inside the [block] is cancelled on timeout and the active or next invocation of
 * cancellable suspending function inside the block throws a [TimeoutCancellationException].
 *
 * The sibling function that throws an exception on timeout is [withTimeout].
 * Note that the timeout action can be specified for a [select] invocation with [onTimeout][SelectBuilder.onTimeout] clause.
 *
 * **The timeout event is asynchronous with respect to the code running in the block** and may happen at any time,
 * even right before the return from inside the timeout [block]. Keep this in mind if you open or acquire some
 * resource inside the [block] that needs closing or release outside the block.
 * See the
 * [Asynchronous timeout and resources](https://kotlinlang.org/docs/reference/coroutines/cancellation-and-timeouts.html#asynchronous-timeout-and-resources)
 * section of the coroutines guide for details.
 *
 * > Implementation note: how the time is tracked exactly is an implementation detail of the context's [CoroutineDispatcher].
 */
public suspend fun <T> withTimeoutOrNull(timeout: Duration, block: suspend CoroutineScope.() -> T): T? =
    withTimeoutOrNull(timeout.toDelayMillis(), block)

private fun <U, T : U> setupTimeout(
    coroutine: TimeoutCoroutine<U, T>,
    block: suspend CoroutineScope.() -> T
): Any? {
    // schedule cancellation of this coroutine on time
    val cont = coroutine.uCont
    val context = cont.context
    coroutine.disposeOnCompletion(context.delay.invokeOnTimeout(coroutine.time, coroutine, coroutine.context))
    // restart the block using a new coroutine with a new job,
    // however, start it undispatched, because we already are in the proper context
    return coroutine.startUndispatchedOrReturnIgnoreTimeout(coroutine, block)
}

private class TimeoutCoroutine<U, in T : U>(
    @JvmField val time: Long,
    uCont: Continuation<U> // unintercepted continuation
) : ScopeCoroutine<T>(uCont.context, uCont), Runnable {
    override fun run() {
        cancelCoroutine(TimeoutCancellationException(time, context.delay, this))
    }

    override fun nameString(): String =
        "${super.nameString()}(timeMillis=$time)"
}

/**
 * This exception is thrown by [withTimeout] to indicate timeout.
 */
public class TimeoutCancellationException internal constructor(
    message: String,
    @JvmField @Transient internal val coroutine: Job?
) : CancellationException(message), CopyableThrowable<TimeoutCancellationException> {
    /**
     * Creates a timeout exception with the given message.
     * This constructor is needed for exception stack-traces recovery.
     */
    internal constructor(message: String) : this(message, null)

    // message is never null in fact
    override fun createCopy(): TimeoutCancellationException =
        TimeoutCancellationException(message ?: "", coroutine).also { it.initCause(this) }
}

internal fun TimeoutCancellationException(
    time: Long,
    delay: Delay,
    coroutine: Job
): TimeoutCancellationException {
    val message = (delay as? DelayWithTimeoutDiagnostics)?.timeoutMessage(time.milliseconds)
        ?: "Timed out waiting for $time ms"
    return TimeoutCancellationException(message, coroutine)
}
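The `withTimeout` / `withTimeoutOrNull` pair above has a rough analogue in Python's `asyncio.wait_for`. This sketch is not related to kotlinx.coroutines; it only illustrates the same two contracts (throw on timeout vs. return null/None on timeout) in another runtime:

```python
import asyncio


async def with_timeout(seconds, coro):
    """Throwing variant: raises asyncio.TimeoutError if coro misses the deadline."""
    return await asyncio.wait_for(coro, timeout=seconds)


async def with_timeout_or_none(seconds, coro):
    """Null-returning variant: swallows only the timeout, like withTimeoutOrNull."""
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        return None


async def fast():
    await asyncio.sleep(0)
    return 42


async def slow():
    await asyncio.sleep(10)
    return 42


result = asyncio.run(with_timeout(1.0, fast()))        # completes in time
missed = asyncio.run(with_timeout_or_none(0.01, slow()))  # times out, yields None
```

As in the Kotlin version, the timed-out coroutine is cancelled rather than left running; `wait_for` cancels the awaited task before raising.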
kotlin
github
https://github.com/Kotlin/kotlinx.coroutines
kotlinx-coroutines-core/common/src/Timeout.kt
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""This module defines the data structures used to represent a grammar.

These are a bit arcane because they are derived from the data
structures used by Python's 'pgen' parser generator.

There's also a table here mapping operators to their names in the token
module; the Python tokenize module reports all operators as the
fallback token code OP, but the parser needs the actual token code.

"""

# Python imports
import pickle

# Local imports
from . import token, tokenize


class Grammar(object):
    """Pgen parsing tables conversion class.

    Once initialized, this class supplies the grammar tables for the
    parsing engine implemented by parse.py.  The parsing engine
    accesses the instance variables directly.  The class here does not
    provide initialization of the tables; several subclasses exist to
    do this (see the conv and pgen modules).

    The load() method reads the tables from a pickle file, which is
    much faster than the other ways offered by subclasses.  The pickle
    file is written by calling dump() (after loading the grammar
    tables using a subclass).  The report() method prints a readable
    representation of the tables to stdout, for debugging.

    The instance variables are as follows:

    symbol2number -- a dict mapping symbol names to numbers.  Symbol
                     numbers are always 256 or higher, to distinguish
                     them from token numbers, which are between 0 and
                     255 (inclusive).

    number2symbol -- a dict mapping numbers to symbol names; these two
                     are each other's inverse.

    states        -- a list of DFAs, where each DFA is a list of
                     states, each state is a list of arcs, and each
                     arc is a (i, j) pair where i is a label and j is
                     a state number.  The DFA number is the index into
                     this list.  (This name is slightly confusing.)
                     Final states are represented by a special arc of
                     the form (0, j) where j is its own state number.

    dfas          -- a dict mapping symbol numbers to (DFA, first)
                     pairs, where DFA is an item from the states list
                     above, and first is a set of tokens that can
                     begin this grammar rule (represented by a dict
                     whose values are always 1).

    labels        -- a list of (x, y) pairs where x is either a token
                     number or a symbol number, and y is either None
                     or a string; the strings are keywords.  The label
                     number is the index in this list; label numbers
                     are used to mark state transitions (arcs) in the
                     DFAs.

    start         -- the number of the grammar's start symbol.

    keywords      -- a dict mapping keyword strings to arc labels.

    tokens        -- a dict mapping token numbers to arc labels.

    """

    def __init__(self):
        self.symbol2number = {}
        self.number2symbol = {}
        self.states = []
        self.dfas = {}
        self.labels = [(0, "EMPTY")]
        self.keywords = {}
        self.tokens = {}
        self.symbol2label = {}
        self.start = 256

    def dump(self, filename):
        """Dump the grammar tables to a pickle file."""
        with open(filename, "wb") as f:
            pickle.dump(self.__dict__, f, 2)

    def load(self, filename):
        """Load the grammar tables from a pickle file."""
        with open(filename, "rb") as f:
            d = pickle.load(f)
        self.__dict__.update(d)

    def copy(self):
        """Copy the grammar."""
        new = self.__class__()
        for dict_attr in ("symbol2number", "number2symbol", "dfas",
                          "keywords", "tokens", "symbol2label"):
            setattr(new, dict_attr, getattr(self, dict_attr).copy())
        new.labels = self.labels[:]
        new.states = self.states[:]
        new.start = self.start
        return new

    def report(self):
        """Dump the grammar tables to standard output, for debugging."""
        from pprint import pprint
        print("s2n")
        pprint(self.symbol2number)
        print("n2s")
        pprint(self.number2symbol)
        print("states")
        pprint(self.states)
        print("dfas")
        pprint(self.dfas)
        print("labels")
        pprint(self.labels)
        print("start", self.start)


# Map from operator to number (since tokenize doesn't do this)

opmap_raw = """
( LPAR
) RPAR
[ LSQB
] RSQB
: COLON
, COMMA
; SEMI
+ PLUS
- MINUS
* STAR
/ SLASH
| VBAR
& AMPER
< LESS
> GREATER
= EQUAL
. DOT
% PERCENT
` BACKQUOTE
{ LBRACE
} RBRACE
@ AT
== EQEQUAL
!= NOTEQUAL
<> NOTEQUAL
<= LESSEQUAL
>= GREATEREQUAL
~ TILDE
^ CIRCUMFLEX
<< LEFTSHIFT
>> RIGHTSHIFT
** DOUBLESTAR
+= PLUSEQUAL
-= MINEQUAL
*= STAREQUAL
/= SLASHEQUAL
%= PERCENTEQUAL
&= AMPEREQUAL
|= VBAREQUAL
^= CIRCUMFLEXEQUAL
<<= LEFTSHIFTEQUAL
>>= RIGHTSHIFTEQUAL
**= DOUBLESTAREQUAL
// DOUBLESLASH
//= DOUBLESLASHEQUAL
-> RARROW
"""

opmap = {}
for line in opmap_raw.splitlines():
    if line:
        op, name = line.split()
        opmap[op] = getattr(token, name)
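The `dump()`/`load()` pair above persists the grammar by pickling the instance's `__dict__` and restoring it with `update()`. A self-contained sketch of that round-trip pattern (using a hypothetical `MiniGrammar` stand-in, since the real `Grammar` class needs its package-relative `token` import):

```python
import os
import pickle
import tempfile

# Minimal stand-in for the Grammar class above, illustrating the same
# persistence pattern: pickle the instance __dict__, then restore it.
class MiniGrammar:
    def __init__(self):
        self.symbol2number = {}
        self.start = 256

    def dump(self, filename):
        with open(filename, "wb") as f:
            pickle.dump(self.__dict__, f, 2)

    def load(self, filename):
        with open(filename, "rb") as f:
            self.__dict__.update(pickle.load(f))

g = MiniGrammar()
g.symbol2number["file_input"] = 257  # symbol numbers start at 256

path = os.path.join(tempfile.mkdtemp(), "grammar.pickle")
g.dump(path)

g2 = MiniGrammar()
g2.load(path)
print(g2.symbol2number)  # {'file_input': 257}
```

Pickling `__dict__` rather than the instance itself is what lets `load()` work even when the tables were built by a different `Grammar` subclass.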
python
codeparrot/codeparrot-clean
use std::fmt;

use derive_more::Error;

/// Copy of `http_range::HttpRangeParseError`.
#[derive(Debug, Clone)]
enum HttpRangeParseError {
    InvalidRange,
    NoOverlap,
}

impl From<http_range::HttpRangeParseError> for HttpRangeParseError {
    fn from(err: http_range::HttpRangeParseError) -> Self {
        match err {
            http_range::HttpRangeParseError::InvalidRange => Self::InvalidRange,
            http_range::HttpRangeParseError::NoOverlap => Self::NoOverlap,
        }
    }
}

#[derive(Debug, Clone, Error)]
#[non_exhaustive]
pub struct ParseRangeErr(#[error(not(source))] HttpRangeParseError);

impl fmt::Display for ParseRangeErr {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("invalid Range header: ")?;
        f.write_str(match self.0 {
            HttpRangeParseError::InvalidRange => "invalid syntax",
            HttpRangeParseError::NoOverlap => "range starts after end of content",
        })
    }
}

/// HTTP Range header representation.
#[derive(Debug, Clone, Copy)]
pub struct HttpRange {
    /// Start of range.
    pub start: u64,

    /// Length of range.
    pub length: u64,
}

impl HttpRange {
    /// Parses Range HTTP header string as per RFC 2616.
    ///
    /// `header` is HTTP Range header (e.g. `bytes=0-9`).
    /// `size` is full size of response (file).
    pub fn parse(header: &str, size: u64) -> Result<Vec<HttpRange>, ParseRangeErr> {
        let ranges =
            http_range::HttpRange::parse(header, size).map_err(|err| ParseRangeErr(err.into()))?;

        Ok(ranges
            .iter()
            .map(|range| HttpRange {
                start: range.start,
                length: range.length,
            })
            .collect())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    struct T(&'static str, u64, Vec<HttpRange>);

    #[test]
    fn test_parse() {
        let tests = vec![
            T("", 0, vec![]),
            T("", 1000, vec![]),
            T("foo", 0, vec![]),
            T("bytes=", 0, vec![]),
            T("bytes=7", 10, vec![]),
            T("bytes= 7 ", 10, vec![]),
            T("bytes=1-", 0, vec![]),
            T("bytes=5-4", 10, vec![]),
            T("bytes=0-2,5-4", 10, vec![]),
            T("bytes=2-5,4-3", 10, vec![]),
            T("bytes=--5,4--3", 10, vec![]),
            T("bytes=A-", 10, vec![]),
            T("bytes=A- ", 10, vec![]),
            T("bytes=A-Z", 10, vec![]),
            T("bytes= -Z", 10, vec![]),
            T("bytes=5-Z", 10, vec![]),
            T("bytes=Ran-dom, garbage", 10, vec![]),
            T("bytes=0x01-0x02", 10, vec![]),
            T("bytes= ", 10, vec![]),
            T("bytes= , , , ", 10, vec![]),
            T("bytes=0-9", 10, vec![HttpRange { start: 0, length: 10 }]),
            T("bytes=0-", 10, vec![HttpRange { start: 0, length: 10 }]),
            T("bytes=5-", 10, vec![HttpRange { start: 5, length: 5 }]),
            T("bytes=0-20", 10, vec![HttpRange { start: 0, length: 10 }]),
            T("bytes=15-,0-5", 10, vec![HttpRange { start: 0, length: 6 }]),
            T(
                "bytes=1-2,5-",
                10,
                vec![
                    HttpRange { start: 1, length: 2 },
                    HttpRange { start: 5, length: 5 },
                ],
            ),
            T(
                "bytes=-2 , 7-",
                11,
                vec![
                    HttpRange { start: 9, length: 2 },
                    HttpRange { start: 7, length: 4 },
                ],
            ),
            T(
                "bytes=0-0 ,2-2, 7-",
                11,
                vec![
                    HttpRange { start: 0, length: 1 },
                    HttpRange { start: 2, length: 1 },
                    HttpRange { start: 7, length: 4 },
                ],
            ),
            T("bytes=-5", 10, vec![HttpRange { start: 5, length: 5 }]),
            T("bytes=-15", 10, vec![HttpRange { start: 0, length: 10 }]),
            T("bytes=0-499", 10000, vec![HttpRange { start: 0, length: 500 }]),
            T("bytes=500-999", 10000, vec![HttpRange { start: 500, length: 500 }]),
            T("bytes=-500", 10000, vec![HttpRange { start: 9500, length: 500 }]),
            T("bytes=9500-", 10000, vec![HttpRange { start: 9500, length: 500 }]),
            T(
                "bytes=0-0,-1",
                10000,
                vec![
                    HttpRange { start: 0, length: 1 },
                    HttpRange { start: 9999, length: 1 },
                ],
            ),
            T(
                "bytes=500-600,601-999",
                10000,
                vec![
                    HttpRange { start: 500, length: 101 },
                    HttpRange { start: 601, length: 399 },
                ],
            ),
            T(
                "bytes=500-700,601-999",
                10000,
                vec![
                    HttpRange { start: 500, length: 201 },
                    HttpRange { start: 601, length: 399 },
                ],
            ),
            // Match Apache laxity:
            T(
                "bytes= 1 -2 , 4- 5, 7 - 8 , ,,",
                11,
                vec![
                    HttpRange { start: 1, length: 2 },
                    HttpRange { start: 4, length: 2 },
                    HttpRange { start: 7, length: 2 },
                ],
            ),
        ];

        for t in tests {
            let header = t.0;
            let size = t.1;
            let expected = t.2;

            let res = HttpRange::parse(header, size);

            if let Err(err) = res {
                if expected.is_empty() {
                    continue;
                } else {
                    panic!("parse({header}, {size}) returned error {err:?}");
                }
            }

            let got = res.unwrap();

            if got.len() != expected.len() {
                panic!(
                    "len(parseRange({}, {})) = {}, want {}",
                    header,
                    size,
                    got.len(),
                    expected.len()
                );
            }

            for i in 0..expected.len() {
                if got[i].start != expected[i].start {
                    panic!(
                        "parseRange({}, {})[{}].start = {}, want {}",
                        header, size, i, got[i].start, expected[i].start
                    )
                }
                if got[i].length != expected[i].length {
                    panic!(
                        "parseRange({}, {})[{}].length = {}, want {}",
                        header, size, i, got[i].length, expected[i].length
                    )
                }
            }
        }
    }
}
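The suffix-range cases in the test table (`bytes=-500`, `bytes=-15`, `bytes=-5`) all follow one rule: the requested suffix length is clamped to the body size, and the start is whatever remains. A self-contained sketch of just that arithmetic (this `suffix_range` helper is hypothetical, invented for illustration; it is not part of the crate):

```rust
// Hypothetical helper mirroring the "bytes=-N" (suffix) arithmetic the
// tests above exercise: the last `suffix` bytes of a `size`-byte body.
fn suffix_range(suffix: u64, size: u64) -> (u64, u64) {
    // Clamp: asking for the last 15 bytes of a 10-byte body yields the whole body.
    let length = suffix.min(size);
    (size - length, length)
}

fn main() {
    assert_eq!(suffix_range(500, 10_000), (9_500, 500)); // "bytes=-500" of 10000
    assert_eq!(suffix_range(15, 10), (0, 10));           // "bytes=-15" of 10
    assert_eq!(suffix_range(5, 10), (5, 5));             // "bytes=-5" of 10
    println!("ok");
}
```

The clamping is why `bytes=-15` against a 10-byte body parses successfully to the full body rather than failing.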
rust
github
https://github.com/actix/actix-web
actix-files/src/range.rs
import { PayloadAction, createSlice } from "@reduxjs/toolkit";

import { initializeFromLocalStorage } from "./initializeFromLocalStorage";

export const localStorageKeyCollapsedPools = "serviceDiscovery.collapsedPools";
export const localStorageKeyTargetHealthFilter =
  "serviceDiscovery.healthFilter";

interface ServiceDiscoveryPage {
  collapsedPools: string[];
  showLimitAlert: boolean;
}

const initialState: ServiceDiscoveryPage = {
  collapsedPools: initializeFromLocalStorage<string[]>(
    localStorageKeyCollapsedPools,
    []
  ),
  showLimitAlert: false,
};

export const serviceDiscoveryPageSlice = createSlice({
  name: "serviceDiscoveryPage",
  initialState,
  reducers: {
    setCollapsedPools: (state, { payload }: PayloadAction<string[]>) => {
      state.collapsedPools = payload;
    },
    setShowLimitAlert: (state, { payload }: PayloadAction<boolean>) => {
      state.showLimitAlert = payload;
    },
  },
});

export const { setCollapsedPools, setShowLimitAlert } =
  serviceDiscoveryPageSlice.actions;

export default serviceDiscoveryPageSlice.reducer;
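Under Redux Toolkit's Immer wrapper, each reducer in the slice above amounts to replacing one field of a two-field state. A dependency-free sketch of that same shape as a plain reducer function (hypothetical code for illustration, without `createSlice` or localStorage initialization):

```typescript
interface ServiceDiscoveryPage {
  collapsedPools: string[];
  showLimitAlert: boolean;
}

type Action =
  | { type: "setCollapsedPools"; payload: string[] }
  | { type: "setShowLimitAlert"; payload: boolean };

// Plain-function equivalent of the slice's reducers: each action
// replaces exactly one field and leaves the rest of the state untouched.
function reducer(
  state: ServiceDiscoveryPage,
  action: Action
): ServiceDiscoveryPage {
  switch (action.type) {
    case "setCollapsedPools":
      return { ...state, collapsedPools: action.payload };
    case "setShowLimitAlert":
      return { ...state, showLimitAlert: action.payload };
  }
}

const initial: ServiceDiscoveryPage = {
  collapsedPools: [],
  showLimitAlert: false,
};

const next = reducer(initial, {
  type: "setCollapsedPools",
  payload: ["pool-a"],
});

console.log(JSON.stringify(next)); // {"collapsedPools":["pool-a"],"showLimitAlert":false}
```

The real slice writes `state.collapsedPools = payload` directly because Immer converts that mutation into an equivalent immutable update behind the scenes.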
typescript
github
https://github.com/prometheus/prometheus
web/ui/mantine-ui/src/state/serviceDiscoveryPageSlice.ts