| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,594,065,098 | flutter | Possibility to anchor `showMenu` within its `RelativeRect` | ### Use case
When creating drop-down buttons, a very common use case is to align them with the right edge of their parent, like this:

But as far as I'm aware it is not possible to do this consistently since you can't access the width of the menu until it has been laid out, so you will only be able to align it to the left:
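The underlying geometry is simple. The sketch below (illustrative Python, not Flutter code, with a made-up function name) shows that right-edge alignment needs the menu's width up front, which is exactly the value that isn't available before layout:

```python
def right_aligned_left_edge(parent_right: float, menu_width: float) -> float:
    """Left x-coordinate that makes a menu's right edge meet parent_right.

    Illustrative only: it shows why the menu width must be known before
    positioning, which showMenu does not expose pre-layout.
    """
    return parent_right - menu_width

print(right_aligned_left_edge(300.0, 120.0))  # 180.0
```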

```dart
showMenu(
  position: _menuPosition(context),
  ...
)

RelativeRect _menuPosition(BuildContext context) {
  final box = context.findRenderObject() as RenderBox;
  final overlay = Overlay.of(context).context.findRenderObject() as RenderBox;
  const offset = Offset.zero;
  return RelativeRect.fromRect(
    Rect.fromPoints(
      box.localToGlobal(
        box.size.bottomLeft(offset),
        ancestor: overlay,
      ),
      box.localToGlobal(
        box.size.bottomRight(offset),
        ancestor: overlay,
      ),
    ),
    offset & overlay.size,
  );
}
```
### Proposal
It would be great if there were a `menuAlignment` argument to `showMenu` so that you could choose how to align the menu within the `RelativeRect`. | c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
2,594,068,226 | excalidraw | Add distribute tool for equidistant spacing | I am missing the distribute feature from Inkscape. It would be awesome if that could be added.
See
https://inkscape-manuals.readthedocs.io/en/latest/align-and-distribute.html
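For context, the core of a horizontal distribute operation is small. The sketch below is illustrative Python under assumed inputs, not Excalidraw code: it keeps the first and last elements fixed and makes the gaps between boxes equal, which is the Inkscape-style behavior the link describes.

```python
def distribute_horizontally(boxes):
    """Return new left edges so the horizontal gaps between boxes are equal.

    boxes: list of (left, width) tuples, assumed sorted by left edge.
    The first and last boxes stay in place; the rest are repositioned.
    """
    if len(boxes) < 3:
        return [left for left, _ in boxes]
    # Span from the left edge of the first box to the right edge of the last.
    total_span = (boxes[-1][0] + boxes[-1][1]) - boxes[0][0]
    total_width = sum(width for _, width in boxes)
    gap = (total_span - total_width) / (len(boxes) - 1)
    lefts, cursor = [], boxes[0][0]
    for _, width in boxes:
        lefts.append(cursor)
        cursor += width + gap
    return lefts

print(distribute_horizontally([(0, 10), (12, 10), (40, 10)]))  # [0, 20.0, 40.0]
```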

| good first issue,UX/UI | low | Minor |
2,594,078,271 | rust | An out-of-thin-air lifetime that may not live long enough vs. "the implementation is not general enough" | ### Code
````rust
fn test<F>(f: F)
where
    F: for<'b> FnOnce(&'b i32),
{
}

fn main() {
    test(|s: &'static i32| {});
}
````
### Current output
```
--> src/main.rs:11:11
|
11 | test(|s:&'static i32|{});
| ^ help: if this is intentional, prefix it with an underscore: `_s`
error: lifetime may not live long enough
--> src/main.rs:11:11
|
11 | test(|s:&'static i32|{});
| ^ - let's call the lifetime of this reference `'1`
| |
| requires that `'1` must outlive `'static`
```
### Desired output
> The implementation is not general enough
### Rationale and extra context
The closure parameter is explicitly specified as `&'static i32`, so where does the lifetime `'1` come from? Moreover, the lifetime `'static` should outlive any other lifetime; however, the diagnostic instead says
> lifetime may not live long enough
Presumably, the implementation of trait `FnOnce` for the closure type is only for `'static`, which is not general enough.
### Other cases
_No response_
### Rust Version
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-apple-darwin
release: 1.81.0
LLVM version: 18.1.7
### Anything else?
_No response_ | A-diagnostics,T-compiler,A-NLL,NLL-diagnostics,D-confusing,D-terse,A-higher-ranked | low | Critical |
2,594,086,834 | tensorflow | DLPack with Int32 tensor on the GPU: inconsistent eager mode / graph mode / XLA | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
v1.12.1-117097-gecf05620570 2.19.0-dev20241016
### Custom code
No
### OS platform and distribution
Linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.12
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Hello,
I realize that `int32` is a special dtype in TensorFlow for historical reasons. It seems that the handling of GPU int32-typed tensors has evolved over time.
Currently, the `device` field of a tensor created with:
```py
with tf.device('gpu'):
    x = tf.constant([0, 1, 2], tf.int32)
```
*does* indicate it's a GPU tensor: `/job:localhost/replica:0/task:0/device:GPU:0`.
However, when exporting and re-importing it via DLPack, it comes back as a CPU tensor.
There even seems to be a unit test validating this:
https://github.com/tensorflow/tensorflow/blob/d3de971a7348ecaefdbb920e580c37ebde10d780/tensorflow/python/dlpack/dlpack_test.py#L75-L78
However, @jhoydis found that this is *not* consistent between modes. In particular, if the tensor goes through an XLA-compiled function, it will correctly live on the GPU even after a round-trip through DLPack. (See reproducer below).
Would it please be possible to revisit this behavior, so that **exporting an int32 GPU tensor via DLPack does result in a GPU DLPack capsule in all modes, not just XLA?**
### Standalone code to reproduce the issue
```python
import tensorflow as tf

def f_eager(x):
    return x

f_graph = tf.function()(f_eager)
f_xla = tf.function(jit_compile=True)(f_eager)

with tf.device('gpu'):
    x = tf.constant([0, 1, 2], tf.int32)
print("Original tensor:", x.device)

dlcapsule = tf.experimental.dlpack.to_dlpack(x)
x_ = tf.experimental.dlpack.from_dlpack(dlcapsule)
print("Default:", x_.device)

dlcapsule = tf.experimental.dlpack.to_dlpack(f_eager(x))
x_ = tf.experimental.dlpack.from_dlpack(dlcapsule)
print("Eager:", x_.device)

dlcapsule = tf.experimental.dlpack.to_dlpack(f_graph(x))
x_ = tf.experimental.dlpack.from_dlpack(dlcapsule)
print("Graph:", x_.device)

dlcapsule = tf.experimental.dlpack.to_dlpack(f_xla(x))
x_ = tf.experimental.dlpack.from_dlpack(dlcapsule)
print("XLA:", x_.device)
```
### Relevant log output
```shell
Original tensor: /job:localhost/replica:0/task:0/device:GPU:0
Default: /job:localhost/replica:0/task:0/device:CPU:0
Eager: /job:localhost/replica:0/task:0/device:CPU:0
Graph: /job:localhost/replica:0/task:0/device:CPU:0
XLA: /job:localhost/replica:0/task:0/device:GPU:0
```
| stat:awaiting tensorflower,type:bug,comp:apis,comp:gpu | medium | Critical |
2,594,181,170 | pytorch | Inconsistent behavior of `torch.tensor` when converting `NaN` to int32 between plain list and NumPy array inputs, and across architectures | ### 🐛 Describe the bug
When converting a `NaN` value to `int32`, the behavior of `torch.tensor` differs across input types and platforms.
If the input is a Python built-in list containing a NaN value, an exception is raised. However, if the list is wrapped in a NumPy array, `torch.tensor` silently casts NaN to 0 (on macOS) or -2147483648 (on Linux).
```python
import torch
import numpy as np
numpy_array = np.array([np.nan], dtype=np.float64)
out = torch.tensor(numpy_array, dtype=torch.int32)
# tensor([0], dtype=torch.int32) on arm
# tensor([-2147483648], dtype=torch.int32) on x86
print(out)
normal_list = [np.nan]
out = torch.tensor(normal_list, dtype=torch.int32) # ValueError: cannot convert float NaN to integer
```
Silently casting NaN may cause trouble in debugging. Since such a check is already implemented for plain list input, it would be helpful to raise the same exception for NumPy array input, which would also make the behavior consistent across operating systems.
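As an illustration of the requested check (a sketch, not a PyTorch API; the function name is made up), a caller-side guard can reject NaNs the same way CPython's `int(float('nan'))` does:

```python
import math

def safe_int_list(values):
    """Convert floats to ints, raising on NaN like int(float('nan')) does.

    A defensive wrapper one could apply before constructing an int32 tensor;
    illustrative only, not part of the PyTorch API.
    """
    for v in values:
        if math.isnan(v):
            raise ValueError("cannot convert float NaN to integer")
    return [int(v) for v in values]

print(safe_int_list([1.0, 2.5]))  # [1, 2]
```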
### Versions
# MacOS:
Collecting environment information...
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.30.4
Libc version: N/A
Python version: 3.9.20 (main, Oct 3 2024, 02:24:59) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] torch==2.4.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
# Linux:
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 16.0.1 (https://github.com/llvm/llvm-project.git cd89023f797900e4492da58b7bed36f702120011)
CMake version: version 3.23.2
Libc version: glibc-2.34
Python version: 3.9.18 (main, Aug 23 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 11.2.67
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA TITAN RTX
GPU 2: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU(s) scaling MHz: 78%
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] Could not collect
cc @malfet @albanD @mruberry @rgommers @snadampal @milpuz01 | module: error checking,triaged | low | Critical |
2,594,190,258 | TypeScript | Add a new type of class declaration to support mixins | ### 🔍 Search Terms
"mixin", "InstanceType", "typeof", "class"
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
When I run a mixin function, it outputs just the value of the returned expression, not the type
```ts
function myMixin<T extends new (...args: any[]) => any>(ctor: T) {
  return class extends ctor {
    a = 1;
  };
}

class MyClass {
  b = 2;
}

const Result = myMixin(MyClass);
// ↓ Result refers to a value, but is being used as a type here. Did you mean typeof Result? ↓
const instance: Result = new Result();
```
So I would need to define the type too
```ts
// ...
const Result = myMixin(MyClass);
type Result = InstanceType<typeof Result>;
// ...
```
I suggest adding a new syntax to define both a variable and a type with the same name
```ts
// ...
const Result = myMixin(MyClass) as type; // "as class" is also a viable option
// ...
```
Examples:
```ts
namespace Something {
  export class Idk {
    c = 1;
  }
}
const a = Something.Idk as type; // Ok

function someFunc() {
  return class {
    e = 1;
  };
}
const b = someFunc() as type; // Ok

const c = 1 as type; // Error: Expression "1" doesn't have any type attached
```
### ⭐ OPTIONAL extra generalised suggestion
Additionally, it would be nice if we could be able to generalise this process for all variables that also have an attached type
```ts
const a = 1;
type a = 2; // The type is unrelated to the value, no usage of this comes to my mind at the moment, but it already works
const b = a;
type b = a;
// Or
const b = a as type; // Defines a value from the `a` variable and a type from the `a` type
```
But it wouldn't work out of the box for the main suggestion: since `Result` doesn't actually have any type attached, it would require functions (`myMixin()`, for example) to be able to have attached return types.
### 📃 Motivating Example
(Mentioned on the suggestion body)
### 💻 Use Cases
### What do you want to use this for?
To improve the development of mixins in general
### What shortcomings exist with current approaches?
Too verbose, and it feels like something TypeScript should do by default
```ts
const Result = myMixin(MyClass);
type Result = InstanceType<typeof Result>;
```
Creates an actual runtime class, which adds overhead, although it's minimal
```ts
class Result extends myMixin(MyClass) { }
```
### What workarounds are you using in the meantime?
The runtime class one | Suggestion,Awaiting More Feedback | low | Critical |
2,594,214,036 | rust | `IntErrorKind` should derive `Copy` and `Hash` | When including std errors as part of custom types, I've noticed that [`IntErrorKind`](https://doc.rust-lang.org/std/num/enum.IntErrorKind.html) does not derive the `Copy` or the `Hash` traits, and I think it most probably should, since it would make it easier to propagate and embed it. For example [`IoErrorKind`](https://doc.rust-lang.org/std/io/enum.ErrorKind.html) does in fact derive them. | T-libs-api,C-feature-request | low | Critical |
2,594,216,636 | pytorch | [profiler] are CUDA events mutually exclusive with _ExperimentalConfig.profiler_metrics? | ### 🐛 Describe the bug
Trying to profile with the `_ExperimentalConfig` options, it looks like CUDA events are not captured when `profiler_metrics` is non-empty: in the following case, `trace_handler` displays nothing. Are they mutually exclusive, or am I using the API (and the metrics) the wrong way? Thanks.
Minimal repro:
```python
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity
from torch._C._profiler import _ExperimentalConfig

model = models.resnet18().cuda()

def trace_handler(prof):
    print(prof.key_averages().table(sort_by="self_cuda_time_total", row_limit=10))
    prof.export_chrome_trace("trace.json")

with profile(
    # activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    activities=[ProfilerActivity.CUDA],
    on_trace_ready=trace_handler,
    experimental_config=_ExperimentalConfig(
        # For the record, not sure whether they are valid metrics, comments are appreciated
        profiler_metrics=["achieved_occupancy", "branch_efficiency", "dram_read_bytes", "inst_executed"],
    ),
) as prof:
    model(torch.randn(1, 3, 224, 224).cuda())
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.36
Python version: 3.11.9 (main, Aug 13 2024, 15:13:18) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8457C
Stepping: 8
CPU MHz: 2600.000
BogoMIPS: 5200.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 288 KiB
L1i cache: 192 KiB
L2 cache: 12 MiB
L3 cache: 97.5 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.4.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.4.0.dev20240812+cu124
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] Could not collect
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | oncall: profiler | low | Critical |
2,594,248,473 | material-ui | After applying high contrast themes, there is no differentiation between selected and non-selected controls in Windows. | ### Steps to reproduce
Link to live example: (required)
https://mui.com/material-ui/react-button/
Prerequisites:
Turn on High Contrast Aquatic/Desert Modes (Go to settings->Contrast Themes->High Contrast aquatic/Desert Themes).
Steps:
1. Open URL-https://mui.com/material-ui/react-button/
2. "React Material UI." page will get Open.
3. Press the Tab key to navigate to the left navigation section and select the radio group option.
4. Press the Tab key to reach the "Customization" section.
5. Using the Tab key, verify that after applying high contrast themes there is no differentiation between the selected and non-selected radio button controls.
### Current behavior
After applying high contrast themes, there is no differentiation between selected and non-selected radio buttons.
### Expected behavior
After applying high contrast themes, there should be differentiation, or border should be present on selected radio button.
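One possible remedy, sketched here with illustrative selector names rather than MUI's real class names, is a `forced-colors` media query that restores a visible indicator on the checked state using CSS system color keywords:

```css
/* Sketch only: .radio-input and .radio-icon are placeholder selectors. */
@media (forced-colors: active) {
  .radio-input:checked + .radio-icon {
    outline: 2px solid Highlight; /* system color keyword honored in contrast themes */
    outline-offset: 2px;
  }
}
```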
### Context
_No response_
### Your environment
Radio Group (Planned)
**Search keywords**: contrast themes, Radio group | bug 🐛,waiting for 👍,component: button,ready to take,platform: windows | low | Minor |
2,594,280,779 | ant-design | ColorPicker with presets doesn't correctly recognizes screen boundaries | ### Reproduction link
[https://ant.design/components/color-picker#color-picker-demo-presets](https://ant.design/components/color-picker#color-picker-demo-presets)
### Steps to reproduce
- Open the documentation page
- Scroll to the position shown on the screenshot
- Open color picker
<img width="681" alt="image" src="https://github.com/user-attachments/assets/a85efbcc-5409-478f-81cf-b9ed05f45848">
### What is expected?
The color picker detects that some presets extend beyond the screen edge and changes the placement of the popup accordingly.
### What is actually happening?
Some presets are hidden
| Environment | Info |
| --- | --- |
| antd | 5.21.4 |
| React | 18 |
| System | Mac OS |
| Browser | Chrome |
---
In this particular case on the documentation page, it's possible to scroll the page a little. However, in other cases scrolling might not be available.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Minor |
2,594,289,630 | pytorch | Torch Dynamo support for Flux Transformer model | ### 🐛 Describe the bug
I'm using the script below to export the Flux Transformer model to ONNX using torch.onnx.dynamo_export(). However, I run into a TypeError relating to an attribute type.
The script below can be used to reproduce the issue:
```python
import torch
from diffusers.models import FluxTransformer2DModel

# Load Model
model_dir = "black-forest-labs/FLUX.1-dev"
device = "cuda"
model = FluxTransformer2DModel.from_pretrained(model_dir, subfolder="transformer", torch_dtype=torch.float16).to(device)

# Define Inputs
dtype = torch.float32
tensor_dtype = torch.float16
batch_size = 1
text_maxlen = 512
latent_height, latent_width = 1024 // 8, 1024 // 8
config = {
    'in_channels': 64,
    'joint_attention_dim': 4096,
    'pooled_projection_dim': 768
}
inputs = {
    'hidden_states': torch.randn(batch_size, (latent_height // 2) * (latent_width // 2), config['in_channels'], dtype=tensor_dtype, device=device),
    'encoder_hidden_states': torch.randn(batch_size, text_maxlen, config['joint_attention_dim'], dtype=tensor_dtype, device=device),
    'pooled_projections': torch.randn(batch_size, config['pooled_projection_dim'], dtype=tensor_dtype, device=device),
    'timestep': torch.tensor([1.] * batch_size, dtype=tensor_dtype, device=device),
    'img_ids': torch.randn((latent_height // 2) * (latent_width // 2), 3, dtype=dtype, device=device),
    'txt_ids': torch.randn(text_maxlen, 3, dtype=dtype, device=device),
    'guidance': torch.tensor([1.] * batch_size, dtype=dtype, device=device),
}

# Export to ONNX
out = torch.onnx.dynamo_export(model, **inputs)
out.save("transformer_dynamo.onnx")
```
The error is pasted below:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/_exporter_legacy.py", line 801, in dynamo_export
).export()
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/_exporter_legacy.py", line 570, in export
onnxscript_graph = fx_interpreter.run(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 146, in wrapper
ctx.log_and_raise_if_error(diag)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 355, in log_and_raise_if_error
raise diagnostic.source_exception
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 130, in wrapper
return_values = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 537, in run
self.run_node(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 146, in wrapper
ctx.log_and_raise_if_error(diag)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 355, in log_and_raise_if_error
raise diagnostic.source_exception
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 130, in wrapper
return_values = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 450, in run_node
self.call_module(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 738, in call_module
sub_onnxscript_graph = self.run(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 146, in wrapper
ctx.log_and_raise_if_error(diag)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 355, in log_and_raise_if_error
raise diagnostic.source_exception
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 130, in wrapper
return_values = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 537, in run
self.run_node(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 146, in wrapper
ctx.log_and_raise_if_error(diag)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 355, in log_and_raise_if_error
raise diagnostic.source_exception
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 130, in wrapper
return_values = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 450, in run_node
self.call_module(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 738, in call_module
sub_onnxscript_graph = self.run(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 146, in wrapper
ctx.log_and_raise_if_error(diag)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 355, in log_and_raise_if_error
raise diagnostic.source_exception
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 130, in wrapper
return_values = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 537, in run
self.run_node(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 146, in wrapper
ctx.log_and_raise_if_error(diag)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 355, in log_and_raise_if_error
raise diagnostic.source_exception
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 130, in wrapper
return_values = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 440, in run_node
self.call_function(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py", line 656, in call_function
) = symbolic_fn(*onnx_args, **onnx_kwargs)
File "/usr/local/lib/python3.10/dist-packages/onnxscript/values.py", line 583, in __call__
return self.func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/onnxscript/function_libs/torch_lib/ops/nn.py", line 1883, in aten__scaled_dot_product_flash_attention
) = _aten__scaled_dot_product_flash_attention_fillin_empty_outputs(query)
File "/usr/local/lib/python3.10/dist-packages/onnxscript/function_libs/torch_lib/ops/nn.py", line 1842, in _aten__scaled_dot_product_flash_attention_fillin_empty_outputs
op.Constant(value=onnx.helper.make_tensor("Empty_INTS", INT64.dtype, [0], []))
File "/usr/local/lib/python3.10/dist-packages/onnxscript/onnx_opset/_impl/opset13.py", line 453, in Constant
return op(
File "/usr/local/lib/python3.10/dist-packages/onnxscript/values.py", line 301, in __call__
return evaluator.default().eval(schema, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/onnxscript/function_libs/torch_lib/graph_building/_graph_building_torch.py", line 346, in eval
return self._graph.add_op_call(schema, inputs, attributes)
File "/usr/local/lib/python3.10/dist-packages/onnxscript/function_libs/torch_lib/graph_building/_graph_building_torch.py", line 888, in add_op_call
result = self._add_torchscript_op_call(
File "/usr/local/lib/python3.10/dist-packages/onnxscript/function_libs/torch_lib/graph_building/_graph_building_torch.py", line 768, in _add_torchscript_op_call
result = _create_op_call_in_torch_graph(
File "/usr/local/lib/python3.10/dist-packages/onnxscript/function_libs/torch_lib/graph_building/_graph_building_torch.py", line 508, in _create_op_call_in_torch_graph
_add_attribute_to_torchscript_node(node, key, value)
File "/usr/local/lib/python3.10/dist-packages/onnxscript/function_libs/torch_lib/graph_building/_graph_building_torch.py", line 470, in _add_attribute_to_torchscript_node
raise TypeError(
TypeError: Unsupported attribute type '<class 'onnx.onnx_ml_pb2.TensorProto'>' for attribute 'value' in node=%146 : Tensor = onnx::Constant()
, value is dims: 0
data_type: 7
name: "Empty_INTS"
```
### Versions
transformers 4.42.2
torch 2.6.0.dev20241016+cu124
diffusers 0.31.0.dev0
| module: onnx,triaged | low | Critical |
2,594,294,708 | electron | "second-instance" is not triggered with Deep Links when running with admin privileges | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.0.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
When using standard [Deep Links](https://www.electronjs.org/docs/latest/tutorial/launch-app-from-url-in-another-app) with elevated permission on the app, whether it's with an administrator terminal when using a development local server or when running the app as administrator, the `app.on("second-instance")` should be triggered when using the specified protocol url.
### Actual Behavior
`app.on("second-instance")` is not triggered when opening a [Deep Link](https://www.electronjs.org/docs/latest/tutorial/launch-app-from-url-in-another-app) protocol URL while the app has administrator privileges. This happens both when using a local development server from an elevated terminal and when running a production build as administrator (tested with electron-forge makers: WiX and Squirrel).
You can test it by just adding a standard Deep Link following the official documentation [here](https://www.electronjs.org/docs/latest/tutorial/launch-app-from-url-in-another-app)
### Testcase Gist URL
https://gist.github.com/gabriel-demoura-IMTF/9d0b4805ba0a5d26b1dded119d0557bb
### Additional Information
An issue was created 2 years ago, but it was closed with a request to open a new one if the problem still occurs in newer versions: https://github.com/electron/electron/issues/35681 | platform/windows,bug :beetle:,has-repro-gist,33-x-y | low | Critical |
2,594,297,653 | terminal | Broadcast Exit to multiple panes crash | ### Windows Terminal version
1.22.2702.0_x64
### Windows build number
10.0.22631.4317
### Other Software
_No response_
### Steps to reproduce
1. Open the Windows Terminal Preview
2. Create a second pane.
3. Toggle broadcast input to all panes (you should now be able to type in all panes)
4. Press Ctrl + D or type "exit"
5. The terminal crashes, even if you have other tabs open.
### Expected Behavior
I'd expect it to exit out of the tab, so that if you have other tabs you can return to using them.
### Actual Behavior
Upon exiting with multiple panes at the same time, it crashes and the program shuts down. | Needs-Repro,Issue-Bug,Area-UserInterface,Severity-Crash,Product-Terminal | low | Critical |
2,594,333,695 | pytorch | Torch Dynamo support for Flux T5 model | ### 🐛 Describe the bug
I'm using the script below to export the Flux T5 model to ONNX using torch.onnx.dynamo_export(). However, I run into an error due to missing support for `fused_layer_norm_cuda.PyCapsule.rms_forward_affine`.
The script below can be used to reproduce the issue:
```python
import torch
from transformers import T5EncoderModel
# Load Model
model_dir = "black-forest-labs/FLUX.1-dev"
device = "cuda"
model = T5EncoderModel.from_pretrained(model_dir, subfolder="text_encoder_2").to(device)
# Define Input
inputs = (
torch.zeros(1, 512, dtype=torch.int32, device=device)
)
# Export to ONNX
out = torch.onnx.dynamo_export(
model,
inputs,
)
out.save("t5_dynamo.onnx")
```
The error is pasted below:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/_exporter_legacy.py", line 1474, in dynamo_export
).export()
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/_exporter_legacy.py", line 1200, in export
graph_module = self.options.fx_tracer.generate_fx(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 196, in generate_fx
graph_module, graph_guard = torch._dynamo.export(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 1430, in inner
result_traced = opt_f(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 464, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 151, in wrapped
return output_adapter.apply(model_func(*args, **kwargs), model=model)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1224, in __call__
return self._torchdynamo_orig_callable(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 514, in __call__
return _compile(
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 896, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 662, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 85, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "/usr/local/lib/python3.10/dist-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 697, in _compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 631, in transform
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2722, in run
super().run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1654, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1642, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 383, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 322, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 106, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1654, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1642, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 383, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 322, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 106, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1654, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1642, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 383, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 322, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 106, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1564, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1642, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 383, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 322, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 106, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1564, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 322, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 106, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1642, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/misc.py", line 954, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/misc.py", line 711, in call_method
return self.call_apply(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/misc.py", line 636, in call_apply
).call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 1826, in call_function
(fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 528, in speculate_subgraph
raise ex
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 457, in speculate_subgraph
output = f.call_function(tx, args, sub_kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 322, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 106, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 810, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2937, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3053, in inline_call_
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 957, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 869, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 557, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1564, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 731, in call_function
unimplemented(msg)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/exc.py", line 283, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin fused_layer_norm_cuda.PyCapsule.rms_forward_affine. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
from user code:
File "/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py", line 1971, in forward
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py", line 1106, in forward
layer_outputs = layer_module(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py", line 686, in forward
self_attention_outputs = self.layer[0](
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py", line 592, in forward
normed_hidden_states = self.layer_norm(hidden_states)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py", line 416, in forward
return fused_rms_norm_affine(
File "/usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py", line 215, in fused_rms_norm_affine
return FusedRMSNormAffineFunction.apply(*args)
File "/usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py", line 75, in forward
output, invvar = fused_layer_norm_cuda.rms_forward_affine(
```
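The failing call computes a root-mean-square layer norm. As a sanity check of what the fused kernel does, here is a stdlib-only sketch of the same math (an illustrative reimplementation, not apex's kernel); in torch the equivalent tensor-op formulation is what Dynamo can trace without a graph break:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """RMS layer norm: x / sqrt(mean(x**2) + eps) * weight.

    Sketch of the math that apex's fused rms_forward_affine kernel
    computes as a CUDA extension; expressing it with plain (tensor)
    arithmetic avoids the pybind call that Dynamo cannot trace.
    """
    mean_sq = sum(v * v for v in x) / len(x)
    inv_rms = 1.0 / math.sqrt(mean_sq + eps)
    return [v * inv_rms * w for v, w in zip(x, weight)]
```

One commonly suggested way around the graph break is to avoid the apex module for this layer (transformers' stock `T5LayerNorm` computes the same formula with tensor ops), or to follow the error's hints and wrap the extension as a custom op or mark it with `torch.compiler.allow_in_graph`.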
### Versions
transformers 4.42.2
diffusers 0.31.0.dev0
torch 2.5.0a0+b465a5843b.nv24.9 (Nvidia NGC 24.09 PyTorch container) | module: onnx,triaged | low | Critical |
2,594,346,608 | pytorch | Compile error with -WERROR on clang-12 | ### 🐛 Describe the bug
EDIT(albanD): removed llm example showing the final result works and reworded a little bit.
When trying to compile the latest torch version on my system with an AMD CPU, the compilation failed. To get it to compile, I had to modify the file specified below.
The file I had to modify was `/home/myles/pytorch/test/cpp/api/CMakeLists.txt`; at line 59 I replaced the existing block with the text below.
```C++
if(NOT MSVC)
# Clang has an unfixed bug leading to spurious missing braces warnings
# see https://bugs.llvm.org/show_bug.cgi?id=21629
target_compile_options_if_supported(test_api "-Wno-missing-braces")
# Considered to be flaky. See the discussion at
# https://github.com/pytorch/pytorch/pull/9608
target_compile_options_if_supported(test_api "-Wno-maybe-uninitialized")
# gcc gives nonsensical warnings about variadic.h
target_compile_options_if_supported(test_api "-Wno-unused-but-set-parameter")
# Suppress warnings for deprecated declarations and nonnull errors
target_compile_options(test_api PRIVATE "-Wno-deprecated-declarations" "-Wno-error=nonnull")
endif()
```
### Error logs
```shell
FAILED: test_api/CMakeFiles/test_api.dir/dataloader.cpp.o
/usr/bin/ccache /usr/bin/c++ -DFLASHATTENTION_DISABLE_ALIBI -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_C10D_MPI -DUSE_C10D_NCCL -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -I/home/myles/pytorch/build/aten/src -I/home/myles/pytorch/aten/src -I/home/myles/pytorch/build -I/home/myles/pytorch -I/home/myles/pytorch/cmake/../third_party/benchmark/include -I/home/myles/pytorch/third_party/onnx -I/home/myles/pytorch/build/third_party/onnx -I/home/myles/pytorch/nlohmann -I/home/myles/pytorch/build/caffe2/../aten/src -I/home/myles/pytorch/torch/csrc/api -I/home/myles/pytorch/torch/csrc/api/include -I/home/myles/pytorch/c10/.. -I/home/myles/pytorch/c10/cuda/../.. -isystem /home/myles/pytorch/build/third_party/gloo -isystem /home/myles/pytorch/cmake/../third_party/gloo -isystem /home/myles/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/myles/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/myles/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/myles/pytorch/third_party/protobuf/src -isystem /home/myles/pytorch/third_party/XNNPACK/include -isystem /home/myles/pytorch/third_party/ittapi/include -isystem /home/myles/pytorch/cmake/../third_party/eigen -isystem /usr/local/cuda-12.4/include -isystem /home/myles/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /home/myles/pytorch/third_party/ideep/include -isystem /home/myles/pytorch/INTERFACE -isystem /home/myles/pytorch/third_party/nlohmann/include -isystem /home/myles/pytorch/third_party/googletest/googletest/include -isystem /home/myles/pytorch/third_party/googletest/googletest -D_GLIBCXX_USE_CXX11_ABI=1 -Wno-error -Wno-non-virtual-dtor -Wno-return-type -fvisibility-inlines-hidden -DUSE_PTHREADPOOL 
-DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIE -DMKL_HAS_SBGEMM -DMKL_HAS_SHGEMM -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Wno-missing-braces -Wno-maybe-uninitialized -Wno-unused-but-set-parameter -MD -MT test_api/CMakeFiles/test_api.dir/dataloader.cpp.o -MF test_api/CMakeFiles/test_api.dir/dataloader.cpp.o.d -o test_api/CMakeFiles/test_api.dir/dataloader.cpp.o -c /home/myles/pytorch/test/cpp/api/dataloader.cpp
In file included from /usr/include/c++/12/bits/stl_uninitialized.h:63,
from /usr/include/c++/12/memory:65,
from /home/myles/pytorch/third_party/googletest/googletest/include/gtest/gtest.h:57,
from /home/myles/pytorch/test/cpp/api/dataloader.cpp:1:
In static member function ‘static _Tp* std::__copy_move<_IsMove, true, std::random_access_iterator_tag>::__copy_m(const _Tp*, const _Tp*, _Tp*) [with _Tp = long unsigned int; bool _IsMove = false]’,
inlined from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false; _II = const long unsigned int*; _OI = long unsigned int*]’ at /usr/include/c++/12/bits/stl_algobase.h:495:30,
inlined from ‘_OI std::__copy_move_a1(_II, _II, _OI) [with bool _IsMove = false; _II = const long unsigned int*; _OI = long unsigned int*]’ at /usr/include/c++/12/bits/stl_algobase.h:522:42,
inlined from ‘_OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false; _II = __gnu_cxx::__normal_iterator<const long unsigned int*, vector<long unsigned int> >; _OI = __gnu_cxx::__normal_iterator<long unsigned int*, vector<long unsigned int> >]’ at /usr/include/c++/12/bits/stl_algobase.h:529:31,
inlined from ‘_OI std::copy(_II, _II, _OI) [with _II = __gnu_cxx::__normal_iterator<const long unsigned int*, vector<long unsigned int> >; _OI = __gnu_cxx::__normal_iterator<long unsigned int*, vector<long unsigned int> >]’ at /usr/include/c++/12/bits/stl_algobase.h:620:7,
inlined from ‘std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = long unsigned int; _Alloc = std::allocator<long unsigned int>]’ at /usr/include/c++/12/bits/vector.tcc:244:21:
/usr/include/c++/12/bits/stl_algobase.h:431:30: error: argument 1 null where non-null expected [-Werror=nonnull]
431 | __builtin_memmove(__result, __first, sizeof(_Tp) * _Num);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/12/bits/stl_algobase.h:431:30: note: in a call to built-in function ‘void* __builtin_memmove(void*, const void*, long unsigned int)’
cc1plus: some warnings being treated as errors
```
### Minified repro
_No response_
### Versions
```shell
curl -OL https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python3 collect_env.py
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 23357 100 23357 0 0 63001 0 --:--:-- --:--:-- --:--:-- 63127
Collecting environment information...
PyTorch version: 2.6.0a0+gite248c1d
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.12.6 (main, Sep 7 2024, 19:24:43) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-44-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
GPU 4: NVIDIA GeForce RTX 4090
GPU 5: NVIDIA GeForce RTX 4090
GPU 6: NVIDIA GeForce RTX 4090
GPU 7: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3995WX 64-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4308.3979
CPU min MHz: 2200.0000
BogoMIPS: 5389.69
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] open_clip_torch==2.26.1
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.0.0+757b6a61e7
[pip3] torch==2.6.0a0+gite248c1d
[pip3] torchaudio==2.5.0a0+b4a286a
[pip3] torchvision==0.20.0a0+945bdad
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu | triaged,actionable | low | Critical |
2,594,403,515 | pytorch | Faster dropout (aten.bernoulli) on cpu | ### 🚀 The feature, motivation and pitch
I use torch 2.3.1 on macOS 15.1 beta.
I profiled my model and observed that `aten::bernoulli` takes 34% of the runtime on CPU, while the dropout time is marginal on the MPS (GPU) device.
Can we optimize the bernoulli computation for CPU?
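A minimal sketch of the kind of comparison described above (this assumes `torch` is installed; the tensor size and dropout rate are illustrative, not taken from the original model):

```python
import torch
from torch.profiler import ProfilerActivity, profile

x = torch.rand(4096, 4096)
drop = torch.nn.Dropout(p=0.5)  # a fresh Module defaults to training mode, so dropout is active

with profile(activities=[ProfilerActivity.CPU]) as prof:
    for _ in range(5):
        drop(x)

# aten::bernoulli_ should show up near the top of the CPU time table
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```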
### Alternatives
_No response_
### Additional context
Here the stats:
cpu using torch.profiler:
````
STAGE:2024-10-17 11:38:42 87880:56463086 ActivityProfilerController.cpp:314] Completed Stage: Warm Up
STAGE:2024-10-17 11:38:42 87880:56463086 ActivityProfilerController.cpp:320] Completed Stage: Collection
STAGE:2024-10-17 11:38:42 87880:56463086 ActivityProfilerController.cpp:324] Completed Stage: Post Processing
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::dropout 0.05% 56.000us 35.83% 44.111ms 6.302ms 7
aten::bernoulli_ 33.98% 41.829ms 33.98% 41.829ms 5.976ms 7
autograd::engine::evaluate_function: IndexBackward0 0.11% 140.000us 14.23% 17.515ms 547.344us 32
IndexBackward0 0.08% 101.000us 14.11% 17.375ms 542.969us 32
aten::_index_put_impl_ 14.00% 17.238ms 14.05% 17.298ms 455.211us 38
````
cpu using cProfile
````
1823 function calls (1767 primitive calls) in 0.117 seconds
Ordered by: cumulative time
List reduced from 168 to 100 due to restriction <100>
ncalls tottime percall cumtime percall filename:lineno(function)
24/2 0.000 0.000 0.069 0.034 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/modules/module.py:1528(_wrapped_call_impl)
24/2 0.000 0.000 0.069 0.034 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/modules/module.py:1534(_call_impl)
1 0.004 0.004 0.069 0.069 /Users/tgg/Github/mlx-graphs-last/AttentiveFP.py:432(forward)
1 0.000 0.000 0.045 0.045 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/_tensor.py:466(backward)
1 0.000 0.000 0.045 0.045 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/autograd/__init__.py:165(backward)
1 0.000 0.000 0.045 0.045 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/autograd/graph.py:739(_engine_run_backward)
1 0.045 0.045 0.045 0.045 {method 'run_backward' of 'torch._C._EngineBase' objects}
7 0.000 0.000 0.041 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/modules/dropout.py:58(forward)
7 0.000 0.000 0.041 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/functional.py:1279(dropout)
7 0.041 0.006 0.041 0.006 {built-in method torch.dropout}
````
mps using cProfile
````
1823 function calls (1767 primitive calls) in 0.093 seconds
Ordered by: cumulative time
List reduced from 168 to 100 due to restriction <100>
ncalls tottime percall cumtime percall filename:lineno(function)
24/2 0.000 0.000 0.080 0.040 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/modules/module.py:1528(_wrapped_call_impl)
24/2 0.000 0.000 0.080 0.040 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/modules/module.py:1534(_call_impl)
1 0.068 0.068 0.080 0.080 /Users/tgg/Github/mlx-graphs-last/AttentiveFP.py:432(forward)
1 0.000 0.000 0.006 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/_tensor.py:466(backward)
1 0.000 0.000 0.006 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/autograd/__init__.py:165(backward)
1 0.000 0.000 0.006 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/autograd/graph.py:739(_engine_run_backward)
1 0.006 0.006 0.006 0.006 {method 'run_backward' of 'torch._C._EngineBase' objects}
1 0.000 0.000 0.006 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/optim/optimizer.py:374(wrapper)
1 0.000 0.000 0.006 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/optim/optimizer.py:58(_use_grad)
1 0.000 0.000 0.006 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/optim/adam.py:135(step)
1 0.000 0.000 0.006 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/optim/adam.py:260(adam)
1 0.001 0.001 0.006 0.006 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/optim/adam.py:338(_single_tensor_adam)
3 0.002 0.001 0.002 0.001 {built-in method torch.stack}
4 0.000 0.000 0.001 0.000 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/modules/rnn.py:1457(forward)
4 0.001 0.000 0.001 0.000 {built-in method torch.gru_cell}
1 0.001 0.001 0.001 0.001 /Users/tgg/Github/mlx-graphs-last/AttentiveFP.py:441(<listcomp>)
3 0.001 0.000 0.001 0.000 {method 'type' of 'torch._C.TensorBase' objects}
1 0.001 0.001 0.001 0.001 /Users/tgg/Github/mlx-graphs-last/AttentiveFP.py:444(<listcomp>)
7 0.000 0.000 0.001 0.000 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/modules/dropout.py:58(forward)
7 0.000 0.000 0.001 0.000 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/functional.py:1279(dropout)
11 0.000 0.000 0.001 0.000 /Users/tgg/miniforge3/envs/mlxgraphenv-py311/lib/python3.11/site-packages/torch/nn/modules/linear.py:115(forward)
7 0.001 0.000 0.001 0.000 {built-in method torch.dropout}
````
cc @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: performance,triaged,module: mps | low | Minor |
2,594,444,341 | pytorch | Crash on Apple MPS (M2) when inverting large matrix | ### 🐛 Describe the bug
Matrix inversion on MPS fails for matrices larger than 1024×1024. Below is a minimal example that includes a working and a non-working case. The non-working case throws the error: ```/AppleInternal/Library/BuildRoots/5a8a3fcc-55cb-11ef-848e-8a553ba56670/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:118: failed assertion `[MPSNDArrayDescriptor sliceDimension:withSubrange:] error: dimension index (2) not within number of dimensions (2)```.
```python
import torch
device = torch.device("mps")
###########################################
############# WORKING #####################
###########################################
n = 1024
# generate random positive semi-definite matrix
A = torch.rand(n, n, device=device)
A = torch.mm(A, A.t())
# invert
print("Before matrix inversion with n=1024")
V_pi = torch.linalg.inv(A)
print("After matrix inversion with n=1024")
###########################################
######### NOT WORKING #####################
###########################################
n = 1025
# generate random positive semi-definite matrix
A = torch.rand(n, n, device=device)
A = torch.mm(A, A.t())
# invert
print("Before matrix inversion with n=1025")
V_pi = torch.linalg.inv(A) # THIS FAILS
print("After matrix inversion with n=1025") # THIS IS NEVER REACHED
```
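A possible interim workaround (an untested sketch, not an official fix) is to route large inversions through the CPU and move the result back to the tensor's original device. Shown here in float64 on CPU only, so it runs anywhere:

```python
import torch

def inv_via_cpu(mat: torch.Tensor) -> torch.Tensor:
    # Hypothetical workaround: do the inversion on CPU, then move the
    # result back to the tensor's original device (e.g. mps).
    return torch.linalg.inv(mat.cpu()).to(mat.device)

n = 1025
A = torch.rand(n, n, dtype=torch.float64)
A = A @ A.T + n * torch.eye(n, dtype=torch.float64)  # well-conditioned SPD
residual = (A @ inv_via_cpu(A) - torch.eye(n, dtype=torch.float64)).abs().max()
print(float(residual))  # should be tiny (near machine precision)
```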
### Versions
Collecting environment information...
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: (Homebrew GCC 11.5.0) 11.5.0
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.20 (main, Oct 3 2024, 02:24:59) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[conda] numpy 2.0.2 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | high priority,module: crash,triaged,module: third_party,module: linear algebra,module: mps | low | Critical |
2,594,474,660 | kubernetes | Support for expanded 'operators' (e.g. In) in field selectors | ### What would you like to be added?
Support for selecting objects using 'set operations' (notably 'In') with field selectors, bringing parity with labelSelectors.
More specifically, I'd like to be able to select over the `metadata.namespace` field, so I can establish multi-namespace watches with consistent RV semantics.
### Why is this needed?
To perform a multi-namespace list/watch today, a user must fetch objects at the cluster-scope and then filter down the response client-side. This is expensive and doesn't scale well as the size of the cluster grows (and various permissions/authZ boundaries are put in place).
Forcing the apiserver to perform this sort of filtering has typically been avoided in the past, as etcd does not natively support something like this, meaning apiservers have to establish a cluster-scoped watch for each client and perform filtering in memory, leading to high amounts of decoding objects from etcd (only to throw them away again after).
Recent work within the apiserver I believe enables us to reduce the impact of these kinds of requests, namely:
* https://github.com/kubernetes/enhancements/issues/2340 (consistent reads from cache)
* https://github.com/kubernetes/enhancements/issues/4568 (resilient watch cache initialization)
Combined, these allow us to ensure we serve list/field selectors from the watch cache, allowing us to optimise and index (similar to how we index certain field selectors and label selectors today).
I believe this feature would complement https://github.com/kubernetes/enhancements/issues/4601 (authorise with selectors) well, as it'd allow end-users to authorise field selector usage as well.
---
For this feature, I think it'll be important to ensure we also index watches and probably also list operations that utilise fieldSelectors, at the very least for `metadata.namespace`. For watches, this is relatively simple as we can add the `cacheWatcher` for the request to the `indexedWatchers` structure multiple times (by generating multiple namespaceName/scope entries): https://github.com/kubernetes/kubernetes/blob/06a15c5cf96131faaf44f93f1be228a013ae5c0d/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go#L135-L163. I've hacked together a quick PoC of the sort of change needed for watches here: https://github.com/munnerz/kubernetes/commit/ae075a0d30906a7218fdc123dd11cc96748bc20d - please do note that this relies on stuffing a `kubernetes.io/metadata.namespace` label into GetAttr funcs to emulate fieldSelector 'In' support, but it demonstrates the performance optimisation technique somewhat :)
Optimising LIST operations is a bit more complex today (as far as I can tell!) due the underlying Store/Indexer implementations in `k8s.io/client-go/tools/cache` not supporting indexed 'set operations' natively. | sig/api-machinery,kind/feature,triage/accepted | medium | Major |
2,594,486,812 | node | `Buffer.concat` and `Buffer.copy` silently produce invalid results when the operation involves indices equal or greater than 2^32 | ### Version
v22.9.0, v23.0.0
### Platform
```text
Windows 11 x64
Microsoft Windows NT 10.0.22631.0 x64
```
### Subsystem
Buffer
### What steps will reproduce the bug?
```js
const largeBuffer = Buffer.alloc(2 ** 32 + 5)
largeBuffer.fill(111)
const result = Buffer.concat([largeBuffer])
console.log(result)
```
### How often does it reproduce? Is there a required condition?
Consistent in `v22.9.0` and `v23.0.0`
### What is the expected behavior? Why is that the expected behavior?
All bytes of the return buffer produced by `Buffer.concat([largeBuffer])` should be identical to the source:
In this example:
```
111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, ....
```
### What do you see instead?
In the returned buffer, first 5 bytes are `111`, and all following ones are 0.
```
111, 111, 111, 111, 111, 0, 0, 0, 0, 0, 0, ....
```
The `console.log(result)` output looks like:
```
<Buffer 6f 6f 6f 6f 6f 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... 4294967251
more bytes>
```
### Additional information
_No response_ | confirmed-bug,help wanted,buffer,regression | medium | Critical |
2,594,530,399 | next.js | Cookies cannot be read in middleware while defined in server actions (since 14 at least) | ### Link to the code that reproduces this issue
https://github.com/ScreamZ/reproduction-app-cookies-middleware
### To Reproduce
1. Start application
2. Click the "click me" button
### Current vs. Expected behavior
Cookies are set in a Server Action, but the middleware is not able to read them!

### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.19
pnpm: 9.12.1
Relevant Packages:
next: 15.0.0-canary.196
eslint-config-next: N/A
react: 19.0.0-rc-77b637d6-20241016
react-dom: 19.0.0-rc-77b637d6-20241016
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
_No response_ | bug,Middleware,Runtime | low | Minor |
2,594,537,398 | vscode | 【BUG】After opening the ipynb file, maximize the command terminal panel, and there will be a line of "space" at the top of the terminal panel. | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version:
```
Version: 1.85.2 (user setup)
Commit: 8b3775030ed1a69b13e4f4c628c612102e30a681
Date: 2024-01-18T06:40:10.514Z
Electron: 25.9.7
ElectronBuildId: 26354273
Chromium: 114.0.5735.289
Node.js: 18.15.0
V8: 11.4.183.29-electron.0
OS: Windows_NT x64 10.0.19045
```
- OS Version: Windows 10 Pro 22H2 19045.5011
Steps to Reproduce:
1. Open a *.ipynb file
2. Click "Maximize Panel Size"
After opening the ipynb file and maximizing the command terminal panel, a line of empty "space" appears at the top of the panel.
1. Open a Jupyter file and pay attention to the position marked by the red line at the top.

2. Then maximize the terminal panel. A gap appears here.

Even if I disable all extensions, it's still the same when opening an ipynb file locally.
| bug,notebook-layout | low | Critical |
2,594,546,008 | Python | Add binary search | ### What would you like to share?
In this issue I want to add a binary search question and answer. Please add the GSSoC-Ext and Hacktoberfest labels.
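For reference, a minimal iterative binary search of the kind being proposed (a sketch, not the repository's final implementation):

```python
def binary_search(sorted_items: list[int], target: int) -> int:
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```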
### Additional information
_No response_ | awaiting triage | low | Major |
2,594,557,217 | langchain | ValueError: `my_func_tool` is not strict. Only `strict` function tools can be auto-parsed (after `openai` upgrade) | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
from enum import Enum

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


class MyEnum(Enum):
    ...


@tool("my_func_tool")
def my_func_tool(s: str, category: MyEnum) -> dict:
    # do some stuff ...
    return {'a': True}


llm = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", formatted_primary_assistant_prompt),
        ("placeholder", "{messages}"),
    ]
)
runnable = prompt | llm.bind_tools([my_func_tool])

state: ...
runnable.invoke(state)
```
### Error Message and Stack Trace (if applicable)
```shell
[2024-10-16 13:25:29,690: ERROR/ForkPoolWorker-8] Task app.main.func[8db70515-f08a-47b7-9ae3-abdc3bd6f6ef] raised unexpected: ValueError('`my_func_tool` is not strict. Only `strict` function tools can be auto-parsed')
Traceback (most recent call last):
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/celery/app/trace.py", line 453, in trace_task
R = retval = fun(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/sentry_sdk/utils.py", line 1720, in runner
return sentry_patched_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/sentry_sdk/integrations/celery/__init__.py", line 406, in _inner
reraise(*exc_info)
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/sentry_sdk/utils.py", line 1649, in reraise
raise value
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/sentry_sdk/integrations/celery/__init__.py", line 401, in _inner
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/celery/app/trace.py", line 736, in __protected_call__
return self.run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/Desktop/projects/work/my_api_v3/app/main.py", line 114, in process_chat_message_task
return asyncio.run(process_chat_message(chat_request_dict))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.pyenv/versions/3.11.4/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/home/shneor/.pyenv/versions/3.11.4/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.pyenv/versions/3.11.4/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/shneor/Desktop/projects/work/my_api_v3/app/main.py", line 270, in process_chat_message
for index, event in enumerate(events):
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1285, in stream
for _ in runner.tick(
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 56, in tick
run_with_retry(t, retry_policy)
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 29, in run_with_retry
task.proc.invoke(task.input, config)
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 410, in invoke
input = context.run(step.invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 184, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/Desktop/projects/work/my_api_v3/app/graph/assistant.py", line 132, in __call__
raise e
File "/home/shneor/Desktop/projects/work/my_api_v3/app/graph/assistant.py", line 96, in __call__
result = runnable.invoke(state)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5354, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 677, in _generate
response = self.root_client.beta.chat.completions.parse(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/openai/resources/beta/chat/completions.py", line 105, in parse
_validate_input_tools(tools)
File "/home/shneor/.cache/pypoetry/virtualenvs/my-api-v3-xF5Uu3dv-py3.11/lib/python3.11/site-packages/openai/lib/_parsing/_completions.py", line 53, in validate_input_tools
raise ValueError(
ValueError: `my_func_tool` is not strict. Only `strict` function tools can be auto-parsed
```
### Description
Hey, I hope this example is enough to understand the issue. I'm not able to make an actual reproducible example because the codebase is massive and there are too many nested function calls.
Basically everything was working fine until I updated the `openai` library from `1.38.0` to `1.40.0`.
I don't understand the `strict` error, as the function has type annotations.
Also, as a disclaimer, I'm pretty new to the langchain/langgraph ecosystem, so I have no idea what's going on.
### System Info
System Information
------------------
> OS: Linux
> OS Version: langchain-ai/langgraph#40~22.04.3-Ubuntu SMP PREEMPT_DYNAMIC Tue Jul 30 17:30:19 UTC 2
> Python Version: 3.11.4 (main, Jun 18 2023, 17:04:26) [GCC 11.3.0]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.135
> langchain_anthropic: 0.2.3
> langchain_groq: 0.2.0
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
> langchain_together: 0.2.0
> langgraph: 0.2.36
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> langgraph-checkpoint: 2.0.1
> langgraph-sdk: 0.1.33
> openai: 1.40.0
> pydantic: 2.9.2
| 🤖:bug | low | Critical |
2,594,574,646 | kubernetes | Unexpected Job Creation After CronJob Schedule Update | ### What happened?
I have a CronJob running on an EKS cluster (1.28) with an initial schedule of `40 23 * * *`.
1. At `23:40 UTC`, a job was successfully completed, taking a few seconds.
2. At around `23:55 UTC`, I changed the CronJob schedule to `50 23 * * *`.
3. At `03:31:03 UTC`, an unexpected job was created.
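The expected behavior after the edit can be checked with a tiny helper for fixed `M H * * *` schedules (a simplification of real cron parsing, for illustration only): after changing the schedule to `50 23 * * *` at 23:55, the next job should come the following day at 23:50, not at 03:31.

```python
from datetime import datetime, timedelta

def next_run(after: datetime, minute: int, hour: int) -> datetime:
    """Next time a fixed 'minute hour * * *' schedule fires strictly after `after`."""
    candidate = after.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= after:
        candidate += timedelta(days=1)
    return candidate

# Schedule edited from "40 23 * * *" to "50 23 * * *" shortly before 23:55 UTC:
edited_at = datetime(2024, 10, 16, 23, 55)
print(next_run(edited_at, minute=50, hour=23))  # 2024-10-17 23:50:00
```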
I was able to reproduce the issue on another EKS cluster (1.28). Since I can't increase the verbosity level in EKS to gather more detailed logs from the controller manager, I decided to reproduce the issue on a local cluster created via Kind. I managed to reproduce the problem on both 1.28 and 1.31.
When testing on local cluster(1.28), I discovered a more interesting scenario. I had two CronJobs `cronjob-test-2` and `cronjob-test-3`
- cronjob-test-2: `49 9 * * *` -> `51 9 * * *`, with the change occurring shortly after `9:51 UTC`
- cronjob-test-3: `35 4 * * *` -> `38 4 * * *`, with the change occurring shortly after `4:38 UTC`
Surprisingly, both CronJobs created an unexpected job at the same time, `22:50:22 UTC`. Additionally, `cronjob-test-2` even missed an expected job:
```
NAME COMPLETIONS DURATION AGE
cronjob-test-2-28816429 1/1 8s 38h # a job was missed after this one
cronjob-test-2-28817871 1/1 6s 110m # 22:50:22 UTC
cronjob-test-3-28817555 1/1 5s 20h
cronjob-test-3-28817558 1/1 4s 110m # 22:50:22 UTC
```
<details>
<summary>Cronjob Controller Logs</summary>
Following is the logs from cronjob_controllerv2 and job_controller, and the verbosity was set as 4.
```
I1015 09:45:19.354428 1 job_controller.go:226] "Starting job controller"
I1015 09:45:19.507521 1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
I1015 09:47:47.961434 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1015 09:47:47.961466 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="1m12.13858604s"
I1015 09:49:00.115477 1 cronjob_controllerv2.go:633] "Created Job" job="default/cronjob-test-2-28816429" cronjob="default/cronjob-test-2"
I1015 09:49:00.115531 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:00.115632 1 job_controller.go:1545] "Too few pods running" key="default/cronjob-test-2-28816429" need=1 creating=1
I1015 09:49:00.119191 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="23h59m59.999440367s"
I1015 09:49:00.119243 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1015 09:49:00.119255 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="23h59m59.980770867s"
I1015 09:49:00.122067 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:00.123710 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:00.124323 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:00.124356 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28816429" elapsed="8.80725ms"
I1015 09:49:00.124368 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1015 09:49:00.124392 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="23h59m59.975645909s"
I1015 09:49:00.127551 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:01.122770 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28816429" elapsed="373.625µs"
I1015 09:49:06.780723 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:07.781300 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28816429" elapsed="113.042µs"
I1015 09:49:07.843924 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:08.784246 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:08.849759 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:08.849794 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1015 09:49:08.849808 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="23h59m51.250220655s"
I1015 09:49:08.852371 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:08.854308 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28816429" elapsed="9.805583ms"
I1015 09:49:08.854309 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28816429"
I1015 09:49:08.854353 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1015 09:49:08.856452 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="23h59m51.245670446s"
I1015 09:49:08.856481 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1015 09:49:08.858155 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="23h59m51.243529696s"
I1015 09:49:08.858215 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1015 09:49:08.858229 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="23h59m51.241795738s"
I1015 09:49:09.850763 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28816429" elapsed="88.25µs"
I1016 04:33:45.800541 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 04:33:45.800579 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="1m14.299487083s"
I1016 04:35:00.107918 1 cronjob_controllerv2.go:633] "Created Job" job="default/cronjob-test-3-28817555" cronjob="default/cronjob-test-3"
I1016 04:35:00.108011 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:00.108159 1 job_controller.go:1545] "Too few pods running" key="default/cronjob-test-3-28817555" need=1 creating=1
I1016 04:35:00.109944 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="23h59m59.999111701s"
I1016 04:35:00.109981 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 04:35:00.109991 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="23h59m59.990031618s"
I1016 04:35:00.110014 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 04:35:00.110039 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="23h59m59.989994201s"
I1016 04:35:00.113206 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:00.115032 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:00.115046 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817555" elapsed="7.007833ms"
I1016 04:35:00.115074 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 04:35:00.115103 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="23h59m59.984938868s"
I1016 04:35:00.115152 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:00.118528 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:01.115214 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817555" elapsed="191.917µs"
I1016 04:35:03.142287 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:04.142862 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817555" elapsed="230.125µs"
I1016 04:35:04.210513 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:05.149604 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:05.218654 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:05.218724 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 04:35:05.218770 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="23h59m54.881306115s"
I1016 04:35:05.221914 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:05.223483 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817555"
I1016 04:35:05.223533 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817555" elapsed="11.988792ms"
I1016 04:35:05.223535 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 04:35:05.225526 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="23h59m54.876482823s"
I1016 04:35:05.225572 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 04:35:05.227006 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="23h59m54.874438157s"
I1016 04:35:05.227037 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 04:35:05.227051 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="23h59m54.87297224s"
I1016 04:35:06.220824 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817555" elapsed="90.458µs"
I1016 22:50:22.080092 1 cronjob_controllerv2.go:633] "Created Job" job="default/cronjob-test-3-28817558" cronjob="default/cronjob-test-3"
I1016 22:50:22.080204 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:22.080190 1 cronjob_controllerv2.go:633] "Created Job" job="default/cronjob-test-2-28817871" cronjob="default/cronjob-test-2"
I1016 22:50:22.080302 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:22.080310 1 job_controller.go:1545] "Too few pods running" key="default/cronjob-test-3-28817558" need=1 creating=1
I1016 22:50:22.080342 1 job_controller.go:1545] "Too few pods running" key="default/cronjob-test-2-28817871" need=1 creating=1
I1016 22:50:22.082913 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="5h47m38.030291112s"
I1016 22:50:22.082947 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 22:50:22.082983 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="5h47m38.017065028s"
I1016 22:50:22.083007 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="11h0m38.030373028s"
I1016 22:50:22.083050 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1016 22:50:22.083069 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="11h0m38.016961653s"
I1016 22:50:22.086221 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:22.086290 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:22.087934 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:22.087979 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 22:50:22.088001 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="5h47m38.012036153s"
I1016 22:50:22.088076 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817558" elapsed="7.858958ms"
I1016 22:50:22.088471 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:22.088499 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1016 22:50:22.088512 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="11h0m38.011512362s"
I1016 22:50:22.088654 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28817871" elapsed="8.343708ms"
I1016 22:50:22.088727 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:22.088770 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:22.092991 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:22.095171 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:23.087955 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817558" elapsed="118.291µs"
I1016 22:50:23.087976 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28817871" elapsed="154.625µs"
I1016 22:50:24.887122 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:25.888042 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817558" elapsed="201.375µs"
I1016 22:50:25.929334 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:26.891926 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:26.894910 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:26.932416 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:26.932463 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 22:50:26.932503 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="5h47m33.167550999s"
I1016 22:50:26.934834 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:26.936763 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-3-28817558"
I1016 22:50:26.936769 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817558" elapsed="6.805541ms"
I1016 22:50:26.936798 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 22:50:26.938716 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="5h47m33.163216499s"
I1016 22:50:26.938804 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 22:50:26.940126 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="5h47m33.161207957s"
I1016 22:50:26.940153 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-3"
I1016 22:50:26.940166 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-3" requeueAfter="5h47m33.15985679s"
I1016 22:50:27.895393 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28817871" elapsed="355.792µs"
I1016 22:50:27.932640 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-3-28817558" elapsed="31.875µs"
I1016 22:50:27.958622 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:28.906520 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:28.963964 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:28.964008 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1016 22:50:28.964022 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="11h0m31.136008123s"
I1016 22:50:28.966974 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:28.968936 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28817871" elapsed="9.14725ms"
I1016 22:50:28.968962 1 job_controller.go:562] "enqueueing job" key="default/cronjob-test-2-28817871"
I1016 22:50:28.968997 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1016 22:50:28.972178 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="11h0m31.131016956s"
I1016 22:50:28.972209 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1016 22:50:28.973757 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="11h0m31.127801331s"
I1016 22:50:28.973821 1 cronjob_controllerv2.go:526] "No unmet start times" cronjob="default/cronjob-test-2"
I1016 22:50:28.973837 1 cronjob_controllerv2.go:219] "Re-queuing cronjob" cronjob="default/cronjob-test-2" requeueAfter="11h0m31.126191373s"
I1016 22:50:29.964871 1 job_controller.go:717] "Finished syncing job" key="default/cronjob-test-2-28817871" elapsed="122.875µs"
```
</details>
### What did you expect to happen?
I expected the CronJobs to execute only according to the updated schedules and not trigger any additional or unscheduled jobs.
### How can we reproduce it (as minimally and precisely as possible)?
1. Create a CronJob with the schedule to run at time `A` every day.
2. Allow the job to complete at time `A`.
3. At a later time `C`, update the schedule to `B`, where `A < B < C` (`A` is the earliest time).
4. Wait for several hours. Based on my tests, an unexpected job will be triggered, sometimes even more than 24 hours after `C`. When this happens after 24 hours, the expected job that should be created at time `C` on the next day is replaced by the unexpected one.
```
kubectl create cronjob cronjob-test --image=busybox --schedule="A.minute A.hour * * *" -- date
# wait until job created at time A, run it at time C
kubectl patch cronjob cronjob-test -p '{"spec": {"schedule": "B.minute B.hour * * *"}}'
```
Unfortunately, I haven't been able to narrow down the creation time of the unexpected job, but this sequence reliably reproduces the issue.
### Anything else we need to know?
Timezone:
```
cat /etc/localtime
TZif2UTCTZif2UTC
UTC0
```
Based on my observations:
- **All the unexpected jobs were triggered at the exact same time**
For example, I had three CronJobs with different schedules. After rescheduling them to a later time, a few hours later the controller manager created all three unexpected jobs simultaneously, even though the CronJobs' original and new schedules all differed from one another.
- **The issue only occurs when the reschedule creates an unmet job schedule.** For example, updating from `49 9 * * *` to `48 9 * * *` works fine, but doing the reverse (moving from an earlier to a later time) causes the problem.
This is similar to the issue mentioned [here](https://github.com/kubernetes/kubernetes/issues/123220#), though in my case, the unexpected jobs can be created either on the same day or the next day.
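To illustrate the unmet-start-time condition for a simple daily `M H * * *` schedule, here is a hypothetical sketch (`most_recent_run` is an invented helper, not the controller's actual `getMostRecentScheduleTime`): moving the schedule from an earlier to a later time while the current time is already past the new time leaves a schedule point that is newer than the last run, which looks like a missed start.

```python
from datetime import datetime, timedelta

def most_recent_run(now, hour, minute):
    """Most recent time a daily 'minute hour * * *' schedule fired at or before now."""
    cand = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if cand > now:
        cand -= timedelta(days=1)
    return cand

# A = 09:48 (old schedule, job already ran), B = 09:49 (new), C = 10:30 (update time)
last_run = datetime(2024, 10, 16, 9, 48)
now = datetime(2024, 10, 16, 10, 30)

# The 09:49 point was never run, so it reads as an unmet start time
unmet = most_recent_run(now, 9, 49) > last_run
print(unmet)
```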
A big thanks to @soltysh for helping me troubleshoot this issue.
### Kubernetes version
<details>
I managed to reproduce the same issue across three Kubernetes clusters with the following versions
```
eks: v1.28.13-eks-a737599
kind: v1.28.12 and v1.31.0
```
</details>
### Cloud provider
<details>
EKS and Kind on local macOS
</details>
| kind/bug,sig/apps,triage/accepted | low | Major |
2,594,582,105 | next.js | Including exclamation mark(" ! ") in assetPrefix compilation will result in an error | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/xenodochial-easley-ddh8vq?workspaceId=6bdce0be-af7a-405b-adec-ac633e9ed70d
### To Reproduce
Add the following config to "next.config.mjs":
```js
output: "export",
assetPrefix: "https://mycdn.com/!mark",
```
If the assetPrefix includes "!", the build fails with the error below.
If "!" is changed to "%21" the build succeeds, but that is not what I need.
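For context, `%21` is the percent-encoding of `!`, and webpack treats a literal `!` in a module request as inline-loader syntax, which likely explains why the request in the error above is split at the `!`. A quick check with Python's stdlib:

```python
from urllib.parse import quote, unquote

# "!" is not in urllib's default safe set, so it encodes to %21
print(quote("!", safe=""))                   # %21
print(unquote("https://mycdn.com/%21mark"))  # https://mycdn.com/!mark
```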
### Current vs. Expected behavior
Current:
```sh
➜ workspace git:(master) ✗ yarn build
yarn run v1.22.19
$ next build
▲ Next.js 15.0.0-canary.196
Creating an optimized production build ...
Failed to compile.
Module not found: Error: Can't resolve 'mark&nextConfigOutput=export&flyingShuttle=false&nextConfigExperimentalUseEarlyImport=&preferredRegion=&middlewareConfig=e30%3D' in '/project/workspace'
Module not found: Error: Can't resolve 'mark&nextConfigOutput=export&flyingShuttle=false&nextConfigExperimentalUseEarlyImport=&preferredRegion=&middlewareConfig=e30%3D' in '/project/workspace'
Module not found: Error: Can't resolve 'mark&nextConfigOutput=export&flyingShuttle=false&nextConfigExperimentalUseEarlyImport=&preferredRegion=&middlewareConfig=e30%3D' in '/project/workspace'
> Build failed because of webpack errors
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.0-canary.196 // Latest available version is detected (15.0.0-canary.196).
eslint-config-next: N/A
react: 19.0.0-rc-77b637d6-20241016
react-dom: 19.0.0-rc-77b637d6-20241016
typescript: 5.3.3
Next.js Config:
output: export
```
### Which area(s) are affected? (Select all that apply)
create-next-app
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
Both 15.0 and 14.x are affected. | create-next-app,bug | low | Critical |
2,594,601,539 | PowerToys | Add Ḱ ḱ | ### Description of the new feature / enhancement
Being able to select
- **ḱ** on [**K**]
- **Ḱ** on [**Shift**] +[**K**]
### Scenario when this would be used?
Transcription of the Proto-Indo-European language.
### Supporting information
https://en.wikipedia.org/wiki/%E1%B8%B0 | Idea-Enhancement,Good first issue,Resolution-Fix Committed,Product-Quick Accent | low | Minor |
2,594,615,110 | ant-design | Table component: with the sticky and summary props set, scroll: { x: 'max-content' } does not take effect | ### Reproduction link
[![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/p/sandbox/ji-ben-yong-fa-antd-5-21-4-forked-znmjgj?workspaceId=73767ffe-8e59-46a5-8f0d-9959add1714c) (editor view: https://codesandbox.io/p/sandbox/ji-ben-yong-fa-antd-5-21-4-forked-tnc9fy?file=%2Fdemo.tsx%3A79%2C9&workspaceId=73767ffe-8e59-46a5-8f0d-9959add1714c)
### Steps to reproduce
Open the CodeSandbox link and click <Button type="primary" onClick={addColumn}>Add Column</Button>.
### What is expected?
After setting the sticky and summary props, I expect the table to adapt automatically when columns are added dynamically, and for the content to be scrollable.
### What is actually happening?
When columns are added dynamically, the table does not adapt to the added columns.
| Environment | Info |
| --- | --- |
| antd | 5.21.4 |
| React | 18 |
| System | macOS |
| Browser | Chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Major |
2,594,678,468 | flutter | NSInternalInconsistencyException(Modifications to the layout engine must not be performed from a background thread after it has been accessed from the main thread.) | ### Steps to reproduce
The crash is occasional. Issue https://github.com/flutter/flutter/issues/135501 indicates the crash should be fixed after 3.16.x, but I still hit it occasionally on Flutter 3.22.2.
### Expected results
NO crash
### Actual results
Crash
### Code sample
No code
### Screenshots or Video
### Logs
crash info
```
Triggered by Thread: 64 io.flutter.2.raster
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Modifications to the layout engine must not be performed from a background thread after it has been accessed from the main thread.'
0 CoreFoundation 0x00000001a0e5cf20 0x00000001a0dd9000 + 540448
1 libobjc.A.dylib 0x0000000198d072b8 objc_exception_throw + 60
2 CoreAutoLayout 0x00000001c2108224 0x00000001c20f6000 + 74276
3 CoreAutoLayout 0x00000001c20fb89c 0x00000001c20f6000 + 22684
4 CoreAutoLayout 0x00000001c20f8208 0x00000001c20f6000 + 8712
5 CoreAutoLayout 0x00000001c20f7f94 0x00000001c20f6000 + 8084
6 UIKitCore 0x00000001a30b7264 0x00000001a305c000 + 373348
7 UIKitCore 0x00000001a306c918 0x00000001a305c000 + 67864
8 QuartzCore 0x00000001a24ca26c 0x00000001a244b000 + 520812
9 QuartzCore 0x00000001a24c9df0 0x00000001a244b000 + 519664
10 QuartzCore 0x00000001a2524fd8 0x00000001a244b000 + 892888
11 QuartzCore 0x00000001a2499ee0 0x00000001a244b000 + 323296
12 QuartzCore 0x00000001a24e3c34 0x00000001a244b000 + 625716
13 CoreFoundation 0x00000001a0dfd658 0x00000001a0dd9000 + 149080
14 CoreFoundation 0x00000001a0dfd414 0x00000001a0dd9000 + 148500
15 CoreFoundation 0x00000001a0e2c54c 0x00000001a0dd9000 + 341324
16 CoreFoundation 0x00000001a0e2bcd8 CFRunLoopRunSpecific + 608
17 Flutter 0x0000000112585fa4 fml::MessageLoopDarwin::Run() (../../flutter/fml/platform/darwin/message_loop_darwin.mm:52)
18 Flutter 0x0000000112585618 __thread_proxy<std::_LIBCPP_ABI_NAMESPACE::tuple<std::_LIBCPP_ABI_NAMESPACE::unique_ptr<std::_LIBCPP_ABI_NAMESPACE::__thread_struct, std::_LIBCPP_ABI_NAMESPACE::default_delete<std::_LIBCPP_ABI_NAMESPACE::__thread_struct> >, (lambda at ../../flutter/fml/thread.cc:79:7)> > (../../flutter/fml/message_loop_impl.cc:0)
19 libsystem_pthread.dylib 0x00000001fd8b606c _pthread_start + 136
20 libsystem_pthread.dylib 0x00000001fd8b10d8 thread_start + 8
```
### Flutter Doctor output
```
Flutter 3.22.2
Framework • revision 761747bfc5 (4 months ago)
Engine • revision edd8546116
Tools • Dart 3.4.3 • DevTools 2.34.3
``` | c: crash,platform-ios,a: production,P2,c: fatal crash,needs repro info,team-ios,triaged-ios | low | Critical |
2,594,697,622 | vscode | nushell doesn't work with debugpy (python debugger) | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Well, I need the Python extension to make it run.
# The issue description (system details at the bottom):
Even though the August release notes say nushell is now supported in VS Code (the announcement is [**here**](https://code.visualstudio.com/updates/v1_93#_terminal)), it still can't be used as the integrated terminal when debugging a Python script: it gets called with a command sequence that is not valid nushell syntax, like this one.
a) Pixi's environment python bad command example:
```
cmd /C "c:\dev\data_processing\.pixi\envs\default\python.exe c:\Users\360ci\.vscode\extensions\ms-python.debugpy-2024.12.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher 54917 -- c:\dev\data_processing\test.py "
```
b) Conda's environment python bad command example:
```
c: && cd c:\dev\sfms && cmd /C "c:\programs\miniconda\envs\data_proc\python.exe c:\Users\360ci\.vscode\extensions\ms-python.debugpy-2024.12.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher 50270 -- c:\dev\sfms\test.py "
```
Screenshot:

It DOES work when you just run the script, because unlike the debug sequence the command contains nothing nushell-incompatible: a plain run forms the line correctly. Here is an example of a command that works.
Conda's good and compatible command when you just run without debugger:
```
C:/programs/miniconda/envs/data_proc/python.exe c:/dev/sfms/test.py
```
@anthonykim1, I saw your latest work on having nushell and julia integrated into VSCode, but I think there is a small chance that I misunderstand how it is expected to work?
The isssue is also discussed here in nushell repo:
https://github.com/nushell/nushell/issues/14022
Here is also one of my comments on this problem with some advices from one of the authors:
https://github.com/nushell/nushell/issues/2775#issuecomment-2412005812
@IanManske pointed out [here](https://github.com/nushell/nushell/issues/14022#issuecomment-2403325882) that the problem is in these lines, where nushell is not supported:
https://github.com/microsoft/vscode/blob/321e1e5b8a0af43a0ee7549713f606936a1ac9ac/src/vs/workbench/contrib/debug/node/terminals.ts#L81
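As a sketch of what a nushell-aware branch in that command builder could look like (`quote_for_nushell` is a hypothetical helper, not the actual `terminals.ts` code): single-quote arguments containing characters nushell parses specially, and invoke the external command with a leading `^`.

```python
def quote_for_nushell(args):
    """Join argv into a single nushell-friendly command line (illustrative only).

    nushell rejects bare `\\`, `&&` and `!` tokens that cmd.exe accepts, so any
    argument containing such characters is single-quoted here. Assumes the
    arguments themselves contain no single quotes.
    """
    special = ' \\&|!;"()'
    parts = []
    for a in args:
        if any(c in special for c in a):
            parts.append("'" + a + "'")
        else:
            parts.append(a)
    return "^" + " ".join(parts)

cmd = quote_for_nushell([
    r"c:\programs\miniconda\envs\data_proc\python.exe",
    r"c:\Users\360ci\.vscode\extensions\ms-python.debugpy\bundled\libs\debugpy\launcher",
    "50270", "--", r"c:\dev\sfms\test.py",
])
print(cmd)
```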
# Details
- VS Code Version: 1.94.2
- OS Version: Win10: 10.0.19045 Build 19045
- Plugins: Python plugin, but no nushell plugin (is it needed?)
Steps to Reproduce:
1. Install nushell in your system.
2. Go to Settings, to to Terminal > Integrated > Default Profile: Windows, set up "path" to installed nushell
3. Have a small python program like ```print("Hello nu in vscode")```
4. Click on the "Run" arrow, selecting "Python Debugger: Debug Python File"
5. Get an error message caused by the ill-formed command, ```Error: × Invalid literal```, complaining about the "\" or "&&" symbols in the string (see the full command it calls above).
2,594,699,295 | tensorflow | tf.linalg.expm fails to support half/float16 data type, which is inconsistent with doc | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
According to the documentation: https://www.tensorflow.org/api_docs/python/tf/linalg/expm
tf.linalg.expm is expected to accept float16 input, but it fails on float16 when actually running the following code.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
import numpy as np
input = tf.constant(np.random.randn(1,1), dtype='float16')
out = tf.linalg.expm(input)
```
### Relevant log output
```shell
tensorflow.python.framework.errors_impl.NotFoundError: Could not find device for node: {{node MatrixSolve}} = MatrixSolve[T=DT_HALF, adjoint=false]
All kernels registered for op MatrixSolve:
device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_HALF]
device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_HALF]
device='GPU'; T in [DT_COMPLEX128]
device='GPU'; T in [DT_COMPLEX64]
device='GPU'; T in [DT_DOUBLE]
device='GPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_COMPLEX128]
device='CPU'; T in [DT_COMPLEX64]
device='CPU'; T in [DT_DOUBLE]
device='CPU'; T in [DT_FLOAT]
[Op:MatrixSolve] name:
```
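A common workaround for a missing half-precision kernel is to upcast to float32, compute, and cast back (in TF terms, wrapping `tf.linalg.expm` in `tf.cast` calls). The same upcast-compute-downcast pattern, sketched with NumPy only and a hypothetical Taylor-series `expm32` helper:

```python
import numpy as np

def expm32(a16, terms=20):
    """Hypothetical helper: float16 matrix exponential computed in float32."""
    a = a16.astype(np.float32)          # upcast
    result = np.eye(a.shape[0], dtype=np.float32)
    term = np.eye(a.shape[0], dtype=np.float32)
    for k in range(1, terms):           # truncated Taylor series of exp(A)
        term = term @ a / k
        result = result + term
    return result.astype(np.float16)    # downcast

x = np.array([[1.0]], dtype=np.float16)
print(expm32(x))  # roughly e = 2.718, rounded to float16
```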
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,594,749,659 | stable-diffusion-webui | [Bug]: AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize' | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I am using this Google Colab notebook https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb which installs the latest version of the webui.
I go to img2img => Inpaint upload, select the X/Y/Z Plot script, set the X type to Prompt S/R, enter a value such as mall,bedroom, and click Generate. The image is generated, but at the end it fails to be displayed, with the error: AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize'
### Steps to reproduce the problem
1. Go to img2img
2. Select Inpaint Upload and upload files
3. Input the prompt and negative prompt
4. Choose the script X/Y/Z Plot
5. For X Type, choose Prompt S/R from the list
6. Input X Type values "mall,bedroom"
7. Click on Generate
### What should have happened?
The generated image should be displayed properly, with no error.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-10-17-13-23.json](https://github.com/user-attachments/files/17413181/sysinfo-2024-10-17-13-23.json)
### Console logs
```Shell
Loading weights [88967f03f2] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/juggernaut_final.safetensors
Creating model from config: /content/gdrive/MyDrive/sd/stable-diffusion-webui/configs/v1-inference.yaml
Running on public URL: https://5427cc1b69b8f33387.gradio.live
✔ Connected
Startup time: 15.1s (import torch: 8.6s, import gradio: 0.8s, setup paths: 0.9s, other imports: 0.5s, load scripts: 0.6s, create ui: 0.8s, gradio launch: 1.6s, add APIs: 1.2s).
Applying attention optimization: xformers... done.
Model loaded in 4.9s (load weights from disk: 1.2s, create model: 0.5s, apply weights to model: 2.1s, load textual inversion embeddings: 0.7s, calculate empty prompt: 0.2s).
100% 20/20 [00:09<00:00, 2.20it/s]
X/Y/Z plot will create 2 images on 1 2x1 grid. (Total steps to process: 40)
100% 20/20 [00:09<00:00, 2.18it/s]
100% 20/20 [00:09<00:00, 2.12it/s]
*** Error completing request
*** Arguments: ('task(n9dp069tm1booho)', <gradio.routes.Request object at 0x7e4bfdc11fc0>, 4, 'Photograph of cinematic photo realistic skin texture, photorealistic, raw portrait photo of 20 year old Ukrainina girl wearing white dress, big breast, neutral, diamond and angular face, grey eyes, straight and high nose, (with long purple pin curly hairstyle:1.5), (blemish pale skin, skin flaws:1.6), (freckles:1.7), 8k, realistic beautiful, gorgeous insanely detailed octane render, 35mgraph, film, bokeh, ultramodern, vibrant, professional, 4k, highly detailed background of mall, front view and side view', '(worst quality, low quality:1.4), (deformed, distorted, disfigured:1.2), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, blurry, amputation. tattoo, watermark, text, black and white photo', [], None, None, None, None, None, <PIL.Image.Image image mode=RGB size=1536x768 at 0x7E4BFDD8A1D0>, <PIL.Image.Image image mode=RGBA size=1536x768 at 0x7E4BFDD8AFB0>, 4, 0, 2, 1, 1, 9, 1.5, 0.95, 0.0, 768, 768, 1, 0, 1, 0, 0, '', '', '', [], False, [], '', 'upload', None, 8, False, 1, 0.5, 4, 0, 0.5, 2, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 7, 'white dress,sport cloth', [], 0, '', [], 0, '', [], True, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/img2img.py", line 240, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 780, in run
processed = script.run(p, *script_args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/xyz_grid.py", line 769, in run
processed = draw_xyz_grid(
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/xyz_grid.py", line 380, in draw_xyz_grid
grid = images.draw_grid_annotations(grid, grid_max_w, grid_max_h, hor_texts, ver_texts, margin_size)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 228, in draw_grid_annotations
draw_texts(d, x, y, hor_texts[col], fnt, fontsize)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 171, in draw_texts
while drawing.multiline_textsize(line.text, font=fnt)[0] > line.allowed_width and fontsize > 0:
AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize'
---
X/Y/Z plot will create 2 images on 1 2x1 grid. (Total steps to process: 40)
100% 20/20 [00:09<00:00, 2.07it/s]
100% 20/20 [00:10<00:00, 2.00it/s]
*** Error completing request
*** Arguments: ('task(fru5mdqk4k0ohne)', <gradio.routes.Request object at 0x7e4bfddd1c90>, 4, 'Photograph of cinematic photo realistic skin texture, photorealistic, raw portrait photo of 20 year old Ukrainina girl wearing white dress, big breast, neutral, diamond and angular face, grey eyes, straight and high nose, (with long purple pin curly hairstyle:1.5), (blemish pale skin, skin flaws:1.6), (freckles:1.7), 8k, realistic beautiful, gorgeous insanely detailed octane render, 35mgraph, film, bokeh, ultramodern, vibrant, professional, 4k, highly detailed background of mall, front view and side view', '(worst quality, low quality:1.4), (deformed, distorted, disfigured:1.2), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, blurry, amputation. tattoo, watermark, text, black and white photo', [], None, None, None, None, None, <PIL.Image.Image image mode=RGB size=1536x768 at 0x7E4BFDDD1FC0>, <PIL.Image.Image image mode=RGBA size=1536x768 at 0x7E4BFDDD1720>, 4, 0, 2, 1, 1, 9, 1.5, 0.95, 0.0, 768, 768, 1, 0, 1, 0, 0, '', '', '', [], False, [], '', 'upload', None, 8, False, 1, 0.5, 4, 0, 0.5, 2, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 7, 'white dress,sport cloth', [], 0, '', [], 0, '', [], True, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/img2img.py", line 240, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 780, in run
processed = script.run(p, *script_args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/xyz_grid.py", line 769, in run
processed = draw_xyz_grid(
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/xyz_grid.py", line 380, in draw_xyz_grid
grid = images.draw_grid_annotations(grid, grid_max_w, grid_max_h, hor_texts, ver_texts, margin_size)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 228, in draw_grid_annotations
draw_texts(d, x, y, hor_texts[col], fnt, fontsize)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 171, in draw_texts
while drawing.multiline_textsize(line.text, font=fnt)[0] > line.allowed_width and fontsize > 0:
AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize'
```
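The `AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize'` above comes from Pillow 10, which removed the long-deprecated `ImageDraw.multiline_textsize`; `multiline_textbbox` is its replacement. A minimal compatibility shim could look like this (a sketch only — `multiline_textsize_compat` is a hypothetical helper name, not part of the webui code):

```python
def multiline_textsize_compat(drawing, text, font=None):
    """Return (width, height) of multiline text on both old and new Pillow.

    Pillow 10 removed ImageDraw.multiline_textsize, so fall back to
    multiline_textbbox when the old method is missing.
    """
    if hasattr(drawing, "multiline_textsize"):
        return drawing.multiline_textsize(text, font=font)
    left, top, right, bottom = drawing.multiline_textbbox((0, 0), text, font=font)
    return right - left, bottom - top
```

The failing comparison in `modules/images.py` could then call `multiline_textsize_compat(drawing, line.text, font=fnt)[0]` instead of `drawing.multiline_textsize(...)[0]`.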
### Additional information
_No response_ | bug-report | low | Critical |
2,594,751,317 | bitcoin | Mining Interface doesn't allow for Bitcoin Core to create blocks when it wants | Finally getting around to reviewing the mining interface, and sadly it's missing some critical features that a new mining protocol should have. Specifically, one of the key goals of replacing `getblocktemplate` is that Bitcoin Core should be able to push work to clients at opportune times. This includes the ability to create a new block in between validating a new block and updating the mempool for a full CNB run. Sadly, forcing the client of the interface to explicitly call CNB makes this impossible. | Mining | medium | Major |
2,594,772,309 | three.js | CCDIKSolver constraint limits | ### Description
When using CCDIKSolver with GLTF models, the constraints only work well when they stay within the interval [-π, π] (-180° and 180°).
My problem arises when a bone already has an offset rotation, like π by default (this seems pretty common with the Mixamo models I found on the internet).
For example, in the case of a -π offset on the bone, if we want a -45°/+45° amplitude constraint on the x axis, we should have:
```
{
target: 10,
effector: 9,
links: [
{
index: 8,
rotationMin: new Vector3(-Math.PI - Math.PI / 4, 0, 0), // offset - amplitude
rotationMax: new Vector3(-Math.PI + Math.PI / 4, 0, 0) // offset + amplitude
}
]
},
```
-Math.PI - Math.PI/4 is equivalent to -225°
-Math.PI + Math.PI/4 is equivalent to -135°
But in this case, as the values exceed the interval [-π, π], the constraint doesn't work properly.
I've tried to invert rotationMin and rotationMax, but it doesn't work.
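One way to see the failure (sketched in Python for brevity; `wrap_to_pi` is a hypothetical helper, not part of three.js): normalizing the limits into [-π, π) sends the lower limit -225° to +135°, which is greater than the upper limit -135°, so the constraint interval straddles the ±π seam and a simple min/max clamp can no longer represent it:

```python
import math

def wrap_to_pi(angle):
    """Wrap an angle in radians into the half-open interval [-pi, pi)."""
    return (angle + math.pi) % (2.0 * math.pi) - math.pi

lo = wrap_to_pi(-math.pi - math.pi / 4)  # -225 deg wraps to about +135 deg
hi = wrap_to_pi(-math.pi + math.pi / 4)  # -135 deg stays about -135 deg
print(math.degrees(lo), math.degrees(hi))  # lo > hi: the interval wraps around +/-pi
```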
Is this a known limitation of CCDIKSolver?
### Reproduction steps
1. Load a model with a default bone rotation near the limits of the interval [-π, π]
2. Add IK constraint
### Code
```js
// code goes here
```
### Live example
https://codesandbox.io/p/sandbox/zm9zjk?file=%2Fsrc%2FApp.js
### Screenshots
_No response_
### Version
r169
### Device
_No response_
### Browser
_No response_
### OS
_No response_ | Addons | low | Minor |
2,594,790,421 | pytorch | NaN value produced by F.conv2d (cuda) with large values in input | ### 🐛 Describe the bug
The result values of this simple convolution operation are NaN starting with PyTorch `2.4.0`. The values of the input tensor need to be big enough (e.g. 4e36, used below), and the number of channels large enough (>= 28 in my test, a somewhat strange number) to trigger this bug. I haven't tested whether other convolution operations suffer from the same bug.
``` python
import torch
from torch.nn import functional as F
print(torch.__version__)
input = torch.full((1, 28, 7, 7), 4e36, device='cuda')
weight = torch.zeros((1, 28, 3, 3), device='cuda')
out = F.conv2d(input, weight)
print(out)
```
Output:
```
2.4.1
tensor([[[[nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan]]]], device='cuda:0')
2.3.1
tensor([[[[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]]]], device='cuda:0')
```
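One plausible numeric mechanism — an assumption about the kernel internals, not a confirmed diagnosis — is float32 overflow in an intermediate accumulation: a partial sum over the 3×3×28 = 252 input taps of magnitude 4e36 exceeds the float32 maximum (~3.4e38) and becomes inf, after which multiplying by the zero weights gives inf * 0 = NaN. The magnitudes are easy to check:

```python
import math

F32_MAX = (2 - 2 ** -23) * 2 ** 127  # largest finite float32, ~3.4028e38
value = 4e36                         # input magnitude from the repro
taps = 3 * 3 * 28                    # kernel area * input channels

# A naive partial sum of the inputs alone would overflow float32 ...
assert value * taps > F32_MAX

# ... and once an intermediate is inf, multiplying by a zero weight is NaN.
print(math.isnan(float("inf") * 0.0))  # True
```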
### Versions
```
Collecting environment information...
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: 20.0.0 (++20241014081629+d4ea08687f2d-1~exp1~20241014081816.474)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 20%
CPU max MHz: 5200.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] blas 1.0 mkl defaults
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344 defaults
[conda] mkl-service 2.4.0 py311h5eee18b_1 defaults
[conda] mkl_fft 1.3.10 py311h5eee18b_0 defaults
[conda] mkl_random 1.2.7 py311ha02d727_0 defaults
[conda] numpy 2.0.1 py311h08b1b3b_1 defaults
[conda] numpy-base 2.0.1 py311hf175353_1 defaults
[conda] pytorch 2.4.1 py3.11_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.1 py311_cu121 pytorch
[conda] torchtriton 3.0.0 py311 pytorch
[conda] torchvision 0.19.1 py311_cu121 pytorch
```
cc @ptrblck @msaroufim | needs reproduction,module: cuda,triaged,module: NaNs and Infs | low | Critical |
2,594,796,926 | pytorch | `.all()` on `float32` CUDA tensors causes "illegal memory access" exception with PyTorch 2.5 | ### 🐛 Describe the bug
# Summary
Using `Tensor.all()` on CUDA tensors with `float32` dtype causes an "illegal memory access" exception after upgrading to PyTorch 2.5 in combination with Python 3.12 (does not happen with 3.10 and 3.11).
# Code to reproduce
```python
import torch
x = torch.ones(4, device="cuda", dtype=torch.float32)
x.all().cpu()
```
# Exception
```pytb
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 2
1 x = torch.ones(4, device="cuda", dtype=torch.float32)
----> 2 x.all().cpu()
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
# Notes
- Only happens with `float32` dtype. `float64` and `float16` are fine.
- `Tensor.any()` works fine with `float32` CUDA tensors.
- Does not happen with CPU tensors.
- Does not happen with PyTorch 2.4 with same CUDA and cuDNN versions (12.1 and 9.1.0).
- Does not happen with Python 3.10 and 3.11 but with 3.12. Other versions not tested.
### Versions
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: 12.1
cuDNN version: 9.1.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-193-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro P620
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 545.23.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 33
Model name: AMD Ryzen 9 5900X 12-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 3079.564
CPU max MHz: 3700.0000
CPU min MHz: 2200.0000
BogoMIPS: 7386.59
Virtualization: AMD-V
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 6 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.10 py312h5eee18b_0
[conda] mkl_random 1.2.7 py312h526ad5a_0
[conda] numpy 2.1.1 py312hc5e2394_0
[conda] numpy-base 2.1.1 py312h0da6c21_0
[conda] pytorch 2.5.0 py3.12_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.0 py312_cu121 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.0 py312_cu121 pytorch
cc @ptrblck @msaroufim | needs reproduction,module: cuda,triaged | low | Critical |
2,594,813,540 | pytorch | [fx] make_fx with tracing_mode="real" errors with 'PythonKeyTracer' object has no attribute 'graph' | ### 🐛 Describe the bug
```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx
def foo(x):
return x + 1
g = make_fx(foo)(torch.randn(3,))
```
Error
```python
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2060, in _trace_inner
t = dispatch_trace(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_compile.py", line 27, in inner
import torch._dynamo
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/__init__.py", line 3, in <module>
from . import convert_frame, eval_frame, resume_execution
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 55, in <module>
from . import config, exc, trace_rules
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/trace_rules.py", line 46, in <module>
from .variables import (
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/__init__.py", line 108, in <module>
from .torch import TorchCtxManagerClassVariable, TorchInGraphFunctionVariable
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/torch.py", line 19, in <module>
from ..codegen import PyCodegen
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/codegen.py", line 34, in <module>
from .variables.torch_function import TensorWithTFOverrideVariable
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/torch_function.py", line 185, in <module>
populate_builtin_to_tensor_fn_map()
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/torch_function.py", line 146, in populate_builtin_to_tensor_fn_map
inp0 = torch.ones(1)
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1240, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1333, in __torch_dispatch__
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 907, in proxy_call
name=proxy_mode.tracer.graph._target_to_str(func.overloadpacket.__name__),
^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PythonKeyTracer' object has no attribute 'graph'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/pytorch/lightning-thunder/test.py", line 9, in <module>
g = make_fx(foo)(torch.randn(3,))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2151, in wrapped
return make_fx_tracer.trace(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2089, in trace
return self._trace_inner(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2066, in _trace_inner
trace_structured(
File "/usr/local/lib/python3.12/dist-packages/torch/_logging/_internal.py", line 1179, in trace_structured
payload = payload_fn()
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2072, in <lambda>
payload_fn=lambda: self.fx_tracer.graph.python_code( # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PythonKeyTracer' object has no attribute 'graph'
```
### Versions
main f4158558aa5cfe504639ee9f7f73acf1d002f273
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,module: fx,oncall: pt2,module: ProxyTensor,module: pt2-dispatcher | low | Critical |
2,594,826,274 | opencv | OpenCL pow with integer order fails on kernel compilation stage | ### System Information
Platform: Linux + NVIDIA OpenCL (GF 1080)
Become visible after https://github.com/opencv/opencv/pull/26061
### Detailed description
```
[ RUN ] OCL_PowFixture_iPow.iPow/0, where GetParam() = (640x480, 8UC1)
1 error generated.
OpenCL program build log: core/arithm
Status -11: CL_BUILD_PROGRAM_FAILURE
-D dstT=uchar -D DEPTH_dst=0 -D rowsPerWI=1 -D OP_POWN -D UNARY_OP -D DOUBLE_SUPPORT
<kernel>:422:1: error: call to 'pown' is ambiguous
PROCESS_ELEM;
^~~~~~~~~~~~
<kernel>:226:31: note: expanded from macro 'PROCESS_ELEM'
#define PROCESS_ELEM storedst(pown(srcelem1, srcelem2))
^~~~
<kernel>:48:64: note: expanded from macro 'storedst'
#define storedst(val) *(__global dstT *)(dstptr + dst_index) = val
^~~
cl_kernel.h:3546:25: note: candidate function
double __OVERLOADABLE__ pown(double a, int b);
^
cl_kernel.h:6311:24: note: candidate function
float __OVERLOADABLE__ pown(float a, int b);
^
cl_kernel.h:3538:26: note: candidate function
float2 __OVERLOADABLE__ pown(float2 x, int2 a) ;
^
cl_kernel.h:3540:26: note: candidate function
float3 __OVERLOADABLE__ pown(float3 x, int3 a) ;
^
cl_kernel.h:3542:26: note: candidate function
float4 __OVERLOADABLE__ pown(float4 x, int4 a) ;
^
cl_kernel.h:3543:26: note: candidate function
float8 __OVERLOADABLE__ pown(float8 x, int8 a) ;
^
cl_kernel.h:3544:26: note: candidate function
float16 __OVERLOADABLE__ pown(float16 x, int16 a) ;
^
cl_kernel.h:3547:27: note: candidate function
double2 __OVERLOADABLE__ pown(double2 x, int2 a) ;
^
cl_kernel.h:3549:27: note: candidate function
double3 __OVERLOADABLE__ pown(double3 x, int3 a) ;
^
cl_kernel.h:3551:27: note: candidate function
double4 __OVERLOADABLE__ pown(double4 x, int4 a) ;
^
cl_kernel.h:3552:27: note: candidate function
double8 __OVERLOADABLE__ pown(double8 x, int8 a) ;
^
cl_kernel.h:3553:27: note: candidate function
double16 __OVERLOADABLE__ pown(double16 x, int16 a) ;
^
[ PERFSTAT ] (samples=100 mean=1.07 median=1.06 min=1.00 stddev=0.04 (4.0%))
[ OK ] OCL_PowFixture_iPow.iPow/0 (129 ms)
```
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [ ] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: core,category: ocl | low | Critical |
2,594,854,346 | opencv | estimateAffinePartial2D without scaling | ### Describe the feature and motivation
I use the [estimateAffinePartial2D](https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#gad767faff73e9cbd8b9d92b955b50062d) function to stabilize a set of images from a fixed camera dedicated to coastline monitoring. The camera zoom is fixed. There are small camera movements due to, for example, thermal expansion of the structure on which the camera is installed.
The stabilization result with estimateAffinePartial2D is quite good, but it could be much better if the scaling factor were not computed and instead left at 1, as that would better suit the camera configuration.
It would be great to have a boolean parameter in estimateAffinePartial2D with which the user can specify whether the scaling factor should be computed. Alternatively, it might be simpler to implement another function, called for example 'estimateRigidBody2D', that would compute a rigid-body (translation + rotation) transformation from matching points.
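For reference, the rigid-body least-squares fit the proposal asks for has a closed form in 2D (the 2D Kabsch solution). The sketch below is only an illustration — `estimate_rigid_2d` is a hypothetical name, not an existing OpenCV API:

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares rigid (rotation + translation, NO scaling) transform
    mapping src onto dst. src/dst are equal-length lists of (x, y) pairs.
    Returns (theta, tx, ty) such that dst ~= R(theta) @ p + t for p in src."""
    n = len(src)
    # Centroids of both point sets.
    sx = sum(p[0] for p in src) / n
    sy = sum(p[1] for p in src) / n
    dx = sum(q[0] for q in dst) / n
    dy = sum(q[1] for q in dst) / n
    num = den = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        px, py, qx, qy = px - sx, py - sy, qx - dx, qy - dy
        num += px * qy - py * qx  # sum of 2D cross products
        den += px * qx + py * qy  # sum of dot products
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the src centroid onto the dst centroid.
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return theta, tx, ty
```

In practice one would feed it the matched keypoints (e.g. from `cv2.goodFeaturesToTrack` + optical flow) instead of letting `estimateAffinePartial2D` also fit a scale.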
### Additional context
_No response_ | feature | low | Minor |
2,594,911,974 | pytorch | Dynamo x autograd.Function: leverage compiled autograd on graph break in backwards | There are four quadrants (see table below); we're talking about quadrant III. When we are torch.compiling an autograd.Function, we trace the forward and backward into subgraphs. If the forward can be traced into a subgraph, but the backward cannot, we graph break on the forward.
With compiled autograd in the loop, we don't actually need to graph break on the forward: we can put a black box into the backward graph and rely on compiled autograd to read it, like how we handle backward hooks today.
| | forward safe | forward unsafe |
| -- | -- | -- |
| backward safe | II) HOP capture | I) graph break in forward |
| backward unsafe | III) (this issue!) add a compiled autograd integration | IV) graph break in forward |
cc @ezyang @chauhang @penguinwu @xmfan @yf225 | triaged,oncall: pt2,module: compiled autograd,dynamo-autograd-function | low | Minor |
2,594,920,904 | ollama | Pull Private Huggingface Model | Hi, so I believe it's now possible to pull Hugging Face models directly by prepending hf.co to the pull statement. I would just like to get clarity on how this works with private models. I have my Hugging Face token set as an environment variable, but I can't seem to pull a private model. | feature request | low | Minor |
2,594,971,927 | pytorch | [PT2] Don't graph break on loss.backward() | @vmoens mentioned this recently in a talk on TorchRL pain points but I didn't see an existing issue.
We can actually support loss.backward() in Dynamo. There are a couple of cases (ranging from easy to difficult):
1) If the only differentiable inputs to a torch.compile region are graph leaves, then we can handle all of this inside Dynamo: allow_in_graph some call to compute gradients in the graph and Dynamo can update the .grad field of the inputs. This essentially requires the full model to be compilable.
2) If any inputs are differentiable and not leaves: maybe we can do something with compiled autograd here. What if we could trigger compiled autograd and then inline the graph we get from that into the graph that Dynamo is producing?
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @xmfan @yf225 | triaged,oncall: pt2,module: dynamo,module: compiled autograd | low | Minor |
2,594,995,513 | deno | `util.inspect` behaves differently from Node.js when using proxies | Version: Deno 2.0.0
The following code throws in Deno, while it returns a string in Node.js
```js
import util from "node:util"
util.inspect(new Proxy({ x: 1 }, { ownKeys: () => undefined }))
```
| bug,ext/console | low | Minor |
2,595,003,551 | langchain | Context failed to generate pydantic json schema, so chains containing Context steps not working in LangServe | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_core.beta.runnables.context import Context, ContextSet, ContextGet
x = ContextSet('question')
x.config_schema().model_json_schema()
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
PydanticInvalidForJsonSchema Traceback (most recent call last)
Cell In[40], line 2
1 x = ContextSet('question')
----> 2 x.config_schema().model_json_schema()
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\main.py:476, in BaseModel.model_json_schema(cls, by_alias, ref_template, schema_generator, mode)
456 @classmethod
457 def model_json_schema(
458 cls,
(...)
462 mode: JsonSchemaMode = 'validation',
463 ) -> dict[str, Any]:
464 """Generates a JSON schema for a model class.
465
466 Args:
(...)
474 The JSON schema for the given model class.
475 """
--> 476 return model_json_schema(
477 cls, by_alias=by_alias, ref_template=ref_template, schema_generator=schema_generator, mode=mode
478 )
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:2280, in model_json_schema(cls, by_alias, ref_template, schema_generator, mode)
2277 raise AttributeError('model_json_schema() must be called on a subclass of BaseModel, not BaseModel itself.')
2279 assert not isinstance(cls.__pydantic_core_schema__, _mock_val_ser.MockCoreSchema), 'this is a bug! please report it'
-> 2280 return schema_generator_instance.generate(cls.__pydantic_core_schema__, mode=mode)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:415, in GenerateJsonSchema.generate(self, schema, mode)
408 if self._used:
409 raise PydanticUserError(
410 'This JSON schema generator has already been used to generate a JSON schema. '
411 f'You must create a new instance of {type(self).__name__} to generate a new JSON schema.',
412 code='json-schema-already-used',
413 )
--> 415 json_schema: JsonSchemaValue = self.generate_inner(schema)
416 json_ref_counts = self.get_json_ref_counts(json_schema)
418 ref = cast(JsonRef, json_schema.get('$ref'))
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:527, in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function)
522 def new_handler_func(
523 schema_or_field: CoreSchemaOrField,
524 current_handler: GetJsonSchemaHandler = current_handler,
525 js_modify_function: GetJsonSchemaFunction = js_modify_function,
526 ) -> JsonSchemaValue:
--> 527 json_schema = js_modify_function(schema_or_field, current_handler)
528 if _core_utils.is_core_schema(schema_or_field):
529 json_schema = populate_defs(schema_or_field, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\main.py:697, in BaseModel.__get_pydantic_json_schema__(cls, core_schema, handler)
673 @classmethod
674 def __get_pydantic_json_schema__(
675 cls,
(...)
678 /,
679 ) -> JsonSchemaValue:
680 """Hook into generating the model's JSON schema.
681
682 Args:
(...)
695 A JSON schema, as a Python object.
696 """
--> 697 return handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:527, in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function)
522 def new_handler_func(
523 schema_or_field: CoreSchemaOrField,
524 current_handler: GetJsonSchemaHandler = current_handler,
525 js_modify_function: GetJsonSchemaFunction = js_modify_function,
526 ) -> JsonSchemaValue:
--> 527 json_schema = js_modify_function(schema_or_field, current_handler)
528 if _core_utils.is_core_schema(schema_or_field):
529 json_schema = populate_defs(schema_or_field, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_generate_schema.py:272, in modify_model_json_schema(schema_or_field, handler, cls, title)
268 from ._dataclasses import is_builtin_dataclass
270 BaseModel = import_cached_base_model()
--> 272 json_schema = handler(schema_or_field)
273 original_schema = handler.resolve_ref_schema(json_schema)
274 if title is not None:
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1415, in GenerateJsonSchema.model_schema(self, schema)
1412 title = config.get('title')
1414 with self._config_wrapper_stack.push(config):
-> 1415 json_schema = self.generate_inner(schema['schema'])
1417 json_schema_extra = config.get('json_schema_extra')
1418 if cls.__pydantic_root_model__:
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1510, in GenerateJsonSchema.model_fields_schema(self, schema)
1508 if self.mode == 'serialization':
1509 named_required_fields.extend(self._name_required_computed_fields(schema.get('computed_fields', [])))
-> 1510 json_schema = self._named_required_fields_schema(named_required_fields)
1511 extras_schema = schema.get('extras_schema', None)
1512 if extras_schema is not None:
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1318, in GenerateJsonSchema._named_required_fields_schema(self, named_required_fields)
1316 name = self._get_alias_name(field, name)
1317 try:
-> 1318 field_json_schema = self.generate_inner(field).copy()
1319 except PydanticOmit:
1320 continue
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:545, in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function)
540 def new_handler_func(
541 schema_or_field: CoreSchemaOrField,
542 current_handler: GetJsonSchemaHandler = current_handler,
543 js_modify_function: GetJsonSchemaFunction = js_modify_function,
544 ) -> JsonSchemaValue:
--> 545 json_schema = js_modify_function(schema_or_field, current_handler)
546 if _core_utils.is_core_schema(schema_or_field):
547 json_schema = populate_defs(schema_or_field, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_generate_schema.py:2469, in get_json_schema_update_func.<locals>.json_schema_update_func(core_schema_or_field, handler)
2466 def json_schema_update_func(
2467 core_schema_or_field: CoreSchemaOrField, handler: GetJsonSchemaHandler
2468 ) -> JsonSchemaValue:
-> 2469 json_schema = {**handler(core_schema_or_field), **json_schema_update}
2470 add_json_schema_extra(json_schema, json_schema_extra)
2471 return json_schema
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1386, in GenerateJsonSchema.model_field_schema(self, schema)
1377 def model_field_schema(self, schema: core_schema.ModelField) -> JsonSchemaValue:
1378 """Generates a JSON schema that matches a schema that defines a model field.
1379
1380 Args:
(...)
1384 The generated JSON schema.
1385 """
-> 1386 return self.generate_inner(schema['schema'])
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1042, in GenerateJsonSchema.default_schema(self, schema)
1033 def default_schema(self, schema: core_schema.WithDefaultSchema) -> JsonSchemaValue:
1034 """Generates a JSON schema that matches a schema with a default value.
1035
1036 Args:
(...)
1040 The generated JSON schema.
1041 """
-> 1042 json_schema = self.generate_inner(schema['schema'])
1044 if 'default' not in schema:
1045 return json_schema
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:527, in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function)
522 def new_handler_func(
523 schema_or_field: CoreSchemaOrField,
524 current_handler: GetJsonSchemaHandler = current_handler,
525 js_modify_function: GetJsonSchemaFunction = js_modify_function,
526 ) -> JsonSchemaValue:
--> 527 json_schema = js_modify_function(schema_or_field, current_handler)
528 if _core_utils.is_core_schema(schema_or_field):
529 json_schema = populate_defs(schema_or_field, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\main.py:697, in BaseModel.__get_pydantic_json_schema__(cls, core_schema, handler)
673 @classmethod
674 def __get_pydantic_json_schema__(
675 cls,
(...)
678 /,
679 ) -> JsonSchemaValue:
680 """Hook into generating the model's JSON schema.
681
682 Args:
(...)
695 A JSON schema, as a Python object.
696 """
--> 697 return handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:527, in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function)
522 def new_handler_func(
523 schema_or_field: CoreSchemaOrField,
524 current_handler: GetJsonSchemaHandler = current_handler,
525 js_modify_function: GetJsonSchemaFunction = js_modify_function,
526 ) -> JsonSchemaValue:
--> 527 json_schema = js_modify_function(schema_or_field, current_handler)
528 if _core_utils.is_core_schema(schema_or_field):
529 json_schema = populate_defs(schema_or_field, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_generate_schema.py:272, in modify_model_json_schema(schema_or_field, handler, cls, title)
268 from ._dataclasses import is_builtin_dataclass
270 BaseModel = import_cached_base_model()
--> 272 json_schema = handler(schema_or_field)
273 original_schema = handler.resolve_ref_schema(json_schema)
274 if title is not None:
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1415, in GenerateJsonSchema.model_schema(self, schema)
1412 title = config.get('title')
1414 with self._config_wrapper_stack.push(config):
-> 1415 json_schema = self.generate_inner(schema['schema'])
1417 json_schema_extra = config.get('json_schema_extra')
1418 if cls.__pydantic_root_model__:
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1510, in GenerateJsonSchema.model_fields_schema(self, schema)
1508 if self.mode == 'serialization':
1509 named_required_fields.extend(self._name_required_computed_fields(schema.get('computed_fields', [])))
-> 1510 json_schema = self._named_required_fields_schema(named_required_fields)
1511 extras_schema = schema.get('extras_schema', None)
1512 if extras_schema is not None:
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1318, in GenerateJsonSchema._named_required_fields_schema(self, named_required_fields)
1316 name = self._get_alias_name(field, name)
1317 try:
-> 1318 field_json_schema = self.generate_inner(field).copy()
1319 except PydanticOmit:
1320 continue
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:545, in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function)
540 def new_handler_func(
541 schema_or_field: CoreSchemaOrField,
542 current_handler: GetJsonSchemaHandler = current_handler,
543 js_modify_function: GetJsonSchemaFunction = js_modify_function,
544 ) -> JsonSchemaValue:
--> 545 json_schema = js_modify_function(schema_or_field, current_handler)
546 if _core_utils.is_core_schema(schema_or_field):
547 json_schema = populate_defs(schema_or_field, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_generate_schema.py:2469, in get_json_schema_update_func.<locals>.json_schema_update_func(core_schema_or_field, handler)
2466 def json_schema_update_func(
2467 core_schema_or_field: CoreSchemaOrField, handler: GetJsonSchemaHandler
2468 ) -> JsonSchemaValue:
-> 2469 json_schema = {**handler(core_schema_or_field), **json_schema_update}
2470 add_json_schema_extra(json_schema, json_schema_extra)
2471 return json_schema
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1386, in GenerateJsonSchema.model_field_schema(self, schema)
1377 def model_field_schema(self, schema: core_schema.ModelField) -> JsonSchemaValue:
1378 """Generates a JSON schema that matches a schema that defines a model field.
1379
1380 Args:
(...)
1384 The generated JSON schema.
1385 """
-> 1386 return self.generate_inner(schema['schema'])
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:1042, in GenerateJsonSchema.default_schema(self, schema)
1033 def default_schema(self, schema: core_schema.WithDefaultSchema) -> JsonSchemaValue:
1034 """Generates a JSON schema that matches a schema with a default value.
1035
1036 Args:
(...)
1040 The generated JSON schema.
1041 """
-> 1042 json_schema = self.generate_inner(schema['schema'])
1044 if 'default' not in schema:
1045 return json_schema
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
548 return json_schema
550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
553 if _core_utils.is_core_schema(schema):
554 json_schema = populate_defs(schema, json_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py:37, in GenerateJsonSchemaHandler.__call__(self, core_schema)
36 def __call__(self, core_schema: CoreSchemaOrField, /) -> JsonSchemaValue:
---> 37 return self.handler(core_schema)
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:511, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
509 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
510 generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 511 json_schema = generate_for_schema_type(schema_or_field)
512 else:
513 raise TypeError(f'Unexpected schema type: schema={schema_or_field}')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:835, in GenerateJsonSchema.callable_schema(self, schema)
824 def callable_schema(self, schema: core_schema.CallableSchema) -> JsonSchemaValue:
825 """Generates a JSON schema that matches a callable value.
826
827 Unless overridden in a subclass, this raises an error.
(...)
833 The generated JSON schema.
834 """
--> 835 return self.handle_invalid_for_json_schema(schema, 'core_schema.CallableSchema')
File C:\Projects\AI\ll-rag\.venv\lib\site-packages\pydantic\json_schema.py:2185, in GenerateJsonSchema.handle_invalid_for_json_schema(self, schema, error_info)
2184 def handle_invalid_for_json_schema(self, schema: CoreSchemaOrField, error_info: str) -> JsonSchemaValue:
-> 2185 raise PydanticInvalidForJsonSchema(f'Cannot generate a JsonSchema for {error_info}')
PydanticInvalidForJsonSchema: Cannot generate a JsonSchema for core_schema.CallableSchema
For further information visit https://errors.pydantic.dev/2.9/u/invalid-for-json-schema
### Description
The code produces the error `PydanticInvalidForJsonSchema: Cannot generate a JsonSchema for core_schema.CallableSchema`.
So when I try to use Context in my chain with LangServe, it raises this error when I try to open the playground.
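For context, the root cause is that a callable has no JSON representation at all, so the schema generator has nothing to emit for it. A minimal stdlib-only illustration (no pydantic or LangChain required; the function name here is made up for the sketch):

```python
import json

def runnable(x):
    # stands in for the callable that ends up inside the chain's model
    return x

# json itself cannot represent a function value -- the same limitation
# pydantic hits when it reaches core_schema.CallableSchema:
try:
    json.dumps({"fn": runnable})
except TypeError as err:
    print(type(err).__name__)  # TypeError
```

If the callable field is not actually needed in the schema, pydantic v2 provides ways to exclude it (e.g. annotating the field with `pydantic.json_schema.SkipJsonSchema`), though whether that is viable here depends on how LangServe builds the playground schema.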
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.8
> langchain: 0.3.1
> langchain_community: 0.3.1
> langsmith: 0.1.130
> langchain_chroma: 0.1.4
> langchain_experimental: 0.3.2
> langchain_openai: 0.2.1
> langchain_text_splitters: 0.3.0
> langchainhub: 0.1.21
> langserve: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
Other Dependencies
------------------
> aiohttp: 3.10.8
> async-timeout: 4.0.3
> chromadb: 0.5.11
> dataclasses-json: 0.6.7
> fastapi: 0.115.0
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.46.1
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> sse-starlette: 1.8.2
> tenacity: 8.5.0
> tiktoken: 0.7.0
> types-requests: 2.32.0.20240914
> typing-extensions: 4.12.2 | 🤖:bug,Ɑ: core | low | Critical |
2,595,004,869 | pytorch | Update auto_functionalized_v2 to pass view subgraphs | In Inductor, auto_functionalization_v2 needs to reconstruct views given some base tensors. The way we do that today is that we pass the (sizes, strides, storage_offsets) of the views to auto_functionalization_v2 and then generate .as_strided() calls into the Inductor post_grad_graph.
Our current design has a couple of issues:
- as_strided calls in inductor may be inefficient, since they need to realize the tensor
- To avoid generating as_strided calls in Inductor, we store some information about whether the view is an alias or a select/slice/alias in the auto_functionalized_v2 HOP. This logic doesn't work for more complicated subgraphs
Instead, we should pass subgraphs that represent how to reconstruct the views to the auto_functionalized HOP.
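To make the metadata concrete, here is a pure-Python sketch (hypothetical helper name; no PyTorch required) of what reconstructing a view from `(sizes, strides, storage_offset)` means — the same triple that is passed to `auto_functionalized_v2` today:

```python
def rebuild_view(storage, sizes, strides, offset=0):
    """Read a 2-D view's elements out of a flat buffer using
    (sizes, strides, storage_offset) metadata, like .as_strided()."""
    def at(r, c):
        return storage[offset + r * strides[0] + c * strides[1]]
    return [[at(r, c) for c in range(sizes[1])] for r in range(sizes[0])]

storage = list(range(12))              # a 3x4 row-major base tensor
# the view base[1:, ::2] has sizes (2, 2), strides (4, 2), offset 4
view = rebuild_view(storage, (2, 2), (4, 2), offset=4)
print(view)  # [[4, 6], [8, 10]]
```

Passing a subgraph instead, as proposed, would carry the chain of view ops (e.g. slice, select) rather than just this flat metadata, so a backend would not be forced to materialize an `as_strided` call.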
cc @bdhirsh @ezyang @chauhang @penguinwu @yf225 | triaged,module: functionalization,oncall: pt2,module: pt2-dispatcher | low | Minor |
2,595,007,404 | rust | Tracking issue for all the ways in which -C compiler flags can alter the ABI | If a `-C` flag alters the ABI, mixing crates built with different flags causes UB. This issue is gathering all the ways in which this can happen, so that we can figure out what to do with them. The general goal is to make it impossible to alter these flags without realizing that they are "special" and need to be set consistently across all crates. Ideally rustc can even check that they are set consistently, though that will not cover dynamic linking.
- `-Ctarget-features` can affect the ABI in a bunch of ways
- https://github.com/rust-lang/rust/issues/116558
- https://github.com/rust-lang/rust/issues/116344
- https://github.com/rust-lang/rust/issues/131058
- https://github.com/rust-lang/rust/issues/131819
- https://github.com/rust-lang/rust/issues/64609
- https://github.com/rust-lang/rust/issues/89586
- https://github.com/rust-lang/rust/issues/131300
- https://github.com/rust-lang/rust/issues/129893
- `-Cllvm-args` can set all sorts of flags, some of which can [change the ABI](https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/-C.20flags.20that.20change.20ABI/near/477359366)
Flags that are sus:
- `-Cno-redzone` [could potentially be a problem](https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/stabilizing.20compiler.20flags.20for.20Rust-for-Linux/near/477650067)
- Probably `-Clink-arg(s)` can also do bad shenanigans... but it can't really affect ABI, can it? The ABI is already baked into the object files at this point.
This list might be incomplete! | T-compiler,C-tracking-issue,A-ABI,A-target-feature | low | Minor |
2,595,013,480 | yt-dlp | [yandexmusic:track] Can't fully fetch track title from yandex music | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Russia
### Provide a description that is worded well enough to be understood
Hello!
I'm using the following statement:
`yt-dlp -vU -O "%(track)s" https://music.yandex.ru/album/9881481/track/62579582`
But it fetches only part of the title.
The whole title is "So Heavy I Fell Through the Earth Algorithm Mix"
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-O', '%(track)s', 'https://music.yandex.ru/album/9881481/track/62579582']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version master@2024.10.16.035413 from yt-dlp/yt-dlp-master-builds [fbc66e3ab] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 6.1.1-full_build-www.gyan.dev (setts), ffprobe 6.1.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest
Latest version: master@2024.10.16.035413 from yt-dlp/yt-dlp-master-builds
yt-dlp is up to date (master@2024.10.16.035413 from yt-dlp/yt-dlp-master-builds)
[yandexmusic:track] Extracting URL: https://music.yandex.ru/album/9881481/track/62579582
[yandexmusic:track] 62579582: Downloading track JSON
[yandexmusic:track] 62579582: Downloading track location url JSON
[yandexmusic:track] 62579582: Downloading track location JSON
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 62579582: Downloading 1 format(s): 0
So Heavy I Fell Through the Earth
```
| geo-blocked,site-enhancement | low | Critical |
2,595,014,100 | pytorch | auto_functionalized_v2 doesn't support export | When we implemented auto_functionalized_v2, it was easier to add support for it to compile and then worry about export later. Today, export is still using auto_functionalized_v1.
The main delta is that auto_functionalized_v2 supports reinplacing custom ops that mutate inputs, where the inputs may be views of other tensors. It needs a backend (like Inductor) to actually do the reinplacing.
The main thing to do to get auto_functionalized_v2 to work with export is to develop a BC serialization scheme for it. We should do https://github.com/pytorch/pytorch/issues/138220 as a part of that work so that we don't need to break BC in the future when we implement that.
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Minor |
2,595,047,440 | terminal | On keypress or output, openconsole should scroll down like conhost | ### Windows Terminal version
1.22.2702.0
### Windows build number
10.0.19045.5011
### Other Software
cmd.exe
bash.exe (wsl1)
### Steps to reproduce
Get some output from `dir` or from `ls -alR` so that the window scrolls.
Then scroll up.
Then type anything.
conhost.exe scrolls to the input line; openconsole.exe does not.
Also, if a program outputs data while you are "scrolled up", conhost scrolls down.
This is a matter of taste and preference, and if I were you I would put a flag in the **Terminal scrolling** section.
### Expected Behavior
Scroll to the last line on keypress or on output.
### Actual Behavior
The scroll stays where it is.
I repeat: this is not a big deal... and it's a matter of taste... but I would put the flag in the preferences.
| Product-Conhost,Area-Interaction,Issue-Bug,In-PR | low | Minor |
2,595,063,803 | react | Bug: Uncaught runtime error with 'removeChild' and long lists | React version: 18.3.1
## Steps To Reproduce
1. `$ npx create-react-app my-app`
2. `$ cd my-app`
3. Replace the contents of App.js with the code listed below
4. `$ npm run start`
5. Open the page in Google Chrome or Edge (issue does not reproduce in Firefox/Safari)
6. Click one of the "x" labels
7. Click "hide"
```js
// App.js
import { useState } from 'react';
function App() {
const [show, setShow] = useState(false);
return (
<div>
{Array(950).fill().map(() => <span onClick={() => setShow(true)} key={`rng_${Math.random()}`}>x</span>)}
<div>
{show && (
<div>
<button onClick={() => setShow(false)}>Hide</button>
</div>
)}
</div>
</div>
);
}
export default App;
```
Link to code example: https://codesandbox.io/p/sandbox/sleepy-resonance-29sgks
## The current behavior
It fails with "Failed to execute 'removeChild on 'Node': [..]"
This error message typically indicates that somebody or something has manipulated the DOM outside of React. However, as you can see in the very small reproducible example, this should not be the case.
I am able to reliably reproduce this on Google Chrome, but **not** Safari or Firefox. I tested on two different computers (both running OSX).
I could reproduce with a project set up with three separate environments; using vite, create-react-app, and inside codesandbox.
I also tried downgrading to react 17.0.2, and same problem there.
If the size of the list is reduced, the crash either stops or does not happen as often.
## The expected behavior
I should be able to re-render components based on state. | Status: Unconfirmed | low | Critical |
2,595,080,170 | ollama | add module/plug-in system to ollama | llm > go code > ollama add-on
no ollama recompile, but fast evolution without more work for you... | feature request | low | Minor |
2,595,093,963 | pytorch | No guards for frozen dataclasses and readonly configs | ### 🚀 The feature, motivation and pitch
Currently, a guard will be put for a frozen dataclass:
```
TORCH_LOGS="+guards" python -c """from dataclasses import dataclass
import torch
@dataclass(frozen=True)
class MyDC:
x: int
@torch.compile(fullgraph=True)
def func(dc: MyDC, t):
return dc.x + t
func(MyDC(0), torch.randn(()))
"""
```
gives a guard for
```
V1017 16:30:38.479000 75647 torch/_dynamo/guards.py:2314] [0/0] [__guards] | | +- GuardManager: source=L['dc'].x, accessed_by=GetAttrGuardAccessor(x)
V1017 16:30:38.479000 75647 torch/_dynamo/guards.py:2314] [0/0] [__guards] | | | +- EQUALS_MATCH: L['dc'].x == 0 # # <string>:10 in func
```
This is unnecessary as `dc.x` cannot be changed (`dc` can be guarded but `dc.x` is frozen).
The same would apply for readonly configs from omegaconf (when #138224 is fixed, this code will show guards for x):
```python
import torch
from omegaconf import DictConfig
@torch.compile(fullgraph=True) # this will break currently
def func(dc, t):
return dc.x + t
cfg = DictConfig({'x': 0}, flags={'readonly': True})
func(cfg, torch.randn(()))
```
### Alternatives
I guess someone could hack their way through a frozen dataclass and that could lead to some errors but is it that much of a risk?
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,topic: not user facing,oncall: pt2,module: dynamo,module: guards,dynamo-triage-june2024 | low | Critical |
2,595,148,222 | Python | Add TSP problem in Graph Data Structure | ### Feature description
I will add the Traveling Salesman Problem (TSP) implementation to the Graph Data Structure file, as it's a must-do problem when solving graphs in DS.
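A minimal sketch of what such an implementation could look like (my illustration using the Held-Karp dynamic programming approach; the contributor's actual implementation may differ):

```python
from itertools import combinations

def tsp_held_karp(dist):
    """Return the cost of the shortest tour visiting every node exactly once,
    starting and ending at node 0. `dist` is an n x n distance matrix."""
    n = len(dist)
    # best[(mask, j)] = cheapest cost of starting at 0, visiting exactly the
    # node set `mask` (node 0 excluded from the mask), and ending at node j.
    best = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = 0
            for j in subset:
                mask |= 1 << j
            for j in subset:
                prev_mask = mask ^ (1 << j)
                best[(mask, j)] = min(
                    best[(prev_mask, k)] + dist[k][j] for k in subset if k != j
                )
    full = (1 << n) - 2  # all nodes except node 0
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_held_karp(dist))  # -> 80
```

Held-Karp runs in O(n^2 * 2^n) time, which is exact but only practical for small graphs; larger instances would need a heuristic.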
Kindly assign this issue to me. | enhancement | low | Minor |
2,595,160,354 | flutter | [Impeller] adjustment to DlImageFilter APIs required for coverage computation | In https://github.com/flutter/engine/pull/55843 I attempted to switch the impeller dispatcher to use the DL mechanism for filter bounds computation. Specifically, using `get_input_device_bounds` transform the coverage limit from the parent coordinate space to the child coordinate space:
There are two current issues with this:
1. This API accepts a Skia 3x3 matrix, while the Impeller canvas stores a 4x4.
2. This API automatically rounds out the result coverage, whereas we'd like the dispatcher to be able to choose between a RoundOut and a RoundIn
FYI @flar | P2,e: impeller,team-engine,triaged-engine | low | Minor |
2,595,162,145 | PowerToys | Powertoys not opening & keyboard delayed by 1-2 seconds | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update, Microsoft Store
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
Not sure
### ✔️ Expected Behavior
Keyboard works properly, and PowerToys settings window opens
### ❌ Actual Behavior
Keyboard is delayed by 1-2 seconds in non-admin apps, and no PowerToys windows open.
### Other Software
Any application not running as Admin | Issue-Bug,Needs-Triage | low | Major |
2,595,163,467 | material-ui | [Autocomplete] renderInput is using an old api | ### Steps to reproduce
The props passed through renderInput use the old API InputProps and InputLabelProps.


### Current behavior
_No response_
### Expected behavior
_No response_
### Context
_No response_
### Your environment
<details>
System:
OS: Windows 11 10.0.22621
Binaries:
Node: 20.17.0 - C:\Program Files\nodejs\node.EXE
npm: 9.8.1 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.9.0 - C:\Program Files\nodejs\pnpm.CMD
Browsers:
Chrome: Not Found
Edge: Chromium (127.0.2651.74)
npmPackages:
@emotion/react: ^11.11.4 => 11.13.3
@emotion/styled: ^11.11.5 => 11.13.0
@mui/icons-material: ^6.1.3 => 6.1.3
@mui/lab: 6.0.0-beta.11 => 6.0.0-beta.11
@mui/material: ^6.1.3 => 6.1.3
@mui/material-nextjs: ^6.1.3 => 6.1.3
@mui/styled-engine-sc: ^6.1.3 => 6.1.3
@mui/system: ^6.1.3 => 6.1.3
@mui/utils: ^6.1.3 => 6.1.3
@mui/x-data-grid-pro: ^7.20.0 => 7.20.0
@mui/x-date-pickers: ^7.20.0 => 7.20.0
@mui/x-date-pickers-pro: ^7.20.0 => 7.20.0
@mui/x-license: ^7.20.0 => 7.20.0
@types/react: ^18.2.66 => 18.3.5
react: ^18.2.0 => 18.3.1
react-dom: ^18.2.0 => 18.3.1
typescript: ^5.2.2 => 5.5.4
</details>
**Search keywords**: autocomplete, api | component: autocomplete,enhancement | low | Minor |
2,595,185,734 | flutter | [packages] Scrub Android plugins for pre-API 21 support | The current stable version of Flutter now requires API 21+, but most of our plugins have not been updated to reflect that, and still support API 19+. We have some tech debt related to older API support, and it would be good to find and clean it out proactively (examples we've seen so far include https://github.com/flutter/packages/pull/7876/ and https://github.com/flutter/packages/pull/7874/files#r1802064758, but those are just things we've hit randomly so an audit would almost certainly turn up more)
This would look something like:
- Ensure that all the Android plugin implementation packages require Flutter 3.24+ (already true for many)
- Update them to `minSdkVersion 21` (this should not be a breaking change as long as Flutter itself requires 21, but IIRC `minSdkVersion` *can* trigger behavior changes—although I don't remember if that's true at the level of our setting, or only at the level of the client app building them)
- Update the app-facing package README support table to say 21+ (some are probably quite stale; we don't currently have any checks in CI that these match the implementation package, which is a tracked issue).
- Scrub for any references to `Build.VERSION` that reference `KITKAT` or `LOLLIPOP` to see what's still needed in code.
- Delete or migrate any robolectric tests that reference api 20 or below. Tests that are verifying removed api behavior should be deleted, tests that verify apis work should be migrated.
- Ideally search any Java/Kotlin code for other references to KitKat, Lollipop, 19, 20, and 21 to try to catch things that aren't specifically `Build` references (like the second example above). | team,platform-android,package,P2,c: tech-debt,team-android,triaged-android | low | Minor |
2,595,213,480 | kubernetes | 🐘 Switch to`opencontainers/runc` as a library (Was: Explore replacing opencontainers/runc with containerd/cgroups) | Kubernetes use of `opencontainers/runc` as a library is placing undue burden on the runc team, for example:
- https://github.com/opencontainers/runc/issues/3028
- https://github.com/opencontainers/runc/issues/3221#issuecomment-925972992
We now have a cgroups-specific library in the containerd org that we can explore to start slowly replacing functionality we needed earlier from runc, I think.
https://github.com/containerd/cgroups
As of right now k/k master shows the following imports of opencontainers/runc:
```
❯ rg '"github.com/opencontainers/runc' | grep -v vendor | cut -f 2 -d '"' | sort | uniq -c | sort
1 github.com/opencontainers/runc/libcontainer/cgroups/systemd
1 github.com/opencontainers/runc/libcontainer/utils
2 github.com/opencontainers/runc/libcontainer/apparmor
2 github.com/opencontainers/runc/libcontainer/cgroups/manager
2 github.com/opencontainers/runc/libcontainer/configs
2 github.com/opencontainers/runc/libcontainer/userns
3 github.com/opencontainers/runc/libcontainer/cgroups/fscommon
17 github.com/opencontainers/runc/libcontainer/cgroups
```
/sig node | sig/node,area/code-organization,needs-triage | medium | Major |
2,595,225,840 | node | Request signal isn't aborted after garbage collection | ## Bug Description
Passing a signal to Request() and aborting it does not cause the underlying request signal to be aborted after it's been garbage collected. This leads to issues where you want to listen on the request signal even after the request itself falls out of scope - e.g., the request is instantiated with an abort controller signal and downstream code is waiting for that signal to be aborted via `request.signal`.
## Reproducible By
Run the following using `node --expose-gc main.js`:
```js
const ac = new AbortController();
ac.signal.addEventListener("abort", () => {
console.log("ac signal aborted");
});
const request = new Request("https://google.ca", { signal: ac.signal });
request.signal.addEventListener("abort", () => {
console.log("request signal aborted");
});
setTimeout(() => {
global.gc();
ac.abort();
}, 0);
```
## Expected Behavior
`ac signal aborted` **and** `request signal aborted` should be logged to the console.
Instead, only `ac signal aborted` is logged.
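As a possible workaround (my suggestion, not from the report), code that needs to react to cancellation can subscribe to the AbortController's own signal, which the caller holds a strong reference to and which is unaffected by the Request being collected:

```javascript
// Workaround sketch: listen on ac.signal directly rather than request.signal,
// so the listener still fires even after the Request object is
// garbage-collected.
const ac = new AbortController();
let sawAbort = false;
ac.signal.addEventListener("abort", () => { sawAbort = true; });

let request = new Request("https://example.com", { signal: ac.signal });
request = null; // the Request may now be collected at any time

ac.abort(); // dispatches synchronously on ac.signal regardless of the Request
```

This only helps when the code waiting on the signal can be handed `ac.signal` instead of `request.signal`, which is not always the case for framework-owned requests.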
## Environment
Latest node and undici. Tried it in some older versions as well.
### Additional context
A few folks are running into this while trying to close event stream requests in the remix web framework. The underlying requests are never closed because the signal doesn't get aborted. | confirmed-bug | low | Critical |
2,595,230,361 | godot | Problems inferring type of variables in an autoloaded node's script at runtime | ### Tested versions
- Reproducible in: 4.3, 4.4.dev2, 4.4dev3
### System information
Godot v4.4.dev3 - Arch Linux #1 SMP PREEMPT_DYNAMIC Thu, 12 Sep 2024 17:21:02 +0000 on Wayland - Wayland display driver, Single-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 Ti (nvidia; 560.35.03) - AMD Ryzen 5 2600X Six-Core Processor (12 threads)
### Issue description
I noticed that type inference does not work under very specific circumstances: when inferring the type of properties of a script that is attached to the root node of an autoloaded scene via the global autoload name within the script itself, or within another script that is used by the autoloaded script. The problem only occurs at runtime - in the editor the lines are shown as being typed and autocompletion works as well, but when running the project it's as if type inference did not work at all. See the reproduction steps and MRP below.

### Steps to reproduce
- Add an "autoload" scene to the project
- In the root node of that scene, attach a script
- In that script, add variables with types
- Also in that script, write functions where the type of those variables is being inferred (using the global autoload name)
- Alternatively, do that in another script that is loaded by the autoloaded node's script
- It works in the editor, but at runtime you will get warnings / errors
### Minimal reproduction project (MRP)
[type_inference_mrp.zip](https://github.com/user-attachments/files/17416031/type_inference_mrp.zip)
| bug,topic:gdscript | low | Critical |
2,595,249,910 | flutter | [Impeller] compute minimum input hint for shared backdrop layer. | Right now we're defaulting to blurring the entire backdrop, but this can be speed up by computing the union of all saveLayer coverages during the pre-pass. | P2,e: impeller,team-engine,triaged-engine | low | Minor |
2,595,260,399 | flutter | [Impeller] AHB swapchain broken on NVIDIA SHIELD Android TV. | ### Steps to reproduce
I was following with this link: https://api.flutter.dev/flutter/widgets/CustomMultiChildLayout-class.html
The app works fine on my Android Pixel 7 Pro phone. It works fine on my Windows 11 desktop. However, it renders a large dash across the screen on my Android TV (Android SHIELD)
### Expected results
A clean rendered screen without large white dashes.
### Actual results
A TV covered in large white dashes.



### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| e: device-specific,platform-android,engine,c: rendering,P1,e: impeller,team-engine,triaged-engine,e: impeller-naughty-driver | medium | Critical |
2,595,267,618 | pytorch | [cond] Cond lifts duplicate symint inputs | ### 🐛 Describe the bug
```python
def test_cond_symint_input(self):
class M(torch.nn.Module):
def forward(self, x, y, z):
a = y.shape[0]
b = z.shape[0]
def true_fn(x):
return x + a
def false_fn(x):
return x + b
return torch.cond(x.shape[0] > 5, true_fn, false_fn, (x,))
inp = (torch.ones(3, 3), torch.ones(3, 3), torch.ones(3, 3))
ep = torch.export.export(M(), inp, dynamic_shapes={"x": {0: Dim("d")}, "y": {0: Dim("d")}, "z": {0: Dim("d")}})
```
Errors with:
```
======================================================================
ERROR: test_cond_symint_input (__main__.TestMoo)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "/data/users/angelayi/pytorch/moo.py", line 1373, in test_cond_symint_input
path = torch._inductor.aoti_compile_and_package(ep, inp)
File "/data/users/angelayi/pytorch/torch/export/__init__.py", line 366, in export
return _export(
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 1021, in wrapper
raise e
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 994, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/angelayi/pytorch/torch/export/exported_program.py", line 116, in wrapper
return fn(*args, **kwargs)
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 1940, in _export
export_artifact = export_func( # type: ignore[operator]
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 1234, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 1343, in _strict_export_lower_to_aten_ir
aten_export_artifact = lower_to_aten_callback(
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 756, in _export_to_aten_ir
placeholder_naming_pass(
File "/data/users/angelayi/pytorch/torch/_export/utils.py", line 812, in placeholder_naming_pass
_name_hoo_subgraph_placeholders(gm)
File "/data/users/angelayi/pytorch/torch/_export/utils.py", line 704, in _name_hoo_subgraph_placeholders
subgraph.recompile()
File "/data/users/angelayi/pytorch/torch/fx/graph_module.py", line 814, in recompile
cls.forward = _forward_from_src(self._code, python_code.globals, co_fields)
File "/data/users/angelayi/pytorch/torch/fx/graph_module.py", line 91, in _forward_from_src
return _method_from_src(
File "/data/users/angelayi/pytorch/torch/fx/graph_module.py", line 101, in _method_from_src
_exec_with_source(src, globals_copy, co_fields)
File "/data/users/angelayi/pytorch/torch/fx/graph_module.py", line 87, in _exec_with_source
exec(compile(src, key, "exec"), globals)
File "<eval_with_key>.26 from /data/users/angelayi/pytorch/torch/fx/experimental/proxy_tensor.py:1177 in wrapped", line 4
SyntaxError: duplicate argument 'sym_size_int_3' in function definition
```
Graph from AOTAutograd:
```
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "f32[s0, 3]", arg1_1: "f32[s0, 3]", arg2_1: "f32[s0, 3]"):
#
sym_size_int_3: "Sym(s0)" = torch.ops.aten.sym_size.int(arg0_1, 0)
# File: /data/users/angelayi/pytorch/moo.py:1368 in forward, code: return torch.cond(x.shape[0] > 5, true_fn, false_fn, (x,))
gt: "Sym(s0 > 5)" = sym_size_int_3 > 5
# File: /data/users/angelayi/pytorch/torch/_higher_order_ops/cond.py:133 in cond, code: return cond_op(pred, true_fn, false_fn, operands)
true_graph_0 = self.true_graph_0
false_graph_0 = self.false_graph_0
cond = torch.ops.higher_order.cond(gt, true_graph_0, false_graph_0, [arg0_1, sym_size_int_3, sym_size_int_3]); gt = true_graph_0 = false_graph_0 = arg0_1 = sym_size_int_3 = None
getitem: "f32[s0, 3]" = cond[0]; cond = None
return (getitem,)
class true_graph_0(torch.nn.Module):
def forward(self, arg0_1: "f32[s0, 3]", arg1_1: "Sym(s0)", arg2_1: "Sym(s0)"):
# File: /data/users/angelayi/pytorch/moo.py:1363 in true_fn, code: return x + a
add_3: "f32[s0, 3]" = torch.ops.aten.add.Tensor(arg0_1, arg1_1); arg0_1 = arg1_1 = None
return (add_3,)
class false_graph_0(torch.nn.Module):
def forward(self, arg0_1: "f32[s0, 3]", arg1_1: "Sym(s0)", arg2_1: "Sym(s0)"):
# File: /data/users/angelayi/pytorch/moo.py:1366 in false_fn, code: return x + b
add_3: "f32[s0, 3]" = torch.ops.aten.add.Tensor(arg0_1, arg2_1); arg0_1 = arg2_1 = None
return (add_3,)
```
Note: `sym_size_int_3` is lifted as input to cond twice.
### Versions
main
cc @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher | low | Critical |
2,595,284,871 | PowerToys | Color picker add copy default code when color selected | ### Description of the new feature / enhancement
When a color is selected, we should have an option to automatically copy our default code (I use hex, but you can set other defaults) so you can save an extra mouse-click.
### Scenario when this would be used?
All the time when selecting colors and needing a code for it. Just select it, and then go to the application you are using it in, instead of selecting color, selecting copy button, then changing apps and pasting.
### Supporting information
_No response_ | Needs-Triage,Needs-Team-Response | low | Minor |
2,595,320,082 | godot | Cannot specify base url for GDExtension on web platform | ### Tested versions
Tested in 4.3-stable
### System information
Web/WASM
### Issue description
I am trying to serve a Godot project on the web and am unable to load GDExtension wasm from urls not relative to the current page. The urls used by the server look something like:
```
https://servername.com/play/
https://servername.com/static/
```
with `play/` running through an application server to template the html file while `static/` files (js, wasm, etc) are served from something like S3.
This works fine until GDExtensions are involved because you can set `mainPack` and `executable` to the path you want and they will be fetched correctly. For extensions, it will fetch them as a relative path and there does not seem to be anything to change it. Using the urls above it would go to `https://servername.com/play/libname.whatever.wasm` instead of `https://servername.com/static/libname.whatever.wasm`. When I modify the application server to return the wasm on the url Godot uses, everything works.
Three questions:
* Am I missing something? Is there some way to set this?
* Is this a common problem that should be fixed for all godot users or am I on my own here?
* If it is a common problem then any pointers on what the 'correct' way to approach it would be appreciated.
Thanks!
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | platform:web,needs testing,topic:gdextension | low | Minor |
2,595,329,032 | rust | Duplicate error for using allow inside forbid | ### Code
```
// USE `-Zdeduplicate-diagnostics=false`
#[forbid(unsafe_code)] // NO UNSAFE CODE IN HERE!!
fn main() {
{
#[allow(unsafe_code)] // let's have some unsafe code in here
{
}
}
}
```
### Current output
```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
--> tests/ui/lint/deny-inside-forbid-ignored.rs:4:17
|
1 | #[forbid(unsafe_code)] // NO UNSAFE CODE IN HERE!!
| ----------- `forbid` level set here
...
4 | #[allow(unsafe_code)] // let's have some unsafe code in here
| ^^^^^^^^^^^ overruled by previous forbid
error[E0453]: allow(unsafe_code) incompatible with previous forbid
--> tests/ui/lint/deny-inside-forbid-ignored.rs:4:17
|
1 | #[forbid(unsafe_code)] // NO UNSAFE CODE IN HERE!!
| ----------- `forbid` level set here
...
4 | #[allow(unsafe_code)] // let's have some unsafe code in here
| ^^^^^^^^^^^ overruled by previous forbid
|
= note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no`
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0453`.
```
### Desired output
```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
--> tests/ui/lint/deny-inside-forbid-ignored.rs:4:17
|
1 | #[forbid(unsafe_code)] // NO UNSAFE CODE IN HERE!!
| ----------- `forbid` level set here
...
4 | #[allow(unsafe_code)] // let's have some unsafe code in here
| ^^^^^^^^^^^ overruled by previous forbid
error: aborting due to 1 previous error
For more information about this error, try `rustc --explain E0453`.
```
### Rationale and extra context
Use `-Zdeduplicate-diagnostics=false`
more context can be found in `tests/ui/lint/issue-70819-dont-override-forbid-in-same-scope.rs`
### Other cases
_No response_
### Rust Version
rustc 1.83.0-nightly (14f303bc1 2024-10-04)
binary: rustc
commit-hash: 14f303bc1430a78ddaa91b3e104bbe4c0413184e
commit-date: 2024-10-04
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,595,340,388 | godot | Using enum from other class as a signal parameter type doesn't work as expected | ### Tested versions
- Reproducible in Godot v4.4.dev3.mono
### System information
Godot v4.4.dev3.mono - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz (4 threads)
### Issue description
Creating a signal that uses enum from some other class (`SubNode` in this case) works weirdly. My expectation is that you should use the full path of the enum, in this case `SubNode.Foo`:
```gdscript
signal signal_using_enum_from_another_class1(SubNode.Foo)
```
But you get an error message:
```
Parse Error: Expected closing ")" after signal parameters.
```
However, if only the enum name without class is given, signal definition works:
```gdscript
signal signal_using_enum_from_another_class2(Foo)
```
Using enum from the same class also works.
I didn't test what happens if multiple classes have an enum with same name.
### Steps to reproduce

subnode.gd:
```gdscript
class_name SubNode
extends Node2D
enum Foo {BAR}
```
main.gd
```gdscript
extends Node2D
enum MyEnum {ZAP}
signal signal_using_my_enum(MyEnum)
var a: SubNode.Foo
signal signal_using_enum_from_another_class1(SubNode.Foo) # <---- error
signal signal_using_enum_from_another_class2(Foo) # <---- for some reason this works
```
### Minimal reproduction project (MRP)
[signal_enum_test.zip](https://github.com/user-attachments/files/17416676/signal_enum_test.zip)
| discussion,topic:gdscript | low | Critical |
2,595,352,397 | kubernetes | Pod admission can fail due to webhooks + context deadline exceeded, even when all webhooks are set to failurePolicy = Ignore | ### What happened?
Pod admission failed with the error "Timeout: request did not complete within requested timeout - context deadline exceeded"
This occurred with Datadog's admission controller, when a default deny policy was applied and the network policy allowing ingress to Datadog's admission controller was missing. This is despite each of the webhook configurations specifying `failurePolicy: Ignore`. I can understand why they did that: failing pod admission basically kills the cluster.
### What did you expect to happen?
Pod should be successfully admitted since all webhooks were set to `failurePolicy: Ignore`
### How can we reproduce it (as minimally and precisely as possible)?
1. Create 4 mutating or validating webhooks, each with a timeout of 10 seconds, and failurePolicy of Ignore. The webhooks should return an error.
2. Try admitting a Pod. It should work - failurePolicy is set to Ignore.
3. Add a default deny network policy to the webhook's namespace. This will cause connections from the kube-apiserver to the webhook admission controller to fail with a timeout. Overall, 10 seconds will be spent on each one, totaling 40 seconds.
4. Pod admission fails with "Timeout: request did not complete within requested timeout - context deadline exceeded"
### Anything else we need to know?
It seems that pod admission has a global 30 second timeout. By specifying 4 webhooks, each with a timeout of 10 seconds, it is possible to exceed that timeout and cause pod admission to fail, **even when all webhooks have failurePolicy set to Ignore**. This also results in a message that just says "context deadline exceeded", and doesn't name the offending webhooks, since they didn't actually cause the failure. This makes it super hard to debug, as you have to dig into the kube-apiserver logs to find the offending webhook. It also seems unintentional: my expectation was that with no webhooks set to `failurePolicy: Fail`, pod admission should not be able to fail due to webhooks.
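To make the arithmetic concrete, here is a sketch of the kind of configuration involved (all names hypothetical, required fields abbreviated):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-hooks   # hypothetical
webhooks:
  - name: hook-1.example.com
    failurePolicy: Ignore   # only covers a failed webhook call, not the global budget
    timeoutSeconds: 10      # 4 such webhooks -> up to 40s, past the ~30s request timeout
    # (clientConfig, rules, sideEffects, admissionReviewVersions omitted)
```

With four of these, the per-webhook timeouts sum to 40 seconds, so the overall admission request can hit the global deadline before any individual webhook is considered failed, and `failurePolicy: Ignore` never gets a chance to apply.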
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.1
Kustomize Version: v5.4.2
Server Version: v1.31.1-eks-ce1d5eb
```
</details>
### Cloud provider
<details>
AWS
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,595,368,116 | next.js | Next.js App Router ISR doesn't remove noindex meta tag on cache revalidation | ### Link to the code that reproduces this issue
https://stackblitz.com/edit/stackblitz-starters-ws74xq?file=README.md
### To Reproduce
1. Build the project and start it in production mode `npm run build && npm start`
2. Open a new terminal and run the backend server `npm run server`
3. Open the application on the `/admin` path. Click on any item in the list. Change the status to "Draft" and click on save button. If the post already has a "Draft" status then first change it to "Published" and make it "Draft" again.
4. Copy the id of the post and go to page `/posts/<id>`. Refresh it until the not found page appears.
5. Check the generated page in `.next` directory: `cat .next/server/app/posts/<id>.html`. There should be `<meta name="robots" content="noindex"/>` tag.
6. Now go the the page `/admin/posts/<id>` and change the status to "Published" and save.
7. Go to the page `/posts/<id>` and refresh it until the non-404 page appears. Run the `cat .next/server/app/posts/<id>.html` command in terminal again. There is still `<meta name="robots" content="noindex"/>` tag.
### Current vs. Expected behavior
I expect Next.js to remove the robots tag on revalidation.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: Ubuntu 20.04.0 LTS Thu Oct 17 2024 22:46:17 GMT+0500 (Uzbekistan Standard Time)
Available memory (MB): NaN
Available CPU cores: 8
Binaries:
Node: 18.20.3
npm: 10.2.3
Yarn: 1.22.19
pnpm: 8.15.6
Relevant Packages:
next: 14.2.8 // There is a newer version (14.2.15) available, upgrade recommended!
eslint-config-next: 14.2.8
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next build (local), next start (local)
### Additional context
_No response_ | bug | low | Minor |
2,595,375,446 | deno | deno migrate command | Hello there, i'm new to deno exiting ecosytem, recently i'm experimenting with my old nodejs project to deno, my project has standard configuration of a nodejs typescript project, it has tsconfig, eslint, pritter etc...
When i'm run deno install command and deno task dev command the project run okay, but if i open a file there are lot of error related to imports, also in my deno migration i do not need any tsconfig, eslint, pritter config.
is it possible to add deno migrate command with --from flag, from can have value like "node" or "bun", then if i run this command following will happen
- it will update all import part from *.ts/*.js file -- or we can pass a file pattern, by default it will ignore node_modules directory
- from package.json it will create deno.json file and delete package.lock.json file
- it will remove all typescript, eslint, pritter config
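As a usage sketch, the proposed invocation could look like this (hypothetical; the subcommand and flag do not exist in Deno today):

```
deno migrate --from node    # rewrite imports, generate deno.json from package.json
deno migrate --from bun
```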
Thanks
| cli,suggestion | low | Critical |
2,595,403,540 | excalidraw | Add 2 new urls to the Web Embed URLs whitelist | I would like the two urls to be whitelisted:
- https://perchance.org/cynews
- https://dddice.com/room/hx6kaq_/stream/chat?key=[REDACTED]
dddice.com used to work in a previous version, but that does not seem to be the case anymore.
Thanks in advance, | whitelist | low | Minor |
2,595,428,354 | node | `parallel/test-performance-measure` is flaky | ### Test
`test-performance-measure`
### Platform
Linux x64
### Console output
```console
---
duration_ms: 1225.808
exitcode: 1
severity: fail
stack: |-
node:internal/assert/utils:281
throw err;
^
AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
assert.ok(duration > (DELAY - ALLOWED_MARGIN))
at /home/iojs/build/workspace/node/test/parallel/test-performance-measure.js:13:12
at Array.forEach (<anonymous>)
at PerformanceObserver.<anonymous> (/home/iojs/build/workspace/node/test/parallel/test-performance-measure.js:12:22)
at PerformanceObserver.<anonymous> (/home/iojs/build/workspace/node/test/common/index.js:491:15)
at [kDispatch] (node:internal/perf/observe:354:19)
at Immediate._onImmediate (node:internal/perf/observe:130:25)
at process.processImmediate (node:internal/timers:511:21) {
generatedMessage: true,
code: 'ERR_ASSERTION',
actual: false,
expected: true,
operator: '=='
}
Node.js v24.0.0-pre
...
```
### Build links
- https://ci.nodejs.org/job/node-test-commit-linux/nodes=rhel8-x64/61234/testReport/junit/(root)/parallel/test_performance_measure/
### Additional information
_No response_ | flaky-test,perf_hooks,linux | low | Critical |
2,595,454,441 | rust | Implicit calls to `deref` and `deref_mut` are a footgun with raw pointers | I tried this code (which models library code with invariants for all valid instances of A but without all the details):
```rust
use std::ops::Deref;
use std::mem::MaybeUninit;
struct A{
b: MaybeUninit<B>,
}
struct B{
field: String,
}
impl Deref for A {
type Target = B;
fn deref(&self)->&B{
println!("Called deref!");
// SAFETY
// This deref is expected to be called only outside of the current module and field b is private
// so as long as all instances of A have it initialized before being accessible outside,
// this code is sound.
unsafe {
self.b.assume_init_ref()
}
}
}
// Note: while this is `main`, it is intended to show code that used for
// initialization of new instance of A. I omitted wrapping of A in a module
// and using a separate factory method for brevity.
fn main() {
let mut a: MaybeUninit<A> = MaybeUninit::uninit();
unsafe {
let p = a.as_ptr();
// The only way to get pointer to field of pointee.
let b_ptr = &raw const (*p).b;
// However, it may cause unintended calls to a deref if we are not vigilant
// and mistakenly type name of a field of a type our pointee dereferences to.
let field_ptr = &raw const (*p).field;
}
}
```
I expected to see this happen: this code should be rejected because it causes implicit calls to `Deref::deref`, which assume `*p` is initialized.
Instead, this happened: code compiles and prints `Called deref!` when executed.
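For reference, here is a sketch of the intended initialization written so every projection step is explicit (my illustration, not part of the report; it does not make the mistyped `(*p).field` stop compiling, which is exactly the problem being reported, and `addr_of_mut!` is the pre-1.82 macro spelling of `&raw mut`):

```rust
use std::mem::MaybeUninit;

struct B { field: String }
struct A { b: MaybeUninit<B> }

// Hypothetical sketch: initialize A's inner field through raw-pointer
// projections only, so no B-typed place expression goes through `Deref`.
fn init_a(val: &str) -> A {
    let mut a: MaybeUninit<A> = MaybeUninit::uninit();
    let p = a.as_mut_ptr();
    unsafe {
        // Project to the MaybeUninit field, then cast it away explicitly.
        let b_ptr = std::ptr::addr_of_mut!((*p).b) as *mut B;
        std::ptr::addr_of_mut!((*b_ptr).field).write(String::from(val));
        // Sound: `b` is a MaybeUninit (always valid as such) and its
        // contents were just written, so A as a whole is initialized.
        a.assume_init()
    }
}
```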
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (9322d183f 2024-10-14)
binary: rustc
commit-hash: 9322d183f45e0fd5a509820874cc5ff27744a479
commit-date: 2024-10-14
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
| C-discussion | low | Major |
2,595,482,293 | godot | Godot unresponsive - any and all windows related to the application | ### Tested versions
Issue has been consistent since 3.5 and is still occurring in 4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.26100 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6590) - Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz (6 Threads)
### Issue description
Extensive information from my attempts at resolving the issue prior to this bug submission is available on the official Godot forum: https://forum.godotengine.org/t/godot-unresponsive-bug/85785
### Steps to reproduce
It happens any time I launch the application, as well as in any subsequent application windows.
I recorded and uploaded an example of the issue: https://www.youtube.com/watch?v=kP_wSwQhNmI&t=63s
This only happens with Godot; no other application I use has this issue.
### Minimal reproduction project (MRP)
This is not exclusive to any one project as it is the engine itself that's having an issue. I've already attempted downloading older versions as well as completely wiping all data relating to godot before re-downloading and relaunching. | bug,topic:rendering,needs testing | low | Critical |
2,595,483,983 | flutter | [Release] Update release documentation | As part of regular release engineering maintenance we need to ensure release documentation is accurate and up-to-date. This issue tracks the verification and updating of these documents.
Once a document is verified and updated, please check the corresponding box. If a change is made, please reference the PR.
- [ ] [Bad-Builds.md](https://github.com/flutter/flutter/blob/master/docs/releases/Bad-Builds.md)
- [ ] [Flutter-Cherrypick-Process.md](https://github.com/flutter/flutter/blob/master/docs/releases/Flutter-Cherrypick-Process.md)
- [ ] [Flutter-build-release-channels.md](https://github.com/flutter/flutter/blob/master/docs/releases/Flutter-build-release-channels.md)
- [ ] [Hotfix-Documentation-Best-Practices.md](https://github.com/flutter/flutter/blob/master/docs/releases/Hotfix-Documentation-Best-Practices.md)
- [ ] [Quality-Assurance.md](https://github.com/flutter/flutter/blob/master/docs/releases/Quality-Assurance.md)
- [ ] [Release-process.md](https://github.com/flutter/flutter/blob/master/docs/releases/Release-process.md)
- [ ] [Release-versioning.md](https://github.com/flutter/flutter/blob/master/docs/releases/Release-versioning.md)
- [ ] [Where's-my-commit.md](https://github.com/flutter/flutter/blob/master/docs/releases/Where's-my-commit.md) | team-release | low | Minor |
2,595,506,668 | ollama | Pulling models from private OCI Registries | According to #2388 it should be possible to push and pull models to a Docker/OCI registry (without authentication).
Even though it's an unsupported feature, I find it very useful and would like to contribute a short description of how to do this.
Potential use cases are
- organisation-internal registries for orgs that limit internet access,
- serving private models,
- running Ollama on air gapped systems, and
- saving bandwidth and download time at edge locations.
I've tried it with a local Docker registry: pushing seems to work, and pulling the manifest works as well, but pulling the blobs apparently does not. Here is what I've tried:
Run a local docker registry v2:
```bash
docker run -d -p 5000:5000 --restart=always --name registry registry:2
```
Copy a model and push it to the registry:
```bash
ollama cp phi localhost:5000/mitja/phi
ollama push localhost:5000/mitja/phi --insecure
```
Remove the copied model and pull it again (this works, I believe because the blobs from the original phi model are still there):
```bash
ollama rm localhost:5000/mitja/phi
ollama pull localhost:5000/mitja/phi --insecure
```
Remove both the copied and the original model, then pull the model from the private registry again (does not work):
```bash
ollama rm phi
ollama rm localhost:5000/mitja/phi
ollama pull localhost:5000/mitja/phi --insecure
```
Runs into `Error: http: no Location header in response`
Pull the original model, then the copied model (works):
```bash
ollama pull phi
ollama pull localhost:5000/mitja/phi --insecure
ollama run localhost:5000/mitja/phi
```
Remove the registry container to clean up:
```bash
docker stop /registry
docker rm /registry
```
Did I miss a step or did I make a mistake, or is pushing/pulling the blobs not yet possible? | feature request | low | Critical |
2,595,535,448 | excalidraw | text not synced if pasted into wysiwyg | in collab (affects E+ too):
1. enter wysiwyg editor
2. paste text whose font and character code range aren't currently rendered on canvas
3. the text element is not synced to the other client (even after confirmation)
/cc @Mrazator | bug,collaboration | low | Minor |
2,595,547,739 | godot | Tileset collision polygon does not appear when clicking away after drawing the polygon | ### Tested versions
- Reproduce in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - AMD Radeon RX 6750 XT (Advanced Micro Devices, Inc.; 32.0.12011.1036) - 12th Gen Intel(R) Core(TM) i5-12600KF (16 Threads)
### Issue description
When clicking away after drawing the collision polygon, it will not show up on the tile in the tileset. If you click the square surrounding the tile in the drawing area it will show the polygon in the tileset over that tile.
Note: Here I've been using Compatibility but this also happens in Forward+.
https://github.com/user-attachments/assets/1e9ce277-1a7b-445c-ab8b-3ef06a35eb6b
### Steps to reproduce
1. Create a tileset
2. Add a Physics Layer
3. Press the "Add polygon tool" and start drawing the polygon
4. Click outside the drawing area after enclosing the polygon - Polygon is missing
5. Click the gray (or whatever its color is on your system) area surrounding the tile in the drawing area - Polygon is present
### Minimal reproduction project (MRP)
[tileset_collision_polygon_bug.zip](https://github.com/user-attachments/files/17421413/tileset_collision_polygon_bug.zip)
| bug,topic:editor,topic:2d | low | Critical |
2,595,549,230 | kubernetes | `kubeapiservertesting.StartTestServer` mutates global state and prevents parallel tests within the same package | The following recent changes make it hard to run parallel integration tests from within the same package (the `utilruntime.Must` call will occasionally `panic`):
https://github.com/kubernetes/kubernetes/blob/632ed16e002d87fa7166a8f4ca1dc48d4f0a9725/cmd/kube-apiserver/app/testing/testserver.go#L196-L208
This can be hacked around but adding locks (though I suspect that does not cover all cases):
https://github.com/kubernetes/kubernetes/blob/632ed16e002d87fa7166a8f4ca1dc48d4f0a9725/test/integration/controlplane/transformation/transformation_test.go#L151-L158
Note that while many integration tests cannot be run in parallel because the test logic itself mutates global state, the base `kubeapiservertesting.StartTestServer` should support running parallel tests for the cases in which the test logic itself does not need to mutate global state.
cc @deads2k @jpbetz @liggitt
/sig api-machinery
/triage accepted | sig/api-machinery,triage/accepted | low | Minor |
2,595,564,146 | svelte | Feat: [Types] Improves "svelte/element" with directive-free types, and a configurable children | ### Describe the problem
The module "svelte/elements" provides the definitions of HTML attributes that can be used to declare props to spread in a component that "wraps" an HTML element.
For example, for a `Button` component, using the type `HTMLButtonAttributes` directly:
```svelte
<script lang="ts">
import type { HTMLButtonAttributes } from 'svelte/elements';
let { children, ...rest } : HTMLButtonAttributes = $props();
</script>
<button {...rest}>
{@render children?.()}
</button>
```
Or via the equivalent using `SvelteHTMLElements`:
```svelte
<script lang="ts">
import type { SvelteHTMLElements } from 'svelte/elements';
let { children, ...rest } : SvelteHTMLElements['button'] = $props();
</script>
<button {...rest}>
{@render children?.()}
</button>
```
But there are 2 flaws:
* This includes the definition of the `bind:` and `on:` directives, which are therefore proposed by autocompletion
* The children snippet is defined with zero parameters, and this cannot be changed easily.
In order to remove the `bind:`/`on:` directives, II need to write something like that :
```ts
let { children, ...rest } : Omit<HTMLButtonAttributes, `bind:${string}` | `on:${string}`> = $props();
```
And if I need to specify a parameter for the children snippet, I have to write:
```ts
let { children, ...rest } : Omit<HTMLButtonAttributes, `bind:${string}` | `on:${string}` | 'children'>
& { children?: Snippet<[number]> } = $props();
```
### Describe the proposed solution
It would be nice if Svelte 5 had an official type to handle this in "svelte/elements".
Something like this might work :
```ts
export type SvelteHTMLProps<TagName extends keyof SvelteHTMLElements, Parameters extends unknown[] = []> = {
children?: import('svelte').Snippet<Parameters>;
} & Omit<SvelteHTMLElements[TagName], `bind:${string}` | `on:${string}` | `children`>;
```
So we can use `SvelteHTMLProps<'button'>` in order to define our Button component:
```svelte
<script lang="ts">
import type { SvelteHTMLProps } from 'svelte/elements';
let { children, ...rest } : SvelteHTMLProps<'button'> = $props();
</script>
<button {...rest}>
{@render children?.()}
</button>
```
Or `SvelteHTMLProps<'button', [number]>` to define the children parameter type:
```svelte
<script lang="ts">
import type { SvelteHTMLProps } from 'svelte/elements';
let { children, onclick, ...rest } : SvelteHTMLProps<'button', [number]> = $props();
let count = $state(0);
function countClick(evt) {
count++;
onclick?.(evt);
}
</script>
<button onclick={countClick} {...rest}>
{@render children?.(count)}
</button>
```
### Importance
nice to have | types / typescript | low | Minor |
2,595,589,825 | kubernetes | Race when scheduling statefulset pods with local PV, resulting in pods pending forever | ### What happened?
A statefulset has a volumeclaimtemplate which uses a local PV storage class. The PVs created by this storage class are tightly bound to a single node.
In this example let's say that ordinal 0 of the statefulset is running on node A with a PVC which is bound to a PV on node A as well.
1. delete the PVC for ordinal 0
2. immediately after, delete the pod for ordinal 0
3. the new pod for ordinal 0 gets scheduled on node A
4. the new PVC/PV for ordinal 0 gets bound to node B
5. the new pod is stuck in Pending state forever since the new PVC is only available on node B
I believe the issue is that the scheduler, when scheduling the new pod, is looking at the OLD PVC object, which is in the process of being deleted. Since PVCs are referenced by name and statefulsets use consistent naming for PVCs, the scheduler can use the old PVC definition when making Filter decisions.
This can happen if the pod informer/watcher in the scheduler is ahead of the pvc informer/watcher.
### What did you expect to happen?
The scheduler should not place pods using local PVs in an unschedulable state.
### How can we reproduce it (as minimally and precisely as possible)?
Create a statefulset with a local PVC claimref. Delete the pod and pvc for an ordinal at the same time. Retry until the pod is stuck in Pending state.
### Anything else we need to know?
Thinking about possible solutions:
1. in the scheduler, do not use PVCs/PVs that are being deleted when placing the pod. This does not fix the race but most likely reduces its frequency
2. in the scheduler, form a consistent snapshot where all objects used in scheduling decisions are from the same ETCD revision (I tried to see if there is anything in the k8s project that does this but I couldn't find anything)
3. in the statefulset controller, create an owner reference on PVCs to owning pod. during scheduling, if the PVC is not owned by the pod being scheduled, backoff.
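For option 3, the owner reference on the claim could look roughly like this (a sketch of the idea only; the statefulset controller does not set such a reference today, and all names here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-myset-0
  ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: myset-0                      # pod for this ordinal
      uid: <uid-of-current-ordinal-0-pod>
```

At scheduling time, if the claim's owner UID does not match the pod being placed, the scheduler would know it is looking at a stale or deleting claim and could back off.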
### Kubernetes version
<details>
```console
$ kubectl version
1.29.9
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,lifecycle/stale,needs-triage | low | Minor |
2,595,640,433 | excalidraw | Failure to Automatically Read and Save to Existing Excalidraw Canvas File After Reload | When saving an Excalidraw canvas file and reloading the webpage, the app fails to automatically read from the previously saved file.
After reloading, users cannot save changes to the existing file. Instead, they must save to a new file or manually reopen the existing one to continue working. | bug | low | Critical |
2,595,671,866 | pytorch | Adam optimizer without first moment estimate, for less vram | ### 🚀 The feature, motivation and pitch
Many GANs like StyleGAN use the Adam optimizer with `betas=(0.0, 0.999)`; this means the first moment estimate is disabled, but it still uses a lot of VRAM for it. Is it possible to add an option to stop this from happening and reduce VRAM usage?
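To illustrate where the memory goes, here is a minimal pure-Python sketch of Adam's per-parameter state (an illustration of the math only, not PyTorch's implementation):

```python
def adam_step(param, grad, state, lr=1e-3, betas=(0.0, 0.999), eps=1e-8):
    # state holds the two per-parameter buffers Adam keeps around:
    # m (first moment) and v (second moment), each the size of the parameter.
    m, v = state
    b1, b2 = betas
    m = b1 * m + (1 - b1) * grad           # with b1 == 0 this is just grad
    v = b2 * v + (1 - b2) * grad * grad
    new_param = param - lr * m / (v ** 0.5 + eps)  # bias correction omitted
    return new_param, (m, v)
```

With `betas=(0.0, 0.999)` the stored `m` is always just the latest gradient, so an implementation could skip that buffer and read the gradient directly; that is the saving being requested. PyTorch's `torch.optim.Adam` currently allocates the `exp_avg` buffer unconditionally.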
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @vincentqb @janeyx99 @crcrpar | module: optimizer,triaged,actionable | low | Major |
2,595,728,783 | pytorch | torch._inductor.exc.LoweringException in `torch._export.aot_compile` | ### 🐛 Describe the bug
Install most recent diffusers, torchao and pytorch
```
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
pip install git+https://github.com/huggingface/diffusers
pip install torchao
```
<details>
<summary>Repro</summary>
```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
import torch.utils.benchmark as benchmark
from functools import partial
from torchao.quantization import quantize_, int8_weight_only
def get_example_inputs():
example_inputs = {
"hidden_states": torch.randn(1, 4096, 64, dtype=torch.bfloat16, device="cuda"),
"encoder_hidden_states": torch.randn(1, 512, 4096, dtype=torch.bfloat16, device="cuda"),
"pooled_projections": torch.randn(1, 768, dtype=torch.bfloat16, device="cuda"),
"timestep": torch.tensor([1.0], device="cuda"),
"img_ids": torch.randn(4096, 3, dtype=torch.bfloat16, device="cuda"),
"txt_ids": torch.randn(512, 3, dtype=torch.bfloat16, device="cuda"),
"guidance": None,
"joint_attention_kwargs": None,
"return_dict": False
}
return example_inputs
def benchmark_fn(f, *args, **kwargs):
t0 = benchmark.Timer(
stmt="f(*args, **kwargs)",
globals={"args": args, "kwargs": kwargs, "f": f},
num_threads=torch.get_num_threads(),
)
return f"{(t0.blocked_autorange().mean):.3f}"
def load_model():
model = FluxTransformer2DModel.from_pretrained(
"black-forest-labs/FLUX.1-schnell", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")
return model
def aot_compile(name, model, **sample_kwargs):
path = f"./{name}.so"
print(f"{path=}")
options = {
"aot_inductor.output_path": path,
"max_autotune": True,
"triton.cudagraphs": True,
}
torch._inductor.aoti_compile_and_package(
torch.export.export(model, (), sample_kwargs),
(),
sample_kwargs,
)
# torch._export.aot_compile(
# fn,
# (),
# sample_kwargs,
# options=options,
# disable_constraint_solver=True,
# )
return path
def aot_load(path):
return torch._export.aot_load(path, "cuda")
@torch.no_grad()
def f(model, **kwargs):
return model(**kwargs)
model = load_model()
quantize_(model, int8_weight_only())
inputs1 = get_example_inputs()
from torchao.utils import unwrap_tensor_subclass
unwrap_tensor_subclass(model)
path1 = aot_compile("bs_1_1024", model, **inputs1)
compiled_func_1 = aot_load(path1)
print(f"{compiled_func_1(**inputs1)[0].shape=}")
for _ in range(5):
_ = compiled_func_1(**inputs1)[0]
time = benchmark_fn(f, compiled_func_1, **inputs1)
print(time)
```
</details>
<details>
<summary>Error</summary>
```
Fetching 3 files: 0%| | 0/3 [00:00<?, ?it/s]
Fetching 3 files: 100%|██████████| 3/3 [00:00<00:00, 40072.97it/s]
Traceback (most recent call last):
File "/data/users/jerryzh/transformers/repro.py", line 70, in <module>
path1 = aot_compile("bs_1_1024", model, **inputs1)
File "/data/users/jerryzh/transformers/repro.py", line 44, in aot_compile
torch._inductor.aoti_compile_and_package(
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/__init__.py", line 91, in aoti_compile_and_package
aoti_files = aot_compile(m, args, kwargs, options=inductor_configs) # type: ignore[arg-type]
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/__init__.py", line 204, in aot_compile
return compile_fx_aot(
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1153, in compile_fx_aot
compiled_lib_path = compile_fx(
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1330, in compile_fx
return compile_fx(
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1369, in compile_fx
return compile_fx(
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1593, in compile_fx
return inference_compiler(unlifted_gm, example_inputs_)
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1424, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1495, in _fw_compiler_base
return inner_compile(
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 470, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 664, in _compile_fx_inner
compiled_graph = codegen_and_compile(
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 565, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 875, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/graph.py", line 1969, in compile_to_fn
code, linemap = self.codegen_with_cpp_wrapper()
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/graph.py", line 1840, in codegen_with_cpp_wrapper
return self.codegen()
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/graph.py", line 1854, in codegen
self.scheduler.codegen()
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/scheduler.py", line 3449, in codegen
return self._codegen()
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/scheduler.py", line 3514, in _codegen
self.codegen_extern_call(node)
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/scheduler.py", line 3396, in codegen_extern_call
node.codegen(V.graph.wrapper_code)
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/ir.py", line 5152, in codegen
kernel_name = self.get_kernel_name()
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/ir.py", line 4558, in get_kernel_name
V.graph.wrapper_code.get_c_shim_func_name(self.cpp_kernel_name) # type: ignore[attr-defined]
File "/home/jerryzh/anaconda3/envs/diffuser/lib/python3.9/site-packages/torch/_inductor/codegen/cpp_wrapper_cpu.py", line 1164, in get_c_shim_func_name
if kernel.startswith("aoti_torch_"):
AttributeError: 'NoneType' object has no attribute 'startswith'
```
</details>
### Versions
main
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | oncall: pt2,oncall: export,module: aotinductor | low | Critical |
2,595,731,969 | ui | [bug]: breaking changes for calendar using react-day-picker 9.1.4 | ### Describe the bug
As it is, the calendar component breaks if you update react-day-picker to `9.1.4`.
I discovered this while I initially wanted to try out the new timeZone feature.
More details on what may be affected are provided in the docs:
https://daypicker.dev/upgrading
Styles & custom components are affected.
Example Quote from the migration guide
```
//Broken
IconLeft: ({ ...props }) => <ChevronLeft className="h-4 w-4" />,
IconRight: ({ ...props }) => <ChevronRight className="h-4 w-4" />,
//Fix
Chevron(props) {
if (props.orientation === 'left') {
return <ChevronLeft className="h-4 w-4" />;
}
return <ChevronRight className="h-4 w-4" />;
},
}}
/>
```
Please let me know if you need additional information.
### Affected component/components
calendar
### How to reproduce
Update react-day-picker from `8.10.1` to `9.1.4`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows/Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,595,760,487 | PowerToys | PowerToys Settings not showing window properly | ### Microsoft PowerToys version
0.85.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Welcome / PowerToys Tour window
### Steps to reproduce
I initially installed via winget, which is when I first got this error. I then removed all files and uninstalled everything to try installing from the Microsoft Store, but I'm getting the same issue with every method I've tried.
### ✔️ Expected Behavior
The settings window should show all of its content
### ❌ Actual Behavior
Nothing is shown, just the wheel icon a little at the top; the image below shows the settings window "opened"

### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Critical |
2,595,765,392 | pytorch | `_native_batch_norm_legit`: `aot_eager` backend incorrectly raising an error. | ### 🐛 Describe the bug
`out=` variant of `_native_batch_norm_legit` operation does not work with `aot_eager` backend.
```python
args = (
torch.rand(5, 5, 5),
torch.rand(5),
torch.rand(5),
torch.rand(5),
torch.rand(5),
True,
0.5,
0.6,
)
out = (
torch.empty(5, 5, 5),
torch.empty(5),
torch.empty(5),
)
>>> torch.compile(torch._native_batch_norm_legit, backend="aot_eager")(*args, out=out)
Traceback (most recent call last):
File "examples/bn.py", line 20, in <module>
torch.compile(torch._native_batch_norm_legit, backend="aot_eager")(*args, out=out)
File "torch/_dynamo/eval_frame.py", line 487, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1350, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 1141, in __call__
result = self._inner_convert(
File "torch/_dynamo/convert_frame.py", line 543, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 963, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 694, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 727, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 228, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 656, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2794, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 2985, in RETURN_VALUE
self._return(inst)
File "torch/_dynamo/symbolic_convert.py", line 2970, in _return
self.output.compile_subgraph(
File "torch/_dynamo/output_graph.py", line 1143, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "torch/_dynamo/output_graph.py", line 1371, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "torch/_dynamo/output_graph.py", line 1418, in call_user_compiler
return self._call_user_compiler(gm)
File "torch/_dynamo/output_graph.py", line 1467, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "torch/_dynamo/output_graph.py", line 1448, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "torch/__init__.py", line 2309, in __call__
return self.compiler_fn(model_, inputs_, **self.kwargs)
File "torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "torch/_functorch/aot_autograd.py", line 1087, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "torch/_functorch/aot_autograd.py", line 1063, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "torch/_functorch/aot_autograd.py", line 524, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "torch/_functorch/aot_autograd.py", line 762, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 145, in aot_dispatch_base
fw_module, updated_flat_args, maybe_subclass_meta = aot_dispatch_base_graph( # type: ignore[misc]
File "torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 135, in aot_dispatch_base_graph
fw_module = _create_graph(
File "torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 54, in _create_graph
fx_g = make_fx(
File "torch/fx/experimental/proxy_tensor.py", line 2159, in wrapped
return make_fx_tracer.trace(f, *args)
File "torch/fx/experimental/proxy_tensor.py", line 2097, in trace
return self._trace_inner(f, *args)
File "torch/fx/experimental/proxy_tensor.py", line 2068, in _trace_inner
t = dispatch_trace(
File "torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "torch/_dynamo/eval_frame.py", line 654, in _fn
return fn(*args, **kwargs)
File "torch/fx/experimental/proxy_tensor.py", line 1133, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "torch/_dynamo/eval_frame.py", line 654, in _fn
return fn(*args, **kwargs)
File "torch/fx/_symbolic_trace.py", line 823, in trace
(self.create_arg(fn(*args)),),
File "torch/fx/experimental/proxy_tensor.py", line 1188, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "torch/_functorch/_aot_autograd/traced_function_transforms.py", line 693, in inner_fn
outs = fn(*args)
File "torch/_functorch/_aot_autograd/traced_function_transforms.py", line 614, in _functionalized_f_helper
inpt_old.copy_(inpt_new)
File "torch/fx/experimental/proxy_tensor.py", line 1236, in __torch_function__
return func(*args, **kwargs)
File "torch/_subclasses/functional_tensor.py", line 541, in __torch_dispatch__
outs_unwrapped = func._op_dk(
File "torch/_meta_registrations.py", line 399, in meta_copy_
aten.expand_copy.default(intermediate, self.size())
File "torch/_ops.py", line 723, in __call__
return self._op(*args, **kwargs)
File "torch/_refs/__init__.py", line 2227, in _fn
result = fn(*args, out=out, **kwargs)
File "torch/_prims_common/wrappers.py", line 289, in _fn
result = fn(*args, **kwargs)
File "torch/_ops.py", line 1123, in __call__
return self._op(*args, **(kwargs or {}))
File "torch/_refs/__init__.py", line 2983, in expand
torch._check(
File "torch/__init__.py", line 1594, in _check
_check_with(RuntimeError, cond, message)
File "torch/__init__.py", line 1576, in _check_with
raise error_type(message_evaluated)
torch._dynamo.exc.BackendCompilerFailed: backend='aot_eager' raised:
RuntimeError: expand: the requested shape has too few dimensions!
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
Note: since `native_batch_norm` may call `_native_batch_norm_legit`, #137807 may be related.
### Versions
PyTorch version: 2.5.0a0+git7128504
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
cc @ezyang @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: pt2-dispatcher | low | Critical |
2,595,811,547 | rust | LUB coercions can't combine unsafe fn item with safe fn item or pointer | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code: [playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=e3e838d025de542f3773ff98ab7e4488)
```rust
fn safe_fn() {}
unsafe fn unsafe_fn() {}
fn lub() {
// safe fn item + unsafe fn item
// doesn't work
let _ = if true { safe_fn } else { unsafe_fn };
// safe fn ptr + unsafe fn item
// doesn't work
let _ = if true { safe_fn as fn() } else { unsafe_fn };
// safe fn item + unsafe fn ptr
// works
let _ = if true { safe_fn } else { unsafe_fn as unsafe fn() };
// closure + unsafe fn item
// works
let _ = if true { || {} } else { unsafe_fn };
}
```
I expected to see this happen: I expected the safe fn item and unsafe fn item to combine to an unsafe fn ptr, especially since the closure and unsafe fn item do combine to an unsafe fn ptr.
Instead, this happened:
```
error[E0308]: `if` and `else` have incompatible types
--> src/lib.rs:7:40
|
7 | let _ = if true { safe_fn } else { unsafe_fn };
| ------- ^^^^^^^^^ expected safe fn, found unsafe fn
| |
| expected because of this
|
= note: expected fn item `fn() {safe_fn}`
found fn item `unsafe fn() {unsafe_fn}`
error[E0308]: `if` and `else` have incompatible types
--> src/lib.rs:11:48
|
11 | let _ = if true { safe_fn as fn() } else { unsafe_fn };
| --------------- ^^^^^^^^^ expected safe fn, found unsafe fn
| |
| expected because of this
|
= note: expected fn pointer `fn()`
found fn item `unsafe fn() {unsafe_fn}`
= note: unsafe functions cannot be coerced into safe function pointers
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
playground nightly
```
Build using the Nightly version: 1.84.0-nightly
(2024-10-16 798fb83f7d24e31b16ac)
```
@rustbot label A-coercions T-types | C-bug,A-coercions,T-types | low | Critical |
2,595,835,669 | vscode | Have a setting to always delete untitled workspaces on close |
Does this issue occur when all extensions are disabled?: Yes
Version: 1.94.2 (user setup)
Commit: 384ff7382de624fb94dbaf6da11977bba1ecd427
Date: 2024-10-09T16:08:44.566Z
Electron: 30.5.1
ElectronBuildId: 10262041
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Windows_NT x64 10.0.19045
Steps to Reproduce:
1. Close all VSCode windows
2. Open folder in VSCode via Windows context menu ("Open with Code")
3. Add another folder to the workspace to create an untitled workspace
4. Close VSCode
5. Open the same folder in VSCode via Windows context menu ("Open with Code")
You will now have two windows open, regardless of whether the "Confirm Save Untitled Workspace" setting is enabled.
With those two windows open, what you do next affects how VSCode functions:
Scenario A> If you close the window with the untitled workspace FIRST, it will ask you if you want to save the workspace. If you check the box to "Always discard untitled workspaces without asking" then you will not get this window anymore, but steps 1 through 5 still produce the same result.
Scenario B> If you close the single folder window first, and then close the untitled workspace window, you will not get a popup asking you if you want to save the workspace. Opening another folder in VSCode will open two windows again.
Result:
If the window with the untitled workspace is the LAST window to close, it will never ask you if you want to save the untitled workspace and will ALWAYS save it no matter what the setting is.
If the window with the untitled workspace is NOT the last window to close, it will ask you if you want to discard the workspace if the setting is not enabled to always discard untitled workspaces, or it will properly discard the workspace if the setting is enabled.
This is a bug. The functionality of VSCode changes depending on whether the untitled workspace window is closed last or not and it ignores the setting unless there are other windows open. | feature-request,workbench-multiroot | low | Critical |
2,595,840,518 | pytorch | `out=` dynamo support using `eager` (on CPU) backend. | List of operations whose `out=` variants are not consistent with eager (i.e., they run with eager but fail with dynamo). I have grouped them according to the error each of them raises.
_Note: ~I'm only using CPU, here. Using CUDA seems to generate a different list of operations.~ I have confirmed both CPU and CUDA result in the same set of operations._
**Output Not Contiguous**
- [ ] `cat`
- [ ] `cholesky`
- [ ] `cholesky_inverse`
- [ ] `cholesky_solve`
- [ ] `fft_fft2`
- [ ] `fft_fft`
- [ ] `fft_fftn`
- [ ] `fft_hfft2`
- [ ] `fft_hfft`
- [ ] `fft_hfftn`
- [ ] `fft_ifft2`
- [ ] `fft_ifft`
- [ ] `fft_ifftn`
- [ ] `fft_ihfft2`
- [ ] `fft_ihfft`
- [ ] `fft_ihfftn`
- [ ] `fft_irfft2`
- [ ] `fft_irfft`
- [ ] `fft_irfftn`
- [ ] `fft_rfft2`
- [ ] `fft_rfft`
- [ ] `fft_rfftn`
- [ ] `linalg_cholesky`
- [ ] `linalg_householder_product`
- [ ] `linalg_inv`
- [ ] `linalg_ldl_solve`
- [ ] `linalg_lu_solve`
- [ ] `linalg_matrix_power`
- [ ] `linalg_solve`
- [ ] `linalg_solve_triangular`
- [ ] `linalg_tensorinv`
- [ ] `lu_solve`
- [ ] `ormqr`
<details>
<summary>Error Example</summary>
```python
Traceback (most recent call last):
File "examples/ops.py", line 134, in run
f(input, args, kwargs, expected)
File "torch/_dynamo/eval_frame.py", line 487, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1350, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 543, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 963, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 694, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 727, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 228, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 656, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2794, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/user_defined.py", line 938, in call_function
return self.call_method(tx, "__call__", args, kwargs)
File "torch/_dynamo/variables/user_defined.py", line 798, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "torch/_dynamo/variables/functions.py", line 401, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 340, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 112, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 849, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3009, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3137, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/torch.py", line 1016, in call_function
unimplemented(
File "torch/_dynamo/exc.py", line 304, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: out= op was called where output tensor was non-contiguous
from user code:
File "examples/ops.py", line 129, in op_out
return op(input, *args, **kwargs, out=out)
File "torch/testing/_internal/opinfo/core.py", line 1169, in __call__
return self.op(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/testing/_internal/common_device_type.py", line 1140, in test_wrapper
return test(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1371, in only_fn
return fn(slf, *args, **kwargs)
File "examples/ops.py", line 142, in test_dynamo_out
raise RuntimeError(f"eager didn't fail, but dynamo did.") from dynamo_err
RuntimeError: eager didn't fail, but dynamo did.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/local/lib/python3.10/unittest/case.py", line 591, in run
self._callTestMethod(testMethod)
File "/usr/local/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
method()
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 448, in instantiated_test
result = test(self, **param_kwargs)
File "torch/testing/_internal/common_utils.py", line 1530, in wrapper
fn(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 8: SampleInput(input=TensorList[Tensor[size=(2, 2, 2, 2), device="cpu", dtype=torch.float32, contiguous=False]], args=(1), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=8 python ops.py TestCommonCPU.test_dynamo_out_cat_cpu_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
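For context, "contiguous" here means the `out=` tensor's strides match the canonical row-major layout for its shape. A rough pure-Python illustration of that idea (a simplified sketch, not PyTorch's actual check, which is more permissive about size-0/size-1 dimensions):

```python
def contiguous_strides(shape):
    # Canonical row-major (C-contiguous) strides for a shape.
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

def is_contiguous(shape, strides):
    # Simplified: a tensor is C-contiguous when its strides equal
    # the canonical row-major strides for its shape.
    return list(strides) == contiguous_strides(shape)

# A 2x3 tensor transposed into shape (3, 2) keeps strides (1, 3),
# which no longer match the row-major strides (2, 1):
print(is_contiguous((3, 2), (1, 3)))  # False
print(is_contiguous((2, 3), (3, 1)))  # True
```

This matches the failing sample above, whose `out=` tensor is reported as `contiguous=False`.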
**`DynamicOutputShapeException`**
- [ ] `linalg_lstsq`
- [ ] `masked_select`
- [ ] `nonzero`
<details>
<summary>Error Example</summary>
```python
Traceback (most recent call last):
File "torch/_dynamo/utils.py", line 2184, in run_node
return node.target(*args, **kwargs)
File "torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1351, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 2017, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
File "torch/_subclasses/fake_impls.py", line 268, in dyn_shape
raise DynamicOutputShapeException(func)
torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.linalg_lstsq.out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/_dynamo/utils.py", line 2069, in get_fake_value
ret_val = wrap_fake_exception(
File "torch/_dynamo/utils.py", line 1626, in wrap_fake_exception
return fn()
File "torch/_dynamo/utils.py", line 2070, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "torch/_dynamo/utils.py", line 2202, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "torch/_dynamo/utils.py", line 2184, in run_node
return node.target(*args, **kwargs)
File "torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1351, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 2017, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
File "torch/_subclasses/fake_impls.py", line 268, in dyn_shape
raise DynamicOutputShapeException(func)
RuntimeError: Failed running call_function <built-in function linalg_lstsq>(*(FakeTensor(..., size=(2, 3)), FakeTensor(..., size=(2, 3))), **{'driver': 'gels', 'out': (FakeTensor(..., size=(3, 3)), FakeTensor(..., size=(0,)), FakeTensor(..., size=(0,), dtype=torch.int64), FakeTensor(..., size=(0,)))}):
aten.linalg_lstsq.out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples/ops.py", line 134, in run
f(input, args, kwargs, expected)
File "torch/_dynamo/eval_frame.py", line 487, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1350, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 543, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 963, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 694, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 727, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 228, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 656, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2794, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/user_defined.py", line 938, in call_function
return self.call_method(tx, "__call__", args, kwargs)
File "torch/_dynamo/variables/user_defined.py", line 798, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "torch/_dynamo/variables/functions.py", line 401, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 340, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 112, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 849, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3009, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3137, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
File "torch/_dynamo/variables/builder.py", line 2057, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "torch/_dynamo/variables/builder.py", line 2144, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "torch/_dynamo/utils.py", line 2095, in get_fake_value
unimplemented(
File "torch/_dynamo/exc.py", line 304, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: dynamic shape operator: aten.linalg_lstsq.out; Operator does not have a meta kernel that supports dynamic output shapes, please report an issue to PyTorch
from user code:
File "examples/ops.py", line 129, in op_out
return op(input, *args, **kwargs, out=out)
File "torch/testing/_internal/opinfo/core.py", line 1169, in __call__
return self.op(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/testing/_internal/common_device_type.py", line 1140, in test_wrapper
return test(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1371, in only_fn
return fn(slf, *args, **kwargs)
File "examples/ops.py", line 142, in test_dynamo_out
raise RuntimeError(f"eager didn't fail, but dynamo did.") from dynamo_err
RuntimeError: eager didn't fail, but dynamo did.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/local/lib/python3.10/unittest/case.py", line 591, in run
self._callTestMethod(testMethod)
File "/usr/local/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
method()
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 448, in instantiated_test
result = test(self, **param_kwargs)
File "torch/testing/_internal/common_device_type.py", line 1210, in dep_fn
return fn(slf, *args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1210, in dep_fn
return fn(slf, *args, **kwargs)
File "torch/testing/_internal/common_utils.py", line 1530, in wrapper
fn(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(2, 3), device="cpu", dtype=torch.float32], args=TensorList[Tensor[size=(2, 3), device="cpu", dtype=torch.float32]], kwargs={'driver': "'gels'"}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python ops.py TestCommonCPU.test_dynamo_out_linalg_lstsq_cpu_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
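These ops share the property that their output shape depends on tensor values rather than just input shapes, which is exactly what a fake-tensor tracer cannot model. A toy illustration of why `nonzero` falls in this bucket:

```python
def nonzero_indices(values):
    # The result length equals the count of nonzero elements --
    # unknowable from the input's shape and dtype alone.
    return [i for i, v in enumerate(values) if v != 0]

# Same input shape, different output shapes:
print(len(nonzero_indices([0, 1, 0, 2])))  # 2
print(len(nonzero_indices([3, 4, 5, 6])))  # 4
```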
**`DataDependentOutputException`**
- [ ] `nanquantile`
- [ ] `quantile`
<details>
<summary>Error Example</summary>
```python
Traceback (most recent call last):
File "torch/_dynamo/utils.py", line 2184, in run_node
return node.target(*args, **kwargs)
File "torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1351, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 2017, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
File "torch/_subclasses/fake_impls.py", line 517, in data_dep
raise DataDependentOutputException(func)
torch._subclasses.fake_tensor.DataDependentOutputException: aten.equal.default
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/_dynamo/utils.py", line 2069, in get_fake_value
ret_val = wrap_fake_exception(
File "torch/_dynamo/utils.py", line 1626, in wrap_fake_exception
return fn()
File "torch/_dynamo/utils.py", line 2070, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "torch/_dynamo/utils.py", line 2202, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "torch/_dynamo/utils.py", line 2184, in run_node
return node.target(*args, **kwargs)
File "torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1351, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 2017, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
File "torch/_subclasses/fake_impls.py", line 517, in data_dep
raise DataDependentOutputException(func)
RuntimeError: Failed running call_function <built-in method quantile of type object at 0x7f4c7bced0a0>(*(FakeTensor(..., size=()), 0.5), **{'out': FakeTensor(..., size=())}):
aten.equal.default
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples/ops.py", line 134, in run
f(input, args, kwargs, expected)
File "torch/_dynamo/eval_frame.py", line 487, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1350, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 543, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 963, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 694, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 727, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 228, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 656, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2794, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/user_defined.py", line 938, in call_function
return self.call_method(tx, "__call__", args, kwargs)
File "torch/_dynamo/variables/user_defined.py", line 798, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "torch/_dynamo/variables/functions.py", line 401, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 340, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 112, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 849, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3009, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3137, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
File "torch/_dynamo/variables/builder.py", line 2057, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "torch/_dynamo/variables/builder.py", line 2144, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "torch/_dynamo/utils.py", line 2082, in get_fake_value
unimplemented(
File "torch/_dynamo/exc.py", line 304, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: data dependent operator: aten.equal.default; to enable, set torch._dynamo.config.capture_scalar_outputs = True
from user code:
File "examples/ops.py", line 129, in op_out
return op(input, *args, **kwargs, out=out)
File "torch/testing/_internal/opinfo/core.py", line 1169, in __call__
return self.op(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/testing/_internal/common_device_type.py", line 1140, in test_wrapper
return test(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1371, in only_fn
return fn(slf, *args, **kwargs)
File "examples/ops.py", line 142, in test_dynamo_out
raise RuntimeError(f"eager didn't fail, but dynamo did.") from dynamo_err
RuntimeError: eager didn't fail, but dynamo did.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/local/lib/python3.10/unittest/case.py", line 591, in run
self._callTestMethod(testMethod)
File "/usr/local/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
method()
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 448, in instantiated_test
result = test(self, **param_kwargs)
File "torch/testing/_internal/common_utils.py", line 1530, in wrapper
fn(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cpu", dtype=torch.float32], args=(0.5), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python ops.py TestCommonCPU.test_dynamo_out_quantile_cpu_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
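Per the traceback above, the trigger here is an internal call to `aten.equal.default`, which returns a plain Python bool computed from tensor values; a tracer that only knows shapes cannot evaluate it. In pure-Python terms (an illustrative sketch, not the aten implementation):

```python
def tensors_equal(a, b):
    # Returns a concrete bool that depends on the *values*,
    # so shape-only tracing cannot resolve it.
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))

print(tensors_equal([1.0, 2.0], [1.0, 2.0]))  # True
print(tensors_equal([1.0, 2.0], [1.0, 3.0]))  # False
```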
**Other Operations**
- [ ] #137780
<details>
<summary>Error Traceback</summary>
```python
Traceback (most recent call last):
File "examples/ops.py", line 134, in run
f(input, args, kwargs, expected)
File "torch/_dynamo/eval_frame.py", line 487, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1350, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 543, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 963, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 694, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 727, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 228, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 656, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2794, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/user_defined.py", line 938, in call_function
return self.call_method(tx, "__call__", args, kwargs)
File "torch/_dynamo/variables/user_defined.py", line 798, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "torch/_dynamo/variables/functions.py", line 401, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 340, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 112, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 849, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3009, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3137, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
File "torch/_dynamo/variables/builder.py", line 2057, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "torch/_dynamo/variables/builder.py", line 2144, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "torch/_dynamo/utils.py", line 2134, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "torch/_dynamo/utils.py", line 2069, in get_fake_value
ret_val = wrap_fake_exception(
File "torch/_dynamo/utils.py", line 1626, in wrap_fake_exception
return fn()
File "torch/_dynamo/utils.py", line 2070, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "torch/_dynamo/utils.py", line 2202, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "torch/_dynamo/utils.py", line 2184, in run_node
return node.target(*args, **kwargs)
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method nanmean of type object at 0x7f7766ced0a0>(*(FakeTensor(..., size=()),), **{'out': FakeTensor(..., size=())}):
Cannot access data pointer of Tensor (e.g. FakeTensor, FunctionalTensor). If you're using torch.compile/export/fx, it is likely that we are erroneously tracing into a custom kernel. To fix this, please wrap the custom kernel into an opaque custom op. Please see the following for details: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
from user code:
File "examples/ops.py", line 129, in op_out
return op(input, *args, **kwargs, out=out)
File "torch/testing/_internal/opinfo/core.py", line 1169, in __call__
return self.op(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/testing/_internal/common_device_type.py", line 1140, in test_wrapper
return test(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1371, in only_fn
return fn(slf, *args, **kwargs)
File "examples/ops.py", line 142, in test_dynamo_out
raise RuntimeError(f"eager didn't fail, but dynamo did.") from dynamo_err
RuntimeError: eager didn't fail, but dynamo did.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/local/lib/python3.10/unittest/case.py", line 591, in run
self._callTestMethod(testMethod)
File "/usr/local/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
method()
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 448, in instantiated_test
result = test(self, **param_kwargs)
File "torch/testing/_internal/common_utils.py", line 1530, in wrapper
fn(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cpu", dtype=torch.float32], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python ops.py TestCommonCPU.test_dynamo_out_nanmean_cpu_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
- [ ] `triangular_solve`
<details>
<summary>Error Traceback</summary>
```python
Traceback (most recent call last):
File "torch/_dynamo/utils.py", line 2184, in run_node
return node.target(*args, **kwargs)
File "torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1342, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 2047, in _dispatch_impl
r = func(*args, **kwargs)
File "torch/_ops.py", line 723, in __call__
return self._op(*args, **kwargs)
File "torch/_decomp/__init__.py", line 101, in _fn
return f(*args, **kwargs, out=None if is_none else out_kwargs)
File "torch/_prims_common/wrappers.py", line 289, in _fn
result = fn(*args, **kwargs)
TypeError: triangular_solve_meta() got an unexpected keyword argument 'X'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/_dynamo/utils.py", line 2069, in get_fake_value
ret_val = wrap_fake_exception(
File "torch/_dynamo/utils.py", line 1626, in wrap_fake_exception
return fn()
File "torch/_dynamo/utils.py", line 2070, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "torch/_dynamo/utils.py", line 2202, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "torch/_dynamo/utils.py", line 2184, in run_node
return node.target(*args, **kwargs)
File "torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1342, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 2047, in _dispatch_impl
r = func(*args, **kwargs)
File "torch/_ops.py", line 723, in __call__
return self._op(*args, **kwargs)
File "torch/_decomp/__init__.py", line 101, in _fn
return f(*args, **kwargs, out=None if is_none else out_kwargs)
File "torch/_prims_common/wrappers.py", line 289, in _fn
result = fn(*args, **kwargs)
RuntimeError: Failed running call_function <built-in method triangular_solve of type object at 0x7f010dd740e0>(*(FakeTensor(..., size=(5, 1)), FakeTensor(..., size=(5, 5))), **{'out': (FakeTensor(..., size=(5, 1)), FakeTensor(..., size=(5, 5)))}):
triangular_solve_meta() got an unexpected keyword argument 'X'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples/ops.py", line 134, in run
f(input, args, kwargs, expected)
File "torch/_dynamo/eval_frame.py", line 487, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1350, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 543, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 963, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 694, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 727, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 228, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 656, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2794, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/user_defined.py", line 938, in call_function
return self.call_method(tx, "__call__", args, kwargs)
File "torch/_dynamo/variables/user_defined.py", line 798, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "torch/_dynamo/variables/functions.py", line 401, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 340, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 112, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 849, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3009, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3137, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
File "torch/_dynamo/variables/builder.py", line 2057, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "torch/_dynamo/variables/builder.py", line 2144, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "torch/_dynamo/utils.py", line 2132, in get_fake_value
unimplemented(f"TypeError {node.target}: {cause}")
File "torch/_dynamo/exc.py", line 304, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: TypeError <built-in method triangular_solve of type object at 0x7f010dd740e0>: triangular_solve_meta() got an unexpected keyword argument 'X'
from user code:
File "examples/ops.py", line 129, in op_out
return op(input, *args, **kwargs, out=out)
File "torch/testing/_internal/opinfo/core.py", line 1169, in __call__
return self.op(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/testing/_internal/common_device_type.py", line 1140, in test_wrapper
return test(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1371, in only_fn
return fn(slf, *args, **kwargs)
File "examples/ops.py", line 142, in test_dynamo_out
raise RuntimeError(f"eager didn't fail, but dynamo did.") from dynamo_err
RuntimeError: eager didn't fail, but dynamo did.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/lib/python3.9/unittest/case.py", line 59, in testPartExecutor
yield
File "/lib/python3.9/unittest/case.py", line 592, in run
self._callTestMethod(testMethod)
File "/lib/python3.9/unittest/case.py", line 550, in _callTestMethod
method()
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 448, in instantiated_test
result = test(self, **param_kwargs)
File "torch/testing/_internal/common_device_type.py", line 1210, in dep_fn
return fn(slf, *args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1210, in dep_fn
return fn(slf, *args, **kwargs)
File "torch/testing/_internal/common_utils.py", line 1530, in wrapper
fn(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(5, 1), device="cpu", dtype=torch.float32], args=TensorList[Tensor[size=(5, 5), device="cpu", dtype=torch.float32]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python ops.py TestCommonCPU.test_dynamo_out_triangular_solve_cpu_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
## Test Setup
In order to reproduce these results, besides the actual test below, we needed to make `wrapper_set_seed` function a no-op:
```
--- a/torch/testing/_internal/common_methods_invocations.py
+++ b/torch/testing/_internal/common_methods_invocations.py
@@ -40,8 +40,9 @@ from torch.testing._internal.common_utils import (
GRADCHECK_NONDET_TOL, slowTest, TEST_WITH_SLOW,
TEST_WITH_TORCHINDUCTOR
)
-from torch.testing._utils import wrapper_set_seed
+# from torch.testing._utils import wrapper_set_seed
+import torch
import torch._refs as refs # noqa: F401
import torch._refs.nn.functional
import torch._refs.special
@@ -50,6 +51,9 @@ import torch._prims as prims # noqa: F401
from torch.utils import _pytree as pytree
+def wrapper_set_seed(op, *args, **kwargs):
+ return op(*args, **kwargs)
+
from packaging import version
from torch.testing._internal.opinfo.core import ( # noqa: F401
--
2.47.0
```
<details>
<summary>OpInfo Test</summary>
```python
import torch
import torch.utils._pytree as pytree

from torch.testing._internal.common_methods_invocations import op_db
from torch.testing._internal.common_device_type import ops, instantiate_device_type_tests, OpDTypes, onlyCUDA, onlyCPU
from torch.testing._internal.common_utils import TestCase, run_tests


class TestCommon(TestCase):
    @ops([op for op in op_db if op.supports_out], allowed_dtypes=(torch.float32,))
    def test_dynamo_out(self, device, dtype, op):
        torch._dynamo.config.capture_dynamic_output_shape_ops = True
        torch._dynamo.config.capture_scalar_outputs = True
        fullgraph = True
        backend = "eager"

        samples = list(op.sample_inputs(device, dtype))

        for i, sample in enumerate(samples):
            torch._dynamo.reset()
            input, args, kwargs = (sample.input, sample.args, sample.kwargs)

            # Run the functional version of the operation, using eager.
            try:
                expected = op(input, *args, **kwargs)
                if isinstance(expected, tuple):
                    expected = tuple(expected)
            except:
                # If that doesn't work out, go to the next sample.
                continue

            def op_out(input, args, kwargs, expected):
                # Create the output inside the compiled function, since resizing
                # graph inputs are not allowed.
                out = pytree.tree_map_only(torch.Tensor, lambda t: torch.empty_like(t), expected)
                return op(input, *args, **kwargs, out=out)

            def run(f):
                # Try running the operation, and return the raised error, if any.
                try:
                    f(input, args, kwargs, expected)
                except Exception as e:
                    return e

            eager_err = run(op_out)
            dynamo_err = run(torch.compile(op_out, backend=backend, fullgraph=fullgraph))

            if eager_err is None and dynamo_err is not None:
                raise RuntimeError("eager didn't fail, but dynamo did.") from dynamo_err
            elif eager_err is not None and dynamo_err is None:
                raise RuntimeError("eager failed, but dynamo didn't.") from eager_err


instantiate_device_type_tests(TestCommon, globals())

if __name__ == "__main__":
    run_tests()
```
</details>
### Versions
PyTorch version: 2.5.0a0+git7128504
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: dynamo | low | Critical |
2,595,850,012 | neovim | windows: `nvim.exe` exits with code -1073741819 immediately after startup | ### Problem
When I run `nvim`, the cmd window is cleared and only a new prompt appears at the end of the screen. Pressing Enter a few more times causes cmd to crash (with the dev version, cmd crashes as soon as I press Enter or run `echo %ERRORLEVEL%` after running nvim).
Both 0.10.2 and the dev version have this issue.
In the picture, the `echo %ERRORLEVEL%` command run afterwards is also suppressed by this issue; only its result, `-1073741819`, is printed.

### Steps to reproduce
1. Open a brand new cmd window.
2. Run `nvim.exe`.
### Expected behavior
nvim starts to run
### Nvim version (nvim -v)
0.10.2 and dev
### Vim (not Nvim) behaves the same?
no, vim 8.2.227
### Operating system/version
win10 1803
### Terminal name/version
cmd
### $TERM environment variable
N/A
### Installation
zip | bug,platform:windows,bug-crash,startup | low | Critical |
2,595,855,009 | deno | Feat: Configure permission profiles to shorten task commands, like `deno run --permission-profile main-script main.ts` | Task commands for a simple application look like this:
```json
{
"tasks": {
"start": "deno run --allow-env --allow-read --allow-run npm:concurrently 'deno run server' 'deno run bundle'",
"bundle": "deno run --allow-env --allow-read --allow-run npm:esbuild src/main.ts --watch --bundle --outfile=public/main.bundle.js",
"serve": "deno serve --allow-env --allow-sys --allow-net --allow-read --watch src/server.ts"
}
}
```
As I'm sure others have noted, this is quite verbose and it's difficult to read or edit the end of the command, which is what I usually care about.
I'm proposing a new configuration section and a new CLI flag for `deno run` and `deno serve`, `--permission-profile` or `-P`, to define permission profiles for improved readability and reduced repetition:
```json
{
"tasks": {
"start": "deno run -P tool npm:concurrently 'deno run server' 'deno run bundle'",
"bundle": "deno run -P tool npm:esbuild src/main.ts --watch --bundle --outfile=public/main.bundle.js",
"serve": "deno serve -P server --watch src/server.ts"
},
"permissionProfiles": {
"tool": "--allow-env --allow-read --allow-run",
"server": "--allow-env --allow-sys --allow-net --allow-read"
}
}
``` | suggestion,task runner | low | Major |
2,595,860,299 | go | cmd/nm: running nm on object from different version can crash | ### Go version
tip
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/iant/.cache/go-build'
GOENV='/home/iant/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/iant/gopath/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/iant/gopath'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org'
GOROOT='/home/iant/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/iant/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='devel go1.24-bae2e968e2 Mon Sep 30 22:04:40 2024 +0000'
GODEBUG=''
GOTELEMETRY='on'
GOTELEMETRYDIR='/home/iant/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/iant/go/src/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build1550637861=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Where go is current go tip (version go1.24-bae2e968e2) and ~/go1.23/bin/go is go 1.23 (version 1.23.2):
```
> cat foo.go
package p

func F() any {
	return struct{ p *error }{nil}
}
> go build -o foo.a foo.go
> ~/go1.23/bin/go tool nm foo.a
panic: runtime error: index out of range [255] with length 255
goroutine 1 [running]:
cmd/internal/goobj.BuiltinName(...)
cmd/internal/goobj/builtin.go:22
cmd/internal/objfile.(*goobjFile).symbols.func2({0x5502a0?, 0x0?})
cmd/internal/objfile/goobj.go:143 +0x35a
cmd/internal/objfile.(*goobjFile).symbols(0xc0000aa380?)
cmd/internal/objfile/goobj.go:199 +0xdb0
cmd/internal/objfile.(*Entry).Symbols(0x7ffdf24635a2?)
cmd/internal/objfile/objfile.go:130 +0x1c
main.nm({0x7ffdf24635a2, 0x5})
cmd/nm/nm.go:120 +0x1d1
main.main()
cmd/nm/nm.go:92 +0x2b1
```
### What did you see happen?
Running go1.23 tool nm on an object created by go tip crashed.
### What did you expect to see?
I expected to see an error, or ordinary nm output. I did not expect to see a crash.
Since making nm forward compatible seems infeasible, we should change it so that if run on a newer object version it reports an error and exits, rather than crashing.
The object file format already has a magic number that supports this (cmd/internal/goobj.Magic). That magic number is normally changed when the object file format changes. In this case what happened is that the list of builtin names changed. That is part of the object file format, because the index into the builtin name slice is stored in the file format (when the symbol `PkgIdx` is `goobj.PkgIdxBuiltin`, the symbol `SymIdx` is an index into `goobj.builtins`).
Perhaps we should make cmd/internal/goobj/mkbuiltin.go automatically update the magic number. Or at least print a message saying that the magic number should be updated. Or perhaps we need some different mechanism, like a hash of the builtin names.
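To make the failure mode concrete, the panic comes from indexing the builtin table with an index read straight out of the object file. A minimal sketch of a bounds-checked lookup — illustrative only; `builtinName` and the table contents here are hypothetical stand-ins, not the real `cmd/internal/goobj` code — could look like:

```go
package main

import "fmt"

// Hypothetical stand-in for goobj's generated builtin table; the real
// table is produced by mkbuiltin.go and can grow between releases.
var builtins = []string{"runtime.newobject", "runtime.mallocgc", "runtime.panicindex"}

// builtinName looks up a builtin symbol by the index stored in the
// object file. Instead of indexing the slice directly (which panics
// when an object from a newer toolchain carries a larger index, as in
// the crash above), it returns an error the caller can report.
func builtinName(idx int) (string, error) {
	if idx < 0 || idx >= len(builtins) {
		return "", fmt.Errorf("unknown builtin index %d: object file may be from a newer Go version", idx)
	}
	return builtins[idx], nil
}

func main() {
	name, err := builtinName(1)
	fmt.Println(name, err)

	// Index 255 mirrors the out-of-range index from the panic message.
	_, err = builtinName(255)
	fmt.Println(err)
}
```

With a check like this, the tool could print the error and exit non-zero instead of crashing, matching the error-and-exit behavior proposed above.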
CC @golang/compiler | NeedsFix,compiler/runtime | low | Critical |
2,595,874,054 | pytorch | Torch 2.5.0 vs 2.4.1: torch nested in BetterTransformer fastpath implementation | ### 🐛 Describe the bug
Thanks for the new torch release!
I am using torch nested tensors in the BetterTransformer implementation. https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
Here is the exact code where torch compile breaks:
https://github.com/huggingface/optimum/blob/1e5014e70f17e0437c4b0a7f4e65e170688d8ab0/optimum/bettertransformer/models/encoder_models.py#L203 and
https://github.com/huggingface/optimum/blob/1e5014e70f17e0437c4b0a7f4e65e170688d8ab0/optimum/bettertransformer/models/encoder_models.py#L146
```
/home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:725: UserWarning: Graph break due to unsupported builtin torch._VariableFunctionsClass._nested_tensor_from_mask. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
torch._dynamo.utils.warn_once(msg)
/home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/optimum/bettertransformer/models/encoder_models.py:301: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at ../aten/src/ATen/NestedTensorImpl.cpp:178.)
hidden_states = torch._nested_tensor_from_mask(hidden_states, ~attention_mask)
terminate called after throwing an instance of 'c10::Error'
what(): Internal error: NestedTensorImpl doesn't support strides. Please file an issue.
Exception raised from sym_strides_custom at ../aten/src/ATen/NestedTensorImpl.cpp:285 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fa5cf6b9446 in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x68 (0x7fa5cf6637ad in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x1887313 (0x7fa5babf4313 in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: torch::dynamo::TensorCheck::check(torch::dynamo::LocalState const&, at::Tensor const&) + 0x434 (0x7fa5ce897994 in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x8ebc7d (0x7fa5ce88bc7d in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x8ec46e (0x7fa5ce88c46e in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x8e8430 (0x7fa5ce888430 in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x8e6cf6 (0x7fa5ce886cf6 in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x16b3ce (0x55dd1371c3ce in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #9: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #10: <unknown function> + 0x16b3ce (0x55dd1371c3ce in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #11: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #12: _PyObject_FastCallDictTstate + 0xc4 (0x55dd13703474 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #13: _PyObject_Call_Prepend + 0xc1 (0x55dd13719321 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #14: <unknown function> + 0x2826d0 (0x55dd138336d0 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #15: _PyObject_MakeTpCall + 0x25b (0x55dd137042db in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #16: _PyEval_EvalFrameDefault + 0x64d6 (0x55dd136fc6e6 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #17: <unknown function> + 0x16b281 (0x55dd1371c281 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #18: PyObject_Call + 0x122 (0x55dd1371cf22 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #19: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #20: <unknown function> + 0x16b281 (0x55dd1371c281 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #21: PyObject_Call + 0x122 (0x55dd1371cf22 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #22: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #23: _PyFunction_Vectorcall + 0x7c (0x55dd1370e42c in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #24: _PyObject_FastCallDictTstate + 0x16d (0x55dd1370351d in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #25: _PyObject_Call_Prepend + 0x5c (0x55dd137192bc in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #26: <unknown function> + 0x2826d0 (0x55dd138336d0 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #27: _PyObject_MakeTpCall + 0x25b (0x55dd137042db in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #28: _PyEval_EvalFrameDefault + 0x72ea (0x55dd136fd4fa in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #29: <unknown function> + 0x8e6ebd (0x7fa5ce886ebd in /home/michael/infinity/libs/infinity_emb/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #30: <unknown function> + 0x16b281 (0x55dd1371c281 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #31: PyObject_Call + 0x122 (0x55dd1371cf22 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #32: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #33: <unknown function> + 0x16b281 (0x55dd1371c281 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #34: PyObject_Call + 0x122 (0x55dd1371cf22 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #35: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #36: <unknown function> + 0x16b281 (0x55dd1371c281 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #37: PyObject_Call + 0x122 (0x55dd1371cf22 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #38: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #39: _PyFunction_Vectorcall + 0x7c (0x55dd1370e42c in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #40: PyObject_Call + 0x122 (0x55dd1371cf22 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #41: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #42: <unknown function> + 0x16b281 (0x55dd1371c281 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #43: PyObject_Call + 0x122 (0x55dd1371cf22 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #44: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #45: _PyFunction_Vectorcall + 0x7c (0x55dd1370e42c in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #46: _PyObject_FastCallDictTstate + 0x16d (0x55dd1370351d in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #47: _PyObject_Call_Prepend + 0x5c (0x55dd137192bc in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #48: <unknown function> + 0x2826d0 (0x55dd138336d0 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #49: PyObject_Call + 0xbb (0x55dd1371cebb in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #50: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #51: <unknown function> + 0x16b3ce (0x55dd1371c3ce in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #52: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #53: <unknown function> + 0x16b3ce (0x55dd1371c3ce in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #54: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #55: _PyObject_FastCallDictTstate + 0xc4 (0x55dd13703474 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #56: _PyObject_Call_Prepend + 0x5c (0x55dd137192bc in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #57: <unknown function> + 0x2826d0 (0x55dd138336d0 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #58: PyObject_Call + 0xbb (0x55dd1371cebb in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #59: _PyEval_EvalFrameDefault + 0x285e (0x55dd136f8a6e in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #60: <unknown function> + 0x16b281 (0x55dd1371c281 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #61: _PyEval_EvalFrameDefault + 0x613a (0x55dd136fc34a in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
frame #62: <unknown function> + 0x16b281 (0x55dd1371c281 in /home/michael/infinity/libs/infinity_emb/.venv/bin/python)
```
### Versions
Torch=2.5.0
Torch=2.4.1
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @ezyang @chauhang @penguinwu | needs reproduction,triaged,module: nestedtensor,oncall: pt2 | low | Critical |
2,595,891,524 | flutter | [Flutter GPU] Use unsigned Dart FFI types where appropriate. | Dart FFI supports unsigned types like Uint32. So we should use those instead of int types when passing things like enums to the engine. | engine,c: proposal,P2,team-engine,triaged-engine,flutter-gpu | low | Minor |
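The Uint32-vs-int distinction above matters at any FFI boundary. As a hedged illustration (Python's `ctypes` stands in for Dart FFI here; the value `0x80000001` is just an example enum-like constant), passing a 32-bit value with the high bit set through a signed slot flips its sign, while an unsigned slot preserves it:

```python
import ctypes

raw = 0x80000001  # e.g. an enum/flag value with the high bit set

# Marshalled through a signed 32-bit slot, the value is reinterpreted:
as_signed = ctypes.c_int32(raw).value    # sign bit changes the meaning

# Marshalled through an unsigned 32-bit slot, the value round-trips intact:
as_unsigned = ctypes.c_uint32(raw).value
```

The same reasoning argues for `Uint32` over `Int32`/`int` when passing enum values from Dart to the engine.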
2,595,931,643 | PowerToys | PowerToys tray icon missing after upgrade to Windows 11 Home 24H2. | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
System tray interaction
### Steps to reproduce
Update a Windows 11 Home 23H2 PC to Windows 11 Home 24H2 (without an NPU).
(Optionally: uninstall PowerToys using winget, and install it system-wide using the installer from GitHub)
Restart computer
### ✔️ Expected Behavior
PowerToys tray icon is displaying.
### ❌ Actual Behavior
PowerToys tray icon is not displaying.

### Other Software
After every boot, I get a Windows Update screen with a message asking for a moment of patience.
All PowerToys features are working. | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Major |
2,595,954,952 | pytorch | Refactor FlexibleLayout to separate out "this stride can be changed" and "how this buffer is allocated can be changed" | ### 🚀 The feature, motivation and pitch
Currently, we have two layouts:
- FixedLayout
- FlexibleLayout
Where FixedLayout basically means "We already decided the layout, don't change it" while FlexibleLayout means "we are free to change this layout".
However, I think there are actually two different components of "decided this layout":
1. What is the output **stride** of this layout?
2. Who allocates the actual buffer for this tensor?
I believe conflating these causes some problems:
- For inductor template tuning, we care about the **stride** of the output layout, but we don't care who allocated the buffer (e.g. if it's just a view into a larger concat buffer). And Elias points out that he noticed this too here: https://github.com/pytorch/pytorch/pull/132554#issue-2445835622
- For Yifu's recent PR (https://github.com/pytorch/pytorch/pull/138029), he cares about "who allocates the buffer for this layout", but he doesn't care about "what is the actual stride of this layout".
My proposal is that we scrap our current Layout subclasses and refactor them into:
```
class Layout:
    stride: FlexibleStride | FixedStride
    allocator: NonOwningAllocator | Flexible | SymmMem  # NonOwning = view into another allocation
```
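As an illustrative sketch only (the class names `FixedStride`, `NonOwningAllocator`, `SymmMem`, etc. are placeholders taken from this proposal, not existing Inductor types), the split could look like:

```python
from dataclasses import dataclass
from typing import Union

# --- stride policy: has the output stride been decided? ---
@dataclass
class FixedStride:
    stride: tuple        # already decided; template tuning must respect it

class FlexibleStride:
    pass                 # free to pick any stride

# --- allocation policy: who owns the underlying buffer? ---
@dataclass
class NonOwningAllocator:
    base: object         # a view into another allocation (e.g. a concat buffer)
    offset: int = 0

class FlexibleAllocator:
    pass                 # Inductor allocates a fresh buffer

class SymmMem:
    pass                 # buffer lives in symmetric memory

@dataclass
class Layout:
    stride: Union[FixedStride, FlexibleStride]
    allocator: Union[NonOwningAllocator, FlexibleAllocator, SymmMem]

    def stride_is_decided(self) -> bool:
        # Template tuning cares only about this, not about the allocator.
        return isinstance(self.stride, FixedStride)
```

With the two axes separated, template tuning can fix only the stride while leaving the allocator flexible, and symmetric-memory buffers can pin the allocator without freezing the stride.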
cc: @eellison @yifuwang @shunting314 @jansel
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Minor |
2,595,969,024 | kubernetes | Recreate the Device Manager gRPC server if failed | We need to ensure Device Plugin infrastracture is reliable by implementing the following:
1. Retry to start the server if it failed (see https://github.com/kubernetes/kubernetes/pull/125513 for places where the gRPC server may fail).
2. If the server is not up, integrate it as a source for the kubelet health status (see https://github.com/kubernetes/kubernetes/issues/127460)
Retry should be reasonably easy to implement. Instead of a goroutine that simply starts the server, a goroutine that serves and retries can be used. I think it would be best to also try to re-create the socket if it was deleted. The retry logic must be unit tested, and an e2e_node test should be added as well.
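The kubelet is written in Go, but the retry-and-recreate pattern itself is language-agnostic; here is a minimal Python sketch (the `start_server` and `recreate_socket` callables are hypothetical stand-ins for the real gRPC `Serve` and socket setup):

```python
import time

def serve_with_retry(start_server, recreate_socket,
                     max_retries=None, base_delay=1.0, max_delay=30.0):
    """Keep retrying the gRPC server; re-create the socket before each attempt.

    start_server() is expected to block while serving and raise OSError when
    the server goes down. Returns the number of attempts made once
    start_server() returns cleanly (or retries are exhausted).
    """
    attempts = 0
    delay = base_delay
    while max_retries is None or attempts < max_retries:
        attempts += 1
        recreate_socket()       # re-create the .sock file if it was deleted
        try:
            start_server()      # blocks while serving
            return attempts     # clean shutdown: stop retrying
        except OSError:
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # exponential backoff
    return attempts
```

With `max_retries=None` the loop runs forever, which is what a long-running kubelet goroutine would want; a bounded `max_retries` is what makes the logic unit-testable.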
Integrating with the health checking will require implementing an interface that reports "unhealthy". The important part here is that the health check must not involve any locks or calculations. A simple bool value check should be the preferred way to do the health check.
/cc @ffromani @bart0sh
/help
/kind feature
/sig node
/priority backlog
/area kubelet
| priority/backlog,area/kubelet,sig/node,kind/feature,help wanted,triage/accepted | low | Critical |
2,595,993,758 | rust | Footgun with Rc::assume_init and related methods | ### Location
I came across https://doc.rust-lang.org/nightly/std/rc/struct.Rc.html#method.assume_init recently, and spotted a footgun that I think ought to be called out:
### Summary
The safety section does not clarify whether multiple `Rc`s are allowed to exist when `assume_init` is called. If they are, then whether `Drop` is called on the inner value will depend on the drop order of those `Rc`s. In the case of `Arc` this might well be non-deterministic.
IMO, the documentation should specify whether this is allowed, and if so should point out that callers must take care around this potential issue. | T-libs-api,A-docs,T-libs | low | Major |
2,596,043,771 | next.js | Missing `src.blurDataURL` for default metadata images while importing `apple-icon.png`/`icon.png`/`opengraph-image.png`/`twitter-image.png` with Turbopack enabled | ### Link to the code that reproduces this issue
https://github.com/dimaMachina/repro-next/tree/missing-src.blurDataURL
### To Reproduce
Run `pnpm dev`
### Current vs. Expected behavior
See errors:
```text
Error: Image with src "/_next/static/media/opengraph-image.41136f47.png" has "placeholder='blur'" property but is missing the "blurDataURL" property.
Error: Image with src "/_next/static/media/icon.92de3cc2.png" has "placeholder='blur'" property but is missing the "blurDataURL" property.
Error: Image with src "/_next/static/media/opengraph-image.41136f47.png" has "placeholder='blur'" property but is missing the "blurDataURL" property.
Error: Image with src "/_next/static/media/twitter-image.92de3cc2.png" has "placeholder='blur'" property but is missing the "blurDataURL" property.
```
Without `--turbo` I don't get any errors.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 22.5.0: Mon Apr 24 20:53:19 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T6020
Available memory (MB): 98304
Available CPU cores: 12
Binaries:
Node: 20.15.1
npm: 10.7.0
Yarn: N/A
pnpm: 9.12.1
Relevant Packages:
next: 15.0.0-canary.196 // There is a newer canary version (15.0.0-canary.197) available, please upgrade!
eslint-config-next: 14.1.0
react: 18.2.0
react-dom: 18.2.0
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Turbopack,linear: turbopack | low | Critical |
2,596,074,517 | create-react-app | How to fix module not found error | How would I fix this error:
Module not found: Error: Can't resolve '.src/.Components' in 'C:\Users\12142\OneDrive\Desktop\Form Data 2 JS\datascripts\src' | needs triage | low | Critical |